
The prompt step

Use a prompt step when you need a single-turn response from the fast query LLM without spinning up a full agent. The step renders your message with the task context, sends it to the configured simple prompt model, and stores the model's reply so later steps can reference it.

Key characteristics of a prompt step:

  • Single exchange: Bosun sends exactly one message and records the reply. There is no tool use or iterative reasoning like you would get from an agent.
  • Templated message: The prompt string is rendered with Tera, so you can interpolate inputs, previous outputs, or values produced inside a for_each loop (see the sketch after this list).
  • Stored result: The raw model response is registered under the step's index or id, allowing follow-up steps to reuse or transform the text.
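
As a sketch of that last point, the step below runs inside a loop. The for_each expression, the inputs.services list, and the item variable name are all assumptions for illustration; check your Bosun version for the exact loop syntax. The point is simply that the Tera template is re-rendered on each iteration:

  - id: triage_alerts
    # Assumed loop syntax: iterate over a list provided as an input.
    for_each: "{{ inputs.services }}"
    name: Triage alerts per service
    prompt: |
      List the open alerts for the {{ item }} service
      and rank them by severity.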

Example from a workflow

  - id: summarize_logs
    name: Summarise overnight logs
    prompt: |
      Summarise the key incidents from the last CI run.
      Tests that failed:
      {{ outputs.fetch_failures }}

In this example:

  1. The prompt text is rendered with the current context, so {{ outputs.fetch_failures }} is replaced with the earlier step's output.
  2. The message is sent to the simple prompt model configured in Bosun.
  3. The model's reply is stored as outputs.summarize_logs, ready for subsequent steps (for example, to feed into a structured_prompt or to post into a ticket), as sketched below.
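
For instance, a later step can interpolate the stored reply directly. The follow-up step below is a second prompt step using only the fields shown on this page; the id post_summary and its wording are chosen purely for illustration:

  - id: post_summary
    name: Draft a ticket comment
    prompt: |
      Turn the following incident summary into a short comment
      suitable for posting on the tracking ticket:
      {{ outputs.summarize_logs }}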

When to choose prompt

Use prompt instead of agent when you only need a concise answer or transformation and do not require repository access, tool calls, or autonomous multi-step behaviour.
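
As a rule of thumb, a pure text transformation like the hypothetical step below is a good fit for prompt, whereas anything that must read the repository, call tools, or take several autonomous steps belongs in an agent step. The id and the outputs.draft_changelog reference are invented for the example:

  - id: rewrite_release_note
    name: Rewrite the release note
    prompt: |
      Rewrite the following changelog entry as a single
      user-facing sentence:
      {{ outputs.draft_changelog }}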

Error handling

prompt steps stop the task when rendering fails or the LLM request errors—unless you set continue_on_error: true. When enabled, the runtime records the failure (including the error message) and keeps going, making the data available via errors.<step> for later retries or summaries. See Error handling for more details.
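
A minimal sketch of that behaviour, assuming the errors.<step> value can be interpolated with the same template syntax as outputs (both step ids are illustrative):

  - id: summarize_logs
    name: Summarise overnight logs
    # Record a failure instead of stopping the task.
    continue_on_error: true
    prompt: |
      Summarise the key incidents from the last CI run.
      Tests that failed:
      {{ outputs.fetch_failures }}

  - id: report_failure
    name: Report a failed summary
    prompt: |
      The previous summarisation step failed with this error:
      {{ errors.summarize_logs }}
      Suggest whether retrying is likely to help.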