This guide focuses exclusively on the design and writing of effective prompts for AI Agents and AI Tasks. For general concepts about agents, structure, tools, and configurations, review the General Guide for AI Agents.

Prompt examples for an AI Agent

Frequently Asked Questions

When you need to give the AI Agent additional context about a company so it can respond to user queries, use the AI Agent node.

Prompt

You are an advisor for [client name], [client description], who must respond to user queries.

Always respond to queries in a [tone, personality] manner.

When the user asks you a question, always follow this process:
1. Look for context about [client name] in the files available to you.
2. Determine whether the information you obtain is relevant to the user's query. Be rigorous in this step: you must decide clearly whether the information is relevant or not.
3. If and only if the user expresses a desire to exit or end the interaction, execute the end_function with the following parameters: {"exit":true}
You can also include guides for managing the interaction, for example:
Take the following guidelines into account for your interaction:
- If you cannot find information about a service, assume that [business name] does not offer that service.
- You are part of [client name], so you may speak in the first person.
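When the agent calls end_function with `{"exit":true}`, the surrounding flow still has to detect that payload and close the conversation. The following is an illustrative sketch of that check; the `"exit"` field comes from the prompt above, but the handler function itself is an assumption, not a Brain Studio API.

```javascript
// Hypothetical handler: decide whether the flow should end based on the
// end_function payload. Accepts either a JSON string or an already-parsed
// object, and keeps the conversation open on malformed input.
function shouldEndInteraction(payload) {
  let data;
  try {
    data = typeof payload === "string" ? JSON.parse(payload) : payload;
  } catch (e) {
    return false; // malformed payload: do not end the interaction
  }
  return data !== null && data.exit === true;
}
```

Defaulting to "keep the conversation open" on bad input is a deliberate choice: it is safer to ask again than to drop the user.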

Redirect the user

When you need to redirect the user to a part of the flow based on their intent. If routing is the node's only purpose (like a router), you can use an AI Task. If routing is part of an interaction, such as sending the user to a flow when they request it, you can use an AI Agent. Either way, combine the AI node with a conditional node, and save the AI's response so you can access it with {{$memory.variableName}}.

Prompt

You are an AI router that must process a customer message to decide which specialized agent to send it to.

There are 5 agents:
1. Credit cards (issuance, general information about cards or American Express /AMEX)
2. Savings accounts and checking accounts (openings and general information)
3. Documents (certificates, account statements)
4. General queries (general queries about the bank)
5. Other

Try as much as possible to place the user with one of the agents; only if the message is very generic may you choose 5 (other).

If the message cannot be understood or is very ambiguous, ask the user to rephrase it in a friendly way.

You must always finish your execution by calling end_function with a schema in JSON format: `{"flow": [flow]}`.

`flow` can be "cards", "accounts", "documents", "queries", or "other".

Examples:
- If the user asks about credit cards, end with end_function `{"flow": "cards"}`
- If the user requests a bank certificate, end with end_function `{"flow": "documents"}`
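Before the conditional node branches on the saved response, it is worth normalizing the value the router returned, since a model can occasionally emit something outside the allowed set. The sketch below assumes the five flow values from the prompt above; the function name and fallback behavior are illustrative, not part of the platform.

```javascript
// Allowed flow values, taken from the router prompt above.
const ALLOWED_FLOWS = ["cards", "accounts", "documents", "queries", "other"];

// Hypothetical helper: coerce the router's answer to a valid flow so the
// conditional node always has a branch to take. Unknown or missing values
// fall back to "other".
function normalizeFlow(raw) {
  return ALLOWED_FLOWS.includes(raw) ? raw : "other";
}
```

With this in place, a conditional node comparing {{$memory.variableName}} against the five values can never fall through without a match.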

Request data from the user

For registration processes or flows where information must be requested from the user.

Prompt

You must ask the user for their name and email address.

When the user provides their name and email, execute your end_function with the schema in JSON format:
{"name":[name],"email":[email]}
You can then access the response with {{$memory.variableName}} to save it in a Datum node.
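Before persisting the response in a Datum node, a quick validation step can catch incomplete answers. This is a minimal sketch assuming the `{"name": ..., "email": ...}` shape from the prompt above; the function and the (deliberately loose) email check are illustrative assumptions.

```javascript
// Hypothetical validator: parse the agent's JSON and return {name, email}
// only when both fields look usable, otherwise null so the flow can
// re-prompt the user.
function extractRegistration(json) {
  const data = JSON.parse(json);
  const name = typeof data.name === "string" ? data.name.trim() : "";
  // Loose email shape check: something@something.tld. Production flows
  // would typically use a stricter rule or a confirmation step.
  const email =
    typeof data.email === "string" && /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(data.email)
      ? data.email
      : null;
  return name && email ? { name, email } : null;
}
```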

Prompt examples for an AI Task

AI Tasks are single requests to OpenAI (or your preferred LLM). Their prompts are simpler to write and do not require termination keywords such as end_function. Their goal is always to fulfill what the prompt says and save the response in JSON format.

Redirect the user

You must process the user's message to decide which flow to send it to.

The user's message is: {{$message.text}}

There are 5 flows:
1. Credit cards (issuance, general information about cards or American Express / AMEX)
2. Savings accounts and checking accounts (openings and general information)
3. Documents (certificates, account statements)
4. General queries (general queries about the bank)
5. Other

Try as much as possible to place the user with one of the flows; only if the message is very generic may you choose 5 (other).

Generate the following JSON: {"flow": [flow]}.

Where `flow` can be "cards", "accounts", "documents", "queries", or "other".

Examples:
- If the user asks about credit cards, end with {"flow": "cards"}
- If the user requests a bank certificate, end with {"flow": "documents"}
Note that, unlike the AI Agent version, end_function is not used.
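Once the AI Task has saved its `{"flow": ...}` response, a conditional or code-type node still has to turn that value into a concrete branch. The sketch below is illustrative: the flow values come from the prompt above, but the routing targets and the function itself are placeholder assumptions.

```javascript
// Hypothetical routing step: parse the AI Task's JSON response and map the
// flow value to a destination. Unparseable or unexpected output falls
// through to a fallback branch instead of failing.
function routeByFlow(responseJson) {
  let flow = "other";
  try {
    const parsed = JSON.parse(responseJson);
    if (typeof parsed.flow === "string") flow = parsed.flow;
  } catch (e) {
    // Not valid JSON: keep the default and use the fallback branch.
  }
  switch (flow) {
    case "cards":     return "credit-cards-flow";
    case "accounts":  return "accounts-flow";
    case "documents": return "documents-flow";
    case "queries":   return "general-queries-flow";
    default:          return "fallback-flow";
  }
}
```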

Best Practices Summary

  • Clarity: Use direct, structured prompts.
  • Language: Be consistent with terms (e.g., “tool”).
  • JSON usage: Clearly specify the expected format.
  • Termination: In AI Agents, use end_function correctly; in AI Tasks, just make sure to specify the JSON.
  • Context: Indicate how to evaluate inputs or relevant references.

Content types and format in prompts

  • Maintain consistent language throughout the prompt.
  • Clearly specify the output format (JSON recommended).
  • If the flow needs to terminate, explicitly define the use of end_function.

Intelligent Design of Conditional Prompts

To get better results, it is key to minimize the number of decisions the AI must make, especially when they are not directly related to its main objective. Instead of loading the prompt with multiple conditional instructions, it is preferable to pre-process the logic outside the AI Agent, keeping the prompt clean, focused, and more effective.

What to avoid

Avoid writing prompts like this:
If you have data in this variable {{$memory.variable}}, then do this, otherwise follow these other instructions.
This type of logic adds ambiguity and noise to the prompt, which can hinder the model’s comprehension and reduce the accuracy of the response. Instead, apply the conditional logic beforehand in a code-type node to define which instructions the AI should follow before sending it the prompt. Example:
// Choose the instructions based on prior state instead of asking the AI to decide.
let conditionalPrompt = "Do this first";

if ($memory.get("variable")) {
  conditionalPrompt = "Better do this instead";
}

// Expose the chosen instructions to the AI Agent as {{$memory.conditionalPrompt}}.
$memory.set("conditionalPrompt", conditionalPrompt);
Then, in your AI Agent, simply reference the processed variable:
Follow these instructions: {{$memory.conditionalPrompt}}

Benefits

  • Reduces the model’s cognitive load
  • Decreases the use of unnecessary tokens
  • Increases prompt clarity
  • Improves response consistency
This approach helps design more reliable and predictable agents, especially when combining multiple input sources or dynamic conditions.
For considerations about models (GPT-4.1, long contexts, CoT, and agent reminders), see the General Guide for AI Agents.

Prompt Validation Tool

To make it easier to create effective prompts, you can use a beta tool that acts as a testing assistant. If your prompt is not working as expected or you need a quick guide to get started, go to the following link, write your prompt or objective, and follow the assistant’s suggestions: ChatGPT - AI Agents Prompt Corrector. This tool is in beta, so it is still under development; even so, it is useful for detecting common errors, refining instructions, and validating whether a prompt is well-structured before implementing it in Brain Studio.

Additional resources

  • OpenAI Platform - Explore developer resources, tutorials, API documentation, and dynamic examples
  • AI Playground - Compare AI models side by side: OpenAI GPT, Anthropic Claude, Google Gemini, Llama, Mistral, and more