Instruction
The instruction defines the agent's base behavior: the role it adopts, the tone it uses, and the steps it follows before responding. Keep it brief, concrete, and free of ambiguity. For a deeper guide to writing instructions, see the best practices in @prompting.mdx.
View prompting recommendations and examples
Model
Choose the model that best fits the type of interaction you are looking for. Consider latency, cost, and task complexity before selecting one.

Available models
- Llama 4 Scout: an agile, low-latency model, ideal for quick ideas and lightweight interactions.
- GPT-4.1 Mini: a simplified variant of GPT-4.1 optimized for fast responses with lower resource demands.
- Claude 3.5 Sonnet: excellent for complex tasks that require more elaborate text and extended contexts.
- GPT-4o (Azure): a version of GPT-4o hosted on Azure, focused on stability and performance in enterprise environments.
- Llama 4 Maverick: a high-performance model designed for demanding reasoning and multi-step problem solving.
- GPT-4.1: a refined evolution of GPT-4, with better comprehension, reasoning, and accuracy.
- GPT-4o Mini: a faster, lighter version of GPT-4o, oriented toward cases where speed is the priority.
Temperature
Temperature controls the model's level of creativity. Use values between 0 and 1: at 0 you will get deterministic, consistent responses; as you approach 1, responses become more creative and varied. Adjust this parameter depending on whether you need precision or exploration.
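Mechanically, temperature rescales the model's token probabilities before sampling. A minimal sketch of the effect (illustrative only, not the studio's internal implementation):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Rescale logits by temperature, then normalize to probabilities.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied responses)."""
    if temperature <= 0:
        # Temperature 0 is conventionally treated as greedy decoding:
        # all probability mass goes to the highest-scoring token.
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
sharp = softmax_with_temperature(logits, 0.2)  # top token dominates
flat = softmax_with_temperature(logits, 1.0)   # probability spreads out
```

At temperature 0.2 the top token takes almost all the probability mass, which is why low values yield consistent answers; at 1.0 the alternatives keep meaningful probability, producing more varied output.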
Knowledge Bases
Reinforce the agent with specific documentation so it responds with up-to-date and verifiable information.
- Files: upload files up to 2 MB in .pdf or .csv formats.
- URLs: add links whose download does not exceed 5 MB. Verify that the content is publicly accessible.
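The studio enforces these limits at upload time; as an illustration only, a client-side pre-check mirroring the stated rules could look like this (the function name and approach are hypothetical, not part of the platform):

```python
# Limits as stated in the documentation above.
MAX_FILE_BYTES = 2 * 1024 * 1024   # 2 MB per uploaded file
MAX_URL_BYTES = 5 * 1024 * 1024    # 5 MB per URL download
ALLOWED_EXTENSIONS = {".pdf", ".csv"}

def can_upload_file(filename: str, size_bytes: int) -> bool:
    """Return True if the file meets the format and size limits."""
    ext = filename[filename.rfind("."):].lower() if "." in filename else ""
    return ext in ALLOWED_EXTENSIONS and size_bytes <= MAX_FILE_BYTES

can_upload_file("faq.pdf", 1_500_000)    # True: allowed format, under 2 MB
can_upload_file("notes.docx", 100)       # False: unsupported format
can_upload_file("catalog.csv", 3_000_000)  # False: over the 2 MB limit
```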
MCPs
Connect Model Context Protocols (MCPs) to expand the agent's capabilities. Each integration requires:
- Endpoint URL to which the query will be made.
- Optional headers with credentials or metadata required for authentication.
- Identifier name to recognize the MCP within the studio.
- Description that summarizes what information or actions it provides.
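Taken together, an MCP registration carries the four fields above. A hypothetical entry, with illustrative key names and values (not the studio's exact schema):

```python
# Hypothetical MCP registration mirroring the required fields.
# Key names, the endpoint URL, and the token are all placeholders.
mcp_config = {
    "name": "orders-mcp",                       # identifier within the studio
    "description": "Looks up order status and shipping information",
    "endpoint": "https://mcp.example.com/sse",  # URL the query is made to
    "headers": {                                # optional auth/metadata
        "Authorization": "Bearer <token>",
    },
}
```

Keeping the description action-oriented ("looks up order status") helps the agent decide when this MCP is the right tool for a given user request.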
Tools
Tools allow the agent to perform external actions. You can configure them in two ways:
- Tools created on the platform: define them directly in Brain Studio for reusable, managed integrations.
- HTTP calls: specify an endpoint, method, and input/output schema to make one-off requests to your services.
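For the HTTP-call option, the endpoint, method, and input/output schemas could be declared along these lines; the field names and the JSON-Schema-style shapes are assumptions for illustration, not the studio's exact format:

```python
# Illustrative HTTP-call tool definition. The weather endpoint and
# all field names are hypothetical.
http_tool = {
    "name": "get_weather",
    "endpoint": "https://api.example.com/weather",
    "method": "GET",
    "input_schema": {               # what the agent must supply
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    "output_schema": {              # what the service returns
        "type": "object",
        "properties": {"temperature_c": {"type": "number"}},
    },
}
```

Declaring both schemas lets the agent know which parameters it must collect from the conversation before calling the service, and how to interpret the response.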
Advanced Configuration
In the advanced configuration tab of the AI Agent node you will find additional options that expand the agent's capabilities for processing different types of content and interactive messages.

Payload reading support for incoming messages
Enable this option to allow the AI Agent to process quick-reply payloads from interactive messages. When activated, the agent can interpret and respond to user actions on quick-reply buttons, improving the experience in conversational flows that use interactive elements. When to enable it:
- When your flow includes messages with quick-reply buttons.
- If you need the agent to process the information sent through payloads instead of just the visible button text.
- For flows that require capturing structured data from button interactions.
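The distinction between the visible button text and the payload can be sketched as follows; the message shape is illustrative, not any specific channel's format:

```python
# Example incoming quick-reply message. The structured payload can
# differ from the text printed on the button, which is why payload
# reading matters. Field names are hypothetical.
incoming = {
    "type": "quick_reply",
    "text": "Yes, cancel it",            # what the user sees on the button
    "payload": "CANCEL_ORDER:ORD-1042",  # structured data behind the button
}

def extract_action(message: dict) -> str:
    """With payload reading enabled, prefer the structured payload;
    otherwise fall back to the visible button text."""
    return message.get("payload") or message.get("text", "")
```

Here the payload carries an action code and an order identifier the agent can act on directly, whereas the visible text alone would need to be re-interpreted.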
PDF document support
Activate this setting so the AI Agent can read and process PDF documents sent by users during the conversation. The agent will extract the document content and will be able to answer questions or perform actions based on the information it contains. Important note: this functionality only processes the text of the PDF; images contained within the document will not be processed or analyzed. When to enable it:
- When you expect users to send PDF documents as part of the interaction.
- If you need the agent to analyze, summarize, or extract information from PDF files.
- For use cases such as document validation, data extraction, or content analysis.
Read image with URL and caption
This setting allows the AI Agent to process images that include a caption. The agent can access both the image and the associated descriptive text, processing the two elements together. When to enable it:
- When users send images with text descriptions.
- If you need the agent to process both the visual content and the textual context of the image.
- For flows that require combined image and text analysis.
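An image-with-caption message pairs a URL with its descriptive text, and the agent consumes both together. A sketch under assumed field names (not a specific channel's schema):

```python
# Illustrative image-with-caption message; field names are placeholders.
image_message = {
    "type": "image",
    "url": "https://example.com/receipt.jpg",
    "caption": "Here is my receipt from yesterday",
}

def describe_for_agent(message: dict) -> str:
    """Combine the visual reference and its textual context into a
    single description the agent can reason over."""
    return f"Image at {message['url']} with caption: {message['caption']}"
```

The caption often carries the user's intent ("here is my receipt"), so processing it alongside the image gives the agent the context it needs to decide what to do with the visual content.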
Best Practices
- Always activate the Knowledge Base section to anchor the agent's responses to your business's official documentation, or to the baseline documentation the prompt needs. This drastically reduces the risk of hallucinations, since the model will prioritize the information in your files over its general knowledge. To maximize effectiveness, give each uploaded file a clear description so the agent knows exactly when to look for information in it.