The AI Agent node is the central component for creating autonomous conversations powered by artificial intelligence in Brain Studio. It allows you to configure an agent that processes user messages, queries knowledge bases, executes tools, and generates contextual responses autonomously. The configuration panel is organized into four tabs:
  1. General — Model and instructions
  2. Tools — Native, custom, and MCP tools
  3. Knowledge — Documents, images, and datastores
  4. Advanced — Fallback model, security, expiration, DLP, and more

General

The General tab contains the essential agent configuration.

Model

Select the language model (LLM) the agent will use to generate responses. Consider latency, cost, and task complexity before making your selection. Available models include:
OpenAI
  • GPT-4.1 Mini: a lighter variant of GPT-4.1 optimized for fast responses with lower resource demands.
  • GPT-4.1: a refined evolution of GPT-4, with better understanding, reasoning, and accuracy.
  • GPT-4o (Azure): a GPT-4o version hosted on Azure, focused on stability and performance in enterprise environments. Supports vision.
  • GPT-4o Mini: a faster, lighter version of GPT-4o, oriented toward speed-first use cases. Supports vision.
  • GPT-5.2: OpenAI’s latest-generation model with advanced reasoning, a larger context window, and high precision on complex tasks.
Anthropic (Claude)
  • Claude 3.5 Sonnet: excellent for complex tasks requiring more elaborate text and extensive contexts.
  • Claude 4 Sonnet: high reasoning and analysis capability.
  • Claude 4.6 Sonnet: Claude’s latest generation, with improved reasoning and greater accuracy.
Google (Gemini)
  • Gemini 2.5 Flash: fast processing with multimodal capabilities. Supports vision.
  • Gemini 2.5 Pro: advanced reasoning with multimodal capabilities. Supports vision.
  • Gemini 3 Flash: latest generation of Gemini with optimized multimodal processing. Supports vision.
Meta (Llama)
  • Llama 4 Scout: agile, low-latency model, ideal for quick ideas and lightweight interactions.
  • Llama 4 Maverick: high-performance model designed for demanding reasoning and multi-step problem solving.
Models marked with “Supports vision” can process images uploaded in the Knowledge tab. If you need the agent to interpret images as part of its context, select one of these models.
You can also add custom models using the “Add model” button. This allows you to connect your own or third-party models that are not in the predefined list.

Instructions

Define the agent’s base behavior through a system prompt. The instructions determine what role it adopts, what tone it uses, and what steps it follows before responding. Keep them concise, concrete, and free of ambiguity. Instructions support variable interpolation using the {{$variable}} syntax, which lets you customize behavior dynamically based on the conversation context. The field includes a character counter that adjusts to the selected model, since each model has a different maximum limit. See prompting recommendations and examples.
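As a sketch of how {{$variable}} interpolation behaves, the following Python snippet resolves placeholders against a variable map. The rendering logic is illustrative only, not Brain Studio’s actual implementation:

```python
import re

def render_instructions(template: str, variables: dict) -> str:
    """Replace each {{$name}} placeholder with its value; leave unknown ones as-is."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\{\{\$(\w+)\}\}", substitute, template)

template = "You are a support agent for {{$company}}. Reply in {{$language}}."
print(render_instructions(template, {"company": "Acme", "language": "Spanish"}))
# → You are a support agent for Acme. Reply in Spanish.
```

Leaving unknown placeholders untouched makes missing variables easy to spot during testing.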

Tools

The Tools tab allows you to add tools that extend the agent’s capabilities beyond text generation. The agent autonomously decides when to invoke each tool based on the conversation context. To add tools, click the “+ Add tools” button and select the tools you need.

Native tools

These are predefined tools built into the platform. They are marked with the text “(Native)” in the selector:
  • Product search: queries the product catalog configured in the bot.
  • Send interactive message: renders buttons, lists, and quick replies to the user.
  • Send Call to Action: sends CTA buttons with a URL, with WebView support.
  • Current date and time: gets the current date and time for a specific timezone.
  • Day of the week: calculates the day of the week for a given date.
  • Transfer to agent: transfers the conversation to a human agent or support queue.
See detailed documentation on native tools

Custom tools

Tools created by the user from Brain Studio. When adding a custom tool you can configure:
  • Name and description that the model uses to decide when to invoke it.
  • Custom input parameters.
  • Action on execution: behavior after the tool has been executed.
See how to create your first tool
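As an illustration, a custom tool definition carries the pieces listed above: a name and description the model uses to decide when to invoke it, input parameters, and an action on execution. The field names and the get_order_status tool below are hypothetical, not the exact Brain Studio schema:

```python
# Hypothetical custom tool definition; field names mirror the options
# described above but are illustrative, not the exact Brain Studio schema.
order_status_tool = {
    "name": "get_order_status",
    "description": "Looks up the shipping status of an order by its ID.",
    "parameters": {
        "order_id": {
            "type": "string",
            "description": "The customer's order identifier.",
            "required": True,
        },
    },
    # Action on execution: one of "none", "end_function", "pause_interaction".
    "on_execution": "none",
}
```

A precise description matters most: it is the main signal the model uses to choose this tool over another.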

MCP Tools (Model Context Protocol)

Integrate external services using the MCP standard. There are two types of integration:

Native MCP apps

Applications from the Jelou marketplace that are installed directly on the platform. Each MCP app can be configured in two modes:
  • Integration mode: enables all of the app’s tools at once.
  • Granular mode: allows you to select individual tools from the app and configure each one separately.

External MCP servers

Connections to your own MCP servers via custom URL.
  1. Configure the server URL: enter the URL of your MCP server endpoint.
  2. Add headers (optional): configure custom headers (name and value) for authentication or other metadata.
  3. Select the tools: once connected, select the available tools you want to enable.
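Put together, an external MCP server connection amounts to a small configuration like the sketch below. The endpoint URL, header, and tool names are placeholders, not real values:

```python
# Illustrative external MCP server connection; the endpoint URL, header,
# and tool names are placeholders, not real values.
mcp_server = {
    # Step 1: the URL of your MCP server endpoint.
    "url": "https://mcp.example.com/sse",
    # Step 2 (optional): custom headers for authentication or metadata.
    "headers": {"Authorization": "Bearer <token>"},
    # Step 3: the tools selected once the connection is established.
    "enabled_tools": ["search_invoices", "create_ticket"],
}
```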

Tool actions

Each tool (native, custom, or MCP) can configure an action that runs after it is invoked:
  • None: the agent continues the conversation normally.
  • End function: the skill execution ends after using the tool.
  • Pause interaction: the conversation pauses until it is resumed externally.

Knowledge

The Knowledge tab allows you to add information sources that the agent consults to ground its responses, significantly reducing the risk of hallucinations. From this tab you manage documents, images, and datastores in a single place.

Upload block

The unified upload block accepts documents and images in the same interaction. You can add files in three ways:
  • Drag and drop files directly onto the block
  • Click on the block to open the file selector
  • Paste a URL in the link field at the bottom of the block (supports the same document and image formats listed below)
  • Documents: .pdf, .xlsx, .md, .txt, .json, .csv (maximum 2 MB per file)
  • Images: .jpg, .jpeg, .png (maximum 10 MB per file)
In addition to .pdf and .csv, you can upload .xlsx, .md, .txt, and .json files as knowledge sources.
Documents are processed one at a time: if you select or drag multiple documents at once, only the first is processed and you will be prompted to upload the rest individually. Images, by contrast, are processed in a batch.

Documents

When you upload a document, a panel opens where you define its name (maximum 30 characters) and an optional description that helps the agent understand the file’s content. Once uploaded, you can edit its metadata or delete it from the list.

Images

Images are uploaded directly without additional steps and appear as thumbnails below the upload block. You can upload up to 3 images per agent.
  • Click a thumbnail to view it in full screen
  • Hover over a thumbnail to see the delete button
Images require a model with multimodal capabilities (marked with “Supports vision” in the model list). If the selected model does not support vision, you will see a warning banner and the agent will not process images during the conversation.

Datastores

Connect Datum databases so the agent can query structured information in real time:
  • Select the datastore to connect
  • Configure the allowed operations on the database
Always enable the Knowledge section to anchor the agent’s responses to your business’s official documentation, and give each uploaded file a clear description to maximize retrieval effectiveness. The model prioritizes information from your files and datastores over its general knowledge, drastically reducing the risk of hallucinations.

Advanced

The Advanced tab contains additional settings for finer control over agent behavior.

Fallback model

Select an alternative model that will be used automatically if the primary model fails or is unavailable.
The fallback model cannot be the same as the primary model.

Opening message

Configure an optional message that the agent automatically sends to the user at the start of the conversation, before the user writes anything.
  • Maximum 100 characters
  • Supports variable interpolation with {{$variable}}

External resume

When enabled, conversations can be paused and resumed via the API. This is useful for flows that require external asynchronous processing, such as validations, payments, or approvals. Endpoint: POST /v1/skills/resume. The request payload must include the execution ID of the conversation to be resumed.
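A resume call can be sketched as follows. The base URL, the executionId field name, and the bearer-token header are assumptions; check the API reference for the exact contract.

```python
import json

# Builds the pieces of an external-resume call. The base URL, the
# "executionId" field name, and the bearer-token header are assumptions,
# not the confirmed API contract.
def build_resume_request(execution_id: str, api_key: str):
    url = "https://api.example.com/v1/skills/resume"  # illustrative base URL
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"executionId": execution_id})
    return url, headers, body

url, headers, body = build_resume_request("exec_123", "<api-key>")
# Send with any HTTP client, e.g. requests.post(url, headers=headers, data=body)
```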

Processing options

Independent options to enable additional capabilities for messages the user sends during the conversation:
  • PDF support: allows the agent to read and process PDF documents the user sends during the conversation (text only; images inside the PDF are not processed).
  • Read image URLs: enables the agent to process images the user sends as a URL with descriptive text during the conversation.
  • Quick Reply Payload support: allows the agent to interpret the payload of quick replies from interactive messages.
These options apply to files that the end user sends during the conversation. Images and documents you configure as agent context are managed from the Knowledge tab.

Security (Guardrails)

Configure the agent’s level of protection against misuse, prompt injection, and out-of-scope requests.
  1. Enable security: activate the security toggle to access the configuration options.
  2. Select security level: choose the preset that best fits your use case.
      • Low: minimum protection, greater response flexibility.
      • Medium: a balance between security and flexibility (recommended).
      • High: strict protection; restricts out-of-scope responses.
      • Critical: maximum protection, ideal for regulated contexts.
  3. Model Armor (optional): enable an additional protection layer that filters model inputs and outputs to detect malicious content.
See complete security and guardrails guide

Expiration

Configure a time limit for the agent session. If the user does not respond within the configured time, the session expires automatically.
  1. Enable expiration: activate the expiration toggle.
  2. Configure duration: define the time and select the unit.
      • Minutes: 1 to 1200
      • Hours: 1 to 20
      • Default value: 8 hours (28,800 seconds)

DLP (Data Loss Prevention)

Enable automatic detection and masking of sensitive information in agent conversations.
  1. Enable DLP: activate the DLP toggle to access the configuration.
  2. Select sensitive data types: choose which types of information should be automatically detected and masked. Each type has a default replacement value:
      • Credit card number: [CreditCardNumber]
      • Credit card track number: [CreditCardTrackNumber]
      • Email address: [EmailAddress]
      • Financial account number: [FinancialAccountNumber]
      • IP address: [IpAddress]
      • Location: [Location]
      • Geographic coordinates: [LocationCoordinates]
      • Phone number: [PhoneNumber]
      • Date: [Date]
      • Person name: [PersonName]
  3. Customize replacements (optional): modify the replacement value for each data type according to your needs.
The Credit card number type is always selected by default and cannot be disabled.
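Conceptually, DLP masking replaces each detected pattern with its configured placeholder. The Python sketch below uses deliberately naive regexes for two of the data types; real detection is far more robust:

```python
import re

# Simplified DLP-style masking: each detected pattern is replaced with its
# configured placeholder. The regexes cover only two data types and are
# deliberately naive; production detection is far more robust.
REPLACEMENTS = {
    r"\b(?:\d[ -]?){13,16}\b": "[CreditCardNumber]",
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EmailAddress]",
}

def mask(text: str) -> str:
    for pattern, placeholder in REPLACEMENTS.items():
        text = re.sub(pattern, placeholder, text)
    return text

print(mask("Card 4111 1111 1111 1111, mail ana@example.com"))
# → Card [CreditCardNumber], mail [EmailAddress]
```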

Save response

Allows you to store the agent’s last response in a variable for use in subsequent nodes of the flow.
  1. Enable save response: activate the corresponding toggle.
  2. Define the variable name: enter the name of the variable where the response will be stored (for example: ai_agent_response).
The variable will be available as {{$context.ai_agent_response}} in subsequent nodes of the flow.

Configuration example

A typical configuration flow for a customer service agent:
  1. Configure General: select GPT-4.1 as the model and write clear instructions defining the agent’s role, tone, and scope.
  2. Add Tools: enable Transfer to agent to escalate complex conversations, Product search if the agent handles catalog queries, and Send interactive message to show options to the user.
  3. Configure Knowledge: upload the FAQ document in PDF format, add reference images if the model supports vision, and connect the product datastore if applicable.
  4. Adjust Advanced: configure a fallback model, set expiration to 30 minutes, enable security at the Medium level, and activate DLP if financial or personal data is handled.