Creating custom agents
Learn how to create custom agents, what to prepare before you start, and how to put them to work across messaging, decisioning, and data management. For more general information, see Braze Agents.
Braze Agents are currently in beta. For help getting started, contact your customer success manager.
Prerequisites
Before you start, you’ll need the following:
- Access to the Agent Console in your workspace. Check with your Braze admins if you don’t see this option.
- Permission to create and edit custom AI Agents.
- An idea of what you want the agent to accomplish. Braze Agents can support the following actions:
- Messaging: Generate subject lines, headlines, in-product copy, or other content.
- Decisioning: Route users in Canvas based on behavior, preferences, or custom attributes.
- Data management: Calculate values, enrich catalog entries, or refresh profile fields.
How it works
When you create an agent, you define its purpose and set guardrails for how it should behave. After it’s live, the agent can be deployed in Braze to generate personalized copy, make real-time decisions, or update catalog fields. You can pause or update an agent anytime from the dashboard.
Create an agent
To create your custom agent:
- Go to Agent Console > Agent Management in the Braze dashboard.
- Select Create agent.
- Enter a name and description to help your team understand its purpose.
- Choose the model your agent will use.
- Give the agent instructions. Refer to Writing instructions for guidance.
- Test the agent output and adjust the instructions as needed.
- When you’re ready, select Create Agent to activate the agent.
Next step
Your agent is now ready to use! For details, see Deploying agents.
Reference
Models
When you set up an agent, you’ll choose the model it uses to generate responses. You have two options:
Option 1: Use a Braze-powered model
This is the simplest option, with no extra setup required. Braze provides access to large language models (LLMs) directly. To use this option, select Auto.
If you use the Braze-powered LLM, you won't incur any cost during the beta period. Invocation is limited to 50,000 runs per day and 500,000 runs in total. See Limitations for details.
Option 2: Bring your own API key
With this option, you can connect your Braze account with providers like OpenAI, Anthropic, AWS Bedrock, or Google Gemini. If you bring your own API key from an LLM provider, costs are billed directly through your provider, not from Braze.
To set this up:
- Go to Partner Integrations > Technology Partners and find your provider.
- Enter your API key from the provider.
- Select Save.
Then, you can return to your agent and select your model.
Writing instructions
Instructions are the rules and guidelines you give the agent, also known as the system prompt. They define how the agent should behave each time it runs. System instructions can be up to 10 KB.
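Before pasting a long draft into the dashboard, you can quickly confirm it fits the instruction limit. This is a minimal sketch, assuming the 10 KB limit is measured in UTF-8 bytes:

```python
# Check that a draft system prompt fits the 10 KB instruction limit.
# Assumption: the limit is measured in UTF-8 bytes.
MAX_INSTRUCTION_BYTES = 10 * 1024

def fits_instruction_limit(prompt: str) -> bool:
    return len(prompt.encode("utf-8")) <= MAX_INSTRUCTION_BYTES

draft = "You are a customer research AI for a retail brand. ..."
print(fits_instruction_limit(draft))  # True for a short draft like this
```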
Here are some general best practices to get you started with prompting:
- Start with the end in mind. State the goal first.
- Give the model a role or persona (“You are a …”).
- Set clear context and constraints (audience, length, tone, format).
- Ask for structure (“Return JSON/bullet list/table…”).
- Show, don’t tell. Include a few high-quality examples.
- Break complex tasks into ordered steps (“Step 1… Step 2…”).
- Encourage reasoning (“Think aloud, then answer”).
- Pilot, inspect, and iterate. Small tweaks can lead to big quality gains.
- Handle the edge cases, add guardrails, and add refusal instructions.
- Measure and document what works internally for re-use and scaling.
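If your team reuses the same prompt structure across agents, it can help to assemble instructions from named parts (role, goal, constraints, output format, examples) so each best practice above has an explicit slot. A minimal sketch with illustrative placeholder values:

```python
# Sketch: assemble a system prompt from reusable parts.
# All field values below are illustrative placeholders, not Braze defaults.
def build_prompt(role, goal, constraints, output_format, examples):
    parts = [
        f"You are {role}.",          # role/persona
        f"Goal: {goal}",             # state the goal first
        "Constraints:",              # context and constraints
        *[f"- {c}" for c in constraints],
        f"Return your answer as {output_format}.",  # ask for structure
        "Examples:",                 # show, don't tell
        *examples,
    ]
    return "\n".join(parts)

prompt = build_prompt(
    role="a customer research AI for a retail brand",
    goal="classify survey sentiment",
    constraints=["audience: internal analysts", "length: one label"],
    output_format="a single label",
    examples=['Input: "Love it!" -> Output: Positive'],
)
print(prompt.splitlines()[0])  # You are a customer research AI for a retail brand.
```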
For more details on prompting best practices, refer to prompting guides from model providers such as OpenAI, Anthropic, and Google.
Simple prompt
This example prompt takes a survey input and outputs a simple sentiment analysis:
From the survey text, classify overall sentiment toward product quality, delivery, and price as Positive, Neutral, or Negative.
Always output a single string with just one label.
If any category is missing or unclear, treat it as Neutral.
If sentiment across categories is mixed, return Neutral.
Example Input: “The product works great, but shipping took forever and the cost felt too high.”
Example Output: Neutral
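Because a Canvas path or custom attribute may depend on this label, it can help to normalize the agent's raw output before using it downstream. This is a hedged sketch (not a Braze API); the fallback mirrors the prompt's rule that anything unclear is treated as Neutral:

```python
# Minimal guardrail: verify the agent returned exactly one allowed label.
ALLOWED = {"Positive", "Neutral", "Negative"}

def parse_sentiment(raw: str) -> str:
    label = raw.strip()
    if label not in ALLOWED:
        # Mirror the prompt's fallback rule: unclear output -> Neutral.
        return "Neutral"
    return label

print(parse_sentiment("Positive"))    # Positive
print(parse_sentiment("no idea??"))   # Neutral
```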
Complex prompt
This example prompt takes a survey input from a user and classifies it into a single sentiment label. The result can then be used to route users down different Canvas paths (such as positive versus negative feedback) or store the sentiment as a custom attribute on their profile for future targeting.
You are a customer research AI for a retail brand.
Input: one open-text survey response from a user.
Output: A single structured JSON object with:
- sentiment (Positive, Neutral, Negative)
- topic (Product, Delivery, Price, Other)
- action_recommendation (Route: High-priority follow-up | Low-priority follow-up | No action)
Rules:
- Always return valid JSON.
- If the topic is unclear, default to Other.
- If sentiment is mixed, default to Neutral.
- If sentiment is Negative and topic = Product or Delivery → action_recommendation = High-priority follow-up.
- Otherwise, action_recommendation = Low-priority follow-up.
Example Input:
"The product works great, but shipping took forever and the cost felt too high."
Example Output:
{
"sentiment": "Neutral",
"topic": "Delivery",
"action_recommendation": "High-priority follow-up"
}
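The agent applies these rules itself, but a downstream consumer can double-check the JSON before acting on it. This is a minimal sketch (not a Braze API) that validates the structure and the routing rule from the prompt above:

```python
import json

SENTIMENTS = {"Positive", "Neutral", "Negative"}
TOPICS = {"Product", "Delivery", "Price", "Other"}

def expected_action(sentiment: str, topic: str) -> str:
    # Routing rule from the prompt: Negative Product/Delivery feedback
    # gets a high-priority follow-up; everything else is low priority.
    if sentiment == "Negative" and topic in {"Product", "Delivery"}:
        return "High-priority follow-up"
    return "Low-priority follow-up"

def validate_output(raw: str) -> dict:
    data = json.loads(raw)  # raises json.JSONDecodeError on invalid JSON
    if data.get("sentiment") not in SENTIMENTS:
        raise ValueError(f"unexpected sentiment: {data.get('sentiment')}")
    if data.get("topic") not in TOPICS:
        raise ValueError(f"unexpected topic: {data.get('topic')}")
    if data.get("action_recommendation") != expected_action(data["sentiment"], data["topic"]):
        raise ValueError("action_recommendation does not match the routing rule")
    return data

sample = '{"sentiment": "Negative", "topic": "Delivery", "action_recommendation": "High-priority follow-up"}'
print(validate_output(sample)["action_recommendation"])  # High-priority follow-up
```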
Testing your agent
The Live preview pane is a working instance of your agent that appears alongside the configuration settings while you create or update it. Use it to test the agent the way end users will experience it, confirm that it's behaving as expected, and fine-tune before it goes live.
- In the Sample inputs field, enter example customer data or customer responses—anything that reflects real scenarios your agent will handle.
- Select Run test. The agent will execute based on your configuration and display its response. Test runs count toward your daily and total invocation limit.
Review the output with a critical eye. Consider the following questions:
- Does the copy feel on brand?
- Does the decision logic route customers as intended?
- Are the calculated values accurate?
If something feels off, update the agent’s configuration and test again. Run a few different inputs to see how the agent adapts across scenarios, especially edge cases like no data or invalid responses.