# AI agent best practices
AI agents are powerful AI systems that can run workflows for you with minimal human intervention. However, they are only as good as their setup and instructions; poorly designed agents can lead to inconsistent and undesirable outcomes. This guide outlines tips for building effective AI agents in Make, including tool naming, prompting, data access, and more. After reading, you'll have the foundational knowledge to design AI agents that perform as expected.

## Tools

If a large language model (LLM) is an AI agent's brain, tools are its hands and actions: they give the agent the capabilities to do its job. In Make, you can give your agent tools in three ways: as scenarios, modules, and MCP server tools.

### Tool naming

An agent chooses the right tool for a task based on your tool names and descriptions. Write concise names and descriptions that reflect the tool's purpose and when to run it. For example: "Add customer email to customer contacts table" as the scenario name, and "This adds a new customer email address to the customer contacts table. Do not run this if '@' is missing." as the scenario description.

Well-named tools are essential when sharing scenarios as tools with external AI systems, such as agents, through the Make MCP server. These systems rely on tool metadata (not the system prompt) to make the right decisions.

### Input and output naming

Well-defined scenario inputs and outputs help the AI agent understand what data to receive and share. Input and output fields include:

- Name
- Type
- Description
- Default value
- Required (yes/no)
- Multi-line (yes/no)

Take advantage of this configuration to specify your requirements for inputs or outputs. For example: "Customer email" as the scenario input name, with "Full name of the customer" as the description of a required text input. You can also add stylistic requirements to input descriptions, if relevant. For example, you can specify that a string (text) input must be all lowercase, sentence case, all caps, and so on.

### Debugging in tools

If you notice that
your agent returns suboptimal outputs, first check whether the issue is with your tools. You can debug by adjusting tool-specific logic in tool names, descriptions, and input and output schemas. For example, you can add a hint in your tool description about when the agent should run the tool ("Only run when you have a bank user ID"). Also check that all scenarios used with an agent are activated; scenario errors can cause deactivation (for handling errors in scenarios, see the Error handling documentation). You can adjust the logic in the system prompt if the change applies to all tools.

## Prompting

Make AI agent prompts serve two distinct but related functions: system instructions and user-to-agent communication. The system prompt serves as the core instructions guiding agent behavior, including role, limitations, and processes. Prompting (or communicating with) agents involves task requests and instructions. You can send prompts in:

- scenario-based messages in the AI Agents > Run an agent module
- the chat window (Testing & Training) in the AI Agents configuration tab

### Global system prompts

Keep the system prompt broad enough to enable your agent to work across different scenarios (e.g., quoting, inventory, and customer support). An overly specific prompt risks restricting your agent's effectiveness to a limited set of tasks. For example, consider these two approaches to writing system prompts (specific or simple) for a content agent you expect to use across multiple content creation scenarios.

Specific and restrictive prompt: "You're a content agent designed to generate Instagram content under 150 characters. Drawing from various sources, you will write captions for lifestyle brands. Use an upbeat tone, two relevant hashtags, and three emojis. End the caption with a question to the audience."

Simple and global prompt: "You're a content agent designed to generate and refine text across formats and platforms. You can produce engaging, informative, or persuasive content (including short
form, long form, outlines, and more) tailored to audience needs and platform requirements."

The specific prompt restricts the agent in platform (Instagram), form (character count, emojis, hashtags), and task (social media caption writing), which limits its ability to help with other content-related tasks, like SEO-optimized blog writing. Keep system prompts general, and include more specific instructions in the relevant scenario's description or Messages field.

### Redundancy in AI-assisted prompts

If you are not an advanced prompt writer, you may benefit from using the Improve button in System prompt, which enhances your prompt with AI. Overly lengthy prompts can increase the risk of LLM confusion or hallucinations, and using AI to write prompts may contribute to this risk if you don't review its work. Always review your AI-enhanced prompts for redundancies, such as repetitive or obvious instructions, and remove them. Similarly, edit or remove the Workflow section if it does not align with your process.

### Additional prompts in tools

After writing your system prompt, you can add further specifications in tools. You can do this in Messages and Additional system instructions in the Make AI Agents > Run an agent module.

In Messages, you can write a prompt or input that tells the agent what to do in that specific scenario, including limitations and conditions. For example, in a social media caption scenario, you can instruct the agent to limit the length to 150 characters, avoid complex punctuation like em dashes or semicolons, and request human validation for specific topics.

In the optional Additional system instructions, you can add extra contextual information that the agent wouldn't get from the user. For example, if users call your agent through communication channels like Slack or Telegram, you can use this field to pass metadata such as user type, timestamp, and profile name from that channel.

## Data security and access

LLMs are imperfect systems and still vulnerable to risks such as
PII disclosure in outputs or prompt hacking. The best precaution is to assume that anyone could successfully access what you share with your agent, including its tools and data.

### Agent access to information

Agents only use the personal data you explicitly provide through tools and context files. To reduce data security risk, limit the agent's access to the data it needs to do its job, and avoid exposing it to sensitive, non-essential information. For example, instead of connecting the agent to a tool that exports your entire calendar, its access could be limited to a public calendar with only free/busy slots listed.

### Limits of user-imposed constraints

An agent's system prompt includes limitations and constraints, or rules. Limitations guide the agent's behavior by defining desirable and undesirable actions. For example, you can instruct your AI agent to:

- limit engagement with off-topic queries
- avoid sharing sensitive information in outputs
- flag harmful or unsafe inputs
- prioritize internal knowledge bases as references
- prompt a human to validate outputs

However, even with thorough, explicit guardrails, the agent could still behave unpredictably. In AI systems, constraints themselves have limitations: agents may not always follow them. Given the implications for data security, the most reliable constraint in this area is giving the agent minimal access to sensitive information.

## Model configuration in agent settings

In the AI Agents configuration tab, you can improve how your agent functions by setting limits on its output tokens, execution steps, and thread history.

### Max output tokens

Agents can provide lengthy responses, which can get costly. Use Max output tokens to limit the number of tokens an agent outputs. If you set the maximum tokens too low, you may get incomplete responses. One token equals about 4 characters of text.

### Steps per agent call

In rare cases, agents can get stuck in a loop, repeating the same actions without achieving their goal. Prevent this by limiting the
execution steps an agent can take. A safe estimate for execution steps is to multiply your maximum expected number of tool calls by 10.

### Maximum number of agent runs in thread history

Agents use thread history, similar to a chat history, to remember interactions with you. However, you may not want to keep the entire thread history: doing so consumes additional tokens, because the agent needs to read the entire history during each execution. To save tokens, limit the number of previous messages the system stores. For example, if you only want to keep the last 5 messages, set this field to 5.

## Knowledge enhancement

The LLM behind your agent doesn't know your internal knowledge and processes if they aren't publicly available. Improve your agent's decision-making by uploading private files to Context in the AI Agents configuration tab. These files serve as reference information for the agent. Context files should contain fixed information that applies to all tools using an agent, for example:

- company guidelines
- internal knowledge bases (e.g., Confluence)
- support tickets
- community posts
- company Slack conversations

Your files shouldn't include sensitive information when the agent will be used by users who shouldn't access that data.

When deciding on files to upload, consider the garbage in, garbage out (GIGO) principle: poor-quality or unrepresentative inputs lead to similarly flawed outputs. Upload information that reflects the results you want the agent to reproduce. For example, do not upload poorly executed work as examples to follow.

## Cost optimization

When using Make AI agents, you incur fees for Make operations and LLM provider (e.g., ChatGPT) tokens. Overall costs can rise quickly if you use one thread ID, multiple tools, and advanced reasoning LLMs. Keep LLM costs low by optimizing your token usage and choosing a cost-effective LLM.

### Token usage

Token usage depends on two main factors: the amount of information the LLM processes in requests and the number of user-to-LLM interactions.
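To make the rules of thumb above concrete, here is a rough back-of-the-envelope sketch in Python. It only models the heuristics stated in this guide (about 4 characters per token, and a step limit of expected tool calls multiplied by 10); the function names and sample messages are illustrative, not part of Make or any LLM provider's API.

```python
def estimate_tokens(text: str) -> int:
    """Rough token count using the ~4-characters-per-token rule of thumb."""
    return max(1, round(len(text) / 4))

def safe_step_limit(expected_tool_calls: int) -> int:
    """Safe execution-step limit: expected tool calls multiplied by 10."""
    return expected_tool_calls * 10

# The whole thread history is re-read on every execution, so every stored
# message adds to the input tokens of each subsequent run.
history = ["What is our refund policy?", "Summarize the latest support ticket."]
per_run_history_cost = sum(estimate_tokens(m) for m in history)

print(estimate_tokens("abcd" * 25))   # 100 characters -> about 25 tokens
print(safe_step_limit(3))             # 30
```

Real token counts vary by model and tokenizer, so treat these numbers as rough planning estimates, not billing figures.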
#### Data scope

To reduce the amount of information an agent handles in a request, specify the data you want to pass to the agent and the data you want it to return. For example, instead of instructing the agent to scan an entire database, narrow the scope of its search to entries after June 1st, 2025. To limit the data handled in requests:

- Define scenario inputs and outputs.
- Filter inputs before passing them into the agent. In a scenario, you can add a filter in the route before the AI Agents > Run an agent module.
- Store files with global information (e.g., contact details) in the agent's context.

#### Interactions

User-to-LLM interactions, which include your inputs and the LLM's outputs, also consume tokens. To reduce interactions, prioritize clarity in prompts and consolidate data requests instead of making multiple sequential requests. These strategies enable complete, accurate outputs sooner.

Additionally, when you use the same thread ID across Make AI Agents module runs, the entire thread gets passed into the agent. If you don't need to reference your conversation history with an agent, leave the Thread ID field blank. This creates a new thread ID, which can significantly save costs. You can monitor token usage in the Make AI Agents > Run an agent output bundle, in tokenUsageSummary, and on your LLM provider account.

### Choice of starter LLM

You have multiple LLMs to choose from for powering your AI agent, each with varying costs depending on factors like speed and reasoning abilities. When creating an AI agent, start with an LLM that has a good speed-to-cost ratio. The ideal starter model is fast and inexpensive, such as OpenAI's GPT-4.1 mini. More advanced, slower models, like OpenAI's o3, can rapidly increase costs. Test how well an affordable model can achieve your goals, and scale up as needed.

While a fast LLM performs well in many cases, the right model depends on what you want to do with an agent. Some LLMs are better suited for certain use cases than others, and many LLM providers classify
their models accordingly.
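As a closing illustration of why reusing one thread ID gets expensive, here is a simplified Python model of input-token consumption across runs. It assumes, for simplicity, that every run sends a message of the same size and that a reused thread replays all previous messages on each run; real costs also include the system prompt, tool metadata, and outputs, so this is a sketch of the growth pattern, not an exact pricing model.

```python
def total_input_tokens(runs: int, tokens_per_message: int, reuse_thread: bool) -> int:
    """Simplified model: with a reused thread ID, each run re-reads the
    entire history; with a blank thread ID, each run starts fresh."""
    total, history = 0, 0
    for _ in range(runs):
        total += tokens_per_message + (history if reuse_thread else 0)
        if reuse_thread:
            history += tokens_per_message  # the thread keeps growing
    return total

print(total_input_tokens(10, 100, reuse_thread=True))   # 5500
print(total_input_tokens(10, 100, reuse_thread=False))  # 1000
```

In this toy example, ten 100-token messages on one thread consume more than five times the input tokens of ten fresh threads, which is why leaving the Thread ID field blank saves costs whenever you don't need conversation history.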