Building AI Agents from Plain English: Introducing AgentBuilder
Because nobody got into AI to write YAML
AI agents are having a moment. The idea of autonomous software that can search the web, read files, run code, and chain reasoning steps together is genuinely compelling. Frameworks like CrewAI have made the underlying plumbing much more accessible, and the LLMs powering these agents keep getting better.
But there is still a gap between “I want an agent that does X” and actually having one running. You need to understand how to define roles, goals, backstories, and tool configurations. You need to wire up API keys securely. You need to decide which tools to expose and how to parameterize tasks. And if you want to reuse an agent later, you need to figure out where to store it and how to invoke it consistently.
AgentBuilder is a Python CLI tool designed to close that gap. You describe what you want in plain English, and it handles the rest: LLM-driven configuration generation, validation, a persistent agent library, and clean terminal output. It is pip-installable, works with multiple AI backends, and is built on a straightforward YAML format that you can inspect and edit by hand.
The Core Workflow
The central command is agentbuilder create. You pass it a natural language description of the agent you want, and AgentBuilder sends that description to an LLM, which returns a validated YAML configuration for a CrewAI agent.
agentbuilder create "a stock market analyst that researches equities and gives buy/hold/sell recommendations" --save

Behind the scenes, AgentBuilder calls your configured LLM backend (Anthropic Claude by default), receives the generated configuration, validates it with Pydantic, and renders it as syntax-highlighted YAML directly in your terminal. You can preview it before committing to anything. The --save flag stores it to your local agent library. If you want to skip the save step and just run the agent immediately, use --run with a task:
agentbuilder create "a code reviewer focused on Python best practices" --run --task "Review this PR for style issues and potential bugs"

That is the whole loop: describe, generate, validate, run. No YAML written by hand, no boilerplate agent setup code.
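The validate step is worth pausing on. AgentBuilder does it with Pydantic; the dependency-free sketch below mimics the idea with a plain dataclass, and the field names are assumptions modeled on the YAML format rather than the tool's real schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    # Field names mirror the YAML format shown below; the real
    # Pydantic model inside AgentBuilder may differ.
    role: str
    goal: str
    backstory: str
    task_description: str
    task_expected_output: str
    verbose: bool = True
    allow_delegation: bool = False
    llm: str = "grok-3"
    tools: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Reject obviously malformed LLM output before saving or running.
        for name in ("role", "goal", "task_description"):
            if not getattr(self, name).strip():
                raise ValueError(f"{name} must be a non-empty string")

raw = {  # what the LLM might return, already parsed from YAML
    "role": "Stock Market Analyst",
    "goal": "Analyze stock market trends",
    "backstory": "Seasoned equity researcher...",
    "task_description": "Research and analyze {input}.",
    "task_expected_output": "A buy/hold/sell recommendation.",
    "tools": ["SerperDevTool"],
}
config = AgentConfig(**raw)
```

The point of the gate is that a malformed generation fails loudly at create time, not halfway through an agent run.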
What the YAML Actually Looks Like
Every agent in AgentBuilder is backed by a YAML file. These are not opaque binary configs; they are readable, editable text files that map directly to CrewAI agent concepts. Here is a real example of what AgentBuilder generates for a stock analyst:
stock_analyst:
  role: Stock Market Analyst
  goal: >
    Analyze stock market trends and provide actionable investment recommendations
  backstory: >
    You are a seasoned financial analyst with 15 years of experience in equity
    research, technical analysis, and fundamental valuation...
  verbose: true
  allow_delegation: false
  llm: grok-3
  tools:
    - SerperDevTool
    - ScrapeWebsiteTool
  task_description: >
    Research and analyze {input}. Gather current price data, recent news,
    earnings history, and analyst sentiment.
  task_expected_output: >
    A structured analysis with a buy / hold / sell recommendation.

A few things worth noting here. The task_description field supports an {input} placeholder, which gets filled in at runtime when you run the agent with --task. This makes agents reusable across different inputs without editing the underlying config. The llm field lets each agent specify which model it runs on independently of which model was used to generate it. And the tools list maps to real CrewAI tool integrations that AgentBuilder wires up automatically.
Once you understand this format, you can also write or edit configurations by hand and import them:
agentbuilder import ./my_custom_agent.yaml

The Agent Library
Agents you save with --save go into ~/.agentbuilder/agents/. AgentBuilder treats this directory as a persistent library with a full set of management commands:
agentbuilder list # show all saved agents in a formatted table
agentbuilder show stock_analyst # display the full YAML for a specific agent
agentbuilder edit stock_analyst # open the config in your default editor
agentbuilder delete stock_analyst

The run command is the payoff for having a library. Once an agent is saved, you can invoke it any time with a specific task:
agentbuilder run stock_analyst --task "Analyze NVDA"
agentbuilder run stock_analyst --task "Analyze MSFT and compare it to GOOGL"

The {input} placeholder in the stored YAML gets replaced with whatever you pass as --task. This pattern means you build and validate an agent configuration once, then reuse it repeatedly without going back through the generation step.
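That substitution is simple string templating. A sketch, assuming the stored task description uses the literal {input} token (a plain replace() avoids clashing with any other braces in the template, unlike str.format()):

```python
def render_task(task_description: str, task_input: str) -> str:
    # Fill the {input} placeholder with the value passed via --task.
    return task_description.replace("{input}", task_input)

template = "Research and analyze {input}. Gather current price data."
rendered = render_task(template, "NVDA")
print(rendered)  # Research and analyze NVDA. Gather current price data.
```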
Tool Support
AgentBuilder ships with 13 built-in CrewAI tool integrations that can be assigned to any agent. You can see them all with:
agentbuilder tools

The list covers a reasonable range of practical use cases:

Web research: SerperDevTool (Google search via the Serper API), ScrapeWebsiteTool, WebsiteSearchTool
File operations: FileReadTool, FileWriteTool, DirectoryReadTool
Structured data: JSONSearchTool, CSVSearchTool, PDFSearchTool, TXTSearchTool
Code and development: CodeInterpreterTool, GithubSearchTool
Media: YoutubeVideoSearchTool
When the LLM generates a configuration from your description, it selects tools from this list based on what makes sense for the described role. A research agent gets web search tools. A file processing agent gets file operation tools. You can adjust the tool list by editing the YAML directly if the generated selection is not quite right.
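Resolving YAML tool names to real tool objects is typically a small registry lookup. A sketch with placeholder classes standing in for the actual crewai_tools imports (AgentBuilder's real wiring may differ):

```python
# Placeholder classes stand in for the real crewai_tools imports,
# which this sketch deliberately does not assume are installed.
class SerperDevTool: ...
class ScrapeWebsiteTool: ...
class FileReadTool: ...

# Registry keyed by the exact names that appear in the YAML tools list.
TOOL_REGISTRY = {
    cls.__name__: cls
    for cls in (SerperDevTool, ScrapeWebsiteTool, FileReadTool)
}

def resolve_tools(names: list[str]) -> list[object]:
    """Instantiate each named tool, failing fast on unknown names."""
    unknown = [n for n in names if n not in TOOL_REGISTRY]
    if unknown:
        raise ValueError(f"Unknown tools: {unknown}")
    return [TOOL_REGISTRY[n]() for n in names]

tools = resolve_tools(["SerperDevTool", "ScrapeWebsiteTool"])
```

Failing fast on unknown names matters here because the tool list is LLM-generated: a hallucinated tool name should surface as a validation error, not a runtime crash mid-task.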
Multiple LLM Backends
AgentBuilder supports three backends for agent generation: Anthropic Claude (claude-opus-4-6, the default), xAI Grok (grok-3), and OpenAI (gpt-4o). This is distinct from the llm field in the agent YAML, which controls which model the agent itself uses when executing tasks.
API keys are stored in ~/.agentbuilder/config.toml, never in any project directory or version-controlled file. The config commands handle key management:
agentbuilder config set anthropic_api_key sk-ant-...
agentbuilder config show

The separation between the generation backend and the agent execution model is useful in practice. You might use Claude to generate a well-structured agent config, but have that agent run on Grok or GPT-4o at execution time, depending on cost or capability preferences.
Getting Started
AgentBuilder requires Python 3.10 or later. Installation is straightforward:
git clone https://github.com/rod-trent/AgentBuilder.git
cd AgentBuilder
pip install -e .
agentbuilder init

The init command walks you through interactive setup: choosing your default LLM backend and setting your API keys. After that, you are ready to start creating agents.
The project is available on GitHub at https://github.com/rod-trent/AgentBuilder. The codebase is built on a readable Python stack: Click for the CLI, Rich for terminal output, Pydantic for validation, and the official Anthropic and OpenAI SDKs for LLM calls.
If you have been wanting to experiment with CrewAI agents but found the setup overhead discouraging, AgentBuilder is worth a look. The workflow from “I want an agent that does X” to actually running that agent is short, and the library model means the agents you build accumulate into something useful over time rather than staying as one-off experiments.