How CrewAI agents work

Agents are the building blocks of every CrewAI system. Each agent has a role, goal, and backstory that shape how it thinks and acts. This guide covers everything you need to configure them effectively.

What is a CrewAI agent?

A CrewAI agent is an autonomous unit powered by a large language model. You define what it does (role), what it's optimizing for (goal), and what context shapes its behavior (backstory). The agent uses these to reason about tasks and decide when to call tools.

Think of agents as team members with specialized expertise. A researcher agent searches for information. A writer agent creates content. An analyst agent reviews data. Each focuses on what it does best, and the crew coordinates their work.

Agent attributes

Every agent is configured with these core attributes. The first three — role, goal, and backstory — are required.

role

What the agent does. Think of it as a job title. Keep it specific — "Senior Data Researcher" is better than "Researcher".

goal

What the agent is optimizing for. This shapes how the agent approaches tasks and makes decisions during execution.

backstory

Context that shapes the agent's behavior and expertise. The more specific the backstory, the better the agent performs in its domain.

tools

Functions the agent can call — search APIs, databases, web scrapers, or custom Python code. Assign only the tools each agent needs.

llm

The language model powering this agent. Each agent can use a different model — use a cheaper model for simple tasks, a stronger one for reasoning.

allow_delegation

When True, this agent can hand off tasks to other agents in the crew. Useful for manager agents in hierarchical processes.

YAML configuration

The simplest way to define agents is in a YAML file. CrewAI's @CrewBase decorator loads agent configs from config/agents.yaml automatically. Variables like {topic} are replaced at runtime from crew inputs.

config/agents.yaml
# config/agents.yaml
researcher:
  role: >
    Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for
    uncovering the latest developments in {topic}.
    You're known for your ability to find the most
    relevant information.

reporting_analyst:
  role: >
    Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data
    analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for
    detail. You're known for your ability to turn
    complex data into clear, concise reports.

Python configuration

For more control, define agents in Python using the @agent decorator. This lets you attach custom tools, configure LLMs programmatically, and add conditional logic.

crew.py
from crewai import Agent
from crewai.project import CrewBase, agent

@CrewBase
class ExampleCrew:
    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config["researcher"],
            tools=[search_tool, scrape_tool],  # tool instances defined elsewhere
            verbose=True,
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config["reporting_analyst"],
            verbose=True,
        )

LLM configuration per agent

Each agent can use a different language model. This lets you balance cost and capability — use a powerful model for complex reasoning and a cheaper one for straightforward tasks. CrewAI supports any model available through LiteLLM.

agents.py
from crewai import Agent, LLM

# Use a specific model per agent
agent = Agent(
    role="Research Analyst",
    goal="Find accurate, up-to-date information",
    backstory="You are an expert researcher.",
    llm=LLM(model="openrouter/anthropic/claude-sonnet-4-6"),
)

# Or use OpenAI models
agent = Agent(
    role="Content Writer",
    goal="Write clear, engaging content",
    backstory="You are a skilled writer.",
    llm=LLM(model="gpt-4o"),
)

Agent collaboration patterns

Delegation

When allow_delegation=True, an agent can ask other agents in the crew to help with subtasks. This is how hierarchical processes work — a manager agent delegates to specialist agents based on the situation.

Context passing

In sequential processes, each task's output becomes context for the next task. Agents build on each other's work — a researcher produces findings, and a writer uses those findings to create content. You can also explicitly set context=[other_task] on a task to pull in output from specific tasks.

Memory sharing

When memory is enabled on the crew, agents share short-term memory within a run. This lets later agents reference information from earlier in the execution without it being explicitly passed through task context.

Best practices

Be specific with roles

A "Senior Data Researcher specializing in market trends" outperforms a generic "Researcher". Specific roles produce focused, higher-quality output.

Write goal-oriented goals

Goals should describe the outcome, not the process. "Find the 10 most impactful developments" is better than "Research things".

Give agents minimal tools

Each agent should have only the tools it needs. Giving every agent every tool leads to confusion and wasted tokens.

Split complex agents

If an agent is doing too many different things, split it into two agents. A researcher and a writer will outperform a single "research and write" agent.

Use context-rich backstories

Backstories are not just flavor text — they shape how the LLM approaches problems. Include domain expertise, standards it should follow, and how it should handle edge cases.

Match LLMs to tasks

Use stronger models (Claude, GPT-4o) for complex reasoning and cheaper models for straightforward tasks like formatting or summarization.

