Agents API

Agents are the AI entities that interact with Meta Agents Research Environments to complete tasks. This section documents the core Agent API and related classes for creating and customizing agent behavior.

Overview

Agents in Meta Agents Research Environments are responsible for:

  • Task Execution: Understanding and completing assigned tasks

  • Tool Usage: Interacting with apps through their exposed APIs

  • Reasoning: Making decisions based on available information

  • Learning: Adapting behavior based on feedback and results

The agent system is built around a ReAct (Reasoning + Acting) framework that allows agents to think, act, and observe in iterative cycles.
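
Schematically, one cycle of this loop can be sketched as follows. This is illustrative pseudocode only, not the actual implementation (the real loop lives in BaseAgent.step() and BaseAgent.execute_agent_loop()), and llm_engine, execute_action, and the final-answer marker are placeholders:

# Illustrative sketch of a ReAct cycle; not the BaseAgent implementation.
def react_loop(llm_engine, execute_action, task, max_iterations=10):
    history = [f"[TASK]: {task}"]
    for _ in range(max_iterations):
        thought_and_action = llm_engine("\n".join(history))  # think
        observation = execute_action(thought_and_action)     # act
        history.append(observation)                          # observe
        if observation.startswith("[FINAL ANSWER]"):
            return observation
    return None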

Creating Custom Agents

Agents used from the CLI are built via are/simulation/agents/default_agent/agent_factory.py. The BaseAgent class describes the “inner loop” of the agent - the core reasoning and action execution logic. The environment event reaction loop (handling user messages, notifications, and the overall agent lifecycle) is implemented in are/simulation/agents/default_agent/are_simulation_main.py.

When creating custom agents, you should focus on configuring the BaseAgent and should not need to modify the environment event handling in are_simulation_main.py.

To create a custom agent, inherit from the base BaseAgent class and customize the behavior:

Base Agent Structure

from are.simulation.agents.default_agent.base_agent import BaseAgent, ConditionalStep
from are.simulation.agents.default_agent.tools.json_action_executor import JsonActionExecutor

class MyCustomAgent(BaseAgent):
    def __init__(self, llm_engine, **kwargs):
        # Custom system prompts
        system_prompts = {
            "system_prompt": """You are a helpful AI assistant specialized in email management.
            Your goal is to help users organize and respond to emails efficiently.

            Available tools: <<tool_descriptions>>

            Always think step by step and explain your reasoning."""
        }

        # Custom action executor using the JSON tool-call format
        action_executor = JsonActionExecutor()

        # Custom conditional steps
        pre_steps = [
            ConditionalStep(
                condition=lambda agent: agent.iterations == 0,
                function=self.initial_setup,
                name="initial_setup"
            )
        ]

        super().__init__(
            llm_engine=llm_engine,
            system_prompts=system_prompts,
            action_executor=action_executor,
            conditional_pre_steps=pre_steps,
            max_iterations=15,
            **kwargs
        )

        self.name = "email_specialist_agent"

    def initial_setup(self, agent: BaseAgent) -> None:
        """Custom initialization logic; conditional step functions receive the agent."""
        self.logger.info("Initializing email specialist agent...")
        # Add custom setup logic here

Agent Customization Patterns

Custom System Prompts

system_prompts = {
    "system_prompt": """You are an expert data analyst.

    Your capabilities include:
    - Analyzing datasets and identifying patterns
    - Creating visualizations and reports
    - Providing statistical insights

    Available tools: <<tool_descriptions>>

    Always provide detailed explanations for your analysis.""",
}
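
These prompts are passed to the agent at construction time; the <<tool_descriptions>> placeholder is filled in from the agent's tools (by default via the update_system_prompt_tools hook). A minimal wiring sketch, assuming llm_engine is an LLM engine callable defined elsewhere:

from are.simulation.agents.default_agent.base_agent import BaseAgent

analyst_agent = BaseAgent(
    llm_engine=llm_engine,  # assumed to be defined elsewhere
    system_prompts=system_prompts,
)
result = analyst_agent.run("Summarize the key trends in last month's sales data.")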

Conditional Steps

from are.simulation.agents.default_agent.base_agent import BaseAgent, ConditionalStep

def create_monitoring_agent(llm_engine):
    def check_progress(agent):
        """Return True when the agent appears to be stuck."""
        return agent.iterations > 5 and agent.planning_counter == 0

    def ask_user_help(agent):
        """Ask the user for help when the agent is stuck."""
        agent.send_message_to_user(
            "I might be stuck. Can you help me with this task?"
        )

    monitoring_step = ConditionalStep(
        condition=check_progress,
        function=ask_user_help,
        name="progress_monitoring"
    )

    return BaseAgent(
        llm_engine=llm_engine,
        conditional_post_steps=[monitoring_step]
    )

Custom Termination Conditions

from are.simulation.agents.agent_log import ObservationLog
from are.simulation.agents.default_agent.base_agent import TerminationStep

def success_based_termination(agent):
    """Terminate when task is successfully completed."""
    # Check if the last action was successful
    last_log = agent.get_last_log_of_type(ObservationLog)
    if last_log and "success" in last_log.content.lower():
        return True
    return agent.iterations >= agent.max_iterations

def cleanup_on_termination(agent):
    """Clean up resources when terminating."""
    agent.logger.info("Task completed, cleaning up...")
    # Add cleanup logic here
    return "Task completed successfully"

custom_termination = TerminationStep(
    condition=success_based_termination,
    function=cleanup_on_termination,
    name="success_termination"
)
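
The termination step is wired into the agent through the termination_step parameter; a minimal sketch, again assuming llm_engine is defined elsewhere:

agent = BaseAgent(
    llm_engine=llm_engine,
    termination_step=custom_termination,
)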

Agent Configuration

Tool Integration

Sometimes agents need their own tools in addition to the apps provided by the environment. Think of an agent that knows how to browse the web or needs to execute code.

from are.simulation.tools import Tool

class CustomTool(Tool):
    name = "custom_calculator"
    description = "Adds two numbers and returns the result"

    def __call__(self, a: float, b: float) -> str:
        # Implement custom addition logic
        try:
            result = a + b
            return f"Result: {result}"
        except Exception as e:
            return f"Error: {str(e)}"

# Add custom tools to agent
custom_tools = {"custom_calculator": CustomTool()}

agent = BaseAgent(
    llm_engine=llm_engine,
    tools=custom_tools
)
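
Because the tool implements __call__, it can also be exercised directly, which is a quick way to sanity-check its behavior before handing it to the agent:

print(custom_tools["custom_calculator"](2, 3))  # prints: Result: 5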

Logging and Monitoring

Custom Log Callbacks

from are.simulation.agents.agent_log import BaseAgentLog, ErrorLog, StepLog, TaskLog

def custom_log_callback(log: BaseAgentLog):
    """Custom logging callback for monitoring agent behavior."""
    if isinstance(log, ErrorLog):
        print(f"ERROR: {log.exception}")
    elif isinstance(log, TaskLog):
        print(f"NEW TASK: {log.content}")
    elif isinstance(log, StepLog):
        print(f"STEP {log.iteration}: Starting...")

agent = BaseAgent(
    llm_engine=llm_engine,
    log_callback=custom_log_callback
)

Log Analysis

def analyze_agent_performance(agent: BaseAgent):
    """Analyze agent performance from logs."""
    logs = agent.get_agent_logs()

    # Count different log types
    log_counts = {}
    for log in logs:
        log_type = type(log).__name__
        log_counts[log_type] = log_counts.get(log_type, 0) + 1

    # Calculate success rate
    error_count = log_counts.get('ErrorLog', 0)
    total_steps = log_counts.get('StepLog', 0)
    success_rate = (total_steps - error_count) / total_steps if total_steps > 0 else 0

    return {
        'total_steps': total_steps,
        'errors': error_count,
        'success_rate': success_rate,
        'log_distribution': log_counts
    }
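
For example, assuming agent is the BaseAgent instance from the previous snippets and has finished a run:

stats = analyze_agent_performance(agent)
print(f"{stats['total_steps']} steps, {stats['errors']} errors, "
      f"{stats['success_rate']:.0%} success rate")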

For more examples of agent implementation, see the built-in agents in the are.simulation.agents module.

Core Agent Classes

are.simulation.agents.default_agent.base_agent.to_text(input)[source]
Return type:

str

class are.simulation.agents.default_agent.base_agent.RunningState(value)[source]

Bases: Enum

An enumeration.

RUNNING = 'running'
PAUSED = 'paused'
TERMINATED = 'terminated'
FAILED = 'failed'
are.simulation.agents.default_agent.base_agent.convert_plan_fact_messages_to_user(content)[source]
Return type:

str

are.simulation.agents.default_agent.base_agent.format_message(message_dict, message_type, content, i=None, timestamp=None)[source]
Return type:

str

are.simulation.agents.default_agent.base_agent.default_termination_condition(agent)[source]
Return type:

bool

are.simulation.agents.default_agent.base_agent.update_system_prompt_with_tools(system_prompts, tools)[source]
Return type:

dict[str, str]

are.simulation.agents.default_agent.base_agent.update_system_prompt_with_authorized_imports(system_prompts, authorized_imports)[source]
Return type:

dict[str, str]

are.simulation.agents.default_agent.base_agent.get_offset_from_time_config_mode(time_config, completion_duration)[source]

Determine the time to advance based on the mode

Return type:

float

Base Agent

class are.simulation.agents.default_agent.base_agent.BaseAgent(llm_engine, system_prompts={}, tools={}, update_system_prompt_tools=<function update_system_prompt_with_tools>, conditional_pre_steps=None, conditional_post_steps=None, termination_step=TerminationStep(condition=<function default_termination_condition>, function=<function TerminationStep.<lambda>>, name='termination'), role_dict={'agent_user_interface': MessageRole.USER, 'environment_notifications': MessageRole.USER, 'error': MessageRole.TOOL_RESPONSE, 'facts': MessageRole.USER, 'llm_output': MessageRole.ASSISTANT, 'observation': MessageRole.TOOL_RESPONSE, 'plan': MessageRole.USER, 'rationale': MessageRole.ASSISTANT, 'system_prompt': MessageRole.SYSTEM, 'task': MessageRole.USER, 'tool_call': MessageRole.TOOL_CALL}, message_dict={'agent_user_interface': 'User messages updates:\\n***\\n{content}\\n***\\n', 'environment_notifications': 'Environment notifications updates:\\n***\\n{content}\\n***\\n', 'error': "[OUTPUT OF STEP {i}] ERROR:\\n***\\n{content}\\n***\\n\\nNow let's retry: take care not to repeat previous errors! If you have retried several times, try a completely different approach.\\n", 'facts': '[FACTS LIST]:\\n{content}\\n', 'llm_output': '{content}\\n', 'observation': '[OUTPUT OF STEP {i}] Observation:\\n***\\n{content}\\n***\\n', 'plan': '[PLAN]:\\n{content}\\n', 'rationale': '{content}\\n', 'system_prompt': '{content}\\n', 'task': '[TASK]: \\n{content}\\n', 'tool_call': '[STEP {i} TOOL CALL]:\\n{content}\\n'}, action_executor=None, max_iterations=10, total_iterations=50, shuffle_tools=False, shuffle_authorized_imports=False, thought_token=None, action_token=None, retry_llm_call_on_error=True, time_manager=None, log_callback=None, handle_prompt_too_long=False, simulated_generation_time_config=None, use_custom_logger=True)[source]

Bases: object

This agent is a ReAct loop on steroids.

Parameters:
  • tools (dict[str, Tool]) – dict[str, Tool] - Dictionary of tools available to the agent, keyed by tool name.

  • tool_description_template – str - Template for describing tools to LLM prompt.

  • llm_engine (Callable) – Callable - Function to call LLM.

  • system_prompts (dict[str, str]) – dict[str, str] - System prompts for different steps of the agent.

  • post_step_functions – dict[str, Callable] - Functions to call after each step of the agent.

  • termination_methods – dict[str, Callable] - Methods to handle the end of the agent.

  • step_conditions – dict[str, Callable] - Conditions to check before each step of the agent.

  • role_dict (OrderedDict) – dict[str, str] - Dictionary mapping roles to their names.

  • tool_parser – Callable - Function to parse tool calls.

  • max_iterations (int) – int - Maximum self.planning_counter before termination (excludes errors)

  • total_iterations (int) – int - Maximum self.iterations before termination (includes errors)

  • shuffle_tools (bool) – bool - Whether to shuffle the list of tools and authorized imports.

  • time_manager (Optional[TimeManager]) – TimeManager - Time manager for the agent.

  • log_callback (Optional[Callable[[BaseAgentLog], None]]) – Callable[[BaseAgentLog], None] - Callback function to log agent actions.

  • simulated_generation_time_config (Optional[SimulatedGenerationTimeConfig]) – SimulatedGenerationTimeConfig - Configuration for simulated generation time.

__init__(llm_engine, system_prompts={}, tools={}, update_system_prompt_tools=<function update_system_prompt_with_tools>, conditional_pre_steps=None, conditional_post_steps=None, termination_step=TerminationStep(condition=<function default_termination_condition>, function=<function TerminationStep.<lambda>>, name='termination'), role_dict={'agent_user_interface': MessageRole.USER, 'environment_notifications': MessageRole.USER, 'error': MessageRole.TOOL_RESPONSE, 'facts': MessageRole.USER, 'llm_output': MessageRole.ASSISTANT, 'observation': MessageRole.TOOL_RESPONSE, 'plan': MessageRole.USER, 'rationale': MessageRole.ASSISTANT, 'system_prompt': MessageRole.SYSTEM, 'task': MessageRole.USER, 'tool_call': MessageRole.TOOL_CALL}, message_dict={'agent_user_interface': 'User messages updates:\\n***\\n{content}\\n***\\n', 'environment_notifications': 'Environment notifications updates:\\n***\\n{content}\\n***\\n', 'error': "[OUTPUT OF STEP {i}] ERROR:\\n***\\n{content}\\n***\\n\\nNow let's retry: take care not to repeat previous errors! If you have retried several times, try a completely different approach.\\n", 'facts': '[FACTS LIST]:\\n{content}\\n', 'llm_output': '{content}\\n', 'observation': '[OUTPUT OF STEP {i}] Observation:\\n***\\n{content}\\n***\\n', 'plan': '[PLAN]:\\n{content}\\n', 'rationale': '{content}\\n', 'system_prompt': '{content}\\n', 'task': '[TASK]: \\n{content}\\n', 'tool_call': '[STEP {i} TOOL CALL]:\\n{content}\\n'}, action_executor=None, max_iterations=10, total_iterations=50, shuffle_tools=False, shuffle_authorized_imports=False, thought_token=None, action_token=None, retry_llm_call_on_error=True, time_manager=None, log_callback=None, handle_prompt_too_long=False, simulated_generation_time_config=None, use_custom_logger=True)[source]
Parameters:
  • tools (dict[str, Tool]) – dict[str, Tool] - Dictionary of tools available to the agent, keyed by tool name.

  • tool_description_template – str - Template for describing tools to LLM prompt.

  • llm_engine (Callable) – Callable - Function to call LLM.

  • system_prompts (dict[str, str]) – dict[str, str] - System prompts for different steps of the agent.

  • post_step_functions – dict[str, Callable] - Functions to call after each step of the agent.

  • termination_methods – dict[str, Callable] - Methods to handle the end of the agent.

  • step_conditions – dict[str, Callable] - Conditions to check before each step of the agent.

  • role_dict (OrderedDict) – dict[str, str] - Dictionary mapping roles to their names.

  • tool_parser – Callable - Function to parse tool calls.

  • max_iterations (int) – int - Maximum self.planning_counter before termination (excludes errors)

  • total_iterations (int) – int - Maximum self.iterations before termination (includes errors)

  • shuffle_tools (bool) – bool - Whether to shuffle the list of tools and authorized imports.

  • time_manager (Optional[TimeManager]) – TimeManager - Time manager for the agent.

  • log_callback (Optional[Callable[[BaseAgentLog], None]]) – Callable[[BaseAgentLog], None] - Callback function to log agent actions.

  • simulated_generation_time_config (Optional[SimulatedGenerationTimeConfig]) – SimulatedGenerationTimeConfig - Configuration for simulated generation time.

parent: BaseAgent | None
decoding_schema: DecodingSchema | None
logs: list[BaseAgentLog]
notification_system: BaseNotificationSystem | None
pause_env: Optional[Callable[[], None]]
resume_env: Optional[Callable[[float], None]]
T = ~T
get_last_log_of_type(log_type, break_on=[<class 'are.simulation.agents.agent_log.TaskLog'>])[source]
Return type:

Optional[TypeVar(T, bound= BaseAgentLog)]

build_history_from_logs(exclude_log_types=[])[source]

Build the history of messages from the logs, ensuring a specific order of steps.

Parameters:
  • exclude_log_types (list[str]) – List of log types to exclude from the history.

Returns:

List of messages.

Return type:

list[dict[str, str | list[Attachment]]]

seed_observation(observation)[source]
init_tools()[source]
is_initialized()[source]
Return type:

bool

initialize(attachments=None, **kwargs)[source]

Initialize the agent for a given task.

Parameters:
  • kwargs (dict[str, Any]) – Additional arguments for the agent.

Return type:

None

step()[source]

Perform one step in the ReAct framework: the agent thinks, acts, and observes the result. Errors are raised here; they are caught and logged in the run() method.

get_agent_logs()[source]
Return type:

list[BaseAgentLog]

append_subagent_log(subagent_log, group_id)[source]
Return type:

None

append_agent_log(log)[source]
Return type:

None

stop()[source]
Return type:

None

execute_agent_loop()[source]
Return type:

str | None | MMObservation

run(task, reset=True, attachments=None, **kwargs)[source]

Run the agent on a given task.

Parameters:
  • task (str) – Task to solve.

  • reset (bool) – Whether to reset the agent before running.

  • kwargs (dict[str, Any]) – Additional arguments for the agent.

Returns:

Result of the agent (depends on the termination step).

Return type:

str | MMObservation | None

log_error(e)[source]

Add an error to the last agent log.

Return type:

None

make_timestamp()[source]

Make a timestamp for the current time.

Return type:

float

replay(start_logs)[source]

Reload the state of the agent at a given starting point.

Return type:

None

get_original_tool_name(tool_name)[source]

Get the original tool name from the variant name.

Return type:

str

send_message_to_user(content)[source]
Return type:

None

Key Methods

Initialization and Setup

BaseAgent.initialize(attachments=None, **kwargs)[source]

Initialize the agent for a given task.

Parameters:
  • kwargs (dict[str, Any]) – Additional arguments for the agent.

Return type:

None

BaseAgent.init_tools()[source]
BaseAgent.is_initialized()[source]
Return type:

bool
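
A small sketch of guarding manual setup, assuming agent is an already constructed BaseAgent:

if not agent.is_initialized():
    agent.initialize()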

Execution Control

BaseAgent.run(task, reset=True, attachments=None, **kwargs)[source]

Run the agent on a given task.

Parameters:
  • task (str) – Task to solve.

  • reset (bool) – Whether to reset the agent before running.

  • kwargs (dict[str, Any]) – Additional arguments for the agent.

Returns:

Result of the agent (depends on the termination step).

Return type:

str | MMObservation | None

BaseAgent.step()[source]

Perform one step in the ReAct framework: the agent thinks, acts, and observes the result. Errors are raised here; they are caught and logged in the run() method.

BaseAgent.stop()[source]
Return type:

None
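
A minimal usage sketch, assuming agent is a configured BaseAgent:

result = agent.run("Summarize today's unread emails")
print(result)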

Logging and History

BaseAgent.get_agent_logs()[source]
Return type:

list[BaseAgentLog]

BaseAgent.append_agent_log(log)[source]
Return type:

None

BaseAgent.build_history_from_logs(exclude_log_types=[])[source]

Build the history of messages from the logs, ensuring a specific order of steps.

Parameters:
  • exclude_log_types (list[str]) – List of log types to exclude from the history.

Returns:

List of messages.

Return type:

list[dict[str, str | list[Attachment]]]

BaseAgent.replay(start_logs)[source]

Reload the state of the agent at a given starting point.

Return type:

None

Communication

BaseAgent.send_message_to_user(content)[source]
Return type:

None

BaseAgent.log_error(e)[source]

Add an error to the last agent log.

Return type:

None

Configuration Classes

class are.simulation.agents.default_agent.base_agent.ConditionalStep(condition, function, name)[source]

Bases: object

condition: Optional[Callable[[BaseAgent], bool]]
function: Callable[[BaseAgent], None]
name: str
class are.simulation.agents.default_agent.base_agent.TerminationStep(condition, function=<function TerminationStep.<lambda>>, name='termination')[source]

Bases: ConditionalStep

function()
name: str = 'termination'
condition: Optional[Callable[[BaseAgent], bool]]

Agent Logging System

class are.simulation.agents.agent_log.BaseAgentLog(timestamp, agent_id)[source]

Bases: ABC

timestamp: float
id: str
agent_id: str
abstract get_content_for_llm()[source]
Return type:

str | None

get_attachments_for_llm()[source]

Contains attachments that should be sent to the LLM

Returns:

Attachments to include in LLM message for this log entry.

Return type:

list[Attachment]

abstract get_type()[source]
Return type:

str

to_dict()[source]
Return type:

dict[str, Any]

serialize()[source]
Return type:

str

classmethod from_dict(d)[source]
Return type:

BaseAgentLog
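
All of the log classes below derive from BaseAgentLog, so an agent's history can be inspected by filtering on log type. A small sketch, assuming agent is a BaseAgent that has completed a run:

from are.simulation.agents.agent_log import ObservationLog, ToolCallLog

logs = agent.get_agent_logs()
tool_calls = [log for log in logs if isinstance(log, ToolCallLog)]
observations = [log for log in logs if isinstance(log, ObservationLog)]
print(f"{len(tool_calls)} tool calls, {len(observations)} observations")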

class are.simulation.agents.agent_log.SystemPromptLog(timestamp, agent_id, content)[source]

Bases: BaseAgentLog

content: str
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.TaskLog(timestamp, agent_id, content, attachments=<factory>)[source]

Bases: BaseAgentLog

content: str
attachments: list[Attachment]
get_content_for_llm()[source]
Return type:

str | None

get_content_for_llm_no_attachment()[source]
Return type:

str | None

get_attachments_for_llm()[source]

Contains attachments that should be sent to the LLM

Returns:

Attachments to include in LLM message for this log entry.

Return type:

list[Attachment]

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.LLMInputLog(timestamp, agent_id, content)[source]

Bases: BaseAgentLog

content: list[dict[str, str | list[Attachment]]]
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.LLMOutputThoughtActionLog(timestamp, agent_id, content, prompt_tokens=0, completion_tokens=0, total_tokens=0, reasoning_tokens=0, completion_duration=0.0)[source]

Bases: BaseAgentLog

content: str
prompt_tokens: int = 0
completion_tokens: int = 0
total_tokens: int = 0
reasoning_tokens: int = 0
completion_duration: float = 0.0
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.RationaleLog(timestamp, agent_id, content)[source]

Bases: BaseAgentLog

content: str
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.ToolCallLog(timestamp, agent_id, tool_name, tool_arguments)[source]

Bases: BaseAgentLog

tool_name: str
tool_arguments: str | dict[str, str]
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.ObservationLog(timestamp, agent_id, content, attachments=<factory>)[source]

Bases: BaseAgentLog

content: str
attachments: list[Attachment]
get_content_for_llm()[source]
Return type:

str | None

get_content_for_llm_no_attachment()[source]
Return type:

str | None

get_attachments_for_llm()[source]

Contains attachments that should be sent to the LLM

Returns:

Attachments to include in LLM message for this log entry.

Return type:

list[Attachment]

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.StepLog(timestamp, agent_id, iteration)[source]

Bases: BaseAgentLog

iteration: int
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.SubagentLog(timestamp, agent_id, group_id, children, name=None)[source]

Bases: BaseAgentLog

group_id: str
children: list[BaseAgentLog]
name: str | None = None
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

to_dict()[source]
Return type:

dict[str, Any]

class are.simulation.agents.agent_log.FinalAnswerLog(timestamp, agent_id, content, attachments=<factory>)[source]

Bases: BaseAgentLog

content: str
attachments: list[Attachment]
get_content_for_llm()[source]
Return type:

str | None

get_content_for_llm_no_attachment()[source]
Return type:

str | None

get_attachments_for_llm()[source]

Contains attachments that should be sent to the LLM

Returns:

Attachments to include in LLM message for this log entry.

Return type:

list[Attachment]

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.ErrorLog(timestamp, agent_id, error, exception, category, agent)[source]

Bases: BaseAgentLog

error: str
exception: str
category: str
agent: str
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.ThoughtLog(timestamp, agent_id, content)[source]

Bases: BaseAgentLog

content: str
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.PlanLog(timestamp, agent_id, content)[source]

Bases: BaseAgentLog

content: str
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.FactsLog(timestamp, agent_id, content)[source]

Bases: BaseAgentLog

content: str
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.ReplanLog(timestamp, agent_id, content)[source]

Bases: BaseAgentLog

content: str
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.RefactsLog(timestamp, agent_id, content)[source]

Bases: BaseAgentLog

content: str
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.StopLog(timestamp, agent_id, content)[source]

Bases: BaseAgentLog

content: str
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.ActionLog(timestamp, agent_id, content, input, event_type, output, action_name, app_name, exception, exception_stack_trace)[source]

Bases: BaseAgentLog

content: str
input: dict[str, Any]
event_type: str
output: Any
action_name: str
app_name: str
exception: str | None
exception_stack_trace: str | None
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.EndTaskLog(timestamp, agent_id)[source]

Bases: BaseAgentLog

get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.LLMOutputPlanLog(timestamp, agent_id, content, prompt_tokens=0, completion_tokens=0, total_tokens=0, reasoning_tokens=0, completion_duration=0.0)[source]

Bases: BaseAgentLog

content: str
prompt_tokens: int = 0
completion_tokens: int = 0
total_tokens: int = 0
reasoning_tokens: int = 0
completion_duration: float = 0.0
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.LLMOutputFactsLog(timestamp, agent_id, content, prompt_tokens=0, completion_tokens=0, total_tokens=0, reasoning_tokens=0, completion_duration=0.0)[source]

Bases: BaseAgentLog

content: str
prompt_tokens: int = 0
completion_tokens: int = 0
total_tokens: int = 0
reasoning_tokens: int = 0
completion_duration: float = 0.0
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.AgentUserInterfaceLog(timestamp, agent_id, content)[source]

Bases: BaseAgentLog

content: str
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.EnvironmentNotificationLog(timestamp, agent_id, content)[source]

Bases: BaseAgentLog

content: str
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.HintLog(timestamp, agent_id, content)[source]

Bases: BaseAgentLog

content: str
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

class are.simulation.agents.agent_log.TaskReminderLog(timestamp, agent_id, content)[source]

Bases: BaseAgentLog

content: str
get_content_for_llm()[source]
Return type:

str | None

get_type()[source]
Return type:

str

Multimodal Classes

class are.simulation.agents.multimodal.Attachment(**kwargs)[source]

Bases: BaseModel

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

base64_data: bytes
mime: str
name: str | None
classmethod encode_base64_data(value)[source]

Convert base64_data string to bytes for proper handling. Only accepts string input - raises ValueError for other types.

serialize_base64_data(value)[source]

Serialize base64_data bytes to string for JSON serialization. This ensures json.dumps() works properly on Attachment objects.

Return type:

str

to_openai_json()[source]
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

are.simulation.agents.multimodal.attachments_to_pil(attachments)[source]
Return type:

list[Image]

are.simulation.agents.multimodal.pil_to_attachments(images)[source]
Return type:

list[Attachment]
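
A short sketch of turning a PIL image into attachments with the helper above:

from PIL import Image

from are.simulation.agents.multimodal import pil_to_attachments

image = Image.new("RGB", (32, 32), color="white")
attachments = pil_to_attachments([image])
print(attachments[0].mime)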

Action Executors

class are.simulation.agents.default_agent.tools.action_executor.BaseActionExecutor(use_custom_logger=True)[source]

Bases: object

state: dict[str, Any] = {}
action_token: str = ''
thought_token: str = ''
extract_action(llm_output, split_token)[source]
Return type:

AgentAction

execute_action(action, append_agent_log, make_timestamp, agent_id)[source]
update_tools(tools)[source]
inject_state(state)[source]
abstract execute_parsed_action(parsed_action, append_agent_log, make_timestamp, agent_id)[source]
abstract parse_action(action)[source]
Return type:

ParsedAction

class are.simulation.agents.default_agent.tools.json_action_executor.JsonActionExecutor(tools={}, use_custom_logger=True)[source]

Bases: BaseActionExecutor

execute_action(action, append_agent_log, make_timestamp, agent_id)[source]
parse_action(action)[source]
Return type:

ParsedAction

execute_parsed_action(parsed_action, append_agent_log, make_timestamp, agent_id)[source]
Return type:

None

execute_tool_call(parsed_action, append_agent_log, make_timestamp)[source]
Return type:

Any

update_tools(tools)[source]
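
A hedged sketch of wiring the JSON executor into an agent, reusing the custom_tools dictionary and llm_engine from the earlier examples:

from are.simulation.agents.default_agent.base_agent import BaseAgent
from are.simulation.agents.default_agent.tools.json_action_executor import JsonActionExecutor

executor = JsonActionExecutor(tools=custom_tools)
agent = BaseAgent(
    llm_engine=llm_engine,
    tools=custom_tools,
    action_executor=executor,
)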

Action Types

class are.simulation.agents.default_agent.tools.action_executor.AgentAction(rationale, action, action_type=None)[source]

Bases: object

rationale: str
action: str | None
action_type: str | None = None
class are.simulation.agents.default_agent.tools.action_executor.ParsedAction(action_code=None, action_name=None, tool_name=None, app_name=None, arguments=None, rationale=None)[source]

Bases: object

action_code: str | None = None
action_name: str | None = None
tool_name: str | None = None
app_name: str | None = None
arguments: str | dict[str, Any] | None = None
rationale: str | None = None
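
For illustration, a parsed call to the calculator tool from the earlier example could be represented as follows (a sketch; the executor normally constructs ParsedAction instances from the LLM output):

parsed = ParsedAction(
    tool_name="custom_calculator",
    arguments={"a": 2.0, "b": 3.0},
    rationale="The user asked for the sum of two numbers.",
)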