Agentic AI

AI Agents SDK

A C++ framework for building autonomous AI agents that run locally — no cloud dependency. An edge-native alternative to cloud-bound, Python-based agent frameworks, optimized for speed and memory efficiency.

Get started in a few lines

Create an LLM-powered agent, register tools, pick a reasoning strategy, and run.

// Pick a provider and model; the agent's reasoning runs through this LLM.
auto llm = createLLM("anthropic", "<api_key>",
    "claude-sonnet-4-20250514");

// The shared context holds the LLM plus any tools the agent may call.
auto context = std::make_shared<Context>();
context->setLLM(llm);
context->registerTool(tools::createWebSearchTool(llm));

// Choose a reasoning strategy and run.
AutonomousAgent agent(context);
agent.setPlanningStrategy(
    AutonomousAgent::PlanningStrategy::REACT);

JsonObject result = agent.run(
    "Research quantum computing breakthroughs");

Built-in Workflow Patterns

Production-ready patterns so you don't have to build agent orchestration from scratch.

Prompt Chaining

Sequence multiple LLM calls where each step's output feeds the next. Break complex reasoning into reliable, composable stages.
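The shape of the pattern can be sketched in plain C++. This is an illustration only, not the SDK's chaining API: `LLMStep` and `runChain` are hypothetical names, and the LLM calls are stood in for by plain functions.

```cpp
#include <functional>
#include <string>
#include <vector>

// Stand-in for one LLM call; in the SDK this would go through the context's LLM.
using LLMStep = std::function<std::string(const std::string&)>;

// Run each step in sequence, feeding the previous step's output into the next.
std::string runChain(const std::vector<LLMStep>& steps, std::string input) {
    for (const auto& step : steps) {
        input = step(input);
    }
    return input;
}
```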

Multi-Agent Routing

Route tasks to specialized agents based on intent classification. Each agent focuses on what it does best.
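The routing idea reduces to a dispatch table keyed by intent. A minimal sketch, with a toy keyword classifier standing in for the LLM-based intent classification (`classifyIntent`, `routeTask`, and `Agent` are hypothetical names, not SDK types):

```cpp
#include <functional>
#include <map>
#include <string>

using Agent = std::function<std::string(const std::string&)>;

// Toy classifier; a real router would ask the LLM to label the task's intent.
std::string classifyIntent(const std::string& task) {
    if (task.find("translate") != std::string::npos) return "translation";
    if (task.find("summarize") != std::string::npos) return "summary";
    return "general";
}

// Dispatch the task to the specialized agent registered for its intent,
// falling back to a general-purpose agent.
std::string routeTask(const std::map<std::string, Agent>& agents,
                      const std::string& task) {
    auto it = agents.find(classifyIntent(task));
    return it != agents.end() ? it->second(task) : agents.at("general")(task);
}
```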

Parallel Execution

Fan out independent sub-tasks across threads and merge results. Ideal for research, comparison, and aggregation workloads.
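The fan-out/merge step can be expressed directly with standard C++ futures. A sketch under the assumption that sub-tasks are independent and return strings (`SubTask` and `runParallel` are illustrative names, not the SDK's API):

```cpp
#include <functional>
#include <future>
#include <string>
#include <vector>

using SubTask = std::function<std::string()>;

// Launch every sub-task on its own thread, then collect results in task order.
std::vector<std::string> runParallel(const std::vector<SubTask>& tasks) {
    std::vector<std::future<std::string>> futures;
    futures.reserve(tasks.size());
    for (const auto& task : tasks)
        futures.push_back(std::async(std::launch::async, task));

    std::vector<std::string> results;
    results.reserve(futures.size());
    for (auto& f : futures)
        results.push_back(f.get());  // blocks until that sub-task finishes
    return results;
}
```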

Orchestrator-Worker

A coordinator agent decomposes goals into sub-tasks and delegates to worker agents, synthesizing their outputs into a final result.
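The decompose/delegate/synthesize flow, reduced to its skeleton. The decomposition here is hard-coded for illustration; a real coordinator would ask the LLM to plan the sub-tasks (`decompose`, `orchestrate`, and `Worker` are hypothetical names):

```cpp
#include <functional>
#include <string>
#include <vector>

using Worker = std::function<std::string(const std::string&)>;

// Toy decomposition; in practice the coordinator LLM plans these sub-tasks.
std::vector<std::string> decompose(const std::string& goal) {
    return {goal + ": research", goal + ": draft"};
}

// Delegate each sub-task to a worker and synthesize the outputs into one result.
std::string orchestrate(const std::string& goal, const Worker& worker) {
    std::string result;
    for (const auto& subTask : decompose(goal)) {
        if (!result.empty()) result += "; ";
        result += worker(subTask);
    }
    return result;
}
```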

Evaluator-Optimizer

Continuous feedback loops that evaluate agent output and refine it iteratively until quality thresholds are met.
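The loop itself is small: generate, score, and regenerate with feedback until the score clears a threshold or the iteration budget runs out. A sketch with hypothetical `Generate`/`Evaluate` callbacks standing in for the generator and evaluator agents:

```cpp
#include <functional>
#include <string>

using Generate = std::function<std::string(const std::string& feedback)>;
using Evaluate = std::function<double(const std::string& output)>;

// Keep refining until the evaluator's score meets the threshold, feeding the
// previous attempt back into the generator, with a hard iteration cap.
std::string refine(const Generate& generate, const Evaluate& evaluate,
                   double threshold, int maxIters) {
    std::string output = generate("");
    for (int i = 0; i < maxIters && evaluate(output) < threshold; ++i)
        output = generate(output);
    return output;
}
```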

SDK Capabilities

Multi-Provider LLM Support

OpenAI, Anthropic, Google, and local models via Ollama and llama.cpp. Switch providers without rewriting agent logic.

Reasoning Strategies

ReAct (Reason + Act) and Plan-Execute built in. Chain-of-Thought, Zero-Shot, and Reflexion coming soon.
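The ReAct loop alternates a model-proposed action with a runtime-executed observation, accumulating both in a scratchpad the model sees on the next turn. A minimal sketch of that shape; both callbacks are stand-ins, not the SDK's API:

```cpp
#include <functional>
#include <string>

// The model proposes the next action given the scratchpad so far.
using Propose = std::function<std::string(const std::string& scratchpad)>;
// The runtime executes the action (e.g. a tool call) and returns an observation.
using Execute = std::function<std::string(const std::string& action)>;

std::string reactLoop(const Propose& propose, const Execute& execute,
                      int maxTurns) {
    std::string scratchpad;
    for (int i = 0; i < maxTurns; ++i) {
        std::string action = propose(scratchpad);
        if (action == "FINISH") break;  // model decides it has enough
        scratchpad += "Action: " + action +
                      "\nObservation: " + execute(action) + "\n";
    }
    return scratchpad;
}
```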

Multi-Modal Agents

Process and reason across vision, audio, and text modalities for robust agentic capabilities.

Extensible Tool System

Built-in web search, Wikipedia, and Python code execution. Register custom tools through the tool registry.
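A tool registry is, at its core, a name-to-callable map. A minimal sketch of that shape, assuming string-in/string-out tools; the SDK's actual `Tool` interface and registration signature will differ:

```cpp
#include <functional>
#include <map>
#include <stdexcept>
#include <string>

// Illustrative tool shape: a callable looked up by name.
using Tool = std::function<std::string(const std::string&)>;

class ToolRegistry {
public:
    void registerTool(const std::string& name, Tool tool) {
        tools_[name] = std::move(tool);
    }
    // Invoke a registered tool; unknown names are an error the agent can report.
    std::string invoke(const std::string& name, const std::string& arg) const {
        auto it = tools_.find(name);
        if (it == tools_.end())
            throw std::runtime_error("unknown tool: " + name);
        return it->second(arg);
    }
private:
    std::map<std::string, Tool> tools_;
};
```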

Cross-Platform

Linux, macOS, and Windows. C++20 with Bazel build system. GCC 14+, Clang 17+, or MSVC 2022+.

Edge-Native Performance

C++ implementation optimized for speed and memory efficiency. No Python runtime overhead in the hot path.

Included Examples

Eight ready-to-run examples, from a basic agent to full multi-modal orchestration.

simple_agent: Basic autonomous agent with tool use
prompt_chain_example: Sequenced LLM calls with output chaining
routing_example: Intent-based task routing to specialized agents
parallel_example: Concurrent sub-task execution and result merging
orchestrator_example: Coordinator agent delegating to workers
evaluator_optimizer_example: Iterative output refinement with feedback loops
multimodal_example: Voice, audio, and image processing
autonomous_agent_example: Full-featured agent with all capabilities

Start building agents

Clone the repo, set your API key, and run your first agent in minutes.