5 changes: 5 additions & 0 deletions sdk/agentserver/TASK.md
@@ -4,6 +4,11 @@

## Done

- [x] 2026-02-06 — Add README files for Foundry checkpoint samples
  - Files: `azure-ai-agentserver-agentframework/samples/workflow_with_foundry_checkpoints/README.md`,
    `azure-ai-agentserver-langgraph/samples/simple_agent_with_foundry_checkpointer/README.md`
  - Updated setup/run/request docs, added the missing LangGraph sample README, and corrected `.env` setup guidance.

- [x] 2026-02-04 — Implement managed checkpoints feature
  - Files: core/checkpoints/ (new), agentframework/persistence/_foundry_checkpoint_*.py (new),
    agentframework/__init__.py (modified)
@@ -0,0 +1,89 @@
# Workflow Agent with Foundry Managed Checkpoints

This sample hosts a two-step Agent Framework workflow—`Writer` followed by `Reviewer`—and uses
`FoundryCheckpointRepository` to persist workflow checkpoints in Azure AI Foundry managed storage.

With Foundry managed checkpoints, workflow state is stored remotely so long-running conversations can
resume even after the host process restarts, without managing your own storage backend.

### What `main.py` does

- Builds a workflow with `WorkflowBuilder` (writer + reviewer)
- Creates a `FoundryCheckpointRepository` pointed at your Azure AI Foundry project
- Passes both to `from_agent_framework(..., checkpoint_repository=...)` so the adapter spins up an
  HTTP server (defaults to `0.0.0.0:8088`)

---

## Prerequisites

- Python 3.10+
- Azure CLI authenticated with `az login` (required for `AzureCliCredential`)
- An Azure AI Foundry project with a chat model deployment

---

## Setup

1. Create a `.env` file in this folder:
```env
AZURE_AI_PROJECT_ENDPOINT=https://<resource>.services.ai.azure.com/api/projects/<project-id>
AZURE_AI_MODEL_DEPLOYMENT_NAME=<model-deployment-name>
```

2. Install dependencies:
```bash
pip install azure-ai-agentserver-agentframework agent-framework-azure-ai azure-identity python-dotenv
```

---

## Run the Workflow Agent

From this folder:

```bash
python main.py
```

The adapter starts the server on `http://0.0.0.0:8088` by default.

---

## Send Requests

**Non-streaming:**

```bash
curl -sS \
  -H "Content-Type: application/json" \
  -X POST http://localhost:8088/responses \
  -d '{
    "agent": {"name": "local_agent", "type": "agent_reference"},
    "stream": false,
    "input": "Write a short blog post about cloud-native AI applications",
    "conversation": {"id": "test-conversation-1"}
  }'
```

The `conversation.id` ties requests to the same checkpoint session. Subsequent requests with the same
ID will resume the workflow from its last checkpoint.
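The same request shape can be sketched in Python; the `build_request` helper below is hypothetical (not part of the SDK), and the field names are copied from the curl example above:

```python
import json


def build_request(conversation_id: str, text: str) -> str:
    """Build a /responses payload; reusing conversation_id resumes the same checkpoint session."""
    return json.dumps({
        "agent": {"name": "local_agent", "type": "agent_reference"},
        "stream": False,
        "input": text,
        "conversation": {"id": conversation_id},
    })


# Both requests carry the same conversation id, so the second one resumes
# the workflow from its last checkpoint rather than starting over.
first = build_request("test-conversation-1", "Write a short blog post about cloud-native AI applications")
follow_up = build_request("test-conversation-1", "Now shorten it to two paragraphs")
```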

---

## Checkpoint Repository Options

The `checkpoint_repository` parameter in `from_agent_framework` accepts any `CheckpointRepository` implementation:

| Repository | Use case |
|---|---|
| `InMemoryCheckpointRepository()` | Quick demos; checkpoints vanish when the process exits |
| `FileCheckpointRepository("<path>")` | Local file-based persistence |
| `FoundryCheckpointRepository(project_endpoint, credential)` | Azure AI Foundry managed remote storage (this sample) |
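All three options share a simple persistence contract: save workflow state under a conversation key and load it back later, possibly from another process. A toy illustration of that idea follows; the class and method names are hypothetical, not the SDK's actual interface:

```python
import json
from pathlib import Path
from typing import Optional


class ToyFileCheckpointRepository:
    """Illustrative only: a minimal save/load store keyed by conversation id.

    This sketches the contract the table above implies; the real
    CheckpointRepository interface in the SDK may differ.
    """

    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def save(self, conversation_id: str, state: dict) -> None:
        # One JSON file per conversation; the latest write wins.
        (self.root / f"{conversation_id}.json").write_text(json.dumps(state))

    def load(self, conversation_id: str) -> Optional[dict]:
        # Returns None when no checkpoint exists yet for this conversation.
        path = self.root / f"{conversation_id}.json"
        return json.loads(path.read_text()) if path.exists() else None
```

Because the state lives outside the process, a second instance created after a restart still finds it; the Foundry repository offers the same property with remote managed storage instead of local files.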

---

## Related Resources

- Agent Framework repo: https://github.com/microsoft/agent-framework
- Adapter package docs: `azure.ai.agentserver.agentframework` in this SDK
@@ -0,0 +1,84 @@
# Copyright (c) Microsoft. All rights reserved.

"""
Workflow Agent with Foundry Managed Checkpoints

This sample demonstrates how to use FoundryCheckpointRepository with
a WorkflowBuilder agent to persist workflow checkpoints in Azure AI Foundry.

Foundry managed checkpoints enable workflow state to be persisted across
requests, allowing workflows to be paused, resumed, and replayed.

Prerequisites:
- Set AZURE_AI_PROJECT_ENDPOINT to your Azure AI Foundry project endpoint
e.g. "https://<resource>.services.ai.azure.com/api/projects/<project-id>"
- Azure credentials configured (e.g. az login)
"""

import asyncio
import os

from dotenv import load_dotenv

from agent_framework import ChatAgent, WorkflowBuilder
from agent_framework.azure import AzureAIAgentClient
from azure.identity.aio import AzureCliCredential

from azure.ai.agentserver.agentframework import from_agent_framework
from azure.ai.agentserver.agentframework.persistence import FoundryCheckpointRepository

load_dotenv()


def create_writer_agent(client: AzureAIAgentClient) -> ChatAgent:
    """Create a writer agent that generates content."""
    return client.create_agent(
        name="Writer",
        instructions=(
            "You are an excellent content writer. "
            "You create new content and edit it based on feedback."
        ),
    )


def create_reviewer_agent(client: AzureAIAgentClient) -> ChatAgent:
    """Create a reviewer agent that provides feedback."""
    return client.create_agent(
        name="Reviewer",
        instructions=(
            "You are an excellent content reviewer. "
            "Provide actionable feedback to the writer about the provided content. "
            "Keep the feedback as concise as possible."
        ),
    )


async def main() -> None:
    """Run the workflow agent with Foundry managed checkpoints."""
    project_endpoint = os.getenv("AZURE_AI_PROJECT_ENDPOINT", "")
    if not project_endpoint:
        raise RuntimeError("Set AZURE_AI_PROJECT_ENDPOINT (see the README) before running this sample.")

    async with AzureCliCredential() as cred, AzureAIAgentClient(credential=cred) as client:
        builder = (
            WorkflowBuilder()
            .register_agent(lambda: create_writer_agent(client), name="writer")
            .register_agent(lambda: create_reviewer_agent(client), name="reviewer", output_response=True)
            .set_start_executor("writer")
            .add_edge("writer", "reviewer")
        )

        # Use FoundryCheckpointRepository for Azure AI Foundry managed storage.
        # This persists workflow checkpoints remotely, enabling pause/resume
        # across requests and server restarts.
        checkpoint_repository = FoundryCheckpointRepository(
            project_endpoint=project_endpoint,
            credential=cred,
        )

        await from_agent_framework(
            builder,
            checkpoint_repository=checkpoint_repository,
        ).run_async()


if __name__ == "__main__":
    asyncio.run(main())
@@ -0,0 +1,80 @@
# Simple LangGraph Agent with Foundry Managed Checkpointer

This sample hosts a LangGraph ReAct-style agent and uses `FoundryCheckpointSaver` to persist
checkpoints in Azure AI Foundry managed storage.

With Foundry managed checkpoints, graph state is stored remotely so conversations can resume across
requests and server restarts without self-managed storage.

### What `main.py` does

- Creates an `AzureChatOpenAI` model and two tools (`get_word_length`, `calculator`)
- Creates a `FoundryCheckpointSaver(project_endpoint, credential)`
- Builds a LangGraph agent with `create_react_agent(..., checkpointer=saver)`
- Runs the server via `from_langgraph(...).run_async()`
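The sample's `calculator` tool evaluates expressions with Python's `eval`, which keeps the demo short but is unsafe on untrusted input. A sketch of a safer alternative follows; `safe_eval` is a hypothetical helper, not part of the sample, and it only permits arithmetic AST nodes:

```python
import ast
import operator

# Map permitted binary-operator AST nodes to their implementations.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}


def safe_eval(expression: str) -> float:
    """Evaluate a purely arithmetic expression by walking its AST.

    Anything other than numbers, + - * /, parentheses, and unary minus
    raises ValueError instead of executing arbitrary code.
    """
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError(f"Unsupported expression: {expression!r}")

    return walk(ast.parse(expression, mode="eval"))
```

For example, `safe_eval("(15 * 4) + 6")` handles the arithmetic used in the request example later in this README, while a call like `safe_eval("__import__('os')")` raises `ValueError` rather than running code.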

---

## Prerequisites

- Python 3.10+
- Azure CLI authenticated with `az login` (required for `AzureCliCredential`)
- An Azure AI Foundry project endpoint
- An Azure OpenAI chat deployment (for example `gpt-4o`)

---

## Setup

1. Create a `.env` file in this folder:
```env
AZURE_AI_PROJECT_ENDPOINT=https://<resource>.services.ai.azure.com/api/projects/<project-id>
AZURE_OPENAI_ENDPOINT=https://<resource>.openai.azure.com/
AZURE_OPENAI_API_KEY=<api-key>
OPENAI_API_VERSION=2025-03-01-preview
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME=gpt-4o
```

2. Install dependencies:
```bash
pip install azure-ai-agentserver-langgraph python-dotenv azure-identity langgraph
```

---

## Run the Agent

From this folder:

```bash
python main.py
```

The adapter starts the server on `http://0.0.0.0:8088` by default.

---

## Send Requests

Non-streaming example:

```bash
curl -sS \
  -H "Content-Type: application/json" \
  -X POST http://localhost:8088/responses \
  -d '{
    "agent": {"name": "local_agent", "type": "agent_reference"},
    "stream": false,
    "input": "What is (15 * 4) + 6?",
    "conversation": {"id": "test-conversation-1"}
  }'
```

Use the same `conversation.id` on follow-up requests to continue the checkpointed conversation state.
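A LangGraph checkpointer natively scopes saved state by a `thread_id` in the run config; it is assumed here that the adapter derives that id from `conversation.id`. In plain LangGraph terms, the equivalent configuration looks like:

```python
# Hypothetical mapping: the adapter is assumed to translate conversation.id
# into LangGraph's thread id. With a compiled graph that has a checkpointer
# attached, you would resume a conversation like this:
config = {"configurable": {"thread_id": "test-conversation-1"}}
# graph.invoke({"messages": [("user", "And what is that times 2?")]}, config)
```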

---

## Related Resources

- LangGraph docs: https://langchain-ai.github.io/langgraph/
- Adapter package docs: `azure.ai.agentserver.langgraph` in this SDK
@@ -0,0 +1,82 @@
# Copyright (c) Microsoft. All rights reserved.

"""
Simple Agent with Foundry Managed Checkpointer

This sample demonstrates how to use FoundryCheckpointSaver with a LangGraph
agent to persist checkpoints in Azure AI Foundry.

Foundry managed checkpoints enable graph state to be persisted across
requests, allowing conversations to be paused, resumed, and replayed.

Prerequisites:
- Set AZURE_AI_PROJECT_ENDPOINT to your Azure AI Foundry project endpoint
e.g. "https://<resource>.services.ai.azure.com/api/projects/<project-id>"
- Set AZURE_OPENAI_CHAT_DEPLOYMENT_NAME (defaults to "gpt-4o")
- Azure credentials configured (e.g. az login)
"""

import asyncio
import os

from dotenv import load_dotenv
from langchain_core.tools import tool
from langchain_openai import AzureChatOpenAI
from azure.identity.aio import AzureCliCredential

from azure.ai.agentserver.langgraph import from_langgraph
from azure.ai.agentserver.langgraph.checkpointer import FoundryCheckpointSaver

load_dotenv()

deployment_name = os.getenv("AZURE_OPENAI_CHAT_DEPLOYMENT_NAME", "gpt-4o")
model = AzureChatOpenAI(model=deployment_name)


@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)


@tool
def calculator(expression: str) -> str:
    """Evaluates a mathematical expression."""
    try:
        # Demo only: eval() on untrusted input is unsafe; restrict or replace
        # it in production code.
        result = eval(expression)  # noqa: S307
        return str(result)
    except Exception as e:
        return f"Error: {e}"


tools = [get_word_length, calculator]


def create_agent(checkpointer):
    """Create a ReAct agent with the given checkpointer."""
    from langgraph.prebuilt import create_react_agent

    return create_react_agent(model, tools, checkpointer=checkpointer)


async def main() -> None:
    """Run the agent with Foundry managed checkpoints."""
    project_endpoint = os.getenv("AZURE_AI_PROJECT_ENDPOINT", "")
    if not project_endpoint:
        raise RuntimeError("Set AZURE_AI_PROJECT_ENDPOINT (see the README) before running this sample.")

    async with AzureCliCredential() as cred:
        # Use FoundryCheckpointSaver for Azure AI Foundry managed storage.
        # This persists graph checkpoints remotely, enabling pause/resume
        # across requests and server restarts.
        saver = FoundryCheckpointSaver(
            project_endpoint=project_endpoint,
            credential=cred,
        )

        # Pass the checkpointer via LangGraph's native compile/create API.
        executor = create_agent(checkpointer=saver)

        await from_langgraph(executor).run_async()


if __name__ == "__main__":
    asyncio.run(main())