Azure AI Foundry Agent

In AutoGen, you can build and deploy agents backed by the Azure AI Foundry Agent Service using the AzureAIAgent class. Important aspects of the agent, including the provisioned model, tools (e.g., code interpreter, Bing search grounding, file search), observability, and security, are managed by Azure. This allows you to focus on building your agent without worrying about the underlying infrastructure.

In this guide, we will walk through an example of creating an Azure AI Foundry Agent with the AzureAIAgent class that can address tasks using the Grounding with Bing Search tool.

# pip install "autogen-ext[azure]"  # For Azure AI Foundry Agent Service

Bing Search Grounding

An AzureAIAgent can be assigned a set of tools including Grounding with Bing Search.

Grounding with Bing Search allows your Azure AI Agents to incorporate real-time public web data when generating responses. You first create a Grounding with Bing Search resource and connect it to your Azure AI Agents. When a user sends a query, the agent decides whether Grounding with Bing Search should be used. If so, it searches public web data with Bing, retrieves relevant chunks, and uses those chunks to generate the response.

Prerequisites

  • You need to have an Azure subscription.

  • You need to have the Azure CLI installed and configured, and to be logged in with az login so that DefaultAzureCredential can pick up your credentials (see the quick check after this list).

  • You need to have the autogen-ext[azure] package installed.
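
Before running the example, you can quickly verify that your local credentials work. The snippet below is a minimal sketch: it assumes you have already run az login, and simply asks DefaultAzureCredential for a token against the Azure Resource Manager scope.

from azure.identity import DefaultAzureCredential

# Quick sanity check: request a token for the Azure Resource Manager scope.
# If this raises an authentication error, run `az login` and try again.
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")
print("Credential OK, token expires at:", token.expires_on)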

You can create a Grounding with Bing Search resource in the Azure portal. Note that you will need the Owner or Contributor role on your subscription or resource group to create it. Once you have created the resource and connected it to your Azure AI Foundry project, you can reference it from the agent by its connection name, as shown in the example below.

In the following example, we will create a new Azure AI Foundry Agent that uses the Grounding with Bing Search resource.

import os

import dotenv
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.agents.azure import AzureAIAgent
from azure.ai.agents.models import BingGroundingTool
from azure.ai.projects.aio import AIProjectClient
from azure.identity.aio import DefaultAzureCredential

dotenv.load_dotenv()


async def bing_example() -> None:
    async with DefaultAzureCredential() as credential:  # type: ignore
        async with AIProjectClient(  # type: ignore
            credential=credential, endpoint=os.getenv("AZURE_PROJECT_ENDPOINT", "")
        ) as project_client:
            conn = await project_client.connections.get(name=os.getenv("BING_CONNECTION_NAME", ""))

            bing_tool = BingGroundingTool(conn.id)
            agent_with_bing_grounding = AzureAIAgent(
                name="bing_agent",
                description="An AI assistant with Bing grounding",
                project_client=project_client,
                deployment_name="gpt-4o",
                instructions="You are a helpful assistant.",
                tools=bing_tool.definitions,
                metadata={"source": "AzureAIAgent"},
            )

            # For the Bing grounding tool to return citations, the message must instruct the model to do so.
            # For example: "Please provide citations for the answers."

            result = await agent_with_bing_grounding.on_messages(
                messages=[
                    TextMessage(
                        content="What is Microsoft's annual leave policy? Provide citations for your answers.",
                        source="user",
                    )
                ],
                cancellation_token=CancellationToken(),
                message_limit=5,
            )
            print(result)


await bing_example()
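
The top-level await above works inside a notebook. If you are running the example as a standalone Python script, run the coroutine with asyncio.run instead, for example:

import asyncio

if __name__ == "__main__":
    asyncio.run(bing_example())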

Note that you can also provide other Azure-backed tools (such as the code interpreter or file search) and local client-side functions to the agent, as sketched below.
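
As a minimal sketch of a local function tool, the example below assumes that the tools parameter also accepts plain Python callables (check the AzureAIAgent API documentation to confirm the exact signature). The fetch_weather function is a hypothetical placeholder, and the imports from the Bing example above are reused.

async def local_tool_example() -> None:
    async def fetch_weather(city: str) -> str:
        """Hypothetical local tool: return a canned weather report for a city."""
        return f"The weather in {city} is 22 degrees Celsius and sunny."

    async with DefaultAzureCredential() as credential:  # type: ignore
        async with AIProjectClient(  # type: ignore
            credential=credential, endpoint=os.getenv("AZURE_PROJECT_ENDPOINT", "")
        ) as project_client:
            agent_with_local_tool = AzureAIAgent(
                name="weather_agent",
                description="An AI assistant with a local weather tool",
                project_client=project_client,
                deployment_name="gpt-4o",
                instructions="You are a helpful assistant.",
                tools=[fetch_weather],  # Assumption: callables are wrapped as function tools.
                metadata={"source": "AzureAIAgent"},
            )

            result = await agent_with_local_tool.on_messages(
                messages=[TextMessage(content="What is the weather in Seattle?", source="user")],
                cancellation_token=CancellationToken(),
                message_limit=5,
            )
            print(result)


await local_tool_example()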

See the AzureAIAgent class API documentation for more details on how to create an Azure AI Foundry Agent.