3 changes: 3 additions & 0 deletions agent-samples/README.md
@@ -0,0 +1,3 @@
# Declarative Agents

This folder contains sample agent definitions that can be run using the [Declarative Agents](../dotnet/samples/GettingStarted/DeclarativeAgents) demo.
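The samples below are plain YAML files, so they can be inspected with any YAML loader before being handed to the demo. A minimal sketch, assuming PyYAML is available (the actual loader API used by the demo may differ):

```python
# Minimal sketch: parse a declarative agent definition with PyYAML and
# inspect a few fields. The inlined document mirrors the samples in this
# folder; the real demo's loading mechanism may differ.
import yaml

definition = yaml.safe_load("""
kind: Prompt
name: Assistant
description: Helpful assistant
instructions: You are a helpful assistant.
model:
  options:
    temperature: 0.9
    topP: 0.95
""")

assert definition["kind"] == "Prompt"
print(definition["name"], definition["model"]["options"]["temperature"])
```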
25 changes: 25 additions & 0 deletions agent-samples/azure/AzureOpenAI.yaml
@@ -0,0 +1,25 @@
kind: Prompt
name: Assistant
description: Helpful assistant
instructions: You are a helpful assistant. You answer questions in the language specified by the user. You return your answers in JSON format. You must include Chat as the type in your response.
model:
id: =Env.AZURE_OPENAI_DEPLOYMENT_NAME
provider: AzureOpenAI
apiType: Chat
options:
temperature: 0.9
topP: 0.95
outputSchema:
properties:
language:
kind: string
required: true
description: The language of the answer.
answer:
kind: string
required: true
description: The answer text.
type:
kind: string
required: true
description: The type of the response.
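The `outputSchema` above declares three required string properties, so a conforming model response is a flat JSON object with those keys. A quick illustrative check (values are made up):

```python
import json

# A response shaped like the declared outputSchema (illustrative values).
response = json.loads('{"language": "English", "answer": "Paris", "type": "Chat"}')

# Every property in the schema is marked required; verify they are present.
for key in ("language", "answer", "type"):
    assert key in response, f"missing required property: {key}"
print("schema-shaped response:", response)
```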
25 changes: 25 additions & 0 deletions agent-samples/azure/AzureOpenAIAssistants.yaml
@@ -0,0 +1,25 @@
kind: Prompt
name: Assistant
description: Helpful assistant
instructions: You are a helpful assistant. You answer questions in the language specified by the user. You return your answers in JSON format. You must include Assistants as the type in your response.
model:
id: =Env.AZURE_OPENAI_DEPLOYMENT_NAME
provider: AzureOpenAI
apiType: Assistants
options:
temperature: 0.9
topP: 0.95
outputSchema:
properties:
language:
kind: string
required: true
description: The language of the answer.
answer:
kind: string
required: true
description: The answer text.
type:
kind: string
required: true
description: The type of the response.
28 changes: 28 additions & 0 deletions agent-samples/azure/AzureOpenAIResponses.yaml
@@ -0,0 +1,28 @@
kind: Prompt
name: Assistant
description: Helpful assistant
instructions: You are a helpful assistant. You answer questions in the language specified by the user. You return your answers in JSON format. You must include Responses as the type in your response.
model:
id: =Env.AZURE_OPENAI_DEPLOYMENT_NAME
provider: AzureOpenAI
apiType: Responses
options:
text:
verbosity: medium
connection:
kind: remote
endpoint: =Env.AZURE_OPENAI_ENDPOINT
outputSchema:
properties:
language:
kind: string
required: true
description: The language of the answer.
answer:
kind: string
required: true
description: The answer text.
type:
kind: string
required: true
description: The type of the response.
18 changes: 18 additions & 0 deletions agent-samples/chatclient/Assistant.yaml
@@ -0,0 +1,18 @@
kind: Prompt
name: Assistant
description: Helpful assistant
instructions: You are a helpful assistant. You answer questions in the language specified by the user. You return your answers in JSON format.
model:
options:
temperature: 0.9
topP: 0.95
outputSchema:
properties:
language:
kind: string
required: true
description: The language of the answer.
answer:
kind: string
required: true
description: The answer text.
26 changes: 26 additions & 0 deletions agent-samples/chatclient/GetWeather.yaml
@@ -0,0 +1,26 @@
kind: Prompt
name: Assistant
description: Helpful assistant
instructions: You are a helpful assistant. You answer questions using the tools provided.
model:
options:
allowMultipleToolCalls: true
chatToolMode: auto
tools:
- kind: function
name: GetWeather
description: Get the weather for a given location.
bindings:
get_weather: get_weather
parameters:
location:
kind: string
description: The city and state, e.g. San Francisco, CA
required: true
unit:
kind: string
description: The unit of temperature. Possible values are 'celsius' and 'fahrenheit'.
required: false
enum:
- celsius
- fahrenheit
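The `bindings` entry above maps the declared `GetWeather` tool to a host function named `get_weather`. A hypothetical Python implementation whose signature mirrors the declared parameters might look like this (the real demo's binding mechanism and return format may differ):

```python
# Hypothetical implementation of the function bound as `get_weather`.
# Parameter names, types, and the enum constraint follow GetWeather.yaml;
# the weather data itself is stubbed for illustration.
def get_weather(location: str, unit: str = "celsius") -> str:
    """Return a stubbed weather report for the given location."""
    if unit not in ("celsius", "fahrenheit"):
        raise ValueError(f"unsupported unit: {unit}")
    temperature = 21 if unit == "celsius" else 70
    return f"The weather in {location} is sunny, {temperature} degrees {unit}."

print(get_weather("San Francisco, CA"))
```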
21 changes: 21 additions & 0 deletions agent-samples/foundry/MicrosoftLearnAgent.yaml
@@ -0,0 +1,21 @@
kind: Prompt
name: MicrosoftLearnAgent
description: Microsoft Learn Agent
instructions: You answer questions by searching the Microsoft Learn content only.
model:
id: =Env.AZURE_FOUNDRY_PROJECT_MODEL_ID
options:
temperature: 0.9
topP: 0.95
connection:
kind: remote
endpoint: =Env.AZURE_FOUNDRY_PROJECT_ENDPOINT
tools:
- kind: mcp
name: microsoft_learn
description: Get information from Microsoft Learn.
url: https://learn.microsoft.com/api/mcp
approvalMode:
kind: never
allowedTools:
- microsoft_docs_search
22 changes: 22 additions & 0 deletions agent-samples/foundry/PersistentAgent.yaml
@@ -0,0 +1,22 @@
kind: Prompt
name: Assistant
description: Helpful assistant
instructions: You are a helpful assistant. You answer questions in the language specified by the user. You return your answers in JSON format.
model:
id: =Env.AZURE_FOUNDRY_PROJECT_MODEL_ID
options:
temperature: 0.9
topP: 0.95
connection:
kind: remote
endpoint: =Env.AZURE_FOUNDRY_PROJECT_ENDPOINT
outputSchema:
properties:
language:
kind: string
required: true
description: The language of the answer.
answer:
kind: string
required: true
description: The answer text.
28 changes: 28 additions & 0 deletions agent-samples/openai/OpenAI.yaml
@@ -0,0 +1,28 @@
kind: Prompt
name: Assistant
description: Helpful assistant
instructions: You are a helpful assistant. You answer questions in the language specified by the user. You return your answers in JSON format. You must include Chat as the type in your response.
model:
id: =Env.OPENAI_MODEL
provider: OpenAI
apiType: Chat
options:
temperature: 0.9
topP: 0.95
connection:
kind: key
key: =Env.OPENAI_API_KEY
outputSchema:
properties:
language:
kind: string
required: true
description: The language of the answer.
answer:
kind: string
required: true
description: The answer text.
type:
kind: string
required: true
description: The type of the response.
30 changes: 30 additions & 0 deletions agent-samples/openai/OpenAIAssistants.yaml
@@ -0,0 +1,30 @@
kind: Prompt
name: Assistant
description: Helpful assistant
instructions: You are a helpful assistant. You answer questions in the language specified by the user. You return your answers in JSON format. You must include Assistants as the type in your response.
model:
id: =Env.OPENAI_MODEL
provider: OpenAI
apiType: Assistants
options:
temperature: 0.9
topP: 0.95
connection:
kind: key
key: =Env.OPENAI_API_KEY
outputSchema:
name: AssistantResponse
description: The response from the assistant.
properties:
language:
kind: string
required: true
description: The language of the answer.
answer:
kind: string
required: true
description: The answer text.
type:
kind: string
required: true
description: The type of the response.
28 changes: 28 additions & 0 deletions agent-samples/openai/OpenAIResponses.yaml
@@ -0,0 +1,28 @@
kind: Prompt
name: Assistant
description: Helpful assistant
instructions: You are a helpful assistant. You answer questions in the language specified by the user. You return your answers in JSON format. You must include Responses as the type in your response.
model:
id: =Env.OPENAI_MODEL
provider: OpenAI
apiType: Responses
options:
text:
verbosity: medium
connection:
kind: key
key: =Env.OPENAI_API_KEY
outputSchema:
properties:
language:
kind: string
required: true
description: The language of the answer.
answer:
kind: string
required: true
description: The answer text.
type:
kind: string
required: true
description: The type of the response.
2 changes: 2 additions & 0 deletions python/.cspell.json
@@ -45,6 +45,7 @@
"logit",
"logprobs",
"lowlevel",
"maml",

Review comment: Do not use this term; it is for internal use only. Use the generic term "declarative agents" instead.

"Magentic",
"mistralai",
"mongocluster",
@@ -59,6 +60,7 @@
"OPENAI",
"opentelemetry",
"OTEL",
"powerfx",
"protos",
"pydantic",
"pytestmark",
12 changes: 11 additions & 1 deletion python/packages/core/agent_framework/_agents.py
@@ -587,9 +587,11 @@ def __init__(
name: str | None = None,
description: str | None = None,
chat_message_store_factory: Callable[[], ChatMessageStoreProtocol] | None = None,
conversation_id: str | None = None,
context_providers: ContextProvider | list[ContextProvider] | AggregateContextProvider | None = None,
middleware: Middleware | list[Middleware] | None = None,
# chat options
allow_multiple_tool_calls: bool | None = None,
conversation_id: str | None = None,

Review comment: Consider separating this into its own PR and getting it into main straight away.

frequency_penalty: float | None = None,
logit_bias: dict[str | int, float] | None = None,
max_tokens: int | None = None,
@@ -634,6 +636,7 @@ def __init__(
Cannot be used together with chat_message_store_factory.
context_providers: The collection of multiple context providers to include during agent invocation.
middleware: List of middleware to intercept agent and function invocations.
allow_multiple_tool_calls: Whether to allow multiple tool calls in a single response.
frequency_penalty: The frequency penalty to use.
logit_bias: The logit bias to use.
max_tokens: The maximum number of tokens to generate.
@@ -688,6 +691,7 @@ def __init__(
agent_tools = [tool for tool in normalized_tools if not isinstance(tool, MCPTool)]
self.chat_options = ChatOptions(
model_id=model_id,
allow_multiple_tool_calls=allow_multiple_tool_calls,
conversation_id=conversation_id,
frequency_penalty=frequency_penalty,
instructions=instructions,
@@ -758,6 +762,7 @@ async def run(
messages: str | ChatMessage | list[str] | list[ChatMessage] | None = None,
*,
thread: AgentThread | None = None,
allow_multiple_tool_calls: bool | None = None,
frequency_penalty: float | None = None,
logit_bias: dict[str | int, float] | None = None,
max_tokens: int | None = None,
@@ -793,6 +798,7 @@

Keyword Args:
thread: The thread to use for the agent.
allow_multiple_tool_calls: Whether to allow multiple tool calls in a single response.
frequency_penalty: The frequency penalty to use.
logit_bias: The logit bias to use.
max_tokens: The maximum number of tokens to generate.
@@ -843,6 +849,7 @@

co = run_chat_options & ChatOptions(
model_id=model_id,
allow_multiple_tool_calls=allow_multiple_tool_calls,
conversation_id=thread.service_thread_id,
frequency_penalty=frequency_penalty,
logit_bias=logit_bias,
@@ -887,6 +894,7 @@ async def run_stream(
messages: str | ChatMessage | list[str] | list[ChatMessage] | None = None,
*,
thread: AgentThread | None = None,
allow_multiple_tool_calls: bool | None = None,
frequency_penalty: float | None = None,
logit_bias: dict[str | int, float] | None = None,
max_tokens: int | None = None,
@@ -922,6 +930,7 @@ async def run_stream(

Keyword Args:
thread: The thread to use for the agent.
allow_multiple_tool_calls: Whether to allow multiple tool calls in a single response.
frequency_penalty: The frequency penalty to use.
logit_bias: The logit bias to use.
max_tokens: The maximum number of tokens to generate.
@@ -971,6 +980,7 @@

co = run_chat_options & ChatOptions(
conversation_id=thread.service_thread_id,
allow_multiple_tool_calls=allow_multiple_tool_calls,
frequency_penalty=frequency_penalty,
logit_bias=logit_bias,
max_tokens=max_tokens,
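The diff above threads the new `allow_multiple_tool_calls` option through `run` and `run_stream` via `run_chat_options & ChatOptions(...)`, where `None` means "not set". A simplified stand-in illustrating that merge pattern (which side wins on conflict is an assumption here; the real `ChatOptions` class is richer and may merge differently):

```python
from __future__ import annotations

from dataclasses import dataclass

# Simplified stand-in for ChatOptions, illustrating the None-means-unset
# merge behind `run_chat_options & ChatOptions(...)`. Not the real class.
@dataclass
class Options:
    allow_multiple_tool_calls: bool | None = None
    temperature: float | None = None

    def __and__(self, other: Options) -> Options:
        # A field left as None on `other` keeps the base value;
        # an explicitly set field overrides it.
        return Options(
            allow_multiple_tool_calls=(
                other.allow_multiple_tool_calls
                if other.allow_multiple_tool_calls is not None
                else self.allow_multiple_tool_calls
            ),
            temperature=(
                other.temperature
                if other.temperature is not None
                else self.temperature
            ),
        )

base = Options(temperature=0.9)
merged = base & Options(allow_multiple_tool_calls=True)
print(merged.allow_multiple_tool_calls, merged.temperature)
```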