Note
Nightly build is available at:
First, follow the installation guide to install the AutoGen packages.
Then you can use the following code snippet to create a conversable agent and chat with it:
```csharp
using AutoGen;
using AutoGen.OpenAI;

var openAIKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY") ?? throw new Exception("Please set OPENAI_API_KEY environment variable.");
var gpt35Config = new OpenAIConfig(openAIKey, "gpt-3.5-turbo");

var assistantAgent = new AssistantAgent(
    name: "assistant",
    systemMessage: "You are an assistant that help user to do some tasks.",
    llmConfig: new ConversableAgentConfig
    {
        Temperature = 0,
        ConfigList = [gpt35Config],
    })
    .RegisterPrintMessage(); // register a hook to print message nicely to console

// set human input mode to ALWAYS so that user always provides input
var userProxyAgent = new UserProxyAgent(
    name: "user",
    humanInputMode: ConversableAgent.HumanInputMode.ALWAYS)
    .RegisterPrintMessage();

// start the conversation
await userProxyAgent.InitiateChatAsync(
    receiver: assistantAgent,
    message: "Hey assistant, please do me a favor.",
    maxRound: 10);
```
You can find more examples under the sample project.
- `ConversableAgent`
  - function call
  - code execution (dotnet only, powered by `dotnet-interactive`)
- Agent communication
  - Two-agent chat
  - Group chat
- Enhanced LLM Inferences
- Exclusive for dotnet
  - Source generator for type-safe function definition generation
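The source generator turns an ordinary C# method into a typed function contract at compile time. As a rough, self-contained illustration of the same idea, the sketch below derives a contract from a delegate's signature at runtime via reflection. The `FunctionContract` record and `ContractBuilder` here are hypothetical illustration types, not the code AutoGen generates.

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical contract type for illustration; not the type AutoGen generates.
public record FunctionContract(string Name, (string Name, string Type)[] Parameters, string ReturnType);

public static class ContractBuilder
{
    // Derive a contract description from any delegate's signature via reflection.
    // A source generator does the equivalent work at compile time, with no reflection cost
    // and with type errors surfacing at build time instead of runtime.
    public static FunctionContract FromDelegate(Delegate d)
    {
        MethodInfo m = d.Method;
        var parameters = m.GetParameters()
            .Select(p => (p.Name ?? "arg", p.ParameterType.Name))
            .ToArray();
        return new FunctionContract(m.Name, parameters, m.ReturnType.Name);
    }
}
```

Because the signature is inspected rather than hand-written into a schema, renaming a parameter or changing a type cannot drift out of sync with the contract.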
- Add link to Discord channel in nuget's readme.md
- Document improvements
- Rename `Workflow` to `Graph`
- Rename `AddInitializeMessage` to `SendIntroduction`
- Rename `SequentialGroupChat` to `RoundRobinGroupChat`
- Refactor over `@AutoGen.Message` and introduce `TextMessage`, `ImageMessage`, `MultiModalMessage` and so on. PR #1676
- Add `AutoGen.SemanticKernel` to support seamless integration with Semantic Kernel
- Move the agent contract abstraction to the `AutoGen.Core` package. The `AutoGen.Core` package provides the abstraction for message types, agents and group chat, and doesn't contain dependencies on `Azure.AI.OpenAI` or Semantic Kernel. This is useful when you want to leverage only AutoGen's abstraction and avoid introducing any other dependencies.
- Move `GPTAgent`, `OpenAIChatAgent` and all openai-dependencies to `AutoGen.OpenAI`
- Fix #1804
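Given the package split described above, a project that only needs the abstractions can reference `AutoGen.Core` alone. A minimal project-file fragment, assuming the package names from the list above; the floating `Version="*"` is a placeholder, pin a concrete version in practice:

```xml
<ItemGroup>
  <!-- Abstractions only (message types, agents, group chat); no Azure.AI.OpenAI or Semantic Kernel dependency. -->
  <PackageReference Include="AutoGen.Core" Version="*" />
</ItemGroup>
```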
- Streaming support for `IAgent` #1656
- Streaming support for middleware via `MiddlewareStreamingAgent` #1656
- Graph chat support with conditional transition workflow #1761
- AutoGen.SourceGenerator: generate `FunctionContract` from `FunctionAttribute` #1736
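Streaming replies like those above are conventionally modeled in .NET as an `IAsyncEnumerable<T>` of partial messages, so the consumer can render tokens as they arrive instead of waiting for the final reply. A minimal self-contained sketch of that pattern; the `IStreamingAgent` interface and `EchoStreamingAgent` here are hypothetical illustrations, not AutoGen's types:

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;

// Hypothetical streaming interface for illustration only.
public interface IStreamingAgent
{
    IAsyncEnumerable<string> GenerateStreamingReplyAsync(string prompt);
}

public class EchoStreamingAgent : IStreamingAgent
{
    // Yields the reply chunk-by-chunk instead of returning one final string.
    public async IAsyncEnumerable<string> GenerateStreamingReplyAsync(string prompt)
    {
        foreach (var token in prompt.Split(' '))
        {
            await Task.Yield(); // simulate asynchronous token arrival
            yield return token;
        }
    }
}
```

A consumer iterates with `await foreach` and appends each chunk to the output as it arrives, which is also the natural seam for streaming-aware middleware to observe or transform chunks in flight.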
- Add `AutoGen.LMStudio` to support consuming openai-like APIs from a LMStudio local server
- Add `MiddlewareAgent`
- Use `MiddlewareAgent` to implement existing agent hooks (RegisterPreProcess, RegisterPostProcess, RegisterReply)
- Remove `AutoReplyAgent`, `PreProcessAgent`, `PostProcessAgent` because they are replaced by `MiddlewareAgent`
- Simplify the `IAgent` interface by removing the `ChatLLM` property
- Add `GenerateReplyOptions` to `IAgent.GenerateReplyAsync`, which allows the user to specify or override options when generating a reply
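Replacing the pre-process, post-process, and auto-reply hooks with a single middleware agent follows the standard middleware-chaining pattern: each layer receives the next reply function and returns a wrapped one. A self-contained sketch of the pattern; `ReplyFunc` and `MiddlewarePipeline` are hypothetical illustration names, not the AutoGen.Core API:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public delegate Task<string> ReplyFunc(string message);

public class MiddlewarePipeline
{
    private readonly List<Func<ReplyFunc, ReplyFunc>> _middlewares = new();

    // Each middleware receives the next reply function and returns a wrapped one.
    public void Use(Func<ReplyFunc, ReplyFunc> middleware) => _middlewares.Add(middleware);

    // Compose in reverse registration order so the first registered middleware runs outermost.
    public ReplyFunc Build(ReplyFunc inner)
    {
        var reply = inner;
        for (int i = _middlewares.Count - 1; i >= 0; i--)
            reply = _middlewares[i](reply);
        return reply;
    }
}
```

One composition mechanism covers all three old hooks: a pre-process hook transforms the message before calling `next`, a post-process hook transforms the result of `next`, and an auto-reply hook simply returns without calling `next` at all.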
- Move out dependency on Semantic Kernel
- Add type `IChatLLM` as connector to LLM
- In AutoGen.SourceGenerator, rename `FunctionAttribution` to `FunctionAttribute`
- In AutoGen, refactor over `ConversationAgent`, `UserProxyAgent`, and `AssistantAgent`
- Update `Azure.AI.OpenAI` to 1.0.0-beta.12
- Update Semantic Kernel to 1.0.1