Eino ['aino] (pronounced similarly to "I know", hoping that the framework can achieve the vision of "I know") aims to be the ultimate LLM application development framework in Golang. Drawing inspiration from excellent open-source LLM application development frameworks such as LangChain and LlamaIndex, as well as from cutting-edge research and real-world applications, Eino offers an LLM application development framework that emphasizes simplicity, scalability, reliability, and effectiveness, and that aligns with Golang programming conventions.
What Eino provides:
With the above capabilities and tools, Eino can standardize and simplify development and improve efficiency at different stages of the AI application development lifecycle:
Use a component directly:
model, _ := openai.NewChatModel(ctx, config) // create an invokable LLM instance
message, _ := model.Generate(ctx, []*Message{
    SystemMessage("you are a helpful assistant."),
    UserMessage("what does the future AI App look like?"),
})
Of course, you can do that. Eino provides lots of useful components to use out of the box. But you can do more by using orchestration, for three reasons:
Eino provides three sets of APIs for orchestration:
| API | Characteristics and usage |
| --- | --- |
| Chain | Simple chained directed graph that can only go forward. |
| Graph | Cyclic or acyclic directed graph. Powerful and flexible. |
| Workflow | Acyclic graph that supports data mapping at the struct field level. |
Let’s create a simple chain: a ChatTemplate followed by a ChatModel.
chain, err := NewChain[map[string]any, *Message]().
AppendChatTemplate(prompt).
AppendChatModel(model).
Compile(ctx)
if err != nil {
return err
}
out, err := chain.Invoke(ctx, map[string]any{"query": "what's your name?"})
Now let’s create a graph that uses a ChatModel to generate tool calls, then uses a ToolsNode to execute those tools:
graph := NewGraph[map[string]any, *schema.Message]()
_ = graph.AddChatTemplateNode("node_template", chatTpl)
_ = graph.AddChatModelNode("node_model", chatModel)
_ = graph.AddToolsNode("node_tools", toolsNode)
_ = graph.AddLambdaNode("node_converter", takeOne)
_ = graph.AddEdge(START, "node_template")
_ = graph.AddEdge("node_template", "node_model")
_ = graph.AddBranch("node_model", branch)
_ = graph.AddEdge("node_tools", "node_converter")
_ = graph.AddEdge("node_converter", END)
compiledGraph, err := graph.Compile(ctx)
if err != nil {
return err
}
out, err := compiledGraph.Invoke(ctx, map[string]any{"query": "Beijing's weather this weekend"})
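The `branch` variable above is left undefined in the snippet. Conceptually, a graph branch is a condition function that inspects the ChatModel's output and picks the next node. Here is a self-contained sketch of that routing decision; the `Message` and `ToolCall` types below are minimal hypothetical stand-ins, not Eino's schema types.

```go
package main

import "fmt"

// Minimal stand-ins for a chat message and a tool call (hypothetical).
type ToolCall struct{ Name string }
type Message struct {
	Content   string
	ToolCalls []ToolCall
}

const END = "end"

// routeAfterModel mirrors what a branch condition typically does:
// if the ChatModel emitted tool calls, continue to the tools node;
// otherwise finish the graph.
func routeAfterModel(msg *Message) string {
	if len(msg.ToolCalls) > 0 {
		return "node_tools"
	}
	return END
}

func main() {
	fmt.Println(routeAfterModel(&Message{ToolCalls: []ToolCall{{Name: "get_weather"}}}))
	fmt.Println(routeAfterModel(&Message{Content: "sunny"}))
}
```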
Now let’s create a workflow that flexibly maps input & output at the field level:
wf := NewWorkflow[[]*Message, *Message]()
wf.AddChatModelNode("model", model).AddInput(START)
wf.AddLambdaNode("l1", lambda1).AddInput(NewMapping("model").From("Content").To("Input"))
wf.AddLambdaNode("l2", lambda2).AddInput(NewMapping("model").From("Role").To("Role"))
wf.AddLambdaNode("l3", lambda3).AddInput(
NewMapping("l1").From("Output").To("Query"),
NewMapping("l2").From("Output").To("MetaData"),
)
wf.AddEnd([]*Mapping{NewMapping("l3")})
runnable, _ := wf.Compile(ctx)
runnable.Invoke(ctx, []*Message{UserMessage("kick start this workflow!")})
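A mapping such as `NewMapping("model").From("Content").To("Input")` boils down to copying one field of the upstream node's output struct into one field of the downstream node's input struct. A minimal stand-in sketch (the struct types here are hypothetical, not Eino's):

```go
package main

import "fmt"

// Hypothetical upstream output and downstream input types.
type ModelOutput struct {
	Role    string
	Content string
}
type L1Input struct {
	Query string
	Input string
}

func main() {
	out := ModelOutput{Role: "assistant", Content: "hello"}
	// From("Content").To("Input") amounts to a single field copy:
	in := L1Input{Input: out.Content}
	fmt.Println(in.Input)
}
```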
Now let’s create a ‘ReAct’ agent: a ChatModel is bound to Tools. It receives input Messages and decides on its own whether to call a Tool or to output the final result. The Tool’s execution result in turn becomes an input Message for the ChatModel, serving as context for the next round of independent judgment.
We provide a complete implementation of the ReAct agent out of the box in Eino’s flow
package. See the code at: flow/agent/react
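The ReAct loop described above can be sketched in plain Go with a stubbed model and tool (everything here is a hypothetical stand-in, not the flow/agent/react implementation): call the model, run any tool it requests, append the result to the history, and repeat until the model produces a final answer.

```go
package main

import "fmt"

// Hypothetical message type; ToolCall is non-empty when a tool is requested.
type Message struct {
	Role     string
	Content  string
	ToolCall string
}

// fakeModel stands in for a Tool-bound ChatModel: on the first turn it
// requests a tool; once a tool result is in the history, it answers.
func fakeModel(history []Message) Message {
	for _, m := range history {
		if m.Role == "tool" {
			return Message{Role: "assistant", Content: "Weekend forecast: " + m.Content}
		}
	}
	return Message{Role: "assistant", ToolCall: "get_weather"}
}

// runTool stands in for a ToolsNode executing the requested tool.
func runTool(name string) Message {
	return Message{Role: "tool", Content: "22°C, clear"}
}

// runReAct is the loop: the model decides, the tool runs, and the tool's
// result feeds back into the history as context for the next round.
func runReAct(query string) string {
	history := []Message{{Role: "user", Content: query}}
	for i := 0; i < 5; i++ { // bound the loop to avoid endless tool calls
		reply := fakeModel(history)
		history = append(history, reply)
		if reply.ToolCall == "" {
			return reply.Content
		}
		history = append(history, runTool(reply.ToolCall))
	}
	return ""
}

func main() {
	fmt.Println(runReAct("Beijing's weather this weekend"))
}
```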
Behind the code above, Eino automatically takes care of several important things:
For example, you could easily extend the compiled graph with callbacks:
handler := NewHandlerBuilder().
    OnStartFn(
        func(ctx context.Context, info *RunInfo, input CallbackInput) context.Context {
            log.Infof("onStart, runInfo: %v, input: %v", info, input)
            return ctx
        }).
    OnEndFn(
        func(ctx context.Context, info *RunInfo, output CallbackOutput) context.Context {
            log.Infof("onEnd, runInfo: %v, out: %v", info, output)
            return ctx
        }).
    Build()
compiledGraph.Invoke(ctx, input, WithCallbacks(handler))
or you could easily assign options to different nodes:
// assign to all nodes
compiledGraph.Invoke(ctx, input, WithCallbacks(handler))
// assign only to ChatModel nodes
compiledGraph.Invoke(ctx, input, WithChatModelOption(WithTemperature(0.5)))
// assign only to node_1
compiledGraph.Invoke(ctx, input, WithCallbacks(handler).DesignateNode("node_1"))
| Streaming Paradigm | Explanation |
| --- | --- |
| Invoke | Accepts non-stream type I and returns non-stream type O |
| Stream | Accepts non-stream type I and returns stream type StreamReader[O] |
| Collect | Accepts stream type StreamReader[I] and returns non-stream type O |
| Transform | Accepts stream type StreamReader[I] and returns stream type StreamReader[O] |
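The paradigms above can be sketched with a toy, self-contained stream reader (this is not Eino's `schema.StreamReader`, just a channel-backed stand-in): Invoke is an ordinary function call, Stream produces chunks from a plain input, and Collect drains a stream back into a plain output.

```go
package main

import (
	"fmt"
	"strings"
)

// StreamReader is a toy stand-in: a closed, buffered channel of chunks.
type StreamReader[T any] struct{ ch chan T }

func NewStream[T any](items ...T) *StreamReader[T] {
	ch := make(chan T, len(items))
	for _, it := range items {
		ch <- it
	}
	close(ch)
	return &StreamReader[T]{ch: ch}
}

// Recv returns the next chunk; ok is false when the stream is exhausted.
func (s *StreamReader[T]) Recv() (T, bool) {
	v, ok := <-s.ch
	return v, ok
}

// Stream: non-stream in, stream out — split a sentence into word chunks.
func Stream(in string) *StreamReader[string] {
	return NewStream(strings.Fields(in)...)
}

// Collect: stream in, non-stream out — concatenate all chunks.
// (Transform would read chunks and emit transformed chunks the same way.)
func Collect(in *StreamReader[string]) string {
	var parts []string
	for {
		v, ok := in.Recv()
		if !ok {
			break
		}
		parts = append(parts, v)
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(Collect(Stream("hello streaming world")))
}
```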
The Eino framework consists of several parts:
For details, see: The structure of the Eino Framework
For learning and using Eino, we provide a comprehensive Eino User Manual to help you quickly understand the concepts in Eino and master the skills of developing AI applications based on Eino. Start exploring through the Eino User Manual now!
For a quick introduction to building AI applications with Eino, we recommend starting with Eino: Quick Start
Full API Reference: https://pkg.go.dev/github.com/cloudwego/eino
If you discover a potential security issue in this project, or think you may have discovered a security issue, we ask that you notify Bytedance Security via our security center or vulnerability reporting email.
Please do not create a public GitHub issue.
This project is licensed under the Apache-2.0 License.