Chapter 1: ChatModel and Message (Console)
Introduction to the Eino framework
What is Eino?
Eino is an AI application development framework in Go (Agent Development Kit) designed to help developers quickly build scalable and maintainable AI applications.
What problems does Eino solve?
- Model abstraction: unify interfaces across different LLM providers (OpenAI, Ark, Claude, etc.), so switching models does not require changing business code
- Capability composition: provide replaceable, composable capability units through the Component interfaces (chat, tools, retrieval, etc.)
- Orchestration framework: offer orchestration abstractions such as Agent, Graph, and Chain to support complex multi-step AI workflows
- Runtime support: built-in streaming output, interrupt/resume, state management, and Callback-based observability
Main repositories of Eino:
- eino (this repo): the core library, defining interfaces, orchestration abstractions, and ADK
- eino-ext: the extension library, providing concrete implementations of Components (OpenAI, Ark, Milvus, etc.)
- eino-examples: the examples repo, including this Quickstart series
ChatWithEino: an assistant that talks with Eino docs
What is ChatWithEino?
ChatWithEino is an intelligent assistant built with Eino. It helps developers learn Eino and write Eino code by accessing the Eino repository’s source code, comments, and examples, so it can provide accurate and up-to-date technical help.
Core capabilities:
- Conversational interaction: understand questions about Eino and respond clearly
- Code access: read Eino source code/comments/examples and answer based on real implementations
- Persistent sessions: support multi-turn conversations, remember context, and restore sessions across processes
- Tool calling: perform operations such as file reading and code search
Architecture overview:
- ChatModel: communicate with LLM providers (OpenAI, Ark, Claude, etc.)
- Tool: extend capabilities such as file system access and code search
- Memory: persist conversation history
- Agent: a unified execution framework that coordinates components
Quickstart series: build ChatWithEino from scratch
This series walks you step by step: starting from the most basic ChatModel call, and progressively building a fully functional ChatWithEino Agent.
Learning path:
| Chapter | Topic | Core content | Capability gain |
| --- | --- | --- | --- |
| Chapter 1 | ChatModel and Message | Understand the Component abstraction and implement a single-turn chat | Basic conversation |
| Chapter 2 | Agent and Runner | Introduce execution abstractions and implement multi-turn chat | Session management |
| Chapter 3 | Memory and Session | Persist chat history and support session recovery | Persistence |
| Chapter 4 | Tools and file system | Add file access to read source code | Tool calling |
| Chapter 5 | Middleware | Middleware mechanism and unified cross-cutting concerns | Extensibility |
| Chapter 6 | Callback | Callbacks to observe the Agent execution process | Observability |
| Chapter 7 | Interrupt and Resume | Interrupt and resume to support long-running tasks | Reliability |
| Chapter 8 | Graph and Tool | Use Graph to orchestrate complex workflows | Complex orchestration |
| Chapter 9 | A2UI | Integration from Agent to UI | Production-grade delivery |
Why design it this way?
Each chapter adds one core capability on top of the previous chapter, so you can:
- Understand the role of each component: features are introduced progressively instead of all at once
- See the architecture evolve: from simple to complex, and why each abstraction exists
- Build practical skills: every chapter comes with runnable code you can try hands-on
Goal of this chapter: understand Eino’s Component abstraction, call a ChatModel once with minimal code (with streaming output), and learn the basics of schema.Message.
Code location
- Entry code: cmd/ch01/main.go
Why we need the Component interfaces
Eino defines a set of Component interfaces (ChatModel, Tool, Retriever, Loader, etc.). Each interface describes one replaceable capability category:
```go
type BaseChatModel interface {
    Generate(ctx context.Context, input []*schema.Message, opts ...Option) (*schema.Message, error)
    Stream(ctx context.Context, input []*schema.Message, opts ...Option) (
        *schema.StreamReader[*schema.Message], error)
}
```
Benefits of interfaces:
- Replaceable implementations: `eino-ext` provides implementations for OpenAI, Ark, Claude, Ollama, and more. Business code depends only on the interface, so switching models only changes construction logic.
- Composable orchestration: orchestration layers such as Agent, Graph, and Chain depend only on Component interfaces, not concrete implementations. You can swap OpenAI for Ark without changing orchestration code.
- Mockable in tests: interfaces make mocking natural; unit tests do not need real model calls.
This chapter focuses on ChatModel. Later chapters will introduce Components such as Tool and Retriever.
schema.Message: the basic unit of conversation
Message is the basic structure for conversation data in Eino:
```go
type Message struct {
    Role      RoleType   // system / user / assistant / tool
    Content   string     // text content
    ToolCalls []ToolCall // only assistant messages may have this
    // ...
}
```
Common constructors:
```go
schema.SystemMessage("You are a helpful assistant.")
schema.UserMessage("What is the weather today?")
schema.AssistantMessage("I don't know.", nil) // second arg is ToolCalls
schema.ToolMessage("tool result", "call_id")
```
Role semantics:
- system: system instructions, typically placed at the beginning of messages
- user: user input
- assistant: model response
- tool: tool call result (covered in later chapters)
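The typical role ordering (system first, then user) can be illustrated with a small self-contained sketch. The `Message` type and constructors below are local stand-ins that mirror the shape of the real `schema` helpers, defined inline so the example runs without the Eino dependency.

```go
package main

import "fmt"

// RoleType and Message are stand-ins mirroring the shape of the real
// schema package (illustration only).
type RoleType string

const (
	System    RoleType = "system"
	User      RoleType = "user"
	Assistant RoleType = "assistant"
	Tool      RoleType = "tool"
)

type Message struct {
	Role    RoleType
	Content string
}

// Constructors mirroring schema.SystemMessage / schema.UserMessage.
func SystemMessage(content string) *Message { return &Message{Role: System, Content: content} }
func UserMessage(content string) *Message   { return &Message{Role: User, Content: content} }

func main() {
	// A typical single-turn input: system instruction first, then user input.
	messages := []*Message{
		SystemMessage("You are a helpful assistant."),
		UserMessage("What is the weather today?"),
	}
	for _, m := range messages {
		fmt.Printf("[%s] %s\n", m.Role, m.Content)
	}
}
```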
Prerequisites
Get the code
```bash
git clone https://github.com/cloudwego/eino-examples.git
cd eino-examples/quickstart/chatwitheino
```
- Go version: Go 1.21+ (see `go.mod`)
- A callable ChatModel (OpenAI by default; Ark is also supported)
Option A: OpenAI (default)
```bash
export OPENAI_API_KEY="..."
export OPENAI_MODEL="gpt-4.1-mini" # OpenAI 2025 model; gpt-4o / gpt-4o-mini also work
# Optional:
# OPENAI_BASE_URL (proxy or compatible service)
# OPENAI_BY_AZURE=true (use Azure OpenAI)
```
Option B: Ark
```bash
export MODEL_TYPE="ark"
export ARK_API_KEY="..."
export ARK_MODEL="..."
# Optional: ARK_BASE_URL
```
Run
In eino-examples/quickstart/chatwitheino, run:
```bash
go run ./cmd/ch01 -- "Explain in one sentence what problem Eino’s Component design solves."
```
Example output (printed incrementally as the stream arrives):
```
[assistant] Eino’s Component design defines unified interfaces...
```
What the entry code does
In execution order:
- Create a ChatModel: choose OpenAI or Ark based on the `MODEL_TYPE` environment variable
- Build input messages: `SystemMessage(instruction)` + `UserMessage(query)`
- Call Stream: all ChatModel implementations must support `Stream()`, returning a `StreamReader[*Message]`
- Print the result: iterate the `StreamReader` and print the assistant reply chunk by chunk
Key code snippet (note: simplified and not directly runnable; for the full code see cmd/ch01/main.go):
```go
// Build input
messages := []*schema.Message{
    schema.SystemMessage(instruction),
    schema.UserMessage(query),
}

// Call Stream (all ChatModels must implement this)
stream, err := cm.Stream(ctx, messages)
if err != nil {
    log.Fatal(err)
}
defer stream.Close()

for {
    chunk, err := stream.Recv()
    if errors.Is(err, io.EOF) {
        break
    }
    if err != nil {
        log.Fatal(err)
    }
    fmt.Print(chunk.Content)
}
```
Summary
- Component interfaces: define boundaries for replaceable, composable, and testable capabilities
- Message: the basic unit of conversation data, with semantics defined by roles
- ChatModel: the most fundamental Component, providing `Generate` and `Stream`
- Implementation choice: switch between OpenAI/Ark implementations via env/config without changing business code