1. Introduction to LangChain

LangChain is a framework for building applications on top of large language models. It provides model invocation, prompt management, chained calls, and more.

Core components:

  • Model I/O: model input and output
  • Prompts: prompt templates
  • Chains: chained calls
  • Memory: conversation memory
  • Agents: intelligent agents
  • Tools: tool integration
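At their core, these components compose like functions: format a prompt, call a model, parse the output. A minimal plain-Python sketch of that idea (no LangChain involved; `fake_llm` is a stand-in for a real model call):

```python
def make_prompt(topic: str) -> str:
    # Prompts: turn user input into a model-ready string.
    return f"Recommend a book about {topic}"

def fake_llm(prompt: str) -> str:
    # Model I/O: stand-in for a real model; echoes the prompt.
    return f"[model answer to: {prompt}]"

def parse(output: str) -> str:
    # Output parsing: clean up the raw model response.
    return output.strip()

def chain(topic: str) -> str:
    # Chains: compose the steps into one call.
    return parse(fake_llm(make_prompt(topic)))

print(chain("Python"))  # [model answer to: Recommend a book about Python]
```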

2. Environment Setup

Installation

pip install langchain langchain-openai

Configure the API key

import os
os.environ["OPENAI_API_KEY"] = "your-api-key"

3. Model I/O

Calling an LLM

from langchain_openai import OpenAI

llm = OpenAI()
response = llm.invoke("Hello")
print(response)

Calling a chat model

from langchain_openai import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI()
messages = [
    SystemMessage(content="You are a helpful assistant"),
    HumanMessage(content="Hello")
]
response = chat.invoke(messages)
print(response.content)

Streaming output

for chunk in llm.stream("Tell me a story"):
    print(chunk, end="", flush=True)
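Streaming simply yields the response in chunks as they arrive, instead of one final string. The consumption pattern can be mimicked with an ordinary generator (a stand-in for `llm.stream`, not the real API):

```python
def fake_stream(text: str):
    # Stand-in for llm.stream: yield the response piece by piece.
    for word in text.split():
        yield word + " "

# Consume chunks exactly as the loop above consumes llm.stream(...).
story = "".join(chunk for chunk in fake_stream("Once upon a time"))
print(story.strip())  # Once upon a time
```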

4. Prompt Templates

PromptTemplate

from langchain.prompts import PromptTemplate

template = "Recommend {num} books about {topic}"
prompt = PromptTemplate.from_template(template)

formatted = prompt.format(num=3, topic="Python")
response = llm.invoke(formatted)

ChatPromptTemplate

from langchain.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([
    ("system", "You are a {role}"),
    ("human", "{input}")
])

messages = template.format_messages(role="translator", input="Translate 'hello' into Chinese")
response = chat.invoke(messages)

FewShotPromptTemplate

from langchain.prompts import FewShotPromptTemplate

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"}
]

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}"
)

few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Give the antonym",
    suffix="Input: {input}\nOutput:",
    input_variables=["input"]
)

5. Chains

LLMChain

from langchain.chains import LLMChain

chain = LLMChain(llm=llm, prompt=prompt)
result = chain.run(num=3, topic="Python")

SimpleSequentialChain

from langchain.chains import SimpleSequentialChain

# prompt1 and prompt2 are PromptTemplates defined elsewhere
chain1 = LLMChain(llm=llm, prompt=prompt1)
chain2 = LLMChain(llm=llm, prompt=prompt2)

overall_chain = SimpleSequentialChain(
    chains=[chain1, chain2],
    verbose=True
)
result = overall_chain.run("Python")

LCEL (LangChain Expression Language)

from langchain_core.output_parsers import StrOutputParser

chain = prompt | llm | StrOutputParser()
result = chain.invoke({"num": 3, "topic": "Python"})
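The `|` pipe is plain operator overloading: each step feeds its output to the next. A minimal stand-in shows the mechanics (this hypothetical `Runnable` is not LangChain's actual implementation, just the composition idea):

```python
class Runnable:
    # Minimal stand-in for LCEL's pipe composition.
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # a | b builds a new step that runs a, then b.
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Runnable(lambda d: f"Recommend {d['num']} books on {d['topic']}")
llm = Runnable(lambda p: f"[answer to: {p}]")
parser = Runnable(str.strip)

chain = prompt | llm | parser
print(chain.invoke({"num": 3, "topic": "Python"}))
```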

6. Memory

ConversationBufferMemory

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "Hello"}, {"output": "Hi, how can I help you?"})

chain = LLMChain(llm=llm, prompt=prompt, memory=memory)

ConversationBufferWindowMemory

from langchain.memory import ConversationBufferWindowMemory

# Keep only the most recent k turns of conversation
memory = ConversationBufferWindowMemory(k=2)
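The windowing behavior is easy to picture: once more than k turns are saved, the oldest turns drop out. A deque-based sketch of the idea (this hypothetical `WindowMemory` is not the LangChain class):

```python
from collections import deque

class WindowMemory:
    # Keep only the last k conversation turns, like a buffer window memory.
    def __init__(self, k: int):
        self.turns = deque(maxlen=k)  # deque silently evicts the oldest entry

    def save_context(self, user: str, ai: str):
        self.turns.append((user, ai))

    def load(self):
        return list(self.turns)

window = WindowMemory(k=2)
window.save_context("Hi", "Hello!")
window.save_context("What is LangChain?", "An LLM framework.")
window.save_context("Thanks", "You're welcome.")
print(window.load())  # only the last 2 turns remain
```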

7. Agents

Defining tools

from langchain.tools import Tool

def search(query: str) -> str:
    return f"Search results: {query}"

tools = [
    Tool(
        name="Search",
        func=search,
        description="A search tool"
    )
]

Creating an agent

from langchain.agents import initialize_agent, AgentType

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

result = agent.run("Search for Python tutorials")
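Under the hood, a ReAct-style agent loops: the model proposes an action, the framework runs the matching tool, and the observation is fed back until the model emits a final answer. A scripted sketch of that loop (no real LLM; the step list stands in for model decisions):

```python
def search(query: str) -> str:
    return f"Search results: {query}"

tools = {"Search": search}

def run_agent(steps):
    # Each step is (action, argument); "FINAL" ends the loop with the answer.
    observations = []
    for action, arg in steps:
        if action == "FINAL":
            return arg, observations
        observations.append(tools[action](arg))  # run the tool, keep the observation

answer, observations = run_agent([
    ("Search", "Python tutorial"),               # model asks for a tool call
    ("FINAL", "Here are some Python tutorials."),  # model answers after seeing results
])
print(answer)
```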

8. Introduction to LangGraph

LangGraph is a library for building stateful, multi-actor LLM applications.

Core concepts:

  • State: shared application state
  • Node: a step that updates the state
  • Edge: a transition between nodes
  • Graph: the overall workflow
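Stripped to essentials, running a graph means: start at the entry node, apply each node's update to the state, and follow edges until there is no next node. A toy runner capturing that loop (hypothetical, far simpler than LangGraph itself):

```python
def run_graph(nodes, edges, entry, state):
    # Apply each node's partial update, then follow the edge to the next node.
    current = entry
    while current is not None:
        state.update(nodes[current](state))
        current = edges.get(current)  # no outgoing edge means END
    return state

nodes = {
    "inc": lambda s: {"count": s["count"] + 1},
    "double": lambda s: {"count": s["count"] * 2},
}
edges = {"inc": "double"}  # "double" has no edge, so the run ends there

print(run_graph(nodes, edges, "inc", {"count": 1}))  # {'count': 4}
```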

9. LangGraph Basics

Defining the state

from typing import TypedDict

class State(TypedDict):
    messages: list
    count: int

Defining nodes

def node1(state: State) -> State:
    return {"count": state["count"] + 1}

def node2(state: State) -> State:
    return {"count": state["count"] * 2}
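Because nodes are plain functions returning partial state updates, they can be exercised without building a graph at all (definitions repeated here so the snippet stands alone):

```python
from typing import TypedDict

class State(TypedDict):
    messages: list
    count: int

def node1(state: State) -> State:
    return {"count": state["count"] + 1}

def node2(state: State) -> State:
    return {"count": state["count"] * 2}

# Merging each node's partial update mimics what the graph does per step.
state = {"messages": [], "count": 1}
state.update(node1(state))  # count: 1 -> 2
state.update(node2(state))  # count: 2 -> 4
print(state["count"])  # 4
```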

Building the graph

from langgraph.graph import StateGraph, END

workflow = StateGraph(State)

# Add nodes
workflow.add_node("node1", node1)
workflow.add_node("node2", node2)

# Set the entry point
workflow.set_entry_point("node1")

# Add edges
workflow.add_edge("node1", "node2")
workflow.add_edge("node2", END)

# Compile
app = workflow.compile()

Executing the graph

result = app.invoke({"messages": [], "count": 1})
print(result)

10. Conditional Edges

def should_continue(state: State) -> str:
    if state["count"] < 10:
        return "continue"
    return "end"

workflow.add_conditional_edges(
    "node1",
    should_continue,
    {
        "continue": "node2",
        "end": END
    }
)

11. Cyclic Graphs

workflow = StateGraph(State)

workflow.add_node("node1", node1)
workflow.add_node("node2", node2)

workflow.set_entry_point("node1")

workflow.add_edge("node1", "node2")
workflow.add_conditional_edges(
    "node2",
    should_continue,
    {
        "continue": "node1",
        "end": END
    }
)
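The cycle above can be traced by hand without langgraph: starting from count=1, each pass applies node1 (+1) then node2 (*2), and the loop exits once should_continue returns "end". A plain-Python simulation, repeating the node and routing functions so it stands alone:

```python
def node1(state):
    return {"count": state["count"] + 1}

def node2(state):
    return {"count": state["count"] * 2}

def should_continue(state):
    return "continue" if state["count"] < 10 else "end"

state = {"messages": [], "count": 1}
while True:
    state.update(node1(state))   # pass 1: 1 -> 2,  pass 2: 4 -> 5
    state.update(node2(state))   # pass 1: 2 -> 4,  pass 2: 5 -> 10
    if should_continue(state) == "end":
        break

print(state["count"])  # 10
```

Two passes suffice here: 1 → 2 → 4 (continue), then 4 → 5 → 10 (end).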

12. Summary

LangChain essentials:

  • Model I/O manages model calls
  • PromptTemplate manages prompts
  • Chains compose calls into pipelines
  • Memory manages conversation history
  • Agents enable tool-using reasoning

LangGraph essentials:

  • State-driven graph structure
  • Nodes and edges define the flow
  • Conditional edges enable branching
  • Cycles enable iteration

LangChain and LangGraph are core frameworks for building LLM applications, well suited to conversational systems, intelligent agents, and workflow automation.