This repository was archived by the owner on Nov 21, 2024. It is now read-only.

Commit f2dbf01

eyurtsev, rlancemartin, hwchase17, and baskaryan authored
Docs: Re-organize conceptual docs (langchain-ai#27047)
Reorganization of conceptual documentation
---------
Co-authored-by: Lance Martin <122662504+rlancemartin@users.noreply.github.com>
Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
1 parent 6d2a76a commit f2dbf01


52 files changed: +3621 −1394 lines

docs/docs/concepts.mdx

Lines changed: 0 additions & 1391 deletions
This file was deleted.

docs/docs/concepts/agents.mdx

Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
# Agents

By themselves, language models can't take actions - they just output text. Agents are systems that take a high-level task and use an LLM as a reasoning engine to decide what actions to take and execute those actions.

[LangGraph](/docs/concepts/architecture#langgraph) is an extension of LangChain specifically aimed at creating highly controllable and customizable agents. We recommend that you use LangGraph for building agents.

Please see the following resources for more information:

* LangGraph docs on [common agent architectures](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/)
* [Pre-built agents in LangGraph](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent)

## Legacy agent concept: AgentExecutor

LangChain previously introduced the `AgentExecutor` as a runtime for agents.
While it served as an excellent starting point, its limitations became apparent when dealing with more sophisticated and customized agents.
As a result, we're gradually phasing out `AgentExecutor` in favor of more flexible solutions in LangGraph.

### Transitioning from AgentExecutor to LangGraph

If you're currently using `AgentExecutor`, don't worry! We've prepared resources to help you:

1. For those who still need to use `AgentExecutor`, we offer a comprehensive guide on [how to use AgentExecutor](/docs/how_to/agent_executor).
2. However, we strongly recommend transitioning to LangGraph for improved flexibility and control. To facilitate this transition, we've created a detailed [migration guide](/docs/how_to/migrate_agent) to help you move from `AgentExecutor` to LangGraph seamlessly.
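The "LLM as a reasoning engine" loop described above can be sketched in plain Python. This is an illustrative stand-alone example only, not a LangChain or LangGraph API: the `decide` function, the tool names, and the step format are all hypothetical stand-ins for a real LLM reasoning step and real tools.

```python
# Minimal sketch of an agent loop: a "reasoning" step picks the next action,
# the system executes it, and the observation feeds back into the next step.
# decide() stands in for an LLM call; everything here is illustrative.
from typing import Callable


def multiply(a: int, b: int) -> int:
    """A toy tool the 'agent' can invoke."""
    return a * b


TOOLS: dict[str, Callable] = {"multiply": multiply}


def decide(task: str, observations: list) -> dict:
    """Stand-in for the LLM reasoning step: choose the next action."""
    if not observations:
        return {"action": "multiply", "args": (6, 7)}
    return {"action": "finish", "answer": observations[-1]}


def run_agent(task: str) -> int:
    observations: list = []
    while True:
        step = decide(task, observations)
        if step["action"] == "finish":
            return step["answer"]
        # Execute the chosen tool and record the result as an observation.
        observations.append(TOOLS[step["action"]](*step["args"]))


print(run_agent("what is 6 times 7?"))  # prints 42
```

A real agent replaces `decide` with a chat-model call that emits tool-call requests; LangGraph's prebuilt agents (linked above) implement exactly this loop with state, streaming, and persistence handled for you.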

docs/docs/concepts/architecture.mdx

Lines changed: 78 additions & 0 deletions
@@ -0,0 +1,78 @@
import ThemedImage from '@theme/ThemedImage';
import useBaseUrl from '@docusaurus/useBaseUrl';

# Architecture

LangChain as a framework consists of a number of packages.

<ThemedImage
  alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
  sources={{
    light: useBaseUrl('/svg/langchain_stack_062024.svg'),
    dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
  }}
  title="LangChain Framework Overview"
  style={{ width: "100%" }}
/>

## langchain-core

This package contains base abstractions for different components and ways to compose them together.
The interfaces for core components like LLMs, vector stores, retrievers and more are defined here.
No third-party integrations are defined here.
The dependencies are kept purposefully very lightweight.

## langchain

The main `langchain` package contains chains, agents, and retrieval strategies that make up an application's cognitive architecture.
These are NOT third-party integrations.
All chains, agents, and retrieval strategies here are NOT specific to any one integration, but rather generic across all integrations.

## langchain-community

This package contains third-party integrations that are maintained by the LangChain community.
Key partner packages are separated out (see below).
This contains all integrations for various components (LLMs, vector stores, retrievers).
All dependencies in this package are optional to keep the package as lightweight as possible.

## Partner packages

While the long tail of integrations is in `langchain-community`, we split popular integrations into their own packages (e.g. `langchain-openai`, `langchain-anthropic`, etc.). This was done in order to improve support for these important integrations.

For more information see:

* A list of [LangChain integrations](/docs/integrations/providers/)
* The [LangChain API Reference](https://python.langchain.com/api_reference/), where you can find detailed reference documentation for each partner package.

## LangGraph

`langgraph` is an extension of `langchain` aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.

LangGraph exposes high-level interfaces for creating common types of agents, as well as a low-level API for composing custom flows.

:::info[Further reading]

* See our LangGraph overview [here](https://langchain-ai.github.io/langgraph/concepts/high_level/#core-principles).
* See our LangGraph Academy Course [here](https://academy.langchain.com/courses/intro-to-langgraph).

:::

## LangServe

A package to deploy LangChain chains as REST APIs. Makes it easy to get a production-ready API up and running.

:::important
LangServe is designed to primarily deploy simple Runnables and work with well-known primitives in `langchain-core`.

If you need a deployment option for LangGraph, you should instead be looking at LangGraph Cloud (beta), which will be better suited for deploying LangGraph applications.
:::

For more information, see the [LangServe documentation](/docs/langserve).

## LangSmith

A developer platform that lets you debug, test, evaluate, and monitor LLM applications.

For more information, see the [LangSmith documentation](https://docs.smith.langchain.com).

docs/docs/concepts/async.mdx

Lines changed: 81 additions & 0 deletions
@@ -0,0 +1,81 @@
# Async programming with LangChain

:::info Prerequisites
* [Runnable interface](/docs/concepts/runnables)
* [asyncio](https://docs.python.org/3/library/asyncio.html)
:::

LLM-based applications often involve a lot of I/O-bound operations, such as making API calls to language models, databases, or other services. Asynchronous programming (or async programming) is a paradigm that allows a program to perform multiple tasks concurrently without blocking the execution of other tasks, improving efficiency and responsiveness, particularly in I/O-bound operations.

:::note
You are expected to be familiar with asynchronous programming in Python before reading this guide. If you are not, please find appropriate resources online to learn how to program asynchronously in Python.
This guide specifically focuses on what you need to know to work with LangChain in an asynchronous context, assuming that you are already familiar with asynchronous programming.
:::

## LangChain asynchronous APIs

Many LangChain APIs are designed to be asynchronous, allowing you to build efficient and responsive applications.

Typically, any method that may perform I/O operations (e.g., making API calls, reading files) will have an asynchronous counterpart.

In LangChain, async implementations are located in the same classes as their synchronous counterparts, with the asynchronous methods having an "a" prefix. For example, the synchronous `invoke` method has an asynchronous counterpart called `ainvoke`.

Many components of LangChain implement the [Runnable Interface](/docs/concepts/runnables), which includes support for asynchronous execution. This means that you can run Runnables asynchronously using the `await` keyword in Python.

```python
await some_runnable.ainvoke(some_input)
```

Other components like [Embedding Models](/docs/concepts/embedding_models) and [VectorStore](/docs/concepts/vectorstores) that do not implement the [Runnable Interface](/docs/concepts/runnables) usually still follow the same rule and include the asynchronous version of the method in the same class with an "a" prefix.

For example,

```python
await some_vectorstore.aadd_documents(documents)
```

Runnables created using the [LangChain Expression Language (LCEL)](/docs/concepts/lcel) can also be run asynchronously as they implement the full [Runnable Interface](/docs/concepts/runnables).

For more information, please review the [API reference](https://python.langchain.com/api_reference/) for the specific component you are using.

## Delegation to sync methods

Most popular LangChain integrations implement asynchronous support of their APIs. For example, the `ainvoke` method of many ChatModel implementations uses `httpx.AsyncClient` to make asynchronous HTTP requests to the model provider's API.

When an asynchronous implementation is not available, LangChain tries to provide a default implementation, even if it incurs a **slight** overhead.

By default, LangChain will delegate the execution of an unimplemented asynchronous method to its synchronous counterpart. LangChain almost always assumes that the synchronous method should be treated as a blocking operation and should be run in a separate thread.
This is done using the [asyncio.loop.run_in_executor](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor) functionality provided by the `asyncio` library. LangChain uses the default executor provided by the `asyncio` library, which lazily initializes a thread pool executor with a default number of threads that is reused in the given event loop. While this strategy incurs a slight overhead due to context switching between threads, it guarantees that every asynchronous method has a default implementation that works out of the box.

## Performance

Async code in LangChain should generally perform relatively well with minimal overhead out of the box, and is unlikely to be a bottleneck in most applications.

The two main sources of overhead are:

1. The cost of context switching between threads when [delegating to synchronous methods](#delegation-to-sync-methods). This can be addressed by providing a native asynchronous implementation.
2. In [LCEL](/docs/concepts/lcel), any "cheap functions" that appear as part of the chain will be either scheduled as tasks on the event loop (if they are async) or run in a separate thread (if they are sync), rather than just being run inline.

The latency overhead you should expect from these is between tens of microseconds and a few milliseconds.

A more common source of performance issues arises from users accidentally blocking the event loop by calling synchronous code in an async context (e.g., calling `invoke` rather than `ainvoke`).

## Compatibility

LangChain is only compatible with the `asyncio` library, which is distributed as part of the Python standard library. It will not work with other async libraries like `trio` or `curio`.

In Python 3.9 and 3.10, [asyncio's tasks](https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task) did not accept a `context` parameter. Due to this limitation, LangChain cannot automatically propagate the `RunnableConfig` down the call chain in certain scenarios.

If you are experiencing issues with streaming, callbacks, or tracing in async code and are using Python 3.9 or 3.10, this is a likely cause.

Please read [Propagating RunnableConfig](/docs/concepts/runnables#propagation-RunnableConfig) for more details on how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11, where this is no longer an issue).

## How to use in IPython and Jupyter notebooks

As of IPython 7.0, IPython supports asynchronous REPLs. This means that you can use the `await` keyword in the IPython REPL and Jupyter Notebooks without any additional setup. For more information, see the [IPython blog post](https://blog.jupyter.org/ipython-7-0-async-repl-a35ce050f7f7).
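The delegation strategy described in the "Delegation to sync methods" section can be sketched with plain `asyncio`. This is an illustrative stand-alone example, not LangChain's actual implementation: the `Component` class and its `invoke`/`ainvoke` methods are hypothetical stand-ins.

```python
# Sketch of the default-delegation idea: when no native async implementation
# exists, run the blocking sync method in the event loop's default thread
# pool so the loop itself is never blocked.
import asyncio
import time


class Component:
    def invoke(self, x: int) -> int:
        time.sleep(0.01)  # stands in for blocking I/O (e.g., an HTTP call)
        return x * 2

    async def ainvoke(self, x: int) -> int:
        # Delegate to the sync method in a worker thread, which is what
        # asyncio.loop.run_in_executor(None, ...) does with the default
        # (lazily initialized) thread pool executor.
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(None, self.invoke, x)


async def main() -> list[int]:
    c = Component()
    # Because each sync call runs in its own thread, the two calls overlap
    # instead of running back-to-back.
    return list(await asyncio.gather(c.ainvoke(1), c.ainvoke(2)))


print(asyncio.run(main()))  # prints [2, 4]
```

The thread hand-off is the "slight overhead" the section mentions; a native async implementation (e.g., using `httpx.AsyncClient`) avoids it.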

docs/docs/concepts/callbacks.mdx

Lines changed: 73 additions & 0 deletions
@@ -0,0 +1,73 @@
# Callbacks

:::note Prerequisites
- [Runnable interface](/docs/concepts/#runnable-interface)
:::

LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.

You can subscribe to these events by using the `callbacks` argument available throughout the API. This argument is a list of handler objects, which are expected to implement one or more of the methods described below in more detail.

## Callback events

| Event            | Event Trigger                                | Associated Method     |
|------------------|----------------------------------------------|-----------------------|
| Chat model start | When a chat model starts                     | `on_chat_model_start` |
| LLM start        | When an LLM starts                           | `on_llm_start`        |
| LLM new token    | When an LLM or chat model emits a new token  | `on_llm_new_token`    |
| LLM ends         | When an LLM or chat model ends               | `on_llm_end`          |
| LLM errors       | When an LLM or chat model errors             | `on_llm_error`        |
| Chain start      | When a chain starts running                  | `on_chain_start`      |
| Chain end        | When a chain ends                            | `on_chain_end`        |
| Chain error      | When a chain errors                          | `on_chain_error`      |
| Tool start       | When a tool starts running                   | `on_tool_start`       |
| Tool end         | When a tool ends                             | `on_tool_end`         |
| Tool error       | When a tool errors                           | `on_tool_error`       |
| Agent action     | When an agent takes an action                | `on_agent_action`     |
| Agent finish     | When an agent ends                           | `on_agent_finish`     |
| Retriever start  | When a retriever starts                      | `on_retriever_start`  |
| Retriever end    | When a retriever ends                        | `on_retriever_end`    |
| Retriever error  | When a retriever errors                      | `on_retriever_error`  |
| Text             | When arbitrary text is run                   | `on_text`             |
| Retry            | When a retry event is run                    | `on_retry`            |

## Callback handlers

Callback handlers can either be `sync` or `async`:

* Sync callback handlers implement the [BaseCallbackHandler](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) interface.
* Async callback handlers implement the [AsyncCallbackHandler](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) interface.

At run time, LangChain configures an appropriate callback manager (e.g., [CallbackManager](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.manager.CallbackManager.html) or [AsyncCallbackManager](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.manager.AsyncCallbackManager.html)), which is responsible for calling the appropriate method on each "registered" callback handler when the event is triggered.

## Passing callbacks

The `callbacks` property is available on most objects throughout the API (Models, Tools, Agents, etc.) in two different places:

- **Request time callbacks**: Passed at the time of the request in addition to the input data. Available on all standard `Runnable` objects. These callbacks are INHERITED by all children of the object they are defined on. For example, `chain.invoke({"number": 25}, {"callbacks": [handler]})`.
- **Constructor callbacks**: `chain = TheNameOfSomeChain(callbacks=[handler])`. These callbacks are passed as arguments to the constructor of the object. The callbacks are scoped only to the object they are defined on, and are **not** inherited by any children of the object.

:::warning
Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children of the object.
:::

If you're creating a custom chain or runnable, you need to remember to propagate request time callbacks to any child objects.

:::important Async in Python&lt;=3.10

Any `RunnableLambda`, `RunnableGenerator`, or `Tool` that invokes other runnables and is running `async` in Python&lt;=3.10 will have to propagate callbacks to child objects manually. This is because LangChain cannot automatically propagate callbacks to child objects in this case.

This is a common reason why you may fail to see events being emitted from custom runnables or tools.
:::

For specifics on how to use callbacks, see the [relevant how-to guides here](/docs/how_to/#callbacks).
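The handler/manager relationship described above can be sketched in plain Python. This is a simplified illustration of the dispatch pattern, not the real `langchain_core.callbacks` implementation; the class and event names below are stand-ins.

```python
# Sketch of the callback dispatch pattern: a manager fans each event out to
# every registered handler that implements the corresponding on_* method.
# Handlers may implement only the subset of events they care about.
class LoggingHandler:
    """A toy handler that records the events it receives."""

    def __init__(self):
        self.events = []

    def on_llm_start(self, payload):
        self.events.append(("llm_start", payload))

    def on_llm_end(self, payload):
        self.events.append(("llm_end", payload))


class SimpleCallbackManager:
    """Calls the matching on_<event> method on each registered handler."""

    def __init__(self, handlers):
        self.handlers = handlers

    def emit(self, event, payload):
        for handler in self.handlers:
            method = getattr(handler, f"on_{event}", None)
            if method is not None:  # skip handlers without this hook
                method(payload)


handler = LoggingHandler()
manager = SimpleCallbackManager([handler])
manager.emit("llm_start", "Hello")
manager.emit("llm_end", "Hi there!")
print(handler.events)  # prints [('llm_start', 'Hello'), ('llm_end', 'Hi there!')]
```

In real LangChain code you would subclass `BaseCallbackHandler` (or `AsyncCallbackHandler`) and pass the handler via the `callbacks` argument; the framework's callback manager performs the fan-out for you.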

docs/docs/concepts/chat_history.mdx

Lines changed: 46 additions & 0 deletions
@@ -0,0 +1,46 @@
# Chat history

:::info Prerequisites

- [Messages](/docs/concepts/messages)
- [Chat models](/docs/concepts/chat_models)
- [Tool calling](/docs/concepts/tool_calling)
:::

Chat history is a record of the conversation between the user and the chat model. It is used to maintain context and state throughout the conversation. The chat history is a sequence of [messages](/docs/concepts/messages), each of which is associated with a specific [role](/docs/concepts/messages#role), such as "user", "assistant", "system", or "tool".

## Conversation patterns

![Conversation patterns](/img/conversation_patterns.png)

Most conversations start with a **system message** that sets the context for the conversation. This is followed by a **user message** containing the user's input, and then an **assistant message** containing the model's response.

The **assistant** may respond directly to the user or, if configured with tools, request that a [tool](/docs/concepts/tool_calling) be invoked to perform a specific task.

So a full conversation often involves a combination of two patterns of alternating messages:

1. The **user** and the **assistant** representing a back-and-forth conversation.
2. The **assistant** and **tool messages** representing an ["agentic" workflow](/docs/concepts/agents) where the assistant is invoking tools to perform specific tasks.

## Managing chat history

Since chat models have a maximum limit on input size, it's important to manage chat history and trim it as needed to avoid exceeding the [context window](/docs/concepts/chat_models#context_window).

While processing chat history, it's essential to preserve a correct conversation structure.

Key guidelines for managing chat history:

- The conversation should follow one of these structures:
  - The first message is either a "user" message or a "system" message, followed by a "user" and then an "assistant" message.
  - The last message should be either a "user" message or a "tool" message containing the result of a tool call.
- When using [tool calling](/docs/concepts/tool_calling), a "tool" message should only follow an "assistant" message that requested the tool invocation.

:::tip
Understanding correct conversation structure is essential for being able to properly implement
[memory](https://langchain-ai.github.io/langgraph/concepts/memory/) in chat models.
:::

## Related resources

- [How to trim messages](https://python.langchain.com/docs/how_to/trim_messages/)
- [Memory guide](https://langchain-ai.github.io/langgraph/concepts/memory/) for information on implementing short-term and long-term memory in chat models using [LangGraph](https://langchain-ai.github.io/langgraph/).
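The trimming guidelines above can be sketched with plain Python lists of role/content dicts. This is an illustrative sketch only (the `trim_history` helper is hypothetical); for production use, see the "How to trim messages" guide linked above.

```python
# Sketch of structure-preserving trimming: keep the system message, drop the
# oldest turns first, and never let the trimmed window start with an orphaned
# "assistant" or "tool" message.
def trim_history(messages: list[dict], max_messages: int) -> list[dict]:
    system = [m for m in messages if m["role"] == "system"][:1]
    rest = [m for m in messages if m["role"] != "system"]
    budget = max_messages - len(system)
    trimmed = rest[-budget:] if budget > 0 else []
    # Drop leading messages until the window starts with a "user" turn,
    # so no assistant reply or tool result is left without its request.
    while trimmed and trimmed[0]["role"] != "user":
        trimmed = trimmed[1:]
    return system + trimmed


history = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "4"},
]
print([m["role"] for m in trim_history(history, 3)])
# prints ['system', 'user', 'assistant']
```

A real implementation would budget by tokens rather than message count, but the structural invariants it must preserve are the same.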
