`src/oss/langchain/agents.mdx` (8 additions, 8 deletions)
```diff
@@ -128,10 +128,10 @@ Model instances give you complete control over configuration. Use them when you
 #### Dynamic model
 
-:::python
-
 Dynamic models are selected at <Tooltip tip="The execution environment of your agent, containing immutable configuration and contextual data that persists throughout the agent's execution (e.g., user IDs, session details, or application-specific configuration).">runtime</Tooltip> based on the current <Tooltip tip="The data that flows through your agent's execution, including messages, custom fields, and any information that needs to be tracked and potentially modified during processing (e.g., user preferences or tool usage stats).">state</Tooltip> and context. This enables sophisticated routing logic and cost optimization.
 
+:::python
+
 To use a dynamic model, you need to provide a function that receives the graph state and runtime and returns an instance of `BaseChatModel` with the tools bound to it using `.bind_tools(tools)`, where `tools` is a subset of the `tools` parameter.
@@ … @@
 **`state`**: The data that flows through your agent's execution, including messages, custom fields, and any information that needs to be tracked and potentially modified during processing (e.g. user preferences or tool usage stats).
 </Info>
-
-Dynamic models are selected at runtime based on the current state and context. This enables sophisticated routing logic and cost optimization.
-
 To use a dynamic model, you need to provide a function that receives the graph state and runtime and returns an instance of `BaseChatModel` with the tools bound to it using `.bindTools(tools)`, where `tools` is a subset of the `tools` parameter.
```
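The routing idea in this hunk (pick a model at runtime from the current state) can be sketched in plain Python. This is a hypothetical illustration only: the function name, model names, and message-count threshold are invented, and the LangChain specifics (returning a `BaseChatModel` with `.bind_tools(tools)` applied) are omitted so the sketch stays self-contained:

```python
def select_model(state: dict) -> str:
    """Choose a model identifier based on the current agent state.

    Illustrative stand-in for a dynamic-model function: a real one would
    receive the graph state and runtime and return a BaseChatModel with
    tools bound via .bind_tools(tools). Here it just returns a name.
    """
    messages = state.get("messages", [])
    # Route long conversations to a cheaper model (cost optimization).
    if len(messages) > 10:
        return "cheap-model"
    # Short conversations can afford the more capable model.
    return "capable-model"
```

The same shape works for any routing rule: inspect `state` (and, in a real agent, the runtime context), then return the appropriate model.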
```diff
@@ -467,7 +462,12 @@ When no `prompt` is provided, the agent will infer its task from the messages di
 #### Dynamic prompts with middleware
 
+:::python
 For more advanced use cases where you need to modify the system prompt based on runtime context or agent state, you can use the `modify_model_request` decorator to create a simple custom middleware.
+:::
+
+:::js
+For more advanced use cases where you need to modify the system prompt based on runtime context or agent state, you can use the `modifyModelRequest` decorator to create a simple custom middleware.
+:::
 
 Dynamic system prompt is especially useful for personalizing prompts based on user roles, conversation context, or other changing factors:
```
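A middleware of this kind ultimately computes a prompt string from state or context. A minimal pure-Python sketch of that logic, with the role names and prompt text invented for illustration (the real mechanism is the `modify_model_request` decorator named in the hunk, whose signature is not shown here):

```python
def dynamic_system_prompt(state: dict, context: dict) -> str:
    """Hypothetical sketch: derive a system prompt from runtime context."""
    base = "You are a helpful assistant."
    # Personalize based on the user's role carried in the runtime context.
    if context.get("user_role") == "admin":
        return base + " Include configuration details when relevant."
    return base + " Keep answers high-level."
```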
`src/oss/langchain/middleware.mdx` (6 additions, 2 deletions)
````diff
@@ -470,11 +470,11 @@ const result = await agent.invoke({ messages: [HumanMessage("What's my name?")]
 ### Dynamic system prompt
 
+:::python
 A system prompt can be dynamically set right before each model invocation using the `@modify_model_request` decorator. This middleware is particularly useful when the prompt depends on the current agent state or runtime context.
 
 For example, you can adjust the system prompt based on the user's expertise level:
 
-:::python
 ```python
 from typing import TypedDict
@@ -513,8 +513,12 @@ result = agent.invoke(
 )
 ```
 :::
-
 :::js
+
+A system prompt can be dynamically set right before each model invocation using the `dynamicSystemPromptMiddleware` middleware. This middleware is particularly useful when the prompt depends on the current agent state or runtime context.
+
+For example, you can adjust the system prompt based on the user's expertise level:
````
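The expertise-based adjustment described in this hunk reduces to a small pure function. Here is a hypothetical sketch of that selection logic (the expertise levels and prompt strings are invented, and the LangChain middleware wiring is omitted; the diff's own full example is truncated in this view):

```python
def prompt_for_expertise(state: dict) -> str:
    """Pick a system prompt for the user's expertise level (illustrative)."""
    level = state.get("user_expertise", "beginner")
    if level == "expert":
        return "Provide detailed technical responses, including code."
    return "Explain concepts simply, avoiding jargon."
```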