ChatBedrockConverse
Amazon Bedrock is a fully managed service that makes Foundation Models (FMs) from leading AI startups and Amazon available via an API, and its Converse API provides a consistent way to call chat models. You can choose from a wide range of FMs to find the model that is best suited for your use case.
Setup
You'll need to install the @langchain/aws package:
- npm: npm install @langchain/aws
- Yarn: yarn add @langchain/aws
- pnpm: pnpm add @langchain/aws
Usage
We're unifying model params across all packages. We now suggest using model instead of modelName, and apiKey for API keys.
import { ChatBedrockConverse } from "@langchain/aws";
import { HumanMessage } from "@langchain/core/messages";
const model = new ChatBedrockConverse({
model: "anthropic.claude-3-sonnet-20240229-v1:0",
region: "us-east-1",
credentials: {
accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
},
});
const res = await model.invoke([
new HumanMessage({ content: "Tell me a joke" }),
]);
console.log(res);
/*
AIMessage {
content: "Here's a joke for you:\n" +
'\n' +
"Why can't a bicycle stand up by itself? Because it's two-tired!",
response_metadata: { ... },
id: '08afa4fb-c212-4c1e-853a-d854972bec78',
usage_metadata: { input_tokens: 11, output_tokens: 28, total_tokens: 39 }
}
*/
const stream = await model.stream([
new HumanMessage({ content: "Tell me a joke" }),
]);
for await (const chunk of stream) {
console.log(chunk.content);
}
/*
Here
's
a
silly
joke
for
you
:
Why
di
d the
tom
ato
turn
re
d?
Because
it
saw
the
sal
a
d
dressing
!
*/
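If you need the complete response after streaming finishes, you can aggregate the chunks as they arrive. This is a minimal sketch, reusing the model instance from above; it relies on AIMessageChunk instances being mergeable via their concat method:

import { AIMessageChunk } from "@langchain/core/messages";

const chunkStream = await model.stream([
  new HumanMessage({ content: "Tell me a joke" }),
]);

let finalChunk: AIMessageChunk | undefined;
for await (const chunk of chunkStream) {
  // Merge each incoming chunk into the running aggregate.
  finalChunk = finalChunk === undefined ? chunk : finalChunk.concat(chunk);
}

console.log(finalChunk?.content);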
API Reference:
- ChatBedrockConverse from @langchain/aws
- HumanMessage from @langchain/core/messages
See the LangSmith traces for the above example here, and here for streaming.
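If AWS credentials are already available in your environment (for example via environment variables, shared config, or an attached IAM role), you can also omit the explicit credentials object; the underlying AWS SDK client should then fall back to its default credential provider chain. A minimal sketch under that assumption:

import { ChatBedrockConverse } from "@langchain/aws";

// Assumes credentials are discoverable via the AWS SDK's default
// provider chain (environment variables, shared config, or an IAM role).
const modelWithDefaultCreds = new ChatBedrockConverse({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
});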
Multimodal inputs
Multimodal inputs are currently only supported by Anthropic Claude-3 models. These models, hosted on Bedrock, have multimodal capabilities and can reason about images. Here's an example:
import * as fs from "node:fs/promises";
import { ChatBedrockConverse } from "@langchain/aws";
import { HumanMessage } from "@langchain/core/messages";
const model = new ChatBedrockConverse({
model: "anthropic.claude-3-sonnet-20240229-v1:0",
region: "us-east-1",
credentials: {
accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
},
});
const imageData = await fs.readFile("./hotdog.jpg");
const res = await model.invoke([
new HumanMessage({
content: [
{
type: "text",
text: "What's in this image?",
},
{
type: "image_url",
image_url: {
url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
},
},
],
}),
]);
console.log(res);
/*
AIMessage {
content: 'The image shows a hot dog or frankfurter. It has a reddish-pink sausage inside a light tan-colored bread bun. The hot dog bun is split open, allowing the sausage filling to be visible. The image appears to be focused solely on depicting this classic American fast food item against a plain white background.',
response_metadata: { ... },
id: '1608d043-575a-450e-8eac-2fef6297cfe2',
usage_metadata: { input_tokens: 276, output_tokens: 75, total_tokens: 351 }
}
*/
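If the image lives at a remote URL rather than on disk, you can fetch the bytes and base64-encode them before passing them in the same image_url format. A quick sketch, assuming Node 18+ with a global fetch; the URL is a placeholder:

// Placeholder URL for illustration only.
const response = await fetch("https://example.com/hotdog.jpg");
const imageBuffer = Buffer.from(await response.arrayBuffer());

const remoteImageRes = await model.invoke([
  new HumanMessage({
    content: [
      { type: "text", text: "What's in this image?" },
      {
        type: "image_url",
        image_url: {
          url: `data:image/jpeg;base64,${imageBuffer.toString("base64")}`,
        },
      },
    ],
  }),
]);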
API Reference:
- ChatBedrockConverse from @langchain/aws
- HumanMessage from @langchain/core/messages
See the LangSmith trace here.
Tool calling
The examples below demonstrate how to use tool calling, along with the withStructuredOutput method, to easily compose structured output LLM calls.
import { ChatBedrockConverse } from "@langchain/aws";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
const model = new ChatBedrockConverse({
model: "anthropic.claude-3-sonnet-20240229-v1:0",
region: "us-east-1",
credentials: {
accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
},
});
const weatherTool = tool(
({ city, state }) => `The weather in ${city}, ${state} is 72°F and sunny`,
{
name: "weather_tool",
description: "Get the weather for a city",
schema: z.object({
city: z.string().describe("The city to get the weather for"),
state: z.string().describe("The state to get the weather for").optional(),
}),
}
);
const modelWithTools = model.bindTools([weatherTool]);
// Optionally, you can bind tools via the `.bind` method:
// const modelWithTools = model.bind({
// tools: [weatherTool]
// });
const res = await modelWithTools.invoke("What's the weather in New York?");
console.log(res);
/*
AIMessage {
content: [
{
type: 'text',
text: "Okay, let's get the weather for New York City."
}
],
response_metadata: { ... },
id: '49a97da0-e971-4d7f-9f04-2495e068c15e',
tool_calls: [
{
id: 'tooluse_O6Q1Ghm7SmKA9mn2ZKmBzg',
name: 'weather_tool',
args: {
'city': 'New York',
},
},
],
usage_metadata: { input_tokens: 289, output_tokens: 68, total_tokens: 357 }
}
*/
API Reference:
- ChatBedrockConverse from @langchain/aws
- tool from @langchain/core/tools
Check out the output of this tool call! We can see the model using chain-of-thought, describing in plain text what it's going to do before calling the tool: "Okay, let's get the weather for New York City."
See the LangSmith trace here.
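To actually act on the tool call, you can run the requested tool and pass the result back to the model so it can produce a final answer. A minimal sketch reusing model, weatherTool, modelWithTools, and res from above; it assumes a recent @langchain/core, where invoking a tool with the full tool call returns a ready-made ToolMessage:

import { HumanMessage } from "@langchain/core/messages";

const toolCall = res.tool_calls?.[0];
if (toolCall) {
  // Execute the tool with the arguments the model requested.
  // Passing the full tool call returns a ToolMessage.
  const toolMessage = await weatherTool.invoke(toolCall);
  // Send the original question, the model's tool call, and the tool
  // result back so the model can produce a final answer.
  const finalRes = await modelWithTools.invoke([
    new HumanMessage("What's the weather in New York?"),
    res,
    toolMessage,
  ]);
  console.log(finalRes.content);
}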
.withStructuredOutput({ ... })
Using the .withStructuredOutput method, you can easily make the LLM return structured output, given only a Zod or JSON schema:
import { ChatBedrockConverse } from "@langchain/aws";
import { z } from "zod";
const model = new ChatBedrockConverse({
model: "anthropic.claude-3-sonnet-20240229-v1:0",
region: "us-east-1",
credentials: {
accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
},
});
const weatherSchema = z
.object({
city: z.string().describe("The city to get the weather for"),
state: z.string().describe("The state to get the weather for").optional(),
})
.describe("Get the weather for a city");
const modelWithStructuredOutput = model.withStructuredOutput(weatherSchema, {
name: "weather_tool", // Optional, defaults to 'extract'
});
const res = await modelWithStructuredOutput.invoke(
"What's the weather in New York?"
);
console.log(res);
/*
{ city: 'New York', state: 'NY' }
*/
API Reference:
- ChatBedrockConverse from @langchain/aws
See the LangSmith trace here.
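withStructuredOutput also accepts an includeRaw option if you want the parsed object alongside the underlying AIMessage (for example, to inspect usage metadata). A short sketch reusing the model and schema above:

const modelWithRawOutput = model.withStructuredOutput(weatherSchema, {
  name: "weather_tool",
  includeRaw: true,
});

const rawAndParsed = await modelWithRawOutput.invoke(
  "What's the weather in New York?"
);

// `parsed` holds the structured object, `raw` the full AIMessage.
console.log(rawAndParsed.parsed);
console.log(rawAndParsed.raw);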
Tool result status
This feature is only available in @langchain/aws version 0.0.2 and above.
The Bedrock Converse tool calling API allows for passing a status field in the tool result. This can be either "success" or "error", indicating the status of the tool call.
This can be helpful when building complex tool calling agents/graphs, where you want the agent/graph to handle errors gracefully.
With LangChain, you can pass the status field to the Bedrock Converse API via the raw_output field on ToolMessage.
Below is an example showing how this can work in practice:
import { ChatBedrockConverse } from "@langchain/aws";
import {
AIMessage,
HumanMessage,
SystemMessage,
ToolMessage,
} from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
const model = new ChatBedrockConverse({
model: "anthropic.claude-3-sonnet-20240229-v1:0",
region: "us-east-1",
credentials: {
accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
},
});
// Define two tools: the `weather_tool`, which has already been called
// and returned an error, and the `error_handler_tool`, which will be
// provided to the model to handle that error.
const weatherTool = tool(
(_) => {
return ""; // no-op, we won't actually invoke the tools in this example
},
{
name: "weather_tool",
description: "Fetches the weather for a given location.",
schema: z.object({
location: z.string().describe("The location to fetch the weather for."),
}),
}
);
const errorHandlerTool = tool(
(_) => {
return ""; // no-op, we won't actually invoke the tools in this example
},
{
name: "error_handler_tool",
description: "A tool which handles any errors in the conversation.",
schema: z.object({
errorMessage: z.string().describe("The error message to handle."),
}),
}
);
// Define an array of messages to simulate a conversation history.
// Ensure the `ToolMessage` has a status of "error" to indicate to
// the model that an error occurred.
const messageHistory = [
new SystemMessage(`You are a helpful AI agent.`),
new HumanMessage("What's the weather like in New York, NY?"),
new AIMessage({
content: "",
tool_calls: [
{
name: "weather_tool",
args: {
location: "New York, NY",
},
id: "tool_call_1",
},
],
}),
new ToolMessage({
content: "An error occurred while trying to fetch the weather.",
tool_call_id: "tool_call_1",
raw_output: {
status: "error",
},
}),
];
// Bind both tools to the model.
const modelWithTools = model.bindTools([weatherTool, errorHandlerTool]);
const res = await modelWithTools.invoke(messageHistory);
console.log(JSON.stringify(res, null, 2));
/*
{
"content": [
{
"type": "text",
"text": "It seems there was an issue fetching the weather for New York, NY. Let me try handling the error:"
}
],
"tool_calls": [
{
"id": "tooluse__pIOAIE6QUy8g6gAo_YyqA",
"name": "error_handler_tool",
"args": {
"errorMessage": "An error occurred while trying to fetch the weather."
}
}
],
"response_metadata": { ... },
"usage_metadata": { ... },
"id": "53eb9f1c-b874-4c9a-a476-d1c77c0bea77",
}
*/
API Reference:
- ChatBedrockConverse from @langchain/aws
- AIMessage from @langchain/core/messages
- HumanMessage from @langchain/core/messages
- SystemMessage from @langchain/core/messages
- ToolMessage from @langchain/core/messages
- tool from @langchain/core/tools
We can see the model handled the error exactly as we intended: it called the error_handler_tool after receiving a tool result with a status of "error"!
See the LangSmith trace here.