
ChatAnthropic

LangChain supports Anthropic's Claude family of chat models.

You'll first need to install the @langchain/anthropic package:

npm install @langchain/anthropic

You'll also need to sign up and obtain an Anthropic API key. Set it as an environment variable named ANTHROPIC_API_KEY, or pass it into the constructor as shown below.
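
For example, in a typical shell you can export the key before running your script (a minimal sketch; substitute your own key):

export ANTHROPIC_API_KEY="your-api-key"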

Usage

tip

We're unifying model params across all packages. We now suggest using model instead of modelName, and apiKey for API keys.
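
For example (a minimal sketch; the Anthropic-specific names being phased out are modelName and anthropicApiKey):

// Before: new ChatAnthropic({ modelName: "...", anthropicApiKey: "..." })
// Now:    new ChatAnthropic({ model: "...", apiKey: "..." })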

You can initialize an instance like this:

import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  temperature: 0.9,
  model: "claude-3-sonnet-20240229",
  // In Node.js defaults to process.env.ANTHROPIC_API_KEY,
  // apiKey: "YOUR-API-KEY",
  maxTokens: 1024,
});

const res = await model.invoke("Why is the sky blue?");

console.log(res);

/*
AIMessage {
  content: "The sky appears blue because of how air in Earth's atmosphere interacts with sunlight. As sunlight passes through the atmosphere, light waves get scattered by gas molecules and airborne particles. Blue light waves scatter more easily than other color light waves. Since blue light gets scattered across the sky, we perceive the sky as having a blue color.",
  name: undefined,
  additional_kwargs: {
    id: 'msg_01JuukTnjoXHuzQaPiSVvZQ1',
    type: 'message',
    role: 'assistant',
    model: 'claude-3-sonnet-20240229',
    stop_reason: 'end_turn',
    stop_sequence: null,
    usage: { input_tokens: 15, output_tokens: 70 }
  }
}
*/


Multimodal inputs

Claude 3 models support multimodal image inputs. The image must be passed as a base64-encoded string prefixed with its media type (e.g. data:image/png;base64,{YOUR_BASE64_ENCODED_DATA}). Here's an example:

import * as fs from "node:fs/promises";

import { ChatAnthropic } from "@langchain/anthropic";
import { HumanMessage } from "@langchain/core/messages";

const imageData = await fs.readFile("./hotdog.jpg");
const chat = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});
const message = new HumanMessage({
  content: [
    {
      type: "text",
      text: "What's in this image?",
    },
    {
      type: "image_url",
      image_url: {
        url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
      },
    },
  ],
});

const res = await chat.invoke([message]);
console.log({ res });

/*
{
  res: AIMessage {
    content: 'The image shows a hot dog or frankfurter. It has a reddish-pink sausage filling encased in a light brown bun or bread roll. The hot dog is cut lengthwise, revealing the bright red sausage interior contrasted against the lightly toasted bread exterior. This classic fast food item is depicted in detail against a plain white background.',
    name: undefined,
    additional_kwargs: {
      id: 'msg_0153boCaPL54QDEMQExkVur6',
      type: 'message',
      role: 'assistant',
      model: 'claude-3-sonnet-20240229',
      stop_reason: 'end_turn',
      stop_sequence: null,
      usage: [Object]
    }
  }
}
*/


See the official docs for a complete list of supported file types.

Agents

Anthropic models that support tool calling can be used in the Tool Calling agent. Here's an example:

import { z } from "zod";

import { ChatAnthropic } from "@langchain/anthropic";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";

import { ChatPromptTemplate } from "@langchain/core/prompts";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});

// The prompt template must have "input" and "agent_scratchpad" input variables
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const currentWeatherTool = new DynamicStructuredTool({
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  schema: z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  }),
  func: async () => Promise.resolve("28 °C"),
});

const agent = await createToolCallingAgent({
  llm,
  tools: [currentWeatherTool],
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools: [currentWeatherTool],
});

const input = "What's the weather like in SF?";
const { output } = await agentExecutor.invoke({ input });

console.log(output);

/*
The current weather in San Francisco, CA is 28°C.
*/


tip

See the LangSmith trace here

Custom headers

You can pass custom headers in your requests like this:

import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  maxTokens: 1024,
  clientOptions: {
    defaultHeaders: {
      "X-Api-Key": process.env.ANTHROPIC_API_KEY,
    },
  },
});

const res = await model.invoke("Why is the sky blue?");

console.log(res);

/*
AIMessage {
  content: "The sky appears blue because of the way sunlight interacts with the gases in Earth's atmosphere. Here's a more detailed explanation:\n" +
    '\n' +
    '- Sunlight is made up of different wavelengths of light, including the entire visible spectrum from red to violet.\n' +
    '\n' +
    '- As sunlight passes through the atmosphere, the gases (nitrogen, oxygen, etc.) cause the shorter wavelengths of light, in the blue and violet range, to be scattered more efficiently in different directions.\n' +
    '\n' +
    '- The blue wavelengths of about 475 nanometers get scattered more than the other visible wavelengths by the tiny gas molecules in the atmosphere.\n' +
    '\n' +
    '- This preferential scattering of blue light in all directions by the gas molecules is called Rayleigh scattering.\n' +
    '\n' +
    '- When we look at the sky, we see this scattered blue light from the sun coming at us from all parts of the sky.\n' +
    '\n' +
    "- At sunrise and sunset, the sun's rays have to travel further through the atmosphere before reaching our eyes, causing more of the blue light to be scattered out, leaving more of the red/orange wavelengths visible - which is why sunrises and sunsets appear reddish.\n" +
    '\n' +
    'So in summary, the blueness of the sky is caused by this selective scattering of blue wavelengths of sunlight by the gases in the atmosphere.',
  name: undefined,
  additional_kwargs: {
    id: 'msg_01Mvvc5GvomqbUxP3YaeWXRe',
    type: 'message',
    role: 'assistant',
    model: 'claude-3-sonnet-20240229',
    stop_reason: 'end_turn',
    stop_sequence: null,
    usage: { input_tokens: 13, output_tokens: 284 }
  }
}
*/


Tools

The Anthropic API supports tool calling, including calling multiple tools in a single turn. The following examples demonstrate how to call tools:

Single Tool

import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const tool = {
  name: "calculator",
  description: "A simple calculator tool",
  input_schema: zodToJsonSchema(calculatorSchema),
};

const model = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-3-haiku-20240307",
}).bind({
  tools: [tool],
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(model);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(JSON.stringify(response, null, 2));
/*
{
  "kwargs": {
    "content": "Okay, let's calculate that using the calculator tool:",
    "additional_kwargs": {
      "id": "msg_01YcT1KFV8qH7xG6T6C4EpGq",
      "role": "assistant",
      "model": "claude-3-haiku-20240307",
      "tool_calls": [
        {
          "id": "toolu_01UiqGsTTH45MUveRQfzf7KH",
          "type": "function",
          "function": {
            "arguments": "{\"number1\":2,\"number2\":2,\"operation\":\"add\"}",
            "name": "calculator"
          }
        }
      ]
    },
    "response_metadata": {}
  }
}
*/


tip

See the LangSmith trace here

Forced tool calling

In this example we'll provide the model with two tools:

  • calculator
  • get_weather

Then, when we bind the tools to the model, we'll force it to use the get_weather tool by passing the tool_choice arg like this:

.bind({
  tools,
  tool_choice: {
    type: "tool",
    name: "get_weather",
  },
});

Finally, we'll invoke the model, but instead of asking about the weather, we'll ask it to do some math. Since we explicitly forced the model to use the get_weather tool, it will ignore the input and call that tool anyway (in this case with <UNKNOWN> arguments, which is expected).

import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const weatherSchema = z.object({
  city: z.string().describe("The city to get the weather from"),
  state: z.string().optional().describe("The state to get the weather from"),
});

const tools = [
  {
    name: "calculator",
    description: "A simple calculator tool",
    input_schema: zodToJsonSchema(calculatorSchema),
  },
  {
    name: "get_weather",
    description:
      "Get the weather of a specific location and return the temperature in Celsius.",
    input_schema: zodToJsonSchema(weatherSchema),
  },
];

const model = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-3-haiku-20240307",
}).bind({
  tools,
  tool_choice: {
    type: "tool",
    name: "get_weather",
  },
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(model);

const response = await chain.invoke({
  input: "What is the sum of 2725 and 273639",
});
console.log(JSON.stringify(response, null, 2));
/*
{
  "kwargs": {
    "tool_calls": [
      {
        "name": "get_weather",
        "args": {
          "city": "<UNKNOWN>",
          "state": "<UNKNOWN>"
        },
        "id": "toolu_01MGRNudJvSDrrCZcPa2WrBX"
      }
    ],
    "response_metadata": {
      "id": "msg_01RW3R4ctq7q5g4GJuGMmRPR",
      "model": "claude-3-haiku-20240307",
      "stop_sequence": null,
      "usage": {
        "input_tokens": 672,
        "output_tokens": 52
      },
      "stop_reason": "tool_use"
    }
  }
}
*/


The tool_choice argument has three possible values:

  • { type: "tool", name: "tool_name" } - Forces the model to use the specified tool.
  • "any" - Lets the model choose which tool to use, but forces it to call at least one.
  • "auto" - The default value. Allows the model to select any tool, or none.
tip

See the LangSmith trace here

withStructuredOutput

You can use the withStructuredOutput method to coerce the model into returning an object that matches a given Zod schema:

import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const calculatorSchema = z
  .object({
    operation: z
      .enum(["add", "subtract", "multiply", "divide"])
      .describe("The type of operation to execute."),
    number1: z.number().describe("The first number to operate on."),
    number2: z.number().describe("The second number to operate on."),
  })
  .describe("A simple calculator tool");

const model = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-3-haiku-20240307",
});

// Pass the schema to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorSchema);

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(response);
/*
{ operation: 'add', number1: 2, number2: 2 }
*/

/**
 * You can supply a "name" field to give the LLM additional context
 * around what you are trying to generate. You can also pass
 * 'includeRaw' to get the raw message back from the model too.
 */
const includeRawModel = model.withStructuredOutput(calculatorSchema, {
  name: "calculator",
  includeRaw: true,
});
const includeRawChain = prompt.pipe(includeRawModel);

const includeRawResponse = await includeRawChain.invoke({
  input: "What is 2 + 2?",
});
console.log(JSON.stringify(includeRawResponse, null, 2));
/*
{
  "raw": {
    "kwargs": {
      "content": "Okay, let me use the calculator tool to find the result of 2 + 2:",
      "additional_kwargs": {
        "id": "msg_01HYwRhJoeqwr5LkSCHHks5t",
        "type": "message",
        "role": "assistant",
        "model": "claude-3-haiku-20240307",
        "usage": {
          "input_tokens": 458,
          "output_tokens": 109
        },
        "tool_calls": [
          {
            "id": "toolu_01LDJpdtEQrq6pXSqSgEHErC",
            "type": "function",
            "function": {
              "arguments": "{\"number1\":2,\"number2\":2,\"operation\":\"add\"}",
              "name": "calculator"
            }
          }
        ]
      }
    }
  },
  "parsed": {
    "operation": "add",
    "number1": 2,
    "number2": 2
  }
}
*/


tip

See the LangSmith trace here

