
# MistralAI

:::tip
Want to run Mistral's models locally? Check out our Ollama integration.
:::

Here's how you can initialize a MistralAI LLM instance:

```bash
yarn add @langchain/mistralai
```

```typescript
import { MistralAI } from "@langchain/mistralai";

const model = new MistralAI({
  model: "codestral-latest", // Defaults to "codestral-latest" if no model provided.
  temperature: 0,
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.MISTRAL_API_KEY
});
const res = await model.invoke(
  "You can print 'hello world' to the console in javascript like this:\n```javascript"
);
console.log(res);
```

````
console.log('hello world');
```
This will output 'hello world' to the console.
````
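As noted in the comment above, the API key can also come from the environment rather than the constructor. Here is a minimal sketch, assuming `MISTRAL_API_KEY` is set before the process starts (the `envModel` name is just for illustration):

```typescript
import { MistralAI } from "@langchain/mistralai";

// No apiKey option here: in Node.js the constructor falls back to
// process.env.MISTRAL_API_KEY, per the comment in the example above.
const envModel = new MistralAI({
  model: "codestral-latest",
  temperature: 0,
});
```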

Since the Mistral LLM is a completions model, it also lets you append a suffix to the prompt. Suffixes can be passed via the call options when invoking a model, like so:

```typescript
const res = await model.invoke(
  "You can print 'hello world' to the console in javascript like this:\n```javascript",
  {
    suffix: "```",
  }
);
console.log(res);
```

````
console.log('hello world');
```
````

As seen in the first example, the model generated the requested console.log('hello world') code snippet, but also included extra unwanted text. By adding a suffix, we can constrain the model to only complete the prompt up to the suffix (in this case, three backticks). This allows us to easily parse the completion and extract only the desired response without the suffix using a custom output parser.

```typescript
import { MistralAI } from "@langchain/mistralai";

const model = new MistralAI({
  model: "codestral-latest",
  temperature: 0,
  apiKey: "YOUR-API-KEY",
});

const suffix = "```";

// Strip the suffix (and anything after it) from the raw completion.
const customOutputParser = (input: string) => {
  if (input.includes(suffix)) {
    return input.split(suffix)[0];
  }
  throw new Error("Input does not contain suffix.");
};

const res = await model.invoke(
  "You can print 'hello world' to the console in javascript like this:\n```javascript",
  {
    suffix,
  }
);

console.log(customOutputParser(res));
```

```
console.log('hello world');
```
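To avoid calling the parser by hand, the model and parser can also be composed into a single runnable. This is a minimal sketch, not from the original example, assuming `@langchain/core` is installed; it uses the generic `.bind()` and `RunnableLambda` composition helpers, and the `chain` name is just for illustration:

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

// Bind the suffix once, then pipe the raw completion through the custom parser.
const chain = model
  .bind({ suffix })
  .pipe(RunnableLambda.from(customOutputParser));

const parsed = await chain.invoke(
  "You can print 'hello world' to the console in javascript like this:\n```javascript"
);
console.log(parsed);
// Expected output: console.log('hello world');
```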
