ModelFusion
The TypeScript library for building AI applications.
Introduction
ModelFusion is an abstraction layer for integrating AI models into JavaScript and TypeScript applications, unifying the API for common operations such as text streaming, object generation, and tool usage. It provides features to support production environments, including observability hooks, logging, and automatic retries. You can use ModelFusion to build AI applications, chatbots, and agents.
- Vendor-neutral: ModelFusion is a community-driven, non-commercial open-source project. You can use it with any supported provider.
- Multi-modal: ModelFusion supports a wide range of models including text generation, image generation, vision, text-to-speech, speech-to-text, and embedding models.
- Type inference and validation: ModelFusion infers TypeScript types wherever possible and validates model responses.
- Observability and logging: ModelFusion provides an observer framework and logging support.
- Resilience and robustness: ModelFusion ensures seamless operation through automatic retries, throttling, and error handling mechanisms.
- Built for production: ModelFusion is fully tree-shakeable, can be used in serverless environments, and only uses a minimal set of dependencies.
Quick Install
```sh
npm install modelfusion
```
Or use a starter template:
Usage Examples
> [!TIP]
> The basic examples are a great way to get started and to explore in parallel with the documentation. You can find them in the examples/basic folder.
You can provide API keys for the different integrations using environment variables (e.g., `OPENAI_API_KEY`) or pass them into the model constructors as options.
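For example, a key can be passed explicitly through the provider's API configuration instead of relying on the environment variable (a sketch, assuming the `openai.Api` configuration helper; the key string is a placeholder):

```ts
import { generateText, openai } from "modelfusion";

const text = await generateText({
  model: openai.CompletionTextGenerator({
    // Assumption: explicit API configuration instead of OPENAI_API_KEY.
    api: openai.Api({ apiKey: "sk-..." }),
    model: "gpt-3.5-turbo-instruct",
  }),
  prompt: "Hello!",
});
```

This is useful when keys are loaded from a secrets manager rather than the process environment.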
Generate text using a language model and a prompt. You can stream the text if the model supports streaming, and you can include images for multi-modal prompting if the model supports it (e.g. with llama.cpp).
You can use prompt styles to use text, instruction, or chat prompts.
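For instance, a chat-style prompt can be used through a prompt-style helper (a sketch, assuming the `.withChatPrompt()` helper and the `{ system, messages }` chat prompt shape):

```ts
import { streamText, openai } from "modelfusion";

const textStream = await streamText({
  model: openai
    .ChatTextGenerator({ model: "gpt-3.5-turbo" })
    // Assumption: maps chat prompts onto the OpenAI chat message format.
    .withChatPrompt(),
  prompt: {
    system: "You are a celebrated poet.",
    messages: [{ role: "user", content: "Write a short poem about robots." }],
  },
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```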
generateText
```ts
import { generateText, openai } from "modelfusion";

const text = await generateText({
  model: openai.CompletionTextGenerator({ model: "gpt-3.5-turbo-instruct" }),
  prompt: "Write a short story about a robot learning to love:\n\n",
});
```
streamText
```ts
import { streamText, openai } from "modelfusion";

const textStream = await streamText({
  model: openai.CompletionTextGenerator({ model: "gpt-3.5-turbo-instruct" }),
  prompt: "Write a short story about a robot learning to love:\n\n",
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```
streamText with multi-modal prompt
Multi-modal vision models such as GPT-4 Vision can process images as part of the prompt.
```ts
import { streamText, openai } from "modelfusion";
import { readFileSync } from "fs";

const image = readFileSync("./image.png");

const textStream = await streamText({
  model: openai
    .ChatTextGenerator({ model: "gpt-4-vision-preview" })
    .withInstructionPrompt(),
  prompt: {
    instruction: [
      { type: "text", text: "Describe the image in detail." },
      { type: "image", image, mimeType: "image/png" },
    ],
  },
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```
Generate typed objects using a language model and a schema.