Unified API
One generate() function handles image generation, video creation, audio synthesis, upscaling, and more across all providers.
A unified TypeScript library that gives you access to AI models across fal-ai, Replicate, WaveSpeed, and OpenRouter
through a single generate() call. Zero dependencies. Full type safety.
$ npm install getaiapi
```ts
import { generate, listModels } from 'getaiapi'

// Generate an image
const image = await generate({
  model: 'flux-schnell',
  prompt: 'a cat wearing sunglasses'
})
console.log(image.outputs[0].url)

// Generate a video
const video = await generate({
  model: 'veo3.1',
  prompt: 'a timelapse of a flower blooming'
})

// Discover models by modality
const models = listModels({ input: 'text', output: 'video' })
// => 300+ video generation models
```
Why getaiapi
Stop juggling SDKs. One import, one function, every AI model.
- **Unified API:** one generate() function handles image generation, video creation, audio synthesis, upscaling, and more across all providers.
- **Zero dependencies:** native fetch for all provider communication, so no bloated dependency trees or version conflicts.
- **Type safe:** full TypeScript support with strict type checking and comprehensive type definitions for every model and parameter.
- **Model discovery:** browse, filter, and search models by category, provider, or keyword; only models backed by your valid API keys are shown.
- **Typed errors:** custom error classes like AuthError, ModelNotFoundError, and RateLimitError give you precise control over failure scenarios.
- **Provider agnostic:** seamlessly switch between fal-ai, Replicate, WaveSpeed, and OpenRouter without changing your code. Same input, same output shape.
Quick Start
Install, configure your API keys, and start generating.
```sh
# npm
$ npm install getaiapi

# yarn
$ yarn add getaiapi

# pnpm
$ pnpm add getaiapi
```
```sh
# .env - only set keys for the providers you want to use

# fal-ai (1,201 models)
FAL_KEY="your-fal-key"

# Replicate (687 models)
REPLICATE_API_TOKEN="your-token"

# WaveSpeed (66 models)
WAVESPEED_API_KEY="your-key"

# OpenRouter (10+ LLM models)
OPENROUTER_API_KEY="your-key"
```
```ts
import { generate } from 'getaiapi'

const result = await generate({
  model: 'flux-schnell',
  prompt: 'a sunset over mountains'
})

console.log(result.outputs[0].url)
// => https://...generated-image.png
```
Examples
Images, videos, audio, 3D models, upscaling, background removal — all through generate().
Text Generation (LLM)
Call any LLM through OpenRouter
```ts
const answer = await generate({
  model: 'claude-sonnet-4-6',
  prompt: 'Explain quantum computing'
})

console.log(answer.outputs[0].content)
```
Text to Video
Generate videos from text prompts
```ts
const video = await generate({
  model: 'veo3.1',
  prompt: 'a timelapse of a flower blooming in a garden'
})
```
Image to Video
Animate still images with AI
```ts
const video = await generate({
  model: 'kling-v3-pro',
  image: 'https://example.com/scene.jpg',
  prompt: 'camera slowly pans right',
  options: { duration: '5' }
})
```
Image Editing
Edit images with natural language
```ts
const edited = await generate({
  model: 'gpt-image-1.5-edit',
  image: 'https://example.com/photo.jpg',
  prompt: 'add a rainbow in the sky'
})
```
Text to Speech
Convert text to natural speech
```ts
const speech = await generate({
  model: 'elevenlabs-v3',
  prompt: 'Hello, welcome to our app.',
  options: { voice_id: 'rachel' }
})
```
Upscale Image
Enhance resolution with AI upscaling
```ts
const upscaled = await generate({
  model: 'topaz-upscale-image',
  image: 'https://example.com/low-res.jpg'
})
```
Remove Background
Isolate subjects from any image
```ts
const cutout = await generate({
  model: 'birefnet-v2',
  image: 'https://example.com/portrait.jpg'
})
```
API Reference
Every request follows the same shape. Every response is consistent, regardless of provider.
Input parameters for the generate() function
```ts
type GenerateRequest = {
  model: string                 // model name or alias
  provider?: ProviderName       // preferred provider
  prompt?: string               // text prompt
  image?: string | File         // input image
  images?: (string | File)[]    // multiple refs
  audio?: string | File         // input audio
  video?: string | File         // input video
  negative_prompt?: string
  count?: number                // number of outputs
  size?: string | { width: number; height: number }
  seed?: number                 // reproducibility
  guidance?: number             // guidance scale
  steps?: number                // inference steps
  strength?: number             // denoising strength
  format?: 'png' | 'jpeg' | 'webp' | ...
  quality?: number
  safety?: boolean
  options?: Record<string, unknown>
}
```
Unified output from any provider
```ts
type GenerateResponse = {
  id: string
  model: string
  provider: string
  status: 'completed' | 'failed'
  outputs: OutputItem[]
  metadata: {
    seed?: number
    inference_time_ms?: number
    cost?: number
    safety_flagged?: boolean
    tokens?: number             // total tokens (LLM)
    prompt_tokens?: number      // input tokens
    completion_tokens?: number
  }
}

type OutputItem = {
  type: 'image' | 'video' | 'audio' | 'text' | ...
  url?: string
  content?: string              // text (LLM outputs)
  content_type: string
  size_bytes?: number
}
```
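Because every provider returns this same shape, a single small helper can pull out the payload whether the model produced a media URL or LLM text. A minimal sketch using local, trimmed copies of the types above (not imported from the library):

```typescript
// Local stand-in for getaiapi's OutputItem, trimmed to the fields used here.
type OutputItem = {
  type: 'image' | 'video' | 'audio' | 'text'
  url?: string
  content?: string       // text (LLM outputs)
  content_type: string
}

// Return the first usable payload: text content for LLM output, a URL for media.
function firstOutput(outputs: OutputItem[]): string | undefined {
  const first = outputs[0]
  if (!first) return undefined
  return first.type === 'text' ? first.content : first.url
}
```

With the real library you would call it as `firstOutput(result.outputs)`, regardless of which provider or modality produced the result.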
Model Discovery
Browse, filter, and search across all 1,940+ models. Results are automatically scoped to providers where you have valid keys.
```ts
import { listModels, resolveModel, deriveCategory } from 'getaiapi'

// All available models (filtered by your API keys)
const all = listModels()

// Filter by input/output modality
const videoModels = listModels({ input: 'text', output: 'video' })

// Filter by provider
const falModels = listModels({ provider: 'fal-ai' })

// Search by name
const fluxModels = listModels({ query: 'flux' })

// Resolve a specific model
const model = resolveModel('flux-schnell')
// => { canonical_name, aliases, modality, providers }

// Derive a display label from modality
deriveCategory(model)
// => "text-to-image"
```
Drop-in Replacement
No more memorizing provider endpoints or parsing provider-specific response formats.
```ts
import { fal } from '@fal-ai/client'

const result = await fal.subscribe(
  'fal-ai/kling-video/v3/pro/image-to-video',
  {
    input: {
      image_url: imageUrl,
      prompt: prompt,
      duration: '5',
      cfg_scale: 0.5,
      negative_prompt: 'blurry'
    }
  }
)

// Parse provider-specific response...
const url = result.data?.video?.url
```
```ts
import { generate } from 'getaiapi'

const result = await generate({
  model: 'kling-v3-pro',
  image: imageUrl,
  prompt: prompt,
  options: {
    duration: '5',
    cfg_scale: 0.5,
    negative_prompt: 'blurry'
  }
})

// Unified response - always the same shape
const url = result.outputs[0].url
```
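Since requests and responses are provider-agnostic, falling back to a second provider is just a parameter change. One way that could be wired up, sketched with a stand-in `generateFn` so the example is self-contained (the provider names mirror those above):

```typescript
type Provider = 'fal-ai' | 'replicate' | 'wavespeed' | 'openrouter'
type Req = { model: string; prompt: string }

// Try each provider in order and return the first successful result.
// generateFn stands in for getaiapi's generate() in this sketch.
async function generateWithFallback<T>(
  generateFn: (req: Req & { provider: Provider }) => Promise<T>,
  providers: Provider[],
  req: Req
): Promise<T> {
  let lastErr: unknown = new Error('no providers given')
  for (const provider of providers) {
    try {
      return await generateFn({ ...req, provider })
    } catch (err) {
      lastErr = err // e.g. a provider outage; fall through to the next provider
    }
  }
  throw lastErr
}
```

With the real library you would pass `generate` itself and read `result.outputs` as usual; because the response shape is identical across providers, no downstream code changes.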
Error Handling
All errors extend GetAIApiError with specific classes so you can handle each case precisely.
```ts
import {
  generate,
  AuthError,
  ModelNotFoundError,
  ProviderError,
  RateLimitError
} from 'getaiapi'

try {
  const result = await generate({ model: 'flux-schnell', prompt: 'a cat' })
} catch (err) {
  if (err instanceof AuthError) {
    // err.envVar, err.provider
    console.error(`Set ${err.envVar}`)
  }
  if (err instanceof ModelNotFoundError) {
    // Includes "did you mean?" suggestions
    console.error(err.message)
  }
  if (err instanceof RateLimitError) {
    // HTTP 429 - retry later
  }
}
```
| Error Class | When it's thrown |
|---|---|
| `AuthError` | Missing or invalid API key for the provider |
| `ModelNotFoundError` | Model name could not be resolved (includes suggestions) |
| `ValidationError` | Invalid input parameters |
| `ProviderError` | Provider returned an error response |
| `TimeoutError` | Generation exceeded the timeout |
| `RateLimitError` | Provider returned HTTP 429 |
| `StorageError` | R2 upload, delete, or config failure |
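A rate limit is usually transient, so the `RateLimitError` class pairs naturally with retry logic. A minimal backoff sketch; the `RateLimitError` defined here is a local stand-in for the class exported by getaiapi, so the example runs on its own:

```typescript
// Local stand-in for getaiapi's RateLimitError.
class RateLimitError extends Error {}

// Retry fn on rate limits, doubling the delay before each new attempt.
// Any other error, or exhausting the attempts, rethrows immediately.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn()
    } catch (err) {
      if (!(err instanceof RateLimitError) || i >= attempts - 1) throw err
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i))
    }
  }
}
```

Usage with the real library would look like `await withRetry(() => generate({ model: 'flux-schnell', prompt: 'a cat' }))`.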
Providers
Access the full catalog of models from each provider with your existing API keys.
Modality-First
Models declare their input and output types. No fixed categories — modality is the source of truth.
Built-in Storage
Automatically upload assets to Cloudflare R2 before sending to providers. Supports public and presigned URL modes.
```ts
import { configureStorage } from 'getaiapi'

// Public mode - requires a publicly readable bucket
configureStorage({
  accountId: 'your-account-id',
  bucketName: 'my-assets',
  accessKeyId: 'key-id',
  secretAccessKey: 'secret',
  publicUrlBase: 'https://assets.example.com',
  mode: 'public'
})

// Presigned mode - works with private buckets
configureStorage({
  ...credentials,
  mode: 'presigned' // time-limited signed URLs
})
```
- Binary assets passed to generate() are automatically uploaded to R2 before the provider request, including inside nested objects and arrays.
- Public mode returns permanent public URLs; presigned mode returns time-limited signed URLs, so no public bucket access is needed.
- Auto-uploaded assets use a getaiapi-tmp/ prefix; configure R2 lifecycle rules to auto-expire these ephemeral assets.
- Pass reupload: true in options to re-upload URL strings through R2 before sending them to the provider.
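The getaiapi-tmp/ prefix makes that expiry straightforward to configure. A sketch of a lifecycle rule, assuming you manage the bucket through R2's S3-compatible lifecycle API (field names follow the standard S3 lifecycle schema; the one-day window is just an example):

```json
{
  "Rules": [
    {
      "ID": "expire-getaiapi-tmp",
      "Status": "Enabled",
      "Filter": { "Prefix": "getaiapi-tmp/" },
      "Expiration": { "Days": 1 }
    }
  ]
}
```

Objects matching the prefix are deleted automatically after the configured number of days; permanent assets under other prefixes are untouched.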
Configuration
Configure API keys programmatically or via environment variables. Programmatic keys always take priority.
Set keys and storage together
```ts
import { configure } from 'getaiapi'

configure({
  keys: {
    'fal-ai': 'your-fal-key',
    'replicate': 'your-token',
    'wavespeed': 'your-key',
    'openrouter': 'your-key'
  },
  storage: {
    accountId: '...',
    bucketName: '...',
    // ...R2 credentials
  }
})
```
Configure auth and storage independently
```ts
import { configureAuth, configureStorage } from 'getaiapi'

// Set only API keys
configureAuth({
  'fal-ai': 'your-fal-key',
  'replicate': myKeyVault.get('replicate')
})

// Set only storage (optional)
configureStorage({
  accountId: '...',
  bucketName: '...',
  mode: 'presigned',
  // ...credentials
})
```
Get Started
Install the package, set your API keys, and call generate(). It's that simple.
$ npm install getaiapi