
AI Integration: How to Integrate LLMs into Your Products

Learn practical ways to integrate Large Language Models (LLMs) into your products.

10 min read
By codebiy Team
AI · LLM · OpenAI · Machine Learning

AI integration has become a key way for modern applications to gain a competitive advantage. In this article, you'll learn step by step how to integrate Large Language Models (LLMs) into your products.

LLM Providers

OpenAI GPT

OpenAI is the most widely used LLM provider. Its API gives you access to models such as GPT-4 and GPT-3.5:

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function generateText(prompt: string) {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: prompt }],
  });
  return completion.choices[0].message.content;
}

Anthropic Claude

Claude is Anthropic's family of LLMs, built with a strong focus on safety and reliability:

import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

async function chatWithClaude(prompt: string) {
  const message = await anthropic.messages.create({
    model: 'claude-3-opus-20240229',
    max_tokens: 1024,
    messages: [{ role: 'user', content: prompt }],
  });
  // The response content is a list of blocks; return the text of the first one
  const block = message.content[0];
  return block.type === 'text' ? block.text : '';
}

Integration Strategies

1. Creating an API Wrapper

Create a unified API wrapper for different providers:

interface LLMProvider {
  generate(prompt: string): Promise<string>;
}

class OpenAIProvider implements LLMProvider {
  async generate(prompt: string): Promise<string> {
    const completion = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: prompt }],
    });
    return completion.choices[0].message.content ?? '';
  }
}

class AnthropicProvider implements LLMProvider {
  async generate(prompt: string): Promise<string> {
    const message = await anthropic.messages.create({
      model: 'claude-3-opus-20240229',
      max_tokens: 1024,
      messages: [{ role: 'user', content: prompt }],
    });
    const block = message.content[0];
    return block.type === 'text' ? block.text : '';
  }
}

class LLMService {
  constructor(private provider: LLMProvider) {}

  async generate(prompt: string): Promise<string> {
    return this.provider.generate(prompt);
  }
}
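With this abstraction in place, switching providers becomes a one-line change. A minimal usage sketch:

const llm = new LLMService(new OpenAIProvider());
// Swap in Anthropic without touching any calling code:
// const llm = new LLMService(new AnthropicProvider());

const answer = await llm.generate('Summarize the benefits of response caching.');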

2. Streaming Responses

Use streaming for long responses so users see output as it is generated:

async function* streamResponse(prompt: string) {
  const stream = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: prompt }],
    stream: true,
  });
  
  for await (const chunk of stream) {
    yield chunk.choices[0]?.delta?.content || '';
  }
}
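On the consuming side, you iterate over the generator as chunks arrive. A minimal sketch that writes the streamed text to stdout in a Node.js script:

for await (const chunk of streamResponse('Explain event loops briefly')) {
  process.stdout.write(chunk);
}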

3. Error Handling

Implement robust error handling around every API call:

async function safeGenerate(prompt: string) {
  try {
    return await generateText(prompt);
  } catch (error) {
    if (error instanceof OpenAI.APIError) {
      // Handle API errors
      console.error('OpenAI API Error:', error.message);
    }
    throw error;
  }
}
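For transient failures such as rate limits, retrying with exponential backoff is a common extension. A sketch of one possible approach (the retry counts and delays here are illustrative choices, not part of any SDK):

async function generateWithRetry(prompt: string, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await generateText(prompt);
    } catch (error) {
      const retryable =
        error instanceof OpenAI.APIError &&
        (error.status === 429 || (error.status ?? 0) >= 500);
      if (!retryable || attempt === maxRetries) throw error;
      // Exponential backoff: wait 1s, 2s, 4s, ... between attempts
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
  throw new Error('Retry attempts exhausted');
}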

Use Cases

1. Content Generation

async function generateBlogPost(topic: string) {
  const prompt = `Write a blog post about ${topic}`;
  return await generateText(prompt);
}

2. Code Generation

async function generateCode(description: string) {
  const prompt = `Generate TypeScript code for: ${description}`;
  return await generateText(prompt);
}

3. Chat Interface

interface Message {
  role: 'user' | 'assistant';
  content: string;
}

async function chat(messages: Message[]) {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: messages,
  });
  return completion.choices[0].message.content;
}
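A chat interface typically keeps the running history and appends each assistant reply before the next turn. A minimal sketch building on the chat function above:

const history: Message[] = [];

async function sendUserMessage(text: string) {
  history.push({ role: 'user', content: text });
  const reply = (await chat(history)) ?? '';
  history.push({ role: 'assistant', content: reply });
  return reply;
}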

Cost Optimization

1. Caching

Cache responses to frequently repeated prompts:

import { LRUCache } from 'lru-cache';

const cache = new LRUCache<string, string>({ max: 100 });

async function cachedGenerate(prompt: string) {
  const cached = cache.get(prompt);
  if (cached) return cached;

  const result = (await generateText(prompt)) ?? '';
  cache.set(prompt, result);
  return result;
}
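If stale answers are a concern, lru-cache also supports time-based expiry via its ttl option. A sketch, assuming lru-cache v7 or later (the one-hour value is an illustrative choice):

const cacheWithTtl = new LRUCache<string, string>({
  max: 100,
  ttl: 1000 * 60 * 60, // entries expire after one hour
});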

2. Token Management

Optimize token usage:

function countTokens(text: string): number {
  // Rough approximation: ~4 characters per token for English text
  return Math.ceil(text.length / 4);
}

function truncatePrompt(prompt: string, maxTokens: number): string {
  const tokens = countTokens(prompt);
  if (tokens <= maxTokens) return prompt;

  // Truncate to fit within the token limit, using the ~4 chars/token estimate
  return prompt.slice(0, maxTokens * 4);
}
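The 4-characters-per-token heuristic is rough. For exact counts you can use a tokenizer library; a sketch assuming the js-tiktoken package (an extra dependency, not used elsewhere in this article):

import { encodingForModel } from 'js-tiktoken';

const encoding = encodingForModel('gpt-4');

function exactTokenCount(text: string): number {
  return encoding.encode(text).length;
}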

Security Best Practices

1. API Key Management

Store API keys in environment variables:

const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  throw new Error('OPENAI_API_KEY is not set');
}
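If you depend on several providers, it can help to validate all required variables once at startup. A small sketch (requireEnv is a hypothetical helper name, not from any SDK):

function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set`);
  }
  return value;
}

const openaiKey = requireEnv('OPENAI_API_KEY');
const anthropicKey = requireEnv('ANTHROPIC_API_KEY');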

2. Input Validation

Validate user inputs:

function validatePrompt(prompt: string): void {
  if (prompt.trim().length === 0) {
    throw new Error('Prompt is empty');
  }

  if (prompt.length > 10000) {
    throw new Error('Prompt too long');
  }

  // Add more validation rules as needed (e.g., disallowed content, injection patterns)
}

Conclusion

LLM integration is a powerful capability for modern products. With the patterns covered in this guide (provider abstraction, streaming, error handling, caching, and key management), you're ready to start building AI-powered features.
