
OpenAI

Integrating OpenAI for AI-powered features with JSON/Text parsing and credit tracking.

Optional Feature

OpenAI integration is not included by default to keep the core lean. If your project requires AI features, you can add this functionality in minutes.

This guide explains how to integrate OpenAI into your application, enabling features like:

  • Generating plain text responses.
  • Receiving structured JSON responses via function calling.
  • Implementing a basic credits system to track usage.

1. Setup

Follow these steps to prepare your project for OpenAI integration.

Installation

First, install the official OpenAI Node.js library:

Terminal
npm install openai

Environment Variable

Add your OpenAI API key to your environment variables. Create or update your .env.local file:

.env.local
OPENAI_API_KEY="your_openai_api_key_here"

Model Configuration

Define the OpenAI model you want to use within your application configuration. It's recommended to store this in a central configuration file like lib/app-config.ts:

lib/app-config.ts
export const appConfig = {
  // ... other configs
  openai: {
    model: "gpt-4o", // Or your preferred model, e.g., "gpt-4-turbo"
  },
};

2. Create OpenAI Client

Create a reusable client to interact with the OpenAI API. This client handles API calls, response parsing (text or JSON), and basic credit tracking.

Create the file lib/openai/client.ts with the following code:

lib/openai/client.ts
"use server";
 
import OpenAI from "openai";
import { appConfig } from "@/lib/app-config";
import { getCurrentUser } from "@/lib/session";
 
export interface OpenAIResponse {
  data?: any; // For parsed JSON data
  error?: string;
  text?: string; // For plain text response
  tokens?: number; // Tokens used
  credits?: number; // Custom credits cost
}
 
export interface OpenAIRequestOptions {
  prompt: string;
  schema?: any; // Optional schema for JSON output (function calling)
  parser?: "json" | "text"; // Deprecated: Parser is now inferred from schema presence
  temperature?: number;
  credits?: number; // Custom credit cost per request (default: 1)
}
 
/**
 * Log token usage and calculate custom credit cost
 * @param tokensUsed Number of tokens used in the request
 * @param credits Custom credit cost defined per request
 * @returns The calculated credit cost
 */
async function updateCreditUsage(
  tokensUsed: number,
  credits: number
): Promise<number> {
  try {
    const currentUser = await getCurrentUser();
    const userId = currentUser?.id;

    // Log the usage for monitoring
    console.log(
      `OpenAI API Usage: ${tokensUsed} tokens (${credits} credits)` +
        (userId ? ` for user ${userId}` : " for anonymous user")
    );
 
    // TODO: Implement actual credit deduction from user's account
    // This requires a `credits` field on your User model in Prisma
    // Example:
    // if (userId) {
    //   await prisma.user.update({
    //     where: { id: userId },
    //     data: { credits: { decrement: credits } },
    //   });
    // } else {
    //   console.warn("Cannot deduct credits for anonymous user.");
    // }
 
    return credits; // Return the cost defined for the request
  } catch (error: unknown) {
    console.error(
      "Error logging/updating credit usage:",
      error instanceof Error ? error.message : String(error)
    );
    // Still return the intended credit cost even if logging/deduction fails
    return credits;
  }
}
 
class OpenAIClient {
  private openai: OpenAI;
  public model: string;
 
  constructor() {
    if (!process.env.OPENAI_API_KEY || !appConfig.openai.model) {
      console.group("⚠️ OpenAI Configuration Missing");
      console.error(
        "OPENAI_API_KEY in .env and appConfig.openai.model are required."
      );
      console.error(
        "If you don't need OpenAI, remove the code from the /lib/openai folder."
      );
      console.groupEnd();
      // Consider throwing an error or handling this more gracefully depending on requirements
      throw new Error("OpenAI configuration missing.");
    }
    this.openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
    this.model = appConfig.openai.model; // e.g., "gpt-4-turbo"
  }
 
  /**
   * Generate a response from OpenAI, handling JSON or text parsing.
   * @param options Request options including prompt, schema, temperature, credits.
   * @returns A response object with data, text, tokens, credits, or error.
   */
  async generateResponse(
    options: OpenAIRequestOptions
  ): Promise<OpenAIResponse> {
    try {
      const { prompt, schema, temperature = 0.7, credits = 1 } = options;
 
      // Build the request payload
      const requestPayload: OpenAI.Chat.ChatCompletionCreateParams = {
        model: this.model,
        temperature,
        messages: [{ role: "user", content: prompt }],
      };
 
      // Use function calling if schema is provided for structured JSON output
      if (schema) {
        requestPayload.tools = [
          {
            type: "function",
            function: {
              name: schema.name, // e.g., "generatePosts"
              description: schema.description,
              parameters: schema.parameters, // Zod or JSON schema object
            },
          },
        ];
        // Optionally force the model to use the function
        // requestPayload.tool_choice = { type: "function", function: { name: schema.name } };
      }
 
      // Make the API call
      const response =
        await this.openai.chat.completions.create(requestPayload);
 
      // Track token usage and credit cost
      const tokensUsed = response?.usage?.total_tokens || 0;
      const creditCost = await updateCreditUsage(tokensUsed, credits);
 
      // Parse the response (JSON via tool_calls or Text via content)
      let result: OpenAIResponse;
      const toolCall = response.choices?.[0]?.message?.tool_calls?.[0];
 
      if (toolCall && toolCall.function?.arguments) {
        // We requested a function call (schema provided) and got one
        result = this.parseFunctionCallResponse(toolCall.function.arguments);
      } else {
        // No function call requested or received, parse as text
        result = this.parseTextResponse(response);
      }
 
      // Add token and credit information
      result.tokens = tokensUsed;
      result.credits = creditCost;
 
      return result;
    } catch (error: unknown) {
      const errorMessage =
        error instanceof Error ? error.message : String(error);
      console.error("Error generating OpenAI response:", errorMessage);
      // Check for specific OpenAI errors if needed
      // if (error.status === 401) ...
      return { error: `Failed to generate response: ${errorMessage}` };
    }
  }
 
  /**
   * Parse the arguments from a function call response.
   */
  private parseFunctionCallResponse(argsString: string): OpenAIResponse {
    try {
      const parsedData = JSON.parse(argsString);
      return { data: parsedData };
    } catch (error: unknown) {
      console.error(
        "Error parsing function call arguments:",
        error instanceof Error ? error.message : String(error)
      );
      return { error: "Failed to parse structured JSON response from OpenAI" };
    }
  }
 
  /**
   * Parse response content as plain text.
   */
  private parseTextResponse(
    response: OpenAI.Chat.ChatCompletion
  ): OpenAIResponse {
    try {
      const content = response.choices?.[0]?.message?.content;
 
      if (content === null || content === undefined) {
        // Can happen if function calling was attempted but failed, or other issues
        console.warn("No text content found in OpenAI response.");
        return { error: "No content in response" };
      }
 
      return { text: content };
    } catch (error: unknown) {
      console.error(
        "Error parsing text response:",
        error instanceof Error ? error.message : String(error)
      );
      return { error: "Failed to parse text response" };
    }
  }
}
 
// Create a singleton instance to reuse the client
const openaiClient = new OpenAIClient();
 
/**
 * Public function to generate a response using the singleton client.
 * @param options Request options including prompt, schema, etc.
 * @returns A response object with data, text, or error.
 */
export async function generateResponse(
  options: OpenAIRequestOptions
): Promise<OpenAIResponse> {
  // Potentially add pre-request checks here (e.g., user credit balance)
  // const currentUser = await getCurrentUser();
  //  if (!currentUser) {
  //    throw new Error("Unauthorized");
  //  }
  // const hasEnoughCredits = await checkCredits(currentUser.id, options.credits ?? 1);
  // if (!hasEnoughCredits) {
  //   return { error: "Insufficient credits." };
  // }
 
  return openaiClient.generateResponse(options);
}
 
/**
 * Get the OpenAI client instance (useful for specific configurations if needed).
 * Async because files marked "use server" may only export async functions.
 * @returns The OpenAI client instance.
 */
export async function getClient(): Promise<OpenAIClient> {
  return openaiClient;
}

3. Define Response Schemas

To receive structured JSON output from OpenAI, you need to define schemas using the JSON Schema format. OpenAI uses these schemas via its function calling feature.

Create the file lib/openai/openai-schemas.ts to store your schemas:

lib/openai/openai-schemas.ts
// Note: no "use server" directive here — such files may only export
// async functions, and this file exports plain schema objects.

// You can use plain JSON Schema objects or libraries like Zod to define parameters
 
/**
 * Example Schema for generating multiple blog post ideas.
 * Used with OpenAI function calling.
 */
export const postsSchema = {
  name: "generatePosts", // Function name OpenAI will "call"
  description:
    "Generate at least 3 blog post ideas based on the provided topic.",
  parameters: {
    // JSON Schema definition
    type: "object",
    required: ["posts"],
    properties: {
      posts: {
        type: "array",
        description: "List of generated blog post ideas.",
        items: {
          type: "object",
          required: ["title", "summary"],
          properties: {
            title: {
              type: "string",
              description: "Catchy title for the blog post.",
            },
            summary: {
              type: "string",
              description: "A brief summary of the blog post content.",
            },
          },
        },
      },
    },
    additionalProperties: false, // Disallow extra properties
  },
};
 
// Add other schemas here as needed, e.g., for product descriptions, summaries, etc.
// export const productSchema = { ... };
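Function-call arguments come back from `JSON.parse` as untyped data, so it is worth validating the parsed result before using it. Here is a minimal runtime guard matching the `postsSchema` shape above (the `PostIdea` type and `isPostsResult` name are illustrative, not part of the starter kit):

```typescript
// Hypothetical runtime guard for data produced by the postsSchema function call.
interface PostIdea {
  title: string;
  summary: string;
}

export function isPostsResult(data: unknown): data is { posts: PostIdea[] } {
  if (typeof data !== "object" || data === null) return false;
  const posts = (data as { posts?: unknown }).posts;
  return (
    Array.isArray(posts) &&
    posts.every(
      (p) =>
        typeof p === "object" &&
        p !== null &&
        typeof (p as PostIdea).title === "string" &&
        typeof (p as PostIdea).summary === "string"
    )
  );
}
```

Calling this on `response.data` before reading `response.data.posts` turns a malformed model output into a handled error instead of a runtime crash.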

4. Implement Credit System (Optional)

The client includes a basic mechanism to track API usage cost using a custom credits value per request. This allows you to assign different costs to different types of AI operations.

  • Logging: The updateCreditUsage function logs the token count and assigned credit cost to the server console for monitoring purposes.
  • Deduction (Implementation Required): The function includes a placeholder (// TODO:) where you must add your own logic to deduct credits from a user's account in your database (e.g., using Prisma).

Adding Credits Field to User Model

To implement the credit deduction, first add a credits field (or similar) to your User model in prisma/schema.prisma:

prisma/schema.prisma (Add credits field)
model User {
  // ... other fields
  credits        Int       @default(0) // Add this line to track user credits
  // ... other fields
}

Update Database Schema

After modifying prisma/schema.prisma, update your database schema by running:

  • npx prisma db push (for MongoDB)
  • npx prisma migrate dev --name added_user_credits (for PostgreSQL or other relational databases)

Next, uncomment and adapt the database update logic within the updateCreditUsage function in lib/openai/client.ts to connect to your database and decrement the user's credits.
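Before wiring in the database call, the balance check itself can be sketched as a pure helper (the `checkAndDeduct` and `CreditCheck` names are assumptions for illustration; in production the arithmetic would be replaced by a single atomic Prisma `decrement` update):

```typescript
// Hypothetical pure balance check. In production, deduct atomically instead:
// prisma.user.update({ where: { id }, data: { credits: { decrement: cost } } })
interface CreditCheck {
  allowed: boolean;  // whether the request may proceed
  remaining: number; // balance after deduction (unchanged if denied)
}

function checkAndDeduct(balance: number, cost: number): CreditCheck {
  if (!Number.isInteger(cost) || cost < 0) {
    throw new Error("credit cost must be a non-negative integer");
  }
  if (balance < cost) {
    return { allowed: false, remaining: balance };
  }
  return { allowed: true, remaining: balance - cost };
}
```

Doing the real deduction as one atomic `decrement` (rather than read-then-write) avoids race conditions when a user fires several requests concurrently.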

5. Usage Examples

You can use the generateResponse function from lib/openai/client.ts directly within your Server Actions or API Routes to interact with OpenAI.

Credit System Flexibility

The credit system is optional. If you don't pass the credits parameter to generateResponse, it defaults to 1. You can assign different costs for various operations, e.g., generating a title might cost 1 credit, while generating a full blog post might cost 3 credits.
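One convenient convention (purely an assumption, not part of the client) is to centralize per-operation costs in a single map and pass the looked-up value as the `credits` option:

```typescript
// Hypothetical per-operation credit costs; adjust to your own pricing.
export const CREDIT_COSTS = {
  textCompletion: 1, // plain text via generateResponse({ prompt })
  postIdeas: 2,      // structured JSON via postsSchema
  fullPost: 3,       // a longer, more expensive generation
} as const;

export type CreditedOperation = keyof typeof CREDIT_COSTS;

// Look up the cost for an operation, e.g. to pass as the `credits` option.
export function costOf(op: CreditedOperation): number {
  return CREDIT_COSTS[op];
}
```

Keeping costs in one place makes it easy to reprice an operation without hunting down every call site.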

Generating Plain Text Responses

To get a simple text response, call generateResponse with just a prompt:

Text Example (Server Action / API Route)
// ... (imports: generateResponse)
/**
 * Example: Generate simple text completion.
 * Returns plain text in the 'text' property.
 */
export async function generateTextExample(prompt: string, credits: number = 1) {
  console.log(`Generating text for prompt: ${prompt}`);
  const response = await generateResponse({
    prompt, // No schema provided, defaults to text output
    credits, // Credit cost for simple text (optional, defaults to 1)
  });
 
  if (response.error) {
    console.error("Error generating text:", response.error);
    return { error: response.error, text: "" };
  }
  // Access the text content
  const text = response.text || "";
  console.log("Generated Text:", text);
  return { text };
}

Generating Structured JSON Responses

To get a structured JSON response, provide a schema (defined in lib/openai/openai-schemas.ts) along with the prompt:

JSON Example (Server Action / API Route)
// ... (imports: generateResponse, postsSchema from './openai-schemas')
 
/**
 * Example: Generate posts using a JSON schema.
 * Returns structured data in the 'data' property.
 */
export async function generatePostsExample(topic: string, credits: number = 2) {
  console.log(`Generating post ideas for topic: ${topic}`);
  const response = await generateResponse({
    prompt: `Generate blog post ideas about ${topic}.`,
    schema: postsSchema, // Provide the schema for structured JSON output
    credits, // Assign a custom credit cost for this operation
  });
 
  if (response.error) {
    console.error("Error generating posts:", response.error);
    return { error: response.error, posts: [] };
  }
  // Access the structured data (assuming postsSchema structure)
  const posts = response.data?.posts || [];
  console.log("Generated Posts:", posts);
  return { posts };
}

This provides a solid foundation for integrating OpenAI into your SaaS application.
