
AWS S3 Storage

Switching from Cloudinary to AWS S3 for file uploads in StarterKitPro.

Alternative Storage Provider

While StarterKitPro defaults to Cloudinary, you can easily switch to AWS S3 (Simple Storage Service) for file uploads if it better suits your needs or existing infrastructure. This guide details the steps to make the switch.

Switching from Cloudinary to S3 involves installing the AWS SDK, updating environment variables, replacing the storage helper library, and modifying the server actions.

1. Setup

Follow these steps to configure your project for AWS S3.

Package Management

  1. Install AWS SDK: Add the AWS SDK v3 client package for S3 interaction.
    Terminal
    npm install @aws-sdk/client-s3
  2. Remove Cloudinary (Optional): If you are completely switching and no longer need Cloudinary, you can remove its package:
    Terminal
    npm uninstall cloudinary

Environment Variables

  1. Remove Cloudinary Keys: Comment out or delete the Cloudinary variables from your .env.local file.
  2. Add S3 Keys: Add your AWS S3 credentials and bucket details.
.env.local
# --- Remove or comment out these lines ---
# CLOUDINARY_CLOUD_NAME="your_cloud_name"
# CLOUDINARY_API_KEY="your_api_key"
# CLOUDINARY_API_SECRET="your_api_secret"
 
# +++ Add these lines for AWS S3 +++
AWS_ACCESS_KEY_ID="your_aws_access_key_id"
AWS_SECRET_ACCESS_KEY="your_aws_secret_access_key"
AWS_REGION="your_aws_bucket_region" # e.g., us-east-1
AWS_S3_BUCKET_NAME="your_s3_bucket_name"
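Optionally, you can validate these variables at startup so a missing value fails immediately rather than surfacing as a confusing S3 error at request time. A minimal sketch (the `requireEnv` helper is our own suggestion, not part of StarterKitPro):

```typescript
// Hypothetical helper: read a required environment variable or fail fast.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example usage, e.g. near the top of lib/s3.ts:
// const region = requireEnv("AWS_REGION");
// const bucketName = requireEnv("AWS_S3_BUCKET_NAME");
```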

Obtaining AWS Credentials & Bucket Setup

You need an S3 bucket and an IAM user with appropriate permissions:

  1. Create an S3 Bucket:
    • Log in to your AWS Management Console.
    • Navigate to the S3 service.
    • Create a new bucket, choosing a unique name and your desired AWS region. Note these down for your .env.local.
    • Permissions: Configure bucket permissions. For publicly accessible files like avatars, you might need to adjust block public access settings and add a bucket policy granting public read (s3:GetObject) access. Start with private access and adjust as needed.
  2. Create an IAM User:
    • Navigate to the IAM service in the AWS Console.
    • Create a new IAM user (e.g., starterkitpro-s3-user).
    • Attach Policies: Grant this user permissions to interact with your specific S3 bucket. You can start with AmazonS3FullAccess for ease of setup during development, but for production, create a more restrictive custom policy allowing only necessary actions (s3:PutObject, s3:GetObject, s3:DeleteObject) on your specific bucket resource (arn:aws:s3:::your_s3_bucket_name/*).
    • Generate Access Keys: Create an access key for this user. Copy the Access Key ID and Secret Access Key immediately and store them securely. You won't be able to see the Secret Access Key again. Add these to your .env.local.
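For reference, a restrictive custom policy along the lines described above might look like this (substitute your actual bucket name in the `Resource` ARN):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your_s3_bucket_name/*"
    }
  ]
}
```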

AWS Credential Security

Treat your AWS Access Key ID and Secret Access Key like passwords. Never commit them to version control. Ensure .env.local is in your .gitignore. For deployments on AWS, consider using IAM roles for better security.

2. Centralized File Management

StarterKitPro uses a centralized helper module for storage operations. When switching to S3, you need to replace the Cloudinary helper with an S3 equivalent.

  1. Remove Cloudinary Helper: Delete the file lib/cloudinary.ts.
  2. Create S3 Helper: Create a new file lib/s3.ts and add the following code, which provides similar functionality using the AWS SDK:
lib/s3.ts
import "server-only";
import { S3Client, PutObjectCommand, DeleteObjectCommand } from "@aws-sdk/client-s3";
 
// Configure S3 client with environment variables
const s3Client = new S3Client({
  region: process.env.AWS_REGION || "us-east-1",
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID || "",
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY || "",
  },
});
 
const bucketName = process.env.AWS_S3_BUCKET_NAME || "";
const baseUploadPath = "uploads";
 
// Define a type for file URLs
type FileUrl = string;
 
// Extract the object key from an S3 URL.
// (The function name is kept from the Cloudinary helper so callers don't change.)
export function extractPublicId(url: string): string {
  try {
    if (!url) throw new Error("Empty URL provided");
    
    // Parse the URL to extract the key
    const urlObject = new URL(url);
    
    // Remove the bucket name and any leading slashes
    let key = urlObject.pathname.replace(/^\//, "");
    
    // If the URL contains the bucket name in the hostname (virtual-hosted style URL)
    if (urlObject.hostname.includes(bucketName)) {
      // The key is the pathname without leading slash
      return key;
    }
    
    // If the URL is path-style (bucket name is in the path)
    if (key.startsWith(bucketName + "/")) {
      return key.substring(bucketName.length + 1);
    }
    
    return key;
  } catch (error) {
    const message = error instanceof Error ? error.message : "Unknown error";
    throw new Error(`Failed to extract key from S3 URL: ${message}`);
  }
}
 
// Upload a file to S3
export async function uploadToCloud(
  file: File,
  options: {
    contentType?: string;
    folder?: string;
  } = {}
): Promise<FileUrl> {
  try {
    const { contentType = file.type, folder = baseUploadPath } = options;
    
    // Create a unique filename with timestamp and original name
    const timestamp = new Date().getTime();
    const originalName = file.name.replace(/[^a-zA-Z0-9._-]/g, "_");
    const key = `${folder}/${timestamp}-${originalName}`;
    
    // Convert file to buffer
    const buffer = Buffer.from(await file.arrayBuffer());
    
    // Upload to S3
    const command = new PutObjectCommand({
      Bucket: bucketName,
      Key: key,
      Body: buffer,
      ContentType: contentType,
      ACL: "public-read", // remove this line for private objects; requires ACLs to be enabled on the bucket (Object Ownership settings)
    });
    
    await s3Client.send(command);
    
    // Generate the URL for the uploaded file
    const fileUrl = `https://${bucketName}.s3.${process.env.AWS_REGION}.amazonaws.com/${key}`;
    return fileUrl;
  } catch (error) {
    console.error("S3 upload error:", error);
    throw new Error(`Upload failed: ${error instanceof Error ? error.message : "Unknown error"}`);
  }
}
 
// Remove a file from S3 using its URL
export async function removeFromCloud(url: FileUrl): Promise<boolean> {
  try {
    if (!url) {
      throw new Error("No URL provided for deletion");
    }
    
    // Check if this is an S3 URL from our bucket
    if (!url.includes(bucketName)) {
      console.warn("Not an S3 URL from our bucket, skipping deletion:", url);
      return false;
    }
    
    const key = extractPublicId(url);
    
    // Delete from S3
    const command = new DeleteObjectCommand({
      Bucket: bucketName,
      Key: key,
    });
    
    await s3Client.send(command);
    return true;
  } catch (error) {
    console.error("S3 deletion error:", error);
    throw new Error(`Deletion failed: ${error instanceof Error ? error.message : "Unknown error"}`);
  }
}

This lib/s3.ts module now provides the uploadToCloud and removeFromCloud functions, interacting with your configured S3 bucket.
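The key-extraction logic handles both S3 URL styles: virtual-hosted (bucket in the hostname) and path-style (bucket in the path). As a standalone sanity check, here is the same logic with the bucket name hard-coded for illustration:

```typescript
// Standalone sketch of the extraction in lib/s3.ts; the bucket name is a
// hypothetical placeholder instead of being read from the environment.
const bucket = "my-app-uploads";

function keyFromUrl(url: string): string {
  const { hostname, pathname } = new URL(url);
  const path = pathname.replace(/^\//, "");
  // Virtual-hosted style: https://<bucket>.s3.<region>.amazonaws.com/<key>
  if (hostname.includes(bucket)) return path;
  // Path style: https://s3.<region>.amazonaws.com/<bucket>/<key>
  if (path.startsWith(bucket + "/")) return path.slice(bucket.length + 1);
  return path;
}

// Both URL styles resolve to the same key:
keyFromUrl("https://my-app-uploads.s3.us-east-1.amazonaws.com/uploads/1700000000-avatar.png");
// → "uploads/1700000000-avatar.png"
keyFromUrl("https://s3.us-east-1.amazonaws.com/my-app-uploads/uploads/1700000000-avatar.png");
// → "uploads/1700000000-avatar.png"
```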

3. Update Secure Server Actions

The server actions in actions/file-actions.ts orchestrate the file operations and include security checks. You only need to update the import path to use the new S3 helper.

  1. Open actions/file-actions.ts.

  2. Change the import statement:

    actions/file-actions.ts
    // --- Remove this line ---
    // import { uploadToCloud, removeFromCloud } from "@/lib/cloudinary";
     
    // +++ Add this line +++
    import { uploadToCloud, removeFromCloud } from "@/lib/s3";
     
    // ... rest of the file remains the same ...

Because the function names (uploadToCloud, removeFromCloud) and their basic purpose remain the same between lib/cloudinary.ts and lib/s3.ts, no further changes should be needed within the action logic itself, provided the S3 implementation handles parameters and returns URLs as expected.
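This shared contract can be made explicit as a TypeScript interface, which also makes it easy to swap in an in-memory stand-in for unit tests. Everything below is our own illustration, not part of StarterKitPro (the real helpers take a browser `File`; a minimal structural type keeps the sketch self-contained):

```typescript
// Hypothetical sketch of the implicit contract shared by lib/cloudinary.ts
// and lib/s3.ts.
type UploadOptions = { contentType?: string; folder?: string };
type UploadableFile = { name: string; type: string };

interface StorageProvider {
  uploadToCloud(file: UploadableFile, options?: UploadOptions): Promise<string>;
  removeFromCloud(url: string): Promise<boolean>;
}

// An in-memory stand-in honoring the same contract, handy for tests.
const memoryStore = new Map<string, UploadableFile>();

const memoryProvider: StorageProvider = {
  async uploadToCloud(file, options = {}) {
    const key = `${options.folder ?? "uploads"}/${file.name}`;
    memoryStore.set(key, file);
    return `memory://${key}`;
  },
  async removeFromCloud(url) {
    return memoryStore.delete(url.replace("memory://", ""));
  },
};
```

Because the action logic only depends on this shape, the Cloudinary-to-S3 swap reduces to the one-line import change above.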

Security with Auth.js Remains

All file management actions within actions/file-actions.ts continue to be protected using Auth.js, regardless of the storage backend (Cloudinary or S3). Only authenticated users can perform these operations.

By following these steps—installing the AWS SDK, updating environment variables, replacing lib/cloudinary.ts with lib/s3.ts, and modifying the import in actions/file-actions.ts—you have successfully switched StarterKitPro's file storage backend to AWS S3.
