Storage Providers
FORGE provides a unified file storage abstraction that works identically whether files live on the local filesystem or in a cloud object store. Write your upload code once and switch storage backends at any time without changing application code.
StorageProvider Trait
All storage drivers implement the StorageProvider trait:
```rust
#[async_trait]
pub trait StorageProvider: Send + Sync {
    /// Store a file from bytes
    async fn put(&self, path: &str, data: &[u8]) -> Result<StoredFile, StorageError>;

    /// Store a file from a stream (for large uploads)
    async fn put_stream(
        &self,
        path: &str,
        stream: impl AsyncRead + Send + Unpin,
        content_type: Option<&str>,
    ) -> Result<StoredFile, StorageError>;

    /// Retrieve file contents as bytes
    async fn get(&self, path: &str) -> Result<Vec<u8>, StorageError>;

    /// Get the public URL for a stored file
    fn url(&self, path: &str) -> String;

    /// Generate a time-limited signed URL for private files
    async fn temporary_url(
        &self,
        path: &str,
        expires_in: Duration,
    ) -> Result<String, StorageError>;

    /// Delete a file from storage
    async fn delete(&self, path: &str) -> Result<(), StorageError>;

    /// Check whether a file exists
    async fn exists(&self, path: &str) -> Result<bool, StorageError>;
}
```

StoredFile Struct
Every successful upload returns a StoredFile with metadata about the stored object:
```rust
pub struct StoredFile {
    /// Relative path within the storage backend
    pub path: String,
    /// Publicly accessible URL (or base URL + path for local storage)
    pub url: String,
    /// File size in bytes
    pub size: u64,
    /// Detected MIME type (e.g., "image/png", "application/pdf")
    pub mime_type: String,
}
```

Available Drivers
Local Storage (Default)
Stores files on the local filesystem. This is the default driver and requires no additional installation or external service. Ideal for development and single-server deployments.
Configuration:
| Setting | Type | Default | Description |
|---|---|---|---|
| storage_driver | string | "local" | Storage driver identifier |
| local_path | string | "./storage" | Filesystem path for stored files |
| local_url | string | "/storage" | URL prefix for serving files (relative or absolute) |
TIP
For development, the default local storage works without any configuration. Files are saved under the ./storage directory in your project root and served through the built-in static file handler at /storage.
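To illustrate what the local driver does under the hood, a put amounts to creating parent directories, writing the bytes, and joining the local_url prefix onto the path. The sketch below uses a simplified StoredFile (path, url, size only) and is an illustration, not FORGE's actual implementation:

```rust
use std::fs;
use std::path::Path;

/// Simplified mirror of the StoredFile metadata struct (mime_type omitted).
struct StoredFile {
    path: String,
    url: String,
    size: u64,
}

/// Sketch of a local-filesystem `put`: create parent directories,
/// write the bytes, and build the public URL from the `local_url` prefix.
fn local_put(root: &str, url_prefix: &str, path: &str, data: &[u8]) -> std::io::Result<StoredFile> {
    let full = Path::new(root).join(path);
    if let Some(parent) = full.parent() {
        fs::create_dir_all(parent)?;
    }
    fs::write(&full, data)?;
    Ok(StoredFile {
        path: path.to_string(),
        url: format!("{}/{}", url_prefix.trim_end_matches('/'), path),
        size: data.len() as u64,
    })
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("forge-demo");
    let stored = local_put(dir.to_str().unwrap(), "/storage", "uploads/a.txt", b"hello")?;
    println!("{} {} {}", stored.path, stored.url, stored.size);
    Ok(())
}
```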
Directory structure:
```
project-root/
└── storage/
    ├── uploads/
    │   ├── avatars/
    │   │   └── user-123.jpg
    │   └── documents/
    │       └── invoice-456.pdf
    └── media/
        └── images/
            └── hero-banner.webp
```

WARNING
Local storage does not support temporary_url(). Calling this method on the local driver returns a permanent URL instead. If you need time-limited access control, switch to the S3 driver or implement access control at the application level.
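Because the local driver maps storage paths directly onto the filesystem, user-supplied path segments should be validated before they are joined onto local_path, or a crafted path could escape the storage root. A minimal traversal check (a hypothetical helper, not part of the StorageProvider API):

```rust
/// Reject paths that are absolute, contain backslashes, or include
/// `.` / `..` / empty segments, any of which could escape the storage
/// root once joined onto it.
fn is_safe_path(path: &str) -> bool {
    !path.starts_with('/')
        && !path.contains('\\')
        && path.split('/').all(|seg| !seg.is_empty() && seg != "." && seg != "..")
}

fn main() {
    assert!(is_safe_path("uploads/avatars/user-123.jpg"));
    assert!(!is_safe_path("../etc/passwd"));
    assert!(!is_safe_path("/etc/passwd"));
    assert!(!is_safe_path("uploads/../../secret"));
    println!("ok");
}
```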
S3 Storage
Object storage via the Amazon S3 API. This driver works with Amazon S3, DigitalOcean Spaces, MinIO, Backblaze B2, Cloudflare R2, and any S3-compatible service.
Installation:
```bash
forge provider:add storage:s3
```

Configuration:
| Setting | Type | Description |
|---|---|---|
| storage_driver | string | Set to "s3" |
| s3_bucket | string | Bucket name |
| s3_region | string | AWS region (e.g., us-east-1) |
| s3_access_key | encrypted | AWS Access Key ID |
| s3_secret_key | encrypted | AWS Secret Access Key |
| s3_endpoint | string | Custom endpoint URL (for non-AWS S3-compatible services) |
| s3_url | string | Public base URL for stored files (CDN or bucket URL) |
Features:
- Signed URLs -- Generate time-limited URLs for private objects without exposing credentials
- Presigned uploads -- Let clients upload directly to S3 without routing through your server
- CDN support -- Serve files through CloudFront or any CDN by setting s3_url to your distribution domain
- Multipart uploads -- Automatic chunked uploading for large files via put_stream()
S3-compatible services configuration
For non-AWS services, set the s3_endpoint to your provider's endpoint:
| Service | Endpoint Example | Region |
|---|---|---|
| DigitalOcean Spaces | https://nyc3.digitaloceanspaces.com | nyc3 |
| MinIO | http://localhost:9000 | us-east-1 |
| Backblaze B2 | https://s3.us-west-002.backblazeb2.com | us-west-002 |
| Cloudflare R2 | https://<account-id>.r2.cloudflarestorage.com | auto |
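Note that custom endpoints usually imply path-style addressing, where the bucket name appears in the URL path rather than in the hostname (MinIO in particular expects this). The two URL styles can be sketched as follows (illustrative helpers, not FORGE APIs):

```rust
/// Virtual-hosted style, the AWS default:
/// https://{bucket}.s3.{region}.amazonaws.com/{key}
fn virtual_hosted_url(bucket: &str, region: &str, key: &str) -> String {
    format!("https://{bucket}.s3.{region}.amazonaws.com/{key}")
}

/// Path-style, common for MinIO and other custom endpoints:
/// {endpoint}/{bucket}/{key}
fn path_style_url(endpoint: &str, bucket: &str, key: &str) -> String {
    format!("{}/{}/{}", endpoint.trim_end_matches('/'), bucket, key)
}

fn main() {
    println!("{}", virtual_hosted_url("my-app-uploads", "us-east-1", "a.png"));
    println!("{}", path_style_url("http://localhost:9000", "my-app-uploads", "a.png"));
}
```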
Presigned Uploads
For large file uploads, use presigned URLs to let clients upload directly to S3. This avoids routing large files through your API server:
```rust
let presigned = storage
    .temporary_url("uploads/large-video.mp4", Duration::from_secs(3600))
    .await?;
// Return the presigned URL to the client for direct upload
```

Usage
Creating the Provider
The factory creates the correct driver based on your configuration:
```rust
use crate::services::storage::StorageFactory;

let storage = StorageFactory::create(&settings).await?;
```

Uploading a File
```rust
// Upload from bytes
let data = std::fs::read("local-file.pdf")?;
let stored = storage.put("uploads/documents/report.pdf", &data).await?;

println!("Stored at: {}", stored.path);  // uploads/documents/report.pdf
println!("URL: {}", stored.url);         // https://cdn.example.com/uploads/documents/report.pdf
println!("Size: {} bytes", stored.size); // 245760
println!("Type: {}", stored.mime_type);  // application/pdf
```

Uploading from a Stream
For large files, use streaming uploads to avoid loading the entire file into memory:
```rust
use tokio::fs::File;

let file = File::open("large-video.mp4").await?;
let stored = storage.put_stream(
    "uploads/videos/intro.mp4",
    file,
    Some("video/mp4"),
).await?;
```

Retrieving a File
```rust
// Get file contents as bytes
let data = storage.get("uploads/documents/report.pdf").await?;

// Get the public URL
let url = storage.url("uploads/documents/report.pdf");
// => "https://cdn.example.com/uploads/documents/report.pdf"
```

Generating Temporary URLs
Create time-limited signed URLs for private files:
```rust
use std::time::Duration;

let signed_url = storage.temporary_url(
    "uploads/private/contract.pdf",
    Duration::from_secs(3600), // expires in 1 hour
).await?;
// => "https://bucket.s3.amazonaws.com/uploads/private/contract.pdf?X-Amz-Signature=..."
```

WARNING
Temporary URLs require the S3 driver. The local storage driver does not support expiring URLs. If you need access control with local storage, implement it through your application's authorization layer.
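If you do need expiring links on top of local storage, the usual approach is to issue a URL carrying an expiry timestamp plus a signature over the path and timestamp, checked in your handler. The sketch below shows only the expiry bookkeeping; a real implementation must also sign both values (e.g. with HMAC-SHA256) so clients cannot tamper with them:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Build an expiry timestamp (Unix seconds) for a time-limited link.
/// A production version must also sign the path and this timestamp;
/// the signature step is deliberately omitted here.
fn expiry_timestamp(expires_in: Duration) -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs()
        + expires_in.as_secs()
}

/// Check a previously issued expiry timestamp against the current time.
fn is_expired(expires_at: u64) -> bool {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    now > expires_at
}

fn main() {
    let exp = expiry_timestamp(Duration::from_secs(3600));
    assert!(!is_expired(exp));
    assert!(is_expired(1)); // a timestamp in the distant past
    println!("ok");
}
```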
Checking Existence
```rust
if storage.exists("uploads/avatars/user-123.jpg").await? {
    println!("Avatar exists");
} else {
    println!("No avatar found, using default");
}
```

Deleting a File

```rust
storage.delete("uploads/documents/old-report.pdf").await?;
```

Handling File Uploads in Handlers
A typical file upload handler combining the storage provider with the HTTP layer:
```rust
use actix_multipart::Multipart;
use futures_util::StreamExt;

pub async fn upload_avatar(
    mut payload: Multipart,
    storage: web::Data<Box<dyn StorageProvider>>,
    auth: AuthenticatedUser,
) -> Result<HttpResponse, AppError> {
    while let Some(field) = payload.next().await {
        let mut field = field?;
        let content_type = field.content_type().to_string();

        // Validate the file type before buffering the upload,
        // and pick a file extension that matches it
        let extension = match content_type.as_str() {
            "image/jpeg" => "jpg",
            "image/png" => "png",
            "image/webp" => "webp",
            _ => return Err(AppError::validation("Only JPEG, PNG, and WebP images are allowed")),
        };

        // Read the field into memory
        let mut data = Vec::new();
        while let Some(chunk) = field.next().await {
            data.extend_from_slice(&chunk?);
        }

        // Store the file under a per-user path
        let path = format!("uploads/avatars/{}.{}", auth.user_id, extension);
        let stored = storage.put(&path, &data).await?;
        return Ok(HttpResponse::Ok().json(stored));
    }
    Err(AppError::validation("No file provided"))
}
```

Configuration via Admin
Storage settings are managed through the Settings admin page under the Storage group:
Settings > Storage

```
┌──────────────────────────────────────────┐
│ Driver:      [S3 v]                      │
│ Bucket:      [my-app-uploads         ]   │
│ Region:      [us-east-1              ]   │
│ Access Key:  [AKIA...                ]   │
│ Secret Key:  [................       ]   │
│ Endpoint:    [                       ]   │
│ Public URL:  [https://cdn.example.com]   │
└──────────────────────────────────────────┘
```

Error Handling
All storage operations return Result<T, StorageError> with structured error types:
```rust
match storage.put("uploads/file.pdf", &data).await {
    Ok(stored) => println!("Stored: {}", stored.url),
    Err(StorageError::NotFound) => println!("File not found"),
    Err(StorageError::PermissionDenied) => println!("Access denied"),
    Err(StorageError::QuotaExceeded) => println!("Storage limit reached"),
    Err(StorageError::ProviderError(msg)) => println!("Provider error: {}", msg),
    Err(e) => println!("Unexpected error: {}", e),
}
```
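Provider-side failures are often transient (timeouts, throttling) and can be worth retrying, while logical errors such as NotFound should fail fast. A minimal retry wrapper along those lines, using an illustrative mirror of StorageError rather than FORGE's own type:

```rust
#[derive(Debug)]
enum StorageError {
    NotFound,
    ProviderError(String),
}

/// Retry a storage operation up to `max_attempts` times, but only for
/// provider-side errors; logical errors like NotFound fail immediately.
fn with_retry<T>(
    max_attempts: u32,
    mut op: impl FnMut() -> Result<T, StorageError>,
) -> Result<T, StorageError> {
    let mut attempt = 0;
    loop {
        attempt += 1;
        match op() {
            Ok(v) => return Ok(v),
            Err(StorageError::ProviderError(msg)) if attempt < max_attempts => {
                eprintln!("attempt {attempt} failed: {msg}, retrying");
                // In async code you would also back off here (e.g. tokio::time::sleep).
            }
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    let mut calls = 0;
    let result = with_retry(3, || {
        calls += 1;
        if calls < 3 {
            Err(StorageError::ProviderError("timeout".into()))
        } else {
            Ok("stored")
        }
    });
    assert_eq!(result.unwrap(), "stored");
    assert_eq!(calls, 3);
    println!("ok");
}
```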