Storage Backends

Configure object storage for your BFFless deployment.

Overview

BFFless supports multiple storage backends through a pluggable adapter system. All storage operations use a common interface, allowing you to switch backends without code changes.

| Backend | Best For | Pros | Cons |
|---|---|---|---|
| MinIO | Self-hosted production | Full control, S3-compatible, included in Docker | Self-managed |
| Local | Development only | Simple, no setup | Not for production |
| AWS S3 | Cloud deployments | Highly reliable, scalable | Cost, AWS dependency |
| GCS | GCP users | Integrated with GCP | GCP dependency |
| Azure Blob | Azure users | Integrated with Azure | Azure dependency |

MinIO

MinIO is an S3-compatible object storage server included in the default Docker Compose stack.

Default Docker Configuration

MinIO runs automatically when you start the platform:

STORAGE_TYPE=minio
MINIO_ENDPOINT=minio
MINIO_PORT=9000
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=your-secret-key
MINIO_BUCKET=assets
MINIO_USE_SSL=false

MinIO Variables

| Variable | Default | Description |
|---|---|---|
| MINIO_ROOT_USER | minioadmin | MinIO admin username (for the MinIO server) |
| MINIO_ROOT_PASSWORD | changeme | MinIO admin password (for the MinIO server) |
| MINIO_ENDPOINT | minio | MinIO server hostname |
| MINIO_PORT | 9000 | MinIO API port |
| MINIO_ACCESS_KEY | minioadmin | Access key for the backend connection |
| MINIO_SECRET_KEY | changeme | Secret key for the backend connection |
| MINIO_BUCKET | assets | Bucket name for storing assets |
| MINIO_USE_SSL | false | Use HTTPS for MinIO connections |

Accessing MinIO Console

The MinIO web console is available at:

  • Local Docker: http://localhost:9001 (or https://minio.yourdomain.com behind a reverse proxy)
  • Credentials: MINIO_ROOT_USER / MINIO_ROOT_PASSWORD

External MinIO Server

To use an external MinIO server instead of the bundled one:

STORAGE_TYPE=minio
MINIO_ENDPOINT=minio.example.com
MINIO_PORT=443
MINIO_ACCESS_KEY=your-access-key
MINIO_SECRET_KEY=your-secret-key
MINIO_BUCKET=assets
MINIO_USE_SSL=true

Then remove or disable the minio service in docker-compose.yml.
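Before restarting the backend, it can be worth checking that the external server is reachable over TLS. MinIO exposes a liveness endpoint for exactly this (the hostname below matches the example configuration above):

```shell
# Expects HTTP 200 if the server is up and the TLS certificate is valid
curl -fsS https://minio.example.com/minio/health/live && echo "MinIO reachable"
```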


Local Storage (Development Only)

Store files directly on the local filesystem. Simple but not suitable for production.

STORAGE_TYPE=local

Files are stored in apps/backend/uploads/ by default.

Limitations:

  • No redundancy
  • Not scalable
  • Tied to single server
  • Not suitable for container orchestration

AWS S3

Use Amazon S3 for cloud-native deployments.

Configuration

STORAGE_TYPE=s3
S3_REGION=us-east-1
S3_BUCKET=your-bucket-name
S3_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
S3_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
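A quick sanity check with the AWS CLI, using the same example credentials, confirms the keys can list the bucket before you wire them into `.env`:

```shell
# Confirm the credentials can reach the bucket (requires the AWS CLI)
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws s3 ls s3://your-bucket-name --region us-east-1
```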

S3 Variables

| Variable | Description |
|---|---|
| S3_REGION | AWS region (e.g., us-east-1) |
| S3_BUCKET | S3 bucket name |
| S3_ACCESS_KEY_ID | AWS access key ID |
| S3_SECRET_ACCESS_KEY | AWS secret access key |
| S3_ENDPOINT | Custom endpoint (for S3-compatible services) |

IAM Policy

Create an IAM user with this minimal policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-bucket-name/*",
        "arn:aws:s3:::your-bucket-name"
      ]
    }
  ]
}
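If you manage IAM from the CLI, the policy can be attached inline to the user; the user and policy names below are examples, and the policy is assumed to be saved as `policy.json`:

```shell
# Attach the minimal policy inline to an IAM user
aws iam put-user-policy \
  --user-name bffless-storage \
  --policy-name bffless-bucket-access \
  --policy-document file://policy.json
```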

See AWS S3 Setup for detailed configuration.


Google Cloud Storage

Use GCS for deployments on Google Cloud Platform.

Configuration

STORAGE_TYPE=gcs
GCS_PROJECT_ID=your-project-id
GCS_BUCKET=your-bucket-name
GCS_KEYFILE_PATH=/path/to/service-account.json
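With the Google Cloud SDK installed, you can verify that the service account key grants access to the bucket before pointing BFFless at it:

```shell
# Authenticate as the service account, then confirm bucket access
gcloud auth activate-service-account --key-file=/path/to/service-account.json
gsutil ls gs://your-bucket-name
```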

GCS Variables

| Variable | Description |
|---|---|
| GCS_PROJECT_ID | GCP project ID |
| GCS_BUCKET | GCS bucket name |
| GCS_KEYFILE_PATH | Path to service account JSON key file |

See Google Cloud Storage Setup for detailed configuration.


Azure Blob Storage

Use Azure Blob Storage for deployments on Microsoft Azure.

Configuration

STORAGE_TYPE=azure
AZURE_STORAGE_ACCOUNT=youraccount
AZURE_STORAGE_ACCESS_KEY=your-access-key
AZURE_CONTAINER=assets
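With the Azure CLI installed, the same account name, access key, and container from the configuration above can be checked in one call:

```shell
# Confirm the container exists and the access key is valid
az storage container show \
  --account-name youraccount \
  --account-key your-access-key \
  --name assets
```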

Azure Variables

| Variable | Description |
|---|---|
| AZURE_STORAGE_ACCOUNT | Azure storage account name |
| AZURE_STORAGE_ACCESS_KEY | Storage account access key |
| AZURE_CONTAINER | Blob container name |

See Azure Blob Storage Setup for detailed configuration.


Storage Key Format

All storage backends use a consistent key format:

{owner}/{repo}/{commitSha}/{path}

Examples:

  • acme-corp/web-app/abc123/index.html
  • acme-corp/web-app/abc123/css/style.css
  • acme-corp/web-app/abc123/images/logo.png

Benefits:

  • Immutability: SHA-based paths ensure content integrity
  • Organization: Matches GitHub repository structure
  • Migration: Easy to move between storage backends
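Because every key for a deployment shares the `{owner}/{repo}/{commitSha}` prefix, one recursive prefix listing retrieves all of a commit's files. With MinIO, for instance (assuming an `mc` alias named `local` is already configured for your server):

```shell
# List every asset for one commit by its key prefix
mc ls --recursive local/assets/acme-corp/web-app/abc123/
```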

Switching Storage Backends

To switch from one storage backend to another:

  1. Update .env with new storage configuration
  2. Restart the backend:
    docker compose restart backend
  3. Test with a new upload

Important: Existing files are NOT automatically migrated. You'll need to:

  • Keep the old storage accessible, or
  • Manually migrate files using tools like mc (MinIO client), aws s3 sync, or gsutil
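Because all backends share the same key format, a migration is a straight copy. As one sketch using the MinIO client, where `old` and `s3` are `mc` aliases you have configured for the source and destination backends:

```shell
# Copy every object from the old bucket to the new one, preserving keys
mc mirror old/assets s3/assets
```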

See Migration Guide for detailed migration instructions.


Storage Security

Credentials Encryption

Storage credentials (access keys, secret keys) are encrypted in the database using ENCRYPTION_KEY. This ensures credentials are protected even if the database is compromised.

Best Practices

  1. Use unique credentials for each environment
  2. Rotate keys periodically
  3. Enable bucket versioning for recovery
  4. Set appropriate bucket policies - avoid public access
  5. Use IAM roles when possible (AWS, GCP, Azure)
  6. Enable access logging for audit trails
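For point 3, bucket versioning on S3 can be enabled with a single CLI call (the bucket name is a placeholder):

```shell
# Keep prior versions of overwritten or deleted objects for recovery
aws s3api put-bucket-versioning \
  --bucket your-bucket-name \
  --versioning-configuration Status=Enabled
```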

Troubleshooting

"Bucket not found"

The bucket doesn't exist. Create it manually:

# MinIO Console
# Visit http://localhost:9001 and create the bucket

# Or using mc
mc mb local/assets

"Access denied"

Check credentials match between .env and storage provider:

# Verify MinIO credentials
docker compose exec minio mc admin info local

"Connection refused"

Storage service isn't running or endpoint is wrong:

# Check MinIO status
docker compose ps minio
docker compose logs minio

# Verify endpoint
curl http://localhost:9000/minio/health/live

Storage credentials invalid after restart

If you changed ENCRYPTION_KEY after initial setup, existing encrypted credentials are invalid. You'll need to reconfigure storage through the setup wizard.