I rely on S3 for central storage. Since some tools do not yet support S3 natively, I use rclone. This article details how I mount S3 storage persistently inside a Docker container (paperless-ngx) using the rclone Docker Volume Plugin, a cleaner approach than traditional host-level mounts.
Part I: Plugin Setup and Secure Credentials
1. Installation of the Volume Plugin
The key to this architecture is the Docker Volume Plugin, which allows Docker to manage the entire mount process, including the underlying FUSE execution, transparently. This is cleaner than managing FUSE mounts via the host’s fstab.
# Install FUSE dependency
apt-get -y install fuse
# Create the necessary config directory for the plugin
mkdir -p /var/lib/docker-plugins/rclone/config
# Install and activate the rclone Docker Volume Plugin
docker plugin install rclone/docker-volume-rclone:amd64 args="-v" --alias rclone --grant-all-permissions
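If the installation succeeded, the plugin should show up as enabled. A quick sanity check (requires a running Docker daemon; the `rclone` alias matches the install command above):

```shell
# List installed plugins; the rclone plugin should appear with ENABLED=true
docker plugin ls
# Or query the enabled flag directly
docker plugin inspect rclone --format '{{.Enabled}}'
```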
2. Rclone Configuration
The rclone.conf must live in a directory the plugin can read; it defines the S3 endpoint and access keys.
# /var/lib/docker-plugins/rclone/config/rclone.conf
[minio]
type = s3
region = somewhere-over-the-rainbow
endpoint = https://your-s3:9000
provider = Minio
env_auth = false
access_key_id = ...
secret_access_key = ...
acl = bucket-owner-full-control
Part II: Volume Architecture and Data Migration
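Before wiring the remote into Compose, it can be sanity-checked with the rclone binary itself. This is an optional step and assumes rclone is installed on the host; point it at the plugin's config file:

```shell
# List buckets visible through the [minio] remote defined above
rclone --config /var/lib/docker-plugins/rclone/config/rclone.conf lsd minio:
```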
1. Modifying docker-compose.yml (The Architecture)
The goal is to replace the old, local media volume with a new named volume (s3) that uses the rclone driver. This requires two critical changes in docker-compose.yml.
1. Define the Volume Driver (End of File): Define the named volume s3 and specify its driver and driver options.
volumes:
data:
#media: <-- COMMENT THIS OUT
pgdata:
redisdata:
s3:
driver: rclone
driver_opts:
remote: "minio:paperless-jean"
allow_other: "true"
vfs_cache_mode: "full" # Critical for consistency
Note: The remote value “minio:paperless-jean” references the configuration section [minio] in rclone.conf and the target bucket name (paperless-jean).
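For experimentation, an equivalent volume can also be created outside Compose with the plain Docker CLI. This is a sketch, assuming the plugin alias `rclone` from Part I; the volume name `s3-test` is illustrative:

```shell
# Create a standalone rclone-backed volume with the same driver options
docker volume create -d rclone \
  -o remote=minio:paperless-jean \
  -o allow_other=true \
  -o vfs_cache_mode=full \
  s3-test
# Inspect the resulting volume definition
docker volume inspect s3-test
```

Compose does the same thing under the hood when it materializes the `s3` volume; the CLI form is just handy for testing the mount in isolation.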
2. Update the Mountpoint: In the webserver service definition, the old local mount (media) must be commented out, and the new s3 volume must be assigned the correct path (/usr/src/paperless/media/documents).
webserver:
image: ghcr.io/paperless-ngx/paperless-ngx:latest
volumes:
- s3:/usr/src/paperless/media/documents
- data:/usr/src/paperless/data
#- media:/usr/src/paperless/media <-- REMOVE THIS LINE
...
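After `docker compose up -d`, it is worth confirming that the FUSE mount is actually visible inside the container before migrating any data. A quick check (assuming the service name `webserver` as above; `df -T` output depends on the image's coreutils):

```shell
# The documents path should report a fuse.rclone filesystem type
docker compose exec webserver df -hT /usr/src/paperless/media/documents
# Write/read round-trip through the mount (test file name is illustrative)
docker compose exec webserver sh -c \
  'touch /usr/src/paperless/media/documents/.mount-test \
   && ls -l /usr/src/paperless/media/documents/.mount-test \
   && rm /usr/src/paperless/media/documents/.mount-test'
```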
2. Initial Data Migration
After updating docker-compose.yml and running docker compose pull / up, the new s3 volume will be created and mounted. Since the old documents are not yet in S3, a migration is required.
The Strategy: Export all existing documents (files plus metadata) from the old local storage, then immediately re-import them so the files land on the new S3-backed volume.
# 1. Export documents and metadata to a temporary export directory
docker compose exec -T webserver document_exporter -d -c ../export
# 2. Re-import; the document files are now written through the s3 volume
docker compose exec -T webserver document_importer ../export
Verification: The log output confirms the successful migration, showing the number of objects copied to the MinIO backend.
Checking the manifest
Installed 1299 object(s) from 1 fixture(s)
Copy files into paperless...
100%|██████████| 1020/1020 [00:56<00:00, 17.96it/s]
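As a final cross-check, the object count and total size in the bucket can be queried directly against the backend (again assuming rclone is installed on the host):

```shell
# Report number of objects and total bytes in the target bucket
rclone --config /var/lib/docker-plugins/rclone/config/rclone.conf \
  size minio:paperless-jean
```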
Sources / See Also
- Rclone Documentation: Mount Options and Usage (VFS cache modes). https://rclone.org/commands/rclone_mount/
- Nextcloud Documentation: External Storage Configuration (S3 as primary storage). https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/external_storage/s3.html
- FUSE Project Documentation: Understanding FUSE filesystems and permissions (allow_other, umask). https://github.com/libfuse/libfuse
- systemd Documentation: Using templates and instances (rclone@.service). https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Templates
- MinIO Documentation: Reference guide for S3 configuration and endpoints. https://min.io/docs/minio/linux/deployment/distributed-deployment/