Deploy on-premise S3 storage using QNAP NAS, MinIO and Traefik

Marcelo Ochoa
Published in ITNEXT · Jan 16, 2021 · 6 min read

Why?

Any popular Cloud provider such as Oracle, Amazon or Azure provides S3-compatible storage, so the idea is to deploy an on-premise S3-compatible storage for development, testing or archiving as an interim step toward a full Cloud migration.

Deployment Diagram

Let's start with a deployment diagram of our on-premise Docker Swarm cluster:

Cluster Deployment Diagram

We have a cluster of 6 nodes plus a QNAP TS-831X, which works as another node of the cluster but with a different architecture (ARM Cortex-A15 CPU).

# docker node ls
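
For reference, joining the QNAP NAS to an existing Swarm follows the standard worker-join procedure; a minimal sketch, where the manager IP 10.254.0.1 and the token are placeholders:

# On any manager node, print the join command for workers:
docker swarm join-token worker

# On the QNAP NAS (over SSH), run the command it printed, e.g.:
docker swarm join --token SWMTKN-1-<token> 10.254.0.1:2377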

Persistent volume storage is implemented using either GlusterFS or NFS v4 backed by the QNAP NAS.
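
To illustrate the NFS v4 side, a volume backed by a NAS export can be created with Docker's local driver; the IP and export path below are hypothetical:

# Create a volume that mounts an NFS v4 export from the QNAP NAS
# (adjust the address and export path to your NAS):
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=10.1.253.50,rw,nfsvers=4 \
  --opt device=:/share/CACHEDEV1_DATA/homes/nfs \
  nfs-data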

QNAP Container Station

QNAP offers a great browser-based tool for administering Docker containers, named Container Station, as we can see below:

QNAP NAS Container Station

But by joining the QNAP NAS to our Docker Swarm cluster we get additional benefits:

  • Administering Docker using Portainer together with the rest of our stacks and containers
Portainer.IO App Swarm Cluster Graphical View
  • Monitoring using Grafana/Prometheus
Grafana/Prometheus interface
  • And finally, command-line access to the QNAP NAS using ssh
# ssh admin@10.254.0.158

Deploying our S3 Compatible Storage

As we mentioned above, the idea is to use MinIO Object Storage as our on-premise S3 backend. Once the QNAP NAS is joined to the Docker Swarm cluster and fully integrated with it, starting a MinIO server is quite easy, but let's look at two different options:

  • Deploy as a Swarm stack, choosing target node NAS-DC for the MinIO server
  • Deploy as an independent container inside the QNAP NAS

The first option is simple: just drop a docker-compose.yml into Portainer and deploy the stack including MinIO, exposing the UI using Traefik. The drawback here is that all I/O to the S3 storage would be routed through Traefik, and it is not possible to use the interconnect interface available on the NAS hardware; this QNAP model comes with two network interfaces by default, and we use one of them connected to a private rack backbone which links all nodes of the swarm. Also, for an unknown reason, if MinIO is deployed as part of a stack, the Docker container doesn't expose MinIO port 9000 properly.

The second option allows MinIO to receive traffic on both network interfaces of the QNAP NAS, which is great for splitting traffic:

  • Public interface (traffic to the MinIO Web UI, exposed using the Caddy reverse proxy and Traefik)
  • Private interconnect (traffic from Docker Swarm tasks using the s3fs volume plugin)

In both solutions SSL encryption is off-loaded from the MinIO server, leaving this traffic to Traefik using Let's Encrypt certs.

Deploying MinIO on the QNAP NAS

After logging into the QNAP NAS over SSH, create a simple shell script to start the MinIO server:

[~] # cat start-minio.sh
docker run -d \
--restart=always \
--health-cmd='curl -f http://localhost:9000/minio/health/live' \
--health-interval=2s \
--name minio-server \
-p 9000:9000/tcp \
-e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
-v /share/CACHEDEV1_DATA/homes/s3:/data \
minio/minio:RELEASE.2020-11-25T22-36-25Z \
server /data

Notes:

  • -p 9000:9000/tcp will expose the MinIO server on both network interfaces (port 9000 tcp)
  • MINIO_ACCESS_KEY and MINIO_SECRET_KEY must be changed for production
  • /share/CACHEDEV1_DATA/homes/s3 is a local directory on the QNAP NAS; look for one with enough space for your data or create a new one
  • minio/minio:RELEASE.2020-11-25T22-36-25Z is the Docker image available on Docker Hub compiled for ARM
  • --restart=always will ensure that MinIO is always on and survives NAS restarts
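
Once the container is up, it is worth confirming that the health check passes. A quick sanity check from the NAS shell, probing the same endpoint used in --health-cmd:

# Ask Docker for the status reported by the health check:
docker inspect --format '{{.State.Health.Status}}' minio-server

# Probe the MinIO liveness endpoint directly:
curl -f http://localhost:9000/minio/health/live && echo "MinIO is alive"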

Exposing MinIO Web UI

As mentioned above, the MinIO Web UI will be exposed to the internet or the internal network using Caddy and Traefik; let's see a simple stack for doing that:

version: '3.6'
services:
  server:
    image: caddy:alpine
    command: caddy reverse-proxy --from :80 --to 10.254.0.158:9000
    networks:
      - lb_network
    deploy:
      mode: replicated
      placement:
        constraints:
          - node.hostname != NAS-DC
      labels:
        - traefik.enable=true
        - traefik.docker.network=lb_network
        - traefik.constraint-label=traefik-public
        - traefik.http.routers.minio.rule=Host(`s3.mydomain.com`)
        - traefik.http.routers.minio.entrypoints=http
        - traefik.http.services.minio.loadbalancer.server.port=80
networks:
  lb_network:
    external: true

Notes:

  • 10.254.0.158 is the IP of the QNAP NAS connected to the public network
  • caddy:alpine is the Docker image for the Caddy reverse proxy
  • s3.mydomain.com is the external name exposed by Traefik
  • lb_network is the Docker Swarm network shared by the Traefik stack and Caddy
  • node.hostname != NAS-DC ensures that Caddy runs on a host other than the QNAP NAS
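
Deploying and smoke-testing this stack can be done from any manager node; the stack name minio-ui below is arbitrary, and 10.254.0.10 stands for any node where Traefik listens:

# Deploy the Caddy reverse-proxy stack:
docker stack deploy -c docker-compose.yml minio-ui

# Verify that Traefik routes the hostname through Caddy to MinIO:
curl -H 'Host: s3.mydomain.com' http://10.254.0.10/minio/health/live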

MinIO Web UI will look like:

MinIO Web UI

Note that I am using plain HTTP access because this Traefik instance doesn't have Let's Encrypt certs configured.

Using S3 storage from within Docker Stacks

As I mentioned earlier, to access the MinIO S3 object storage an instance of the s3fs volume plugin will be installed; here are the steps:

# docker plugin install --alias s3fs  mochoa/s3fs-volume-plugin --grant-all-permissions --disable
# docker plugin set s3fs AWSACCESSKEYID="AKIAIOSFODNN7EXAMPLE"
# docker plugin set s3fs AWSSECRETACCESSKEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
# docker plugin set s3fs DEFAULT_S3FSOPTS="use_path_request_style,url=http://10.1.253.50:9000/"
# docker plugin enable s3fs

Notes:

  • mochoa/s3fs-volume-plugin is a Docker image available on Docker Hub
  • AWSACCESSKEYID and AWSSECRETACCESSKEY must be equal to the values above
  • DEFAULT_S3FSOPTS holds the information to connect to the QNAP NAS using the private interface.
    By using the private interface, traffic from Docker tasks using this plugin is routed through the second interface, keeping it separate from the traffic to the MinIO Web UI
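
Before creating volumes, it is worth checking that the plugin is enabled. Also remember that a bucket must already exist in MinIO before it can be mounted; it can be created from the MinIO Web UI or with the MinIO client mc, as in this sketch (the alias name nas is illustrative):

# The s3fs alias should be listed with ENABLED set to true:
docker plugin ls

# Register the MinIO endpoint under an alias, using the credentials from above:
mc alias set nas http://10.1.253.50:9000 AKIAIOSFODNN7EXAMPLE wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# Create the bucket used in the tests below:
mc mb nas/test-bucket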

Sample test using Docker container

root@iguana:~# docker volume create -d s3fs test-bucket
test-bucket
root@iguana:~# docker run -ti --rm -v test-bucket:/mnt ubuntu bash
root@0db09c90684b:/# ls -l /mnt
total 1
drwxr-x--- 1 root root 0 Jan 1 1970 test-dir
root@0db09c90684b:/# ls -l /mnt/test-dir/
total 9
-rw-r----- 1 root root 8626 Jan 15 23:43 README.md

Sample Docker stack

version: '3.7'
services:
  test:
    image: ubuntu
    command: tail -f /dev/null
    deploy:
      mode: replicated
      placement:
        constraints:
          - node.hostname == iguana
    volumes:
      - data:/mnt
volumes:
  data:
    driver: s3fs
    name: "test-bucket"

Logging into the Docker container started by the above stack:

root@iguana:~# docker exec -ti minio_test.1.zbun9oeqsrvsn1rrfy2knh0j9 bash
root@2aec797db597:/# ls -l /mnt/
total 1
drwxr-x--- 1 root root 0 Jan 1 1970 test-dir
root@2aec797db597:/# ls -l /mnt/test-dir/
total 9
-rw-r----- 1 root root 8626 Jan 15 23:43 README.md
root@2aec797db597:/# dd if=/dev/zero of=/mnt/test.img bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 20.868 s, 52.8 MB/s
root@2aec797db597:/# dd if=/mnt/test.img of=/dev/null
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 13.4096 s, 84.9 MB/s

As you can see it is fast enough, but NFS v4 is almost twice as fast for writing (102 > 52.8 MB/s) and 34% faster for reading (114 > 84.9 MB/s) with the same disk, same interface and same physical server hosting the Swarm task; maybe I have to take a deeper look at the s3fs parameters to get better values.
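
For that tuning, s3fs-fuse exposes several throughput-related options such as parallel_count, multipart_size and use_cache; a hypothetical starting point, with values to experiment with rather than tested recommendations:

# The plugin must be disabled before its settings can be changed:
docker plugin disable s3fs
docker plugin set s3fs DEFAULT_S3FSOPTS="use_path_request_style,url=http://10.1.253.50:9000/,parallel_count=15,multipart_size=64,use_cache=/tmp"
docker plugin enable s3fs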

Anyway, MinIO deployed on the QNAP NAS as S3-compatible storage for our on-premise deployment is just what we need to start developing, testing and moving Cloud-native apps into production.
