Using Oracle Cloud Object Storage as Docker Volume

Oracle Cloud Object Storage provides unlimited, high performance, durable, and secure data storage. Data is uploaded as objects that are stored in buckets.


The Object Storage service can store an unlimited amount of unstructured data of any content type, including analytic data and rich content, like images and videos. Object Storage provides several connectivity options, including a native REST API, along with OpenStack Swift API compatibility, and an HDFS plug-in. Object Storage also offers a Java SDK and Python CLI access for management.

In addition to the options above, this article shows a Docker plugin that exposes Object Storage as regular Docker volumes.

Let's start by introducing the test environment, continuing with my own dev/test cloud environment built on Oracle Always Free instances; see the deployment diagram:

Deployment Diagram using Oracle Cloud Object Storage as Docker Volumes

The diagram above shows a shared storage layer implemented with VM block storage and replicated using GlusterFS; that storage is designed for block-level I/O workloads such as MongoDB or MySQL. There is also a new storage layer for our Docker containers, implemented with Object Storage and designed for storing HTML pages, images, configuration files, or Docker Registry blob objects.

By adding a Docker plugin for Oracle Object Storage, we can also access it from the Oracle Cloud Shell as a regular file system, as shown below.

Implementing an Oracle Object Storage Docker Volume Plugin

Starting from the implementation of the GlusterFS Docker plugin, I wrote a new plugin for Oracle Object Storage using its S3 API compatibility and the s3fs FUSE project.
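Under the hood the plugin drives the s3fs FUSE client. A manual mount against the same endpoint looks roughly like the sketch below; the access key, secret, bucket, and tenant/region values are placeholders you must replace with your own:

```shell
# Store the Customer Secret Key credentials where s3fs expects them
# (format ACCESS_KEY_ID:SECRET_ACCESS_KEY, mode 600). Placeholder values shown.
echo "my-access-key:my-secret-key" > "$HOME/.passwd-s3fs"
chmod 600 "$HOME/.passwd-s3fs"

# Mount the bucket manually (guarded: runs only if the s3fs package is installed).
# tenant-id and region-id are the same values used by the Docker plugin.
if command -v s3fs >/dev/null 2>&1; then
  mkdir -p /tmp/docker-shared-bucket
  s3fs docker-shared-bucket /tmp/docker-shared-bucket \
    -o passwd_file="$HOME/.passwd-s3fs" \
    -o nomultipart,use_path_request_style \
    -o url=https://tenant-id.compat.objectstorage.region-id.oraclecloud.com
fi
```

The Docker plugin wraps exactly this kind of mount per volume, so you never manage the credentials file or mount points by hand.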

This article is not about how to implement a plugin, but if you are curious the code is available at my GitHub repo; comments, bugs, PRs, and suggestions are welcome.

Installing the S3FS plugin to access our Object Storage

Installing and enabling this plugin works like any other Docker volume plugin installation; just execute:

$ docker plugin install --alias s3fs mochoa/s3fs-volume-plugin --grant-all-permissions --disable
$ docker plugin set s3fs AWSACCESSKEYID=key
$ docker plugin set s3fs AWSSECRETACCESSKEY=secret
$ docker plugin set s3fs DEFAULT_S3FSOPTS="nomultipart,use_path_request_style,url=https://[tenant-id].compat.objectstorage.[region-id].oraclecloud.com"
$ docker plugin enable s3fs
$ docker plugin ls
dd00b09fda36 glusterfs:latest GlusterFS plugin for Docker true
558b297906aa s3fs:latest S3FS plugin for Docker true

The values in italic/bold are information that you have to provide to access Oracle Object Storage; let's follow the screenshots below as an example.

First, log into your Oracle Cloud Console and look for the page Identity->Users->User Details->Customer Secret Keys to generate your Customer Secret Key. Another path is using your profile link as shown below, then the Customer Secret Keys link in the left menu:

User Settings access from profile icon

Click Generate Secret Key and a pop-up will ask for a name:

Generate Secret Key Pop-up

Click Generate Secret Key and, using the Copy link, store this value as AWSSECRETACCESSKEY= — — generated-secret — — . Be sure to copy it first, because once you close this pop-up the value is never shown again.

Generate Secret Key confirmation dialog showing AWSSECRETACCESSKEY value

Clicking Close shows the new Secret Key in the list:

List of generated Secret Keys with link to see AWSACCESSKEYID value

Clicking on the blurred area will show the value to store as AWSACCESSKEYID= — — key-id — — .

Docker Volume Plugin s3fs must be installed in all nodes of your Docker Swarm Cluster in order to work with Docker stacks.

Finally, the tenant-id and region are available in the Cloud Console menu Administration->Tenancy Details:

Tenant Details page showing tenant-id and region values

tenant-id is the blurred value named Object Storage Namespace, and region-id is the Home Region.
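Putting the two values together, the S3-compatible endpoint the plugin needs can be composed as shown below. The namespace and region here are made-up example values; substitute your own:

```shell
# Example values only: replace with your Object Storage Namespace and Home Region.
TENANT_ID="axaxnpcrorw5"
REGION_ID="us-ashburn-1"

# Oracle's S3 Compatibility API endpoint pattern.
S3_URL="https://${TENANT_ID}.compat.objectstorage.${REGION_ID}.oraclecloud.com"
echo "$S3_URL"
```

This is the value to pass in `url=` inside DEFAULT_S3FSOPTS when configuring the plugin.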

Testing using a Docker container

Before mounting an Object Storage compartment (bucket), we have to create it, and in my tests also put some objects in it; for example, using the Cloud Console my buckets are:

Object Storage Bucket List

The screenshot above is available in the Cloud Console menu Object Storage. We can create or select one of the buckets in the list; if we select docker-shared-bucket, a list of objects is shown:

Bucket content and upload button

Using the Upload button is enough to upload a simple file for testing.
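If you prefer the command line, the same test upload can be sketched with the OCI CLI. The file name and contents here are arbitrary, and the upload step assumes a configured CLI (it is guarded so the sketch is safe to run without one):

```shell
# Create a small local file to use as test content.
echo "s3fs volume test" > test-object.txt

# Upload it to the bucket (guarded: runs only if the OCI CLI is available
# and configured with credentials for this tenancy).
if command -v oci >/dev/null 2>&1; then
  oci os object put --bucket-name docker-shared-bucket --file test-object.txt
fi
```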

Accessing the docker-shared-bucket storage from within a Docker container is as simple as:

$ docker volume create -d s3fs docker-shared-bucket
$ docker run --rm -it -v docker-shared-bucket:/mnt alpine
/ # ls -l /mnt
total 2
drwxr-xr-x 1 root root 0 Oct 6 12:54 certs
-rw-r--r-- 1 root root 1029 Oct 13 19:10 deploy-hook.log
/ # cat /mnt/deploy-hook.log
Tue Oct 13 18:48:07 UTC 2020

And that's all; it works like any other Docker volume.

If you change the bucket visibility from Private to Public, your bucket can be mounted from Docker containers running outside Oracle Cloud, for example in a development environment, but this setting is not recommended for production.

Using Oracle Cloud Shell

Similar to the previous section, we can work with Oracle Object Storage from within the Oracle Cloud Shell; see this short video demo:

Using Oracle Object Store from Cloud Shell

The script that installs the Docker volume plugin is:

mochoa@cloudshell:~ (us-ashburn-1)$ cat
docker plugin install --alias s3fs mochoa/s3fs-volume-plugin:v2.0.4 --grant-all-permissions --disable
docker plugin set s3fs DEFAULT_S3FSOPTS="nomultipart,use_path_request_style,url=https://[tenant-id].compat.objectstorage.[region-id].oraclecloud.com"
docker plugin set s3fs AWSACCESSKEYID=------
docker plugin set s3fs AWSSECRETACCESSKEY="------"
docker plugin enable s3fs
docker volume create -d s3fs docker-shared-bucket

Using Object Storage in Docker Swarm stacks

Finally, to conclude this article, here is a Docker Swarm stack example using Oracle Cloud Object Storage.

A good candidate for using Object Storage is the Docker Registry stack: its content follows a write-once/read-many pattern, a good fit for a bucket. As I showed previously, I have a bucket named registry; for the new stack there is a modified version of docker-compose-registry.yml.
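A minimal sketch of what that modified stack file could look like is below. The registry image tag, published port, and data path are assumptions, not taken from the original stack file; only the s3fs driver and the registry bucket name come from the article:

```shell
# Write a sketch of docker-compose-registry.yml. Image tag and port are
# assumptions; the volume maps to the pre-existing bucket named "registry".
cat > docker-compose-registry.yml <<'EOF'
version: "3.7"
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    volumes:
      - data:/var/lib/registry
volumes:
  data:
    driver: s3fs
    name: "registry"
EOF
```

The stack would then be deployed as usual with `docker stack deploy -c docker-compose-registry.yml registry` on a Swarm manager.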

The volume named data, instead of using GlusterFS, now uses the s3fs plugin and the bucket named registry. A five-level tree structure of our registry bucket is:

Directory structure stored at registry bucket


A simple performance test of Oracle Cloud Object Storage, run from Cloud Shell and from a free tier VM, shows:

Cloud Shell

root@032f6f481c6a:/mnt# dd if=/dev/zero of=test1.img bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 32.4348 s, 33.1 MB/s
root@032f6f481c6a:/mnt# dd if=test1.img of=/dev/null
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.3321 s, 87.1 MB/s

Free Tier VM

root@8d9e193fb5b3:/mnt# dd if=/dev/zero of=test1.img bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 590.406 s, 1.8 MB/s
root@8d9e193fb5b3:/mnt# dd if=test1.img of=/dev/null
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 25.2497 s, 42.5 MB/s

As we can see, free tier VMs are very constrained in network bandwidth for writing; Cloud Shell performs noticeably better, and a paid tier will surely do better still.
