MTA — Modernize Traditional Apps with Docker, case study 2: ZoneMinder cluster

Continuing the saga of MTA experiences, this post is about how to adapt the ZoneMinder software.

ZoneMinder is a full-featured, open-source, state-of-the-art video surveillance software system which we use in many buildings of the University. The next image shows how ZoneMinder can be deployed in a multi-server setup, which is close to the idea of clustering:

ZoneMinder multi-server deployment

As we saw in a previous post, applications that were not designed to work in a Docker Swarm cluster need to be modified; not a lot, just some tweaks. I forked the ZoneMinder project with a set of modified scripts and configuration files in order to deploy it on a Swarm cluster. To build the ZoneMinder Docker image, simply clone the repo above and run docker build:

root@localhost:~# git clone
root@localhost:~# cd docker-zoneminder/
root@localhost:~/docker-zoneminder# docker build -t "quantumobject/docker-zoneminder:1.31.1" -f Dockerfile .

Another tweak for this deployment is a load balancer or front-end proxy which captures HTTP/HTTPS traffic from outside and routes it to the ZoneMinder services. We use a modified version of dockercloud/haproxy (we will discuss later why we need a modified version). To build the load balancer image, run:

root@localhost:~# git clone
root@localhost:~# cd dockercloud-haproxy/
root@localhost:~/dockercloud-haproxy# docker build -t "dockercloud/haproxy:" -f Dockerfile .

Once we have the above images ready in our local Docker repository, we can deploy the ZoneMinder stack using this sample configuration:
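A minimal sketch of such a stack, reconstructed from the remarks below — the service names, mounted paths, and the VIRTUAL_HOST routing variable (read by dockercloud/haproxy) are assumptions where the text does not state them explicitly:

```yaml
version: "3.4"
services:
  db:
    image: mysql/mysql-server:5.7
    networks:
      - net
    volumes:
      - /home/data/zm/mysql:/var/lib/mysql
      - ./conf/mysql/my.cnf:/etc/my.cnf       # sets max_connections = 500
  web:
    image: quantumobject/docker-zoneminder:1.31.1
    networks:
      - net
    environment:
      - VIRTUAL_HOST=zm.localhost             # routing hint read by the haproxy image
    deploy:
      replicas: 1
  stream:
    image: quantumobject/docker-zoneminder:1.31.1
    hostname: "node.{{.Task.Slot}}"
    networks:
      - net
    environment:
      - ZM_SERVER_HOST=node.{{.Task.Slot}}
      - VIRTUAL_HOST=stream{{.Task.Slot}}.localhost  # slot templating is provided by the fork
    deploy:
      replicas: 3
  lb:
    image: dockercloud/haproxy:latest         # the modified fork built above
    networks:
      - net
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  net:
```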


Some points to note about the above stack:

  • there is a network named net used to connect the ZoneMinder services to the MySQL DB running in a separate container
  • the MySQL container uses the default mysql/mysql-server:5.7 image from the Docker Store, with a minor change for backward compatibility defined in conf/mysql/my.cnf:
    max_connections = 500
  • the service named web works as the front-end for the web application, running with 1 replica and accessible at the URL http://zm.localhost
  • the service named stream works as a ZoneMinder server (video processing, storage, motion detection and streaming); instances are accessible at the URL stream{{.Task.Slot}}.localhost, where {{.Task.Slot}} takes the values 1..3 because there are three replicas, so the final public URLs are http://stream1.localhost, http://stream2.localhost and so on. This functionality is not provided by the official dockercloud/haproxy but is implemented in my fork
  • ZM_SERVER_HOST=node.{{.Task.Slot}} is also a modification to the ZoneMinder shell script, which writes the environment variable value into the file
    /etc/zm/conf.d/02-multiserver.conf
  • finally, the service named lb is the load balancer using the image built above; it exposes port 80 outside the Docker Swarm network and connects to the internal ZoneMinder network net. This service must run on nodes with the manager role so it can listen on /var/run/docker.sock and detect stream and web service startup events
  • all persistent data is kept in local sub-directories such as /home/data/zm/backups or /home/data/zm/mysql, but in a production environment they are mapped to external directories on NAS storage using the NFS protocol
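For the NFS mapping mentioned above, one option is Docker's built-in local volume driver with NFS mount options; a hedged sketch, where the NAS address and export path are placeholders:

```yaml
volumes:
  backups:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nas.example.org,rw,nfsvers=4"
      device: ":/export/zm/backups"
```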

Finally, in our installation backups are performed by the Swarm scheduler. This follows the best practice in a Docker environment that only one process should run in a container, which means that leaving a cron job running in the background is not a good idea. So we added another extra service to the above stack definition:

image: quantumobject/docker-zoneminder:1.31.1
command: /sbin/backup
networks:
  - net
volumes:
  - backups:/var/backups
deploy:
  mode: replicated
  replicas: 0
  placement:
    constraints:
      - node.labels.interconnect == si
  restart_policy:
    condition: none

Note that replicas is set to 0; the Swarm scheduler will start this service at a specific time, and once the service ends it will be stopped again. The crontab configuration file looks like:

# Backup ZoneMinder MySQL Tuesday 2:35am
35 2 * * 2 root run-task zm_backup
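The run-task helper itself is not shown in the post; a plausible sketch is a thin wrapper that scales the one-shot Swarm service from 0 to 1 replica (the DRY_RUN flag below is an assumption added here so the sketch can be exercised without a Swarm):

```shell
#!/bin/sh
# Hypothetical run-task wrapper: start a one-shot Swarm service by
# scaling it to 1 replica. With restart_policy condition "none" the
# task runs once and is not restarted, so it behaves like a scheduled job.
run_task() {
  cmd="docker service scale $1=1"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"          # dry run: print the command instead of executing it
  else
    $cmd
  fi
}
```

With the crontab line above, cron would then call run_task zm_backup at 2:35 every Tuesday.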

How about healthchecks? We found that sometimes a ZoneMinder capture daemon dies unexpectedly, mainly because a camera is not accessible. To detect that situation we added a healthcheck to the stream service:

healthcheck:
  test: exit $$(ps axo pid=,stat= | awk '$$2~/^Z/ { print }' | grep -c -m 1 Z)
  interval: 55s
  timeout: 3s

In plain English this means: if we find one or more processes in zombie state (Z), we can assume this ZoneMinder server instance is not stable and should be stopped and started again, on the same Docker Swarm node or maybe on another one with more free resources. The next capture shows ZoneMinder in action, deployed using the superb Portainer.io tool.
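The test command can be tried outside Docker by feeding the pipeline some canned ps axo pid=,stat= output; the PIDs and states below are made up, and the doubled $$ in the compose file is just escaping for the single $ used here:

```shell
#!/bin/sh
# Count zombie (state Z) processes in ps-style "pid stat" output.
# grep -c -m 1 stops at the first match, so the result is 0 or 1,
# which the healthcheck then turns into its exit code via `exit`.
zombie_count() {
  printf '%s\n' "$1" | awk '$2~/^Z/ { print }' | grep -c -m 1 Z
}

zombie_count '123 Ss
456 R'     # healthy node: prints 0
zombie_count '123 Ss
789 Z'     # one zombie found: prints 1
```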

Next post will be about Pandora FMS Monitoring tool.
