Thursday, February 12, 2026

Dock(er) Wok

Docker is a simple way to get the benefits of virtualization without running full virtual machines.  I took some time to figure out how to run a replicated Postgres setup in Docker.

Note that you need at least three servers for this to have any sort of redundancy. One acts as the swarm "manager".  You can probably get away with two (run the manager on one of the worker nodes), but I don't recommend it.  You could even get away with a single server running the manager and both instances (I did), but know that if you don't run the Docker instances on separate hosts, you've lost all high availability: if that one server fails, the whole stack ceases to exist.

First, install Docker:

    sudo apt-get install docker.io
    sudo systemctl enable docker
    sudo systemctl start docker
    

Then, create your swarm (anything recent should have swarm built in; you just need to initialize it).

    
    sudo docker swarm init --advertise-addr 192.168.x.x
    

Initializing the swarm prints the exact command for joining workers:

    username@server1:~$ sudo docker swarm init --advertise-addr 192.168.0.3
    Swarm initialized: current node (NODE IDENTIFIER) is now a manager.
    
    To add a worker to this swarm, run the following command:
    
        docker swarm join --token TOKEN_STRING 192.168.0.3:2377
    
    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
    username@server1:~$
    

Then, on each worker node that isn't the manager, run the join command you were given (use sudo):

    sudo docker swarm join --token TOKEN_STRING 192.168.0.3:2377
    

Next up is the basic configuration.  The hosts are nearly identical, with just a few differences.  Each host has three configuration files: pg_hba.conf, postgresql.conf, and pg_ident.conf.  Let's start by creating the directory structure.

I created a single directory to house all of this so I can clear it out quickly once I'm done experimenting.

    mkdir postgres-cluster
    cd postgres-cluster
    mkdir -p {master,slave-1,slave-2}/config
    

Next, create our three files for each host:

    touch {master,slave-1,slave-2}/config/{pg_hba.conf,postgresql.conf,pg_ident.conf}
    

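Since the three per-node trees are identical at this point, it's worth a quick sanity check before moving on.  A small sketch, run from inside the directory created above:

```shell
# Recreate the layout and count the config files: 3 nodes x 3 files = 9.
mkdir -p {master,slave-1,slave-2}/config
touch {master,slave-1,slave-2}/config/{pg_hba.conf,postgresql.conf,pg_ident.conf}
find . -name '*.conf' | wc -l   # should print 9 in a fresh directory
```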
With the files created, let's populate them.  The contents below apply to every host; after each file, I'll call out the differences between nodes.

postgresql.conf

This file contains:

    # -----------------------------
    # PostgreSQL configuration file
    # -----------------------------
    #
    
    data_directory = '/data'
    hba_file = '/config/pg_hba.conf'
    ident_file = '/config/pg_ident.conf'
    
    port = 5432
    listen_addresses = '*'
    max_connections = 100
    shared_buffers = 128MB
    dynamic_shared_memory_type = posix
    max_wal_size = 1GB
    min_wal_size = 80MB
    log_timezone = 'Etc/UTC'
    datestyle = 'iso, mdy'
    timezone = 'Etc/UTC'
    
    #locale settings
    lc_messages = 'en_US.utf8'   # locale for system error message
    lc_monetary = 'en_US.utf8'   # locale for monetary formatting
    lc_numeric = 'en_US.utf8'    # locale for number formatting
    lc_time = 'en_US.utf8'       # locale for time formatting
    
    default_text_search_config = 'pg_catalog.english'
    
    #replication
    wal_level = replica
    wal_keep_size = 512MB  # Adjust this value as needed
    archive_mode = on
    archive_command = 'test ! -f /mnt/server/archive/%f && cp %p /mnt/server/archive/%f'
    max_wal_senders = 3
    

However, on the slave nodes, the replication and archiving lines, which contain:

    #replication
    wal_level = replica
    wal_keep_size = 512MB  # Adjust this value as needed
    archive_mode = on
    archive_command = 'test ! -f /mnt/server/archive/%f && cp %p /mnt/server/archive/%f'
    

are removed.
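The archive_command above is worth a second look: `test ! -f` makes the command fail rather than overwrite if a file with that WAL segment's name is already in the archive, and PostgreSQL treats a non-zero exit status as "archiving failed, retry later".  You can see the mechanics with ordinary files (throwaway paths standing in for %p, the path to the WAL file, and %f, its file name):

```shell
# Hypothetical stand-ins for a WAL segment; not real Postgres data.
demo=$(mktemp -d)
mkdir -p "$demo/archive"
echo "segment data" > "$demo/000000010000000000000001"
p="$demo/000000010000000000000001"
f="000000010000000000000001"

# First run: nothing in the archive yet, so the copy succeeds (exit 0).
test ! -f "$demo/archive/$f" && cp "$p" "$demo/archive/$f"
echo "first run exit: $?"    # prints 0

# Second run: the file exists, so 'test ! -f' fails and cp never runs (exit 1).
test ! -f "$demo/archive/$f" && cp "$p" "$demo/archive/$f"
echo "second run exit: $?"   # prints 1
```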

pg_hba.conf

This file contains:

    # TYPE  DATABASE        USER            ADDRESS                 METHOD
    
    host     replication     replicationUser         0.0.0.0/0        md5
    
    # "local" is for Unix domain socket connections only
    local   all             all                                     trust
    # IPv4 local connections:
    host    all             all             127.0.0.1/32            trust
    # IPv6 local connections:
    host    all             all             ::1/128                 trust
    # Allow replication connections from localhost, by a user with the
    # replication privilege.
    local   replication     all                                     trust
    host    replication     all             127.0.0.1/32            trust
    host    replication     all             ::1/128                 trust
    
    host all all all scram-sha-256
    

However, on the slave nodes, the replicationUser line:

    host     replication     replicationUser         0.0.0.0/0        md5
    

is removed.
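If you'd rather not maintain the slave copies of pg_hba.conf by hand, they can be derived from the master's copy by deleting that one line.  A small sketch, assuming the directory layout created earlier:

```shell
# Generate each slave's pg_hba.conf from the master's, minus the
# replicationUser entry that only the master needs.
for node in slave-1 slave-2; do
    sed '/replicationUser/d' master/config/pg_hba.conf > "$node/config/pg_hba.conf"
done
```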

pg_ident.conf

This file contains:

    # PostgreSQL User Name Maps
    # =========================
    #
    # Refer to the PostgreSQL documentation, chapter "Client
    # Authentication" for a complete description.  A short synopsis
    # follows.
    #
    # This file controls PostgreSQL user name mapping.  It maps external
    # user names to their corresponding PostgreSQL user names.  Records
    # are of the form:
    #
    # MAPNAME  SYSTEM-USERNAME  PG-USERNAME
    #
    # (The uppercase quantities must be replaced by actual values.)
    #
    # MAPNAME is the (otherwise freely chosen) map name that was used in
    # pg_hba.conf.  SYSTEM-USERNAME is the detected user name of the
    # client.  PG-USERNAME is the requested PostgreSQL user name.  The
    # existence of a record specifies that SYSTEM-USERNAME may connect as
    # PG-USERNAME.
    #
    # If SYSTEM-USERNAME starts with a slash (/), it will be treated as a
    # regular expression.  Optionally this can contain a capture (a
    # parenthesized subexpression).  The substring matching the capture
    # will be substituted for \1 (backslash-one) if present in
    # PG-USERNAME.
    #
    # Multiple maps may be specified in this file and used by pg_hba.conf.
    #
    # No map names are defined in the default configuration.  If all
    # system user names and PostgreSQL user names are the same, you don't
    # need anything in this file.
    #
    # This file is read on server startup and when the postmaster receives
    # a SIGHUP signal.  If you edit the file on a running system, you have
    # to SIGHUP the postmaster for the changes to take effect.  You can
    # use "pg_ctl reload" to do that.
    
    # Put your actual configuration here
    # ----------------------------------
    
    # MAPNAME       SYSTEM-USERNAME         PG-USERNAME
    

This file is identical on every node.

Last Configurations

Create the Docker "network":

    sudo docker network create postgres-cluster-network
    

This will print a fairly large alphanumeric ID.  (Note that this creates a local bridge network; if your containers will run on separate swarm nodes, you'd need an attachable overlay network instead.)

Starting the "master"

Now start the master node using the following command:

    sudo docker run -d --name postgres-master  --net postgres-cluster-network \
    -e POSTGRES_USER=postgresadmin -e POSTGRES_PASSWORD=admin123 \
    -e POSTGRES_DB=postgresdb -e PGDATA="/data" -v ${PWD}/master/pgdata:/data \
    -v ${PWD}/master/config:/config -v ${PWD}/master/archive:/mnt/server/archive \
    -p 5000:5432 postgres:latest -c 'config_file=/config/postgresql.conf'
    

If you haven't pulled the postgres Docker image yet, this will download it automatically.  If you get an error about the name already being in use because an earlier attempt failed partway:

    docker: Error response from daemon: Conflict. The container name "/postgres-master" is already in use by container "6362ca473d3f29b7fe1bf02558f5a871fb6cf6827eff23fe01714f34cb386951". You have to remove (or rename) that container to be able to reuse that name.
    

Then list the containers, and delete the stale one:

    username@server1:~/postgres-cluster$ sudo docker ps -a
    CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS    PORTS     NAMES
    6362ca473d3f   postgres:latest   "docker-entrypoint.s…"   10 minutes ago   Created             postgres-master
    username@server1:~/postgres-cluster$ sudo docker rm 6362ca473d3f
    6362ca473d3f
    username@server1:~/postgres-cluster$ sudo docker ps -a
    CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
    username@server1:~/postgres-cluster$
    

Now run the start command again, and this time it should simply print a large alphanumeric ID.

Next, create a replication user:

    username@server1:~$ sudo docker exec -it postgres-master bash
    [sudo] password for username:          
    root@63919c16e356:/# createuser -U postgresadmin -P -c 5 --replication replicationUser
    Enter password for new role: 
    Enter it again: 
    root@63919c16e356:/# exit
    exit
    username@server1:~$
    

Just type exit and get back to the prompt.

Now, we can move on to starting the slaves. 

Starting the slaves

For each slave node, run:

    username@server1:~$ sudo docker run -it --name postgres-slave1 --rm \
    > --net postgres-cluster-network \
    > -v ${PWD}/slave-1/pgdata:/data \
    > --entrypoint /bin/bash postgres:latest
    root@447571e3e11e:/#
    

This drops you into an interactive bash prompt inside the container, where you then run:

    pg_basebackup -h postgres-master -p 5432 -U replicationUser -D /data/ -Fp -Xs -R
    

This command takes a base backup from the master and sets the copy up for replication; when prompted, enter the password you gave replicationUser in the "createuser" step on the master.
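The -R flag is what turns the copy into a standby: in addition to the data files, pg_basebackup (on PostgreSQL 12 and later) creates an empty standby.signal file in the data directory and appends the connection settings to postgresql.auto.conf.  The appended settings look roughly like this (illustrative, not verbatim):

```
# /data/postgresql.auto.conf (written by pg_basebackup -R)
primary_conninfo = 'host=postgres-master port=5432 user=replicationUser ...'
```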

Repeat for slave-2 (of course, replacing postgres-slave1 with postgres-slave2, and the slave-1 folder names with slave-2).

Create standby instances 

Run the following:

    sudo docker run -d --name postgres-slave1 --net postgres-cluster-network \
    -e POSTGRES_USER=postgresadmin -e POSTGRES_PASSWORD=admin123 \
    -e POSTGRES_DB=postgresdb -e PGDATA="/data"  \
    -v ${PWD}/slave-1/pgdata:/data -v ${PWD}/slave-1/config:/config \
    -v ${PWD}/slave-1/archive:/mnt/server/archive -p 5001:5432 \
    postgres:latest -c 'config_file=/config/postgresql.conf'
    

Repeat for slave-2 (of course, replacing postgres-slave1 with postgres-slave2, the slave-1 folder names with slave-2, and host port 5001 with 5002).

Test It

Connect to the master node using:

    sudo docker exec -it postgres-master bash
    

From here, you can run your psql commands to create databases and manipulate whatever you need.

    psql --username=postgresadmin postgresdb
    

Exit, and then check the slave nodes by connecting to them (sudo docker exec) and running psql to query any tables you've created and populated on the master; the data should appear on the standbys as well.
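To confirm replication is actually flowing (and not just that all three containers are up), a couple of quick checks are useful.  These are standard PostgreSQL catalog queries, run via psql inside the containers as above:

```sql
-- On the master: one row per connected standby; state should be 'streaming'.
SELECT client_addr, state, sync_state FROM pg_stat_replication;

-- On a slave: returns 't' while the node is acting as a standby.
SELECT pg_is_in_recovery();
```

If pg_stat_replication comes back empty, the standbys never connected; recheck the replicationUser entry in the master's pg_hba.conf and the password used during pg_basebackup.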