Best of Both Worlds: Integrating Pgpool with EDB Postgres Failover Manager

 

EDB Postgres Failover Manager (EFM) is a high availability module from EnterpriseDB (EDB) that monitors the health of Postgres clusters and verifies failures quickly. Should one occur, EFM can automatically promote a Standby node to Master to ensure continued performance and protect against data loss.

Virtually all EDB customers use EDB Postgres Failover Manager in production within their high availability infrastructures, often alongside other solutions. In fact, EDB developed EFM with hooks and parameters to ease the tool’s integration with external systems and other software.

One such piece of software is Pgpool, the open source middleware that sits between database clients and the Postgres server. Pgpool provides connection pooling and load balancing of SELECT queries across multiple standbys in EDB Postgres clusters. EDB has also invested resources in Pgpool development and supports multiple engineers whose work advances and maintains the tool.

A commonly asked question is how best to integrate the two tools, given the benefits each provides. In the event an EDB Postgres Master fails and EFM promotes a Standby, DBAs would also like Pgpool to be updated about the new Master.

The following are the steps for automating the Pgpool update should EFM execute a Master failover:

On Pgpool server:

  1. Disable Pgpool's auto-failover by modifying the following parameter in pgpool.conf:
backend_flag = 'DISALLOW_TO_FAILOVER'

The above parameter is important to avoid a split-brain situation between Pgpool and EFM during a Postgres failover or switchover.

For more information, please refer to the following link:

http://www.pgpool.net/docs/latest/en/html/runtime-config-backend-settings.html#RUNTIME-CONFIG-BACKEND-CONNECTION-SETTINGS
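In pgpool.conf this flag is set per backend; the parameter names carry the backend number as a suffix. A hypothetical fragment for the two database nodes used later in this post (hostnames and port are assumptions matching the examples below):

```
# backend 0: EDB Postgres Master
backend_hostname0 = 'master'
backend_port0 = 5444
backend_flag0 = 'DISALLOW_TO_FAILOVER'

# backend 1: EDB Postgres Standby
backend_hostname1 = 'standby'
backend_port1 = 5444
backend_flag1 = 'DISALLOW_TO_FAILOVER'
```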

  2. Configure the PCP settings.

For more information on PCP settings, please use the following link:

http://www.pgpool.net/docs/latest/en/html/pcp-commands.html
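As a concrete sketch (user name, port, and paths are assumptions matching the script later in this post), PCP authentication needs two pieces: an entry in pcp.conf, and a pcppass file so the fencing script can run PCP commands without a password prompt:

```
# pcp.conf: one line per PCP user, in the form username:md5-of-password
# (the hash can be generated with the pg_md5 utility)
enterprisedb:<md5 hash of the PCP password>

# /var/efm/pcppass: hostname:port:username:password
# must be owned by the EFM OS user and chmod 600
*:9051:enterprisedb:<PCP password>
```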

On EDB Postgres database nodes:

  1. Install the Pgpool binaries.

The Pgpool binaries ship with the PCP commands, which EFM can use to notify Pgpool about the new master.

  2. The following is a sample script that uses the PCP commands to notify Pgpool about the new master:
#!/bin/bash

###################################################################################
#title           : EFM fencing script for updating pgpool
#description     : This script executes pcp commands. Therefore pcp & pgpool 
#                : should be available on the server
#author          : Vibhor Kumar (vibhor.aim@gmail.com).
#date            : Jan 5 2018
#version         : 1.0
#notes           : configure the pcppass file for the EFM user and set 
#                : the password correctly
#bash_version    : GNU bash, version 4.2.46(2)-release (x86_64-redhat-linux-gnu)
###################################################################################
# quit on any error
set -e
# treat undefined shell variables as errors
set -u

###################################################################################
# Code for notifying Pgpool to promote the standby
###################################################################################

NEW_PRIMARY=$1                              # argument from EFM fencing hook
NODE_ID=""                                  # node id of NEW_PRIMARY, set below
PCP_USER=enterprisedb                       # PCP user name
PCP_PORT=9051                               # PCP port number as in pgpool.conf
PCP_HOST=pgpool                             # hostname of Pgpool-II  
PGPOOL_PATH=/usr/edb/pgpool3.6              # Pgpool-II installation path
PCPPASSFILE=/var/efm/pcppass                # Path to PCPPASS file

PCP_NODE_COUNT=${PGPOOL_PATH}/bin/pcp_node_count
PCP_NODE_INFO=${PGPOOL_PATH}/bin/pcp_node_info
PCP_PROMOTE=${PGPOOL_PATH}/bin/pcp_promote_node
 
export PCPPASSFILE PCP_USER PCP_PORT PGPOOL_PATH \
       PCP_PROMOTE PCP_NODE_INFO PCP_NODE_COUNT

###################################################################################
# find the number of nodes and search for node-id of NEW_PRIMARY
###################################################################################
NO_OF_NODES=$(${PCP_NODE_COUNT} --host=${PCP_HOST} \
                      --username=${PCP_USER} \
                      --port=${PCP_PORT} \
                      --no-password )

for (( i=0 ; i < ${NO_OF_NODES} ; i++ ))
do
   exists=$(${PCP_NODE_INFO} --host=${PCP_HOST} \
                    --username=${PCP_USER} \
                    --port=${PCP_PORT} \
                    --no-password ${i} |grep ${NEW_PRIMARY}|wc -l)
   if [[ ${exists} -eq 1 ]]; then
      NODE_ID=${i}
      break
   fi
done

###################################################################################
# Promote the specific node id using PCP command
###################################################################################
if [[ -n ${NODE_ID} ]]; then
    ${PCP_PROMOTE} --host=${PCP_HOST} \
                   --username=${PCP_USER} \
                   --port=${PCP_PORT} \
                   --no-password \
                   --verbose \
                   ${NODE_ID}
fi
exit 0
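Before wiring the script into EFM, the node-id lookup can be sanity-checked in isolation. The sketch below replaces the live pcp_node_info call with a hypothetical node_info function that prints canned output of the same shape (hostname first on each line), so the matching loop can run without a Pgpool installation:

```shell
#!/bin/bash
# Stand-in for pcp_node_info: one line per node id, hostname first.
# The sample data below is illustrative only.
node_info() {
  case $1 in
    0) echo "master 5444 2 0.500000 up primary" ;;
    1) echo "standby 5444 2 0.500000 up standby" ;;
  esac
}

NEW_PRIMARY=standby   # the host EFM just promoted
NODE_ID=""

# Same matching logic as the fencing script: scan node ids until
# the new primary's hostname appears in the node info line.
for i in 0 1; do
  if node_info "$i" | grep -qw "${NEW_PRIMARY}"; then
    NODE_ID=$i
    break
  fi
done

echo "node id for ${NEW_PRIMARY}: ${NODE_ID}"   # prints: node id for standby: 1
```

The resulting node id is what the fencing script would pass to pcp_promote_node.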
  3. Modify the following parameter in the EFM cluster properties file on the EDB Postgres servers (Master and Standby):
script.fence=/usr/efm-2.1/bin/efm_pgpool_notify %p

Please note that the above script covers only the failover scenario of an EDB Postgres cluster; however, it can be extended to cover the switchover use case too.

Let’s have a look at how it works in an environment:

  1. Connect to the Pgpool server and verify the nodes:
[root@master /]# psql -h pgpool
psql.bin (9.6.6.11)
Type "help" for help.

edb=# show pool_nodes;
 node_id | hostname | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay 
---------+----------+------+--------+-----------+---------+------------+-------------------+-------------------
 0       | master   | 5444 | up     | 0.500000  | primary | 13         | true              | 0
 1       | standby  | 5444 | up     | 0.500000  | standby | 19         | false             | 0
(2 rows)

The above output shows the Master and Standby EDB Postgres nodes, and both are up and running.

  2. Connect to the EDB Postgres nodes and check the status of EFM. The following is a snapshot:
[root@master /]# /usr/efm-2.1/bin/efm cluster-status efm
Cluster Status: efm
VIP: 

    Agent Type  Address              Agent  DB       Info
    --------------------------------------------------------------
    Witness     pgpool               UP     N/A       
    Standby     standby              UP     UP        
    Master      master               UP     UP        

Allowed node host list:
    master pgpool standby

Membership coordinator: master

Standby priority host list:
    standby

Promote Status:

    DB Type     Address              XLog Loc         Info
    --------------------------------------------------------------
    Master      master               0/E0001E0        
    Standby     standby              0/E0001E0        

    Standby database(s) in sync with master. It is safe to promote.

  3. Perform a failover as shown below:
/usr/efm-2.1/bin/efm promote efm
Promote/switchover command accepted by local agent. Proceeding with promotion and will reconfigure original master. Run the 'cluster-status' command for information about the new cluster state.
  4. Connect to the Pgpool node and verify the status of the EDB Postgres nodes:
[root@master /]# psql -h pgpool
psql.bin (9.6.6.11)
Type "help" for help.

edb=# show pool_nodes;
 node_id | hostname | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay 
---------+----------+------+--------+-----------+---------+------------+-------------------+-------------------
 0       | master   | 5444 | down   | 0.500000  | standby | 13         | false             | 0
 1       | standby  | 5444 | up     | 0.500000  | primary | 20         | true              | 0
(2 rows)

The above shows that the Master server, which was primary, is now marked down, and the Standby has been promoted to primary.

Leveraging Local Temporary Table for Global Temporary Table in EDB Postgres 9.6

In the past, I posted a blog on the concept of creating a Global Temporary Table (GTT) for migration from Oracle to EDB Postgres. In that blog I shared the following characteristics of a Global Temporary Table:

  1. The Global Temporary Table gives predefined structure for storing data.
  2. It’s an unlogged table, which means activity on it is not logged.
  3. The data in a Global Temporary Table are private: a session can only access the data it inserted itself.

With the above characteristics, a GTT provides two options for handling rows at commit:

  1. ON COMMIT PRESERVE ROWS
  2. ON COMMIT DELETE ROWS

Option one can be implemented as described in Tip: PPAS 9.4 and Global Temporary Table. However, option two is not as easy to implement.

Users also have the option of implementing a Global Temporary Table using a Local Temporary Table (LTT). An LTT (https://www.postgresql.org/docs/9.6/static/sql-createtable.html) supports both options and can be leveraged for a GTT.

To implement a Global Temporary Table in EDB Postgres, a user must create the following objects in EDB Postgres:

  1. An UNLOGGED table structure that can help in creating a backend LTT;
  2. An automatically updatable VIEW with the name of the Global Temporary Table that will be used for the frontend SELECT/INSERT/DELETE/UPDATE; and
  3. A TRIGGER on the view that redirects an INSERT on the view to the backend Local Temporary Table (LTT).

Based on the above, let’s look at an example of how DBAs and Developers can create a Global Temporary Table in EDB Postgres.

Below is a definition of a Global Temporary Table:

CREATE GLOBAL TEMPORARY TABLE global_temp (
ts       TIMESTAMP,
action   CHAR(100),
state    CHAR(50)
)
ON COMMIT DELETE ROWS;

To create the above Global Temporary Table, we will first create a backend UNLOGGED table, global_temp_backend, as given below:

CREATE UNLOGGED TABLE global_temp_backend (
ts       TIMESTAMP,
action   CHAR(100),
state    CHAR(50)
);

After creating the above UNLOGGED table, we can create a view, which users will use as a Global Temporary Table:

CREATE VIEW global_temp AS SELECT * FROM global_temp_backend;

Now, we can create an INSTEAD OF trigger on the view, which will do the following:

  1. CREATE a Local Temporary Table using the global_temp_backend definition, if it does not already exist in the session.
  2. Re-route the INSERT to the Local Temporary Table.

Below is an example of such a trigger:

CREATE OR REPLACE FUNCTION global_temp_insert()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$function$
BEGIN
   INSERT INTO global_local_temp_backend VALUES(NEW.*);
   RETURN NEW;
EXCEPTION WHEN undefined_table THEN
   /* create the local temporary table if it does not exist yet */
   CREATE TEMP TABLE global_local_temp_backend
     (LIKE global_temp_backend INCLUDING ALL)
     INHERITS (global_temp_backend)
     ON COMMIT DELETE ROWS;
   INSERT INTO global_local_temp_backend VALUES(NEW.*);
   RETURN NEW;
END;
$function$;

CREATE TRIGGER insert_trigger_gt INSTEAD OF INSERT ON global_temp
FOR EACH ROW EXECUTE PROCEDURE global_temp_insert();
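The trigger above only routes INSERTs. Since the view is also meant to serve DELETE and UPDATE, those need their own INSTEAD OF triggers in the same style. A minimal sketch for DELETE (an assumption following the same pattern, not from the original post; note it matches rows by all columns, so duplicate rows are removed together):

```sql
CREATE OR REPLACE FUNCTION global_temp_delete()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$function$
BEGIN
   DELETE FROM global_local_temp_backend
    WHERE (ts, action, state) IS NOT DISTINCT FROM (OLD.ts, OLD.action, OLD.state);
   RETURN OLD;
EXCEPTION WHEN undefined_table THEN
   /* the session never inserted anything, so there is nothing to delete */
   RETURN OLD;
END;
$function$;

CREATE TRIGGER delete_trigger_gt INSTEAD OF DELETE ON global_temp
FOR EACH ROW EXECUTE PROCEDURE global_temp_delete();
```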

Below are some snapshots:

edb=# CREATE UNLOGGED TABLE global_temp_backend (
edb(#   ts       TIMESTAMP,
edb(#   action   CHAR(100),
edb(#   state    CHAR(50)
edb(# );
CREATE TABLE
edb=# CREATE VIEW global_temp AS SELECT * FROM global_temp_backend;
CREATE VIEW
edb=# CREATE OR REPLACE FUNCTION global_temp_insert()
edb-# RETURNS TRIGGER
edb-# LANGUAGE plpgsql
edb-# AS
edb-# $function$
edb$# BEGIN
edb$#     INSERT INTO global_local_temp_backend VALUES(NEW.*);
edb$#     RETURN NEW;
edb$#     EXCEPTION WHEN undefined_table THEN
edb$#       CREATE TEMP TABLE global_local_temp_backend () INHERITS (global_temp_backend)
edb$#        ON COMMIT DELETE ROWS;
edb$#       INSERT INTO global_local_temp_backend VALUES(NEW.*);
edb$#       RETURN NEW;
edb$# END;
edb$# $function$;
CREATE FUNCTION
edb=# CREATE TRIGGER insert_trigger_gt INSTEAD OF INSERT ON global_temp
edb$# FOR EACH ROW EXECUTE PROCEDURE global_temp_insert();
CREATE TRIGGER

Let’s insert some records into the Global Temporary Table and verify how it works:

edb=# BEGIN;
BEGIN
edb=# INSERT INTO global_temp VALUES (current_timestamp, 'node-1-request sended.', 'OK');
NOTICE:  merging column "ts" with inherited definition
NOTICE:  merging column "action" with inherited definition
NOTICE:  merging column "state" with inherited definition
INSERT 0 1
edb=# INSERT INTO global_temp VALUES (current_timestamp, 'node-2-request sended.', 'OK');
INSERT 0 1
edb=# INSERT INTO global_temp VALUES (current_timestamp, 'node-1-answer received.', 'Failed');
INSERT 0 1
edb=# INSERT INTO global_temp VALUES (current_timestamp, 'node-2-answer received.', 'OK');
INSERT 0 1
edb=# SELECT * FROM global_temp WHERE state = 'OK';
ts             |                                                action                                                |                       state
---------------------------+------------------------------------------------------------------------------------------------------+----------------------------------------------------
03-JUL-17 11:34:44.074603 | node-1-request sended.                                                                               | OK
03-JUL-17 11:34:44.079148 | node-2-request sended.                                                                               | OK
03-JUL-17 11:34:44.081194 | node-2-answer received.                                                                              | OK
(3 rows)
edb=#
edb=# COMMIT;
COMMIT
edb=# SELECT * FROM global_temp WHERE state = 'OK';
ts | action | state
----+--------+-------
(0 rows)

The above result was expected since, as per the GTT definition, rows are deleted at commit.

If DBAs and Developers want to create a Global Temporary Table with ON COMMIT PRESERVE ROWS, then they can modify the above trigger definition to include the following:

CREATE TEMP TABLE global_local_temp_backend
(LIKE global_temp_backend INCLUDING ALL )
INHERITS (global_temp_backend)
ON COMMIT PRESERVE ROWS;

READ-ONLY user, READ-ONLY database, and READ-ONLY backup user in PostgreSQL/PPAS

I have been asked this many times: how can one create a READ-ONLY user, a READ-ONLY database, or a backup user that can be used only for backups (no modifications)?

The answer lies in a parameter called default_transaction_read_only.

If you want to make a user READ-ONLY, follow the steps given below:
1. Create a normal user.
2. Use the ALTER USER command to set this parameter for the user, as given below:

ALTER USER <username> SET default_transaction_read_only = on;

3. Grant SELECT on the required tables/objects to this user, and you are good to go.
Below is a snapshot:

postgres=# create user readonly password 'test';
CREATE ROLE
postgres=# alter user readonly set default_transaction_read_only = on;
ALTER ROLE
postgres=# GRANT select on employees to readonly;
GRANT
edbs-MacBook-Pro:data postgres$ psql -U readonly -W
Password for user readonly:
psql (9.1.1)
Type "help" for help.

postgres=> select * from employees ;
 employee_name | entity_name
---------------+-------------
 Smith         | HR
 Jones         | HR
 Taylor        | SALES
 Brown         | SALES
(4 rows)

postgres=> CREATE table test_readonly(id numeric);
ERROR:  cannot execute CREATE TABLE in a read-only transaction
postgres=>

Similarly, if you want to make a database READ-ONLY, you can use the following command:

ALTER DATABASE <dbname> SET default_transaction_read_only = on;

Below is a snapshot:

postgres=# CREATE database readonly;
CREATE DATABASE
postgres=# alter database readonly set default_transaction_read_only = on;
ALTER DATABASE
postgres=# \q
edbs-MacBook-Pro:data postgres$ psql readonly
psql (9.1.1)
Type "help" for help.

readonly=# create table test_readonly(id numeric);
ERROR:  cannot execute CREATE TABLE in a read-only transaction
readonly=#

Now, if you want a separate backup user for online/hot backups or logical backups (using pg_dump), you can create a superuser with default_transaction_read_only = on and use it for backup purposes, as given below:

CREATE USER backupuser SUPERUSER  password 'backupuser';
ALTER USER backupuser set default_transaction_read_only = on;

Using the default_transaction_read_only parameter, a user can also make a single session read-only by executing the following command:

set default_transaction_read_only=on;

I hope the above helps anyone interested in having a READ-ONLY user, database, or backup user.