efm_sql_command: Postgres database extension for EFM (EDB Failover Manager 2.1.x)

 

EDB Failover Manager (EFM) continues to gain popularity among Postgres DBAs. EDB customers are using this tool to build highly available EDB Postgres platforms.

EFM’s primary role is to monitor Postgres clusters, notify the DBA of any cluster failures, and automatically promote a standby to function as the new master.

Here are some of the high-level features of EFM:

  1. Automatic failover
  2. Switchover to the standby closest to the master in terms of transaction log (xlog) position
  3. Set the priority of standbys
  4. Customize notifications/alerts
  5. Transfer the VIP to the new master on failover (provided the master has been configured with a VIP)

DBAs use the “efm” utility to control these actions. However, the need to connect to the EFM cluster servers/nodes over ssh remains a pain point in the DBA experience.

Keeping this pain point in mind, we have developed the “efm_sql_command” extension, which wraps the EFM 2.1.x utility commands in the native Postgres interface. DBAs no longer need to ssh to an EFM node to allow or disallow new/old nodes for EFM clusters. Additionally, through the Postgres interface itself, EFM can also be integrated with other monitoring systems such as EDB Postgres Enterprise Manager, Nagios, etc.

The efm_sql_command extension provides the following functions and views:

  • efm_sql_command.efm_allow_node(‘ip address’);

This function takes an IP address as an argument and notifies the EFM agents to allow that IP address to become part of the EFM cluster. It returns the status of the EFM command as 0/1, where 0 means success and 1 means failure. DBAs can now use the SQL interface to notify EFM to allow a new standby (which they are still building) without ssh-ing to one of the EFM nodes.

  • efm_sql_command.efm_disallow_node(‘ip address’);

Similar to efm_allow_node, this function notifies the EFM agents to disallow the IP address from the EFM cluster. It reports the status of the command as 0/1, where 0 means successfully notified and 1 means failed.

  • efm_sql_command.efm_failover();

This function notifies the EFM cluster agents to perform a manual failover from the master to the closest standby. It returns a 0/1 status, where 0 means the EFM cluster agents were successfully notified and 1 means the notification failed.

  • efm_sql_command.efm_resume_monitoring();

If an agent is not monitoring the local EDB Postgres database, this function makes the EFM agent start monitoring it.

  • efm_sql_command.efm_set_priority(‘ip address’, ‘priority’);

Sets the failover priority for a standby and returns the status of the command as 0/1, where 0 means success and 1 means failure (see the example after this list).

  • efm_sql_command.efm_switchover();

This function performs a switchover to the standby closest to the master and reconfigures the old master as a new standby.

  • efm_sql_command.efm_local_properties

This is a view through which the DBA can inspect the local EFM properties.

  • efm_sql_command.efm_nodes_details:

This view provides the details of each node in the EFM cluster. A user can also use it for other purposes, for example checking the status of the standbys or how far the standbys are from the master.

  • efm_sql_command.efm_cluster_status

This function takes one of the following arguments as text:
‘text’: prints the status of the EFM cluster in plain text.
‘json’: prints the status of the EFM cluster in JSON format.
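
The snapshots later in this post demonstrate efm_cluster_status, efm_allow_node, efm_disallow_node, and the two views. For efm_set_priority and efm_switchover, a hypothetical session might look like the sketch below; it assumes the efm_extension schema used in the snapshots, and the IP address and priority value are placeholders:

-- Hypothetical example: give standby 172.17.0.3 the highest failover priority,
-- then ask the EFM agents to perform a switchover to the closest standby.
-- Both calls return 0 on success and 1 on failure, as described above.
SELECT efm_extension.efm_set_priority('172.17.0.3', '1');
SELECT efm_extension.efm_switchover();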

This extension also provides the following GUC, which DBAs can set at the cluster, database, or user level:

ALTER SYSTEM SET efm.cluster_name TO 'clustername';
ALTER DATABASE  SET efm.cluster_name TO 'clustername';
ALTER USER  SET efm.cluster_name TO 'clustername';

The following are some snapshots of efm_sql_command’s functions:

CREATE EXTENSION efm_sql_command;
edb=# select efm_extension.efm_cluster_status('text');
INFO: efm command is available
efm_cluster_status
------------------------------------------------------------------------
Cluster Status: efm
VIP:
 
Agent Type Address Agent DB Info
--------------------------------------------------------------
Idle 172.17.0.2 UP UNKNOWN
 
Allowed node host list:
172.17.0.2
 
Membership coordinator: 172.17.0.2
 
Standby priority host list:
(List is empty.)
 
Promote Status:
 
 
Idle Node Status (idle nodes ignored in XLog location comparisons):
 
Address XLog Loc Info
--------------------------------------------------------------
172.17.0.2 0/35BFC10 DB is not in recovery.
(23 rows)
edb=# select jsonb_pretty(efm_extension.efm_cluster_status('json')::jsonb);
INFO:  efm command is available 
                         jsonb_pretty                         
--------------------------------------------------------------
 {                                                           +
     "VIP": "",                                              +
     "nodes": {                                              +
         "172.17.0.2": {                                     +
             "db": "UNKNOWN",                                +
             "info": " ",                                    +
             "type": "Idle",                                 +
             "xlog": "0/35BFC10",                            +
             "agent": "UP",                                  +
             "xloginfo": "DB is not in recovery."            +
         }                                                   +
     },                                                      +
     "messages": [                                           +
         "Did not find XLog location for any non-idle nodes."+
     ],                                                      +
     "allowednodes": [                                       +
         "(List",                                            +
         "is",                                               +
         "empty.)"                                           +
     ],                                                      +
     "minimumstandbys": 0,                                   +
     "failoverpriority": [                                   +
     ],                                                      +
     "membershipcoordinator": "172.17.0.2"                   +
 }
(1 row)
edb=# select efm_extension.efm_allow_node('172.17.0.2');
INFO: efm command is available
efm_allow_node
----------------
0
(1 row)
edb=# select efm_extension.efm_disallow_node('172.17.0.2');
INFO: efm command is available
efm_disallow_node
-------------------
0
(1 row)
edb=# select * from efm_extension.efm_nodes_details ;
INFO: efm command is available
node_ip | property | value
------------+----------+--------------------------
172.17.0.2 | db | "UNKNOWN"
172.17.0.2 | info | " "
172.17.0.2 | type | "Idle"
172.17.0.2 | xlog | "0/35BC388"
172.17.0.2 | agent | "UP"
172.17.0.2 | xloginfo | "DB is not in recovery."
(6 rows)
edb=# select * from efm_extension.efm_local_properties ;
           name            |              value              
---------------------------+----------------------------------
 efm.license               |
 db.user                   | efm
 db.password.encrypted     | 074b627bf50168881d246c5dd32fd8d0
 db.port                   | 5444
 db.database               | edb
 db.service.owner          | enterprisedb
 db.service.name           | edb-as-9.6
 db.bin                    | /usr/edb/as9.6/bin
 db.recovery.conf.dir      | /pgdata
 jdbc.ssl                  | false
 jdbc.ssl.mode             | verify-ca
 user.email                | vibhor.kumar@enterprisedb.com
 script.notification       |
 bind.address              | 172.17.0.2:5430
 admin.port                | 5431
 is.witness                | false
 local.period              | 10
 local.timeout             | 60
 local.timeout.final       | 10
 remote.timeout            | 10
 node.timeout              | 50
 pingServerIp              | 8.8.8.8
 pingServerCommand         | /bin/ping -q -c3 -w5
 auto.allow.hosts          | true
 db.reuse.connection.count | 0
 auto.failover             | true
 auto.reconfigure          | true
 promotable                | true
 minimum.standbys          | 0
 recovery.check.period     | 2
 auto.resume.period        | 0
 virtualIp                 |
 virtualIp.interface       |
 virtualIp.netmask         |
 script.fence              |
 script.post.promotion     |
 script.resumed            |
 script.db.failure         |
 script.master.isolated    |
 sudo.command              | sudo
 sudo.user.command         | sudo -u %u
 jgroups.loglevel          | INFO
 efm.loglevel              | INFO
 jvm.options               | -Xmx32m
(44 rows)

I look forward to your comments on this topic.

Leveraging Local Temporary Table for Global Temporary Table in EDB Postgres 9.6

In the past, I posted a blog on the concept of creating a Global Temporary Table (GTT) for migration from Oracle to EDB Postgres. In that blog I shared the following characteristics of a Global Temporary Table:

  1. The Global Temporary Table gives predefined structure for storing data.
  2. It’s an unlogged table which means any activity on this table will not be logged.
  3. The data in a Global Temporary Table are private, such that a session can only access data inserted by that session.

With the above characteristics, there are two options available for a GTT:

  1. ON COMMIT PRESERVE ROWS
  2. ON COMMIT DELETE ROWS

Option one can be implemented as described in the tip PPAS 9.4 and Global Temporary Table. However, option two is not as easy to implement.

Users also have the option of implementing a Global Temporary Table using a Local Temporary Table (LTT). An LTT (https://www.postgresql.org/docs/9.6/static/sql-createtable.html) supports both options and can be leveraged for a GTT.
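
For reference, a Local Temporary Table supports both behaviors directly; the following is a minimal illustration (table names are placeholders):

CREATE TEMPORARY TABLE ltt_delete_rows   (id int) ON COMMIT DELETE ROWS;
CREATE TEMPORARY TABLE ltt_preserve_rows (id int) ON COMMIT PRESERVE ROWS;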

To implement a Global Temporary Table in EDB Postgres, a user needs the following objects in EDB Postgres:

  1. An UNLOGGED table structure that can help in creating a backend LTT;
  2. An automatically updatable VIEW named after the Global Temporary Table that will be used for the frontend SELECT/INSERT/DELETE/UPDATE; and
  3. A TRIGGER on the view that redirects INSERTs on the view to the backend Local Temporary Table (LTT).

Based on the above, let’s look at an example of how DBAs and Developers can create a Global Temporary Table in EDB Postgres.

Below is a definition of a Global Temporary Table:

CREATE GLOBAL TEMPORARY TABLE global_temp (
ts       TIMESTAMP,
action   CHAR(100),
state    CHAR(50)
)
ON COMMIT DELETE ROWS;

To create the above Global Temporary Table, we will first create a backend UNLOGGED table, global_temp_backend, as given below:

CREATE UNLOGGED TABLE global_temp_backend (
ts       TIMESTAMP,
action   CHAR(100),
state    CHAR(50)
);

After creating the above UNLOGGED table, we can create a view, which users will use as a Global Temporary Table:

CREATE VIEW global_temp AS SELECT * FROM global_temp_backend;

Now, we can create an INSTEAD OF trigger on the view, which will do the following:

  1. CREATE a Local Temporary Table using the global_temp_backend definition if it does not already exist in the session.
  2. Re-route the INSERT to the Local Temporary Table.

Below is an example of such a trigger:

CREATE OR REPLACE FUNCTION global_temp_insert()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$function$
BEGIN
   INSERT INTO global_local_temp_backend VALUES(NEW.*);
   RETURN NEW;
   /* create local temporary table if not exists */
   EXCEPTION WHEN undefined_table THEN
      CREATE TEMP TABLE global_local_temp_backend
        (LIKE global_temp_backend INCLUDING ALL )
        INHERITS (global_temp_backend)
        ON COMMIT DELETE ROWS;

   INSERT INTO global_local_temp_backend VALUES(NEW.*);
   RETURN NEW;
END;
$function$;

CREATE TRIGGER insert_trigger_gt INSTEAD OF INSERT ON global_temp
FOR EACH ROW EXECUTE PROCEDURE global_temp_insert();
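
The trigger above covers only the INSERT path. DELETE (and UPDATE) on the view can be re-routed the same way; the following is a minimal sketch, not part of the original implementation, of an INSTEAD OF DELETE trigger that assumes the same global_local_temp_backend name:

CREATE OR REPLACE FUNCTION global_temp_delete()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$function$
BEGIN
   /* re-route the delete to the session's local temporary table */
   DELETE FROM global_local_temp_backend
    WHERE ts = OLD.ts AND action = OLD.action AND state = OLD.state;
   RETURN OLD;
   /* if the local temporary table does not exist yet, there is nothing to delete */
   EXCEPTION WHEN undefined_table THEN
      RETURN OLD;
END;
$function$;

CREATE TRIGGER delete_trigger_gt INSTEAD OF DELETE ON global_temp
FOR EACH ROW EXECUTE PROCEDURE global_temp_delete();

Matching on all three columns is a simplification; a real table would normally carry a key column to identify rows.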

Below are some snapshots:

edb=# CREATE UNLOGGED TABLE global_temp_backend (
edb(#   ts       TIMESTAMP,
edb(#   action   CHAR(100),
edb(#   state    CHAR(50)
edb(# );
CREATE TABLE
edb=# CREATE VIEW global_temp AS SELECT * FROM global_temp_backend;
CREATE VIEW
edb=# CREATE OR REPLACE FUNCTION global_temp_insert()
edb-# RETURNS TRIGGER
edb-# LANGUAGE plpgsql
edb-# AS
edb-# $function$
edb$# BEGIN
edb$#     INSERT INTO global_local_temp_backend VALUES(NEW.*);
edb$#     RETURN NEW;
edb$#     EXCEPTION WHEN undefined_table THEN
edb$#       CREATE TEMP TABLE global_local_temp_backend () INHERITS (global_temp_backend)
edb$#        ON COMMIT DELETE ROWS;
edb$#       INSERT INTO global_local_temp_backend VALUES(NEW.*);
edb$#       RETURN NEW;
edb$# END;
edb$# $function$;
CREATE FUNCTION
edb=# CREATE TRIGGER insert_trigger_gt INSTEAD OF INSERT ON global_temp
edb$# FOR EACH ROW EXECUTE PROCEDURE global_temp_insert();
CREATE TRIGGER

Let’s insert some records into the Global Temporary Table and verify how it works:

edb=# BEGIN;
BEGIN
edb=# INSERT INTO global_temp VALUES (current_timestamp, 'node-1-request sended.', 'OK');
NOTICE:  merging column "ts" with inherited definition
NOTICE:  merging column "action" with inherited definition
NOTICE:  merging column "state" with inherited definition
INSERT 0 1
edb=# INSERT INTO global_temp VALUES (current_timestamp, 'node-2-request sended.', 'OK');
INSERT 0 1
edb=# INSERT INTO global_temp VALUES (current_timestamp, 'node-1-answer received.', 'Failed');
INSERT 0 1
edb=# INSERT INTO global_temp VALUES (current_timestamp, 'node-2-answer received.', 'OK');
INSERT 0 1
edb=# SELECT * FROM global_temp WHERE state = 'OK';
ts             |                                                action                                                |                       state
---------------------------+------------------------------------------------------------------------------------------------------+----------------------------------------------------
03-JUL-17 11:34:44.074603 | node-1-request sended.                                                                               | OK
03-JUL-17 11:34:44.079148 | node-2-request sended.                                                                               | OK
03-JUL-17 11:34:44.081194 | node-2-answer received.                                                                              | OK
(3 rows)
edb=#
edb=# COMMIT;
COMMIT
edb=# SELECT * FROM global_temp WHERE state = 'OK';
ts | action | state
----+--------+-------
(0 rows)

The above result was expected since, per the GTT definition (ON COMMIT DELETE ROWS), the rows are deleted after commit.

If DBAs and Developers want to create a Global Temporary Table with ON COMMIT PRESERVE ROWS, then they can modify the above trigger definition and include the following:

CREATE TEMP TABLE global_local_temp_backend
(LIKE global_temp_backend INCLUDING ALL )
INHERITS (global_temp_backend)
ON COMMIT PRESERVE ROWS;

Transitioning Synchronous Standby to Asynchronous in EDB Postgres

 

 

In the world of solution architecture, where we as solution architects develop configurations to meet specific customer needs, we consider a great many interesting scenarios. One such scenario my team at EnterpriseDB (EDB) encountered recently bears exploring further in this blog.

For a business-critical application, transaction durability is crucial. Therefore, we recently helped one customer design an architecture for near-zero data loss.

The customer wanted to use two synchronous replication standbys within the same data center, with a high-speed network between the master and the standbys for data transfer. The database administrator also wanted to have read consistency from the replicas.

Postgres 9.6 provides synchronous_commit = remote_apply, which helps achieve this read consistency for read balancing.
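
As a quick illustration (a sketch, not the customer's exact configuration), the setting can be applied cluster-wide and picked up with a reload:

-- Wait for synchronous standbys to apply WAL before a commit returns,
-- so reads on the replicas see the committed data.
ALTER SYSTEM SET synchronous_commit = 'remote_apply';
SELECT pg_reload_conf();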

In addition to these requirements, the customer wanted to layer another capability into the architecture: if one of the synchronous standby replicas is down, write transactions should not be stopped or left hanging. Transactions on the master should go into a “waiting state” only if all the synchronous standbys are down.

EDB Postgres Advanced Server 9.6 comes with all building blocks out of the box to include this kind of logic to handle the scenario.

To achieve this capability in EDB Postgres, the following steps can be used:

1  Create the pg_background extension. The following link can be used to compile the pg_background module:

https://github.com/vibhorkum/pg_background
Below is a snapshot:

[root@ca420ebfb299 pg_background]# make install
/bin/mkdir -p '/usr/edb/as9.6/lib'
/bin/mkdir -p '/usr/edb/as9.6/share/extension'
/bin/mkdir -p '/usr/edb/as9.6/share/extension'
/bin/install -c -m 755  pg_background.so '/usr/edb/as9.6/lib/pg_background.so'
/bin/install -c -m 644 .//pg_background.control '/usr/edb/as9.6/share/extension/'
/bin/install -c -m 644 .//pg_background--1.0.sql  '/usr/edb/as9.6/share/extension/'
[root@ca420ebfb299 pg_background]# psql
psql.bin (9.6.2.7)
Type "help" for help.

edb=# create extension pg_background;
CREATE EXTENSION
edb=#

2  Create the dbms_scheduler and pgagent extension in EDB Postgres as shown below:

edb=# CREATE EXTENSION dbms_scheduler ;
CREATE EXTENSION
edb=# create extension pgagent;
CREATE EXTENSION

3  Now, you can create a procedure which does the following:

  • Identify all the named synchronous standbys and check the status of each in pg_stat_replication (a minimal sketch of this check appears after this list).
  • If a named synchronous standby doesn’t exist in pg_stat_replication, then change the synchronous_standby_names parameter in such a way that it doesn’t lose the names of the synchronous standbys but demotes the missing synchronous standby to an asynchronous standby. For that, it’s recommended to use the following format for the synchronous_standby_names parameter:
2(standby1, standby2…)
  • After demoting the synchronous standby to asynchronous, send an e-mail to your DBA group to notify them about the demotion so your DBAs can take the necessary steps. In EDB Postgres, we have the UTL_SMTP package, which can be used for sending e-mails. The following is an example of such a procedure:
CREATE OR REPLACE PROCEDURE send_mail (
    p_sender        VARCHAR2,
    p_recipient     VARCHAR2,
    p_subj          VARCHAR2,
    p_msg           VARCHAR2,
    p_mailhost      VARCHAR2
)
IS
    v_conn          UTL_SMTP.CONNECTION;
    v_crlf          CONSTANT VARCHAR2(2) := CHR(13) || CHR(10);
    v_port          CONSTANT PLS_INTEGER := 25;
BEGIN
    v_conn := UTL_SMTP.OPEN_CONNECTION(p_mailhost,v_port);
    UTL_SMTP.HELO(v_conn,p_mailhost);
    UTL_SMTP.MAIL(v_conn,p_sender);
    UTL_SMTP.RCPT(v_conn,p_recipient);
    UTL_SMTP.DATA(v_conn, SUBSTR(
        'Date: ' || TO_CHAR(SYSDATE,
        'Dy, DD Mon YYYY HH24:MI:SS') || v_crlf
        || 'From: ' || p_sender || v_crlf
        || 'To: ' || p_recipient || v_crlf
        || 'Subject: ' || p_subj || v_crlf
        || p_msg
        , 1, 32767));
    UTL_SMTP.QUIT(v_conn);
END;

For more information on the UTL_SMTP package in EDB Postgres, refer to the EDB Postgres documentation.

  • If none of the standbys are available, then maintain the setting of synchronous_standby_names as given below:

synchronous_standby_names = 1(standby1, standby2)

The above setting covers the scenario where writes should be stopped, or left waiting, if all standbys are down.

  • If replication slots are being used, then check the lag for the replication slots and reset them, so that pg_xlog is not overloaded.

Note: If you are using synchronous standby with replication slots, it’s recommended to have the same name for synchronous standbys and replication slots.
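
Before looking at the full procedure, here is a minimal, hypothetical sketch of the check described in the first bullet above: it lists the named synchronous standbys that are currently missing from pg_stat_replication (the standby names are placeholders):

-- Named synchronous standbys that have no corresponding WAL sender entry.
SELECT standby_name
  FROM unnest(string_to_array('standby1,standby2', ',')) AS standby_name
 WHERE standby_name NOT IN (SELECT application_name FROM pg_stat_replication);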

4  The following link points to a procedure which covers all the points mentioned in step 3:

https://github.com/vibhorkum/procedure_transition_sync_async

5  After creating the procedure, the user can schedule a job as given below:

EXEC sys.DBMS_SCHEDULER.create_program (
      'synchronous_to_asynchrnous',
      'STORED_PROCEDURE',
      '"public"."transition_sync_async"',
      3,true,  
     'Job which can transition synchronous to asynchronous');  

EXEC
  DBMS_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
    program_name         => 'synchronous_to_asynchrnous',
    argument_position    => 1,
    argument_name        => 'allowed_slots_lag',
    argument_type        => 'NUMERIC',
    default_value        => '1073741824');
EXEC
  DBMS_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
    program_name         => 'synchronous_to_asynchrnous',
    argument_position    => 2,
    argument_name        => 'p_sender',
    argument_type        => 'TEXT',
    default_value        => 'vibhor.aim@gmail.com');

EXEC
  DBMS_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
    program_name         => 'synchronous_to_asynchrnous',
    argument_position    => 3,
    argument_name        => 'p_recipient',
    argument_type        => 'TEXT',
    default_value        => 'vibhor.aim@gmail.com');

EXEC
  DBMS_SCHEDULER.CREATE_SCHEDULE (
    schedule_name    => 'every_minute',  
    start_date       => now, 
    repeat_interval  => 'FREQ=MINUTELY;',  
    comments         => 'This schedule executes every minute');  

EXEC    
   DBMS_SCHEDULER.create_job (
   job_name       => 'job_synchronous_to_asynchrnous',
   program_name    => 'synchronous_to_asynchrnous',
   schedule_name   => 'every_minute',
   comments        => 'Job which can transition synchronous to asynchronous');

The following is a snapshot of a manual execution of the job:

edb=# exec DBMS_SCHEDULER.RUN_JOB('job_synchronous_to_asynchrnous', TRUE);
INFO: synchronos_standby_names => 2(standby1,standby2)
INFO: standby_name => {2,standby1,standby2}
INFO: old standby count => 2
INFO: synchronous_standby_count => 2

EDB-SPL Procedure successfully completed

For more information on the DBMS_SCHEDULER package in EDB Postgres, refer to the EDB Postgres documentation.

Monitoring Amazon PostgreSQL RDS with PEM version 6.0

The EDB Postgres Enterprise Manager is an enterprise management tool designed to tune, manage, and monitor a large number of Postgres deployments on servers on-premises or spread out across geographical boundaries. It is the cornerstone of the Management Suite of tools that is integrated into the EDB Postgres Platform from EnterpriseDB® (EDB) and it emerged to solve a great many challenges database administrators faced daily. (The blog, How to Be the Master of Your Database Domain, explored EDB Postgres Enterprise Manager in greater detail.)

EDB customers use EDB Postgres Enterprise Manager for the following key features:

  1. Management of multiple EDB Postgres Advanced Server deployments
  2. Capacity planning
  3. Performance monitoring
  4. Alert Management
  5. Auditing violation
  6. Database Analysis
  7. Postgres Expert

Recently one of EDB’s customers acquired a company that was hosting its data on Amazon PostgreSQL RDS. The customer wanted to leverage EDB Postgres Enterprise Manager to monitor the database deployments on PostgreSQL RDS.

At the time, this request was considered a real possibility. A significant amount of development expertise has gone into EDB Postgres Enterprise Manager to create a sophisticated, enterprise-class tool for managing vast deployments of Postgres. As for remote monitoring, EDB Postgres Enterprise Manager had the capacity to monitor remote databases using an agent on a designated server. EDB Postgres Enterprise Manager also provides capabilities for users, such as database administrators, to create their own custom probes, custom dashboards, and custom alerts. Combined, all of these capabilities mean EDB Postgres Enterprise Manager ultimately provides the kinds of control and flexibility DBAs needed for managing databases that are hosted in different environments.

EDB worked closely with the customer to develop a process for utilizing their subscription for the EDB Postgres Enterprise Manager, which is part of the EDB Postgres Platform, with PostgreSQL databases on Amazon RDS. The steps were formalized into a repeatable process so others could follow suit and enjoy the benefits of EDB Postgres Enterprise Manager in their cloud deployments.

The following are the steps for using EDB Postgres Enterprise Manager with Amazon PostgreSQL RDS:

Amazon PostgreSQL RDS comes with an rds_superuser role, which is not the same as a superuser in PostgreSQL. The rds_superuser role has a number of limitations, and they affect the monitoring capabilities.

To use EDB Postgres Enterprise Manager with PostgreSQL RDS, follow these steps:

  1. Use the following function to modify the EDB Postgres Enterprise Manager probes so that they can leverage user-defined views on some catalog tables:
CREATE OR REPLACE FUNCTION strip_pg_catalog_from_probe()
RETURNS boolean
LANGUAGE plpgsql
AS
$function$
 DECLARE
   rec RECORD;
 BEGIN
   CREATE TABLE pem.probe_code_backup AS SELECT id, probe_code FROM pem.probe WHERE id IN (1, 2, 3, 9, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23, 24, 25, 29, 36, 39, 47, 52, 53, 54);
   FOR rec IN SELECT id, probe_code FROM pem.probe WHERE id IN (1, 2, 3, 9, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23, 24, 25, 29, 36, 39, 47, 52, 53, 54)
   LOOP
      UPDATE pem.probe SET probe_code=replace(rec.probe_code,'pg_catalog.','') where id=rec.id;
   END LOOP;
   RETURN true;
END;
$function$;
  2. After creating the above function, execute it in the EDB Postgres Enterprise Manager server database as shown below:

psql -h localhost -c "SELECT strip_pg_catalog_from_probe()" pem

  3. Connect to the PostgreSQL RDS database as the rds_superuser user and execute the following SQL in the postgres database and in the user’s defined database:
CREATE USER monitor PASSWORD 'password';
GRANT rds_superuser TO monitor;
CREATE SCHEMA rds_pem_views;
SET search_path TO rds_pem_views;
CREATE OR REPLACE VIEW pg_database AS SELECT oid,* FROM pg_catalog.pg_database WHERE datname <> 'rdsadmin';

CREATE OR REPLACE VIEW pg_tablespace AS SELECT oid, * FROM pg_tablespace WHERE spcname <> 'pg_global';

CREATE FUNCTION rds_pem_views.pg_ls_dir(text)
RETURNS TEXT
LANGUAGE sql
AS
$function$
   SELECT ''::TEXT;
$function$;

GRANT ALL ON FUNCTION rds_pem_views.pg_ls_dir(text) TO monitor;
GRANT ALL ON SCHEMA rds_pem_views TO monitor;
GRANT ALL ON  rds_pem_views.pg_database TO monitor;
GRANT ALL ON  rds_pem_views.pg_tablespace TO monitor;

ALTER USER monitor SET search_path TO rds_pem_views,pg_catalog, "$user", public;

The above SQL will create a database user “monitor” with “rds_superuser” privileges.

As mentioned above, rds_superuser in RDS is not the same as a true superuser in PostgreSQL, therefore a schema with a view for pg_database and for pg_tablespace is created.

Views pg_database and pg_tablespace are created because the rds_superuser role has the following restrictions:

  1. rds_superuser or a normal user in PostgreSQL RDS cannot access the rdsadmin database;
  2. rds_superuser or a normal user in PostgreSQL RDS cannot calculate the size of the pg_global tablespace;
  3. rds_superuser cannot use pg_ls_dir to list the WAL files in the pg_xlog directory.
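
As a quick sanity check (a hypothetical step, not part of the formal procedure), you can connect as the monitor user and run the following; because of the modified search_path, the query resolves to rds_pem_views.pg_database, so the rdsadmin database is not listed:

-- Resolves to rds_pem_views.pg_database for the monitor user,
-- which filters out the rdsadmin database.
SELECT datname FROM pg_database;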
  4. After executing the SQL mentioned in step three, we can now use the database user monitor for EDB Postgres Enterprise Manager monitoring.
  5. For remote monitoring of PostgreSQL RDS, use the following link to add the server:

https://www.enterprisedb.com/docs/en/6.0.2/pemgetstarted/getting_started_guide.1.14.html#

When you are following the documentation, please remember to click on “remote monitoring” and choose the EDB Postgres Enterprise Manager agent (PEM Agent) for remote monitoring as shown below:

snap1.png

In the above screenshot, the Bound Agent is the pemagent running on the PEM server. Also note that the database is monitor.

After performing the above steps, the EDB Postgres Enterprise Manager server can monitor PostgreSQL RDS, and it can collect all database level statistics. Also, DBAs and other users can configure all database level alerts like autovacuum, etc. EDB Postgres Enterprise Manager provides 63 templates for database level alerts. It also provides flexibility for DBAs and other users to create their own alerts based on custom probes.

Now, some users might have questions or concerns about system-level information from Amazon PostgreSQL RDS, and whether EDB Postgres Enterprise Manager can monitor that data. The answer is yes.

As an example, the following are the steps for configuring EDB Postgres Enterprise Manager to monitor the CPU utilization of PostgreSQL RDS:

  1. Install the Amazon CloudWatch tools on the EDB Postgres Enterprise Manager server using the following commands:
wget http://ec2-downloads.s3.amazonaws.com/CloudWatch-2010-08-01.zip
unzip CloudWatch-2010-08-01.zip

For more information on setting the Amazon CloudWatch command line, please use the following link:

http://docs.aws.amazon.com/AmazonCloudWatch/latest/cli/SetupCLI.html

  2. Create an IAM user on the AWS panel, and attach the managed policy “CloudWatchReadOnlyAccess”.

For more information, please refer to the following link:

http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-identity-based-access-control-cw.html

  3. Create an AWS Credential File using the following sample:
AWSAccessKeyId=
AWSSecretKey=

For more information on AWS Credential file, please use the following link:

http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

  4. Change the permission of the above Credential file using the following command:
chmod 600 credential_file
  5. Create the following script to get the CPUUtilization information from Amazon CloudWatch:
#!/bin/bash
export AWS_CREDENTIAL_FILE=/usr/local/aws/credential/credential-file-path
export AWS_CLOUDWATCH_HOME=/root/CloudWatch-1.0.20.0
export PATH=$PATH:$AWS_CLOUDWATCH_HOME/bin
export JAVA_HOME=/usr/lib/jvm/jre
export AWS_CLOUDWATCH_URL=http://monitoring.us-west-2.amazonaws.com
echo -e "avg_cpu\tminimum_cpu\tmax_cpu"
mon-get-stats CPUUtilization --namespace="AWS/RDS" --dimensions="DBInstanceIdentifier=postgres" --statistics Average,Minimum,Maximum|tail -n 1|awk '{print $3"\t"$4"\t"$5}'
  6. After creating the above script, make the script executable as given below:
chmod +x rds_cpu.sh

Now you are ready to create a custom probe to use this script for monitoring with EDB Postgres Enterprise Manager.

To create a custom probe, please visit the following link:

https://www.enterprisedb.com/docs/en/6.0.2/pemgetstarted/getting_started_guide.1.29.html#pID0E0TBB0HA

Below is a snapshot of a custom probe which I have used:

snap2

By using the above probe, users can create their own custom alerts in EDB Postgres Enterprise Manager for monitoring CPU utilization on Amazon PostgreSQL RDS. They can also use the data to create a custom dashboard for RDS.

The following is a snapshot of RDS CPU utilization from EDB Postgres Enterprise Manager.

snap3

If you want to know more about Custom Alerts and Custom Dashboards, please use the following links:

  1. Creating Custom Alerts templates

https://www.enterprisedb.com/docs/en/6.0.2/pemgetstarted/getting_started_guide.1.30.html#pID0E0E40HA

  2. Creating Custom Alerts

https://www.enterprisedb.com/docs/en/6.0.2/pemgetstarted/getting_started_guide.1.30.html#pID0E0Z60HA

  3. Creating Custom Charts

https://www.enterprisedb.com/docs/en/6.0.2/pemgetstarted/getting_started_guide.1.28.html#

  4. Creating Custom Ops Dashboard

https://www.enterprisedb.com/docs/en/6.0.2/pemgetstarted/getting_started_guide.1.27.html#pID0E0HFB0HA

 

As you can see, the EDB Postgres Enterprise Manager is very flexible and customizable to meet specific needs.