Leveraging Local Temporary Table for Global Temporary Table in EDB Postgres 9.6

In the past, I posted a blog on the concept of creating a Global Temporary Table (GTT) for migration from Oracle to EDB Postgres. In that blog, I shared the following characteristics of a Global Temporary Table:

  1. A Global Temporary Table provides a predefined structure for storing data.
  2. It is an unlogged table, meaning that activity on this table is not written to the transaction log.
  3. The data in a Global Temporary Table is private: each session can only access the data it has inserted.

With the above characteristics, a GTT can be created with one of two options:

  1. ON COMMIT PRESERVE ROWS
  2. ON COMMIT DELETE ROWS

Option one can be implemented as described in the tip PPAS 9.4 and Global Temporary Table. However, option two is not as easy to implement.

Users also have the option of implementing a Global Temporary Table using a Local Temporary Table (LTT). An LTT (https://www.postgresql.org/docs/9.6/static/sql-createtable.html) supports both options and can be leveraged for a GTT.

To implement a Global Temporary Table in EDB Postgres, a user needs the following objects in EDB Postgres:

  1. An UNLOGGED table structure that can be used to create the backend LTT;
  2. An automatically updatable VIEW with the name of the Global Temporary Table that will be used for the frontend SELECT/INSERT/DELETE/UPDATE; and
  3. A TRIGGER on the view that redirects INSERTs on the view to the backend Local Temporary Table (LTT).

Based on the above, let’s look at an example of how DBAs and Developers can create a Global Temporary Table in EDB Postgres.

Below is a definition of a Global Temporary Table:

CREATE GLOBAL TEMPORARY TABLE global_temp (
ts       TIMESTAMP,
action   CHAR(100),
state    CHAR(50)
)
ON COMMIT DELETE ROWS;

To create the above Global Temporary Table, we will first create a backend UNLOGGED table, global_temp_backend, as given below:

CREATE UNLOGGED TABLE global_temp_backend (
ts       TIMESTAMP,
action   CHAR(100),
state    CHAR(50)
);

After creating the above UNLOGGED table, we can create a view, which users will use as a Global Temporary Table:

CREATE VIEW global_temp AS SELECT * FROM global_temp_backend;

Now, we can create an INSTEAD OF trigger on the view, which will do the following:

  1. Create a Local Temporary Table using the global_temp_backend definition if it does not already exist in the session.
  2. Re-route the INSERT to the Local Temporary Table.

Below is an example of such a trigger:

CREATE OR REPLACE FUNCTION global_temp_insert()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$function$
BEGIN
   INSERT INTO global_local_temp_backend VALUES(NEW.*);
   RETURN NEW;
   /* create local temporary table if not exists */
   EXCEPTION WHEN undefined_table THEN
      CREATE TEMP TABLE global_local_temp_backend
        (LIKE global_temp_backend INCLUDING ALL )
        INHERITS (global_temp_backend)
        ON COMMIT DELETE ROWS;

   INSERT INTO global_local_temp_backend VALUES(NEW.*);
   RETURN NEW;
END;
$function$;

CREATE TRIGGER insert_trigger_gt INSTEAD OF INSERT ON global_temp
FOR EACH ROW EXECUTE PROCEDURE global_temp_insert();

Below are some snapshots:

edb=# CREATE UNLOGGED TABLE global_temp_backend (
edb(#   ts       TIMESTAMP,
edb(#   action   CHAR(100),
edb(#   state    CHAR(50)
edb(# );
CREATE TABLE
edb=# CREATE VIEW global_temp AS SELECT * FROM global_temp_backend;
CREATE VIEW
edb=# CREATE OR REPLACE FUNCTION global_temp_insert()
edb-# RETURNS TRIGGER
edb-# LANGUAGE plpgsql
edb-# AS
edb-# $function$
edb$# BEGIN
edb$#     INSERT INTO global_local_temp_backend VALUES(NEW.*);
edb$#     RETURN NEW;
edb$#     EXCEPTION WHEN undefined_table THEN
edb$#       CREATE TEMP TABLE global_local_temp_backend () INHERITS (global_temp_backend)
edb$#        ON COMMIT DELETE ROWS;
edb$#       INSERT INTO global_local_temp_backend VALUES(NEW.*);
edb$#       RETURN NEW;
edb$# END;
edb$# $function$;
CREATE FUNCTION
edb=# CREATE TRIGGER insert_trigger_gt INSTEAD OF INSERT ON global_temp
edb$# FOR EACH ROW EXECUTE PROCEDURE global_temp_insert();
CREATE TRIGGER

Let’s insert some records into the Global Temporary Table and verify how it works:

edb=# BEGIN;
BEGIN
edb=# INSERT INTO global_temp VALUES (current_timestamp, 'node-1-request sended.', 'OK');
NOTICE:  merging column "ts" with inherited definition
NOTICE:  merging column "action" with inherited definition
NOTICE:  merging column "state" with inherited definition
INSERT 0 1
edb=# INSERT INTO global_temp VALUES (current_timestamp, 'node-2-request sended.', 'OK');
INSERT 0 1
edb=# INSERT INTO global_temp VALUES (current_timestamp, 'node-1-answer received.', 'Failed');
INSERT 0 1
edb=# INSERT INTO global_temp VALUES (current_timestamp, 'node-2-answer received.', 'OK');
INSERT 0 1
edb=# SELECT * FROM global_temp WHERE state = 'OK';
ts             |                                                action                                                |                       state
---------------------------+------------------------------------------------------------------------------------------------------+----------------------------------------------------
03-JUL-17 11:34:44.074603 | node-1-request sended.                                                                               | OK
03-JUL-17 11:34:44.079148 | node-2-request sended.                                                                               | OK
03-JUL-17 11:34:44.081194 | node-2-answer received.                                                                              | OK
(3 rows)
edb=#
edb=# COMMIT;
COMMIT
edb=# SELECT * FROM global_temp WHERE state = 'OK';
ts | action | state
----+--------+-------
(0 rows)

The above result is expected since, per the GTT definition (ON COMMIT DELETE ROWS), rows are deleted after commit.

If DBAs and developers want to create a Global Temporary Table with ON COMMIT PRESERVE ROWS, they can modify the above trigger definition to include the following:

CREATE TEMP TABLE global_local_temp_backend
(LIKE global_temp_backend INCLUDING ALL )
INHERITS (global_temp_backend)
ON COMMIT PRESERVE ROWS;
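
For a quick sanity check of the PRESERVE ROWS variant, the following minimal sketch (assuming the trigger function above has been recreated with ON COMMIT PRESERVE ROWS) shows the expected behavior:

BEGIN;
INSERT INTO global_temp VALUES (current_timestamp, 'node-1-request sent.', 'OK');
COMMIT;

-- With PRESERVE ROWS the row should still be visible in this session after COMMIT,
-- while other sessions will not see it.
SELECT * FROM global_temp;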

Partition pruning in EDB Postgres 9.5

One of my colleagues who was recently working with a customer presented a case in which the customer had a partitioned table and EDB Postgres was not applying partition pruning to the query. So, I thought I would blog about partition pruning so that EDB Postgres developers and DBAs can benefit.

EDB Postgres supports two types of partition pruning:

Constraint exclusion pruning:

Constraint exclusion is a feature introduced in PostgreSQL 8.1. This type of pruning works with PostgreSQL-style partitioning (table inheritance). With constraint exclusion enabled, the planner examines the constraints of each partition and tries to prove that the partition need not be scanned because it could not contain any rows meeting the query’s WHERE clause. When the planner can prove this, it excludes the partition from the query plan.
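
Constraint exclusion is controlled by the constraint_exclusion configuration parameter, whose standard values are off, on, and partition (the default, which applies the check only to queries on inheritance/partitioned tables). A quick way to check or change it for a session:

SHOW constraint_exclusion;

-- apply constraint exclusion to inheritance/partitioned tables only (default)
SET constraint_exclusion = partition;

-- or apply it to all queries
SET constraint_exclusion = on;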

However, it has some limitations. The following are the limitations of constraint_exclusion:

a. Constraint exclusion only works when the query’s WHERE clause contains constants (or externally supplied parameters). For example, a comparison against a non-immutable function such as CURRENT_TIMESTAMP cannot be optimized, since the planner cannot know which partition the function value might fall into at run time.
b. Keep the partitioning constraints simple, else the planner may not be able to prove that partitions don’t need to be visited. Use simple equality conditions for list partitioning, or simple range tests for range partitioning, as illustrated in the preceding examples. A good rule of thumb is that partitioning constraints should contain only comparisons of the partitioning column(s) to constants using B-tree-indexable operators.

For verification, the following shows the behavior of constraint_exclusion pruning:
1. Let's create a PostgreSQL-style partitioned table using the table inheritance feature:

CREATE TABLE measurement (
     city_id        int not null,
     logdate        date not null,
     peaktemp        int,
     unitsales      int
 );
CREATE TABLE measurement_y2004m02 (
     CHECK ( date_part('month'::text, logdate) = 2)
 ) INHERITS (measurement);
CREATE TABLE measurement_y2004m03 (
     CHECK ( date_part('month'::text, logdate) = 3 )
 ) INHERITS (measurement);

  2. Execute a simple query to verify the constraint_exclusion behavior based on the above definition:
 edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE date_part('month'::text, logdate) = 3;
                                    QUERY PLAN                                     
-----------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
         ->  Seq Scan on measurement_y2004m02
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
         ->  Seq Scan on measurement_y2004m03
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
(8 rows)

The above query output shows that EDB Postgres considered all partitions of the table measurement, even though we included the partition column and a constant value in the WHERE clause. This is due to the check constraint, which uses the date_part function. date_part is not immutable in Postgres, therefore at planning time EDB Postgres doesn't know what value it will return. And, if the user doesn't include a WHERE clause matching the check constraint, pruning will not work.

In Postgres, you can make a function immutable by using the ALTER FUNCTION command.

In the example below, we make the date_part function immutable to check whether constraint_exclusion works with an immutable date_part function:

  1. Convert the date_part function to immutable:
edb=# ALTER FUNCTION date_part (text, timestamp without time zone ) immutable;
ALTER FUNCTION
  2. Run the EXPLAIN command to check the behavior of constraint_exclusion with the immutable function:
edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE date_part('month'::text, logdate) = 3;
                                    QUERY PLAN
-----------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
         ->  Seq Scan on measurement_y2004m03
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
(6 rows)

As you can see, with the immutable function EDB Postgres was able to perform constraint_exclusion pruning.

What if we change the WHERE clause a little and use the < and = operators in our SQL queries? Below are examples:

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate < DATE '2004-03-01';
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m02
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
(8 rows)

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate = DATE '2004-02-01';
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate = '01-FEB-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m02
               Filter: (logdate = '01-FEB-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate = '01-FEB-04 00:00:00'::timestamp without time zone)
(8 rows)

As you can see, with a WHERE clause that doesn't match the way the constraints are defined on the partitions, Postgres scans all partitions.

Based on the above, we can conclude that users planning to use PostgreSQL-style partitioning have to be careful about the constraint definitions in order to utilize constraint_exclusion pruning.

Let's modify the definition of the measurement table and verify the >, <, >=, <= and = operators in the WHERE clause.

CREATE TABLE measurement (
     city_id        int not null,
     logdate        date not null,
     peaktemp        int,
     unitsales      int
 ); 
CREATE TABLE measurement_y2004m02 (
     CHECK ( logdate >= DATE '2004-02-01' AND logdate < DATE '2004-03-01' )
 ) INHERITS (measurement);
CREATE TABLE measurement_y2004m03 (
     CHECK ( logdate >= DATE '2004-03-01' AND logdate < DATE '2004-04-01' )
 ) INHERITS (measurement);

Below are the explain plans based on the above definition:

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate < DATE '2004-03-01';
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m02
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
(6 rows)

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate = DATE '2004-03-01';
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate = '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate = '01-MAR-04 00:00:00'::timestamp without time zone)
(6 rows)

The above clearly shows that with the correct constraint definition, constraint_exclusion pruning can work for the >, <, >=, <= and = operators in the WHERE clause.

Fast pruning:

EDB Postgres has had CREATE TABLE ... PARTITION BY syntax since version 9.1. Partitions created with this syntax benefit from an additional kind of pruning called fast pruning. Fast pruning uses the partition metadata and query predicates to efficiently reduce the set of partitions to scan, and it happens before query planning. Let's verify the behavior of fast pruning.
As mentioned, fast pruning works with partitions created using the EDB Postgres CREATE TABLE ... PARTITION BY syntax. Let's modify the above definition of the measurement table to use this syntax as given below:

CREATE TABLE  measurement (
     city_id        int not null,
     logdate        date not null,
     peaktemp        int,
     unitsales      int
 )
PARTITION BY RANGE(logdate)
(PARTITION y2004m01 VALUES LESS THAN ('2004-02-01'),
 PARTITION y2004m02 VALUES LESS THAN ('2004-03-01'),
 PARTITION y2004m03 VALUES LESS THAN ('2004-04-01')
);

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate < DATE '2004-03-01';
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m01
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m02
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
(8 rows)

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate = DATE '2004-03-01';
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate = '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate = '01-MAR-04 00:00:00'::timestamp without time zone)
(6 rows)

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate > DATE '2004-03-01';
                                     QUERY PLAN                                      
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate > '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate > '01-MAR-04 00:00:00'::timestamp without time zone)
(6 rows)

For more information on EDB Postgres pruning, please refer to the following link:
https://www.enterprisedb.com/docs/en/9.5/oracompat/Database_Compatibility_for_Oracle_Developers_Guide.1.327.html#

Postgres and Transparent Data Encryption (TDE)

Security has always been a great concern for enterprises, especially if you have crucial information stored in the database; you would always prefer to have strong security around it. Over the years, technologies have evolved and provided better solutions for this.

If you have very sensitive information, you try to keep it encrypted so that, in case somebody gains access to the system, they cannot view it unless they are authorized.

For managing sensitive information, Enterprises use multiple methods:

  1. Encrypting specific information.

If you are a PPAS user, you can use the DBMS_CRYPTO package, which provides a way of encrypting sensitive information in databases.

For more information, please refer to the following link:

http://www.enterprisedb.com/docs/en/9.4/oracompat/Database_Compatibility_for_Oracle_Developers_Guide.1.178.html#

For PostgreSQL, users can use the pgcrypto module.
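
As a simple illustration (a sketch only; the table and key names below are hypothetical), pgcrypto's symmetric functions can encrypt an individual column value:

CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- hypothetical table holding an encrypted card number
CREATE TABLE customer_secrets (id serial PRIMARY KEY, card_no bytea);

-- encrypt on the way in ...
INSERT INTO customer_secrets (card_no)
VALUES (pgp_sym_encrypt('4111-1111-1111-1111', 'my_secret_key'));

-- ... and decrypt on the way out, only when the key is supplied
SELECT pgp_sym_decrypt(card_no, 'my_secret_key') AS card_no FROM customer_secrets;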

  2. Transparent Data Encryption (TDE) is another method, employed by both Microsoft and Oracle, to encrypt database files. TDE offers encryption at the file level. This method solves the problem of protecting data at rest, i.e., encrypting databases both on the hard drive and consequently on backup media. Enterprises typically employ TDE to solve compliance issues such as PCI DSS.

Postgres Plus currently doesn’t have built-in TDE; however, if enterprises are looking for encryption at the database file level, they can use one of the following methods for protecting data at rest:

  1. Full Disk Encryption:

Full disk or partition encryption is one of the best ways of protecting your data. This method not only protects each file but also protects the temporary storage that may contain parts of these files. Full disk encryption protects all of your files, so you do not have to worry about selecting what you want to protect and possibly missing a file.

RHEL (Red Hat) supports Linux Unified Key Setup-on-disk-format (LUKS). LUKS bulk-encrypts hard drive partitions.

For more information on LUKS, please refer to the following link:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/chap-Security_Guide-Encryption.html#sect-Security_Guide-LUKS_Disk_Encryption
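
As a rough sketch (the device name and mount point below are placeholders, not from any particular environment), preparing a LUKS-encrypted partition for a Postgres data directory typically looks like this:

# format the partition with LUKS and set its passphrase (this destroys existing data)
cryptsetup luksFormat /dev/sdb1

# open the encrypted partition under a mapper name and create a filesystem on it
cryptsetup luksOpen /dev/sdb1 pgdata_crypt
mkfs.ext4 /dev/mapper/pgdata_crypt

# mount it where the database cluster will live
mount /dev/mapper/pgdata_crypt /ppas94/data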

  2. File system-level encryption:

File system-level encryption is often called file/directory encryption. In this method, individual files or directories are encrypted by the file system itself.

Stackable cryptographic file system encryption is available, which users can utilize in their environments.

File system-level encryption gives the following advantages:

  1. Flexible file-based key management, so that each file can be, and usually is, encrypted with a separate encryption key.
  2. Individual management of encrypted files, e.g., incremental backups of individual changed files even in encrypted form, rather than a backup of the entire encrypted volume.
  3. Access control can be enforced through the use of public-key cryptography, and the fact that cryptographic keys are only held in memory while the file decrypted by them is held open.

Stackable cryptographic file system encryption can be used with Postgres for Transparent Data Encryption.

In this blog, I will discuss using mount ecryptfs, as it requires less setup overhead (LUKS requires a new disk to be configured and formatted before storing data on it, while “mount ecryptfs” works with existing directories and data).

If enterprises want to give DBAs control over TDE, they can define a few sudo rules that allow DBAs to execute the encryption commands.

The following is a method they can use:

  • Ask the system admin to create sudo rules that allow the DBA to encrypt the data directory for Postgres Plus. One common way to do this is using the “mount ecryptfs” command on Linux operating systems.
  • If the user needs to encrypt the /ppas94/data directory, they can use the following command:
sudo mount -t ecryptfs /ppas94/data /ppas94/data
        

More information can be found in the documentation from RHEL:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-efs.html

Users can also specify the encryption key type (passphrase, openssl), cipher (aes, des3_ede, ...), key byte size, and other options with the above command.

An example is given below:

# mount -t ecryptfs /home /home -o ecryptfs_unlink_sigs \
  ecryptfs_key_bytes=16 ecryptfs_cipher=aes ecryptfs_sig=c7fed37c0a341e19

CentOS 7 and RHEL 7 don’t come with ecryptfs by default; therefore, users can also use the encfs command.

For more information on encfs, please refer to the following link:

https://wiki.archlinux.org/index.php/EncFS

The following are the steps to use encfs to encrypt the data directory.

  1. Create a data directory using the following commands, as the enterprisedb user:
   mkdir /var/lib/ppas/9.4/encrypted_data
   chmod 700 /var/lib/ppas/9.4/encrypted_data
  2. Use the following encfs command to encrypt the data directory:
         encfs /var/lib/ppas-9.4/encrypted_data/ /var/lib/ppas-9.4/data

A snapshot of the above command is given below:

encfs /var/lib/ppas/9.4/encrypted_data /var/lib/ppas/9.4/data

The directory "/var/lib/ppas/9.4/data" does not exist. Should it be created? (y,n) y
Creating new encrypted volume.
Please choose from one of the following options:
 enter "x" for expert configuration mode,
 enter "p" for pre-configured paranoia mode,
 anything else, or an empty line will select standard mode.
?> p

Paranoia configuration selected.

Configuration finished.  The filesystem to be created has
the following properties:
Filesystem cipher: "ssl/aes", version 3:0:2
Filename encoding: "nameio/block", version 3:0:1
Key Size: 256 bits
Block Size: 1024 bytes, including 8 byte MAC header
Each file contains 8 byte header with unique IV data.
Filenames encoded using IV chaining mode.
File data IV is chained to filename IV.
File holes passed through to ciphertext.

-------------------------- WARNING --------------------------
The external initialization-vector chaining option has been
enabled.  This option disables the use of hard links on the
filesystem. Without hard links, some programs may not work.
The programs 'mutt' and 'procmail' are known to fail.  For
more information, please see the encfs mailing list.
If you would like to choose another configuration setting,
please press CTRL-C now to abort and start over.

Now you will need to enter a password for your filesystem.
You will need to remember this password, as there is absolutely
no recovery mechanism.  However, the password can be changed
later using encfsctl.

New Encfs Password: 
Verify Encfs Password: 

  3. After encrypting the data directory, users also need to modify the postgresql-<version> service script to include the proper command for supplying the password. For that, they can either use sshpass or write their own program that passes the password for mounting the directory.
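
One hedged approach (the paths below are placeholders) is to have the service script mount the encrypted directory before starting the database, reading the passphrase from a root-only file via encfs's --extpass option:

# hypothetical snippet for the "start" section of the postgresql-<version> service script;
# /etc/ppas/encfs.pass is assumed to be a root-readable file holding the encfs passphrase
if ! mountpoint -q /var/lib/ppas/9.4/data; then
    encfs --extpass="cat /etc/ppas/encfs.pass" \
        /var/lib/ppas/9.4/encrypted_data /var/lib/ppas/9.4/data
fi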

As you can see, achieving Transparent Data Encryption in Postgres is very easy.

Compiling PLV8 with Postgres Plus Advanced Server

PLV8 is a programming language that lets users write stored procedures and triggers in JavaScript and store them in their Postgres database. This allows application programmers to write a lot of their server-side programming in the same language they use to build their web client applications. Fewer languages to learn usually means fewer mistakes and faster time to completion. The extensive language support is one of many reasons why Postgres’ use across the world is increasing lately. The recent addition of document data support with JSON and JSONB data types in PostgreSQL, and in Postgres Plus Advanced Server from EnterpriseDB, is the main reason for the increasing interest in the PL/V8 language extension.

Below are the steps you need to compile PLV8 with Postgres Plus Advanced Server 9.3/9.4.

To get started, here are the prerequisites:
1. A supported version of PostgreSQL or Postgres Plus Advanced Server, such as versions 9.1 and higher.
2. V8 version 3.14.5
3. g++ version 4.5.1

If you want to know more about V8, you can visit the following wiki page:
http://en.wikipedia.org/wiki/V8_(JavaScript_engine)

It’s important to note that when compiling PLV8 with Postgres Plus Advanced Server 9.3 or the upcoming 9.4, you will get the following two types of error messages:

The first error:

[root@localhost plv8js]# make
sed -e 's/^#undef PLV8_VERSION/#define PLV8_VERSION "1.5.0-dev1"/' plv8_config.h.in > plv8_config.h
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8.o plv8.cc
plv8.cc: In function ‘void _PG_init()’:
plv8.cc:226: error: invalid conversion from ‘void (*)(XactEvent, void*)’ to ‘void (*)(XactEvent, void*, bool)’
plv8.cc:226: error:   initializing argument 1 of ‘void RegisterXactCallback(void (*)(XactEvent, void*, bool), void*)’
make: *** [plv8.o] Error 1

The above error message is a result of a different signature of

typedef void (*XactCallback)

in the Advanced Server transaction system.

To fix the above issue, the user can replace the following in plv8.cc:

static void plv8_xact_cb(XactEvent event, void *arg);

With

static void plv8_xact_cb(XactEvent event, void *arg, bool spl_context);

The second error:
After making the above changes, you may get the following error after trying to compile the source code using the “make” command:

[root@localhost plv8js]# make
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8.o plv8.cc
plv8.cc:137: warning: ‘void plv8_xact_cb(XactEvent, void*, bool)’ used but never defined
plv8.cc:232: warning: ‘void plv8_xact_cb(XactEvent, void*)’ defined but not used
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8_type.o plv8_type.cc
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8_func.o plv8_func.cc
In file included from plv8_param.h:11,
                 from plv8_func.cc:9:
/usr/ppas-9.4/include/server/nodes/params.h:77: error: expected ‘,’ or ‘...’ before ‘typeid’
make: *** [plv8_func.o] Error 1

The above is mainly due to the use of typeid in params.h; typeid is a reserved keyword in C++.

To fix this issue, replace the following in plv8.h:

extern "C" {
#include "postgres.h"

#include "access/htup.h"
#include "fmgr.h"
#include "mb/pg_wchar.h"
#include "utils/tuplestore.h"
#include "windowapi.h"
}

with

#define typeid __typeid
extern "C" {
#include "postgres.h"

#include "access/htup.h"
#include "fmgr.h"
#include "mb/pg_wchar.h"
#include "utils/tuplestore.h"
#include "windowapi.h"
}
#undef typeid

In plv8_param.h, change the following:

extern "C" {
#include "postgres.h"

/*
 * Variable SPI parameter is since 9.0.  Avoid include files in prior versions,
 * as they contain C++ keywords.
 */
#include "nodes/params.h"
#if PG_VERSION_NUM >= 90000
#include "parser/parse_node.h"
#endif	// PG_VERSION_NUM >= 90000

} // extern "C"

With

#define typeid __typeid
extern "C" {
#include "postgres.h"

/*
 * Variable SPI parameter is since 9.0.  Avoid including files in prior versions,
 * as they contain C++ keywords.
 */
#include "nodes/params.h"
#if PG_VERSION_NUM >= 90000
#include "parser/parse_node.h"
#endif	// PG_VERSION_NUM >= 90000

} // extern "C"
#undef typeid

In plv8_param.cc, replace the following:

extern "C" {

#include "catalog/pg_type.h"
#include "utils/builtins.h"
#include "utils/lsyscache.h"

} // extern "C"

with

#define typeid __typeid
extern "C" {

#include "catalog/pg_type.h"
#include "utils/builtins.h"
#include "utils/lsyscache.h"

} // extern "C"
#undef typeid

After making the above changes, you will be able to compile PLV8 with Advanced Server as shown below:

[root@localhost plv8js]# make
sed -e 's/^#undef PLV8_VERSION/#define PLV8_VERSION "1.5.0-dev1"/' plv8_config.h.in > plv8_config.h
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8.o plv8.cc
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8_type.o plv8_type.cc
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8_func.o plv8_func.cc
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8_param.o plv8_param.cc
echo "extern const unsigned char coffee_script_binary_data[] = {" >coffee-script.cc
(od -txC -v coffee-script.js | \
	sed -e "s/^[0-9]*//" -e s"/ \([0-9a-f][0-9a-f]\)/0x\1,/g" -e"\$d" ) >>coffee-script.cc
echo "0x00};" >>coffee-script.cc
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o coffee-script.o coffee-script.cc
echo "extern const unsigned char livescript_binary_data[] = {" >livescript.cc
(od -txC -v livescript.js | \
	sed -e "s/^[0-9]*//" -e s"/ \([0-9a-f][0-9a-f]\)/0x\1,/g" -e"\$d" ) >>livescript.cc
echo "0x00};" >>livescript.cc
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o livescript.o livescript.cc
g++ -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -shared -o plv8.so plv8.o plv8_type.o plv8_func.o plv8_param.o coffee-script.o livescript.o -L/usr/ppas-9.4/lib -L/usr/lib64 -Wl,--as-needed -Wl,-rpath,'/usr/ppas-9.4/lib',--enable-new-dtags  -lv8 
sed -e 's/@PLV8_VERSION@/1.5.0-dev1/g' plv8.control.common | g++ -E -P -DLANG_plv8 - > plv8.control
sed -e 's/@LANG_NAME@/plv8/g' plv8.sql.common | g++ -E -P -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -DLANG_plv8 - > plv8--1.5.0-dev1.sql
sed -e 's/@PLV8_VERSION@/1.5.0-dev1/g' plv8.control.common | g++ -E -P -DLANG_plcoffee - > plcoffee.control
sed -e 's/@LANG_NAME@/plcoffee/g' plv8.sql.common | g++ -E -P -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -DLANG_plcoffee - > plcoffee--1.5.0-dev1.sql
sed -e 's/@PLV8_VERSION@/1.5.0-dev1/g' plv8.control.common | g++ -E -P -DLANG_plls - > plls.control
sed -e 's/@LANG_NAME@/plls/g' plv8.sql.common | g++ -E -P -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -DLANG_plls - > plls--1.5.0-dev1.sql
/bin/mkdir -p '/usr/ppas-9.4/lib'
/bin/mkdir -p '/usr/ppas-9.4/share/extension'
/bin/mkdir -p '/usr/ppas-9.4/share/extension'
/usr/bin/install -c -m 755  plv8.so '/usr/ppas-9.4/lib/plv8.so'
/usr/bin/install -c -m 644 plv8.control '/usr/ppas-9.4/share/extension/'
/usr/bin/install -c -m 644 plv8.control plv8--1.5.0-dev1.sql plcoffee.control plcoffee--1.5.0-dev1.sql plls.control plls--1.5.0-dev1.sql '/usr/ppas-9.4/share/extension/'

After compiling PLV8, you can now install the PLV8 language in Advanced Server using the following command:

beta=# CREATE EXTENSION PLV8;
CREATE EXTENSION
beta=# 

To test your installed PLV8, here is some sample code:

beta=# DO $$ plv8.elog(NOTICE, 'this', 'is', 'inline', 'code') $$ LANGUAGE plv8;
NOTICE:  this is inline code
DO

beta=# CREATE TYPE rec AS (i integer, t text);
CREATE TYPE
beta=# CREATE FUNCTION set_of_records() RETURNS SETOF rec AS
beta-# $$
beta$#     // plv8.return_next() stores records in an internal tuplestore,
beta$#     // and returns all of them at the end of the function.
beta$#     plv8.return_next( { "i": 1, "t": "a" } );
beta$#     plv8.return_next( { "i": 2, "t": "b" } );
beta$# 
beta$#     // You can also return records with an array of JSON.
beta$#     return [ { "i": 3, "t": "c" }, { "i": 4, "t": "d" } ];
beta$# $$
beta-# LANGUAGE plv8;
CREATE FUNCTION
beta=# SELECT * FROM set_of_records();
 i | t 
---+---
 1 | a
 2 | b
 3 | c
 4 | d
(4 rows)

In case you need a patched version of PLV8, use the following git repository:

 https://github.com/vibhorkum/PLV8_FOR_PPAS.git

To use this, execute the following command:

 git clone https://github.com/vibhorkum/PLV8_FOR_PPAS.git plv8_ppas
cd plv8_ppas
make
make install

To test the compiled PLV8, you can use the following command:

[root@localhost plv8_ppas]# make installcheck
/usr/ppas-9.4/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress --inputdir=./ --psqldir='/usr/ppas-9.4/bin'    --dbname=contrib_regression init-extension plv8 inline json startup_pre startup varparam json_conv window dialect
(using postmaster on Unix socket, port 5444)
============== dropping database "contrib_regression" ==============
DROP DATABASE
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== running regression test queries        ==============
test init-extension           ... ok
test plv8                     ... ok
test inline                   ... ok
test json                     ... ok
test startup_pre              ... ok
test startup                  ... ok
test varparam                 ... ok
test json_conv                ... ok
test window                   ... ok
test dialect                  ... ok

======================
 All 10 tests passed. 
======================

Meet BART – A New Tool for Backup And Recovery Management

EnterpriseDB recently launched a new tool for backup and recovery – named simply EDB Backup and Recovery Tool, or BART. This tool makes the DBA’s life easier by simplifying the tasks for managing their Postgres physical backup and recovery tasks, whether they are PostgreSQL or Postgres Plus Advanced Server deployments.

BART has the following advantages over custom scripts for managing backups:

1. It’s stable and it uses the tool pg_basebackup to take a physical backup. This tool has been well defined and is well-supported by the PostgreSQL community.

2. It catalogs all of the backups users are taking, which is important in terms of:
    i. Listing the type of backups used
   ii. Listing the status of those backups with server information.

3. BART also provides functionality to restore backups, with all required archived WAL files. So automation around this tool will make DBAs’ lives easier for restore and recovery.

4. BART provides an option to validate your backup by using checksum. This is useful for confirming you took a valid backup and it is not corrupted at disk level.

5. BART provides an option to define your retention policy around the backups you are keeping.

Given all of the above advantages, I decided to give this new tool a try and share some tips. To get started, you need the following prerequisites:

1. BART currently requires a Linux 64 bit platform, CentOS 6.x or RHEL 6.x
2. Password-less, direct SSH access to the target machine where you want to restore backups, as well as to the database servers you want backed up
3. The Postgres Plus Advanced Server or PostgreSQL binaries installed for pg_basebackup

Yum or rpm

To install this tool, you have two options that I will explore below:

1. Yum command
2. Rpm command.

Using the yum command:

To perform a yum command installation, BART users can ask EDB for credentials to the EnterpriseDB yum repository and configure their local yum repository as follows:

echo "[tools]
name=EnterpriseDB Tools
baseurl=http://username:password@yum.enterprisedb.com/tools/redhat/rhel-$releasever-$basearch
enabled=1
gpgcheck=0" > /etc/yum.repos.d/edbtools.repo

After creating the yum repo, the user can execute the following command to install BART:

 yum install edb-bart

If the user doesn’t want to install the EDB Backup and Recovery Tool using the yum command, they can download a standalone rpm from EDB’s website using the link below:

http://www.enterprisedb.com/downloads/postgres-postgresql-downloads

and then enter the rpm install command as follows:

rpm -ivh edb-bart-1.0.1-1.rhel6.x86_64.rpm

After installing BART using the above commands, the user can find the binaries in the directory /usr/edb-bart-1.0/bin and a sample BART configuration file in /usr/edb-bart-1.0/etc.

That’s a very easy installation.

For more information on configuring BART Host and Database Host, the following are some documents that will help:
1. pg_basebackup configuration for PostgreSQL:
http://www.postgresql.org/docs/current/static/app-pgbasebackup.html

2. For direct password-less SSH configuration, users can refer to the following link:
http://www.enterprisedb.com/docs/en/1.0/bart/EDB_Backup_and_Recovery_Tool_Guide-17.htm#P1008_76316

After the installation of the BART binaries, the user also has to create a BART configuration file.

The following is a sample configuration file for BART:

[BART]
bart-host= enterprisedb@127.0.0.1
backup_path = /opt/backup
pg_basebackup_path = /usr/ppas-9.4/bin/pg_basebackup
logfile = /tmp/bart.log

[PG]
host = 127.0.0.1
port = 5432
user = postgres
description = "Postgres server"

[PPAS94]
host = 127.0.0.1
port = 5444
user = enterprisedb
description = "PPAS 94 server"

Global Configuration Settings

Content under the [BART] tag is called global configuration settings. Under this tag are the following:

1. bart-host: the IP address of the host on which BART is installed. The value for this parameter must be specified in the form: bart_user@bart_host_address, where bart_user is the operating system user account on the BART host that is used to run BART and owns the BART backup catalog directory. bart_host_address is the IP address of the BART host.

2. backup_path: specifies the file system parent directory where all BART database server base backups and archived WAL files are stored. This parameter is required.

3. pg_basebackup_path: specifies the path to the pg_basebackup program of the Postgres database server installed on the BART host.

4. logfile: specifies the path to the BART log file. This parameter is optional. If no path to a log file is specified after logfile =, or if the parameter is commented out, BART does not create a log file.

The remaining part of the configuration file is self-explanatory. The [PG]/[PPAS94] tags contain the settings for the servers the user wants to back up.

Pg_basebackup Settings

After performing the above configuration on the backup server, the user has to configure the following settings on the servers they want to back up. Below are the settings for enabling backups using pg_basebackup.

The user has to set a few parameters in the postgresql.conf file of the instance they want to back up (a sample snippet is shown after the list below):

1. wal_level parameter to archive or hot_standby.
2. archive_mode=on
3. archive_command setting.
4. max_wal_senders to 1 or more, since pg_basebackup uses the replication protocol to copy the data directory.
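
A minimal postgresql.conf sketch of these settings might look like the following (the archive directory is only a placeholder; adjust it to your environment):

wal_level = hot_standby                          # or 'archive'
archive_mode = on
archive_command = 'cp %p /opt/archivedir/%f'     # placeholder archive location
max_wal_senders = 3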

For more information on each setting please refer to the following:
1. wal_level:
http://www.postgresql.org/docs/9.4/static/runtime-config-wal.html

2. archive_mode and archive_command:
http://www.postgresql.org/docs/9.4/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVING

3. max_wal_senders:
http://www.postgresql.org/docs/9.4/static/runtime-config-replication.html
http://www.enterprisedb.com/docs/en/1.0/bart/EDB_Backup_and_Recovery_Tool_Guide-19.htm#TopOfPage

With the above settings, the user then needs to update the pg_hba.conf file for the replication connection.
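
For instance, a pg_hba.conf entry allowing the BART host to open a replication connection could look like the line below (the user name and address are placeholders; match them to your setup):

# TYPE  DATABASE      USER           ADDRESS          METHOD
host    replication   enterprisedb   127.0.0.1/32     md5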

Note: The above settings are for pg_basebackup to take backups using the replication protocol. In case users need more information about pg_basebackup and its settings, please use the above-mentioned links.

How BART Works

Now that we have configured both servers, let’s have a look at how BART works.

The following command executes a backup:

 bart -c bart.cfg BACKUP -s ppas94

And below is the output:

[bart@localhost ~]$ bart -c bart.cfg BACKUP -s ppas94

INFO:  creating backup for server 'ppas94'
INFO:  backup identifier: '1413852137762'
6394456/6394456 kB (100%), 1/1 tablespace

INFO:  backup checksum: 7f49ea9653511308710c174f22ec765d
INFO:  backup completed successfully
[bart@localhost ~]$ 

That was an easy way to take a backup. A DBA can also create a scheduled job that executes the above command to take backups.
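
For example, a simple cron entry (the paths are assumed from the default installation above) that takes a nightly backup at 2 AM might look like this:

# m h dom mon dow  command
0 2 * * * /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg BACKUP -s ppas94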

If the user wants to list the backups using BART, they can use the SHOW-BACKUPS option:

[bart@localhost ~]$ bart -c bart.cfg SHOW-BACKUPS -s ppas94
 Server Name   Backup ID       Backup Time           Backup Size  
                                                                  
 ppas94        1413852137762   2014-10-20 17:43:41   6244.59 MB   

This is useful for knowing what backups a user has available for recovery. The above command gives important information:

1.	Backup ID: It’s a unique ID for the physical backup
2.	Backup Time: Time when backup was taken
3.	Backup Size: Size of backup

This information is useful when a user wants to plan for recovery using backup. This way, the user can also plan for disk size.

Sometimes a user wants to verify their backup state. The VERIFY-CHKSUM option is useful in this case:

[bart@localhost ~]$ bart -c bart.cfg VERIFY-CHKSUM -s ppas94 -i 1413852137762
 Server Name   Backup ID       Verify  
                                       
 ppas94        1413852137762   OK      

I have to say, after putting EDB BART through its paces, I think DBAs will enjoy having such a great tool for making Backup Management easy.

In my next post, I will blog about the Recovery process.

Monitoring approach for Streaming Replication with Hot Standby in PostgreSQL 9.3.

The people using PostgreSQL and the Streaming Replication feature seem to ask many of the same questions:

1. How best to monitor Streaming Replication?

2. What is the best way to do that?

3. Are there alternatives, when monitoring on Standby, to using the pg_stat_replication view on Master?

4. How should I calculate replication lag-time, in seconds, minutes, etc.?

In light of these commonly asked questions, I thought a blog would help. The following are some methods I’ve found to be useful.

Monitoring is critical for large infrastructure deployments where you have Streaming Replication for:

1. Disaster recovery

2. High availability

3. Load balancing, when using Streaming Replication with Hot Standby

PostgreSQL has some building blocks for replication monitoring, and the following are some important functions and views which can be used for monitoring replication:

1. pg_stat_replication view on master/primary server.

This view helps in monitoring the standby from the master. It gives you the following details:

   pid:              Process ID of the WAL sender process
   usesysid:         OID of the user used for streaming replication
   usename:          Name of the user used for streaming replication
   application_name: Application name connected to the master
   client_addr:      Address of the standby/streaming replication client
   client_hostname:  Hostname of the standby
   client_port:      TCP port number on which the standby communicates with the WAL sender
   backend_start:    Start time when the standby connected to the master
   state:            Current WAL sender state, i.e. streaming
   sent_location:    Last transaction location sent to the standby
   write_location:   Last transaction written to disk at the standby
   flush_location:   Last transaction flushed to disk at the standby
   replay_location:  Last transaction replayed at the standby
   sync_priority:    Priority of the standby server being chosen as a synchronous standby
   sync_state:       Sync state of the standby (async or sync)

e.g.:

postgres=# select * from pg_stat_replication ;
-[ RECORD 1 ]----+---------------------------------
pid              | 1114
usesysid         | 16384
usename          | repuser
application_name | walreceiver
client_addr      | 172.17.0.3
client_hostname  |
client_port      | 52444
backend_start    | 15-MAY-14 19:54:05.535695 -04:00
state            | streaming
sent_location    | 0/290044C0
write_location   | 0/290044C0
flush_location   | 0/290044C0
replay_location  | 0/290044C0
sync_priority    | 0
sync_state       | async

2. pg_is_in_recovery(): Function that tells whether the standby is still in recovery mode or not.

e.g.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)

3. pg_last_xlog_receive_location(): Function that gives the location of the last transaction log record streamed by the standby and written to the standby's disk.

e.g.

postgres=# select pg_last_xlog_receive_location();
 pg_last_xlog_receive_location
-------------------------------
 0/29004560
(1 row)

4. pg_last_xlog_replay_location(): Function that gives the location of the last transaction replayed during the recovery process. An example is given below:

postgres=# select pg_last_xlog_replay_location();
 pg_last_xlog_replay_location
------------------------------
 0/29004560
(1 row)

5. pg_last_xact_replay_timestamp(): This function gives the timestamp of the last transaction replayed during recovery. Below is an example:

postgres=# select pg_last_xact_replay_timestamp();
  pg_last_xact_replay_timestamp
----------------------------------
 15-MAY-14 20:54:27.635591 -04:00
(1 row)

The above are some important functions/views already available in PostgreSQL for monitoring streaming replication.

So, the logical next question is, “What’s the right way to monitor the Hot Standby with Streaming Replication on Standby Server?”

If you have Hot Standby with Streaming Replication, the following are the points you should monitor:

1. Check if your Hot Standby is in recovery mode or not:

For this, you can use the pg_is_in_recovery() function.

2. Check whether Streaming Replication is working or not.

An easy way of doing this is to check the pg_stat_replication view on the master/primary. This view returns rows on the master only if Streaming Replication is working.

3. Check whether Streaming Replication has stopped and the Hot Standby is recovering from archived WAL files.

For this, the DBA can either use the PostgreSQL log file or utilize the following functions provided in PostgreSQL 9.3:

pg_last_xlog_replay_location();
pg_last_xact_replay_timestamp();

4. Check how far the Standby is behind the Master.

There are two ways to monitor the lag for a Standby.

   i. Lag in bytes: For calculating the lag in bytes, users can use the pg_stat_replication view on the master with the pg_xlog_location_diff function. Below is an example expression:

pg_xlog_location_diff(pg_stat_replication.sent_location, pg_stat_replication.replay_location)

which gives the lag in bytes (a complete query is shown after point ii below).

  ii. Lag in seconds: The following is the SQL most people use to find the lag in seconds:

   SELECT CASE WHEN pg_last_xlog_receive_location() = pg_last_xlog_replay_location()
                 THEN 0
               ELSE EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp())
          END AS log_delay;
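
Putting point (i) together into a single query against the master, a sketch of the per-standby lag in bytes looks like this:

SELECT application_name,
       client_addr,
       pg_xlog_location_diff(sent_location, replay_location) AS byte_lag
  FROM pg_stat_replication;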

Including the above into your repertoire can give you good monitoring for PostgreSQL.

In a future post, I will include a script that can be used for monitoring a Hot Standby with PostgreSQL streaming replication.

New in PostgreSQL 9.3: New in Functions

Continuing the series of blogs about new features in 9.3, today I thought I would write about the new functions and improvements coming in PostgreSQL.

Let's look at what's new in 9.3 in terms of built-in functions:

1. New array functions for one-dimensional arrays

PostgreSQL 9.3 comes with new functions that help users manipulate one-dimensional arrays by calling simple functions instead of crafting their own functions and methods to do the modification.

i. array_remove function

This is a new function added in 9.3, which provides the ability to remove elements from an array. The function takes two arguments:
a. the one-dimensional array from which the user wants to remove elements
b. the element value the user wants to remove

The syntax of this function is given below:

ARRAY_REMOVE(<one dimensional array>, element)

An example of array_remove is given below:

postgres=# select array_remove(ARRAY['First','Second','Delete','Four'],'Delete');
    array_remove     
---------------------
 {First,Second,Four}
(1 row)

ii. array_replace function

This is new in 9.3; it helps the user replace any element in an array. This function can be used for multi-dimensional arrays or for single-dimensional arrays.
An example of array_replace is given below:

pgsqltest=# select array_replace(ARRAY[ARRAY[1,2],ARRAY[3,4],ARRAY[56,6],ARRAY[7,8]],56,5);
       array_replace       
---------------------------
 {{1,2},{3,4},{5,6},{7,8}}
(1 row)

2. VARIADIC-labeled arguments expansion for concat and format functions

This is a new addition to the concat and format functions. Pre-9.3, the concat and format functions used to ignore the VARIADIC label and didn't print the right output. However, this has been fixed in 9.3. Let's look at examples from pre-9.3 and 9.3.

In pre-9.3

worktest=# select concat(variadic array[1,2,3]);
 concat  
---------
 {1,2,3}
(1 row)

In 9.3

pgsqltest=# select concat(variadic array[1,2,3]);
 concat 
--------
 123
(1 row)

You can see above that in 9.3 the VARIADIC label works properly.

Let's look at the format() function:

In pre-9.3

worktest=# select format(string_agg('%s',','), variadic array_agg(i)) from generate_series(1,3) g(i);
ERROR:  too few arguments for format

Wow, it errors out. However, this is fixed in 9.3 :-).

In 9.3

pgsqltest=# select format(string_agg('%s',','), variadic array_agg(i)) from generate_series(1,3) g(i);
 format 
--------
 1,2,3
(1 row)

3. Improvement in format() function to handle field width and left/right alignment

This improvement, added in 9.3, is going to increase the usability of the format() function for developers. In 9.3, the format function comes with the following enhancements:
i). Proper handling of field width.
ii). Proper handling of left/right alignment.

The above two improvements were missing in pre-9.3. Their addition makes the format() function behave like the C function sprintf().

Let's look at an example.
In pre-9.3, using a field width resulted in an error message, as given below:

worktest=#  select format('>>%10s<<', 'Hello');
ERROR:  unterminated conversion specifier
worktest=# 

However, 9.3 comes with proper field width support.

pgsqltest=#  select format('>>%10s<<', 'Hello');
     format     
----------------
 >>     Hello<<
(1 row)

9.3 also comes with proper handling of left/right alignment. Examples are given below:
Left alignment

pgsqltest=# select format('>>%-10s<<', 'Hello');
     format     
----------------
 >>Hello     <<
(1 row)

Right alignment

pgsqltest=# select format('>>%1$10s<<', 'Hello');
     format     
----------------
 >>     Hello<<
(1 row)

4. Proper handling of Negative century in to_char, to_date and to_timestamp functions

In pre-9.3, the behavior of the following functions for negative centuries was wrong or inconsistent:
a. to_char
b. to_date
c. to_timestamp

However, 9.3 comes with proper handling for negative centuries. Let's look at the output in pre-9.3 and in 9.3 for each function.

pre-9.3

worktest=# select to_timestamp('4713-01-08 BC','YYYY-DD-MM bc');
         to_timestamp         
------------------------------
 01-AUG-13 00:00:00 -04:56:02
(1 row)

Above, you can see that it displays the wrong result for BC. In 9.3:

pgsqltest=# select to_timestamp('4713-01-08 BC','YYYY-DD-MM bc');
          to_timestamp           
---------------------------------
 4713-08-01 00:00:00-04:56:02 BC
(1 row)

Let's look at the to_date function. In 9.2/pre-9.3:

worktest=# SELECT to_date('4713-01-08 BC','YYYY-DD-MM bc');
ERROR:  full year must be between -4712 and +9999, and not be 0
worktest=# 

In 9.3

pgsqltest=# SELECT to_date('4713-01-08 BC','YYYY-DD-MM bc');
    to_date    
---------------
 4713-08-01 BC
(1 row)

Above, you can see that in 9.3 the to_date function works right, while to_date in 9.2 wasn't able to handle it. You can see similar behavior for the to_char function.

5. Improvement in pg_get_viewdef() to print new line after each SELECT and FROM entry

This is an enhancement made to the pg_get_viewdef function in 9.3, and users will be able to see this enhancement in pg_dump plain-format dumps too.

This enhancement improves the readability of the view definition output in 9.3 and reduces the line length of the view definition. Let's look at how this improvement makes the experience of viewing a view definition much better.

Before 9.3 or in pre-9.3

worktest=# select pg_get_viewdef('pg_tables'::regclass);
                                                                                                    
                                                                                  pg_get_viewdef    
                                                                                                    
                                                                              
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------
 SELECT n.nspname AS schemaname, c.relname AS tablename, pg_get_userbyid(c.relowner) AS tableowner, 
t.spcname AS tablespace, c.relhasindex AS hasindexes, c.relhasrules AS hasrules, c.relhastriggers AS
 hastriggers FROM ((pg_class c LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) LEFT JOIN pg_
tablespace t ON ((t.oid = c.reltablespace))) WHERE (c.relkind = 'r'::"char");
(1 row)

Above, you can see that pg_get_viewdef printed the definition of the view in one line, which not only makes the view definition difficult to read but also increases the line length.

Let's look at pg_get_viewdef in 9.3:

pgsqltest=# select pg_get_viewdef('pg_tables'::regclass);
                        pg_get_viewdef                        
--------------------------------------------------------------
  SELECT n.nspname AS schemaname,                            +
     c.relname AS tablename,                                 +
     pg_get_userbyid(c.relowner) AS tableowner,              +
     t.spcname AS tablespace,                                +
     c.relhasindex AS hasindexes,                            +
     c.relhasrules AS hasrules,                              +
     c.relhastriggers AS hastriggers                         +
    FROM ((pg_class c                                        +
    LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))  +
    LEFT JOIN pg_tablespace t ON ((t.oid = c.reltablespace)))+
   WHERE (c.relkind = 'r'::"char");
(1 row)

which seems better to me in terms of readability and doesn't have a long line.

Enjoy!!