
New in PostgreSQL 9.3: Client Application improvements

It has been a long, long time since I blogged. So, I have decided that whenever I get a chance, I will blog about the cool things going on in the database technology market. I have a lot of topics to share; some got lost in the air, some I retained in mind. Before I lose more of them, I have decided to continue from the PostgreSQL 9.3 features and then move on to new things as they start to pop up.

PostgreSQL users must have heard about the release of PostgreSQL 9.3, and they must have started testing it. In today's post of the series, I am going to blog about the client application improvements done in PostgreSQL 9.3. Let's take them one by one.

1. New binary/command pg_isready.

PostgreSQL 9.3 comes with a new binary called pg_isready. This command helps the user know the status of a server/PostgreSQL cluster. Before 9.3, users used to have their own tool/script to check the status of PostgreSQL: they would execute some arbitrary SQL like “SELECT 1”, and if it succeeded, PostgreSQL was running; if it failed, PostgreSQL was not.
However, the pg_isready utility/command is much better than the old methods, in the sense that it covers the following scenarios for checking the status of PostgreSQL 9.3:

   a. Check if PostgreSQL is ready and accepting connections.
   b. Check if PostgreSQL is ready but the connection is being rejected.
   c. Check if PostgreSQL is non-responsive.
   d. It provides an exit code and also has an option to be quiet about the message, which is useful for users who build their own health checks and messaging in an application around the exit code.

The different exit codes and messages used by pg_isready to report the status of PostgreSQL are given below:

1. exit code: 0
   Message: "host:port - accepting connections"
2. exit code: 1
   Message: "host:port - rejecting connections"
3. exit code: 2
   Message: "host:port - no response"

A usage example is given below:

[root@ip-10-159-51-181 ~]# /usr/pgsql-9.3/bin/pg_isready -h localhost
localhost:5445 - accepting connections
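
For example, a minimal health-check sketch built around these exit codes is given below (the path, host and port are assumptions; -q suppresses the status message):

#!/bin/bash
# Hypothetical health check built on pg_isready's exit code.
/usr/pgsql-9.3/bin/pg_isready -q -h localhost -p 5445
case $? in
    0) echo "PostgreSQL is accepting connections" ;;
    1) echo "PostgreSQL is rejecting connections (e.g. still starting up)" ;;
    2) echo "PostgreSQL did not respond" ;;
    *) echo "no connection attempt was made (e.g. invalid parameters)" ;;
esac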

2. --table (multiple table option) in pg_restore, clusterdb, reindexdb and vacuumdb

--table (as a repeatable, multiple-table option) was missing in pre-9.3 PostgreSQL. I personally missed an option for specifying multiple tables in a single command.
From PostgreSQL 9.3 onwards, users can use the multiple table option with the following commands. An example is given below:

Till 9.2, when a user used a command like

pg_restore -t emp -t dept test.dmp

the user would get only the table which is mentioned last in the command, as given below:

[root@ip-10-159-51-181 ~]# pg_restore --version
pg_restore (EnterpriseDB) 9.2.4.10
[root@ip-10-159-51-181 ~]# pg_restore -Fc -t emp -t dept test.dmp
--
-- EnterpriseDB database dump
--

SET statement_timeout = 0;
SET client_encoding = 'SQL_ASCII';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;

SET search_path = public, pg_catalog, sys;

SET default_tablespace = '';

SET default_with_oids = false;

--
-- Name: dept; Type: TABLE; Schema: public; Owner: enterprisedb; Tablespace:
--

CREATE TABLE dept (
    deptno numeric(2,0) NOT NULL,
    dname character varying(14),
    loc character varying(13)
);

ALTER TABLE public.dept OWNER TO enterprisedb;

--
-- EnterpriseDB database dump complete
--

You can see that in pre-9.3, pg_restore has shown only the last table of the list given in the pg_restore command.

Now in 9.3, the user can mention multiple table names and all the listed tables will be included:

[root@ip-10-159-51-181 ~]# /usr/pgsql-9.3/bin/pg_restore  -Fc -t emp -t dept test.dmp
--
-- PostgreSQL database dump
--

SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'SQL_ASCII';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;

SET search_path = public, pg_catalog;

SET default_tablespace = '';

SET default_with_oids = false;

--
-- Name: dept; Type: TABLE; Schema: public; Owner: enterprisedb; Tablespace:
--

CREATE TABLE dept (
    deptno numeric(2,0) NOT NULL,
    dname character varying(14),
    loc character varying(13)
);

ALTER TABLE public.dept OWNER TO enterprisedb;

--
-- Name: emp; Type: TABLE; Schema: public; Owner: enterprisedb; Tablespace:
--

CREATE TABLE emp (
    empno numeric(4,0) NOT NULL,
    ename character varying(10),
    job character varying(9),
    mgr numeric(4,0),
    hiredate timestamp without time zone,
    sal numeric(7,2),
    comm numeric(7,2),
    deptno numeric(2,0),
    CONSTRAINT emp_sal_ck CHECK ((sal > (0)::numeric))
);

ALTER TABLE public.emp OWNER TO enterprisedb;

--
-- PostgreSQL database dump complete
--

Similarly, this option is enabled for reindexdb. An example is given below:

Before 9.3:

[root@ip-10-159-51-181 ~]# reindexdb --version
reindexdb (EnterpriseDB) 9.2.4.10
[root@ip-10-159-51-181 ~]# reindexdb -t emp -t dept -e
REINDEX TABLE dept;

In 9.3:

[root@ip-10-159-51-181 ~]# /usr/pgsql-9.3/bin/reindexdb  -t emp -t dept -e
REINDEX TABLE emp;

REINDEX TABLE dept;

Users can observe the usage of this switch for clusterdb and vacuumdb too, as sketched below.
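
A hypothetical invocation (table and database names are assumptions; -e echoes the generated commands, and clusterdb additionally requires that the tables have been clustered before):

/usr/pgsql-9.3/bin/vacuumdb -t emp -t dept -e worktest
/usr/pgsql-9.3/bin/clusterdb -t emp -t dept -e worktest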

In addition, this has reduced a lot of the effort of writing scripts which recursively call reindexdb/vacuumdb/clusterdb for each table.

9.3 gives users the option of using these binaries more efficiently in their environments and tools.

3. --dbname/-d option for the binaries pg_dumpall, pg_basebackup and pg_receivexlog.
Before 9.3, the --dbname and -d options were available in commands like psql, clusterdb and reindexdb.

However, this option wasn't available for commands like pg_dumpall, pg_basebackup and pg_receivexlog, which forced users to define the connection string for these binaries differently from the other binaries. For example:

Before 9.3:

[root@ip-10-159-51-181 ~]# pg_basebackup --version
pg_basebackup (PostgreSQL) 9.2.4.10
[root@ip-10-159-51-181 ~]# pg_basebackup --help|grep dbname

In 9.3, the user can see this option available:

[root@ip-10-159-51-181 ~]# /usr/pgsql-9.3/bin/pg_basebackup --help|grep -i dbname
  -d, --dbname=CONNSTR   connection string
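
A hypothetical invocation using the new switch (the backup directory and connection parameters are assumptions):

/usr/pgsql-9.3/bin/pg_basebackup -d "host=localhost port=5445 user=enterprisedb" -D /tmp/backup -Fp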

4. Removal of the warning message when psql is used to connect to an older server.
Example is given below:
Before 9.3:

Users used to get a warning message like the one given below:

[root@ip-10-159-51-181 ~]# psql -p 5444
psql (9.2.4.10, server 9.1.9.17)
WARNING: psql version 9.2, server version 9.1.
         Some psql features might not work.
Type "help" for help.

Now 9.3 onwards:

[root@ip-10-159-51-181 ~]# /usr/pgsql-9.3/bin/psql  -p5444
psql (9.3beta2, server 9.1.9.17)
Type "help" for help.

worktest=#

Note: The warning message is kept if psql is used to connect to a higher major version of PostgreSQL.

5. psql --single-transaction mode for STDIN:
Before 9.3, --single-transaction mode didn't work for STDIN. For example, if you had a set of commands which you wanted to pass to psql through STDIN, it didn't use to work; to make it work, people used to put the commands in a file and then use --single-transaction mode against that file. From 9.3 onwards, that workaround is no longer required. An example is given below:

Before 9.3:

[root@ip-10-159-51-181 ~]# psql --version
psql (EnterpriseDB) 9.2.4.10

[root@ip-10-159-51-181 ~]# echo "
CREATE TABLE (id numeric);
CREATE TABLE a;
CREATE DATABASE test;"|psql --single-transaction
ERROR:  syntax error at or near "("
LINE 1: CREATE TABLE (id numeric);
                     ^
ERROR:  syntax error at or near ";"
LINE 1: CREATE TABLE a;
                      ^
CREATE DATABASE

The above example shows that there were errors in the 1st and 2nd commands; however, psql still executed the third command successfully.

worktest=# select datname from pg_database where datname='test';
 datname
---------
 test
(1 row)

9.3 onwards:

[root@ip-10-159-51-181 ~]# echo "
CREATE TABLE (id numeric);
CREATE TABLE a;
CREATE DATABASE test;"|/usr/pgsql-9.3/bin/psql --single-transaction
ERROR:  syntax error at or near "("
LINE 1: CREATE TABLE (id numeric);
                     ^
ERROR:  syntax error at or near ";"
LINE 1: CREATE TABLE a;
                      ^
ERROR:  current transaction is aborted, commands ignored until end of transaction block
[root@ip-10-159-51-181 ~]#

You can see the whole transaction was aborted when STDIN was used with psql --single-transaction.

6. Other improvements in psql.

In 9.3, psql's tab completion and pattern searching have been further improved. This has been done by tuning the cost settings of the functions involved.

7. New additions in backslash commands:

9.3 also comes with some new backslash commands. These are going to help users in lots of ways to monitor, and to use psql efficiently in their scripts/monitoring.
The following backslash commands are new:

i. \watch.

Similar to the Linux watch command, psql now comes with a \watch command. Using this command, the user can execute the current buffer query repeatedly. For example, if a user wants to monitor/watch the sessions made to the PostgreSQL database, then he/she can use a command like the one given below in psql:

a. Start a psql session as given below:

           [root@ip-10-159-51-181 ~]# /usr/pgsql-9.3/bin/psql -p 5445 -U enterprisedb -d worktest
psql (9.3beta2, server 9.2.4.10)
Type "help" for help.

worktest=#

b. Now execute a query which you want to run repeatedly, as given below:

        worktest=# select datname, usename, application_name, query from pg_stat_activity ;
 datname  |   usename    | application_name |                                  query
----------+--------------+------------------+---------------------------------------------------------------------------
 worktest | enterprisedb | psql             | select datname, usename, application_name, query from pg_stat_activity ;
(1 row)

c. Now use the command “\watch [seconds]”:

worktest=# \watch 1
                                       Watch every 1s	Sat Jul 27 04:15:21 2013

 datname  |   usename    | application_name |                                  query
----------+--------------+------------------+--------------------------------------------------------------------------
 worktest | enterprisedb | psql             | select datname, usename, application_name, query from pg_stat_activity ;
(1 row)

                                       Watch every 1s	Sat Jul 27 04:15:22 2013

 datname  |   usename    | application_name |                                  query
----------+--------------+------------------+--------------------------------------------------------------------------
 worktest | enterprisedb | psql             | select datname, usename, application_name, query from pg_stat_activity ;
(1 row)

Or the user can combine the above two into one command, like the one given below:

worktest=# select datname, usename, application_name, query from pg_stat_activity \watch 1
 datname  |   usename    | application_name |                                  query
----------+--------------+------------------+---------------------------------------------------------------------------
 worktest | enterprisedb | psql             | select datname, usename, application_name, query from pg_stat_activity ;
(1 row)

                                       Watch every 1s	Sat Jul 27 04:36:00 2013

 datname  |   usename    | application_name |                                  query
----------+--------------+------------------+--------------------------------------------------------------------------
 worktest | enterprisedb | psql             | select datname, usename, application_name, query from pg_stat_activity ;
(1 row)

In the above, I wanted a query to monitor pg_stat_activity every second.

\watch is interrupted/stopped under the following conditions:

if the user interrupts the running watch;
if the query fails.

ii. \gset command to store query results in psql variables.

A new backslash command, \gset, has been added. This is a very interesting command, and users can use it for multiple purposes.

This backslash command allows users to set the values of psql variables based on a single row of query output. If the query returns more than one row, then \gset will print a proper message and will not set the variables.
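
A quick sketch of the multi-row case (the exact wording of the message may differ by version):

worktest=# select datname from pg_database \gset
more than one row returned for \gset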

There are multiple everyday examples of using this backslash command; for example, you can store the output of a now() query in a variable and use it in the psql session, as given below:

worktest=# select now() as timestamp
worktest-# \gset
worktest=# \echo :timestamp
27-JUL-13 04:53:13.214203 -04:00
worktest=#

Some users, like me, can also use this command to prepare dynamic commands and execute them in the session. For example, I have users like test which I want to drop from the PG instance, so I can do something like the following:

worktest=# SELECT replace(array_to_string(ARRAY(SELECT 'DROP USER '||usename||';' FROM pg_user where usename ~ 'test'),', '),',','') as drop_test_user
worktest-# \gset
worktest=# \echo "Verify the Drop test user command"
"Verify the Drop test user command"
worktest=# \echo :drop_test_user
DROP USER test; DROP USER test_su;
worktest=# :drop_test_user
DROP ROLE
DROP ROLE

iii. Improvement in \conninfo to show SSL information.
Before 9.3, the backslash command \conninfo used to show information about the user, database, port and host only, even if the user was on an SSL connection. In 9.3, psql will give the SSL information too. An example is given below:
Before 9.3

edb=> \conninfo
You are connected to database "edb" as user "vibhor" on host "localhost" at port "5444".

In 9.3:

edb=> \conninfo
You are connected to database "edb" as user "vibhor" on host "localhost" at port "5444".
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)

iv. Database name pattern support in \l

People use the backslash command \l to list the databases in psql. However, before 9.3, this command didn't have pattern search support. Now, in 9.3, this command has been improved to display/list the databases based on a pattern provided by the user. A snapshot is given below:

Before 9.3:

edb=> \l pe*
                                        List of databases
   Name    |    Owner     | Encoding |   Collate   |    Ctype    |       Access privileges
-----------+--------------+----------+-------------+-------------+-------------------------------
 edb       | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 pem       | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 | enterprisedb=CTc/enterprisedb+
           |              |          |             |             | pem_user=Tc/enterprisedb     +
           |              |          |             |             | pem_agent=Tc/enterprisedb
 postgres  | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/enterprisedb              +
           |              |          |             |             | enterprisedb=CTc/enterprisedb
 template1 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/enterprisedb              +
           |              |          |             |             | enterprisedb=CTc/enterprisedb
(5 rows)

\l: extra argument "pe*" ignored

As you can see, it ignored the pattern-based listing and listed all databases.

Now in 9.3:

edb=> \l pe*
                                     List of databases
 Name |    Owner     | Encoding |   Collate   |    Ctype    |       Access privileges
------+--------------+----------+-------------+-------------+-------------------------------
 pem  | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 | enterprisedb=CTc/enterprisedb+
      |              |          |             |             | pem_user=Tc/enterprisedb     +
      |              |          |             |             | pem_agent=Tc/enterprisedb
(1 row)

That's really a good enhancement to the backslash commands.

v. Fix in “\g filename”, which was affecting subsequent commands after an error.

Before 9.3, if a “\g filename” command failed, it also affected the execution of queries after this backslash command. For example, if a user executed queries like the ones given below, then the output of the subsequent query, which the user didn't want to store in a file, would still go into the filename mentioned in the first command:

       worktest=# select 'a'/1 \g /tmp/test
ERROR:  invalid input syntax for integer: "a"
LINE 1: select 'a'/1
               ^
worktest=# select 1/2;
worktest=#

As you can see, the 1st SQL failed; however, the second SQL's output was not displayed on screen and went into the /tmp/test file instead, which is inconsistent. Now, in 9.3, this has been fixed:

worktest=# select 'a'/1 \g /tmp/test
ERROR:  invalid input syntax for integer: "a"
LINE 1: select 'a'/1
               ^
worktest=# select 1/2;
        ?column?
------------------------
 0.50000000000000000000
(1 row)

vi. Improvement in the \df+ command to show the Security attribute.

If a user creates the following function with SECURITY DEFINER:

CREATE FUNCTION test_func()
RETURNS integer
SECURITY DEFINER
AS
$$ SELECT 1;
$$ language sql;

Before 9.3, the \df+ command never used to give information on the SECURITY attribute (like, in the above case, SECURITY DEFINER). Below is the output.

Before 9.3:

-[ RECORD 1 ]-------+-------------
Schema              | oma
Name                | test_func
Result data type    | integer
Argument data types |
Type                | normal
Volatility          | volatile
Owner               | enterprisedb
Language            | sql
Source code         |  SELECT 1;
                    |
Description         |

In 9.3:

Schema              | oma
Name                | test_func
Result data type    | integer
Argument data types |
Type                | normal
Security            | definer
Volatility          | volatile
Owner               | enterprisedb
Language            | sql
Source code         |  SELECT 1;
                    |
Description         |

In the above you can see the new column named Security.

8. New improvements in psql output:

9.3 also comes with some new improvements in psql output; below are the details:

i. latex-longtable support and border=3 style for latex output.

For those who are not familiar with LaTeX, I would recommend the following two links:

        http://www.latex-project.org
        http://en.wikipedia.org/wiki/LaTeX
      

So, for those who are familiar with LaTeX, this new feature will be helpful. latex-longtable allows tabular output to flow over multiple pages. Along with this, a border=3 style has been defined for latex output in psql 9.3.
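
As a minimal sketch (the output file path is an assumption), the new format and border style can be selected with \pset inside psql:

worktest=# \pset format latex-longtable
worktest=# \pset border 3
worktest=# select datname from pg_database \g /tmp/databases.tex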

ii. --tuples-only (-t) with --expanded (-x) no longer shows “(No rows)”

Before 9.3, if someone used --tuples-only with the --expanded switch in psql and the executed query didn't return any record, the combination of the two switches used to print “(No rows)”, as given below:

      [root@ip-10-159-51-181 ~]# psql -t -x -c "select datname from pg_database where datname ~ 'temporar'"
(No rows)
     

which used to cause some inconvenience when scripting psql. From 9.3 onwards, if the query doesn't return any record/row, then psql will return nothing, as given below:

[root@ip-10-159-51-181 ~]# /usr/pgsql-9.3/bin/psql -t -x -c "select datname from pg_database where datname ~ 'temporar'"
[root@ip-10-159-51-181 ~]#

which helps in better scripting.

iii. --no-align (-A) with --expanded (-x) no longer prints an empty line.

Before 9.3, the combination of --no-align (-A) with --expanded (-x) used to print an empty line if the query didn't return any value. This has been fixed; in 9.3 it returns nothing, as shown below:

Before 9.3

[root@ip-10-159-51-181 ~]# psql -t -c "select datname from pg_database where datname ~ 'temporar'" -x -A

[root@ip-10-159-51-181 ~]#

In 9.3:

[root@ip-10-159-51-181 ~]# /usr/pgsql-9.3/bin/psql -t -c "select datname from pg_database where datname ~ 'temporar'" -x -A
[root@ip-10-159-51-181 ~]#

From scripting perspective, this is good improvement.

9. Features/improvements in pg_dump

i. Parallel dump option using --jobs in pg_dump

9.3 has added a new performance improvement to pg_dump: it can now be faster at unloading data from a PG database, because it has been improved to work with parallel jobs.

There are some notes which the user should know:

  1. Parallel dump is supported only if the user uses the directory dump format, i.e. -Fd.
  2. Parallel dump can increase the load on the server, so the user has to be cautious about choosing the number of jobs for pg_dump.
  3. --jobs will open n+1 connections to the database, so the user should set max_connections appropriately. Out of the n+1 connections, one will be used by the master process of pg_dump and n will be used by the worker processes.
  4. Worker processes in pg_dump take a SHARED LOCK on each table with the NOWAIT option, so if there is an exclusive lock on a table, the worker process will not be able to acquire the SHARED LOCK; it will exit and inform the master process accordingly, which will abort the pg_dump.
  5. For a consistent backup, the database needs to support synchronized snapshots, which were introduced in 9.2. Therefore, if the user is planning to use pg_dump 9.3 to take a backup of a pre-9.2 PostgreSQL, they have to make sure that the database content doesn't change.
 

An example of a parallel dump is given below (the dump directory name is a placeholder; with -Fd, -f takes a directory):

pg_dump -U username -j 4 -Fd -f dumpdir databasename

ii. --dbname switch in pg_dump.

Before 9.3, pg_dump didn't have the --dbname option, since it was assumed that the user would put the database name at the end of the pg_dump command, as given below:

 pg_dump -U username -p port -h host [database name]

To keep the connection options consistent with the other utilities/binaries, the --dbname option has been introduced in 9.3. Now, the user can also execute a command like the one given below:

     pg_dump -U username -p port -h host -d [database name]
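
Note that, as with the other utilities, the new -d/--dbname switch also accepts a full connection string; a sketch (values are assumptions) is given below:

     pg_dump -d "host=localhost port=5432 user=postgres dbname=worktest" -Fc -f test.dmp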
   

10. New in initdb

9.3 has also added new functionality and features to the initdb command.

i. fsync the newly created data directory.

In 9.3, initdb now makes sure the data directory is safely written to disk, which makes the data directory created by initdb more durable. There will be a slight performance impact from the default fsync; however, the user can disable this behaviour with the --nosync (-N) option, which is not recommended.

Below are some stats:

With default:
real	0m4.304s
user	0m2.736s
sys	0m1.604s

With --nosync:
real	0m4.129s
user	0m2.667s
sys	0m1.493s

You can see there is not much difference in performance.
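
For reference, the stats above are plain time(1) output; a sketch of how to reproduce them (paths are assumptions) is:

time /usr/pgsql-9.3/bin/initdb -D /tmp/data_fsync
time /usr/pgsql-9.3/bin/initdb --nosync -D /tmp/data_nosync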

ii. --sync-only option in initdb.

9.3 has also added a new switch called --sync-only. Using this option, the user can make sure an existing data directory is written safely to disk. An example of its usage is given below:

-bash-4.1$ /usr/pgsql-9.3/bin/initdb --sync-only -D /var/lib/pgsql/test
syncing data to disk ... ok
-bash-4.1$

iii. Warning message if initdb is used to place the data directory on top of a filesystem mount point.

In 9.3, initdb has been made more sensible about warning the user if he/she is trying to create the data directory on top of a filesystem mount point.

Following are some of the warnings which the user will get:

It contains a dot-prefixed/invisible file, perhaps due to it being a mount point.
It contains a lost+found directory, perhaps due to it being a mount point.
Using a mount point directly as the data directory is not recommended.
Create a subdirectory under the mount point.

Enjoy Learning more about 9.3!!!

pg_xlog_location_diff function for PostgreSQL/PPAS

In PostgreSQL 9.2, the community added the function pg_xlog_location_diff(), which is very useful for finding the difference between two xlog locations in bytes, and is also useful for monitoring replication. Details of this function are given at the following link:

http://www.postgresql.org/docs/9.2/static/functions-admin.html#FUNCTIONS-ADMIN-BACKUP

However, this function is not available to users of PostgreSQL/PPAS 9.0/9.1. So, I thought of writing the same function in plpgsql so that users can take benefit of it in 9.0/9.1.

Before using the formula and developing the function, let's understand what the xlog and offset are.
Let's consider that a user has called the function pg_current_xlog_location() and gets the following information:

worktest=# select pg_current_xlog_location();
 pg_current_xlog_location 
--------------------------
 1/D1012B80
(1 row)

In the above, the first field, before the forward slash, is the hexadecimal value of the logical xlog file, and the second field, i.e. D1012B80, is the hexadecimal offset inside the logical xlog file.

So, whenever a user sees information for an xlog location, he gets the xlog information in the following format:

(hexadecimal) xlog/ (hexadecimal) offset

To calculate the difference between two xlog locations, the user can use the following formula, which gives the difference in hexadecimal:

(FF000000 * xlog1 + offset1) - (FF000000 * xlog2 + offset2)

where FF000000 is the maximum value of the offset, i.e. in PostgreSQL/PPAS the offset value can go from 00000000 to FF000000; converted to decimal, that gives a maximum of 4278190080 bytes (4080MB).
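
As a quick sanity check, take the two locations 1/D1012B80 and 1/D009F578: both are in logical xlog 1, so the formula reduces to the hexadecimal subtraction D1012B80 - D009F578 = F73608, i.e. 16201224 bytes. The same arithmetic can be verified directly in SQL:

worktest=# SELECT (x'FF000000'::bigint * x'1'::bigint + x'D1012B80'::bigint)
worktest-#      - (x'FF000000'::bigint * x'1'::bigint + x'D009F578'::bigint) AS diff_bytes;
 diff_bytes 
------------
   16201224
(1 row)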

Based on the above formula, the following is a plpgsql function which can be used to get the difference:

CREATE OR REPLACE FUNCTION pg_xlog_location_diff_sql( text, text)
 RETURNS numeric
 LANGUAGE plpgsql
AS 
 $function$
    DECLARE
       offset1 text;
       offset2 text;
       xlog1 text;
       xlog2 text;
       SQL text;
       diff text;
    BEGIN
       /* Extract the Offset and xlog from input in
          offset and xlog variables */
          
       offset1=split_part($1,'/',2);
         xlog1=split_part($1,'/',1);
       offset2=split_part($2,'/',2);
         xlog2=split_part($2,'/',1);
       
       /* Prepare SQL query for calculation based on following formula
         (FF000000 * xlog1 + offset1) - (FF000000 * xlog2 + offset2)
         which gives value in hexadecimal. Since, hexadecimal calculation is cumbersome
         so convert into decimal and then calculate the difference */
       
       SQL='SELECT (x'''||'FF000000'||'''::bigint * x'''||xlog1||'''::bigint 
                                +  x'''||offset1||'''::bigint)'||' 
                - 
                   (x'''||'FF000000'||'''::bigint * x'''||xlog2||'''::bigint 
                                +  x'''||offset2||'''::bigint)';
       EXECUTE SQL into diff;
       
       /* Return the value in numeric by explicit casting  */
       
       RETURN diff::numeric;
    END;
 $function$;
 

Usage example is given below:

worktest=# select pg_xlog_location_diff_sql(pg_current_xlog_location(),'1/D009F578');
 pg_xlog_location_diff_sql 
---------------------------
                  16230472
(1 row)

I hope this will help PostgreSQL/PPAS 9.0/9.1 users.

New in Postgres Plus Advanced Server 9.2

Good news: the Postgres Plus Advanced Server Beta version is now available. It has all the new features of PostgreSQL 9.2, and it also has new features which are specific to Advanced Server.

I will cover/explain the new features of PostgreSQL 9.2 later, as per the release notes. However, the following link covers the major features of PostgreSQL 9.2 and can be taken as a reference.

http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.2

In this blog, I am going to cover only the features specific to the Advanced Server 9.2 core, which are:

1. INSERT APPEND HINT in PPAS 9.2

PPAS 9.2 now supports the INSERT append hint. This is a very interesting feature, and it is very useful for users who frequently delete records in bulk and then do bulk INSERTs.
This hint can provide some benefit to INSERTs: it makes PPAS skip the Free Space Map and append the rows at the end of the relation (table).
Its usage is given below:

INSERT /*+append*/ INTO tab21115 VALUES(1,'abc',sysdate);

2. Procedures called like function calls.

PPAS 9.2 now allows calling a procedure with the following syntax:

SELECT * FROM procedure(arg1,arg2,…);

This type of procedure call is allowed for procedures which have OUT/INOUT params. Alongside this, the user can still use the EXEC command.

Let's see how it works:

a. Create a Procedure as Given below:

CREATE OR REPLACE PROCEDURE foo_proc(A IN INT, B INOUT INT, C OUT INT)
AS
BEGIN
   b:=a+b;
   c:=b+1;
END;

Till 9.1, and even in 9.2, a user can do something like the following:

DECLARE
  d int:=1;
  e int:=2;
  f int;
BEGIN
   foo_proc(d,e,f);
   DBMS_OUTPUT.PUT_LINE('e = '||e);
   DBMS_OUTPUT.PUT_LINE('d = '||d);
   DBMS_OUTPUT.PUT_LINE('f= '||f);
END;
e = 3
d = 1
f= 4

Now in 9.2, user can also do this:

edb=# select * from foo_proc(1,2);
 b | c 
---+---
 3 | 4
(1 row)

In case the user has a procedure and a function with the same name and wants to execute the procedure using the SELECT command, then the SELECT command is going to give preference to the function over the procedure, as given below:

Function and procedure definitions:

CREATE OR REPLACE FUNCTION foo(a INT) RETURN INT
AS
BEGIN
  RETURN 2;
END;

CREATE PROCEDURE foo(A int, B OUT int)
AS
BEGIN
   B:=a+1;
END;

edb=# select * from foo(4);
 foo 
-----
   2

However, the user can still use the EXEC command to execute the procedure, as given below:

edb=# exec foo(4);
 ?column? 
----------
        5
(1 row)

Or, if you are using an anonymous block, then PERFORM will also work with the procedure, as given below:

edb=# DECLARE
edb-#     a int:=4;
edb$#     b int;
edb$# BEGIN
edb$#    PERFORM foo(a,b);
edb$#    DBMS_OUTPUT.PUT_LINE('b = '||b);
edb$# END;
b = 5

EDB-SPL Procedure successfully completed

3. Object Type Support:

PPAS already had object type support. However, the new version comes with new enhancements to that support. In the new version, users are able to create object types with attributes, member functions, and procedures. This gives the advantage of reduced coding in terms of defining object types and maintaining them.

Example is given below:

a. Define a Type:

    CREATE OR REPLACE TYPE PersonObj AS OBJECT (
  first_name  VARCHAR2(50),
  last_name   VARCHAR2(50),
  date_of_birth  DATE,
  MEMBER FUNCTION getAge RETURN NUMBER
);

b. Define a Type Body:

CREATE OR REPLACE TYPE BODY PersonObj AS
  MEMBER FUNCTION getAge RETURN NUMBER AS
  BEGIN
    RETURN Trunc(Months_Between(Sysdate, date_of_birth)/12);
  END getAge;
END;

c. Define a table based on the object type:

CREATE TABLE people (
  id      NUMBER(10) NOT NULL,
  person  PersonObj
);

d. To insert data, the default constructor can be used, as given below:

INSERT INTO people
VALUES (1, PersonObj('John','Doe',
        TO_DATE('01/01/1999','DD/MM/YYYY')));

e. A user can access the data in the table in the following way:

SELECT p.id,
       (p.person).first_name,
       p.person.getAge() age
FROM   people p;
 id | first_name | age 
----+------------+-----
  1 | John       |  13
  2 | Jane       |  13
(2 rows)

4. PL/SQL Subtypes:

A subtype is considered a TYPE which is defined on a base/user-defined type with some constraints. The syntax for declaring a SUBTYPE in PL/SQL is given below:

SUBTYPE <subtype_name> IS <type_name> [(constraint)] [NOT NULL];

Example is given below:

Example 1:

DECLARE
  SUBTYPE INT2 IS NUMBER;
  v2 INT2 (3,0);
BEGIN
  v2:= 123;
  DBMS_OUTPUT.PUT_LINE ('V2 = '|| v2);
END;

Example 2:

DECLARE
  SUBTYPE v_word IS VARCHAR2(10) NOT NULL;
  verb  v_word  := 'verb';
  noun  v_word  := 'noun';
BEGIN
  noun :='n1';
  DBMS_OUTPUT.PUT_LINE (UPPER(verb));
  DBMS_OUTPUT.PUT_LINE (UPPER(noun));
END;
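
Assuming DBMS_OUTPUT output is enabled in the client, Example 2 above prints the upper-cased values:

VERB
N1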

5. DROP TABLE CASCADE CONSTRAINTS in 9.2:

PPAS 9.2 now supports DROP TABLE ... CASCADE CONSTRAINTS. This is a very interesting feature and is not available in PostgreSQL 9.2.

Using this command, user would be able to drop all referential integrity constraints that refer to primary and unique keys in the dropped table. If you omit this clause and such referential integrity constraints exist, then the database returns an error and does not drop the table.

This is useful when somebody wants to drop a master table but doesn't want to drop the child table which has a foreign key constraint referencing the master. An example is given below:

CREATE TABLE master(id numeric primary key);
CREATE table child(id numeric references master(id));
insert into master values(1);
insert into child values(1);
edb=# drop table master cascade constraints;
NOTICE:  drop cascades to constraint child_id_fkey on table child
DROP TABLE
edb=# \d child
  Table "enterprisedb.child"
 Column |  Type   | Modifiers 
--------+---------+-----------
 id     | numeric | 

As you can see above, I have dropped the master table, on which the child table had a referential integrity constraint; however, my child table still exists. PostgreSQL 9.2 doesn't have CASCADE CONSTRAINTS, therefore the user has to first drop the constraints on the tables which have referential integrity constraints and then drop the referenced table. This process requires the user to find all the tables which have referential integrity constraints referring to the primary key of the master table. This command saves DBAs/users from having to find all those child tables.

6. Throwing multiple errors as warnings while compiling procedures/functions using SPL.

This is also a very interesting enhancement for SPL. Whenever the user compiles a function/procedure in SPL, it will show all the error messages in the particular function/procedure body, so that the user doesn't have to re-visit the code every time to fix the next issue. An example is given below:

CREATE OR REPLACE FUNCTION foo() RETURN integer
AS
BEGIN
   a:=a=1
;
B:=b + 1;
I am making fool;
END;

Output:
ERROR:  "a" is not a known variable
LINE 4:    a:=a=1
           ^
ERROR:  "b" is not a known variable
LINE 6: B:=b + 1;
        ^
ERROR:  syntax error at or near "I"
LINE 7: I am making fool;
        ^
ERROR:  compilation of SPL function/procedure "foo" failed due to 3 errors

As you can see above, it has given all the mistakes I made in the function body, and now I can fix all of them in one go, which reduces the overhead of compiling again and again.

This is very useful for Developers/DBAs.

7. DEFAULT for types declared in packages:

Now, in PPAS 9.2, the user is able to define a DEFAULT value for variables of a type declared in a package, as given below:

CREATE OR REPLACE PACKAGE INITPKG_pro_b4_default AS
PROCEDURE show_default_values;
 n1 NUMBER DEFAULT 20;
 c1 VARCHAR2 DEFAULT 'Default';
END;

CREATE OR REPLACE PACKAGE BODY INITPKG_pro_b4_default AS
 PROCEDURE show_default_values IS
  n number;
 BEGIN
  dbms_output.put_line(c1);
  dbms_output.put_line(n1);
  n1 := n1*n1;
  n1 := SQRT(n1);
  dbms_output.put_line(n1);
 END;
END;

edb=# exec INITPKG_pro_b4_default.show_default_values;
Default
20
20.000000000000000

EDB-SPL Procedure successfully completed

8. TABLE expression support for nested tables.

Now, in 9.2, users are able to use TABLE expressions for nested tables. This feature was missing and had been asked for by many Oracle DBAs. TABLE expressions allow the user to query a collection in the FROM clause like a table.

edb=# CREATE OR REPLACE TYPE comp_typ_str_21189 IS TABLE OF VARCHAR2(100);
CREATE TYPE
edb=# 
edb=# select * from table(comp_typ_str_21189('PPP','QQQ')) ;
 column_value 
--------------
 PPP
 QQQ
(2 rows)

9. INPUT/OUTPUT Functions for NESTED TABLES:

PPAS 9.2 supports user defined input/output functions for NESTED TABLE. This feature was missing in 9.1.

Some examples are given below:

edb=# create or replace type mytab as table of varchar2(90);
CREATE TYPE
edb=# Create or replace function fun return mytab
edb-# as
edb$# begin
edb$#  return mytab('a','b',3);
edb$# end;
CREATE FUNCTION
edb=# select fun;
 fun21168 
----------
 {a,b,3}
(1 row)

edb=# 
edb=# create or replace function fun return mytab
edb-# as
edb$#  nvar mytab;
edb$# begin
edb$#  nvar := mytab();
edb$#  nvar.extend(4);
edb$#  nvar(1) := 'foo';
edb$#  nvar(2) := NULL;
edb$#  nvar(3) := 'deleteme';
edb$#  nvar(4) := 'bar';
edb$#  return nvar;
edb$# end;
CREATE FUNCTION
edb=# -- Verify the user is able to see the data
edb=# select fun;
        fun21168         
-------------------------
 {foo,NULL,deleteme,bar}
(1 row)

10. LOG is no longer a reserved keyword for functions.
Till PPAS 9.1, LOG was a reserved keyword and users were not allowed to create functions using this keyword. In 9.2, users are able to use this keyword.

edb=# CREATE OR REPLACE FUNCTION log( a int ) return int as
edb$# BEGIN
edb$# dbms_output.put_line('Function LOG is called');
edb$# return a*2;
edb$# END;
CREATE FUNCTION
edb=# SELECT LOG(10);
Function LOG is called
 log 
-----
  20
(1 row)

11. Variables can be named current_date.

We know that current_date is a special function which returns the current date of the PostgreSQL/PPAS DB. Till PPAS 9.1, users were not allowed to use this reserved name for variables. Now, in 9.2, users are able to use it, as given below:

edb=# create table t_currentdate(current_date int);
CREATE TABLE
edb=# desc t_currentdate
    Table "enterprisedb.t_currentdate"
    Column    |  Type   | Modifiers 
--------------+---------+-----------
 current_date | integer | 

create or replace procedure proc_currentdate(current_date date) is
begin
 dbms_output.put_line(current_date);
end;

edb=# exec proc_currentdate(current_date);
05-OCT-12 00:00:00

EDB-SPL Procedure successfully completed

12. New data type STRING.

Since Oracle supports STRING as a data type and this data type wasn't available in PPAS 9.1, it has been included in 9.2. STRING is an alias for the VARCHAR data type. An example is given below:

edb=# CREATE TABLE test_string(col string)
edb-# ;
CREATE TABLE
edb=# insert into test_string values('As an Example');
INSERT 0 1
edb=# select * from test_string;
      col      
---------------
 As an Example
(1 row)

edb=# \d test_string
    Table "enterprisedb.test_string"
 Column |       Type        | Modifiers 
--------+-------------------+-----------
 col    | character varying | 

13. NVARCHAR2 data type support in 9.2

PPAS 9.2 allows users to use NVARCHAR2 as a data type. In PPAS 9.2, NVARCHAR2 is mapped to the VARCHAR data type. This is mainly for Oracle users, who have always asked about having this in PPAS.

Example is given below:

edb=# CREATE TABLE test_nvarchar(col NVARCHAR2(10))
edb-# ;
CREATE TABLE
edb=# \d test_nvarchar
     Table "enterprisedb.test_nvarchar"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 col    | character varying(10) | 

14. Multibyte and string delimiter support in EDB*Loader.

EDB*Loader in PPAS 9.2 is more efficient at loading data compared to 9.1. There were some cases in which it was observed that it is difficult to find a single-character delimiter for the data in a file, because the data uses all possible characters. To load those kinds of data files, users can now define their own string delimiter. An example is given below:

Data File: data.log:
1$$abc
2$$ccc
3$$dddddddd

Control File:
LOAD DATA
INFILE '/tmp/data.log'
BADFILE '/tmp/data.bad'
truncate INTO table edb_loader
FIELDS TERMINATED BY '$$' optionally enclosed by '"'
(id , col )

-bash-4.1$ edbldr userid=enterprisedb/ control=/tmp/edb.ctl log=/tmp/edb.log
EDB*Loader: Copyright (c) 2007-2012, EnterpriseDB Corporation.

Successfully processed (3) records
-bash-4.1$ psql
psql (9.2.0.1)
Type "help" for help.

edb=# select * from edb_loader ;
 id |   col    
----+----------
  1 | abc
  2 | ccc
  3 | dddddddd
(3 rows)

List user privileges in PostgreSQL/PPAS 9.1

PostgreSQL has some useful functions which can be used to find out the privileges of a user on a particular database object. Those functions are documented at the following link:

http://www.postgresql.org/docs/9.1/static/functions-info.html

The has_*_privilege functions in PostgreSQL/PPAS are good for finding out which privileges a user has on one database object, and these functions return the boolean value true or false.

DBAs/users are often interested in listing the objects and privileges of a database user, and currently PostgreSQL doesn't have a view which a DBA can use to list a user's privileges on objects for a particular database. Therefore, I thought about making some functions which can be used to list user privileges, based on what is available in PostgreSQL/PPAS 9.1. These are basic functions and can be expanded, as per need, to show more privileges like WITH GRANT OPTION.

Following are the functions which can be used to get the privileges of a particular user:

1. Function for table privileges:

CREATE OR REPLACE FUNCTION table_privs(text) RETURNS table(username text, relname regclass, privs text[])
AS
$$
SELECT  $1,c.oid::regclass, array(select privs from unnest(ARRAY [ 
( CASE WHEN has_table_privilege($1,c.oid,'SELECT') THEN 'SELECT' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'INSERT') THEN 'INSERT' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'UPDATE') THEN 'UPDATE' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'DELETE') THEN 'DELETE' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'TRUNCATE') THEN 'TRUNCATE' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'REFERENCES') THEN 'REFERENCES' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'TRIGGER') THEN 'TRIGGER' ELSE NULL END)]) foo(privs) where privs is not null) FROM pg_class c JOIN pg_namespace n on c.relnamespace=n.oid where n.nspname not in ('information_schema','pg_catalog','sys') and nspparent=0 and c.relkind='r' and
has_table_privilege($1,c.oid,'SELECT,INSERT,UPDATE,DELETE,TRUNCATE,REFERENCES,TRIGGER') AND has_schema_privilege($1,c.relnamespace,'USAGE')
$$ language sql;

Example:

edb=# select * from table_privs('test_user');
 username  | relname |         privs          
-----------+---------+------------------------
 test_user | test_id | {SELECT,UPDATE,DELETE}
(1 row)

2. Database privileges:

CREATE OR REPLACE FUNCTION database_privs(text) RETURNS table(username text,dbname name,privileges  text[])
AS
$$
SELECT $1, datname, array(select privs from unnest(ARRAY[
( CASE WHEN has_database_privilege($1,c.oid,'CONNECT') THEN 'CONNECT' ELSE NULL END),
(CASE WHEN has_database_privilege($1,c.oid,'CREATE') THEN 'CREATE' ELSE NULL END),
(CASE WHEN has_database_privilege($1,c.oid,'TEMPORARY') THEN 'TEMPORARY' ELSE NULL END),
(CASE WHEN has_database_privilege($1,c.oid,'TEMP') THEN 'TEMP' ELSE NULL END)])foo(privs) WHERE privs IS NOT NULL) FROM pg_database c WHERE 
has_database_privilege($1,c.oid,'CONNECT,CREATE,TEMPORARY,TEMP') AND datname <> 'template0';
$$ language sql;

Example:

edb=# select * from database_privs('test_user');
 username  |  dbname   |        privileges        
-----------+-----------+--------------------------
 test_user | template1 | {CONNECT}
 test_user | edb       | {CONNECT,TEMPORARY,TEMP}
(2 rows)

3. Tablespace privileges:

CREATE OR REPLACE FUNCTION tablespace_privs(text) RETURNS table(username text,spcname name,privileges text[])
AS
$$
   SELECT $1, spcname, ARRAY[
(CASE WHEN has_tablespace_privilege($1,spcname,'CREATE') THEN 'CREATE' ELSE NULL END)] FROM pg_tablespace WHERE has_tablespace_privilege($1,spcname,'CREATE');
$$ language sql;

Example:

edb=# select * from tablespace_privs('test_user');
 username  | spcname | privileges 
-----------+---------+------------
 test_user | test    | {CREATE}
(1 row)

4. Foreign Data Wrapper privileges

CREATE OR REPLACE FUNCTION fdw_wrapper_privs(text) RETURNS table(username text,fdwname name,privileges text[])
AS
$$
  SELECT $1, fdwname, ARRAY[
(CASE WHEN has_foreign_data_wrapper_privilege($1,fdwname,'USAGE') THEN 'USAGE' ELSE NULL END)] FROM pg_catalog.pg_foreign_data_wrapper WHERE has_foreign_data_wrapper_privilege($1,fdwname,'USAGE');
$$ language sql;

Example:

edb=# select * from fdw_wrapper_privs('test_user');
 username  |   fdwname    | privileges 
-----------+--------------+------------
 test_user | libpq_dblink | {USAGE}
(1 row)

5. To find foreign server privileges, the following function can be used:

CREATE OR REPLACE FUNCTION foreign_server_privs(text) RETURNS table(username text, srvname name, privileges text[])
AS
$$
  SELECT $1, s.srvname ,  ARRAY[
(CASE WHEN has_server_privilege($1,srvname,'USAGE') THEN 'USAGE' ELSE NULL END)] from pg_catalog.pg_foreign_server s  WHERE has_server_privilege ($1,srvname,'USAGE');
$$
language sql;

6. To find language privileges, the following function can be used:

CREATE OR REPLACE FUNCTION language_privs(text) RETURNS table(username text,srvname name, privileges text[])
AS
$$
SELECT $1, l.lanname, ARRAY[(CASE WHEN has_language_privilege($1,lanname,'USAGE') THEN 'USAGE' ELSE NULL END)] FROM pg_catalog.pg_language l where has_language_privilege($1,lanname,'USAGE');
$$ language sql;

7. To find the schema privileges of a user, the following can be used:

CREATE OR REPLACE FUNCTION schema_privs(text) RETURNS table(username text, schemaname name, privileges text[])
AS
$$
  SELECT $1, c.nspname, array(select privs from unnest(ARRAY[
( CASE WHEN has_schema_privilege($1,c.oid,'CREATE') THEN 'CREATE' ELSE NULL END),
(CASE WHEN has_schema_privilege($1,c.oid,'USAGE') THEN 'USAGE' ELSE NULL END)])foo(privs) WHERE privs IS NOT NULL)
FROM pg_namespace c where has_schema_privilege($1,c.oid,'CREATE,USAGE') AND c.nspparent=0;
$$ language sql;


Note: The above function can be used in Advanced Server. For schema privileges in PostgreSQL, the user can try the following function:

CREATE OR REPLACE FUNCTION schema_privs(text) RETURNS table(username text, schemaname name, privileges text[])
AS
$$
  SELECT $1, c.nspname, array(select privs from unnest(ARRAY[
( CASE WHEN has_schema_privilege($1,c.oid,'CREATE') THEN 'CREATE' ELSE NULL END),
(CASE WHEN has_schema_privilege($1,c.oid,'USAGE') THEN 'USAGE' ELSE NULL END)])foo(privs) WHERE privs IS NOT NULL)
FROM pg_namespace c where has_schema_privilege($1,c.oid,'CREATE,USAGE');
$$ language sql;

8. To get the privileges of a particular user on views, the following function can be used:

CREATE OR REPLACE FUNCTION view_privs(text) returns table(username text, viewname regclass, privileges text[])
AS
$$
SELECT  $1, c.oid::regclass, array(select privs from unnest(ARRAY [ 
( CASE WHEN has_table_privilege($1,c.oid,'SELECT') THEN 'SELECT' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'INSERT') THEN 'INSERT' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'UPDATE') THEN 'UPDATE' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'DELETE') THEN 'DELETE' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'TRUNCATE') THEN 'TRUNCATE' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'REFERENCES') THEN 'REFERENCES' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'TRIGGER') THEN 'TRIGGER' ELSE NULL END)]) foo(privs) where privs is not null) FROM pg_class c JOIN pg_namespace n on c.relnamespace=n.oid where n.nspname not in ('information_schema','pg_catalog','sys') and nspparent=0 and c.relkind='v' and has_table_privilege($1,c.oid,'SELECT,INSERT,UPDATE,DELETE,TRUNCATE,REFERENCES,TRIGGER') AND has_schema_privilege($1,c.relnamespace,'USAGE')
$$ language sql;

9. For sequence privileges, the following function can be used:

CREATE OR REPLACE FUNCTION sequence_privs(text) RETURNS table(username text, sequence regclass, privileges text[])
AS
$$
  SELECT $1, c.oid::regclass, array(select privs from unnest(ARRAY [ 
( CASE WHEN has_sequence_privilege($1,c.oid,'SELECT') THEN 'SELECT' ELSE NULL END),
(CASE WHEN has_sequence_privilege($1,c.oid,'UPDATE') THEN 'UPDATE' ELSE NULL END),
(CASE WHEN has_sequence_privilege($1,c.oid,'USAGE') THEN 'USAGE' ELSE NULL END)]) foo(privs) where privs is not null) FROM pg_class c JOIN pg_namespace n on c.relnamespace=n.oid where n.nspname not in ('information_schema','pg_catalog','sys') and nspparent=0 and c.relkind='S' and 
has_sequence_privilege($1,c.oid,'SELECT,UPDATE,USAGE') AND has_schema_privilege($1,c.relnamespace,'USAGE')
$$ language sql;

Users can also combine the above functions to view the privileges on several object types at once, something like the following:

select * from ( 
select username,'SCHEMA' as object_type,schemaname as object_name,privileges 
    FROM schema_privs('test_user') 
 UNION ALL
SELECT username,'TABLE' as object_type ,relname::name as object_name ,privs 
    FROM table_privs('test_user')
 ) t order by 2;

The above will give the table and schema privileges of the user test_user. Below is the output:

edb=# select * from  
edb-# (select username,'SCHEMA' as object_type,schemaname as object_name,privileges FROM schema_privs('test_user') 
edb(#  UNION ALL
edb(# SELECT username,'TABLE' as object_type ,relname::name as object_name ,privs  FROM table_privs('test_user')
edb(# ) t order by 2;

 username  | object_type |    object_name     |       privileges       
-----------+-------------+--------------------+------------------------
 test_user | SCHEMA      | pg_catalog         | {USAGE}
 test_user | SCHEMA      | public             | {CREATE,USAGE}
 test_user | SCHEMA      | information_schema | {USAGE}
 test_user | SCHEMA      | sys                | {USAGE}
 test_user | SCHEMA      | dbo                | {USAGE}
 test_user | SCHEMA      | test               | {USAGE}
 test_user | TABLE       | test_id            | {SELECT,UPDATE,DELETE}
(7 rows)

Monitor CPU and MEMORY percentage used by each process in PostgreSQL/PPAS 9.1

PostgreSQL has the pg_stat_activity view, which can be used to get session details. This view gives the following information:

1. datid: database OID
2. datname: database name
3. procpid: process ID
4. usesysid: user OID
5. usename: user name
6. application_name: application name
7. client_addr: client's address
8. client_hostname: host name (if available)
9. client_port: client's port number
10. backend_start: time at which the server process started
11. xact_start: time at which the current transaction started
12. query_start: time at which the current query began execution
13. waiting: process's waiting status
14. current_query: text of the current query

The above is good for monitoring sessions in PostgreSQL. However, suppose a user wants to know the percentage of CPU and memory used by a particular session/user; then he can get this information by using a plperlu function.

To create the plperlu function, the user needs to have the plperlu language installed in the DB. To create it, connect to the database as a superuser and execute the following command:

edb=# CREATE EXTENSION plperlu;
CREATE EXTENSION

Now create the following function:

CREATE OR REPLACE FUNCTION get_pid_cpu_mem(int) returns table(PID INT,CPU_perc float,MEM_perc float) 
as
$$
  # Build a shell pipeline: ps aux, filtered by awk down to the requested PID,
  # printing "pid:%cpu:%mem".
  my $ps = "ps aux";
  my $awk = "awk '{if (\$2==".$_[0]."){print \$2\":\"\$3\":\"\$4}}'";
  my $cmd = $ps."|".$awk;
  my $output = `$cmd 2>&1`;
  my @output = split(/[\n\r]+/,$output);
  foreach my $out (@output)
  { 
    my @line = split(/:/,$out);
    # Return the single matching row: pid, cpu percentage, memory percentage.
    return_next({'pid' => $line[0],'cpu_perc' => $line[1], 'mem_perc' => $line[2]});
    return undef;
  }
  return;
$$ language plperlu;

Note: The above function is made for PostgreSQL running on a Linux/Unix system.

Now the user can use the above function with pg_stat_activity to monitor the percentage of CPU and memory used by a particular user/process.

Following is one snapshot:

edb=# select procpid,usename,application_name, get_pid_cpu_mem(procpid).cpu_perc,get_pid_cpu_mem(procpid).mem_perc,current_query from pg_stat_activity;
-[ RECORD 1 ]----+--------------------------------------------------------------------------------------------------------------------------------------------------
procpid          | 12991
usename          | enterprisedb
application_name | psql
cpu_perc         | 0
mem_perc         | 0.6
current_query    | select procpid,usename,application_name, get_pid_cpu_mem(procpid).cpu_perc,get_pid_cpu_mem(procpid).mem_perc,current_query from pg_stat_activity;

Creating user probe and alert in PEM 2.1

This is also one of those things people are very interested in:
how to create probes other than the PEM inbuilt probes, and how to create an alert based on a probe?
Well, the answer is simple: understand what a probe is in PEM, and then understand the PEM data details.

A probe in PEM requires the following:

1. SQL code which can be used by pemagent to gather data.
2. A table in the pemdata schema, which will be used for storing the current status/data of the SQL.
3. A history table (specifically in the pemhistory schema) where all the history data will reside.

Suppose a user wants to monitor Slony replication using PEM (since PEM doesn't have built-in Slony replication monitoring); the user can then do the following:
1. Write SQL code which can be used for Slony replication monitoring.
We know that Slony replication monitoring can be done using the Slony view sl_status. So, the user can create SQL code like the one given below for replication:

select '_test_slony' as schema_name, st_origin,st_received, st_lag_num_events , st_lag_time from _test_slony.sl_status

Now, you must be wondering why I am including the Slony schema name in the SQL code. Well, the reason is that the Slony schema name represents the Slony cluster name, and if the user has more than one Slony replication cluster in the database, then those can all be monitored with a small modification of the SQL code. Also, the PEM data table and PEM history table can then be used for monitoring multiple Slony replication clusters. Let's start with creating a probe based on the SQL code.

The standard way of creating a probe in Postgres Enterprise Manager:
1. Insert the details of the probe, with the SQL code for monitoring, into the probe table:

INSERT INTO probe(display_name,
internal_name,
collection_method, 
target_type_id, 
applies_to_id,
probe_code,
enabled_by_default, default_execution_frequency,default_lifetime,any_server_version,force_enabled,probe_key_list) 
VALUES('slon replication monitor','slony_replication','s',300,600,'select ''_test_slony'' as schema_name, st_origin,st_received, st_lag_num_events , st_lag_time from 	_test_slony.sl_status',false,300,180,true,false,ARRAY['st_origin','st_received']);

Column descriptions of the probe table are given below:
display_name: name as presented in the Alerting box.
internal_name: internal name of the probe for the PEM server. Based on it, the function pem.create_data_and_history_tables() creates the pemdata and pemhistory tables.
collection_method: whether collection is based on SQL ('s') or on the internal code of pemagent ('i').
target_type_id: target type ID; the type IDs and their descriptions are given below:

		Global  : 50
		Agent   : 100
		Server  : 200
		Database: 300
		Schema  : 400
		Table   : 500
		Index   : 600
		Sequence: 700
		Function: 800

If the probe is at the database level, then target_type_id would be 300.

applies_to_id: the type ID the probe applies to, i.e. a user can make a database-level probe which can be applied to a table (like IndexSize, which can be a database-level and a table-level alert).
probe_code: SQL code/function to collect the data.
enabled_by_default: if true, then it will be enabled for all pemagent servers.
default_execution_frequency: execution interval.
default_lifetime: data retention period.
any_server_version: whether it is a PPAS-specific probe or for both PG/PPAS (true/false).
force_enabled: enabled by default (true/false).
probe_key_list: key columns.

2. Insert the details of the data columns returned by the SQL code into the probe_column table, as given below:

INSERT INTO pem.probe_column (probe_id, internal_name, display_name, display_position, classification,
       sql_data_type, unit_of_value, calculate_pit, discard_history, pit_by_default, is_graphable)

SELECT 
   (SELECT max(id) FROM PEM.probe), v.internal_name, v.display_name, v.display_position, v.classification,
       v.sql_data_type, v.unit_of_value, v.calculate_pit, v.discard_history, v.pit_by_default, v.is_graphable FROM (
    VALUES('database_name','Database Name',1,'k','text','',false,false,false,false),
    ('schema_name','Slony Cluster',2,'k','text','',false,false,false,false),
    ('st_origin','Master Node ID',3,'k','INTEGER','',false,false,false,false),
    ('st_received','Slave Node ID',4,'k','INTEGER','',false,false,false,false),
    ('st_lag_num_events','Lag Events',5,'m','INTEGER','',false,false,false,false),
    ('st_lag_time','Lag Interval',6,'m','INTERVAL','',false,false,false,false)
)
v(internal_name, display_name, display_position, classification,
               sql_data_type, unit_of_value, calculate_pit, discard_history, pit_by_default, is_graphable);

Descriptions of the columns are given below:
probe_id : probe ID assigned to the column (it is max(id) of the pem.probe table).
internal_name : internal name for the PEM server.
display_name : column display name shown to the user.
display_position : position of the column in the SQL code.
classification : if the column is considered a key, i.e. records can be identified based on it, then the value would be 'k', else 'm'.
sql_data_type : data type of the column.
unit_of_value : unit of the column's data.
calculate_pit : point-in-time data.
discard_history : discard any history.
pit_by_default : point in time by default (true/false).
is_graphable : can be used for a graph (always keep false).

3. Now use the PEM server function below to create the data and history tables in the pemdata and pemhistory schemas of the PEM server:

SELECT pem.create_data_and_history_tables();

The above will create tables using the internal_name mentioned in the probe, i.e. pemdata.slony_replication and pemhistory.slony_replication.
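
Once pemagent has executed the probe at least once, the collected data can be checked directly on the PEM server (a hypothetical verification query against the table created above):

SELECT * FROM pemdata.slony_replication;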

The above are three simple steps to create a probe in the PEM server. Now, users can see that if you know the SQL language, you can create a probe in PEM. To verify the probe, you can view it in the PEM client:

Open the PEM client -> Go to the PEM server directory (in the left pane) -> Connect to the PostgreSQL cluster -> Expand the databases -> Right click on any database -> Select "Probe Configuration",
which will pop up the "Probe configuration" dialog, where you will be able to see the Slony replication probe.

Now that we have created the probe, we can create a template alert based on it.
Creating a template alert is simple. For a new template alert, you have to do the following:
1. Identify the right SQL code for monitoring based on the probe, as given below for Slony replication:

SELECT extract ('epoch' from st_lag_time) from  pemdata.slony_replication WHERE server_id=${server_id} AND database_name='${database_name}' AND st_origin='${param_1}' AND st_received='${param_2}' and st_lag_time >=interval '1 sec' AND schema_name='_test_slony'

In the above, I am using the epoch extraction to convert the lag interval into seconds; based on that, I can define an alert with threshold values for Low, Medium and High severity.
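As a quick worked example of what the epoch extraction returns:

SELECT extract('epoch' from interval '2 min 5 sec');  -- returns 125 (seconds)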

2. Use the PEM server function pem.create_alert_template() to create the template alert, as given below:

SELECT pem.create_alert_template('Slony Replication','Slony Replication Monitoring Alert',$SQL$ SELECT extract ('epoch' from st_lag_time) from  pemdata.slony_replication WHERE server_id=${server_id} AND database_name='${database_name}' AND st_origin='${param_1}' AND st_received='${param_2}' and st_lag_time >=interval '1 sec' AND schema_name='_test_slony' $SQL$,300,ARRAY['st_origin','st_received'],ARRAY['INTEGER','INTEGER']::alert_param_type[],NULL,'seconds',ARRAY['slony_replication'],2);

Argument descriptions for the pem.create_alert_template() function are given below:
1. name : name of the alert, in single quotes
2. description : short description of the alert
3. sql : SQL code for getting the threshold value
4. object_type : level at which the alert applies (Agent Level, Server Level, Database Level, etc.; the 300 in the example above means database level)
5. param_names : parameter names the SQL code depends on (my SQL code depends on '${param_1}' (origin node id) and '${param_2}' (subscriber node id))
6. param_types : data type of each parameter (for the Slony SQL code above, ARRAY['INTEGER','INTEGER'])
7. param_units : unit assigned to each parameter (NULL in the Slony example)
8. threshold_unit : unit of the threshold (since I am checking the time lag, here it is seconds)
9. probe_dependency_list : dependency on pemdata tables (here, slony_replication)
10. snmp_oid : OID for SNMP
11. applicable_on_server : whether this is applicable to Advanced Server only or to ALL
12. default_check_frequency : check interval
13. default_history_retention : data retention in number of days
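To verify that the template was created, a minimal check is to query the display_name and description columns of pem.alert_template (the same columns used by the alert-listing SQLs later in this document):

SELECT display_name, description
FROM pem.alert_template
WHERE display_name = 'Slony Replication';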

That's it! As you can see, creating a new user-defined alert and probe in PEM is very simple. The only thing the user has to focus on is identifying the right SQL code for the alert/probe.

Have fun with Postgres Enterprise Manager!

Automating Binding of Servers with PEM Agent in Postgres Enterprise Manager 2.1.

This is the second post in my Postgres Enterprise Manager 2.1 series.

A question people ask is: is Postgres Enterprise Manager a useful tool for companies which provide services for PostgreSQL? The answer is yes. And what about companies which provide infrastructure support, supplying the server and the PostgreSQL database: is it a good tool for them to monitor PostgreSQL? Again, the answer is yes.

Companies which provide infrastructure support and offer Database as a Service want everything to be automated, i.e. when a server is provisioned, it should come with PostgreSQL and its components installed, plus an automatic installation of pemagent, which is also achievable. However, they get stumped on automatic binding of the PEM agent with PostgreSQL.

For binding PostgreSQL with the PEM agent, the simple method is to use the PEM client:
in the PEM client, add a PostgreSQL server to the PEM directory, and then bind the PEM agent to that PostgreSQL cluster.

File -> Add Server -> this opens the "New Server Registration" dialog -> add the server information (and also click on "Store PEM Server").

Then, in the "New Server Registration" window:

Click on the "PEM Agent" tab -> choose an agent from the "Bound Agent" drop-down, then fill in the binding information.

The above is one method. However, a company providing infrastructure support doesn't want to do this manual work; they want the activity to be automated.

For automating this process, the user can do the following:
1. Install the PEM agent in unattended mode, using a command like the one below:

./pem_agent-2.1.0.2-linux-x64.run --prefix /opt/PEM --mode unattended --pghost <pem server hostname> --pguser <pem server username and password> --agent_description <Agent Hostname>

2. Create a .pgpass file in the root/admin user's home directory (on Linux, the root home directory is /root):

# cat .pgpass
#hostname:port:database:username:password
*:5432:*:postgres:<your password>

Change the permissions on .pgpass:

chmod 600 .pgpass

3. After creating the .pgpass file in the home directory, execute the following SQL statements:

/opt/PEM/agent/bin/edb-psql -h <PEM Server hostname> -p 5432 -c "
INSERT INTO pem.server(description,server,port,database) VALUES('New server','IP Address',5432,'postgres');" -U postgres

/opt/PEM/agent/bin/edb-psql -h <PEM Server hostname> -p 5432 -c "INSERT INTO pem.server_option(server_id,pem_user,username,server_group) VALUES((SELECT id FROM pem.server WHERE server='IP address' AND active=true),'postgres','postgres','PEM Server Directory');" -U postgres

/opt/PEM/agent/bin/edb-psql -h <PEM Server hostname> -p 5432 -c "INSERT INTO pem.agent_server_binding(agent_id,server_id,server,port,username,database)
VALUES((SELECT id FROM pem.agent WHERE description ='Agent Hostname' AND active=true),(SELECT id FROM pem.server WHERE server='IP address' AND active=true),'IP address',5432,'postgres','postgres');" -U postgres

After executing the above SQL statements, you have successfully bound the agent with the server for monitoring.
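As a quick sanity check (using only the columns referenced in the INSERTs above), you can confirm that the binding row now exists:

/opt/PEM/agent/bin/edb-psql -h <PEM Server hostname> -p 5432 -U postgres -c "SELECT b.agent_id, b.server_id FROM pem.agent_server_binding b JOIN pem.server s ON s.id = b.server_id WHERE s.server='IP address' AND s.active=true;"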

The user can also wrap the above steps in a shell script and execute it while provisioning a new server for their client.

Have Fun!

Types of Alerts in Postgres Enterprise Manager 2.1.

It's been two months since I blogged on any topic, so I thought I would share some information on PEM. This post is for users who are looking for the types of in-built alerts in PEM. In this blog, I am listing the categories of alerts, and the list of alerts in each category, which one can find in Postgres Enterprise Manager 2.1.
With this list in hand, you can decide which alerts you would like to configure.

Postgres Enterprise Manager (PEM) is a very useful tool for monitoring PostgreSQL, and it has all the alerts required for monitoring PostgreSQL. These alerts fall into the following categories:

1. Server Level Alerts.
2. PG Cluster Level Alerts.
3. Database Level Alerts.
4. Schema Level Alerts.
5. Table Level Alerts.

All of these alerts have been defined with PostgreSQL in mind. Let's see what the alerts in each category are:
1. Server Level Alerts: These alerts monitor server components such as memory, disk and CPU, along with server-level monitoring of PostgreSQL:

 1. Average table bloat on host (Bloats impact CPU and Disk)
 2. CPU utilization
 3. Database size on host
 4. Disk Available
 5. Disk busy percentage
 6. Disk Consumption
 7. Disk consumption percentage
 8. Free memory percentage
 9. Highest table bloat on host
 10. Load Average (15 minutes)
 11. Load Average (1 minute)
 12. Load Average (5 minutes)
 13. Load Average per CPU Core (15 minutes)
 14. Load Average per CPU Core (1 minute)
 15. Load Average per CPU Core (5 minutes)
 16. Memory used percentage
 17. Most used disk percentage
 18. Number of CPUs running higher than a threshold
 19. Swap consumption
 20. Swap consumption percentage
 21. Table size on host
 22. Total table bloat on host

The above alerts cover server monitoring, which includes memory, disk, CPU, bloat, and the biggest tables on the host.

2. PG Cluster Level Alerts: These alerts monitor PostgreSQL at the cluster level, i.e. number of connections, database-level stats, user monitoring, etc. Following is the list of PG cluster level alerts:

 1. A user expires in N days
 2. Average table bloat in server
 3. Buffers allocated per second
 4. Buffers written per second
 5. Committed transactions percentage
 6. Connections in idle-in-transaction state
 7. Connections in idle-in-transaction state, as a percentage of max_connections
 8. Connections in idle state
 9. Database size in server
 10. Dead Tuples
 11. Dead tuples percentage
 12. Function Count
 13. Highest table bloat in server
 14. Hot update percentage
 15. Index Scans
 16. Index size as a percentage of table size
 17. InfiniteCache buffers hit percentage
 18. Largest index by table-size percentage
 19. Largest table (by multiple of unbloated size)
 20. Last Analyze
 21. Last AutoAnalyze
 22. Last AutoVacuum
 23. Last Vacuum
 24. Live Tuples
 25. Long-running autovacuums
 26. Long-running idle connections
 27. Long-running idle connections and idle transactions
 28. Long-running idle transactions
 29. Long-running queries
 30. Long-running transactions
 31. Long-running vacuums
 32. Number of prepared transactions
 33. Number of WAL files
 34. Percentage of buffers written by backends
 35. Percentage of buffers written by backends over last N minutes
 36. Percentage of buffers written by checkpoint
 37. Sequence Count
 38. Sequential Scans
 39. Shared buffers hit percentage
 40. Table Count
 41. Table size in server
 42. Total connections
 43. Total connections as percentage of max_connections
 44. Total table bloat in server
 45. Tuples deleted
 46. Tuples fetched
 47. Tuples hot updated
 48. Tuples inserted
 49. Tuples returned
 50. Tuples updated
 51. Ungranted locks
 52. Unused, non-superuser connections
 53. Unused, non-superuser connections as percentage of max_connections

3. Database Level Alerts: These alerts monitor a specific database in a PostgreSQL cluster. This is useful when you have a database which is important for your business and monitoring it matters to you:

 1. Average table bloat in database
 2. Committed transactions percentage
 3. Connections in idle-in-transaction state
 4. Connections in idle-in-transaction state, as a percentage of max_connections
 5. Connections in idle state
 6. Database Frozen XID
 7. Database size
 8. Dead Tuples
 9. Dead tuples percentage
 10. Function Count
 11. Highest table bloat in database
 12. Hot update percentage
 13. Index Scans
 14. Index size as a percentage of table size
 15. InfiniteCache buffers hit percentage
 16. Largest index by table-size percentage
 17. Largest table (by multiple of unbloated size)
 18. Last Analyze
 19. Last AutoAnalyze
 20. Last AutoVacuum
 21. Last Vacuum
 22. Live Tuples
 23. Long-running autovacuums
 24. Long-running idle connections
 25. Long-running idle connections and idle transactions
 26. Long-running idle transactions
 27. Long-running queries
 28. Long-running transactions
 29. Long-running vacuums
 30. Sequence Count
 31. Sequential Scans
 32. Shared buffers hit percentage
 33. Table Count
 34. Table size in database
 35. Total connections
 36. Total connections as percentage of max_connections
 37. Total table bloat in database
 38. Tuples deleted
 39. Tuples fetched
 40. Tuples hot updated
 41. Tuples inserted
 42. Tuples returned
 43. Tuples updated
 44. Ungranted locks

4. Schema Level Alerts: The user can also configure alerts for a specific schema in a database. This is important when you have a schema related to important business objects and you have to monitor the performance of the tables in that schema. The list of those alerts is given below:

 1. Average table bloat in schema
 2. Dead Tuples
 3. Dead tuples percentage
 4. Function Count
 5. Highest table bloat in schema
 6. Hot update percentage
 7. Index Scans
 8. Index size as a percentage of table size
 9. Largest index by table-size percentage
 10. Largest table (by multiple of unbloated size)
 11. Last Analyze
 12. Last AutoAnalyze
 13. Last AutoVacuum
 14. Last Vacuum
 15. Live Tuples
 16. Sequence Count
 17. Sequential Scans
 18. Table Count
 19. Table size in schema
 20. Total table bloat in schema
 21. Tuples deleted
 22. Tuples hot updated
 23. Tuples inserted
 24. Tuples updated

5. Table Level Alerts: The user can also create alerts at table level. Sometimes users are interested in monitoring an important/specific table which has business importance. For them, these alerts are important for maintaining the performance of PG.

 1. Dead Tuples
 2. Dead tuples percentage
 3. Hot update percentage
 4. Index Scans
 5. Index size as a percentage of table size
 6. Last Analyze
 7. Last AutoAnalyze
 8. Last AutoVacuum
 9. Last Vacuum
 10. Live Tuples
 11. Row Count
 12. Sequential Scans
 13. Table bloat
 14. Table Frozen XID
 15. Table size
 16. Table size as a multiple of unbloated size
 17. Tuples deleted
 18. Tuples hot updated
 19. Tuples inserted
 20. Tuples updated

You can see that Postgres Enterprise Manager covers PostgreSQL monitoring from all the aspects which are directly or indirectly responsible for PostgreSQL performance and monitoring.

Now, readers of my blog must be thinking: these are the lists of alerts, but how can they get more information about or descriptions of these alerts? The answer is simple; you can get more detail about the above alerts by three methods:
1. Using PEM Client Help:

 Open PEM Client -> Go to Help

2. Using Alerting:

  Open PEM Client ->
   For server level, go to PEM Agents -> right-click on the particular agent -> click on Alerting

For PG cluster level,

go to PEM Server Directory -> right-click on the particular PG cluster -> select Alerting

and similarly for database and table alerts.

3. Using SQL:
Connect to PEM Server Database:

   psql -p 5432 -U postgres pem

And use following SQLs:
Server Level:

SELECT row_number() over (order by display_name)||'. '|| at.display_name AS display_name,description FROM pem.alert_template at WHERE (at.object_type = 100) ORDER BY at.display_name;

Cluster Level:

SELECT row_number() over (order by display_name)||'. '|| at.display_name, description FROM pem.alert_template at WHERE (at.object_type = 200) AND at.applicable_on_server IN ('ALL' , 'POSTGRES_SERVER') ORDER BY at.display_name;

DB Level:

SELECT row_number() over (order by display_name)||'. '|| at.display_name, description FROM pem.alert_template at WHERE (at.object_type = 300) AND at.applicable_on_server IN ('ALL' , 'POSTGRES_SERVER') ORDER BY at.display_name;

Schema Level:

SELECT row_number() over (order by display_name)||'. '|| at.display_name, description FROM pem.alert_template at WHERE (at.object_type = 400) AND at.applicable_on_server IN ('ALL' , 'POSTGRES_SERVER') ORDER BY at.display_name;

Table Level:

SELECT row_number() over (order by display_name)||'. '|| at.display_name, description FROM pem.alert_template at WHERE (at.object_type = 500) AND at.applicable_on_server IN ('ALL' , 'POSTGRES_SERVER') ORDER BY at.display_name;

Have fun, and have perfect monitoring of PostgreSQL!

READ-ONLY user, READ-ONLY database, READ-ONLY backup user in PostgreSQL/PPAS

I have been asked this many times: users want to know how they can create a READ-ONLY user or a READ-ONLY database, or how they can create a backup user which can be used only for backups (no modifications).

Well, the answer lies in the parameter called default_transaction_read_only.

If you want to make a user READ-ONLY, then you can follow the steps given below:
1. Create a normal user.
2. Use the ALTER USER command to set this parameter for the user, as given below:

ALTER USER <username> SET default_transaction_read_only = on;

3. Grant SELECT on the required tables/objects to this user using the GRANT command, and you are good to go.
Below is a snapshot:

postgres=# create user readonly password 'test';
CREATE ROLE
postgres=# alter user readonly set default_transaction_read_only = on;
ALTER ROLE
postgres=# GRANT select on employees to readonly;
GRANT
edbs-MacBook-Pro:data postgres$ psql -U readonly -W
Password for user readonly:
psql (9.1.1)
Type "help" for help.

postgres=> select * from employees ;
 employee_name | entity_name
---------------+-------------
 Smith         | HR
 Jones         | HR
 Taylor        | SALES
 Brown         | SALES
(4 rows)

postgres=> CREATE table test_readonly(id numeric);
ERROR:  cannot execute CREATE TABLE in a read-only transaction
postgres=>

Similarly, if you want to make a database READ-ONLY, then you can use the following command:

ALTER DATABASE <database name> SET default_transaction_read_only = on;

Below is snapshot:

postgres=# CREATE database readonly;
CREATE DATABASE
postgres=# alter database readonly set default_transaction_read_only = on;
ALTER DATABASE
postgres=# \q
edbs-MacBook-Pro:data postgres$ psql readonly
psql (9.1.1)
Type "help" for help.

readonly=# create table test_readonly(id numeric);
ERROR:  cannot execute CREATE TABLE in a read-only transaction
readonly=#

Now, if you want a separate backup user to use for online/hot backups or logical backups (using pg_dump), then you can create a superuser with default_transaction_read_only = on and use it for backup purposes, as given below:

CREATE USER backupuser SUPERUSER  password 'backupuser';
ALTER USER backupuser set default_transaction_read_only = on;
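If you want to double-check that the per-role setting stuck, the standard pg_db_role_setting catalog records it; a small sanity query:

SELECT r.rolname, s.setconfig
FROM pg_db_role_setting s
     JOIN pg_roles r ON r.oid = s.setrole
WHERE r.rolname = 'backupuser';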

Using the default_transaction_read_only parameter, a user can also make a session read-only by executing the following command:

set default_transaction_read_only=on;
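The setting affects only the current session and can be turned off again at any time; alternatively, standard SQL lets you request a single read-only transaction:

SET default_transaction_read_only = off;   -- revert for the current session
BEGIN TRANSACTION READ ONLY;               -- or make just one transaction read-only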

I hope the above helps anyone interested in having a READ-ONLY user, a READ-ONLY database or a backup user.

Partition Syntax Support in PPAS 9.1

In the series on new features in Advanced Server 9.1, today I thought I would write about the partition table syntax supported in PPAS 9.1. In PostgreSQL, and in PPAS up to 9.0, the user has to follow the method given below to partition a table:
1. Create the parent table.
2. Create child tables using the inheritance feature.
3. Create a trigger on the parent table, so that the trigger can redirect inserts to the right partition.
4. And if the user has to add a new child table, steps 2 and 3 have to be repeated.

Now, from PPAS 9.1, the user doesn't have to perform the above activities: PPAS 9.1 supports PARTITION TABLE syntax.

Let's see how the PARTITION syntax in PPAS can make users'/DBAs' lives easier.

We know that PPAS/PostgreSQL supports two types of partitioning (range and list), so we will see how simple it becomes with the CREATE PARTITION syntax.
First, let's create a range-partitioned table the way we used to in PPAS 9.0/PostgreSQL:
1. Create the master table and the child tables, as given below:

CREATE TABLE partition_master(id numeric primary key,val text);
CREATE TABLE partition_child1(CHECK (id > 0 and id <=10)) inherits(partition_master);
CREATE TABLE partition_child2(CHECK (id >10 and id <=20)) inherits(partition_master);

2. Then add primary keys on both child tables (the check constraints were already defined inline above):

alter table partition_child1 add primary key (id);
alter table partition_child2 add primary key (id);

3. Create a trigger which redirects inserts to the correct child table:

CREATE OR REPLACE FUNCTION part_trig_insrt() RETURNS trigger
AS
$$
BEGIN
   IF TG_OP='INSERT' THEN
      IF NEW.id > 0 and NEW.id <= 10 THEN
           INSERT INTO partition_child1 VALUES(NEW.*);
      ELSIF NEW.id > 10 and NEW.id <= 20 THEN
           INSERT INTO partition_child2 VALUES(NEW.*);
      ELSE
           RAISE 'inserted partition key does not map to any partition';
      END IF;
   END IF;
   RETURN NULL;  -- suppress the insert into the parent; the row now lives in a child table
END;
$$ language plpgsql;
CREATE TRIGGER partition_trig_insert BEFORE INSERT ON partition_master FOR EACH ROW execute procedure part_trig_insrt();

Similarly, you would have to write a trigger to handle updates of the partition key.

And whenever the user wants to add a new partition, he has to update the trigger function, create the new partition table, and so on.

Now, from PPAS 9.1 onwards, the user can run a single command for partitioning and PPAS will take care of everything, as given below:

CREATE TABLE partition_master (id numeric primary key,val text)
PARTITION BY RANGE(id)
(PARTITION partition_child1 VALUES LESS THAN (11),
PARTITION partition_child2 VALUES LESS THAN (21));

edb=# insert into partition_master values(1,'First');
INSERT 0 0
edb=# insert into partition_master values(11,'Eleventh');
INSERT 0 0
edb=# select * from partition_master;
 id |   val    
----+----------
  1 | First
 11 | Eleventh
(2 rows)

edb=# select * from only partition_master_partition_child1;
 id |  val  
----+-------
  1 | First
(1 row)

edb=# select * from partition_master_partition_child2;
 id |   val    
----+----------
 11 | Eleventh
(1 row)

That was easy. With a single command you have your partitioned table in PPAS.

Suppose the user later wants to add one more partition; he can then execute the following single command:

ALTER TABLE partition_master add partition partition_child3 VALUES LESS THAN (31);

edb=# insert into partition_master values(30,'Thirty');
INSERT 0 0
edb=# select * from partition_master;
 id |   val    
----+----------
  1 | First
 11 | Eleventh
 30 | Thirty
(3 rows)

edb=# select * from partition_master_partition_child3;
 id |  val   
----+--------
 30 | Thirty
(1 row)

Thats simple. isn’t?

Along with this, the PPAS 9.1 partition syntax also allows swapping an existing table with a partition or subpartition, as given below:

CREATE TABLE single_table (id numeric primary key, val text);
insert into single_table select generate_series(1,10);
INSERT 0 10

ALTER TABLE partition_master
EXCHANGE PARTITION partition_child1
WITH TABLE single_table;
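After the exchange, the rows that were in single_table should now come back from the partition; a quick check, reusing the child-table naming scheme shown earlier:

select * from partition_master_partition_child1;  -- should now return ids 1..10 from the old single_table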

The other syntaxes which are supported are given below:

1.  ALTER TABLE… ADD PARTITION
2.  ALTER TABLE… ADD SUBPARTITION
3.  ALTER TABLE… DROP PARTITION
4.  ALTER TABLE… DROP SUBPARTITION
5.  ALTER TABLE… SPLIT PARTITION
6.  ALTER TABLE… SPLIT SUBPARTITION
7.  ALTER TABLE… TRUNCATE PARTITION
8.  ALTER TABLE… TRUNCATE SUBPARTITION
9.  ALTER TABLE… EXCHANGE PARTITION
10. ALTER TABLE… MOVE PARTITION
11. ALTER TABLE… RENAME PARTITION
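As one more illustration from the list above, here is a hedged sketch of SPLIT PARTITION against our example table; partition_child3a and partition_child3b are hypothetical names, and AT (25) is the range boundary at which the existing partition is split:

ALTER TABLE partition_master SPLIT PARTITION partition_child3
AT (25)
INTO (PARTITION partition_child3a, PARTITION partition_child3b);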

Support for the above partition syntaxes has really made managing partitioned tables easier in PPAS 9.1!