Partition pruning in EDB Postgres 9.5

One of my colleagues, who was recently working with a customer, presented a customer case: the customer had a partitioned table, and EDB Postgres was not applying partition pruning to his query. So I thought I'd blog about partition pruning, so that EDB Postgres developers and DBAs can benefit.

EDB Postgres supports two types of partition pruning:

Constraint exclusion pruning:

It is a feature introduced in PostgreSQL 8.1. This type of pruning works with PostgreSQL-style partitioning. With constraint exclusion enabled, the planner will examine the constraints of each partition and try to prove that the partition need not be scanned because it could not contain any rows meeting the query’s WHERE clause. When the planner can prove this, it excludes the partition from the query plan.

However, it has some limitations. Following are the limitations of constraint_exclusion:

a. Constraint exclusion only works when the query’s WHERE clause contains constants (or externally supplied parameters). For example, a comparison against a non-immutable function such as CURRENT_TIMESTAMP cannot be optimized, since the planner cannot know which partition the function value might fall into at run time.
b. Keep the partitioning constraints simple, else the planner may not be able to prove that partitions don’t need to be visited. Use simple equality conditions for list partitioning, or simple range tests for range partitioning, as illustrated in the examples below. A good rule of thumb is that partitioning constraints should contain only comparisons of the partitioning column(s) to constants using B-tree-indexable operators.
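
Both limitations relate to the constraint_exclusion planner setting, whose default value of partition (since PostgreSQL 8.4) applies the check only to inheritance children and UNION ALL subqueries. A quick sketch:

SHOW constraint_exclusion;             -- off / on / partition
SET constraint_exclusion = partition;  -- the default, and usually what you want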

To verify this, the following demonstrates the behavior of constraint_exclusion pruning:
1. Let’s create a PostgreSQL-style partitioned table using the table inheritance feature.

CREATE TABLE measurement (
     city_id        int not null,
     logdate        date not null,
     peaktemp        int,
     unitsales      int
 );
CREATE TABLE measurement_y2004m02 (
     CHECK ( date_part('month'::text, logdate) = 2)
 ) INHERITS (measurement);
CREATE TABLE measurement_y2004m03 (
     CHECK ( date_part('month'::text, logdate) = 3 )
 ) INHERITS (measurement);

  2. Execute a simple query to verify the constraint_exclusion behavior based on the above definition:
 edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE date_part('month'::text, logdate) = 3;
                                    QUERY PLAN                                     
-----------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
         ->  Seq Scan on measurement_y2004m02
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
         ->  Seq Scan on measurement_y2004m03
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
(8 rows)

The above query output shows that EDB Postgres considered all partitions of the table measurement, even though we included the partition column and a constant value in the WHERE clause. This is due to the check constraint, which uses the date_part function. date_part is not immutable in Postgres, so at planning time EDB Postgres doesn’t know what value it will return. And if the user doesn’t include a WHERE clause matching the check constraint expression, pruning will not work.

In Postgres, you can make a function immutable by using the ALTER FUNCTION command.

In the example below, we will make the date_part function immutable to check whether constraint_exclusion works with an immutable date_part function:

  1. Convert the date_part function to immutable:
edb=# ALTER FUNCTION date_part (text, timestamp without time zone ) immutable;
ALTER FUNCTION
  2. Run the EXPLAIN command to check the behavior of constraint_exclusion with the immutable function:
edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE date_part('month'::text, logdate) = 3;
                                    QUERY PLAN
-----------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
         ->  Seq Scan on measurement_y2004m03
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
(6 rows)

As you can see, with the immutable function EDB Postgres was able to perform constraint_exclusion pruning.

What if we change the WHERE clause a little and use the < and = operators in our SQL queries? Below are examples:

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate < DATE '2004-03-01';
                                      QUERY PLAN                                       
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m02
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
(8 rows)

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate = DATE '2004-02-01';
                                      QUERY PLAN                                       
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate = '01-FEB-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m02
               Filter: (logdate = '01-FEB-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate = '01-FEB-04 00:00:00'::timestamp without time zone)
(8 rows)

As you can see, with a WHERE clause that does not match the way the constraints are defined on the partitions, Postgres will scan all partitions.

Based on the above, we can conclude that users planning to use PostgreSQL-style partitioning have to be careful about their constraint definitions in order to utilize constraint_exclusion pruning.

Let’s drop and recreate the measurement table with range-style constraints, and verify the >, <, >=, <= and = operators in the WHERE clause.

CREATE TABLE measurement (
     city_id        int not null,
     logdate        date not null,
     peaktemp        int,
     unitsales      int
 ); 
CREATE TABLE measurement_y2004m02 (
     CHECK ( logdate >= DATE '2004-02-01' AND logdate < DATE '2004-03-01' )
 ) INHERITS (measurement);
CREATE TABLE measurement_y2004m03 (
     CHECK ( logdate >= DATE '2004-03-01' AND logdate < DATE '2004-04-01' )
 ) INHERITS (measurement);

Below are the explain plans based on the above definition:

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate < DATE '2004-03-01';
                                      QUERY PLAN                                       
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m02
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
(6 rows)

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate = DATE '2004-03-01';
                                      QUERY PLAN                                       
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate = '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate = '01-MAR-04 00:00:00'::timestamp without time zone)
(6 rows)

The above clearly shows that with the correct constraint definitions, constraint_exclusion pruning works for the >, <, >=, <= and = operators in the WHERE clause.

Fast pruning:

EDB Postgres has had CREATE TABLE ... PARTITION syntax since version 9.1. Tables partitioned with this syntax benefit from an additional kind of pruning called fast pruning. Fast pruning uses the partition metadata and query predicates to efficiently reduce the set of partitions to scan, and it happens before query planning. Let’s verify the behavior of fast pruning.
As mentioned, fast pruning works with partitions created using the EDB Postgres CREATE TABLE ... PARTITION syntax. Let’s modify the above definition of the measurement table to use this syntax, as given below:

CREATE TABLE  measurement (
     city_id        int not null,
     logdate        date not null,
     peaktemp        int,
     unitsales      int
 )
PARTITION BY RANGE(logdate)
(PARTITION y2004m01 VALUES LESS THAN ('2004-02-01'),
 PARTITION y2004m02 VALUES LESS THAN ('2004-03-01'),
 PARTITION y2004m03 VALUES LESS THAN ('2004-04-01')
);
edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate < DATE '2004-03-01';
                                      QUERY PLAN                                       
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m01
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m02
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
(8 rows)

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate = DATE '2004-03-01';
                                      QUERY PLAN                                       
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate = '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate = '01-MAR-04 00:00:00'::timestamp without time zone)
(6 rows)

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate > DATE '2004-03-01';
                                     QUERY PLAN                                      
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate > '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate > '01-MAR-04 00:00:00'::timestamp without time zone)
(6 rows)

For more information on EDB Postgres pruning, please refer to the following link:
https://www.enterprisedb.com/docs/en/9.5/oracompat/Database_Compatibility_for_Oracle_Developers_Guide.1.327.html#

Postgres and Transparent Data Encryption (TDE)

Security has always been a great concern for enterprises. Especially if you have crucial information stored in a database, you would always prefer to have strong security around it. Over the years, technologies have evolved to provide better solutions for this.

Very sensitive information tends to be kept encrypted, so that even if somebody gains access to the system, they cannot view that information unless they are authorized.

For managing sensitive information, Enterprises use multiple methods:

  1. Encrypting specific information.

If you are a PPAS user, you can use the DBMS_CRYPTO package, which provides a way of encrypting sensitive information in databases.

For more information, please refer to the following link:

http://www.enterprisedb.com/docs/en/9.4/oracompat/Database_Compatibility_for_Oracle_Developers_Guide.1.178.html#

For PostgreSQL, users can use pgcrypto module.
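
For illustration, here is a minimal pgcrypto sketch (the table and passphrase are made up for the example):

CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE customer_secrets (
    id     serial PRIMARY KEY,
    secret bytea               -- ciphertext, never plain text
);

-- encrypt on write, using a symmetric passphrase
INSERT INTO customer_secrets (secret)
VALUES (pgp_sym_encrypt('4111-1111-1111-1111', 'my_passphrase'));

-- decrypt on read; fails without the correct passphrase
SELECT pgp_sym_decrypt(secret, 'my_passphrase') FROM customer_secrets;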

  2. Transparent Data Encryption (TDE) is another method, employed by both Microsoft and Oracle, to encrypt database files. TDE offers encryption at the file level. This method solves the problem of protecting data at rest, i.e. encrypting databases both on the hard drive and consequently on backup media. Enterprises typically employ TDE to solve compliance issues such as PCI DSS.

Postgres Plus currently doesn’t have built-in TDE; however, if enterprises are looking for encryption at the database file level, they can use one of the following methods for protecting data at rest:

  1. Full Disk Encryption:

Full disk or partition encryption is one of the best ways of protecting your data. This method not only protects each file but also protects the temporary storage that may contain parts of these files. Full disk encryption protects all of your files, so you do not have to worry about selecting what you want to protect and possibly missing a file.

RHEL (Red Hat) supports Linux Unified Key Setup-on-disk-format (or LUKS). LUKS bulk-encrypts hard drive partitions.
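
As a rough sketch (the device name is hypothetical, and luksFormat destroys any data already on the partition), setting up a LUKS-encrypted data directory looks like this:

# cryptsetup luksFormat /dev/sdb1          # initialize LUKS on the partition
# cryptsetup luksOpen /dev/sdb1 pgcrypt    # map it to /dev/mapper/pgcrypt
# mkfs.ext4 /dev/mapper/pgcrypt            # create a filesystem on it
# mount /dev/mapper/pgcrypt /ppas94/data   # mount it as the data directory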

For more information on LUKS, please refer to the following link:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/chap-Security_Guide-Encryption.html#sect-Security_Guide-LUKS_Disk_Encryption

  2. File system-level encryption:

File system-level encryption is often called file/directory encryption. In this method, individual files or directories are encrypted by the file system itself.

Stackable cryptographic file system encryption is available, which users can utilize in their environment.

File system-level encryption gives the following advantages:

  1. Flexible file-based key management, so that each file can be (and usually is) encrypted with a separate encryption key.
  2. Individual management of encrypted files, e.g. incremental backups of the individual changed files, even in encrypted form, rather than backup of the entire encrypted volume.
  3. Access control can be enforced through the use of public-key cryptography, and the fact that cryptographic keys are only held in memory while the file decrypted by them is held open.

Stackable cryptographic file system encryption can be used with Postgres for Transparent Data Encryption.

In this blog, I will discuss using mount ecryptfs, as it requires less setup overhead (LUKS requires a new disk to be configured and formatted before storing data on it; “mount ecryptfs” works with existing directories and data).

If enterprises want to give DBAs control over TDE, they can define a few sudo rules that let DBAs execute the encryption commands.

Following is a method they can use:

  • Ask the system admin to create sudo rules that allow the DBA to set up encryption for the Postgres Plus data directory. One common way to do this is using the “mount ecryptfs” command on Linux operating systems.
  • If the user needs to encrypt the /ppas94/data directory, they can use the following command:
sudo mount -t ecryptfs /ppas94/data /ppas94/data
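
A matching sudo rule could look roughly like the following sketch (the user name and path are examples; it would live in a file under /etc/sudoers.d):

# allow the enterprisedb user to mount only this eCryptfs overlay as root
enterprisedb ALL=(root) NOPASSWD: /bin/mount -t ecryptfs /ppas94/data /ppas94/data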

More information can be found in the documentation from RHEL:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-efs.html

Users can also specify the encryption key type (passphrase, openssl), cipher (aes, des3_ede, …), key byte size, and other options with the above command.

An example is given below:

# mount -t ecryptfs /home /home -o ecryptfs_unlink_sigs,\
  ecryptfs_key_bytes=16,ecryptfs_cipher=aes,ecryptfs_sig=c7fed37c0a341e19

CentOS 7 and RHEL 7 don’t ship with ecryptfs by default; therefore, users can use the encfs command instead.

For more information on encfs, please refer to the following link:

https://wiki.archlinux.org/index.php/EncFS

Following are the steps to use encfs to encrypt the data directory.

  1. Create the data directory using the following commands, as the enterprisedb user:
   mkdir /var/lib/ppas/9.4/encrypted_data
   chmod 700 /var/lib/ppas/9.4/encrypted_data
  2. Use the following encfs command to encrypt the data directory:
         encfs /var/lib/ppas/9.4/encrypted_data/ /var/lib/ppas/9.4/data

A snapshot of the above command is given below:

encfs /var/lib/ppas/9.4/encrypted_data /var/lib/ppas/9.4/data
The directory "/var/lib/ppas/9.4/data" does not exist. Should it be created? (y,n) y
Creating new encrypted volume.
Please choose from one of the following options:
 enter "x" for expert configuration mode,
 enter "p" for pre-configured paranoia mode,
 anything else, or an empty line will select standard mode.
?> p

Paranoia configuration selected.

Configuration finished.  The filesystem to be created has
the following properties:
Filesystem cipher: "ssl/aes", version 3:0:2
Filename encoding: "nameio/block", version 3:0:1
Key Size: 256 bits
Block Size: 1024 bytes, including 8 byte MAC header
Each file contains 8 byte header with unique IV data.
Filenames encoded using IV chaining mode.
File data IV is chained to filename IV.
File holes passed through to ciphertext.

-------------------------- WARNING --------------------------
The external initialization-vector chaining option has been
enabled.  This option disables the use of hard links on the
filesystem. Without hard links, some programs may not work.
The programs 'mutt' and 'procmail' are known to fail.  For
more information, please see the encfs mailing list.
If you would like to choose another configuration setting,
please press CTRL-C now to abort and start over.

Now you will need to enter a password for your filesystem.
You will need to remember this password, as there is absolutely
no recovery mechanism.  However, the password can be changed
later using encfsctl.

New Encfs Password: 
Verify Encfs Password: 

  3. After encrypting the data directory, users also need to modify the postgresql-<version> service script to supply the mount password properly. For that, they can either use sshpass or write their own program that passes the password when mounting the directory.

As you can see, achieving Transparent Data Encryption in Postgres is very easy.

READ-ONLY user, READ-ONLY database, READ-ONLY backup user in PostgreSQL/PPAS

I have been asked this many times: how can one create a READ-ONLY user, a READ-ONLY database, or a backup user that can be used only for backups (no modifications)?

Well, the answer lies in a parameter called default_transaction_read_only.

If you want to make a user READ-ONLY, you can follow the steps given below:
1. CREATE a normal user.
2. Use the ALTER USER command to set this parameter for the user, as given below:

ALTER USER <username> SET default_transaction_read_only = on;

3. GRANT SELECT on the required tables/objects to this user, and you are good to go (a schema-wide sketch follows below).
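
If the user should be able to read every table in a schema, PostgreSQL 9.0 and later can handle existing and future tables with one statement each; a small sketch:

GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly;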
Below is a snapshot:

postgres=# create user readonly password 'test';
CREATE ROLE
postgres=# alter user readonly set default_transaction_read_only = on;
ALTER ROLE
postgres=# GRANT select on employees to readonly;
GRANT
edbs-MacBook-Pro:data postgres$ psql -U readonly -W
Password for user readonly:
psql (9.1.1)
Type "help" for help.

postgres=> select * from employees ;
 employee_name | entity_name
---------------+-------------
 Smith         | HR
 Jones         | HR
 Taylor        | SALES
 Brown         | SALES
(4 rows)

postgres=> CREATE table test_readonly(id numeric);
ERROR:  cannot execute CREATE TABLE in a read-only transaction
postgres=>

Similarly, if you want to make a database READ-ONLY, you can use the following command:

ALTER DATABASE <dbname> SET default_transaction_read_only = on;

Below is a snapshot:

postgres=# CREATE database readonly;
CREATE DATABASE
postgres=# alter database readonly set default_transaction_read_only = on;
ALTER DATABASE
postgres=# \q
edbs-MacBook-Pro:data postgres$ psql readonly
psql (9.1.1)
Type "help" for help.

readonly=# create table test_readonly(id numeric);
ERROR:  cannot execute CREATE TABLE in a read-only transaction
readonly=#

Now, if you want a separate backup user for online/hot backups or logical backups (using pg_dump), you can create a superuser with default_transaction_read_only = on and use it for backup purposes, as given below:

CREATE USER backupuser SUPERUSER  password 'backupuser';
ALTER USER backupuser set default_transaction_read_only = on;

Using the default_transaction_read_only parameter, a user can also make a single session read-only by executing the following command:

set default_transaction_read_only=on;
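
Continuing the earlier snapshot, a session that has done this behaves as follows (sketch):

postgres=# set default_transaction_read_only = on;
SET
postgres=# insert into employees values ('Doe', 'IT');
ERROR:  cannot execute INSERT in a read-only transaction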

I hope the above helps anyone who is interested in having a READ-ONLY user, READ-ONLY database, or backup user.

2011 in review

The WordPress.com stats helper prepared a 2011 annual report for my blog. The stats seem good to me, so I am sharing the report with readers.

Here’s an excerpt:

A New York City subway train holds 1,200 people. This blog was viewed about 7,500 times in 2011. If it were a NYC subway train, it would take about 6 trips to carry that many people.

Click here to see the complete report.

UPSERT/MERGE using Writable CTE in PostgreSQL 9.1

There is always discussion about having UPSERT/MERGE in PostgreSQL. Since Oracle and other RDBMSs have this feature, people ask for it in PostgreSQL as well.

In previous versions of PostgreSQL, we used to implement it using functions. Now, in PostgreSQL 9.1, users can implement this feature using a writable CTE.

PostgreSQL 9.1 now has writable CTEs. WITH provides a way to write auxiliary statements for use in a larger query. These statements, which are often referred to as Common Table Expressions or CTEs, can be thought of as defining temporary tables that exist just for one query. Each auxiliary statement in a WITH clause can be a SELECT, INSERT, UPDATE, or DELETE; and the WITH clause itself is attached to a primary statement that can also be a SELECT, INSERT, UPDATE, or DELETE.
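
As a warm-up, here is a minimal writable CTE that moves rows between two tables in a single statement (the table names are illustrative):

WITH moved AS (
    DELETE FROM orders WHERE status = 'DONE'
    RETURNING *
)
INSERT INTO orders_archive SELECT * FROM moved;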

Let’s see how we can use a writable CTE for UPSERT.

Following is the SQL code that can be used in Oracle and PostgreSQL for creating the sample data:

For Oracle:

create table myTable
  (pid number, sales number, status varchar2(6));

create table myTable2
  (pid number, sales number, status varchar2(6));

insert into myTable2 values(1,12,'CURR');
insert into myTable2 values(2,13,'NEW' );
insert into myTable2 values(3,15,'CURR');
insert into myTable  values(2,24,'CURR');
insert into myTable  values(3, 0,'OBS' );
insert into myTable  values(4,42,'CURR');
select * from myTable2;

For PostgreSQL 9.1:

create table myTable
  (pid numeric, sales numeric, status varchar(6));

create table myTable2
  (pid numeric, sales numeric, status varchar(6));

insert into myTable2 values(1,12,'CURR');
insert into myTable2 values(2,13,'NEW' );
insert into myTable2 values(3,15,'CURR');
insert into myTable  values(2,24,'CURR');
insert into myTable  values(3, 0,'OBS' );
insert into myTable  values(4,42,'CURR');
select * from myTable2;

In Oracle, people use MERGE, something like the following:

merge into myTable2 m
         using myTable d
          on (m.pid = d.pid)
   when  matched
   then  update set   m.sales  = m.sales+d.sales
                ,     m.status = d.status
   when  not matched
   then  insert values (d.pid,d.sales,'NEW');

SQL> select * from myTable2;
       PID	SALES STATUS
---------- ---------- ------
	 1	   12 CURR
	 2	   37 CURR
	 3	   15 OBS
	 4	   42 NEW

In PostgreSQL 9.1, with a writable CTE:

WITH upsert AS (
  UPDATE mytable2 m
     SET sales  = m.sales + d.sales,
         status = d.status
    FROM mytable d
   WHERE m.pid = d.pid
  RETURNING m.*
)
INSERT INTO mytable2
SELECT a.pid, a.sales, 'NEW'
  FROM mytable a
 WHERE a.pid NOT IN (SELECT b.pid FROM upsert b);

postgres=# select * from mytable2 order by 1;
 pid | sales | status
-----+-------+--------
   1 |    12 | CURR
   2 |    37 | CURR
   3 |    15 | OBS
   4 |    42 | NEW

Now we have writable CTEs, which can help us do UPSERT in PostgreSQL.
Enjoy 🙂

Some trigger and programming features in PostgreSQL

Anonymous Block Support:
Now users are able to create anonymous blocks using plpgsql, as given below:

do  
$$  
declare  
   id1 varchar;  
begin  
   select id into id1 from test_sort limit 1;  
   raise notice '%',id1;  
end;  
$$ language plpgsql;  

Output:

NOTICE:  1.1  
DO  

Column Trigger:
PG 9.0 also supports column-level and WHEN triggers.
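
The examples below reference a table test_def, which the post does not define; a minimal layout consistent with the output would be:

CREATE TABLE test_def (id numeric, a varchar);
INSERT INTO test_def VALUES (1, 'First');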

Following is an example:

create or replace function test_col_trigger() returns trigger  
as  
$$  
  BEGIN  
      RAISE NOTICE 'INSERTED VALUE in Column a %',new.id;  
       RETURN null;  
  END;  
$$ language plpgsql;  
  
create trigger col_trigger after update of "a" on test_def FOR EACH ROW EXECUTE procedure test_col_trigger();  

Output:

postgres=# update test_def set a='Second';  
NOTICE:  INSERTED VALUE in Column a 1  
UPDATE 1  

When Triggers in PG9.0:

PG 9.0 has WHEN triggers, which allow users to create a trigger whose procedure executes only when a particular condition is met.

create or replace function test_col_trigger() returns trigger  
as  
$$  
  BEGIN  
      RAISE NOTICE 'INSERTED VALUE in Column a %',new.id;  
       RETURN null;  
  END;  
$$ language plpgsql;  
  
create trigger trigger_when AFTER update on test_def for each row when (old.id = new.id) execute procedure test_col_trigger();  

Output:

postgres=# update test_def set a='Second';  
NOTICE:  INSERTED VALUE in Column a 1  
UPDATE 1  

Some other new features have also been added in PG 9.0.

The important features which DBAs must know are the following:

1. New VACUUM FULL

Prior to PG 9.0, VACUUM FULL was slow, since it moved tuples block by block towards the beginning of the table. This process consumed a lot of CPU and performed a lot of random I/O. On top of that, after a VACUUM FULL the DBA had to reindex the indexes associated with the table, since VACUUM FULL used to leave bloat/fragmentation in the indexes.

Overall it was not an efficient method.

However, from PG 9.0 VACUUM FULL works more smartly. Instead of moving tuples around, VACUUM FULL now creates a fresh copy of the table with new indexes and destroys the old bloated table. This method is faster than the old one.
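
Usage is unchanged, for example:

VACUUM FULL VERBOSE mytable;   -- rewrites the table and rebuilds its indexes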

2. New column added in pg_stat_activity

application_name is a new column added in pg_stat_activity, giving information about the application from which each connection was made.

Example is given below:

postgres=# select * from pg_stat_activity where procpid=pg_backend_pid();  
 datid | datname  | procpid | usesysid | usename  | application_name | client_addr | client_port |          backend_start           |            xact_start            |           query_start            | waiting |                         current_query                            
-------+----------+---------+----------+----------+------------------+-------------+-------------+----------------------------------+----------------------------------+----------------------------------+---------+----------------------------------------------------------------  
 11874 | postgres |    4533 |       10 | postgres | psql             |             |          -1 | 2010-11-02 11:39:13.170051+05:30 | 2010-11-02 12:55:04.278363+05:30 | 2010-11-02 12:55:04.278363+05:30 | f       | select * from pg_stat_activity where procpid=pg_backend_pid(); 
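
psql fills in this column automatically, as seen above; other applications can label their own sessions, for example:

SET application_name = 'nightly_batch';   -- illustrative name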

3. Per database and per role configuration.
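
For example, PG 9.0 adds the combined per-role, per-database form (the names here are illustrative):

ALTER ROLE batch_user IN DATABASE reporting SET work_mem = '64MB';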

4. A new --analyze-only option has been added to the vacuumdb command.
This option allows the user to run only ANALYZE via the vacuumdb command.
The relevant line of vacuumdb --help shows:

-Z, --analyze-only              only update optimizer statistics  
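
A typical invocation (the database name is illustrative):

$ vacuumdb --analyze-only mydb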

For more information on other features in PG 9.0, the following page is helpful:
http://wiki.postgresql.org/wiki/Illustrated_9_0