Postgres Plus Advanced Server 9.3 Features

Postgres Plus Advanced Server 9.3, released on Tuesday for general availability, contains all of the innovations that were part of PostgreSQL 9.3, which was released in September.
These features include the following:

1. Configuration directive ‘include_dir’
2. COPY FREEZE for more efficient bulk loading
3. Custom Background Workers
4. Data Checksums
5. JSON: Additional functionality
6. LATERAL JOIN
7. Parallel pg_dump for faster backups
8. ‘pg_isready’ server monitoring tool
9. Switch to Posix shared memory and mmap()
10. Event Triggers
11. VIEW Features:
                   Materialized Views
                   Recursive View Syntax
                   Updatable Views
12. Writeable Foreign Tables
                   postgres_fdw
13. Replication Improvements
                    Streaming-Only Remastering
                    Fast Failover
                    Architecture-Independent Streaming
                    pg_basebackup conf setup

For Postgres Plus Advanced Server, we integrated additional performance improvements, new packages, and object-oriented features into the core PostgreSQL code so that our database can address a wider range of enterprise use cases.

Partitioning enhancements to boost performance for INSERT/UPDATE/SELECT operations were a major development for Postgres Plus. Below are graphs illustrating the performance increases of Postgres Plus Advanced Server 9.3 compared to version 9.2.

Figure: TPS SELECT (Postgres Plus Advanced Server 9.3 vs. 9.2)

Figure: TPS UPDATE (Postgres Plus Advanced Server 9.3 vs. 9.2)

The release features some important functions that can make a developer’s life easier, such as:

1. DBMS_RANDOM package. This package helps users create random numbers, random strings, and random dates. PostgreSQL already provides a random() function, on top of which users would otherwise have to write their own wrappers to get random strings and random dates.

With the DBMS_RANDOM package, users can simply call built-in functions for these two purposes instead of writing their own wrapper functions. A simple example of using it is given below:

user=# BEGIN
user$#   DBMS_OUTPUT.put_line('Run 1 : seed=0');
user$#   DBMS_RANDOM.seed (val => 0);
user$#   FOR i IN 1 ..5 LOOP
user$#     DBMS_OUTPUT.put_line('i=' || i || ' : value=' || DBMS_RANDOM.value(low => 1, high => 10));
user$#   END LOOP;
user$#
user$#   DBMS_OUTPUT.put_line('Run 2 : seed=0');
user$#   DBMS_RANDOM.seed (val => 0);
user$#   FOR i IN 1 ..5 LOOP
user$#     DBMS_OUTPUT.put_line('i=' || i || ' : value=' || DBMS_RANDOM.value(low => 1, high => 10));
user$#   END LOOP;
user$#
user$# END;
Run 1 : seed=0
i=1 : value=6.756044853478672
i=2 : value=7.806749390438203
i=3 : value=1.1173334638588129
i=4 : value=5.221515491604802
i=5 : value=8.445010517258194
Run 2 : seed=0
i=1 : value=6.756044853478672
i=2 : value=7.806749390438203
i=3 : value=1.1173334638588129
i=4 : value=5.221515491604802
i=5 : value=8.445010517258194

EDB-SPL Procedure successfully completed
user=# BEGIN
user$#   FOR i IN 1 .. 5 LOOP
user$#     DBMS_OUTPUT.put_line('string(''x'',10)= ' || DBMS_RANDOM.string('x',10));
user$#   END LOOP;
user$# END;
string('x',10)= 1TT23XR8X2
string('x',10)= DO5D2KUUVD
string('x',10)= AGNPAXDECT
string('x',10)= 7JC6RMU9KX
string('x',10)= 13BW6JM6KN

EDB-SPL Procedure successfully completed
user=# BEGIN
user$#   FOR i IN 1 .. 5 LOOP
user$#     DBMS_OUTPUT.put_line('date= ' || TRUNC(SYSDATE + DBMS_RANDOM.value(0,366)));
user$#   END LOOP;
user$# END;
date= 20-JUN-14 00:00:00
date= 26-MAY-14 00:00:00
date= 11-JAN-14 00:00:00
date= 27-JUN-14 00:00:00
date= 21-DEC-13 00:00:00

EDB-SPL Procedure successfully completed

2. DBMS_LOCK.sleep: Similar to the pg_sleep() function, DBMS_LOCK.sleep suspends the current session for the given number of seconds. It is meant for Oracle users/developers who are more familiar with the Oracle package.
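A minimal sketch of calling it from an SPL block (the two timestamps should differ by roughly the requested pause):

```sql
BEGIN
  DBMS_OUTPUT.PUT_LINE('before: ' || TO_CHAR(SYSDATE, 'HH24:MI:SS'));
  DBMS_LOCK.SLEEP(2);   -- suspend the current session for about 2 seconds
  DBMS_OUTPUT.PUT_LINE('after : ' || TO_CHAR(SYSDATE, 'HH24:MI:SS'));
END;
```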

3. DBMS_CRYPTO: DBMS_CRYPTO is new in Postgres Plus Advanced Server 9.3. It provides an interface to encrypt and decrypt stored data, and it can be used in conjunction with PL/SQL programs that run network communications. It supports several industry-standard encryption and hashing algorithms.

DECLARE
   input_string       VARCHAR2 (200) :=  'Secret Message';
   output_string      VARCHAR2 (200);
   encrypted_raw      RAW (2000);             -- stores encrypted binary text
   decrypted_raw      RAW (2000);             -- stores decrypted binary text
   num_key_bytes      NUMBER := 256/8;        -- key length 256 bits (32 bytes)
   key_bytes_raw      RAW (32);               -- stores 256-bit encryption key
   encryption_type   INTEGER :=          -- total encryption type
                            DBMS_CRYPTO.ENCRYPT_DES
                          + DBMS_CRYPTO.CHAIN_CBC
                          + DBMS_CRYPTO.PAD_PKCS5;
BEGIN
   DBMS_OUTPUT.PUT_LINE ( 'Original string: ' || input_string);
   key_bytes_raw := DBMS_CRYPTO.RANDOMBYTES (num_key_bytes::INTEGER);
   encrypted_raw := DBMS_CRYPTO.ENCRYPT
      (
         src => convert_to(input_string, 'LATIN1'),
         typ => encryption_type,
         key => key_bytes_raw
      );
    -- The encrypted value "encrypted_raw" can be used here
   decrypted_raw := DBMS_CRYPTO.DECRYPT
      (
         src => encrypted_raw,
         typ => encryption_type,
         key => key_bytes_raw
      );
   output_string := convert_from(decrypted_raw,'LATIN1');

   DBMS_OUTPUT.PUT_LINE ('Decrypted string: ' || output_string);
END;

4. DBMS_SCHEDULER: Postgres Plus 9.2 introduced the DBMS_JOB package; Postgres Plus 9.3 now adds DBMS_SCHEDULER, which offers more control options for jobs and better visibility of scheduled jobs.
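As a sketch of the new package, a nightly job could be created as below. This assumes a stored procedure named refresh_accounts already exists (the name is purely illustrative) and that the scheduler's underlying job agent is running:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'nightly_refresh',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'refresh_accounts',      -- hypothetical procedure
    start_date      => SYSDATE,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',   -- run every day at 02:00
    enabled         => TRUE);
END;
```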
5. UTL_ENCODE: This is another important package for keeping sensitive information in the database. Using this package, users can easily encode and decode their data before storing it. The package can be used in new applications, and it gives developers and users flexibility around important encode/decode functions. It is particularly useful for users working with applications that were written for Oracle and migrated to Postgres Plus Advanced Server.

Let’s see an example:

user=# DECLARE
user-#     v_str VARCHAR2(100);
user$#   BEGIN
user$#      --generate encoded value
user$#     v_str := utl_encode.text_encode('EnterpriseDB','gb18030', UTL_ENCODE.BASE64);
user$#     dbms_output.put_line('Encoded string => '||v_str);
user$#
user$#      --take the encoded value and decode it
user$#    v_str := utl_encode.text_decode(v_str,'gb18030', UTL_ENCODE.BASE64);
user$#    dbms_output.put_line('Decoded string => '||v_str);
user$# END;
Encoded string => RW50ZXJwcmlzZURC
Decoded string => EnterpriseDB

EDB-SPL Procedure successfully completed

6. UTL_HTTP: Advanced Server 9.3 adds a UTL_HTTP package. This package provides functions for making HTTP callouts from SQL/SPL. Developers/users can use this package and its associated functions to access data on the Internet over HTTP.
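As a quick sketch, the simplest entry point is UTL_HTTP.REQUEST, which fetches a page and returns roughly the first 2000 bytes of its contents (the URL below is just an illustration):

```sql
SELECT UTL_HTTP.REQUEST('http://www.enterprisedb.com/') FROM dual;
```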

7. UTL_URL: This is one of the additions we developed for Postgres Plus 9.3. The package has two functions: escape and unescape. The escape function escapes a URL before the URL is used to fetch data from a website, and the unescape function reverses that escaping before the data is fetched. Together with UTL_HTTP, these functions let users fetch data from a website directly, without having to write complex URL-handling code in the application.
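A small sketch of the pair (the URL is illustrative; with default arguments, illegal characters such as the space are percent-encoded):

```sql
DECLARE
  v_url VARCHAR2(200) := 'http://example.com/a b';     -- note the illegal space
BEGIN
  DBMS_OUTPUT.PUT_LINE(UTL_URL.ESCAPE(v_url));         -- space becomes %20
  DBMS_OUTPUT.PUT_LINE(UTL_URL.UNESCAPE(UTL_URL.ESCAPE(v_url)));
END;
```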

8. EDB*Loader improvements. In Postgres Plus 9.3, EDB*Loader has more control options for bulk loading.
a. ROWS parameter: prior to version 9.3, EDB*Loader processed an entire data file as a single transaction. With the ROWS parameter, users can specify how much data to load before a COMMIT is executed, giving more control when processing/loading large data files.
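A sketch of an invocation (the control-file path and the row count are illustrative; this follows the edbldr command-line style):

```
edbldr userid=enterprisedb/ control=/tmp/edb.ctl log=/tmp/edb.log rows=10000
```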
b. Error codes: EDB*Loader now supports some additional exit/error codes, which will help users/developers include proper exit-code checking when using this high-speed bulk loader with parallel processing. The exit codes are shown below:

      0: Success
      1: Failure
      2: Warning
      3: Fatal

c. Streaming output files to the client: Prior to version 9.3, users had to check EDB*Loader's log file on the server side. The log file, bad file, and discard file used to be created on the server, and for troubleshooting users had to log in to the server to inspect them. Postgres Plus 9.3 enables these files to be created on the client side, so users are no longer required to log in to the server to check them.
d. New GUC for empty strings. EDB*Loader now has a GUC that users can use to control the default handling of empty strings in their data files. edbldr.empty_string has the following valid values:
1. null: the field is treated as null if the raw field contains no characters or a pair of delimiters with nothing in between.
2. empty_string: the field is treated as an empty string if the raw field contains no characters or a pair of delimiters with nothing in between.
3. pgsql: the field is treated as null if the raw field contains no characters, but as an empty string if it contains a pair of delimiters with nothing in between.
The default is pgsql.
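For example, to have empty fields loaded as empty strings rather than nulls for the current session (a sketch; the GUC can also be set in postgresql.conf):

```sql
SET edbldr.empty_string TO 'empty_string';
```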

9. New REGEXP functions. Postgres Plus 9.3 now has three new REGEXP functions for developers.
a. REGEXP_COUNT: This searches a string for a regular expression, and returns a count of the times that the regular expression occurs. Here’s a simple example:

user=# SELECT REGEXP_COUNT('reinitializing', 'i', 1);
 regexp_count
--------------
            5
(1 row)
 

b. REGEXP_INSTR: This function searches a string for a POSIX-style regular expression and returns the position within the string where the match was located.

user=# SELECT REGEXP_INSTR('800-555-1212', '[0-9][0-9][0-9]', 1, 1) ;
 regexp_instr
--------------
            1
(1 row)

c. REGEXP_SUBSTR: This function searches a string for a pattern specified by a POSIX compliant regular expression and returns the string that matches the pattern specified in the call to the function.


user=# SELECT REGEXP_SUBSTR('800-555-1212', '[0-9][0-9][0-9]', 1, 2);
 regexp_substr
---------------
 555
(1 row)

10. New exception codes for the UTL_FILE package: The UTL_FILE package provides the capability to read from, and write to, files on the operating system's file system. To give developers/users better error control, Postgres Plus 9.3 has added some new exception codes, which can be used in PL/SPL code for better exception handling. A simple example is given below:

user=# CREATE DIRECTORY tempdir_fb22954 AS '/tmp/';
CREATE DIRECTORY
user=# SELECT dirname
user-# FROM edb_dir
user-# WHERE dirname='tempdir_fb22954';
     dirname
-----------------
 tempdir_fb22954
(1 row)

user=#
user=#  -- check "utl_file.invalid_operation" exception
user=# DECLARE v_testfile UTL_FILE.FILE_TYPE;
user$#
user$#  v_directory VARCHAR2(50) := 'tempdir_fb22954';
user$#
user$#  v_filename VARCHAR2(50) := 'test_file_exist.txt';
user$#
user$#  BEGIN -- Create file
user$#  v_testfile := UTL_FILE.FOPEN(v_directory,v_filename,'w');
user$#
user$#  UTL_FILE.PUT(v_testfile,'A');
user$#
user$#  UTL_FILE.NEW_LINE(v_testfile);
user$#
user$#  UTL_FILE.FCLOSE(v_testfile);
user$#
user$#  DBMS_OUTPUT.PUT_LINE('Created file: ' || v_filename);
user$#
user$#  -- It should throw exception because file is open for read.
user$#  v_testfile := UTL_FILE.FOPEN(v_directory,v_filename,'r');
user$#
user$#  UTL_FILE.PUT(v_testfile,'B');
user$#
user$#  UTL_FILE.NEW_LINE(v_testfile);
user$#
user$#  UTL_FILE.PUT(v_testfile,'C');
user$#
user$#  UTL_FILE.NEW_LINE(v_testfile);
user$#
user$#  exception
user$#  WHEN utl_file.invalid_operation THEN
user$# RAISE notice 'exception caught utl_file.invalid_operation : SQLERRM: %',
user$#                                                       sqlerrm;
user$# UTL_FILE.FCLOSE(v_testfile);
user$#  WHEN others THEN
user$# RAISE notice 'exception, others : SQLERRM: %',
user$#                         sqlerrm;
user$# UTL_FILE.FCLOSE(v_testfile);
user$#  END;
Created file: test_file_exist.txt
NOTICE:  exception caught utl_file.invalid_operation : SQLERRM: invalid file operation

EDB-SPL Procedure successfully completed

New in Postgres Plus Advanced Server 9.2

Good news: the Postgres Plus Advanced Server 9.2 Beta is now available. It has all the new features of PostgreSQL 9.2, and it also has new features that are specific to Advanced Server.

I will cover the new features of PostgreSQL 9.2 later, following the release notes. In the meantime, the following link covers the major features of PostgreSQL 9.2 and can be used as a reference:

http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.2

In this blog, I am going to cover only the features specific to the Advanced Server 9.2 core, which are:

1. INSERT APPEND hint in PPAS 9.2

PPAS 9.2 now supports the INSERT APPEND hint. This is a very interesting feature, and it is very useful for users who frequently delete records in bulk and then do bulk INSERTs.
The hint makes PPAS bypass the Free Space Map and append the new rows at the end of the relation (table), which can benefit such INSERTs.
Its usage is given below:

INSERT /*+append*/ INTO tab21115 VALUES(1,'abc',sysdate);

2. Calling a procedure like a function.

PPAS 9.2 now allows calling a procedure with the following syntax:

SELECT * FROM procedure(arg1,arg2,…);

This type of procedure call is allowed for procedures that have OUT/INOUT parameters. Users can also still execute such procedures with the EXEC command.

Let's see how it works:

a. Create a procedure as given below:

CREATE OR REPLACE PROCEDURE foo_proc(A IN INT, B INOUT INT, C OUT INT)
AS
BEGIN
   b:=a+b;
   c:=b+1;
END;

In 9.1, and still in 9.2, a user can do something like this:

DECLARE
  d int:=1;
  e int:=2;
  f int;
BEGIN
   foo_proc(d,e,f);
   DBMS_OUTPUT.PUT_LINE('e = '||e);
   DBMS_OUTPUT.PUT_LINE('d = '||d);
   DBMS_OUTPUT.PUT_LINE('f= '||f);
END;
e = 3
d = 1
f= 4

Now in 9.2, user can also do this:

edb=# select * from foo_proc(1,2);
 b | c 
---+---
 3 | 4
(1 row)

If the user has a procedure and a function with the same name and wants to execute the procedure using a SELECT command, then SELECT will give preference to the function over the procedure, as shown below:

Function Definition:

CREATE OR REPLACE FUNCTION foo(a INT) RETURN INT
AS
BEGIN
  RETURN 2;
END;

CREATE PROCEDURE foo(A int, B OUT int)
AS
BEGIN
   B:=a+1;
END;

edb=# select * from foo(4);
 foo 
-----
   2

However, the user can still use the EXEC command to execute the procedure, as given below:

edb=# exec foo(4);
 ?column? 
----------
        5
(1 row)

Or, from within an anonymous block, PERFORM will also work with the procedure, as given below:

edb=# DECLARE
edb-#     a int:=4;
edb$#     b int;
edb$# BEGIN
edb$#    PERFORM foo(a,b);
edb$#    DBMS_OUTPUT.PUT_LINE('b = '||b);
edb$# END;
b = 5

EDB-SPL Procedure successfully completed

3. Object Type Support:

PPAS already has object type support. However, the new version enhances that support: users can now create object types with attributes, member functions, and procedures. This reduces the amount of code needed to define and maintain object types.

Example is given below:

a. Define a Type:

    CREATE OR REPLACE TYPE PersonObj AS OBJECT (
  first_name  VARCHAR2(50),
  last_name   VARCHAR2(50),
  date_of_birth  DATE,
  MEMBER FUNCTION getAge RETURN NUMBER
);

b. Define a Type Body:

CREATE OR REPLACE TYPE BODY PersonObj AS
  MEMBER FUNCTION getAge RETURN NUMBER AS
  BEGIN
    RETURN Trunc(Months_Between(Sysdate, date_of_birth)/12);
  END getAge;
END;

c. Define a table based on the object type:

CREATE TABLE people (
  id      NUMBER(10) NOT NULL,
  person  PersonObj
);

d. To insert data, the default constructor can be used as given below:

INSERT INTO people
VALUES (1, PersonObj('John','Doe',
        TO_DATE('01/01/1999','DD/MM/YYYY')));

e. The data in the table can be accessed as follows:

SELECT p.id,
       (p.person).first_name,
       p.person.getAge() age
FROM   people p;
 id | first_name | age 
----+------------+-----
  1 | John       |  13
(1 row)

4. PL/SQL Subtypes:

A subtype is a TYPE defined on top of a base or user-defined type, optionally with constraints. The syntax for declaring a SUBTYPE in PL/SQL is given below:

SUBTYPE <subtype_name> IS <type_name> [(constraint)] [NOT NULL];

Example is given below:

Example 1:

DECLARE
  SUBTYPE INT2 IS NUMBER;
  v2 INT2 (3,0);
BEGIN
  v2:= 123;
  DBMS_OUTPUT.PUT_LINE ('V2 = '|| v2);
END;

Example 2:

DECLARE
  SUBTYPE v_word IS VARCHAR2(10) NOT NULL;
  verb  v_word  := 'verb';
  noun  v_word  := 'noun';
BEGIN
  noun :='n1';
  DBMS_OUTPUT.PUT_LINE (UPPER(verb));
  DBMS_OUTPUT.PUT_LINE (UPPER(noun));
END;

5. DROP TABLE CASCADE CONSTRAINTS in 9.2:

PPAS 9.2 now supports DROP TABLE … CASCADE CONSTRAINTS. This is a very interesting feature and is not available in PostgreSQL 9.2.

Using this command, a user can drop all referential integrity constraints that refer to primary and unique keys in the dropped table. If you omit this clause and such referential integrity constraints exist, the database returns an error and does not drop the table.

This is useful when somebody wants to drop a master table but keep the child table that has a foreign key constraint referencing the master. An example is given below:

CREATE TABLE master(id numeric primary key);
CREATE table child(id numeric references master(id));
insert into master values(1);
insert into child values(1);
edb=# drop table master cascade constraints;
NOTICE:  drop cascades to constraint child_id_fkey on table child
DROP TABLE
edb=# \d child
  Table "enterprisedb.child"
 Column |  Type   | Modifiers 
--------+---------+-----------
 id     | numeric | 

As you can see above, I dropped the master table on which the child table had a referential integrity constraint, yet the child table still exists. PostgreSQL 9.2 doesn't have CASCADE CONSTRAINTS, so a user first has to drop every constraint that references the master table's primary key and only then drop the table; that process requires finding all the referencing tables. This command saves DBAs/users from having to hunt down the child tables.

6. Multiple errors reported as warnings while compiling procedures/functions in SPL.

This is also a very interesting SPL enhancement. Whenever a user compiles a function/procedure in SPL, it will now show all the error messages in that function/procedure body, so the user doesn't have to revisit the code after fixing each individual issue. An example is given below:

CREATE OR REPLACE FUNCTION foo() RETURN integer
AS
BEGIN
   a:=a=1
;
B:=b + 1;
I am making fool;
END;

Output:
ERROR:  "a" is not a known variable
LINE 4:    a:=a=1
           ^
ERROR:  "b" is not a known variable
LINE 6: B:=b + 1;
        ^
ERROR:  syntax error at or near "I"
LINE 7: I am making fool;
        ^
ERROR:  compilation of SPL function/procedure "foo" failed due to 3 errors

As you can see above, it reported all the mistakes I made in the function body, and now I can fix all of them in one go instead of compiling again and again.

This is very useful for Developers/DBAs.

7. DEFAULT for variables declared in packages:

In PPAS 9.2, users can now define a DEFAULT value for package-level variables, as given below:

CREATE OR REPLACE PACKAGE INITPKG_pro_b4_default AS
PROCEDURE show_default_values;
 n1 NUMBER DEFAULT 20;
 c1 VARCHAR2 DEFAULT 'Default';
END;

CREATE OR REPLACE PACKAGE BODY INITPKG_pro_b4_default AS
 PROCEDURE show_default_values IS
  n number;
 BEGIN
  dbms_output.put_line(c1);
  dbms_output.put_line(n1);
  n1 := n1*n1;
  n1 := SQRT(n1);
  dbms_output.put_line(n1);
 END;
END;

edb=# exec INITPKG_pro_b4_default.show_default_values;
Default
20
20.000000000000000

EDB-SPL Procedure successfully completed

8. TABLE expression support for nested tables.

In 9.2, users can now use TABLE expressions with nested tables. This feature was previously missing and was requested by many Oracle DBAs. A TABLE expression allows the user to query a collection in the FROM clause like a table.

edb=# CREATE OR REPLACE TYPE comp_typ_str_21189 IS TABLE OF VARCHAR2(100);
CREATE TYPE
edb=# 
edb=# select * from table(comp_typ_str_21189('PPP','QQQ')) ;
 column_value 
--------------
 PPP
 QQQ
(2 rows)

9. INPUT/OUTPUT functions for nested tables:

PPAS 9.2 supports user-defined input/output functions for nested tables. This feature was missing in 9.1.

Some examples are given below:

edb=# create or replace type mytab as table of varchar2(90);
CREATE TYPE
edb=# Create or replace function fun return mytab
edb-# as
edb$# begin
edb$#  return mytab('a','b',3);
edb$# end;
CREATE FUNCTION
edb=# select fun;
   fun    
----------
 {a,b,3}
(1 row)

edb=# 
edb=# create or replace function fun return mytab
edb-# as
edb$#  nvar mytab;
edb$# begin
edb$#  nvar := mytab();
edb$#  nvar.extend(4);
edb$#  nvar(1) := 'foo';
edb$#  nvar(2) := NULL;
edb$#  nvar(3) := 'deleteme';
edb$#  nvar(4) := 'bar';
edb$#  return nvar;
edb$# end;
CREATE FUNCTION
edb=# -- verify the user is able to see the data
edb=# select fun;
           fun           
-------------------------
 {foo,NULL,deleteme,bar}
(1 row)

10. LOG is no longer a reserved keyword for functions.
Up to PPAS 9.1, LOG was a reserved keyword and users were not allowed to create functions with this name. In 9.2, users can use this keyword.

edb=# CREATE OR REPLACE FUNCTION log( a int ) return int as
edb$# BEGIN
edb$# dbms_output.put_line('Function LOG is called');
edb$# return a*2;
edb$# END;
CREATE FUNCTION
edb=# SELECT LOG(10);
Function LOG is called
 log 
-----
  20
(1 row)

11. Variables can be named current_date.

We know that current_date is a special function that returns the current date in PostgreSQL/PPAS. Up to PPAS 9.1, users were not allowed to use this reserved name for variables. In 9.2, they can, as shown below:

edb=# create table t_currentdate(current_date int);
CREATE TABLE
edb=# desc t_currentdate
    Table "enterprisedb.t_currentdate"
    Column    |  Type   | Modifiers 
--------------+---------+-----------
 current_date | integer | 

create or replace procedure proc_currentdate(current_date date) is
begin
 dbms_output.put_line(current_date);
end;

edb=# exec proc_currentdate(current_date);
05-OCT-12 00:00:00

EDB-SPL Procedure successfully completed

12. New data type STRING.

Oracle supports STRING as a data type, and this type wasn't available in PPAS 9.1, so we have included it in 9.2. STRING is an alias for the VARCHAR data type. An example is given below:

edb=# CREATE TABLE test_string(col string)
edb-# ;
CREATE TABLE
edb=# insert into test_string values('As an Example');
INSERT 0 1
edb=# select * from test_string;
      col      
---------------
 As an Example
(1 row)

edb=# \d test_string
    Table "enterprisedb.test_string"
 Column |       Type        | Modifiers 
--------+-------------------+-----------
 col    | character varying | 

13. NVARCHAR2 data type support in 9.2

PPAS 9.2 allows users to use NVARCHAR2 as a data type. In PPAS 9.2, NVARCHAR2 is mapped to the VARCHAR data type. This is primarily for Oracle users, who have frequently asked for it in PPAS.

Example is given below:

edb=# CREATE TABLE test_nvarchar(col NVARCHAR2(10))
edb-# ;
CREATE TABLE
edb=# \d test_nvarchar
     Table "enterprisedb.test_nvarchar"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 col    | character varying(10) | 

14. Multibyte and string delimiter support in EDB*Loader.

EDB*Loader in PPAS 9.2 loads data more efficiently than in 9.1. In some cases it is difficult to find a single-character delimiter that does not occur in the data itself. To load those kinds of data files, users can now define their own multi-character string delimiter. An example is given below:

Data File: data.log:
1$$abc
2$$ccc
3$$dddddddd

Control file:
LOAD DATA
INFILE '/tmp/data.log'
BADFILE '/tmp/data.bad'
truncate INTO table edb_loader
FIELDS TERMINATED BY '$$' optionally enclosed by '"'
(id , col )

-bash-4.1$ edbldr userid=enterprisedb/ control=/tmp/edb.ctl log=/tmp/edb.log
EDB*Loader: Copyright (c) 2007-2012, EnterpriseDB Corporation.

Successfully processed (3) records
-bash-4.1$ psql
psql (9.2.0.1)
Type "help" for help.

edb=# select * from edb_loader ;
 id |   col    
----+----------
  1 | abc
  2 | ccc
  3 | dddddddd
(3 rows)

Postgres Plus Advanced Server 9.1 Beta 1

PPAS 9.1 Beta 1 has been released, so I thought I would write about it and the new features that are coming.

The best part of PPAS 9.1 is that it has all the features of PostgreSQL 9.1, which I have already described in my 9.1 series. The list is given below:

1. PostgreSQL Object Manipulation Features

2. New Replication and Recovery Features

3. Queries improvement of Core PostgreSQL 9.1

4. Updateable Views using INSTEADOF Trigger in PostgreSQL 9.1

5. Utility Operation Improvements

6. Upsert/Merge using Writeable CTE

7. New Functions/Improvement

8. Object Functions Improvement in PostgreSQL 9.1

9. Client Application Improvement in PostgreSQL 9.1

10. Asynchronous/Synchronous Replication

There are some other features in PostgreSQL 9.1. List of those Improvements/features is given below:

1. Unlogged Tables

2. FOREACH IN ARRAY in PL/pgSQL

3. Pl/Perl Improvements:

 Record type support
 Pl/Perl array arguments map to Perl arguments
 Pl/Perl composite-type arrays map to Perl hashes


4. Pl/Python Improvements:

 Table support in Pl/Python functions
 New validator for Pl/Python
 SQL exception handling in Pl/Python exception blocks
 Subtransactions in Pl/Python
 New Pl/Python functions for quoting strings (plpy.quote_ident, plpy.quote_literal, and plpy.quote_nullable)
 Traceback support for Pl/Python errors
 Exception handling in Python 3

5. ECPG Improvements:

 WHERE CURRENT OF in dynamic cursors
 Double values with a precision of 15 digits

6. Libpq Improvements:

 client_encoding option in connections
 PQlibVersion() function for finding the library version
 PQping and PQpingParams for checking server status

Beyond the PostgreSQL features above, PPAS 9.1 has additional features and improvements of its own that users will want to have.

The list of those features is given below:

1. VPD (Virtual Private Database) Support

2. Partition Syntax Support

CREATE TABLE employees
 (empno numeric(4,0),
 ename varchar(10),
 job varchar(9),
 hiredate timestamp)
PARTITION BY RANGE (hiredate)
 (PARTITION ten_year_staff
 VALUES LESS THAN('01-JAN-2001'),
 PARTITION five_year_staff
 VALUES LESS THAN('01-JAN-2006'),
 PARTITION one_year_staff
 VALUES LESS THAN('01-JAN-2011'));

CREATE TABLE employees
 (empno numeric(4,0),
 ename varchar(10),
 job varchar(9),
 hiredate timestamp)
PARTITION BY RANGE (hiredate)
SUBPARTITION BY LIST (job)
(PARTITION ten_year_staff VALUES LESS THAN('01-JAN-2001')
 (SUBPARTITION mgmt VALUES ('MANAGER', 'PRESIDENT'), SUBPARTITION sales VALUES ('SALESMAN', 'CLERK'), SUBPARTITION support VALUES ('ANALYST')),
 PARTITION five_year_staff VALUES LESS THAN('01-JAN-2006')
 (SUBPARTITION mgmt VALUES ('MANAGER', 'PRESIDENT'), SUBPARTITION sales VALUES ('SALESMAN', 'CLERK'), SUBPARTITION support VALUES ('ANALYST')),
 PARTITION one_year_staff VALUES LESS THAN('01-JAN-2011')
 (SUBPARTITION mgmt VALUES ('MANAGER', 'PRESIDENT'), SUBPARTITION sales VALUES ('SALESMAN', 'CLERK'), SUBPARTITION support VALUES ('ANALYST')));

etc.

3. HEXTORAW and RAWTOHEX function support
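As a quick sketch, the two functions are inverses: HEXTORAW converts a hexadecimal string into a RAW value, and RAWTOHEX converts a RAW value back into its hexadecimal representation:

```sql
SELECT RAWTOHEX(HEXTORAW('41424344')) FROM dual;  -- round-trips to 41424344
```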

4. New Hint: ORDERED

5. REF CURSOR as an OUT parameter in procedures and functions

6. WHENEVER SQLERROR Syntax Support

7. Advanced Server Performance

Improved lock management for read and write workloads with more CPUs
Index Advisor support for composite indexes

8. New catalog views for partitioned tables

9. OCI Functions: OCIBreak() and OCIReset() for server connections in blocking and non-blocking modes.

I will discuss the above features in coming blogs. So, stay tuned!

PostgreSQL 9.1: New in Object Manipulation

In my series of blogs on new features in PostgreSQL 9.1, I thought I would write about and give examples of the object manipulation changes.

Let's see what we have among the object manipulation features in 9.1.

1. CREATE/ALTER/DROP EXTENSION
PostgreSQL 9.1 has a different way of managing contrib modules. Users can now install a contrib module using a command as given below:

psql -c "CREATE EXTENSION pgcrypto;" postgres
CREATE EXTENSION

If the user wants to keep a contrib/extension module in some other schema, the ALTER EXTENSION command can be used as given below:

postgres=# CREATE schema contrib_modules;
CREATE SCHEMA
postgres=# alter extension pgcrypto set schema contrib_modules;
ALTER EXTENSION

Similarly, ALTER EXTENSION has many options; some of them are given below:

Description: change the definition of an extension
Syntax:
ALTER EXTENSION extension_name UPDATE [ TO new_version ]
ALTER EXTENSION extension_name SET SCHEMA new_schema
ALTER EXTENSION extension_name ADD member_object
ALTER EXTENSION extension_name DROP member_object

where member_object is:

  AGGREGATE agg_name (agg_type [, ...] ) |
  CAST (source_type AS target_type) |
  COLLATION object_name |
  CONVERSION object_name |
  DOMAIN object_name |
  FOREIGN DATA WRAPPER object_name |
  FOREIGN TABLE object_name |
  FUNCTION function_name ( [ [ argmode ] [ argname ] argtype [, ...] ] ) |
  OPERATOR operator_name (left_type, right_type) |
  OPERATOR CLASS object_name USING index_method |
  OPERATOR FAMILY object_name USING index_method |
  [ PROCEDURAL ] LANGUAGE object_name |
  SCHEMA object_name |
  SEQUENCE object_name |
  SERVER object_name |
  TABLE object_name |
  TEXT SEARCH CONFIGURATION object_name |
  TEXT SEARCH DICTIONARY object_name |
  TEXT SEARCH PARSER object_name |
  TEXT SEARCH TEMPLATE object_name |
  TYPE object_name |
  VIEW object_name

The advantages of having this feature are given below:
i. It is easy to find installed contrib modules, including their new operators and new index operator classes.
The following query can be used to find the installed contrib modules:

SELECT e.extname AS "Name", e.extversion AS "Version", n.nspname AS "Schema", c.description AS "Description"
FROM pg_catalog.pg_extension e LEFT JOIN pg_catalog.pg_namespace n ON n.oid = e.extnamespace LEFT JOIN pg_catalog.pg_description c ON c.objoid = e.oid AND c.classoid = 'pg_catalog.pg_extension'::pg_catalog.regclass
ORDER BY 1;

The psql shortcut for this is given below:

postgres=# \dx
                                       List of installed extensions
   Name    | Version |     Schema      |                            Description
-----------+---------+-----------------+-------------------------------------------------------------------
 adminpack | 1.0     | pg_catalog      | administrative functions for PostgreSQL
 file_fdw  | 1.0     | public          | foreign-data wrapper for flat file access
 pg_trgm   | 1.0     | public          | text similarity measurement and index searching based on trigrams
 pgcrypto  | 1.0     | contrib_modules | cryptographic functions
 plperl    | 1.0     | pg_catalog      | PL/Perl procedural language
 plperlu   | 1.0     | pg_catalog      | PL/PerlU untrusted procedural language
 plpgsql   | 1.0     | pg_catalog      | PL/pgSQL procedural language
 xml2      | 1.0     | public          | XPath querying and XSLT
(8 rows)

ii. pg_dump knows about all the objects of an extension and will include just a CREATE EXTENSION command in dumps, instead of inflating the dump file with the DDL of every object of the module. This is helpful when migrating to a new version.
iii. Individual objects of an extension cannot be dropped; the whole extension must be dropped with DROP EXTENSION.

For more detail on extensions, I recommend the following link:

http://www.postgresql.org/docs/9.1/static/extend-extensions.html

2. Foreign Tables.
This is a new feature introduced in 9.1. Foreign tables give users the ability to access data that lives outside the database. A simple example: if you have data in a file on the server and want to query that data inside PostgreSQL, you can create a foreign table that exposes the file's contents.
Multiple extensions are available for this feature. For accessing data from a file, the file_fdw extension ships with the PostgreSQL installation. Other extensions are available for accessing data from different sources such as Oracle, MySQL, CouchDB, Redis, Twitter, etc. Users can find those extensions and details at the following link:

http://wiki.postgresql.org/wiki/Foreign_data_wrappers

Below is an example of using file_fdw for accessing data from plain file:
i. Create file_fdw extension as given below:

psql -c "CREATE EXTENSION file_fdw" -d postgres
CREATE EXTENSION

ii. Create a foreign data wrapper as given below:

psql -c "CREATE FOREIGN DATA WRAPPER file_data_wrapper HANDLER file_fdw_handler;" -d postgres
CREATE FOREIGN DATA WRAPPER

iii. Create a server using the foreign data wrapper, as given below:

psql -c " CREATE SERVER file FOREIGN DATA WRAPPER file_fdw;" -d postgres
CREATE SERVER

iv. Now, let's create a foreign table to access the file. The content of my_data.file is given below:

File: my_data.file
1,3,'First Row'
4,5,'Second Row'
7,8,'Third Row'
9,10,'Fourth Row'
11,12,'Fifth Row'

psql -c "CREATE FOREIGN TABLE my_data(field1 numeric, field2 numeric, field_detail text) server file options (filename '/Users/postgres/my_data.file', format 'csv', delimiter ',');" -d postgres

Now we have a foreign table that we can use to access the data of my_data.file inside the database, as given below:

postgress-MacBook-Pro:~ postgres$ psql -c "select * from my_data;" -d postgres
 field1 | field2 | field_detail
--------+--------+--------------
      1 |      3 | 'First Row'
      4 |      5 | 'Second Row'
      7 |      8 | 'Third Row'
      9 |     10 | 'Fourth Row'
     11 |     12 | 'Fifth Row'
(5 rows)

3. Improvement in ALTER TYPE for ENUM.

Up to 9.0, altering an ENUM required dropping the dependent columns, modifying the ENUM, and then recreating those columns, which was a big obstacle to modifying ENUMs.
Now, in 9.1, users can modify an ENUM without dropping dependent columns, as given below:

postgres=# CREATE TYPE allowed_color AS ENUM ('RED','WHITE');
CREATE TYPE
postgres=# CREATE TABLE possible_color(color allowed_color);
CREATE TABLE
postgres=# INSERT INTO possible_color VALUES('RED');
INSERT 0 1
postgres=# INSERT INTO possible_color VALUES('WHITE');
INSERT 0 1
postgres=# INSERT INTO possible_color VALUES('BLACK');
ERROR:  invalid input value for enum allowed_color: "BLACK"
LINE 1: INSERT INTO possible_color VALUES('BLACK');
                                          ^
postgres=# ALTER TYPE allowed_color ADD VALUE 'BLACK' AFTER 'WHITE';
ALTER TYPE
postgres=# INSERT INTO possible_color VALUES('BLACK');
INSERT 0 1
postgres=# SELECT * FROM possible_color ;
 color
-------
 RED
 WHITE
 BLACK
(3 rows)

4. ALTER TYPE ADD/DROP/ALTER/RENAME ATTRIBUTE for composite types:

In 9.1, users can modify composite types using the ALTER TYPE command. An example is given below:

i. Create a composite type:

postgres=# CREATE TYPE test_alter_type AS (col1 int);
CREATE TYPE

ii. Create a table based on composite type created above:

postgres=# CREATE TABLE test_alter_type_table (initial_col test_alter_type);
CREATE TABLE
postgres=# \d test_alter_type_table
   Table "public.test_alter_type_table"
   Column    |      Type       | Modifiers
-------------+-----------------+-----------
 initial_col | test_alter_type |

postgres=# INSERT INTO test_alter_type_table VALUES(ROW(1));
INSERT 0 1
postgres=# INSERT INTO test_alter_type_table VALUES(ROW(2));
INSERT 0 1
postgres=# SELECT * FROM test_alter_type_table ;
 initial_col
-------------
 (1)
 (2)
(2 rows)

iii. ALTER the composite type to add a new attribute:

postgres=# ALTER TYPE test_alter_type ADD ATTRIBUTE col2 int;
ALTER TYPE
postgres=# SELECT * FROM test_alter_type_table ;
 initial_col
-------------
 (1,)
 (2,)
(2 rows)

iv. Create a constructor function that casts a ROW value to the composite type, so rows can be compared against it:

CREATE FUNCTION test_alter_type_constructor(test_alter_type) RETURNS test_alter_type
AS
$$ SELECT $1::test_alter_type;
$$ language sql;

v. Update the values of the added attribute of the composite type:

postgres=# UPDATE test_alter_type_table SET initial_col=ROW(1,2) WHERE initial_col=test_alter_type_constructor(ROW(1,NULL));
UPDATE 1
postgres=# UPDATE test_alter_type_table SET initial_col=ROW(2,2) WHERE initial_col=test_alter_type_constructor(ROW(2,NULL));
UPDATE 1
postgres=# SELECT * FROM test_alter_type_table;
 initial_col
-------------
 (1,2)
 (2,2)
(2 rows)

vi. RENAME the attribute and the composite type, as given below:

postgres=# ALTER TYPE test_alter_type RENAME ATTRIBUTE col2 to col3;
ALTER TYPE
postgres=# \d test_alter_type
Composite type "public.test_alter_type"
 Column |  Type   | Modifiers
--------+---------+-----------
 col1   | integer |
 col3   | integer |

postgres=# ALTER TYPE test_alter_type RENAME TO test_alter_type3;
ALTER TYPE
postgres=# \d test_alter_type3
Composite type "public.test_alter_type3"
 Column |  Type   | Modifiers
--------+---------+-----------
 col1   | integer |
 col3   | integer |

postgres=# \d test_alter_type
test_alter_type3      test_alter_type_table
postgres=# \d test_alter_type_table
    Table "public.test_alter_type_table"
   Column    |       Type       | Modifiers
-------------+------------------+-----------
 initial_col | test_alter_type3 |

For more detail on the supported options of ALTER TYPE for composite types, see the following link:

http://www.postgresql.org/docs/9.1/static/sql-altertype.html

5. ALTER TABLE {OF | NOT OF} support for typed tables.
This is new syntax added in PostgreSQL 9.1. Using it, a user can convert a standalone table into a typed table, and convert a typed table back into a standalone one. An example is given below:

postgres=# CREATE TABLE table_type_test(id NUMERIC, test TEXT);
CREATE TABLE

postgres=# CREATE TYPE test_type AS (id NUMERIC, test TEXT);
CREATE TYPE
postgres=# ALTER TABLE table_type_test OF test_type;
ALTER TABLE
postgres=# \d table_type_test
Table "public.table_type_test"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | numeric |
 test   | text    |
Typed table of type: test_type
postgres=# ALTER TABLE table_type_test NOT OF;
ALTER TABLE
postgres=# \d table_type_test
Table "public.table_type_test"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | numeric |
 test   | text    |

6. ALTER ... SET SCHEMA support for more object types

In PostgreSQL 9.1, ALTER ... SET SCHEMA is supported for the following additional objects:

  a. conversions,
  b. operators,
  c. operator classes,
  d. operator families,
  e. text search configurations,
  f. text search dictionaries,
  g. text search parsers
  h. text search templates.

A simple example is given below:

postgres=# ALTER OPERATOR %(text,text) SET SCHEMA contrib_modules;
ALTER OPERATOR
postgres=# \do contrib_modules.%
                                  List of operators
     Schema      | Name | Left arg type | Right arg type | Result type | Description
-----------------+------+---------------+----------------+-------------+-------------
 contrib_modules | %    | text          | text           | boolean     |
(1 row)
postgres=# ALTER OPERATOR contrib_modules.%(text,text) SET SCHEMA public;
ALTER OPERATOR

7. Modify a UNIQUE/PRIMARY KEY using a unique index.

This is a very interesting feature added in PostgreSQL 9.1.
Up to 9.0, users had to use the REINDEX command to rebuild the index behind a primary/unique key, or ALTER TABLE ADD CONSTRAINT to add a primary/unique key, both of which hold a table-level lock for a long time (depending on the amount of data in the table).
Now, in 9.1, users can run CREATE UNIQUE INDEX CONCURRENTLY to build a unique index and then attach that index to a primary/unique key constraint, without holding a table-level lock for a long time.

An example is given below.

To add a primary key:

postgres=# CREATE UNIQUE INDEX CONCURRENTLY emp_pk_idx ON emp_new(empno);
CREATE INDEX
postgres=# ALTER TABLE emp_new ADD PRIMARY KEY USING INDEX emp_pk_idx;
ALTER TABLE
postgres=# \d emp_new;
            Table "public.emp_new"
  Column  |         Type          | Modifiers
----------+-----------------------+-----------
 empno    | numeric(4,0)          | not null
 ename    | character varying(10) |
 job      | character varying(9)  |
 mgr      | numeric(4,0)          |
 hiredate | date                  |
 sal      | numeric(7,2)          |
 comm     | numeric(7,2)          |
 deptno   | numeric(2,0)          |
Indexes:
    "emp_pk_idx" PRIMARY KEY, btree (empno)

Now, suppose the user wants to rebuild the primary key index; this can be done as given below:

  CREATE UNIQUE INDEX CONCURRENTLY emp_pk_idx_new ON emp_new(empno);
  BEGIN;
  ALTER TABLE emp_new DROP CONSTRAINT emp_pk_idx;
  ALTER TABLE emp_new ADD PRIMARY KEY USING INDEX emp_pk_idx_new;
  END;

The lock period above is much shorter than rebuilding the index emp_pk_idx with REINDEX.

I presented a similar idea for PostgreSQL 8.4 in an old post, linked below:

https://vibhorkumar.wordpress.com/2010/12/16/rebuilding-pkey-and-indexes-without-locking-table-in-pg-8-4/

8. Foreign keys without validation.

9.1 also supports adding a foreign key to a table without validating the existing data: the constraint is checked for new or modified rows, but existing rows may violate it.
Note: Converting a NOT VALID constraint to VALID may take a long time, depending on the amount of data to be validated.
An example is given below:

CREATE TABLE emp_fk_key_table(id NUMERIC);
CREATE TABLE

Let's insert a few rows into the table:

postgres=# INSERT INTO emp_fk_key_table VALUES(1);
INSERT 0 1
postgres=# INSERT INTO emp_fk_key_table VALUES(2);
INSERT 0 1
postgres=# INSERT INTO emp_fk_key_table VALUES(3);
INSERT 0 1

Now, create a foreign key with the NOT VALID option:

ALTER TABLE emp_fk_key_table ADD CONSTRAINT fk_constraint FOREIGN KEY(id) REFERENCES emp_new(empno) NOT VALID;

The constraint above will be checked for new data, as given below:

postgres=# INSERT INTO emp_fk_key_table VALUES(5);
ERROR:  insert or update on table "emp_fk_key_table" violates foreign key constraint "fk_constraint"
DETAIL:  Key (id)=(5) is not present in table "emp_new".

The constraint is not checked against old data. However, if the user later wants to validate the existing rows, he/she can use the VALIDATE CONSTRAINT option as given below:

ALTER TABLE emp_fk_key_table VALIDATE CONSTRAINT fk_constraint;

9. No rewrite for appropriate data type changes

Up to 9.0, any data type change via ALTER TABLE ... SET DATA TYPE rewrote the whole table. Now, in 9.1, a binary-compatible type change (such as varchar to text) does not rewrite the table, which makes ALTER TABLE ... SET DATA TYPE execute much faster.
An example is given below:
Example is given below:

In 9.0:

postgres=# CREATE TABLE test_rewrite(col VARCHAR(10));
CREATE TABLE
postgres=#
postgres=# INSERT INTO test_rewrite SELECT * FROM generate_series(1,100);
INSERT 0 100
postgres=# SELECT relfilenode, relname FROM pg_class WHERE relname='test_rewrite';
 relfilenode |   relname
-------------+--------------
       31318 | test_rewrite
(1 row)
postgres=# ALTER TABLE test_rewrite ALTER COLUMN col SET DATA TYPE TEXT;
ALTER TABLE
postgres=# SELECT relfilenode, relname FROM pg_class WHERE relname='test_rewrite';
 relfilenode |   relname
-------------+--------------
       31324 | test_rewrite     -- rewrite resulted in a new relfilenode
(1 row)

In PostgreSQL 9.1

postgres=# CREATE TABLE test_rewrite(col VARCHAR(10));
CREATE TABLE
postgres=# INSERT INTO test_rewrite SELECT * FROM generate_series(1,100);
INSERT 0 100
postgres=# SELECT relfilenode, relname FROM pg_class WHERE relname='test_rewrite';
 relfilenode |   relname
-------------+--------------
       35164 | test_rewrite
(1 row)

postgres=# ALTER TABLE test_rewrite ALTER COLUMN col SET DATA TYPE TEXT;
ALTER TABLE
postgres=# SELECT relfilenode, relname FROM pg_class WHERE relname='test_rewrite';
 relfilenode |   relname
-------------+--------------
       35164 | test_rewrite
(1 row)

10. IF NOT EXISTS in CREATE TABLE
This is a new option added to the CREATE TABLE command. Users can now use the IF NOT EXISTS clause, which makes CREATE TABLE issue a notice and skip instead of throwing an error if the table already exists. An example is given below:

postgres=# CREATE TABLE IF NOT EXISTS test_rewrite(col INTEGER);
NOTICE:  relation "test_rewrite" already exists, skipping
CREATE TABLE

This is very useful from a programming perspective, where previously the developer had to check every time whether the table exists and accordingly create it or skip it. The following is an example of such a scenario.
In 9.0,

DO $$
 DECLARE
   verify boolean;
 BEGIN
   SELECT CASE WHEN count(*) = 1 THEN true ELSE false END INTO verify FROM pg_tables WHERE schemaname='enterprisedb' AND tablename='test_rewrite';
    IF verify THEN
       RAISE NOTICE 'test_rewrite TABLE EXISTS';
    ELSE
       CREATE TABLE test_rewrite(col NUMERIC(20));
    END IF;
  END
$$ language plpgsql;

NOTICE:  test_rewrite TABLE EXISTS
DO

In 9.1, the user doesn't have to write code like the above; a simple CREATE TABLE IF NOT EXISTS will do.

New Replication and Recovery Features in PostgreSQL 9.1

1. Streaming Replication and Continuous Archiving Features.

a. Synchronous Replication
PostgreSQL 9.1 introduced synchronous replication. In synchronous replication, every committed transaction is transferred to the standby synchronously: when the primary runs in synchronous replication mode, each commit waits until the transaction has been transferred to the slave/replica server. This minimizes the loss of committed transactions and gives a high degree of durability.
For setting up synchronous replication you can look at my blog:

https://vibhorkumar.wordpress.com/2011/10/20/asynchronoussynchronous-streaming-replication-in-postgresql-9-1/ 

2. New parameter replication_timeout:
This is a new parameter added in 9.1. With it, the primary terminates replication connections that are inactive for longer than replication_timeout (measured in milliseconds). This is useful for the primary server to detect a standby crash or a network outage.
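For illustration, a minimal postgresql.conf fragment on the primary might look like this (the value shown is an example, not a recommendation):

```
# postgresql.conf on the primary (illustrative value)
replication_timeout = 60s   # terminate replication connections inactive longer than this
```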

3. New role/permission for Streaming Replication
REPLICATION is a new role attribute added in PostgreSQL 9.1 for streaming replication. If a user has the REPLICATION attribute, that user can initiate streaming replication. Some examples are given below:
a. CREATE USER with REPLICATION Privilege:

     CREATE USER rep_user WITH REPLICATION PASSWORD 'password';

b. Assign REPLICATION Privilege to a user:

      ALTER USER username WITH REPLICATION;

The REPLICATION privilege can also be used for online backups of PostgreSQL.

4. New tool for Base Backup: pg_basebackup
pg_basebackup is a new backup tool introduced in PostgreSQL 9.1. Using it, a user can take a base backup of a running PostgreSQL database cluster.
To use pg_basebackup, make the following changes in the PostgreSQL cluster:
a. Make the following change in the pg_hba.conf file:

      host    replication     postgres        [Ipv4 address of client]/32  trust

b. Make the following changes in the postgresql.conf file of the PostgreSQL cluster:

    archive_command = 'cp -i %p /Users/postgres/archive/%f'
    archive_mode = on # Require Restart
    max_wal_senders = 3 # Maximum 'wal_senders'
    wal_keep_segments = 8 # How many WAL segments (=files) should be kept on the server

c. After making the above changes, restart the PostgreSQL cluster and use pg_basebackup to take the backup as given below:

     pg_basebackup -D /Users/vibhor/testbackup -v -Fp -l Testbackup -h 127.0.0.1 -U postgres

Note: A user running pg_basebackup must have either the SUPERUSER or REPLICATION privilege.

5. Functions to control streaming replication replay:
This version of PostgreSQL adds new streaming replication control functions, described below:
a. pg_xlog_replay_pause():
Using this function, a user can pause recovery on the standby and take a consistent backup of the standby data directory. An example is given below:

postgres=# select  pg_xlog_replay_pause();
 pg_xlog_replay_pause
----------------------

(1 row)

b. pg_is_xlog_replay_paused():
Using this function, a user can check whether replay on the standby is paused. An example is given below:

 postgres=# select  pg_is_xlog_replay_paused();
 pg_is_xlog_replay_paused
--------------------------
 t
(1 row)

c. pg_xlog_replay_resume():
Using this function, a user can resume replay on the standby if recovery/replay is paused. An example is given below:

postgres=# select pg_xlog_replay_resume();
 pg_xlog_replay_resume
-----------------------

(1 row)

postgres=# select  pg_is_xlog_replay_paused();
 pg_is_xlog_replay_paused
--------------------------
 f
(1 row)

6. New in Replication Monitoring:
a. View: pg_stat_replication:
This view displays information about WAL sender processes, with one row per WAL sender process. It shows the process ID, user OID, user name, application name, client host name (if available) and port number, the time at which the server process began execution, and the current WAL sender state and transaction log locations. An example is given below:

postgres=# select * from pg_stat_replication;
 procpid | usesysid | usename  | application_name | client_addr | client_hostname | client_port |          backend_start           |   state   | sent_location | write_location | flush_location | replay_location | sync_priority | sync_state
---------+----------+----------+------------------+-------------+-----------------+-------------+----------------------------------+-----------+---------------+----------------+----------------+-----------------+---------------+------------
    2319 |       10 | postgres | sync_replication | ::1         |                 |       50224 | 2011-11-10 21:39:45.424503+05:30 | streaming | 0/33002798    | 0/33002798     | 0/33002798     | 0/33002798      |             0 | async
(1 row)

Note: The pg_stat_replication view gives information on the master. Querying pg_stat_replication on the standby returns zero rows.

b. Function: pg_last_xact_replay_timestamp():
This function gives the timestamp of the last transaction replayed during recovery, i.e. the time at which the commit or abort WAL record for that transaction was generated on the primary. If no transactions have been replayed during recovery, it returns NULL. Otherwise, while recovery is still in progress the value increases monotonically; once recovery has completed, it remains static at the value of the last transaction applied during that recovery. When the server has been started normally without recovery, the function returns NULL. An example is given below:

postgres=# select * from pg_last_xact_replay_timestamp();
  pg_last_xact_replay_timestamp
----------------------------------
 2011-11-10 22:17:26.431321+05:30
(1 row)

7. New in Hot Standby:
a. New parameter hot_standby_feedback:
This is a new postgresql.conf parameter for the standby server. With it, users can avoid query cancellations on the standby: the standby sends feedback to the primary (at most once per wal_receiver_status_interval) about the queries currently executing on the standby, and the primary postpones cleanup of old row versions those queries still need.
Note: Setting this parameter may result in bloat on the primary.
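As a sketch, enabling it is a one-line change on the standby (shown here for illustration):

```
# postgresql.conf on the standby
hot_standby_feedback = on   # report running standby queries to the primary to delay cleanup
```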

b. New column (conflicts) in pg_stat_database
In PostgreSQL 9.1, a new column, conflicts, has been added to pg_stat_database. It gives the total number of queries canceled due to conflicts with recovery on the standby. An example is given below:

postgres=# select datname, conflicts from pg_stat_database;
  datname  | conflicts
-----------+-----------
 template1 |         0
 template0 |         0
 postgres  |         0
 korean    |         0
(4 rows)

c. New view: pg_stat_database_conflicts
In 9.1, the pg_stat_database_conflicts view has been added for monitoring queries canceled due to dropped tablespaces, lock timeouts, old snapshots, pinned buffers, or deadlocks.
The view contains one row per database, giving the database OID, the database name, and the number of queries canceled in that database for each of those reasons. An example is given below:

postgres=# select * from pg_stat_database_conflicts;
 datid |  datname  | confl_tablespace | confl_lock | confl_snapshot | confl_bufferpin | confl_deadlock
-------+-----------+------------------+------------+----------------+-----------------+----------------
     1 | template1 |                0 |          0 |              0 |               0 |              0
 12172 | template0 |                0 |          0 |              0 |               0 |              0
 12180 | postgres  |                0 |          0 |              0 |               0 |              0
 25354 | korean    |                0 |          0 |              0 |               0 |              0
(4 rows)

Note: Queries against this view must be run on the standby, since that is where conflicts occur.

d. Increased maximum values for max_standby_archive_delay and max_standby_streaming_delay.
In PostgreSQL 9.0, the maximum value for max_standby_archive_delay and max_standby_streaming_delay was 35 minutes. In PostgreSQL 9.1, users can specify much larger values. These parameters determine the maximum total time allowed for applying WAL data before conflicting queries on the standby are canceled.
For more detail about these parameters, please refer to the following page of the documentation:

   http://www.postgresql.org/docs/9.1/static/runtime-config-replication.html#GUC-MAX-STANDBY-ARCHIVE-DELAY
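For illustration, a standby postgresql.conf fragment might look like this (values are examples only):

```
# postgresql.conf on the standby (illustrative values)
max_standby_archive_delay = 60min    # max total delay when applying WAL from the archive
max_standby_streaming_delay = 60min  # max total delay when applying streamed WAL
```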

e. New Error Code: ERRCODE_T_R_DATABASE_DROPPED

  Error Code    Condition Name
      57P04	database_dropped

Up to 9.0, PostgreSQL used ERRCODE_ADMIN_SHUTDOWN for a recovery conflict on the standby caused by a database being dropped on the master. Now, in 9.1, ERRCODE_T_R_DATABASE_DROPPED is used for that situation. This change was made so that connection poolers such as pgpool can correctly handle the case where the database no longer exists on the master.

8. New in Recovery Control:

a. pg_create_restore_point(text) function and recovery_target_name parameter:

PostgreSQL 9.1 comes with the function pg_create_restore_point(text). Using it, an admin/DBA can create a named recovery/restore point; the function returns the transaction log location up to which the hot backup can later be restored. To support named restore points, PostgreSQL 9.1 adds the recovery_target_name parameter for the recovery.conf file.
Let's see these two in action.
i. Set the following parameters in the postgresql.conf file of the cluster:

    archive_mode=on
    archive_command='cp -i %p /Users/postgres/archive/%f'

ii. Now, let's take a base backup using pg_basebackup as given below:

edbs-MacBook-Pro:standby postgres$ pg_basebackup -D /Users/postgres/test_backup -v -Fp -l "Standby Backup" -h localhost
NOTICE:  pg_stop_backup complete, all required WAL segments have been archived
pg_basebackup: base backup completed

iii. Now let's connect to the database and create a named restore point using pg_create_restore_point:

postgres=# select pg_create_restore_point('myrecoverylocation');
 pg_create_restore_point
-------------------------
 0/3B000118
(1 row)

To verify the behavior of the recovery_target_name parameter, let's create a table and perform some activity:

postgres=# create table test_id as select id from generate_series(1,1000000) as t(id);
SELECT 1000000
postgres=#

iv. Now, create a recovery.conf file in the backup directory as given below:

restore_command = 'cp -i /Users/postgres/archive/%f %p'		# e.g. 'cp /mnt/server/archivedir/%f %p'
recovery_target_name = myrecoverylocation

and change the port number in the restored postgresql.conf as given below:

vi /Users/postgres/test_backup/postgresql.conf
port=5433

v. Now, start the restored PostgreSQL cluster as given below:

pg_ctl -D /Users/postgres/test_backup start

After executing the above command, PostgreSQL will perform recovery up to the recovery_target_name location. To verify this, the user can check the PostgreSQL logfile, which will display messages like the following:

2011-11-14 17:15:18 IST LOG:  database system was interrupted; last known up at 2011-11-14 17:10:51 IST
2011-11-14 17:15:18 IST LOG:  creating missing WAL directory "pg_xlog/archive_status"
2011-11-14 17:15:18 IST LOG:  starting point-in-time recovery to "myrecoverylocation"

After connecting to the restored database, the user can see that the test_id table, which was created after pg_create_restore_point('myrecoverylocation'), is not part of the restored database, as given below:

--Restored Database:
edbs-MacBook-Pro:test_backup postgres$ psql -p 5433
Password:
psql (9.1.1)
Type "help" for help.

postgres=# \dt
             List of relations
 Schema |     Name      | Type  |  Owner
--------+---------------+-------+----------
 public | dept          | table | postgres
 public | emp           | table | postgres
 public | employees     | table | postgres
 public | entities      | table | postgres
 public | jobhist       | table | postgres
 public | like_op_table | table | postgres
 public | person_job    | table | postgres
 public | test          | table | postgres
 public | test_default  | table | postgres
(9 rows)

-- Primary Database:
postgres=# \dt
             List of relations
 Schema |     Name      | Type  |  Owner
--------+---------------+-------+----------
 public | dept          | table | postgres
 public | emp           | table | postgres
 public | employees     | table | postgres
 public | entities      | table | postgres
 public | jobhist       | table | postgres
 public | like_op_table | table | postgres
 public | person_job    | table | postgres
 public | test          | table | postgres
 public | test_default  | table | postgres
 public | test_id       | table | postgres
(10 rows)

b. Standby recovery switches to a new timeline automatically

Up to 9.0, when recovery_target_timeline='latest' was specified, PostgreSQL scanned for the latest timeline at the beginning of recovery and picked that as the target. If new timelines appeared during recovery, PostgreSQL stuck with the target chosen at the beginning and ignored them; the standby had to be restarted to notice a new timeline. Now, in 9.1, standby servers periodically rescan the archive directory for new timelines and switch automatically when a new timeline appears during recovery.

c. New parameter: restart_after_crash = on/off

Up to 9.0, PostgreSQL always restarted automatically after a backend crash, with no way to control this behavior (a problem for clusterware solutions, where the clusterware itself tries to restart PostgreSQL). This parameter provides that control:
If restart_after_crash is on, PostgreSQL restarts automatically after a backend crash.
If it is off, PostgreSQL does not restart after a backend crash and shuts down instead.
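A sketch of the relevant postgresql.conf line; for an instance managed by external clusterware one would typically turn it off:

```
# postgresql.conf
restart_after_crash = off   # let external clusterware handle restarts after a backend crash
```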

Queries Improvement in PostgreSQL 9.1

1. True serializable isolation level:

Before version 9.1, PostgreSQL did not support a full serializable isolation level: a request for serializable transaction isolation actually provided snapshot isolation. Snapshot isolation has well-known anomalies that can allow inconsistent views of the data during concurrent transactions, although these anomalies occur only when certain patterns of read-write dependencies exist within a set of concurrent transactions. Where those patterns exist, the anomalies can be prevented by introducing conflicts through explicitly programmed locks or otherwise-unnecessary writes to the database. Snapshot isolation has been popular because its performance is better than serializable isolation, and the integrity guarantees it does provide allow anomalies to be avoided or managed with reasonable effort in many environments. In 9.1, SERIALIZABLE now provides true serializability, implemented via Serializable Snapshot Isolation (SSI).

More details are available at the following link:

http://wiki.postgresql.org/wiki/SSI
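As a sketch, the syntax is unchanged from earlier releases; only the behavior is stronger. The table and column names below are made up for illustration:

```sql
-- Each concurrent session runs its transaction at SERIALIZABLE.
-- Under SSI, if two such transactions form a dangerous read/write cycle,
-- one of them is rolled back with SQLSTATE 40001 (serialization_failure)
-- and the application is expected to retry it.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT sum(val) FROM mytab WHERE class = 1;    -- read one set of rows
INSERT INTO mytab(class, val) VALUES (2, 100); -- write a related set
COMMIT;                                        -- may fail with 40001; retry on failure
```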

2. INSERT/UPDATE/DELETE in the WITH clause.

Now, in 9.1, users can include INSERT/UPDATE/DELETE statements in a WITH clause and pass the resulting rows to the containing query. Some examples follow:

-- INSERT ... RETURNING
WITH t AS (
    INSERT INTO y
    VALUES
        (11),
        (12),
        (13),
        (14),
        (15),
        (16),
        (17),
        (18),
        (19),
        (20)
    RETURNING *
)
SELECT * FROM t;

-- UPDATE ... RETURNING
WITH t AS (
    UPDATE y
    SET a=a+1
    RETURNING *
)
SELECT * FROM t;

-- DELETE ... RETURNING
WITH t AS (
    DELETE FROM y
    WHERE a <= 10
    RETURNING *
)
SELECT * FROM t;

Users can also attach a WITH clause to INSERT/UPDATE/DELETE statements, as given below:

-- data-modifying WITH in a modifying statement
WITH t AS (
    DELETE FROM y
    WHERE a <= 10
    RETURNING *
)
INSERT INTO y SELECT -a FROM t RETURNING *;

3. GROUP BY enhancement for missing columns

Previous versions of PostgreSQL threw an error if a column in the target list was missing from the GROUP BY clause, even when the query grouped by a primary key that functionally determines the other columns, as given below:

select version();
                                                     version
------------------------------------------------------------------------------------------------------------------
PostgreSQL 9.0.5 on i686-pc-linux-gnu, compiled by GCC gcc-4.5.real (Ubuntu/Linaro 4.5.2-8ubuntu4) 4.5.2, 32-bit

SELECT count(*), entity_name, entity_address as address
FROM entities JOIN employees using (entity_name)
GROUP BY entity_name;
ERROR:  column "entities.entity_address" must appear in the GROUP BY clause or be used in an aggregate function

Now, in PostgreSQL 9.1, the GROUP BY clause can infer the missing, functionally dependent columns, as given below:

SELECT count(*), entity_name, entity_address as address
FROM entities JOIN employees using (entity_name)
GROUP BY entity_name;
 count | entity_name | address
-------+-------------+----------
     2 | HR          | address1
     2 | SALES       | address2
(2 rows)

4. Per-column collation.

In previous versions of PostgreSQL, collation (the sort ordering of text strings) was supported only at the database level. Now, in 9.1, users can set the collation per column, index, or expression via the COLLATE clause. Some examples are given below:

postgres=#  CREATE TABLE french_messages (message TEXT COLLATE "fr_FR");
CREATE TABLE
postgres=# select * from french_messages order by 1;
 message
---------
 Élève
 élever
 élevé
 élève
(4 rows)

postgres=# SELECT a, b, a < b as lt FROM
postgres-#   (VALUES ('a', 'B'), ('A', 'b' COLLATE "C")) v(a,b);
 a | b | lt
---+---+----
 a | B | f
 A | b | t
(2 rows)

5. Planner improvements
a. Inheritance tables.
In 9.1, several planner improvements have been made, such as the MIN/MAX optimization for inheritance tables. This boosts query performance for users who use partitioning.
Let's look at these improvements:

i) Create partition table as given below:

      CREATE TABLE main(id integer, val text);
      CREATE TABLE parition_1(CHECK(id >=1 and id <=20)) INHERITS(main);
      CREATE TABLE parition_2(CHECK(id >=21 and id <=40)) INHERITS(main);
      CREATE TABLE parition_3(CHECK(id >=41 and id <=60)) INHERITS(main);
      CREATE TABLE parition_other(CHECK(id >=61)) INHERITS(main);
      CREATE INDEX parition_1_idx on parition_1(id);
      CREATE INDEX parition_2_idx on parition_2(id);
      CREATE INDEX parition_3_idx on parition_3(id);
      CREATE INDEX parition_other_idx on parition_other(id);

ii) Create trigger as given below:

       CREATE OR REPLACE FUNCTION main_insert_direct( )
       RETURNS  trigger
       LANGUAGE plpgsql
       AS $function$
            BEGIN
                   IF NEW.id >=1 AND NEW.id <=20 THEN
                       INSERT INTO parition_1 values(NEW.*);
                   ELSIF NEW.id >=21 AND NEW.ID <=40 THEN
			INSERT INTO parition_2 values(NEW.*);
                   ELSIF NEW.id >=41 AND NEW.ID <=60 THEN
			INSERT INTO parition_3 values(NEW.*);
                   ELSE
                         INSERT INTO parition_other VALUES(NEW.*);
                   END IF;
                   RETURN NULL;
           END;
       $function$;

       CREATE TRIGGER insert_on_main
         BEFORE INSERT ON main
         FOR EACH ROW EXECUTE PROCEDURE main_insert_direct();

iii). INSERT some values as given below:

        INSERT INTO main SELECT * FROM generate_series(1,50000);

Now let's look at the plan. In a previous version of PostgreSQL (9.0):

postgres=# explain analyze select id from main order by id desc limit 10;

                                                                  QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1148.68..1148.70 rows=10 width=4) (actual time=147.799..147.812 rows=10 loops=1)
   ->  Sort  (cost=1148.68..1220.94 rows=28905 width=4) (actual time=147.796..147.800 rows=10 loops=1)
         Sort Key: public.main.id
         Sort Method:  top-N heapsort  Memory: 17kB
         ->  Result  (cost=0.00..524.05 rows=28905 width=4) (actual time=0.018..117.908 rows=50000 loops=1)
               ->  Append  (cost=0.00..524.05 rows=28905 width=4) (actual time=0.017..74.136 rows=50000 loops=1)
                     ->  Seq Scan on main  (cost=0.00..22.30 rows=1230 width=4) (actual time=0.001..0.001 rows=0 loops=1)
                     ->  Seq Scan on parition_1 main  (cost=0.00..22.30 rows=1230 width=4) (actual time=0.014..0.073 rows=100 loops=1)
                     ->  Seq Scan on parition_2 main  (cost=0.00..22.30 rows=1230 width=4) (actual time=0.011..0.070 rows=100 loops=1)
                     ->  Seq Scan on parition_3 main  (cost=0.00..22.30 rows=1230 width=4) (actual time=0.009..0.057 rows=100 loops=1)
                     ->  Seq Scan on parition_other main  (cost=0.00..434.85 rows=23985 width=4) (actual time=0.021..32.197 rows=49700 loops=1)
 Total runtime: 147.921 ms
(12 rows)

However, in 9.1:

    postgres=# explain analyze select id from main order by id desc limit 10;
                                                                                 QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1.07..6.44 rows=10 width=4) (actual time=6.828..6.849 rows=10 loops=1)
   ->  Result  (cost=1.07..10790.41 rows=20072 width=4) (actual time=6.825..6.844 rows=10 loops=1)
         ->  Merge Append  (cost=1.07..10790.41 rows=20072 width=4) (actual time=6.824..6.840 rows=10 loops=1)
               Sort Key: public.main.id
               ->  Sort  (cost=1.01..1.01 rows=1 width=4) (actual time=0.026..0.026 rows=0 loops=1)
                     Sort Key: public.main.id
                     Sort Method: quicksort  Memory: 25kB
                     ->  Seq Scan on main  (cost=0.00..1.00 rows=1 width=4) (actual time=0.020..0.020 rows=0 loops=1)
               ->  Index Scan Backward using parition_1_idx on parition_1 main  (cost=0.00..13.17 rows=61 width=4) (actual time=0.028..0.028 rows=1 loops=1)
               ->  Index Scan Backward using parition_2_idx on parition_2 main  (cost=0.00..13.15 rows=60 width=4) (actual time=0.024..0.024 rows=1 loops=1)
               ->  Index Scan Backward using parition_3_idx on parition_3 main  (cost=0.00..13.15 rows=60 width=4) (actual time=0.024..0.024 rows=1 loops=1)
               ->  Index Scan Backward using parition_other_idx on parition_other main  (cost=0.00..10233.63 rows=19890 width=4) (actual time=6.716..6.727 rows=10 loops=1)
 Total runtime: 6.932 ms
(13 rows)

Looking at the above plans: in PostgreSQL 9.0, the executor fetches all rows from all child tables and sorts them before returning 10 records. In PostgreSQL 9.1, the executor reads each child table in sorted order, using indexes where available, and merges the already-sorted streams to return the top 10 records. Comparing the total runtimes, 9.1 is clearly faster.
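The MIN/MAX optimization mentioned above applies to the same layout. A sketch against the partitioned table created earlier (the exact plan shape may vary with version and statistics):

```sql
-- In 9.1 the planner can answer MIN/MAX over an inheritance tree
-- with one backward index scan per child partition, instead of
-- scanning every partition in full and aggregating:
EXPLAIN SELECT max(id) FROM main;
```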

b. FULL OUTER JOIN improvement:
In 9.1, FULL OUTER JOIN can now use a hash algorithm, which is faster than the old method (a merge join requiring two sorts), as shown below:

i) Create tables and insert some values, as given below:

     CREATE TABLE test1 (a int);
     CREATE TABLE test2 (a int);
     INSERT INTO test1 SELECT generate_series(1,100000);
     INSERT INTO test2 SELECT generate_series(100,1000);

ii). EXPLAIN ANALYZE:

In a previous version of PostgreSQL (9.0):

        postgres=#  EXPLAIN ANALYZE SELECT * FROM test1 FULL OUTER JOIN test2 USING (a);
                                                       QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
 Merge Full Join  (cost=10952.05..11465.56 rows=100000 width=8) (actual time=214.420..370.898 rows=100000 loops=1)
   Merge Cond: (test1.a = test2.a)
   ->  Sort  (cost=10894.82..11144.82 rows=100000 width=4) (actual time=213.512..269.596 rows=100000 loops=1)
         Sort Key: test1.a
         Sort Method:  external sort  Disk: 1368kB
         ->  Seq Scan on test1  (cost=0.00..1393.00 rows=100000 width=4) (actual time=0.010..50.044 rows=100000 loops=1)
   ->  Sort  (cost=57.23..59.48 rows=901 width=4) (actual time=0.894..1.309 rows=901 loops=1)
         Sort Key: test2.a
         Sort Method:  quicksort  Memory: 38kB
         ->  Seq Scan on test2  (cost=0.00..13.01 rows=901 width=4) (actual time=0.012..0.436 rows=901 loops=1)
 Total runtime: 412.315 ms
(11 rows)

In PostgreSQL 9.1:

postgres=#  EXPLAIN ANALYZE SELECT * FROM test1 FULL OUTER JOIN test2 USING (a);
                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Hash Full Join  (cost=24.27..1851.28 rows=100000 width=8) (actual time=0.588..74.434 rows=100000 loops=1)
   Hash Cond: (test1.a = test2.a)
   ->  Seq Scan on test1  (cost=0.00..1443.00 rows=100000 width=4) (actual time=0.011..21.683 rows=100000 loops=1)
   ->  Hash  (cost=13.01..13.01 rows=901 width=4) (actual time=0.563..0.563 rows=901 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 32kB
         ->  Seq Scan on test2  (cost=0.00..13.01 rows=901 width=4) (actual time=0.014..0.217 rows=901 loops=1)
 Total runtime: 82.555 ms
(7 rows)

Looking at the above EXPLAIN ANALYZE output: PostgreSQL 9.1 only needs to build a hash on the smaller table (test2). The previous version required two sorts (one on the smaller table test2 and one on test1), which is costly. This shows that the 9.1 optimizer is smarter and produces a better plan.

Updatable Views in PostgreSQL 9.1 using INSTEAD OF Triggers

Users often ask about updatable views: are they supported in PostgreSQL, and can complex updatable views be written?
The answer to both is yes. Up to 9.0, we had to use RULEs to implement an updatable view. RULE implementation tends to be tedious, since the user has to write multiple RULEs to implement this feature.

The following code demonstrates this.

CREATE TABLE person_detail(pid NUMERIC PRIMARY KEY, pname TEXT);
CREATE TABLE person_job(pid NUMERIC PRIMARY KEY references person_detail(pid), job TEXT);

INSERT INTO person_detail VALUES(1,'Angela');
INSERT INTO person_detail VALUES(2,'Tom');
INSERT INTO person_detail VALUES(3,'Heikki');

INSERT INTO person_job VALUES(1,'Documenter');
INSERT INTO person_job VALUES(2,'Developer');
INSERT INTO person_job VALUES(3,'Commiter');

CREATE VIEW person_detail_job_vw AS SELECT p.pid, p.pname, j.job FROM person_detail p LEFT JOIN person_job j ON (j.pid=p.pid);

SELECT * FROM person_detail_job_vw;
 pid | pname  |    job
-----+--------+------------
   1 | Angela | Documenter
   2 | Tom    | Developer
   3 | Heikki | Commiter

Up to 9.0, the user had to write rules like the ones given below:

1. INSERT RULE

CREATE RULE person_detail_job_vw_INSERT AS ON INSERT TO person_detail_job_vw DO INSTEAD (
       INSERT INTO  person_detail VALUES(NEW.pid,NEW.pname);
       INSERT INTO  person_job VALUES(NEW.pid,NEW.job)
      );

2. UPDATE RULE:

CREATE RULE person_detail_job_vw_UPDATE AS ON UPDATE TO person_detail_job_vw DO INSTEAD (
       UPDATE person_detail SET pid=NEW.pid, pname=NEW.pname WHERE pid=OLD.pid;
       UPDATE person_job SET pid=NEW.pid, job=NEW.job WHERE pid=OLD.pid
      );

3. DELETE RULE:

  CREATE OR REPLACE RULE person_detail_job_vw_DELETE AS ON DELETE TO person_detail_job_vw DO INSTEAD (
       DELETE FROM person_job WHERE pid=OLD.pid;
       DELETE FROM person_detail WHERE pid=OLD.pid
      );

Let's see the rules at work:

 INSERT INTO person_detail_job_vw VALUES(4,'Singh','New JOB');
 INSERT 0 1

 SELECT * FROM person_detail_job_vw;
 pid | pname  |    job
-----+--------+------------
   1 | Angela | Documenter
   2 | Tom    | Developer
   3 | Heikki | Commiter
   4 | Singh  | New JOB
(4 rows)

 UPDATE person_detail_job_vw SET job='PATCHER'  WHERE pid=4;
 UPDATE 1

  SELECT * FROM person_detail_job_vw;
 pid | pname  |    job
-----+--------+------------
   1 | Angela | Documenter
   2 | Tom    | Developer
   3 | Heikki | Commiter
   4 | Singh  | PATCHER

 DELETE FROM person_detail_job_vw WHERE pid=4;
 DELETE 1

 SELECT * FROM person_detail_job_vw;
 pid | pname  |    job
-----+--------+------------
   1 | Angela | Documenter
   2 | Tom    | Developer
   3 | Heikki | Commiter

If the view has a more complex query, the logic must be split across even more RULEs.

Now, in PostgreSQL 9.1, the user can instead use an INSTEAD OF trigger. Following is an example:
Trigger Function

CREATE OR REPLACE FUNCTION person_detail_job_vw_dml()
RETURNS TRIGGER
LANGUAGE plpgsql
AS $function$
   BEGIN
      IF TG_OP = 'INSERT' THEN
        INSERT INTO  person_detail VALUES(NEW.pid,NEW.pname);
        INSERT INTO  person_job VALUES(NEW.pid,NEW.job);
        RETURN NEW;
      ELSIF TG_OP = 'UPDATE' THEN
       UPDATE person_detail SET pid=NEW.pid, pname=NEW.pname WHERE pid=OLD.pid;
       UPDATE person_job SET pid=NEW.pid, job=NEW.job WHERE pid=OLD.pid;
       RETURN NEW;
      ELSIF TG_OP = 'DELETE' THEN
       DELETE FROM person_job WHERE pid=OLD.pid;
       DELETE FROM person_detail WHERE pid=OLD.pid;
       RETURN NULL;
      END IF;
      RETURN NEW;
    END;
$function$;

Trigger:

CREATE TRIGGER person_detail_job_vw_dml_trig
    INSTEAD OF INSERT OR UPDATE OR DELETE ON
      person_detail_job_vw FOR EACH ROW EXECUTE PROCEDURE person_detail_job_vw_dml();

Let's see it at work:

postgres=# SELECT VERSION();
                                                            version
--------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.1.0 on x86_64-apple-darwin, compiled by i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664), 64-bit

postgres=# CREATE TRIGGER person_detail_job_vw_dml_trig
postgres-#     INSTEAD OF INSERT OR UPDATE OR DELETE ON
postgres-#       person_detail_job_vw FOR EACH ROW EXECUTE PROCEDURE person_detail_job_vw_dml();
CREATE TRIGGER
postgres=# INSERT INTO person_detail_job_vw VALUES(4,'Singh','New JOB');
INSERT 0 1
postgres=# SELECT * FROM person_detail_job_vw;
 pid | pname  |    job
-----+--------+------------
   1 | Angela | Documenter
   2 | Tom    | Developer
   3 | Heikki | Commiter
   4 | Singh  | New JOB
(4 rows)

postgres=#  UPDATE person_detail_job_vw SET job='PATCHER'  WHERE pid=4;
UPDATE 1
postgres=# SELECT * FROM person_detail_job_vw;
 pid | pname  |    job
-----+--------+------------
   1 | Angela | Documenter
   2 | Tom    | Developer
   3 | Heikki | Commiter
   4 | Singh  | PATCHER
(4 rows)
postgres=# DELETE FROM person_detail_job_vw WHERE pid=4;
DELETE 1
postgres=# SELECT * FROM person_detail_job_vw;
 pid | pname  |    job
-----+--------+------------
   1 | Angela | Documenter
   2 | Tom    | Developer
   3 | Heikki | Commiter
(3 rows)

Now, in 9.1, anyone implementing an updatable view can do it with a single plpgsql trigger function.

Utility Operations improvement in PostgreSQL 9.1

1. Transaction-level advisory locks:

PostgreSQL provides a means for creating locks that have application-defined meanings. These are called advisory locks, because the system does not enforce their use; it is up to the application to use them correctly.
Up to 9.0, PostgreSQL had only session-level advisory locks. Now, in PostgreSQL 9.1, we also have transaction-level advisory locks, which are released automatically at the end of the transaction. Some examples are given below:

BEGIN;

-- grabbing txn locks multiple times

SELECT
        pg_advisory_xact_lock(1), pg_advisory_xact_lock(1),
        pg_advisory_xact_lock_shared(2), pg_advisory_xact_lock_shared(2),
        pg_advisory_xact_lock(1, 1), pg_advisory_xact_lock(1, 1),
        pg_advisory_xact_lock_shared(2, 2), pg_advisory_xact_lock_shared(2, 2);

SELECT locktype, classid, objid, objsubid, mode, granted
        FROM pg_locks WHERE locktype = 'advisory'
        ORDER BY classid, objid, objsubid;

COMMIT;

The advantages of advisory locks are:
1. Advisory locks are fast.
2. They avoid MVCC bloat.
3. They are automatically cleaned up by the server at the end of the session (or, for the transaction-level variants, at the end of the transaction).

For these reasons, users and developers should consider advisory locks for application-level coordination instead of taking regular database-level locks.
The important transaction-level advisory lock functions in PostgreSQL 9.1 are given below:

pg_advisory_xact_lock(key bigint)                    : Obtain exclusive transaction-level advisory lock
pg_advisory_xact_lock(key1 int, key2 int)            : Obtain exclusive transaction-level advisory lock
pg_advisory_xact_lock_shared(key bigint)             : Obtain shared transaction-level advisory lock
pg_advisory_xact_lock_shared(key1 int, key2 int)     : Obtain shared transaction-level advisory lock
pg_try_advisory_xact_lock(key bigint)                : Obtain exclusive transaction-level advisory lock if available
pg_try_advisory_xact_lock(key1 int, key2 int)        : Obtain exclusive transaction-level advisory lock if available
pg_try_advisory_xact_lock_shared(key bigint)         : Obtain shared transaction-level advisory lock if available
pg_try_advisory_xact_lock_shared(key1 int, key2 int) : Obtain shared transaction-level advisory lock if available
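The pg_try_* variants return a boolean instead of blocking, which makes them convenient as guards. A minimal sketch (the key 12345 is an arbitrary, application-chosen value):

```sql
BEGIN;
-- Returns true and takes the lock if it is free;
-- returns false immediately (without waiting) otherwise:
SELECT pg_try_advisory_xact_lock(12345);
-- ... perform the protected work only if the call returned true ...
COMMIT;  -- the transaction-level lock is released automatically here
```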

2. New implementation of TRUNCATE … RESTART IDENTITY
This is a new improvement in PostgreSQL 9.1.

Up to 9.0, “TRUNCATE … RESTART IDENTITY” used “ALTER SEQUENCE RESTART” behind the scenes to reset sequences. The sequence reset happened before the table was truncated and was non-transactional, so an error or backend crash between the reset and the commit could leave the sequence out of sync with the table contents.

To fix this, in PostgreSQL 9.1 the same command creates a new relfilenode for each sequence reset by RESTART IDENTITY. If the transaction aborts, PostgreSQL automatically reverts to the old relfilenode file. This approach requires an exclusive lock on the sequence, but since TRUNCATE already holds an exclusive lock on the table, the additional lock on the associated sequence has no practical effect.
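A minimal sketch of the transactional behavior (the table and column names are illustrative):

```sql
CREATE TABLE items (id serial PRIMARY KEY, val text);
INSERT INTO items (val) VALUES ('a'), ('b'), ('c');

BEGIN;
TRUNCATE items RESTART IDENTITY;
ROLLBACK;  -- in 9.1 this also restores the sequence's previous state

-- The next serial value continues from 4, not from 1, because the
-- sequence reset was rolled back together with the TRUNCATE:
INSERT INTO items (val) VALUES ('d');
```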

3. COPY command improvement:
In PostgreSQL 9.1, COPY now has an ENCODING option, which is useful when the user wants to copy data in an encoding other than the current client encoding. An example is given below:

    postgres=# COPY y TO stdout WITH CSV FORCE QUOTE col2 ESCAPE E'\\' ENCODING 'sql_ascii';
"Jackson, Sam","\\h"
"It is \"perfect\".","	"
"",

4. EXPLAIN VERBOSE:
In PostgreSQL 9.1, the VERBOSE option of EXPLAIN has been improved: the user can now see the function call expression in a FunctionScan node. An example is given below:

   postgres=# explain (verbose,analyze) select max(sal) from emp;
                                                 QUERY PLAN
------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1.18..1.19 rows=1 width=5) (actual time=0.027..0.027 rows=1 loops=1)
   Output: max(sal)
   ->  Seq Scan on public.emp  (cost=0.00..1.14 rows=14 width=5) (actual time=0.008..0.012 rows=14 loops=1)
         Output: empno, ename, job, mgr, hiredate, sal, comm, deptno
 Total runtime: 0.077 ms
(5 rows)
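The FunctionScan detail is easier to see on a set-returning function call. A sketch (the exact plan text may vary):

```sql
-- With VERBOSE, 9.1 includes the call expression of the function
-- scan (a "Function Call" line) in the plan output:
EXPLAIN VERBOSE SELECT * FROM generate_series(1, 3);
```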

5. New VACUUM FULL VERBOSE and CLUSTER VERBOSE

In PostgreSQL 9.0, due to the new implementation of VACUUM FULL, the VERBOSE option did not give much information. This is now fixed in PostgreSQL 9.1: VERBOSE with VACUUM FULL and CLUSTER reports live and dead tuple counts, and also whether CLUSTER is using an index to rebuild the table. Some examples are given below:
CLUSTER VERBOSE example:

postgres=# cluster verbose;
INFO:  clustering "public.emp" using sequential scan and sort
INFO:  "emp": found 0 removable, 14 nonremovable row versions in 1 pages
DETAIL:  0 dead row versions cannot be removed yet.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
CLUSTER
postgres=#

VACUUM FULL VERBOSE Example

postgres=# VACUUM FULL VERBOSE emp;
INFO:  vacuuming "public.emp"
INFO:  "emp": found 0 removable, 14 nonremovable row versions in 1 pages
DETAIL:  0 dead row versions cannot be removed yet.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM

6. autovacuum improvement.

In 9.1, the autovacuum process no longer waits on tables that are already locked by another process.
Instead, it skips such locked tables, continues vacuuming the remaining tables, and retries the skipped tables later.

New Functions/Improvements in PostgreSQL 9.1

1. SQL function format(text, …):
This function is similar to the C function sprintf; however, only the following conversion specifications are recognized:

%s interpolates the corresponding argument as a string
%I escapes its argument as an SQL identifier
%L escapes its argument as an SQL literal
%% outputs a literal %.

A conversion can reference an explicit parameter position by preceding the conversion specifier with n$, where n is the argument position.

Some Examples are given below:

postgres=# select format('%1$s %3$s', 1, 2, 3);
 format
--------
 1 3
(1 row)

postgres=# select format('Hello %s', 'World');
   format
-------------
 Hello World
(1 row)
postgres=# select format('Hello %s %1$s %s', 'World', 'Hello again');
            format
-------------------------------
 Hello World World Hello again
(1 row)
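The %I and %L specifiers are particularly useful for building dynamic SQL safely. A small sketch (the table and value are illustrative):

```sql
postgres=# select format('SELECT * FROM %I WHERE ename = %L', 'emp', 'O''Brien');
                   format
--------------------------------------------
 SELECT * FROM emp WHERE ename = 'O''Brien'
(1 row)
```

Note how %L doubles the embedded quote, so the generated statement is safe to execute.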

2. New string functions concat(), concat_ws(), left(), right(), and reverse()
(i). concat() function:
It concatenates all of its arguments; NULL arguments are ignored. Examples are given below:

postgres=# select concat('one');
 concat
--------
 one
(1 row)

postgres=# select concat(1,2,3,'hello',true, false, to_date('20100309','YYYYMMDD'));
        concat
----------------------
 123hellotf2010-03-09
(1 row)

(ii). concat_ws() function:
It works like concat(), except that the first argument is used as the separator. Examples are given below:

postgres=# select concat_ws('#',1,2,3,'hello',true, false, to_date('20100309','YYYYMMDD'));
         concat_ws
----------------------------
 1#2#3#hello#t#f#2010-03-09
(1 row)

postgres=# select concat_ws(',',10,20,null,30);
 concat_ws
-----------
 10,20,30
(1 row)

(iii). left() function:
This function returns the first n characters of the string. When n is negative, it returns all but the last |n| characters. Examples are given below:

postgres=# select left('Hello World',2);
 left
------
 He
(1 row)
postgres=# select left('Hello World',-2);
   left
-----------
 Hello Wor
(1 row)

(iv). right() function:
This function returns the last n characters of the string. When n is negative, it returns all but the first |n| characters. Examples are given below:

postgres=# select right('Hello World',2);
 right
-------
 ld
(1 row)

postgres=# select right('Hello World',-2);
   right
-----------
 llo World
(1 row)

(v) reverse() function:
   This function returns the reversed string. An example is given below:
postgres=# select reverse('1234567');
 reverse
---------
 7654321
(1 row)

3. pg_read_binary_file() function:
This function is similar to pg_read_file(), except that it returns the content of a binary file as bytea. An example is given below:

select pg_read_binary_file('testbin');
                              pg_read_binary_file
--------------------------------------------------------------------------------
 \x303131302020203030313109203031313020202030303031092030313131202020313030300a
(1 row)

4. New version of pg_read_file() for reading an entire file:

In previous versions of PostgreSQL, pg_read_file() required the offset and length of the data to read. PostgreSQL 9.1 adds a new variant of pg_read_file() that takes only a filename and reads the entire file. An example is given below:

postgres=# select pg_read_file('postgresql.conf');
                                                 pg_read_file
---------------------------------------------------------------------------------------------------------------
 # -----------------------------                                                                              +
 # PostgreSQL configuration file                                                                              +
 # -----------------------------                                                                              +
 #                                                                                                            +
 # This file consists of lines of the form:                                                                   +
 #                                                                                                            +
 #   name = value                                                                                             +
 #                                                                                                            +
 # (The "=" is optional.)  Whitespace may be used.  Comments are introduced with                              +
 # "#" anywhere on a line.  The complete list of parameter names and allowed                                  +
 # values can be found in the PostgreSQL documentation.                                                       +
 #                                                                                                            +
 # The commented-out settings shown in this file represent the default values.                                +
 # Re-commenting a setting is NOT sufficient to revert it to the default value;                               +
 # you need to reload the server.                                                                             +
 #                                                                                                            +
 # This file is read on server startup and when the server receives a SIGHUP                                  +
 # signal.  If you edit the file on a running system, you have to SIGHUP the                                  +
 # server for the changes to take effect, or use "pg_ctl reload".  Some                                       +
 # parameters, which are marked below, require a server shutdown and restart to                               +
 # take effect.                                                                                               +
 #                                                                                                            +
 # Any parameter can also be given as a command-line option to the server, e.g.,                              +
 # "postgres -c log_connections=on".  Some parameters can be changed at run time                              +
 # with the "SET" SQL command.                                                                                +
 #                                                                                                            +
 # Memory units:  kB = kilobytes        Time units:  ms  = milliseconds                                       +
 #                MB = megabytes                     s   = seconds                                            +
 #                GB = gigabytes                     min = minutes                                            +
 #                                                   h   = hours                                              +
 #                                                   d   = days                                               +
                                                                                                              +

5. Optional third argument in array_to_string()/string_to_array() functions for NULL processing.

In PostgreSQL 9.1, the array_to_string()/string_to_array() functions take an optional third argument that specifies how NULL values are represented (in previous versions of PostgreSQL, this was missing). Some examples are given below:

postgres=# select array_to_string(array[1,2,3,4,NULL,6], ',', '*');
 array_to_string
-----------------
 1,2,3,4,*,6
(1 row)

Similarly for string_to_array(), an example is given below:

postgres=# select string_to_array('1,2,3,4,*,6', ',', '*');
 string_to_array
------------------
 {1,2,3,4,NULL,6}
(1 row)

XML Functions:

1. XMLEXISTS and xpath_exists

(i). XMLEXISTS function:

In PostgreSQL 9.1, the XMLEXISTS function has been added. With it, the user can check whether the XPath expression given as the first argument returns any nodes. The function returns true or false. Examples are given below:

postgres=# SELECT xmlexists('//town[text() = ''Toronto'']' PASSING BY REF '<towns><town>Bidford-on-Avon</town><town>Cwmbran</town><town>Bristol</town></towns>');
 xmlexists
-----------
 f
(1 row)

postgres=# SELECT xmlexists('//town[text() = ''Cwmbran'']' PASSING BY REF '<towns><town>Bidford-on-Avon</town><town>Cwmbran</town><town>Bristol</town></towns>');
 xmlexists
-----------
 t
(1 row)

(ii) xpath_exists function:
PostgreSQL 9.1 now has one more function, xpath_exists(). This function is a specialized form of the xpath() function: instead of returning the individual XML values that satisfy the XPath expression, it returns a Boolean indicating whether the query was satisfied. It is equivalent to the standard XMLEXISTS predicate, except that it also offers support for a namespace mapping argument. Examples are given below:

postgres=# SELECT xpath_exists('//town[text() = ''Toronto'']','<towns><town>Bidford-on-Avon</town><town>Cwmbran</town><town>Bristol</town></towns>'::xml);
 xpath_exists
--------------
 f
(1 row)
postgres=# SELECT xpath_exists('//town[text() = ''Cwmbran'']','<towns><town>Bidford-on-Avon</town><town>Cwmbran</town><town>Bristol</town></towns>'::xml);
 xpath_exists
--------------
 t
(1 row)

2. xml_is_well_formed(), xml_is_well_formed_document(), xml_is_well_formed_content() functions:

(i) xml_is_well_formed() function:
xml_is_well_formed() checks whether its argument is well-formed XML, treating it as a document or as content according to the xmloption setting. An example is given below:

 postgres=# SELECT xml_is_well_formed('<abc>bar</abc>');
 xml_is_well_formed
--------------------
 t
(1 row)

(ii). xml_is_well_formed_content() function:
This function checks for well-formed XML content. Examples are given below:

SELECT xml_is_well_formed_content('<foo>bar</foo>');
 xml_is_well_formed_content
----------------------------
 t
(1 row)

SELECT xml_is_well_formed_content('abc');
 xml_is_well_formed_content
----------------------------
 t
(1 row)

(iii). xml_is_well_formed_document function:
This function checks for a well-formed document. Examples are given below:

postgres=# SELECT xml_is_well_formed_document('<pg:foo xmlns:pg="http://postgresql.org/stuff">bar</pg:foo>');
 xml_is_well_formed_document
-----------------------------
 t
(1 row)

postgres=# SELECT xml_is_well_formed_document('<pg:foo xmlns:pg="http://postgresql.org/stuff">bar</my:foo>');
 xml_is_well_formed_document
-----------------------------
 f
(1 row)

Some Object Information Functions improvements in PostgreSQL 9.1

1. pg_describe_object:

In PostgreSQL 9.1, the pg_describe_object() function has been added. With it, the user can get a human-readable description of a database object, which makes the contents of pg_depend much easier to understand, as given below:

postgres=# select classid, objid, objsubid, pg_describe_object(classid, objid, objsubid) from pg_depend where classid <> 0 and pg_describe_object(classid, objid, objsubid) like '%test_money%';
 classid | objid | objsubid |               pg_describe_object
---------+-------+----------+------------------------------------------------
    1247 | 16426 |        0 | type test_money
    1247 | 16425 |        0 | type test_money[]
    1259 | 16424 |        0 | table test_money
    1259 | 16429 |        0 | index test_money_idx
    2606 | 16431 |        0 | constraint uniq_constraint on table test_money

2. quote_all_identifiers (boolean) parameter:

With this parameter, the user can force quoting of all identifiers in EXPLAIN output and in system catalog functions like pg_get_viewdef().
An example is given below:

postgres=# explain select * from emp;
                     QUERY PLAN
-----------------------------------------------------
 Seq Scan on emp  (cost=0.00..1.14 rows=14 width=41)
(1 row)

postgres=# set quote_all_identifiers to true;
SET
postgres=# explain select * from emp;
                      QUERY PLAN
-------------------------------------------------------
 Seq Scan on "emp"  (cost=0.00..1.14 rows=14 width=41)
(1 row)

Similarly for pg_get_viewdef() or any other system catalog function; an example is given below:

postgres=# select pg_get_Viewdef('salesemp');
                                                  pg_get_viewdef
-------------------------------------------------------------------------------------------------------------------
 SELECT emp.empno, emp.ename, emp.hiredate, emp.sal, emp.comm FROM emp WHERE ((emp.job)::text = 'SALESMAN'::text);
(1 row)

postgres=# set quote_all_identifiers to true;
SET
postgres=# select pg_get_Viewdef('salesemp');
                                                                 pg_get_viewdef
-------------------------------------------------------------------------------------------------------------------------------------------------
 SELECT "emp"."empno", "emp"."ename", "emp"."hiredate", "emp"."sal", "emp"."comm" FROM "emp" WHERE (("emp"."job")::"text" = 'SALESMAN'::"text");
(1 row)

Setting quote_all_identifiers cluster-wide will also force the pg_dump/pg_dumpall commands to quote all identifiers when taking a database dump.

3. Complete implementation of sequence columns in information_schema.sequences:

In previous versions of PostgreSQL, the columns describing a sequence existed in the information_schema.sequences view, but the sequence parameters were not populated.

Now, in 9.1, the sequence parameters are fully populated in information_schema.sequences. An example is given below:
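The outputs below assume a sequence created along these lines (the start value 8000 matches the 9.1 output shown):

```sql
-- Sequence whose parameters are inspected via information_schema:
CREATE SEQUENCE next_empno START 8000;
```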

In 9.0:

postgres=# \x
Expanded display is on.
postgres=# select * from information_schema.sequences where sequence_name='next_empno';
-[ RECORD 1 ]-----------+-----------
sequence_catalog        | postgres
sequence_schema         | public
sequence_name           | next_empno
data_type               | bigint
numeric_precision       | 64
numeric_precision_radix | 2
numeric_scale           | 0
maximum_value           |
minimum_value           |
increment               |
cycle_option            |

In 9.1:

postgres=# select * from information_schema.sequences where sequence_name='next_empno';
-[ RECORD 1 ]-----------+--------------------
sequence_catalog        | postgres
sequence_schema         | public
sequence_name           | next_empno
data_type               | bigint
numeric_precision       | 64
numeric_precision_radix | 2
numeric_scale           | 0
start_value             | 8000
minimum_value           | 1
maximum_value           | 9223372036854775807
increment               | 1
cycle_option            | NO


4. public as a pseudo-role name in has_table_privilege() and related functions:

In previous versions, the public role was not recognized by the has_*_privilege() functions. Now, in 9.1, public has been added as a pseudo-role name, which allows checking for public permissions.
An example is given below:
In PostgreSQL 9.0:

postgres=# select has_table_privilege('public','emp','select');
ERROR:  role "public" does not exist

In PostgreSQL 9.1:

postgres=# select has_table_privilege('public','emp','select');
 has_table_privilege
---------------------
 t
(1 row)