Tip: PPAS 9.4 and Global Temporary Table

Customers who have moved/migrated their databases from Oracle to PPAS frequently ask for Global Temporary Tables in PPAS.

Currently, PPAS doesn't support Global Temporary Tables. However, there is a way users can achieve this functionality in PPAS.

Before we continue with the implementation, let's first understand the important characteristics of a Global Temporary Table:
1. A Global Temporary Table gives a predefined structure for storing data.
2. It's an unlogged table, which means activity on this table will not be logged.
3. The data in a Global Temporary Table is private, such that data inserted by a session can only be accessed by that session.

Based on the above characteristics of a Global Temporary Table, AKA GTT, we can emulate similar behavior in PPAS using the following method:
1. Create an UNLOGGED TABLE in PPAS, so that activity on the table won't be logged.
2. Create a Row Level Security policy such that each session can only see its own rows (based on its backend PID).
3. Create a process which can clean up data from the GTT based on PIDs which are no longer active in the database.

Let's see how we can implement this in Advanced Server.

1. Create an UNLOGGED table with all the required columns, plus an extra pid column.

CREATE UNLOGGED TABLE test_global_temporary_table(id numeric, col text, pid bigint default pg_backend_pid());

2. Create a function to restrict the visibility of data.

CREATE OR REPLACE FUNCTION verify_pid_context (
    p_schema       TEXT,
    p_object       TEXT
)
RETURN VARCHAR2
IS
DECLARE
   predicate TEXT;
BEGIN
    IF ( current_setting('is_superuser') = 'on')
    THEN
      predicate := 'true';
    ELSE
      predicate := format('pid = %s',pg_backend_pid());
    END IF;
    RETURN predicate;
END;

3. Apply the security policy based on the above function.

DECLARE
  v_object_schema VARCHAR2(30)   := 'public';
  v_object_name VARCHAR2(30)     := 'test_global_temporary_table';
  v_policy_name VARCHAR2(30)     := 'secure_by_pid';
  v_function_schema VARCHAR2(30) := 'public';
  v_policy_function VARCHAR2(30) := 'verify_pid_context';
  v_statement_types VARCHAR2(30) := 'INSERT,UPDATE,DELETE,SELECT';
  v_update_check BOOLEAN         := TRUE;
  v_enable BOOLEAN               := TRUE;
BEGIN
  DBMS_RLS.ADD_POLICY( v_object_schema,
                       v_object_name,
                       v_policy_name,
                       v_function_schema,
                       v_policy_function,
                       v_statement_types,
                       v_update_check,
                       v_enable
                     );
END;

4. Create an updatable view that hides the pid column. All sessions will use this view as the GTT.

   CREATE OR REPLACE VIEW test_global_temporary AS SELECT id, col FROM test_global_temporary_table;

5. Create a backend job which can clean up the table based on stale/old sessions.
For this job, the user/developer can do the following:
a. Use a superuser and execute a DELETE command on the base table:
DELETE FROM test_global_temporary_table WHERE pid NOT IN (SELECT pid FROM pg_stat_activity);
b. To schedule the above DELETE command, the user can use one of the following:
i. crontab
ii. Or the PPAS DBMS_SCHEDULER package (a sketch is given below).
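The following is a minimal sketch of scheduling the cleanup with DBMS_SCHEDULER. The job name and the five-minute interval are only illustrative, and it assumes pgAgent is installed and running to execute scheduled jobs; adjust it for your environment.

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'gtt_cleanup_job',          -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DELETE FROM public.test_global_temporary_table WHERE pid NOT IN (SELECT pid FROM pg_stat_activity);',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=5', -- run every 5 minutes
    enabled         => TRUE,
    comments        => 'Purge GTT rows left behind by closed sessions'
  );
END;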

6. GRANT ALL privileges on both objects to the database user who will access the Global Temporary Table.

    GRANT ALL ON test_global_temporary TO testdbuser;
    GRANT ALL ON test_global_temporary_table TO testdbuser;

Now, let's try the above implementation of the Global Temporary Table.

Open two sessions as a normal user (testdbuser) as given below:

[vibhorkumar@localhost ~]$ psql -U testdbuser edb
psql.bin (9.4.4.9)
Type "help" for help.

edb=> 
edb=> select pg_backend_pid();
 pg_backend_pid 
----------------
          32722
(1 row)


edb=> select pg_backend_pid();
 pg_backend_pid 
----------------
          32729
(1 row)

Now, from both sessions, insert some records.
From the first session:

edb=> INSERT INTO test_global_temporary VALUES(1,'FROM pid 32722');
INSERT 0 1
edb=> INSERT INTO test_global_temporary VALUES(2,'FROM pid 32722');
INSERT 0 1
edb=> INSERT INTO test_global_temporary VALUES(3,'FROM pid 32722');
INSERT 0 1

From the second session:

edb=> INSERT INTO test_global_temporary VALUES(1,'FROM pid 32729');
INSERT 0 1
edb=> INSERT INTO test_global_temporary VALUES(2,'FROM pid 32729');
INSERT 0 1
edb=> INSERT INTO test_global_temporary VALUES(3,'FROM pid 32729');
INSERT 0 1

From the first session:

edb=> SELECT * FROM test_global_temporary;
 id |      col       
----+----------------
  1 | FROM pid 32722
  2 | FROM pid 32722
  3 | FROM pid 32722
(3 rows)

From the second session:

edb=> SELECT * FROM test_global_temporary;
 id |      col       
----+----------------
  1 | FROM pid 32729
  2 | FROM pid 32729
  3 | FROM pid 32729
(3 rows)

This shows that an unlogged table, with the right RLS policy and a backend cleanup job, can be a potential solution for Global Temporary Tables.

Postgres and Transparent Data Encryption (TDE)

Security has always been a great concern of enterprises. Especially if you have crucial information stored in the database, you would always prefer to have high security around it. Over the years, technologies have evolved and provided better solutions for it.

If you have very sensitive information, people try to keep it encrypted so that, in case somebody gets access to the system, they cannot view this information unless they are authorized.

For managing sensitive information, enterprises use multiple methods:

  1. Encrypting specific information.

If you are a PPAS user, you can use the DBMS_CRYPTO package, which provides a way of encrypting sensitive information in the database.

For more information, please refer to the following link:

http://www.enterprisedb.com/docs/en/9.4/oracompat/Database_Compatibility_for_Oracle_Developers_Guide.1.178.html#

For PostgreSQL, users can use the pgcrypto module.
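A minimal sketch with pgcrypto's symmetric-key functions is given below; the sample value and key are only illustrative:

-- pgcrypto ships as a contrib module and must be installed once per database
CREATE EXTENSION pgcrypto;

-- encrypt a sensitive value before storing it (returns bytea)
SELECT pgp_sym_encrypt('my sensitive data', 'my_secret_key');

-- decrypt it again with the same key (the round trip returns the original text)
SELECT pgp_sym_decrypt(pgp_sym_encrypt('my sensitive data', 'my_secret_key'), 'my_secret_key');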

  2. Transparent Data Encryption (TDE) is another method, employed by both Microsoft and Oracle, to encrypt database files. TDE offers encryption at the file level. This method solves the problem of protecting data at rest, i.e. encrypting databases both on the hard drive and consequently on backup media. Enterprises typically employ TDE to solve compliance issues such as PCI DSS.

Postgres Plus currently doesn't have built-in TDE; however, if enterprises are looking for encryption at the database file level, they can use one of the following methods for protecting data at rest:

  1. Full Disk Encryption:

Full disk or partition encryption is one of the best ways of protecting your data. This method not only protects each file but also protects the temporary storage that may contain parts of these files. Full disk encryption protects all of your files, so you do not have to worry about selecting what you want to protect and possibly missing a file.

RHEL (Red Hat) supports Linux Unified Key Setup-on-disk-format (LUKS). LUKS bulk-encrypts hard drive partitions.

For more information on LUKS, please refer to the following link:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/chap-Security_Guide-Encryption.html#sect-Security_Guide-LUKS_Disk_Encryption

  2. File system-level encryption:

File system-level encryption is often called file/directory encryption. In this method, individual files or directories are encrypted by the file system itself.

There are stackable cryptographic file system encryption options available which users can utilize in their environments.

File system-level encryption gives the following advantages:

  1. Flexible file-based key management, so that each file can be (and usually is) encrypted with a separate encryption key.
  2. Individual management of encrypted files, e.g. incremental backups of the individual changed files, even in encrypted form, rather than backup of the entire encrypted volume.
  3. Access control can be enforced through the use of public-key cryptography, and cryptographic keys are only held in memory while the file decrypted by them is held open.

Stackable cryptographic file system encryption can be used with Postgres for Transparent Data Encryption.

In this blog, I will discuss using mount ecryptfs, as it requires less setup overhead (LUKS requires a new disk to be configured and formatted before storing data on it; “mount ecryptfs” works with existing directories and data).

If enterprises want to give DBAs control over TDE, they can define a few sudo rules that allow DBAs to execute the commands needed for encryption.

The following is a method they can use:

  • Ask the system admin to create sudo rules that allow the DBA to encrypt the data directory for Postgres Plus. One common way to do this is using the “mount ecryptfs” command on Linux operating systems.
  • If the user needs to encrypt the /ppas94/data directory, they can use the following command:
sudo mount -t ecryptfs /ppas94/data /ppas94/data

More information can be found in the documentation from RHEL:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-efs.html

Users can also specify the encryption key type (passphrase, openssl), cipher (aes, des3_ede, …), key byte size, and other options with the above command.

An example is given below:

# mount -t ecryptfs /home /home -o ecryptfs_unlink_sigs \
 ecryptfs_key_bytes=16 ecryptfs_cipher=aes ecryptfs_sig=c7fed37c0a341e19

CentOS 7 and RHEL 7 don't come with ecryptfs by default; therefore, users can also use the encfs command.

For more information on encfs, please refer to the following link:

https://wiki.archlinux.org/index.php/EncFS

Following are the steps to use encfs to encrypt the data directory.

  1. Create a data directory using the following commands, as the enterprisedb user:
   mkdir /var/lib/ppas/9.4/encrypted_data
   chmod 700 /var/lib/ppas/9.4/encrypted_data
  2. Use the following encfs command to encrypt the data directory:
         encfs /var/lib/ppas/9.4/encrypted_data /var/lib/ppas/9.4/data

A snapshot of the above command is given below:

encfs /var/lib/ppas/9.4/encrypted_data /var/lib/ppas/9.4/data

The directory "/var/lib/ppas/9.4/data" does not exist. Should it be created? (y,n) y
Creating new encrypted volume.
Please choose from one of the following options:
 enter "x" for expert configuration mode,
 enter "p" for pre-configured paranoia mode,
 anything else, or an empty line will select standard mode.
?> p

Paranoia configuration selected.

Configuration finished.  The filesystem to be created has
the following properties:
Filesystem cipher: "ssl/aes", version 3:0:2
Filename encoding: "nameio/block", version 3:0:1
Key Size: 256 bits
Block Size: 1024 bytes, including 8 byte MAC header
Each file contains 8 byte header with unique IV data.
Filenames encoded using IV chaining mode.
File data IV is chained to filename IV.
File holes passed through to ciphertext.

-------------------------- WARNING --------------------------
The external initialization-vector chaining option has been
enabled.  This option disables the use of hard links on the
filesystem. Without hard links, some programs may not work.
The programs 'mutt' and 'procmail' are known to fail.  For
more information, please see the encfs mailing list.
If you would like to choose another configuration setting,
please press CTRL-C now to abort and start over.

Now you will need to enter a password for your filesystem.
You will need to remember this password, as there is absolutely
no recovery mechanism.  However, the password can be changed
later using encfsctl.

New Encfs Password: 
Verify Encfs Password: 

  3. After encrypting the data directory, users also need to modify the postgresql-<version> service script to include the proper command for supplying the password. For that, they can either use sshpass or write their own program which passes the password when mounting the directory.

As you can see, achieving Transparent Data Encryption with Postgres is very easy.

Dynamic RLS implementation in PPAS 9.3

In the course of my work at EnterpriseDB, migrating Oracle databases to EnterpriseDB’s Postgres Plus Advanced Server is a common task. However, now and then we encounter unique situations. While working on a migration project recently, we encountered a new use case for RLS (Row level Security).

The customer had a centralized database where it stored a huge number of transactions. These transactions are performed by different business units located in different parts of the world. There are certain types of transactions that should not be visible even if they are being queried by the same company. That is where RLS comes in. With RLS, the specific transactions, or kinds of transactions, that remain visible are mapped back to an attribute in the table.

The customer needed the application to authenticate users and set the context for which records in the database become visible for a specific session.
In its deployment of Oracle, the customer had used the functions/procedures in the Oracle package DBMS_SESSION. In the application, the customer used DBMS_SESSION.SET_CONTEXT to set the context, and for Row Level Security, the customer was using DBMS_SESSION.SYS_CONTEXT to implement security around the transactions.

Postgres Plus Advanced Server has a DBMS_SESSION package that is compatible with Oracle. However, it does not currently offer users the capability of setting a user-defined context and implementing RLS based on those contexts. Given that others may experience situations similar to our customer's, I wanted to provide the procedures and functions that users could deploy.

SET_CONTEXT procedure

The definition of this procedure is given below:

CREATE OR REPLACE PROCEDURE set_context(namespace TEXT,
                                        attribute TEXT,
                                        val       TEXT)
AS
BEGIN
    EXECUTE IMMEDIATE format('SET %s.%s TO %s',namespace, attribute,val);
END;

Using this procedure, users can set their own context at the session level.

The following is a function to help view the context in a session which has been set using the above procedure.

CREATE OR REPLACE FUNCTION USYS_CONTEXT(namespace TEXT,
                                        parameter TEXT,
                                        len       BIGINT DEFAULT 8)
RETURN TEXT
AS
  DECLARE
    return_val TEXT;
  BEGIN
    EXECUTE IMMEDIATE format('SHOW %s.%s',namespace,parameter) INTO return_val;
    RETURN substr(return_val,1,len);
    EXCEPTION WHEN others THEN
     RETURN NULL;
END;

The following is an example of how we can implement row level security based on the above procedure and functions:
1. Create a table with an attribute context_check that maps to the context set by the procedure:

CREATE TABLE test_rls(id numeric, col text, context_check text);
INSERT INTO test_rls SELECT id, 'First_check','aaa' FROM generate_series(1,10) foo(id);
INSERT INTO test_rls SELECT id, 'First_check','bbb' FROM generate_series(1,10) foo(id);
INSERT INTO test_rls SELECT id, 'First_check','ddd' FROM generate_series(1,10) foo(id);

2. Now create a function to check the application context. Below is one such function:

CREATE OR REPLACE FUNCTION verify_user_context (
    p_schema       TEXT,
    p_object       TEXT
)
RETURN VARCHAR2
IS
DECLARE
   predicate TEXT;
BEGIN
    predicate := format('context_check = public.usys_context(''%s''::text,''%s''::text, 8)','CONTEXT','APP_PREDICATE');
    RETURN predicate;
END;

3. Now apply the security policy using the policy function, as shown below:

DECLARE
v_object_schema VARCHAR2(30) := 'public';
v_object_name VARCHAR2(30) := 'test_rls';
v_policy_name VARCHAR2(30) := 'secure_data';
v_function_schema VARCHAR2(30) := 'public';
v_policy_function VARCHAR2(30) := 'verify_user_context';
v_statement_types VARCHAR2(30) := 'INSERT,UPDATE,DELETE,SELECT';
v_update_check BOOLEAN := TRUE;
v_enable BOOLEAN := TRUE;
BEGIN
DBMS_RLS.ADD_POLICY(
v_object_schema,
v_object_name,
v_policy_name,
v_function_schema,
v_policy_function,
v_statement_types,
v_update_check,
v_enable
);
END;

Now we are set to test this implementation.

Connect to one session and try the following:
1. Set the context using procedure SET_CONTEXT as given below:

EXEC  SET_CONTEXT('CONTEXT','APP_PREDICATE','ddd');

EDB-SPL Procedure successfully completed

2. Verify in the same session that we have set the context properly:

 SELECT USYS_CONTEXT('CONTEXT','APP_PREDICATE',2000);
 usys_context
--------------
 ddd
(1 row)

3. Since in this session we have the context set to ddd, we should be able to see only the rows corresponding to that context:

beta=# SELECT * FROM test_rls ;
 id |     col     | context_check
----+-------------+---------------
  1 | First_check | ddd
  2 | First_check | ddd
  3 | First_check | ddd
  4 | First_check | ddd
  5 | First_check | ddd
  6 | First_check | ddd
  7 | First_check | ddd
  8 | First_check | ddd
  9 | First_check | ddd
 10 | First_check | ddd
(10 rows)

As you can see, the DBMS_RLS package in Postgres Plus Advanced Server can help in implementing Row Level Security based on an application context.

Compiling PLV8 with Postgres Plus Advanced Server

PLV8 is a programming language that lets users write stored procedures and triggers in JavaScript and store them in their Postgres database. This allows application programmers to write a lot of their server-side programming in the same language they use to build their web client applications. Fewer languages to learn usually means fewer mistakes and faster time to completion. The extensive language support is one of many reasons why Postgres’ use across the world is increasing lately. The recent addition of document data support with JSON and JSONB data types in PostgreSQL, and in Postgres Plus Advanced Server from EnterpriseDB, is the main reason for the increasing interest in the PL/V8 language extension.

Below are the steps you need to compile PLV8 with Postgres Plus Advanced Server 9.3/9.4.

To get started, here are the prerequisites:
1. A supported version of PostgreSQL or Postgres Plus Advanced Server, such as versions 9.1 and higher.
2. V8 version 3.14.5
3. g++ version 4.5.1

If you want to know more about V8, you can visit the following wiki page:
http://en.wikipedia.org/wiki/V8_(JavaScript_engine)

It’s important to note that when compiling PLV8 with Postgres Plus Advanced Server 9.3 or the upcoming 9.4, you will get the following two types of error messages:

The first error:

[root@localhost plv8js]# make
sed -e 's/^#undef PLV8_VERSION/#define PLV8_VERSION "1.5.0-dev1"/' plv8_config.h.in > plv8_config.h
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8.o plv8.cc
plv8.cc: In function ‘void _PG_init()’:
plv8.cc:226: error: invalid conversion from ‘void (*)(XactEvent, void*)’ to ‘void (*)(XactEvent, void*, bool)’
plv8.cc:226: error:   initializing argument 1 of ‘void RegisterXactCallback(void (*)(XactEvent, void*, bool), void*)’
make: *** [plv8.o] Error 1

The above error message is a result of a different signature of

typedef void (*XactCallback)

in the Advanced Server transaction system.

To fix the above issue, the user can replace the following in plv8.cc:

static void plv8_xact_cb(XactEvent event, void *arg);

With

static void plv8_xact_cb(XactEvent event, void *arg, bool spl_context);

The second error:
After making the above changes, you may get the following error after trying to compile the source code using the “make” command:

[root@localhost plv8js]# make
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8.o plv8.cc
plv8.cc:137: warning: ‘void plv8_xact_cb(XactEvent, void*, bool)’ used but never defined
plv8.cc:232: warning: ‘void plv8_xact_cb(XactEvent, void*)’ defined but not used
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8_type.o plv8_type.cc
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8_func.o plv8_func.cc
In file included from plv8_param.h:11,
                 from plv8_func.cc:9:
/usr/ppas-9.4/include/server/nodes/params.h:77: error: expected ‘,’ or ‘...’ before ‘typeid’
make: *** [plv8_func.o] Error 1

The above is mainly due to the use of typeid in params.h; typeid is a reserved keyword of the C++ compiler.

To fix this issue, in plv8.h, replace the following:

extern "C" {
#include "postgres.h"

#include "access/htup.h"
#include "fmgr.h"
#include "mb/pg_wchar.h"
#include "utils/tuplestore.h"
#include "windowapi.h"
}

with

#define typeid __typeid
extern "C" {
#include "postgres.h"

#include "access/htup.h"
#include "fmgr.h"
#include "mb/pg_wchar.h"
#include "utils/tuplestore.h"
#include "windowapi.h"
}
#undef typeid

In plv8_param.h, change the following:

extern "C" {
#include "postgres.h"

/*
 * Variable SPI parameter is since 9.0.  Avoid include files in prior versions,
 * as they contain C++ keywords.
 */
#include "nodes/params.h"
#if PG_VERSION_NUM >= 90000
#include "parser/parse_node.h"
#endif	// PG_VERSION_NUM >= 90000

} // extern "C"

With

#define typeid __typeid
extern "C" {
#include "postgres.h"

/*
 * Variable SPI parameter is since 9.0.  Avoid including files in prior versions,
 * as they contain C++ keywords.
 */
#include "nodes/params.h"
#if PG_VERSION_NUM >= 90000
#include "parser/parse_node.h"
#endif	// PG_VERSION_NUM >= 90000

} // extern "C"
#undef typeid

In plv8_param.cc, replace the following:

extern "C" {

#include "catalog/pg_type.h"
#include "utils/builtins.h"
#include "utils/lsyscache.h"

} // extern "C"

with

#define typeid __typeid
extern "C" {

#include "catalog/pg_type.h"
#include "utils/builtins.h"
#include "utils/lsyscache.h"

} // extern "C"
#undef typeid

After making the above changes, you will be able to compile PLV8 with Advanced Server as shown below:

[root@localhost plv8js]# make
sed -e 's/^#undef PLV8_VERSION/#define PLV8_VERSION "1.5.0-dev1"/' plv8_config.h.in > plv8_config.h
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8.o plv8.cc
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8_type.o plv8_type.cc
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8_func.o plv8_func.cc
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o plv8_param.o plv8_param.cc
echo "extern const unsigned char coffee_script_binary_data[] = {" >coffee-script.cc
(od -txC -v coffee-script.js | \
	sed -e "s/^[0-9]*//" -e s"/ \([0-9a-f][0-9a-f]\)/0x\1,/g" -e"\$d" ) >>coffee-script.cc
echo "0x00};" >>coffee-script.cc
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o coffee-script.o coffee-script.cc
echo "extern const unsigned char livescript_binary_data[] = {" >livescript.cc
(od -txC -v livescript.js | \
	sed -e "s/^[0-9]*//" -e s"/ \([0-9a-f][0-9a-f]\)/0x\1,/g" -e"\$d" ) >>livescript.cc
echo "0x00};" >>livescript.cc
g++ -Wall -O2 -DV8_USE_UNSAFE_HANDLES  -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -fPIC -c -o livescript.o livescript.cc
g++ -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -shared -o plv8.so plv8.o plv8_type.o plv8_func.o plv8_param.o coffee-script.o livescript.o -L/usr/ppas-9.4/lib -L/usr/lib64 -Wl,--as-needed -Wl,-rpath,'/usr/ppas-9.4/lib',--enable-new-dtags  -lv8 
sed -e 's/@PLV8_VERSION@/1.5.0-dev1/g' plv8.control.common | g++ -E -P -DLANG_plv8 - > plv8.control
sed -e 's/@LANG_NAME@/plv8/g' plv8.sql.common | g++ -E -P -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -DLANG_plv8 - > plv8--1.5.0-dev1.sql
sed -e 's/@PLV8_VERSION@/1.5.0-dev1/g' plv8.control.common | g++ -E -P -DLANG_plcoffee - > plcoffee.control
sed -e 's/@LANG_NAME@/plcoffee/g' plv8.sql.common | g++ -E -P -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -DLANG_plcoffee - > plcoffee--1.5.0-dev1.sql
sed -e 's/@PLV8_VERSION@/1.5.0-dev1/g' plv8.control.common | g++ -E -P -DLANG_plls - > plls.control
sed -e 's/@LANG_NAME@/plls/g' plv8.sql.common | g++ -E -P -I. -I./ -I/usr/ppas-9.4/include/server -I/usr/ppas-9.4/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -DLANG_plls - > plls--1.5.0-dev1.sql
/bin/mkdir -p '/usr/ppas-9.4/lib'
/bin/mkdir -p '/usr/ppas-9.4/share/extension'
/bin/mkdir -p '/usr/ppas-9.4/share/extension'
/usr/bin/install -c -m 755  plv8.so '/usr/ppas-9.4/lib/plv8.so'
/usr/bin/install -c -m 644 plv8.control '/usr/ppas-9.4/share/extension/'
/usr/bin/install -c -m 644 plv8.control plv8--1.5.0-dev1.sql plcoffee.control plcoffee--1.5.0-dev1.sql plls.control plls--1.5.0-dev1.sql '/usr/ppas-9.4/share/extension/'

After compiling PLV8, you can now install the PLV8 language in Advanced Server using the following command:

beta=# CREATE EXTENSION PLV8;
CREATE EXTENSION
beta=# 

To test your installed PLV8, here is some sample code:

beta=# DO $$ PLV8.elog(NOTICE, 'this', 'is', 'inline', 'code') $$ LANGUAGE PLV8;
NOTICE:  this is inline code
DO

beta=# CREATE TYPE rec AS (i integer, t text);
CREATE TYPE
beta=# CREATE FUNCTION set_of_records() RETURNS SETOF rec AS
beta-# $$
beta$#     // PLV8.return_next() stores records in an internal tuplestore,
beta$#     // and return all of them at the end of function.
beta$#     PLV8.return_next( { "i": 1, "t": "a" } );
beta$#     PLV8.return_next( { "i": 2, "t": "b" } );
beta$# 
beta$#     // You can also return records with an array of JSON.
beta$#     return [ { "i": 3, "t": "c" }, { "i": 4, "t": "d" } ];
beta$# $$
beta-# LANGUAGE PLV8;
CREATE FUNCTION
beta=# SELECT * FROM set_of_records();
 i | t 
---+---
 1 | a
 2 | b
 3 | c
 4 | d
(4 rows)

In case you need a patched version of PLV8, use the following git repository:

 https://github.com/vibhorkum/PLV8_FOR_PPAS.git plv8_ppas

To use this, execute the following commands:

 git clone https://github.com/vibhorkum/PLV8_FOR_PPAS.git plv8_ppas
 cd plv8_ppas
 make
 make install

To test the compiled PLV8, you can use the following command:

[root@localhost plv8_ppas]# make installcheck
/usr/ppas-9.4/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress --inputdir=./ --psqldir='/usr/ppas-9.4/bin'    --dbname=contrib_regression init-extension plv8 inline json startup_pre startup varparam json_conv window dialect
(using postmaster on Unix socket, port 5444)
============== dropping database "contrib_regression" ==============
DROP DATABASE
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== running regression test queries        ==============
test init-extension           ... ok
test plv8                     ... ok
test inline                   ... ok
test json                     ... ok
test startup_pre              ... ok
test startup                  ... ok
test varparam                 ... ok
test json_conv                ... ok
test window                   ... ok
test dialect                  ... ok

======================
 All 10 tests passed. 
======================

Postgres Plus Advanced Server 9.3 Features

Postgres Plus Advanced Server 9.3, released on Tuesday for general availability, contains all of the innovation that was part of PostgreSQL 9.3, which was released in September.
These features include the following:

1. Configuration directive ‘include_dir’
2. COPY FREEZE for more efficient bulk loading
3. Custom Background Workers
4. Data Checksums
5. JSON: Additional functionality
6. LATERAL JOIN
7. Parallel pg_dump for faster backups
8. ‘pg_isready’ server monitoring tool
9. Switch to Posix shared memory and mmap()
10. Event Triggers
11. VIEW Features (a short example follows this list):
                   Materialized Views
                   Recursive View Syntax
                   Updatable Views
12. Writeable Foreign Tables
                   postgres_fdw
13. Replication Improvements
                    Streaming-Only Remastering
                    Fast Failover
                    Architecture-Independent Streaming
                    pg_basebackup conf setup
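As a quick illustration of two of the items above (materialized views and LATERAL joins), the following is a minimal sketch; the emp table and its deptno/salary columns are only assumed for the example:

-- materialized view: store the result of a query and refresh it on demand
CREATE MATERIALIZED VIEW dept_salary AS
   SELECT deptno, avg(salary) AS avg_salary
     FROM emp
    GROUP BY deptno;

REFRESH MATERIALIZED VIEW dept_salary;

-- LATERAL join: the subquery can reference columns of the preceding FROM item
SELECT e.empno, e.salary, d.avg_salary
  FROM emp e,
       LATERAL (SELECT avg(salary) AS avg_salary
                  FROM emp e2
                 WHERE e2.deptno = e.deptno) d;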

For Postgres Plus Advanced Server, we integrated additional performance improvements, new packages and object-oriented features into core PostgreSQL so that our database can address a wider range of enterprise use cases.

Partitioning enhancements to boost performance for INSERTs/UPDATEs/SELECTs were a major development for Postgres Plus. Below are graphs illustrating the performance increases of Postgres Plus Advanced Server 9.3 compared to version 9.2.

Figure: TPS SELECT

Figure: TPS UPDATE

The release features some important functions that can make a developer’s life easier, such as:

1. DBMS_RANDOM package. This package helps users generate random numbers, random strings, and random dates. PostgreSQL supports random functions, which enable users to build their own functions on top of random() for random strings and random dates.

With this package, users can easily use the built-in functions for those two purposes, and they don't need to write their own wrapper functions for these activities. Below are some simple examples of using it.

user=# BEGIN
user$#   DBMS_OUTPUT.put_line('Run 1 : seed=0');
user$#   DBMS_RANDOM.seed (val => 0);
user$#   FOR i IN 1 ..5 LOOP
user$#     DBMS_OUTPUT.put_line('i=' || i || ' : value=' || DBMS_RANDOM.value(low => 1, high => 10));
user$#   END LOOP;
user$#
user$#   DBMS_OUTPUT.put_line('Run 2 : seed=0');
user$#   DBMS_RANDOM.seed (val => 0);
user$#   FOR i IN 1 ..5 LOOP
user$#     DBMS_OUTPUT.put_line('i=' || i || ' : value=' || DBMS_RANDOM.value(low => 1, high => 10));
user$#   END LOOP;
user$#
user$# END;
Run 1 : seed=0
i=1 : value=6.756044853478672
i=2 : value=7.806749390438203
i=3 : value=1.1173334638588129
i=4 : value=5.221515491604802
i=5 : value=8.445010517258194
Run 2 : seed=0
i=1 : value=6.756044853478672
i=2 : value=7.806749390438203
i=3 : value=1.1173334638588129
i=4 : value=5.221515491604802
i=5 : value=8.445010517258194

EDB-SPL Procedure successfully completed
user=# BEGIN
user$#   FOR i IN 1 .. 5 LOOP
user$#     DBMS_OUTPUT.put_line('string(''x'',10)= ' || DBMS_RANDOM.string('x',10));
user$#   END LOOP;
user$# END;
string('x',10)= 1TT23XR8X2
string('x',10)= DO5D2KUUVD
string('x',10)= AGNPAXDECT
string('x',10)= 7JC6RMU9KX
string('x',10)= 13BW6JM6KN

EDB-SPL Procedure successfully completed
user=# BEGIN
user$#   FOR i IN 1 .. 5 LOOP
user$#     DBMS_OUTPUT.put_line('date= ' || TRUNC(SYSDATE + DBMS_RANDOM.value(0,366)));
user$#   END LOOP;
user$# END;
date= 20-JUN-14 00:00:00
date= 26-MAY-14 00:00:00
date= 11-JAN-14 00:00:00
date= 27-JUN-14 00:00:00
date= 21-DEC-13 00:00:00

EDB-SPL Procedure successfully completed

2. DBMS_LOCK.sleep: Similar to the pg_sleep function, DBMS_LOCK.sleep is meant for Oracle users/developers who are more familiar with Oracle packages.
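A minimal example is given below; the 2.5-second duration is arbitrary:

BEGIN
   -- pause the current session for 2.5 seconds
   DBMS_LOCK.SLEEP(2.5);
END;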

3. DBMS_CRYPTO: DBMS_CRYPTO is new in Postgres Plus Advanced Server 9.3. It provides an interface to encrypt and decrypt stored data, and can be used in conjunction with PL/SQL programs running network communications. It provides support for several industry-standard encryption and hashing algorithms.

DECLARE
   input_string       VARCHAR2 (200) :=  'Secret Message';
   output_string      VARCHAR2 (200);
   encrypted_raw      RAW (2000);             -- stores encrypted binary text
   decrypted_raw      RAW (2000);             -- stores decrypted binary text
   num_key_bytes      NUMBER := 256/8;        -- key length 256 bits (32 bytes)
   key_bytes_raw      RAW (32);               -- stores 256-bit encryption key
   encryption_type   INTEGER :=          -- total encryption type
                            DBMS_CRYPTO.ENCRYPT_DES
                          + DBMS_CRYPTO.CHAIN_CBC
                          + DBMS_CRYPTO.PAD_PKCS5;
BEGIN
   DBMS_OUTPUT.PUT_LINE ( 'Original string: ' || input_string);
   key_bytes_raw := DBMS_CRYPTO.RANDOMBYTES (num_key_bytes::INTEGER);
   encrypted_raw := DBMS_CRYPTO.ENCRYPT
      (
         src => convert_to(input_string, 'LATIN1'),
         typ => encryption_type,
         key => key_bytes_raw
      );
    -- The encrypted value "encrypted_raw" can be used here
   decrypted_raw := DBMS_CRYPTO.DECRYPT
      (
         src => encrypted_raw,
         typ => encryption_type,
         key => key_bytes_raw
      );
   output_string := convert_from(decrypted_raw,'LATIN1');

   DBMS_OUTPUT.PUT_LINE ('Decrypted string: ' || output_string);
END;

4. DBMS_SCHEDULER: With Postgres Plus 9.2 we got the DBMS_JOB package, and now Postgres Plus 9.3 comes with DBMS_SCHEDULER, which has more control options for jobs and better visibility of scheduled jobs.
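A minimal sketch of creating a job is given below. The procedure name nightly_maintenance and the schedule are only illustrative, and in PPAS the scheduler relies on pgAgent being installed and running to actually execute jobs:

BEGIN
   DBMS_SCHEDULER.CREATE_JOB(
      job_name        => 'nightly_maintenance_job', -- hypothetical job name
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'nightly_maintenance',     -- hypothetical procedure to run
      repeat_interval => 'FREQ=DAILY;BYHOUR=2',     -- every day at 2 AM
      enabled         => TRUE
   );
END;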
5. UTL_ENCODE: This is another important package for keeping sensitive information in the database. Using this package, users can easily encode and decode their data and keep it in the database. It can be utilized in new applications, and gives developers and users flexibility around important encode/decode functions where data is very important. It is particularly useful for users working with applications that were written for Oracle but migrated onto Postgres Plus Advanced Server.

Let’s see an example:

user=# DECLARE
user-#     v_str VARCHAR2(100);
user$#   BEGIN
user$#      --generate encoded value
user$#     v_str := utl_encode.text_encode('EnterpriseDB','gb18030', UTL_ENCODE.BASE64);
user$#     dbms_output.put_line('Encoded string => '||v_str);
user$#
user$#      --take the encoded value and decode it
user$#    v_str := utl_encode.text_decode(v_str,'gb18030', UTL_ENCODE.BASE64);
user$#    dbms_output.put_line('Decoded string => '||v_str);
user$# END;
Encoded string => RW50ZXJwcmlzZURC
Decoded string => EnterpriseDB

EDB-SPL Procedure successfully completed

6. UTL_HTTP: Advanced Server 9.3 has a UTL_HTTP package. This package provides functions for HTTP callouts from SQL/SPL. Developers/users can use this package and its associated functions for accessing data on the Internet over HTTP.

7. UTL_URL: This is one of the additions we developed for Postgres Plus 9.3. This package has two functions: escape and unescape mechanisms for URL characters. The escape function helps to escape a URL before the URL is used to fetch data from a website. The unescape function can unescape any escaped character used in the URL before fetching the data from a website. These two functions, together with UTL_HTTP, allow users to fetch data from a website directly, without having to write complex code in the application for handling specific data from a website.
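A minimal sketch combining the two packages is given below, assuming the database server has outbound network access; the URL is only illustrative:

DECLARE
   v_url    VARCHAR2(200);
   v_result VARCHAR2(32767);
BEGIN
   -- escape characters (here, the space) that are not legal in a URL
   v_url := UTL_URL.ESCAPE('http://www.example.com/search?q=postgres plus');
   DBMS_OUTPUT.PUT_LINE('Escaped URL => ' || v_url);

   -- fetch the beginning of the page over HTTP
   v_result := UTL_HTTP.REQUEST(v_url);
   DBMS_OUTPUT.PUT_LINE(substr(v_result, 1, 80));
END;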

8. New in EDB*Loader. In Postgres Plus 9.3, EDB*Loader has more control options for bulk loading.
a. ROWS parameter: prior to version 9.3, EDB*Loader processed an entire data file as a single transaction. With this parameter, users can control the number of rows after which a COMMIT needs to be executed, giving more control over processing/loading data files.
b. Error codes: EDB*Loader now supports some additional exit/error codes, which will help users/developers include proper exit code checking while using this higher speed bulk data loader with parallel processing. The exit codes are shown below:

      0: Success
      1: Failure
      2: Warning
      3: Fatal

c. Streaming output files to the client: Prior to version 9.3, users had to check the log file of EDB*Loader on the server side. The log file, bad file and discard file used to be created on the server side, and for troubleshooting users had to log in on the server to verify these files. Now, Postgres Plus 9.3 enables these files to be created at the client site, and users are no longer required to log into the server to verify/check these files.
d. New GUC for empty strings. EDB*Loader now has a GUC which users can utilize to control the default behavior of an empty string in their data file. edbldr.empty_string has the following valid values (an example of setting it is given after this list):
1. null: an empty field is treated as a null if the raw field contains no characters or a pair of delimiters with nothing in between.
2. empty_string: an empty field is treated as an empty string if the raw field contains no characters or a pair of delimiters with nothing in between.
3. pgsql: an empty field is treated as a null if the raw field contains no characters, but as an empty string if it contains a pair of delimiters with nothing in between.
The default is pgsql.
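A minimal example of setting it for a session is given below, assuming the GUC name is edbldr.empty_string as described above (it can also be set in postgresql.conf):

SET edbldr.empty_string TO 'empty_string';
SHOW edbldr.empty_string;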

9. New REGEXP functions. Postgres Plus 9.3 now has three new REGEXP functions for developers.
a. REGEXP_COUNT: This searches a string for a regular expression and returns a count of the number of times the regular expression occurs. Here's a simple example:

          user=# SELECT REGEXP_COUNT('reinitializing', 'i', 1);
          regexp_count
           --------------
            5
          (1 row)
 

b. REGEXP_INSTR: This function searches a string for a POSIX-style regular expression and returns the position within the string where the match was located.

user=# SELECT REGEXP_INSTR('800-555-1212', '[0-9][0-9][0-9]', 1, 1) ;
 regexp_instr
--------------
            1
(1 row)

c. REGEXP_SUBSTR: This function searches a string for a pattern specified by a POSIX compliant regular expression and returns the string that matches the pattern specified in the call to the function.


user=# SELECT REGEXP_SUBSTR('800-555-1212', '[0-9][0-9][0-9]', 1, 2);
 regexp_substr
---------------
 555
(1 row)

10. New exception codes for UTL_FILE package: The UTL_FILE package provides the capability to read from, and write to files on the operating system’s file system. To provide error control capabilities to developers/users, Postgres Plus 9.3 has added some new exception codes, which users can utilize in their PL/SPL code for better exception handling. One simple example is given below:

user=# CREATE DIRECTORY tempdir_fb22954 AS '/tmp/';
CREATE DIRECTORY
user=# SELECT dirname
user-# FROM edb_dir
user-# WHERE dirname='tempdir_fb22954';
     dirname
-----------------
 tempdir_fb22954
(1 row)

user=#
user=#  -- check "utl_file.invalid_operation" exception
user=# DECLARE v_testfile UTL_FILE.FILE_TYPE;
user$#
user$#  v_directory VARCHAR2(50) := 'tempdir_fb22954';
user$#
user$#  v_filename VARCHAR2(50) := 'test_file_exist.txt';
user$#
user$#  BEGIN -- Create file
user$#  v_testfile := UTL_FILE.FOPEN(v_directory,v_filename,'w');
user$#
user$#  UTL_FILE.PUT(v_testfile,'A');
user$#
user$#  UTL_FILE.NEW_LINE(v_testfile);
user$#
user$#  UTL_FILE.FCLOSE(v_testfile);
user$#
user$#  DBMS_OUTPUT.PUT_LINE('Created file: ' || v_filename);
user$#
user$#  -- It should throw exception because file is open for read.
user$#  v_testfile := UTL_FILE.FOPEN(v_directory,v_filename,'r');
user$#
user$#  UTL_FILE.PUT(v_testfile,'B');
user$#
user$#  UTL_FILE.NEW_LINE(v_testfile);
user$#
user$#  UTL_FILE.PUT(v_testfile,'C');
user$#
user$#  UTL_FILE.NEW_LINE(v_testfile);
user$#
user$#  exception
user$#  WHEN utl_file.invalid_operation THEN
user$# RAISE notice 'exception caught utl_file.invalid_operation : SQLERRM: %',
user$#                                                       sqlerrm;
user$# UTL_FILE.FCLOSE(v_testfile);
user$#  WHEN others THEN
user$# RAISE notice 'exception, others : SQLERRM: %',
user$#                         sqlerrm;
user$# UTL_FILE.FCLOSE(v_testfile);
user$#  END;
Created file: test_file_exist.txt
NOTICE:  exception caught utl_file.invalid_operation : SQLERRM: invalid file operation

EDB-SPL Procedure successfully completed

New in PostgreSQL 9.3: New in Functions

In the series of blog posts about new features in 9.3, today I thought I would blog about the new functions and improvements coming in PostgreSQL.

Let's look at what's new in 9.3 in terms of built-in functions:

1. New array functions for one-dimensional arrays

PostgreSQL 9.3 comes with new functions which help users manipulate one-dimensional arrays by calling simple functions, instead of crafting their own functions and procedures to do the modification.

i. array_remove function

This is a new function added in 9.3, which provides the ability to remove elements from an array. The function takes two arguments:
a. the one-dimensional array from which the user wants to remove elements
b. the element value which the user wants to remove.

Syntax of this function is given below:

ARRAY_REMOVE(<one dimensional array>, element)

Example of array_remove is given below:

postgres=# select array_remove(ARRAY['First','Second','Delete','Four'],'Delete');
    array_remove     
---------------------
 {First,Second,Four}
(1 row)

ii. array_replace function

This is also new in 9.3; it helps the user replace any element in an array. This function can be used for multi-dimensional arrays or for single-dimensional arrays.
An example of array_replace is given below:

pgsqltest=# select array_replace(ARRAY[ARRAY[1,2],ARRAY[3,4],ARRAY[56,6],ARRAY[7,8]],56,5);
       array_replace       
---------------------------
 {{1,2},{3,4},{5,6},{7,8}}
(1 row)

2. VARIADIC-labeled argument expansion for the concat and format functions

This is a new addition to the concat and format functions. Pre-9.3, the concat and format functions used to ignore the VARIADIC label and didn't print the right output. However, this has been fixed in 9.3. Let's look at examples from pre-9.3 and 9.3.

In pre-9.3

worktest=# select concat(variadic array[1,2,3]);
 concat  
---------
 {1,2,3}
(1 row)

In 9.3

pgsqltest=# select concat(variadic array[1,2,3]);
 concat 
--------
 123
(1 row)

You can see that in 9.3, using the VARIADIC label works properly.

Let's look at the format() function:

In pre-9.3

worktest=# select format(string_agg('%s',','), variadic array_agg(i)) from generate_series(1,3) g(i);
ERROR:  too few arguments for format

Wow, it errors out. However, this is fixed in 9.3 :-).

In 9.3

pgsqltest=# select format(string_agg('%s',','), variadic array_agg(i)) from generate_series(1,3) g(i);
 format 
--------
 1,2,3
(1 row)

3. Improvement in the format() function to handle field width and left/right alignment

This is newly added in 9.3 and is going to increase the usability of the format() function for developers. In 9.3, the format function comes with the following enhancements:
i). Proper handling of field width.
ii). Proper handling of left/right alignment.

The above two improvements were missing in pre-9.3. Their addition makes the format() function behave like the sprintf() C function.

Let's look at an example.
In pre-9.3, using a field width resulted in an error message, as given below:

worktest=#  select format('>>%10s<<', 'Hello');
ERROR:  unterminated conversion specifier
worktest=# 

However, 9.3 comes with proper field width support.

pgsqltest=#  select format('>>%10s<<', 'Hello');
     format     
----------------
 >>     Hello<<
(1 row)

9.3 also comes with proper handling of left/right alignment. Examples are given below:
Left alignment:

pgsqltest=# select format('>>%-10s<<', 'Hello');
     format     
----------------
 >>Hello     <<
(1 row)

Right alignment

pgsqltest=# select format('>>%1$10s<<', 'Hello');
     format     
----------------
 >>     Hello<<
(1 row)

4. Proper handling of negative centuries in the to_char, to_date and to_timestamp functions

In pre-9.3, the behavior of the following functions for negative centuries was wrong or inconsistent:
a. to_char
b. to_date
c. to_timestamp

However, 9.3 comes with a proper fix/handling for negative centuries. Let's look at the output in pre-9.3 and in 9.3 for each function.

pre-9.3

worktest=# select to_timestamp('4713-01-08 BC','YYYY-DD-MM bc');
         to_timestamp         
------------------------------
 01-AUG-13 00:00:00 -04:56:02
(1 row)

Above you can see that it displays the wrong result for BC. In 9.3:

pgsqltest=# select to_timestamp('4713-01-08 BC','YYYY-DD-MM bc');
          to_timestamp           
---------------------------------
 4713-08-01 00:00:00-04:56:02 BC
(1 row)

Let's look at the to_date function. In 9.2/pre-9.3:

worktest=# SELECT to_date('4713-01-08 BC','YYYY-DD-MM bc');
ERROR:  full year must be between -4712 and +9999, and not be 0
worktest=# 

In 9.3

pgsqltest=# SELECT to_date('4713-01-08 BC','YYYY-DD-MM bc');
    to_date    
---------------
 4713-08-01 BC
(1 row)

Above, you can see that in 9.3 the to_date function works right, while to_date in 9.2 wasn't able to handle it. You can see similar behavior for the to_char function.

5. Improvement in pg_get_viewdef() to print a new line after each SELECT and FROM entry

This is an enhancement made to the pg_get_viewdef function in 9.3, and users will be able to see this enhancement in pg_dump plain dumps too.

This enhancement improves the readability of view definitions in 9.3 and reduces the line length of the view definition. Let's look at how this improvement makes the user experience of viewing a view definition much better.

Before 9.3 or in pre-9.3

worktest=# select pg_get_viewdef('pg_tables'::regclass);
                                                                                                    
                                                                                  pg_get_viewdef    
                                                                                                    
                                                                              
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------
 SELECT n.nspname AS schemaname, c.relname AS tablename, pg_get_userbyid(c.relowner) AS tableowner, 
t.spcname AS tablespace, c.relhasindex AS hasindexes, c.relhasrules AS hasrules, c.relhastriggers AS
 hastriggers FROM ((pg_class c LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) LEFT JOIN pg_
tablespace t ON ((t.oid = c.reltablespace))) WHERE (c.relkind = 'r'::"char");
(1 row)

Above you can see that pg_get_viewdef has printed the definition of the view in one line, which not only makes the view definition difficult to read but also increases the line length.

Let's look at pg_get_viewdef in 9.3:

pgsqltest=# select pg_get_viewdef('pg_tables'::regclass);
                        pg_get_viewdef                        
--------------------------------------------------------------
  SELECT n.nspname AS schemaname,                            +
     c.relname AS tablename,                                 +
     pg_get_userbyid(c.relowner) AS tableowner,              +
     t.spcname AS tablespace,                                +
     c.relhasindex AS hasindexes,                            +
     c.relhasrules AS hasrules,                              +
     c.relhastriggers AS hastriggers                         +
    FROM ((pg_class c                                        +
    LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))  +
    LEFT JOIN pg_tablespace t ON ((t.oid = c.reltablespace)))+
   WHERE (c.relkind = 'r'::"char");
(1 row)

which seems better to me in terms of readability and doesn't have a long line.

Enjoy!!

New in Postgres Plus Advanced Server 9.2

Good news: the Postgres Plus Advanced Server beta version is now available; it has all the new features of PostgreSQL 9.2, and it also has new features which are specific to Advanced Server.

I will cover/explain the new features of PostgreSQL 9.2 later, as per the release notes. However, the following link covers the major features of PostgreSQL 9.2 and can be taken as a reference.

http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.2

In this blog, I am going to cover only the features specific to Advanced Server 9.2 core, which are:

1. INSERT APPEND hint in PPAS 9.2

PPAS 9.2 now supports the INSERT append hint. This is a very interesting feature and it is very useful for users who frequently delete records in bulk and do bulk INSERTs.
This hint can provide some benefit for INSERTs: it makes PPAS skip the Free Space Map and append the rows at the end of the relation (table).
Its usage is given below:

INSERT /*+append*/ INTO tab21115 VALUES(1,'abc',sysdate);

2. Procedures called like a function call.

PPAS 9.2 now allows calling a procedure with the following syntax:

SELECT * FROM procedure(arg1, arg2, …);

This type of procedure call is allowed for procedures which have OUT/INOUT parameters; with this, users can call a procedure the way they would call a function.

Let's see how it works:

a. Create a procedure as given below:

CREATE OR REPLACE PROCEDURE foo_proc(A IN INT, B INOUT INT, C OUT INT)
AS
BEGIN
   b:=a+b;
   c:=b+1;
END;

In 9.1, and still in 9.2, a user can do something like the following:

DECLARE
  d int:=1;
  e int:=2;
  f int;
BEGIN
   foo_proc(d,e,f);
   DBMS_OUTPUT.PUT_LINE('e = '||e);
   DBMS_OUTPUT.PUT_LINE('d = '||d);
   DBMS_OUTPUT.PUT_LINE('f= '||f);
END;
e = 3
d = 1
f= 4

Now in 9.2, user can also do this:

edb=# select * from foo_proc(1,2);
 b | c 
---+---
 3 | 4
(1 row)

In case the user has a procedure and a function with the same name and wants to execute the procedure using the SELECT command, the SELECT command is going to give preference to the function over the procedure, as given below:

Function Definition:

CREATE OR REPLACE FUNCTION foo(a INT) RETURN INT
AS
BEGIN
  RETURN 2;
END;

CREATE PROCEDURE foo(A int, B OUT int)
AS
BEGIN
   B:=a+1;
END;

edb=# select * from foo(4);
 foo 
-----
   2

However, the user can still use the EXEC command to execute the procedure, as given below:

edb=# exec foo(4);
 ?column? 
----------
        5
(1 row)

Or, if you are using an anonymous block, then PERFORM will also work with the procedure, as given below:

edb=# DECLARE
edb-#     a int:=4;
edb$#     b int;
edb$# BEGIN
edb$#    PERFORM foo(a,b);
edb$#    DBMS_OUTPUT.PUT_LINE('b = '||b);
edb$# END;
b = 5

EDB-SPL Procedure successfully completed

3. Object Type Support:

PPAS already had object type support. However, the new version comes with a new enhancement to that support. In the new version, users are able to create object types with attributes, functions, and procedures. This gives the advantage of reduced coding in terms of defining object types and maintaining them.

Example is given below:

a. Define a Type:

    CREATE OR REPLACE TYPE PersonObj AS OBJECT (
  first_name  VARCHAR2(50),
  last_name   VARCHAR2(50),
  date_of_birth  DATE,
  MEMBER FUNCTION getAge RETURN NUMBER
);

b. Define a Type Body:

CREATE OR REPLACE TYPE BODY PersonObj AS
  MEMBER FUNCTION getAge RETURN NUMBER AS
  BEGIN
    RETURN Trunc(Months_Between(Sysdate, date_of_birth)/12);
  END getAge;
END;

c. Define a table based on the object type:

CREATE TABLE people (
  id      NUMBER(10) NOT NULL,
  person  PersonObj
);

d. To insert data, the default constructor can be used, as given below:

INSERT INTO people
VALUES (1, PersonObj('John','Doe',
        TO_DATE('01/01/1999','DD/MM/YYYY')));

e. In the following way, a user can access the data in the table:

SELECT p.id,
       (p.person).first_name,
       p.person.getAge() age
FROM   people p;
 id | first_name | age 
----+------------+-----
  1 | John       |  13
  2 | Jane       |  13
(2 rows)

4. PL/SQL Subtypes:

A subtype is a TYPE defined on base/user-defined types with some constraints. The syntax for declaring a SUBTYPE in PL/SQL is given below:

SUBTYPE <subtype_name> IS <type_name> [(constraint)] [NOT NULL];

Examples are given below:

Example 1:

DECLARE
  SUBTYPE INT2 IS NUMBER;
  v2 INT2 (3,0);
BEGIN
  v2:= 123;
  DBMS_OUTPUT.PUT_LINE ('V2 = '|| v2);
END;

Example 2:

DECLARE
  SUBTYPE v_word IS VARCHAR2(10) NOT NULL;
  verb  v_word  := 'verb';
  noun  v_word  := 'noun';
BEGIN
  noun :='n1';
  DBMS_OUTPUT.PUT_LINE (UPPER(verb));
  DBMS_OUTPUT.PUT_LINE (UPPER(noun));
END;

5. DROP TABLE CASCADE CONSTRAINTS in 9.2:

PPAS 9.2 now supports DROP TABLE … CASCADE CONSTRAINTS. This is a very interesting feature and is not available in PostgreSQL 9.2.

Using this command, the user is able to drop all referential integrity constraints that refer to primary and unique keys in the dropped table. If you omit this clause and such referential integrity constraints exist, then the database returns an error and does not drop the table.

This is useful when somebody wants to drop a master table but doesn't want to drop the child table which has a foreign key constraint referencing the master. An example is given below:

CREATE TABLE master(id numeric primary key);
CREATE table child(id numeric references master(id));
insert into master values(1);
insert into child values(1);
edb=# drop table master cascade constraints;
NOTICE:  drop cascades to constraint child_id_fkey on table child
DROP TABLE
edb=# \d child
  Table "enterprisedb.child"
 Column |  Type   | Modifiers 
--------+---------+-----------
 id     | numeric | 

As you can see above, I have dropped the master table, on which the child table has a referential integrity constraint; however, my child table still exists. PostgreSQL 9.2 doesn't have CASCADE CONSTRAINTS, therefore the user has to first drop the constraints on the tables which have referential integrity constraints and then drop the referenced table. This process requires the user to find all tables which have referential integrity constraints that refer to the primary key of the master table. This command helps DBAs/users not to worry about finding the child tables.

6. Multiple errors are now thrown as warnings while compiling procedures/functions using SPL.

This is also a very interesting enhancement for SPL. Whenever the user compiles a function/procedure in SPL, it will show all the error messages in that function/procedure body, so that the user doesn't have to revisit their code every time to fix the next issue. An example is given below:

CREATE OR REPLACE FUNCTION foo() RETURN integer
AS
BEGIN
   a:=a=1
;
B:=b + 1;
I am making fool;
END;

Output:
ERROR:  "a" is not a known variable
LINE 4:    a:=a=1
           ^
ERROR:  "b" is not a known variable
LINE 6: B:=b + 1;
        ^
ERROR:  syntax error at or near "I"
LINE 7: I am making fool;
        ^
ERROR:  compilation of SPL function/procedure "foo" failed due to 3 errors

As you can see above, it has given all the mistakes that I made in the function body, and now I can fix all of them in one go, reducing the overhead of compiling again and again.

This is very useful for Developers/DBAs.

7. DEFAULT for types declared in packages:

Now, in PPAS 9.2, the user is able to define a DEFAULT value for a particular type, as given below:

CREATE OR REPLACE PACKAGE INITPKG_pro_b4_default AS
PROCEDURE show_default_values;
 n1 NUMBER DEFAULT 20;
 c1 VARCHAR2 DEFAULT 'Default';
END;

CREATE OR REPLACE PACKAGE BODY INITPKG_pro_b4_default AS
 PROCEDURE show_default_values IS
  n number;
 BEGIN
  dbms_output.put_line(c1);
  dbms_output.put_line(n1);
  n1 := n1*n1;
  n1 := SQRT(n1);
  dbms_output.put_line(n1);
 END;
END;

edb=# exec INITPKG_pro_b4_default.show_default_values;
Default
20
20.000000000000000

EDB-SPL Procedure successfully completed

8. TABLE expression support for nested tables.

Now, in 9.2, users are able to use TABLE expressions for nested tables. This feature was missing and was asked for by many Oracle DBAs. Table expressions allow the user to query a collection in the FROM clause like a table.

edb=# CREATE OR REPLACE TYPE comp_typ_str_21189 IS TABLE OF VARCHAR2(100);
CREATE TYPE
edb=# 
edb=# select * from table(comp_typ_str_21189('PPP','QQQ')) ;
 column_value 
--------------
 PPP
 QQQ
(2 rows)

9. INPUT/OUTPUT functions for NESTED TABLEs:

PPAS 9.2 supports user-defined input/output functions for NESTED TABLEs. This feature was missing in 9.1.

Some examples are given below:

edb=# create or replace type mytab as table of varchar2(90);
CREATE TYPE
edb=# Create or replace function fun return mytab
edb-# as
edb$# begin
edb$#  return mytab('a','b',3);
edb$# end;
CREATE FUNCTION
edb=# select fun;
 fun21168 
----------
 {a,b,3}
(1 row)

edb=# 
edb=# create or replace function fun return mytab
edb-# as
edb$#  nvar mytab;
edb$# begin
edb$#  nvar := mytab();
edb$#  nvar.extend(4);
edb$#  nvar(1) := 'foo';
edb$#  nvar(2) := NULL;
edb$#  nvar(3) := 'deleteme';
edb$#  nvar(4) := 'bar';
edb$#  return nvar;
edb$# end;
CREATE FUNCTION
edb=# -- Verify the user is able to see the data
edb=# select fun;
        fun21168         
-------------------------
 {foo,NULL,deleteme,bar}
(1 row)

10. LOG is no longer a reserved keyword for functions.
Up to PPAS 9.1, LOG was a reserved keyword and users were not allowed to create functions using this keyword. In 9.2, users are able to use this keyword.

edb=# CREATE OR REPLACE FUNCTION log( a int ) return int as
edb$# BEGIN
edb$# dbms_output.put_line('Function LOG is called');
edb$# return a*2;
edb$# END;
CREATE FUNCTION
edb=# SELECT LOG(10);
Function LOG is called
 log 
-----
  20
(1 row)

11. Variables can be named current_date.

We know that current_date is a special function which returns the current date of the PostgreSQL/PPAS DB. Up to PPAS 9.1, users were not allowed to use this reserved name for variables. Now, in 9.2, users are able to use it, as given below:

edb=# create table t_currentdate(current_date int);
CREATE TABLE
edb=# desc t_currentdate
    Table "enterprisedb.t_currentdate"
    Column    |  Type   | Modifiers 
--------------+---------+-----------
 current_date | integer | 

create or replace procedure proc_currentdate(current_date date) is
begin
 dbms_output.put_line(current_date);
end;

edb=# exec proc_currentdate(current_date);
05-OCT-12 00:00:00

EDB-SPL Procedure successfully completed

12. New data type STRING.

Oracle supports string as a data type, and this data type wasn't available in PPAS 9.1, so we have included it in 9.2. STRING is an alias for the VARCHAR data type. An example is given below:

edb=# CREATE TABLE test_string(col string)
edb-# ;
CREATE TABLE
edb=# insert into test_string values('As an Example');
INSERT 0 1
edb=# select * from test_string;
      col      
---------------
 As an Example
(1 row)

edb=# \d test_string
    Table "enterprisedb.test_string"
 Column |       Type        | Modifiers 
--------+-------------------+-----------
 col    | character varying | 

13. NVARCHAR2 data type support in 9.2

PPAS 9.2 allows users to use NVARCHAR2 as a data type. In PPAS 9.2, NVARCHAR2 is mapped to the VARCHAR data type. This is for Oracle users, who have always asked about having this in PPAS.

An example is given below:

edb=# CREATE TABLE test_nvarchar(col NVARCHAR2(10))
edb-# ;
CREATE TABLE
edb=# \d test_nvarchar
     Table "enterprisedb.test_nvarchar"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 col    | character varying(10) | 

14. Multibyte and string delimiter support in EDB*Loader.

EDB*Loader in PPAS 9.2 is more efficient at loading data compared to 9.1. There were some cases in which it was difficult to find a single-character delimiter for the data in a file, because the data contained all possible characters. To load those kinds of data files, users can now define their own string delimiter. An example is given below:

Data File: data.log:
1$$abc
2$$ccc
3$$dddddddd

Control file:
LOAD DATA
INFILE '/tmp/data.log'
BADFILE '/tmp/data.bad'
truncate INTO table edb_loader
FIELDS TERMINATED BY '$$' optionally enclosed by '"'
(id , col )

-bash-4.1$ edbldr userid=enterprisedb/ control=/tmp/edb.ctl log=/tmp/edb.log
EDB*Loader: Copyright (c) 2007-2012, EnterpriseDB Corporation.

Successfully processed (3) records
-bash-4.1$ psql
psql (9.2.0.1)
Type "help" for help.

edb=# select * from edb_loader ;
 id |   col    
----+----------
  1 | abc
  2 | ccc
  3 | dddddddd
(3 rows)