Tuesday, July 30, 2013

How to improve VMware performance on your PC

I found a very interesting post here - VMware Performance Enhancing Tweaks (Over-the-Counter Solutions). I applied it to my VMware Player and Workstation, and after I disabled the vmem file my laptop stopped blinking the HDD LED for 5 - 10 minutes every time it woke up from sleep mode.

regards,
Marcin

Sunday, July 28, 2013

DataGuard – Far Sync – part 2 - Data Guard Broker

Oracle introduced the Far Sync Data Guard configuration, which I described briefly in this post. Now it is time for part two: using the Data Guard Broker to add a Far Sync instance.
Assuming that you have a basic Data Guard Broker configuration ready (as described in How to quickly build standby database and setup DataGuard configuration using Oracle 12c), adding a new Far Sync instance is quite an easy task.

First we have to set up a new instance which will be used as Far Sync for our primary database.

Instance parameter file - created as a copy of the primary database configuration (copying is not strictly necessary, but convenient)
*.audit_file_dest='/home/oracle/admin/test12c/adump'
*.audit_trail='db'
*.compatible='12.1.0.0.0'
*.control_files='/u01/app/oracle/oradata/test12c/control01.ctl','/u01/app/oracle/oradata/test12c/control02.ctl','/u01/app/oracle/oradata/test12c/control03.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='test12c'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=test12cXDB)'
*.log_archive_dest_1='location=/u01/app/oracle/oradata/test12c/arch'
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=450m
*.undo_tablespace='UNDOTBS1'
# for far sync
*.db_unique_name='test12c_far_sync'
*.LOG_FILE_NAME_CONVERT='test','test'
*.local_listener='(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.60)(PORT = 1522)))'
There are three parameters which have to be changed:
  • db_unique_name 
  • local_listener - the new instance has to be registered with the proper listener
  • log_file_name_convert - it is useful to have this set, even with dummy values, so Oracle can recreate the standby logs automatically
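Once the Far Sync instance is started (done later in this post), a quick SQL*Plus sanity check of those three parameters might look like this (a sketch, not from the original post):

```
-- run in SQL*Plus on the Far Sync instance
show parameter db_unique_name
show parameter local_listener
show parameter log_file_name_convert
```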
Listener configuration
SID_LIST_LISTENER_DG =
 (SID_LIST =
   (SID_DESC=
       (ORACLE_HOME= /u01/app/oracle/product/12.1.0/db1)
       (SID_NAME=test12c)
   )
   (SID_DESC =
      (SID_NAME = test12c)
      (GLOBAL_DBNAME = test12c_far_sync_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db1)
   )
 )


LISTENER_DG =
 (DESCRIPTION_LIST =
   (DESCRIPTION =
     (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCDG1))
     (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.60)(PORT = 1522))
   )
 )
tnsnames file - the test12c_far_sync entry has to be added on the primary and standby servers as well
test12c_prim =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.45)(PORT = 1522))
   )
 (CONNECT_DATA =
   (SID = test12c)
   (SERVER=dedicated)
 )
)

test12c_std =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.79)(PORT = 1522))
   )
 (CONNECT_DATA =
   (SID = test12c)
   (SERVER=dedicated)
 )
)


test12c_far_sync =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.60)(PORT = 1522))
   )
 (CONNECT_DATA =
   (SID = test12c)
   (SERVER=dedicated)
 )
)
When the instance parameter file and network configuration are ready, we need to create the Far Sync control file on the primary server.
[oracle@ora12c dbs]$ rlwrap sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Jul 26 23:21:59 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/farsync.ctl';

Database altered.

Copy it to the Far Sync server
[oracle@ora12c dbs]$ scp /tmp/farsync.ctl 192.168.1.60:/u01/app/oracle/oradata/test12c/control01.ctl
oracle@192.168.1.60's password:
farsync.ctl                                                                                                                  100%   10MB  10.3MB/s   00:00
[oracle@ora12c dbs]$ scp /tmp/farsync.ctl 192.168.1.60:/u01/app/oracle/oradata/test12c/control02.ctl
oracle@192.168.1.60's password:
farsync.ctl                                                                                                                  100%   10MB  10.3MB/s   00:01
[oracle@ora12c dbs]$ scp /tmp/farsync.ctl 192.168.1.60:/u01/app/oracle/oradata/test12c/control03.ctl
oracle@192.168.1.60's password:
farsync.ctl                                                                                                                  100%   10MB  10.3MB/s   00:00
[oracle@ora12c dbs]$
Now the Far Sync instance can be started and new standby logs can be created. Make sure the Data Guard Broker has been started as well.
[oracle@oraclese dbs]$ rlwrap sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jul 26 23:24:27 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> create spfile from pfile;
File created.

SQL> startup mount
ORACLE instance started.

Total System Global Area  471830528 bytes
Fixed Size                  2289688 bytes
Variable Size             293605352 bytes
Database Buffers          167772160 bytes
Redo Buffers                8163328 bytes
Database mounted.

SQL> alter system set dg_broker_start = true;
System altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby01.log') size 50M reuse;
Database altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby02.log') size 50M reuse;
Database altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby03.log') size 50M reuse;
Database altered.
As in the previous post about Data Guard configuration, the Broker will be used to do all the configuration work. You should connect to the Broker from the primary database. This is what the current configuration looks like:
[oracle@ora12c ~]$ rlwrap dgmgrl /
DGMGRL for Linux: Version 12.1.0.1.0 - 64bit Production

Copyright (c) 2000, 2012, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> connect /
Connected as SYSDG.
DGMGRL>  show configuration verbose;

Configuration - fsc

  Protection Mode: MaxPerformance
  Databases:
  test12c_prim - Primary database
    test12c_std - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
Adding the Far Sync instance
DGMGRL> add far_sync test12c_far_sync as connect identifier is 'test12c_far_sync';
far sync instance "test12c_far_sync" added
DGMGRL>  show configuration verbose;

Configuration - fsc

  Protection Mode: MaxPerformance
  Databases:
  test12c_prim - Primary database
    test12c_std - Physical standby database
    test12c_far_sync  - Far Sync (disabled)

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
Now the new rules for redo transport have to be configured:
DGMGRL> edit database 'test12c_prim' set property redoroutes='(local : test12c_far_sync sync)';
Property "redoroutes" updated
DGMGRL> edit far_sync 'test12c_far_sync' set property redoroutes='(test12c_prim : test12c_std)';
Property "redoroutes" updated
The above configuration sends redo data from the primary database to the Far Sync instance in SYNC mode, and from the Far Sync instance to the standby database in ASYNC mode. As the primary database and the Far Sync instance are close to each other, you can have no data loss protection without the additional network synchronization overhead.
Redo log routing rules are described in the Oracle documentation.
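A quick way to double-check that the routes took effect is to query the properties back from the Broker - a sketch, as the exact output format varies by version:

```
DGMGRL> show database 'test12c_prim' 'RedoRoutes';
  RedoRoutes = '(local : test12c_far_sync sync)'
DGMGRL> show far_sync 'test12c_far_sync' 'RedoRoutes';
  RedoRoutes = '(test12c_prim : test12c_std)'
```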

Enabling and checking configuration
DGMGRL> enable far_sync test12c_far_sync;
Enabled.
DGMGRL> show configuration verbose;

Configuration - fsc

  Protection Mode: MaxPerformance
  Databases:
  test12c_prim - Primary database
    test12c_far_sync  - Far Sync
      test12c_std - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
If everything is OK, we can switch the protection mode from Maximum Performance to Maximum Availability to take full advantage of our newly configured Far Sync instance.
DGMGRL> edit configuration set protection mode as MaxAvailability;
Succeeded.
DGMGRL> show configuration verbose;

Configuration - fsc

  Protection Mode: MaxAvailability
  Databases:
  test12c_prim - Primary database
    test12c_far_sync  - Far Sync
      test12c_std - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Now all is done. Happy testing. 
Marcin

Saturday, July 27, 2013

How to quickly build standby database and setup DataGuard configuration using Oracle 12c

With every release of the Oracle database there are new possibilities to build a required solution (e.g. a standby database) faster and with fewer steps. This is not an official guide but rather a mix of Oracle 12c functionality used to build a Data Guard configuration using RMAN and the Data Guard Broker.

Assumptions:

  • There is a database which will be used as the primary.
  • A new listener will be added to support standby traffic.
Steps to run on primary database

Listener configuration
LISTENER_DG =
 (DESCRIPTION_LIST =
   (DESCRIPTION =
     (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCDG1))
     (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.45)(PORT = 1522))
   )
 )

SID_LIST_LISTENER_DG =
 (SID_LIST =
   (SID_DESC=
       (ORACLE_HOME= /u01/app/oracle/product/12.1.0/db1)
       (SID_NAME=test12c)
   )
   (SID_DESC =
      (SID_NAME = test12c)
      (GLOBAL_DBNAME = test12c_prim_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db1)
   )
 )
tnsnames entries
test12c_std =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.79)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SID = test12c)
      (SERVER = dedicated)
    )
  )


test12c_prim =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.45)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SID = test12c)
      (SERVER = dedicated)
    )
  )


New parameters for database
[oracle@ora12c ~]$ rlwrap sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jul 26 21:02:14 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> alter system set local_listener='(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.45)(PORT = 1522)))';
System altered.

SQL> alter system set db_unique_name='test12c_prim' scope=spfile;
System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  471830528 bytes
Fixed Size                  2289688 bytes
Variable Size             255856616 bytes
Database Buffers          209715200 bytes
Redo Buffers                3969024 bytes
Database mounted.
Database opened.
New control file for standby
SQL> alter database create standby controlfile as '/tmp/standby.ctl' reuse;
Database altered.
Copy standby control file to standby server
[oracle@ora12c ~]$ scp -i /home/oracle/oracle_standby.key  /tmp/standby.ctl     oracle@10.0.1.79:/u01/app/oracle/oradata/test12c/control01.ctl
standby.ctl                                                                                                                       100%   10MB 438.0KB/s   00:24
[oracle@ora12c ~]$ scp -i /home/oracle/oracle_standby.key  /tmp/standby.ctl     oracle@10.0.1.79:/u01/app/oracle/oradata/test12c/control02.ctl
standby.ctl                                                                                                                       100%   10MB 438.0KB/s   00:24
[oracle@ora12c ~]$ scp -i /home/oracle/oracle_standby.key  /tmp/standby.ctl     oracle@10.0.1.79:/u01/app/oracle/oradata/test12c/control03.ctl
standby.ctl                                                                                                                       100%   10MB 438.0KB/s   00:24

Steps to run on new standby database

Database parameter file - it can be a copy of the primary database file with two changes - db_unique_name and local_listener
*.audit_file_dest='/home/oracle/admin/test12c/adump'
*.audit_trail='db'
*.compatible='12.1.0.0.0'
*.control_files='/u01/app/oracle/oradata/test12c/control01.ctl','/u01/app/oracle/oradata/test12c/control02.ctl','/u01/app/oracle/oradata/test12c/control03.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='test12c'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=test12cXDB)'
*.log_archive_dest_1='location=/u01/app/oracle/oradata/test12c/arch'
*.open_cursors=300
*.pga_aggregate_target=150m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=450m
*.undo_tablespace='UNDOTBS1'
# changes for standby
*.db_unique_name='test12c_std'
*.local_listener='(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.79)(PORT = 1522)))'
Listener configuration
LISTENER_DG =
 (DESCRIPTION_LIST =
   (DESCRIPTION =
     (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCDG1))
     (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.79)(PORT = 1522))
   )
 )

SID_LIST_LISTENER_DG =
 (SID_LIST =
   (SID_DESC=
       (ORACLE_HOME= /u01/app/oracle/product/12.1.0/db1)
       (SID_NAME=test12c)
   )
   (SID_DESC =
      (SID_NAME = test12c)
      (GLOBAL_DBNAME = test12c_std_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db1)
   )
 )
tnsnames entries
test12c_std =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.79)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SID = test12c)
      (SERVER = dedicated)
    )
  )


test12c_prim =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.45)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SID = test12c)
      (SERVER = dedicated)
    )
  )
 
In the next step we have to check that all datafile and configuration directories are in place on the standby machine, and start up the instance in mount mode.
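A small illustrative helper (not from the original post) for that check - it prints any of the directories referenced by the parameter file that are missing on the standby host:

```shell
#!/bin/sh
# Print a MISSING line for every directory argument that does not exist.
check_dirs() {
  for d in "$@"; do
    [ -d "$d" ] || echo "MISSING: $d"
  done
}

# Directories referenced by the pfile in this post:
check_dirs \
  /home/oracle/admin/test12c/adump \
  /u01/app/oracle/oradata/test12c \
  /u01/app/oracle/oradata/test12c/arch
```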

Starting up instance
 
[oracle@ip-10-0-1-79 dbs]$ rlwrap sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jul 26 17:39:48 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> create spfile from pfile;
File created.

SQL> startup mount
ORACLE instance started.

Total System Global Area  471830528 bytes
Fixed Size                  2289688 bytes
Variable Size             255856616 bytes
Database Buffers          209715200 bytes
Redo Buffers                3969024 bytes
Database mounted.
SQL>

RMAN restore over the network is an Oracle 12c feature. Yes, there was duplicate from active database in 11g, but this one is much easier to run and requires fewer configuration steps. To restore Oracle data files on the standby server we only need an instance started in mount mode and a TNS alias pointing to the primary database.

Starting RMAN and restoring data files 
[oracle@ip-10-0-1-79 dbs]$ rlwrap rman target /
Recovery Manager: Release 12.1.0.1.0 - Production on Fri Jul 26 16:18:57 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
connected to target database: TEST12C (DBID=1424547711, not open)

RMAN> restore database from service test12c_prim using compressed backupset;
Starting restore at 26-JUL-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=27 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_prim
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/test12c/system01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:08:15
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_prim
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/test12c/sysaux01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:08:55
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_prim
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/test12c/undotbs01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:01:25
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_prim
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/test12c/users01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_prim
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/test12c/soe.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:46:45
Finished restore at 26-JUL-13

RMAN>

At this stage we have the primary and standby databases almost ready. 
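At this point it is worth confirming the role of each database; a hypothetical check (not part of the original post) run on both servers could be:

```
-- primary should report PRIMARY / READ WRITE,
-- standby should report PHYSICAL STANDBY / MOUNTED
SELECT db_unique_name, database_role, open_mode FROM v$database;
```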

Steps to run on both primary and standby databases

Starting the Data Guard Broker and creating standby log files
 
SQL> alter system set dg_broker_start = true;
System altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby01.log') size 50M reuse;
Database altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby02.log') size 50M reuse;
Database altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby03.log') size 50M reuse;
Database altered.

Steps to run on primary database

Now it is time for the Data Guard Broker to work for us. 

Adding primary database
[oracle@ora12c ~]$ rlwrap dgmgrl /
DGMGRL for Linux: Version 12.1.0.1.0 - 64bit Production

Copyright (c) 2000, 2012, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> connect /
Connected as SYSDG.
DGMGRL> create configuration FSC as primary database is test12c_prim connect identifier is 'test12c_prim';
Configuration "fsc" created with primary database "test12c_prim"
 
Adding standby database and displaying configuration
DGMGRL> add database test12c_std as connect identifier is 'test12c_std' maintained as physical;
Database "test12c_std" added
DGMGRL> show configuration verbose;

Configuration - fsc

  Protection Mode: MaxPerformance
  Databases:
  test12c_prim - Primary database
    test12c_std - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

Fast-Start Failover: DISABLED

Configuration Status:
DISABLED
 
Enabling configuration and checking DataGuard status
DGMGRL> enable configuration;
Enabled.
DGMGRL> show configuration verbose;

Configuration - fsc

  Protection Mode: MaxPerformance
  Databases:
  test12c_prim - Primary database
    test12c_std - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

And our Data Guard configuration is up and running. Now we can change the default configuration to one which meets our requirements. In the next post I will add a Far Sync instance and switch the configuration to Maximum Availability.
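As an illustration of such a change (not performed in this post), a single Broker property edit is enough to switch the standby's redo transport to SYNC before raising the protection mode:

```
DGMGRL> edit database 'test12c_std' set property LogXptMode='SYNC';
DGMGRL> edit configuration set protection mode as MaxAvailability;
```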

regards,
Marcin 

Tuesday, July 16, 2013

DataGuard – Far Sync – part 1 – simple configuration

Oracle introduced a new feature for Data Guard called Far Sync. This configuration is designed to support synchronous redo transport between primary and standby databases which are far apart. In a typical setup, synchronous redo transport between a data centre in Europe and one in the US is possible but inefficient from the primary database performance perspective. Every commit has to be written locally to the redo logs and sent to the standby with confirmation - a network round trip between Europe and the US adds a significant delay to every commit operation in synchronous mode. If the database had to run in no data loss mode, the DBA had no choice but to run the primary in SYNC mode and suffer the network round trip on every transaction, or to set up both databases in data centres relatively close to each other.

To mitigate that problem, Oracle 12c has introduced the Data Guard Far Sync instance. A Far Sync instance is a log transport "proxy" and it has to be located close to the primary database. As Far Sync is an instance only, there is no need to put a whole database there; it requires space only for standby and archive logs.
In this configuration the primary sends redo to the Far Sync instance in synchronous mode, so every transaction is written to the primary redo logs and the Far Sync standby logs - commit time is therefore limited to the time required for both writes plus the network round trip between the primary and the Far Sync. The Far Sync instance then ships the redo stream in asynchronous mode to the standby database located in the other data centre.

In my test configuration I set up an OpenVPN connection between a VM running on my home ESX server (located in Europe) and two EC2 instances running in the USA. That configuration allows me to measure commit time in synchronous mode with and without a Far Sync instance.

Configuration details:

Primary database – TEST_PRM
Far sync instance – FARSYNC
Standby database – TEST_STD


Primary database configuration – DataGuard related parameters only
DB_UNIQUE_NAME=TEST_PRM
FAL_SERVER=TEST_STD
LOG_ARCHIVE_CONFIG='DG_CONFIG=(TEST_PRM,FARSYNC,TEST_STD)'
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TEST_PRM'
Standby configuration:
LOG_ARCHIVE_CONFIG='DG_CONFIG=(TEST_PRM,FARSYNC,TEST_STD)'
LOG_ARCHIVE_DEST_1='location=/u01/app/oracle/oradata/test12c/arch'
 
First test case – SYNC mode redo transport between primary and standby database 

Primary log transport configuration
log_archive_dest_2 = 'service="TEST_STD", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="TEST_STD" net_timeout=10, valid_for=(online_logfile,all_roles)';
log_archive_config = 'dg_config=(TEST_PRM,TEST_STD,FARSYNC)'
In both cases I ran Swingbench against the primary database and monitored the time histogram for the 'log file sync' event (the primary database was restarted before each test to get identical conditions and reset the counters):
SQL> select * from  v$event_histogram where  EVENT = 'log file sync';
    EVENT# EVENT                          WAIT_TIME_MILLI WAIT_COUNT LAST_UPDATE_TIME                                               CON_ID
---------- ------------------------------ --------------- ---------- ------------------------------------------------------------------------- ----------
       144 log file sync                                1          5 10-JUL-13 08.50.01.857358 PM +01:00                         0
       144 log file sync                                2         23 10-JUL-13 08.38.51.202819 PM +01:00                         0
       144 log file sync                                4         11 10-JUL-13 08.40.00.723312 PM +01:00                         0
       144 log file sync                                8          2 10-JUL-13 08.59.06.068904 PM +01:00                         0
       144 log file sync                               16          8 10-JUL-13 08.59.22.090580 PM +01:00                         0
       144 log file sync                               32         18 10-JUL-13 08.59.29.450597 PM +01:00                         0
       144 log file sync                               64         41 10-JUL-13 08.59.33.983966 PM +01:00                         0
       144 log file sync                              128         62 10-JUL-13 08.59.39.447536 PM +01:00                         0
       144 log file sync                              256        326 10-JUL-13 08.59.40.640604 PM +01:00                         0
       144 log file sync                              512       1423 10-JUL-13 08.59.40.640691 PM +01:00                         0
       144 log file sync                             1024         37 10-JUL-13 08.59.11.646609 PM +01:00                         0
       144 log file sync                             2048          1 10-JUL-13 08.53.53.715921 PM +01:00                         0
       144 log file sync                             4096          1 10-JUL-13 08.56.15.150343 PM +01:00                         0

13 rows selected.
As we can see in the histogram, a log file sync time between 256 and 512 ms is the most frequent bucket, so the commit time for most transactions fell in that range. In addition to the Swingbench traffic I ran one test transaction with a simple row insert and traced it - here is the important line from the trace file:

WAIT #140043716402464: nam='log file sync' ela= 184627 buffer#=419 sync scn=987285
 

As you can see, 184 ms for a commit is not a great result.
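For completeness, this is roughly how such a trace can be captured - a sketch assuming a hypothetical test table t; event 10046 at level 8 includes wait events such as 'log file sync':

```
ALTER SESSION SET tracefile_identifier = 'commit_test';
ALTER SESSION SET events '10046 trace name context forever, level 8';
INSERT INTO t VALUES (1);
COMMIT;
ALTER SESSION SET events '10046 trace name context off';
-- the resulting trace file (in the diag trace directory) contains the
-- WAIT ... nam='log file sync' ela= ... line for the COMMIT
```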

Second test case – SYNC mode redo transport between primary and far sync instance 

Primary log transport configuration
LOG_ARCHIVE_DEST_2='service="FARSYNC"','SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="FARSYNC" net_timeout=30','valid_for=(online_logfile,all_roles)' SCOPE=BOTH;
A Far Sync instance requires the Oracle 12c binaries installed and a special version of the control file, created on the primary database using the following command:
ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/farsync.ctl';
Instance parameters:
CONTROL_FILES='/u01/app/oracle/oradata/test12c/control01.ctl'
DB_UNIQUE_NAME=FARSYNC
FAL_SERVER=TEST_PRM
LOG_FILE_NAME_CONVERT='test','test'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(TEST_PRM,FARSYNC,TEST_STD)'

LOG_ARCHIVE_DEST_1='location=/u01/app/oracle/oradata/test12c/arch'
LOG_ARCHIVE_DEST_2='service="TEST_STD"','ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="TEST_STD" net_timeout=10','valid_for=(standby_logfile,all_roles)' SCOPE=BOTH;
As you can see in the above configuration, LOG_ARCHIVE_DEST_2 is configured as ASYNC and points to the real standby database. 

Let's run test again
SQL> select * from  v$event_histogram where  EVENT = 'log file sync';

    EVENT# EVENT                          WAIT_TIME_MILLI WAIT_COUNT LAST_UPDATE_TIME                                               CON_ID
---------- ------------------------------ --------------- ---------- ------------------------------------------------------------------------- ----------
       144 log file sync                                1        254 10-JUL-13 09.23.58.069192 PM +01:00                         0
       144 log file sync                                2        364 10-JUL-13 09.23.58.027215 PM +01:00                         0
       144 log file sync                                4        338 10-JUL-13 09.23.57.951897 PM +01:00                         0
       144 log file sync                                8        278 10-JUL-13 09.23.57.540682 PM +01:00                         0
       144 log file sync                               16         82 10-JUL-13 09.23.55.700236 PM +01:00                         0
       144 log file sync                               32         29 10-JUL-13 09.23.56.896548 PM +01:00                         0
       144 log file sync                               64         29 10-JUL-13 09.23.52.709345 PM +01:00                         0
       144 log file sync                              128         18 10-JUL-13 09.23.40.675756 PM +01:00                         0
       144 log file sync                              256          8 10-JUL-13 09.23.32.935084 PM +01:00                         0
       144 log file sync                              512         18 10-JUL-13 09.23.32.949511 PM +01:00                         0
       144 log file sync                             1024          7 10-JUL-13 09.22.50.183919 PM +01:00                         0

11 rows selected.
Now the results are totally different - most transactions have a commit time between 1 and 8 ms, so transactions are committed after writing data only to the primary redo logs and the Far Sync standby logs. From a performance perspective this is much better, and the primary database is still protected in no data loss mode. The corresponding line from the trace file:

WAIT #140132880983720: nam='log file sync' ela= 1003 buffer#=63 sync scn=1042355

This time the commit was almost 180 times faster and took only about 1 ms. 
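That speedup comes straight from the two trace lines - the 'ela' values are reported in microseconds. A quick back-of-the-envelope check (values copied from the traces above):

```python
# 'ela' values from the two 10046 trace lines, in microseconds
sync_to_remote_standby_us = 184_627  # SYNC transport straight to the far standby
sync_to_far_sync_us = 1_003          # SYNC transport to the nearby Far Sync instance

# ratio of the two commit times
speedup = sync_to_remote_standby_us / sync_to_far_sync_us
print(f"{sync_to_remote_standby_us / 1000:.0f} ms vs "
      f"{sync_to_far_sync_us / 1000:.1f} ms -> ~{speedup:.0f}x faster")
```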


The above configuration is the simplest one and doesn't include an alternate destination in case the Far Sync instance is down. Also, when the standby database becomes the primary, there is no Far Sync instance close to the new primary - this has to be addressed as well. 
I will keep testing and hope to blog the next part soon.


regards,
Marcin

Tuesday, July 9, 2013

Oracle 12c - incremental backup for DataGuard over network

If you have Data Guard or a standby database in your organization, you will probably love this new RMAN feature. Since 12c it is possible to catch up a standby database with an incremental backup using one command. The additional space and time needed to run an incremental backup, copy it over to the standby, and restore it can now be reduced to just the time required to run the incremental backup over the network.

See short example:

Stopping recovery on standby 

[oracle@ip-10-0-1-79 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Sun Jul 7 12:56:24 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE cancel;

Database altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


Running incremental backup:
[oracle@ip-10-0-1-79 ~]$ rman target /
Recovery Manager: Release 12.1.0.1.0 - Production on Sun Jul 7 12:56:39 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
connected to target database: TEST12C (DBID=1424547711, not open)

RMAN>  recover database from service test12c_b using compressed backupset;

Starting recover at 07-JUL-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_b
destination for restore of datafile 00001: /u01/app/oracle/oradata/test12c/system01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_b
destination for restore of datafile 00002: /u01/app/oracle/oradata/test12c/sysaux01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_b
destination for restore of datafile 00003: /u01/app/oracle/oradata/test12c/undotbs01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_b
destination for restore of datafile 00004: /u01/app/oracle/oradata/test12c/users01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

...

RMAN> exit
Recovery Manager complete.
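Putting it together, the whole catch-up cycle boils down to three steps (a sketch; test12c_b is the TNS alias of the source database used above):

```
-- 1. on the standby, stop redo apply (SQL*Plus)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- 2. in RMAN on the standby, roll the database forward over the network
RECOVER DATABASE FROM SERVICE test12c_b USING COMPRESSED BACKUPSET;

-- 3. restart redo apply (SQL*Plus)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```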

For anyone who has spent a night trying to catch up a standby database, the above solution looks brilliant.

regards,
Marcin