Thursday, January 19, 2012

DataGuard Broker configuration backup

Some time ago I saw a question on the oracle-l list about backing up a DataGuard Broker configuration. There was no clear answer, only suggestions to use "dgmgrl" with a Perl script to display and parse its output. The topic was interesting enough that I decided to dig deeper and find out how Oracle keeps this information.
The first step was to figure out how DataGuard Broker gathers that data. You can enable the debug option of the dgmgrl tool and watch how it works:
piorovm$ dgmgrl -xml -debug
DGMGRL for Linux: Version 11.2.0.2.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
[W000 01/19 11:44:50.82] Connecting to database using .
[W000 01/19 11:44:50.84] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/19 11:44:50.84] Broker version is '11.2.0.2.0'
Connected.
DGMGRL>
As you can see, the dbms_drs package is used for that. I checked what I could find in the package definition file (dbmsdrs.sql), and that was it: it contains examples of both blocking and non-blocking queries. The last thing to figure out was an XML-based command I could use with the dbms_drs package to get the configuration.
DGMGRL> show configuration verbose;
<do_monitor version="11.2"><verify level="minor" object_id="4096"> </verify>
</do_monitor>
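The same XML command can be sent directly from PL/SQL using the dbms_drs calls that dgmgrl issues under the hood. This is a minimal sketch: it only prints the first response chunk and skips the full chunked-response handling that the complete script below implements.

```sql
set serveroutput on
declare
  rid    integer;
  indoc  varchar2(4000);
  outdoc varchar2(4000);
  p      integer := 1;
begin
  -- the XML command dgmgrl sends for "show configuration verbose"
  indoc := '<do_monitor version="11.2"><verify level="minor" object_id="4096"> </verify></do_monitor>';
  rid := dbms_drs.do_control(indoc);

  -- poll until the first response chunk is available
  outdoc := null;
  while (outdoc is null)
  loop
    outdoc := dbms_drs.get_response(rid, p);
  end loop;
  dbms_output.put_line(outdoc);

  dbms_drs.delete_request(rid);
end;
/
```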
Then, step by step, I created the script shown below - displayconfig.sql.

In 11.2 Oracle added a new internal table, X$DRC, which can be used for a quick check of the DataGuard configuration. The object_id column can be decoded using the following rules:
- bit 12 - object_id = 4096 - configuration
- bit 16 - object_id = 65536 - database (probably first database in site)
- bit 24 - object_id = 16777216 - first site
- bit 25 - object_id = 33554432 - second site

The first database in the first site will therefore have object_id = 16777216 + 65536 = 16842752.

SQL> set pagesize 1000
SQL> column value format a100
SQL> set linesize 200
SQL> select  attribute, object_id, utl_raw.cast_to_varchar2(hextoraw(value_raw)) value FROM x$drc;

ATTRIBUTE                       OBJECT_ID VALUE
------------------------------ ---------- ----------------------
DRC                                  4096 fsf
intended_state                       4096 ONLINE
protection_mode                      4096 MaxAvailability
enabled                              4096 YES
fast_start_failover                  4096 ENABLED
observer_info                        4096 client.local
fsfo_target                          4096 testdg_b
SITE                             16777216 testdg_a
DATABASE                         16842752 testdg_a
intended_state                   16842752 READ-WRITE-XPTON
connect_string                   16842752 testdg_a.local
RAC                              16842752 NO
enabled                          16842752 YES
role                             16842752 PRIMARY
INSTANCE                         16842753 testdg
SITE                             33554432 testdg_b
DATABASE                         33619968 testdg_b
intended_state                   33619968 PHYSICAL-APPLY-ON
connect_string                   33619968 testdg_b.local
RAC                              33619968 NO
enabled                          33619968 YES
role                             33619968 PHYSICAL
INSTANCE                         33619969 testdg
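The bit rules above can also be applied directly in the query with BITAND. This is a sketch based on the rules I listed; the bit layout is something I observed, not anything documented by Oracle.

```sql
select object_id,
       attribute,
       -- decode the object_id bits observed above
       case when bitand(object_id, 4096)     = 4096     then 'CONFIGURATION ' end ||
       case when bitand(object_id, 16777216) = 16777216 then 'SITE-1 '        end ||
       case when bitand(object_id, 33554432) = 33554432 then 'SITE-2 '        end ||
       case when bitand(object_id, 65536)    = 65536    then 'DATABASE'       end as decoded
  from x$drc;
```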
Disclaimer:
I have tested this script with 11.1 and 11.2 on Linux, with a simple primary/standby configuration and with a RAC primary. Please check the script in a test environment first before running it against a production configuration.

regards,
Marcin
displayconfig.sql:
set serveroutput on
set linesize 200
declare
  rid    integer;          -- request id returned by dbms_drs.do_control
  indoc  varchar2(4000);   -- XML command sent to the broker
  outdoc varchar2(4000);   -- a single chunk of the XML response
  p      integer;          -- response chunk number
  z      XMLType;          -- parsed response document
  y      clob;             -- full response assembled from chunks
  v_xml  xmltype;          -- property list extracted for one database
begin
  -- ask the broker to enumerate the whole configuration
  indoc := '<DO_CONFIGURE_DRC><ENUM_DRC/></DO_CONFIGURE_DRC>';
  y := null;
  rid := dbms_drs.do_control(indoc);

  -- poll until the first response chunk is ready
  outdoc := NULL;
  p := 1;
  while (outdoc is null)
  loop
    outdoc := dbms_drs.get_response(rid, p);
    y := y || to_clob(outdoc);
  end loop;

  -- read the remaining chunks; no_data_found marks the end of the response
  begin
    while (outdoc is not NULL)
    loop
      p := p + 1;
      outdoc := dbms_drs.get_response(rid, p);
      y := y || to_clob(outdoc);
    end loop;
  exception
    when no_data_found then
      NULL;
  end;
  z := XMLType(y);

  dbms_drs.delete_request(rid);

  -- walk all databases in the configuration and print the DGMGRL commands
  -- needed to recreate it
  for l in (select name, role, id, connectdb
              from XMLTABLE('/DRC/SITE/DATABASE' passing z
                     COLUMNS name      PATH '/DATABASE/@name',
                             role      PATH '/DATABASE/@role',
                             id        PATH '/DATABASE/@id',
                             connectdb PATH '/DATABASE/@connect'))
  loop
    if (l.role = 'PRIMARY') then
      outdoc := 'create configuration DGCONF as primary database is ''' || l.name || ''' connect identifier is ' || l.connectdb || ';';
      dbms_output.put_line(outdoc);
    else
      outdoc := 'add database ''' || l.name || ''' as connect identifier is ' || l.connectdb || ';';
      dbms_output.put_line(outdoc);
    end if;

    -- fetch all properties of this database
    indoc := '<DO_CONFIGURE_SITE version="11.2"><GET_RESOURCE res_id="' || l.id || '"/></DO_CONFIGURE_SITE>';
    y := null;
    rid := dbms_drs.do_control(indoc);

    -- same two-phase chunk reading as above
    outdoc := NULL;
    p := 1;
    while (outdoc is null)
    loop
      outdoc := dbms_drs.get_response(rid, p);
      y := y || to_clob(outdoc);
    end loop;

    begin
      while (outdoc is not NULL)
      loop
        p := p + 1;
        outdoc := dbms_drs.get_response(rid, p);
        y := y || to_clob(outdoc);
      end loop;
    exception
      when no_data_found then
        NULL;
    end;
    z := XMLType(y);

    select XMLQuery('/RESOURCE/PROPERTY_LIST/VALUE' passing z returning content) into v_xml from dual;

    -- print an "edit database ... set property" command for every
    -- configurable property that has a value
    for q in (select name, value, property_type as r
                from XMLTABLE('/VALUE' passing v_xml
                       COLUMNS name PATH '/VALUE/@name',
                               value PATH '/VALUE/@value',
                               property_type PATH '/VALUE/@property_type')
               where value is not null
                 and name in
                 ('AlternateLocation','ApplyInstanceTimeout','ApplyParallel','ArchiveLagTarget','Binding','BystandersFollowRoleChange','CommunicationTimeout',
                  'DbFileNameConvert','DelayMins','DGConnectIdentifier','FastStartFailoverAutoReinstate','FastStartFailoverLagLimit','FastStartFailoverPmyShutdown',
                  'FastStartFailoverTarget','FastStartFailoverThreshold','LogArchiveFormat','LogArchiveMaxProcesses','LogArchiveMinSucceedDest','LogArchiveTrace',
                  'LogFileNameConvert','LogShipping','LogXptMode','LsbyASkipCfgPr','LsbyASkipErrorCfgPr','LsbyASkipTxnCfgPr','LsbyDSkipCfgPr','LsbyDSkipErrorCfgPr','LsbyDSkipTxnCfgPr',
                  'LsbyMaxEventsRecorded','LsbyMaxSga','LsbyMaxServers','LsbyPreserveCommitOrder','LsbyRecordAppliedDdl','LsbyRecordSkipDdl','LsbyRecordSkipErrors','MaxConnections',
                  'MaxFailure','NetTimeout','ObserverConnectIdentifier','PreferredApplyInstance','RedoCompression','ReopenSecs','StandbyArchiveLocation','StandbyFileManagement'))
    loop
      dbms_output.put_line('edit database ' || l.name || ' set property ''' || q.name || ''' = ''' || q.value || ''';');
    end loop;
    dbms_drs.delete_request(rid);

  end loop;
end;
/

Friday, December 2, 2011

Oracle Information Integration, Migration, and Consolidation - book review

Recently I read an interesting book about data migration and consolidation, published by Packt Publishing and titled "Oracle Information Integration, Migration, and Consolidation".

The whole book is split into 9 chapters covering different areas of this complicated topic. In my opinion this book is intended for architects or anyone who wants a general overview of data migration and integration based on Oracle tools. You can find some details there as well, but the book covers too many tools to do a deep dive into each of them.
 
Chapter 1: Getting Started with Information Integration
This is an overview of data and process integration challenges and general approaches. Some high-level use cases are presented as well.

Chapter 2: Oracle Tools and Products
This chapter describes all the major tools created or acquired by Oracle Corp. for consolidation, migration or data exchange. It is a general overview and can be used as an easy way to go through all the applications owned by Oracle.

Chapter 3: Application and Data Integration Case Study
A Proof of Value (POV) for an insurance company is the case study described in this chapter. The project includes process and data integration using Oracle BPEL between a mainframe VSAM system, flat files, a COBOL application, MQ Series and an Oracle database. The author presents his findings and a solution for migrating one of the organization's business processes. Using a step-by-step approach and Oracle SOA Suite, the project achieves all its goals and brings some additional benefits to the organization.

Chapter 4: Oracle Database Migrations
There are many ways to migrate data from one database to another and between versions of the same database. A few of them are mentioned here, but the author focuses on migration from a non-Oracle database to Oracle using SQL Developer. All the important steps are presented, and everyone can try to reproduce them in their own test environment.

Chapter 5: Database Migration Challenges and Solutions
Database migration is a tough project, and the list of well-known problems described in this chapter is a good starting point for thinking about it. Most RDBMS systems have their own way of treating null/not-null columns, object names follow different rules, and the procedural language running inside the database is not a standard either. Addressing most of these issues before starting a project is a good way to avoid unexpected results at the end.

Chapter 6: Data Consolidation and Management
Trusted data sources are one of the base requirements of every enterprise application or reporting tool. Oracle solutions like the Oracle Master Data Management Suite are presented here as an answer to these needs. Besides that, some other solutions are presented as well, like data-centric consolidation using the in-memory TimesTen database or application-centric data consolidation using Oracle Coherence.

Chapter 7: Database-centric Data Integration
Database-centric data integration is based on a continuous data exchange process in a heterogeneous database environment. Two very powerful products are presented here: Oracle GoldenGate and Oracle Data Integrator. In addition, Oracle Gateways (Heterogeneous Services Agent) is presented as well. This chapter, together with chapter 4, is the most technical and detailed part of the book.

Chapter 8: Application and Process Integration
This chapter presents the history of Oracle application and process integration solutions over the years, starting with basic export/import of tables for Oracle Apps and ending with Oracle Fusion Middleware and Oracle SOA Suite.

Chapter 9: Information Lifecycle Management for Transactional Applications
The last chapter covers the problem of data and application retention: how and for how long data has to be kept in the database, how to manage it, and how to deal with the application retirement process.
 
I can recommend this book to everyone who wants a general overview of Oracle's approach to integration problems. As I said before, one book can't cover all the technical problems and issues, but it is a good starting point for entering the integration world.

regards,
Marcin

Sunday, November 27, 2011

New release of S-ASH v.2.3

A new version of Oracle Simulate ASH is ready to download and test. You can find it here - sash-v2.3
New features:
- new easier installation process
- new metrics history (IO and system)
- improved support for RAC / multi database in one repository
- Oracle scheduler is used to manage jobs

This version of OraSASH has been tested with Oracle 10g and 11g (R1 and R2) as target databases and Oracle 11gR2 Express Edition as the repository.

regards,
Marcin

Wednesday, October 12, 2011

Quick note about Hybrid Columnar Compression running on my laptop

As I mentioned in a previous post, Oracle has allowed users to run Hybrid Columnar Compression on ZFS appliances. Two days ago there was a note on MOS about the bug/enhancement, but there was no patch for it. Yesterday I was able to download patch 13041324 for Oracle 11.2.0.3 and successfully applied it to my installation on a virtual machine. After that I started the Oracle Sun Unified Storage Simulator and created a new tablespace on it, using an NFS share together with the Oracle Direct NFS client. Next I tried to create a compressed table in an old, existing tablespace on a local ASM disk, and it failed as expected. Then I gave it a try in the new tablespace, and this time the table was created:
SQL> create tablespace SUNNFSTS datafile '/nfs/sunnfsts.dbf' size 10M;
Tablespace created.

SQL> create table SCOTT.test1 tablespace USERS  compress for archive high as select * from dba_objects where rownum < 100;
create table SCOTT.test1 tablespace USERS  compress for archive high as select * from dba_objects where rownum < 100
                                                                                      *
ERROR at line 1:
ORA-64307: hybrid columnar compression is not supported for tablespaces on this
storage type

SQL> create table SCOTT.test tablespace SUNNFSTS  compress for archive high as select * from dba_objects where rownum < 100;
Table created.
SQL> exit
It looks like it is working, and HCC can be tested without expensive hardware. I hope I will find some time to write down all my notes and create a proper blog post about the installation and configuration.
regards,
Marcin

Thursday, October 6, 2011

Oracle 11.2.0.3 and Hybrid Columnar Compression Support

Oracle announced Hybrid Columnar Compression support for ZFS Storage Appliances and Pillar Axiom Storage Systems on Sep 30th. According to Steve Tunstall's blog, this feature should work with 11.2.0.3 and any ZFS Appliance. I tested it on Oracle Enterprise Linux 5.5 with 11.2.0.3 and the ZFS Appliance software emulator, and it didn't work. I read Steve's post again, and he mentioned in a comment that an additional patch is required. There was no information about it on MOS yesterday evening, but this morning I found the following note - "Bug 13041324 - Enhancement to allow HCC on ZFS and Pillar Storage". I still can't find the patch, and I'm really tempted to try it on my virtual machines so I can test the Oracle HCC feature. I hope the patch will be available soon.

Regards,
Marcin

Tuesday, October 4, 2011

Grid Control 11g - 2 small improvements

I installed Oracle Grid Control 11g months ago and left it with a typical configuration. Recently it became a little unstable, so I did a quick audit. Here are my first two findings; I will do a deeper dive if I find some more time.

Memory settings
In the default configuration of the WebLogic managed server running the Grid Control instance, the maximum Java heap size (the -Xmx parameter) is unset, so the default value of 512 MB is used. That is too small for a big installation with many agents and concurrent users. The WebLogic start scripts allow users to set the USER_MEM_ARGS variable with Java memory settings, which is then used instead of the MEM_ARGS variable. I was going to change the startManagedWebLogic.sh file, as I wanted to keep the default values for the admin server, but I found a better solution on the Oracle Enterprise Manager blog, in the "Increasing the JAVA heap size of the OMS" section. Instead of startManagedWebLogic.sh, you need to edit the startEMServer.sh file and add the following section just before the last line:
if [ "${SERVER_NAME}" != "EMGC_ADMINSERVER" ] ; then
   USER_MEM_ARGS="-Xms256m -Xmx1024m -XX:MaxPermSize=512m -XX:CompileThreshold=8000 -XX:PermSize=128m"
   export USER_MEM_ARGS
fi
Log rotation
It was very typical for the old Oracle Application Server or Grid Control 10g that logs weren't rotated in the default configuration; I have seen this many times when asked to do some troubleshooting. After installing Grid Control 11g together with WebLogic, I wondered how many logs had been left without rotation this time. Most of them look OK, but I found one, mod_wl_ohs.log, belonging to the WebTier (Oracle HTTP Server), which is still not rotated. This log file is configured in the mod_wl_ohs.conf configuration file, and it is quite easy to add rotation to it. As with all rotated Apache logs, this line
WLLogFile ${ORACLE_INSTANCE}/diagnostics/logs/${COMPONENT_TYPE}/${COMPONENT_NAME}/mod_wl_ohs.log
should be changed to pipe the log through the rotatelogs command (here with a 43200-second, i.e. 12-hour, rotation interval):
WLLogFile "|${ORACLE_HOME}/ohs/bin/rotatelogs ${ORACLE_INSTANCE}/diagnostics/logs/${COMPONENT_TYPE}/${COMPONENT_NAME}/mod_wl_ohs.log 43200"

Update:
Oracle Enterprise Manager 12c (Cloud Control) has just been officially presented. There is still no documentation available, but you can download and install it.
regards,
Marcin

Monday, October 3, 2011

Oracle PeopleSoft Enterprise Financial Management - book review

This is my second review in a short time frame, but it is totally different from the first one: it is not about the Oracle database, and I am writing it from a newbie's position. This is my first experience with Oracle PeopleSoft from a business perspective. Oracle PeopleSoft Enterprise Financial Management 9.1 Implementation covers the following topics from a business perspective:
1. Financials Fundamentals
2. PeopleSoft Security
3. PeopleSoft Billing Module
4. PeopleSoft Accounts Receivable Module
5. PeopleSoft Asset Management Module
6. PeopleSoft Accounts Payable Module
7. PeopleSoft General Ledger Module
8. PeopleSoft Expenses Module
9. PeopleSoft Commitment Control

The whole book is built around one example based on an automotive company, and every chapter describes a different set of business requirements.
All topics are explained in detail, and every topic has a short introduction from a functional perspective. There are also implementation challenges for the most common issues. Every chapter guides the user through module configuration and typical actions. The examples include slides and a description of all the actions.

I can recommend this book to people without a financial background, like me, as a starting point into the PeopleSoft application world.

regards,
Marcin