Sunday, August 24, 2014

AWR Warehouse

Last week I noticed that there is a new patch for Enterprise Manager that enables the AWR Warehouse feature. MOS note ID 1901202.1 describes the bundle patch for OEM 12c Release 4.

Today I had a chance to install it in my lab, and now I can start testing the new OEM feature.
There is some documentation here and on Kellyn's blog.


It is not configured out of the box, so the first task is to configure the AWR Warehouse repository. In my case I will use the same database that hosts the OEM repository.



The retention period and the staging area for snapshot files have to be configured as well.


After these two steps the AWR Warehouse configuration job is started, and when it finishes the AWR Warehouse will be ready to use.
 

When the repository is ready we can start adding databases that will be sources of AWR data.
 

To add a new database to the warehouse, it has to be already configured in OEM and have default credentials set.


If all conditions are met, the database is added successfully.



Now it's time to play with this new feature and see what we can achieve using it.

regards,
Marcin




Cross-platform transportable tablespaces - part 2

It has been some time since I wrote the first post about TTS migration, but I finished that project literally hours before my summer break. Now, after a couple of days enjoying the thermal waters and good wine of Hungary, it's time to write the next post.

As I described in my previous post, I had to migrate a database from HP-UX to Linux and upgrade it from 10g to 12c. This time it was only a PoC, but my goal was to minimize the downtime of the production database.

Source database datasheet:
- version 10.2.0.4 
- OS - HP-UX 
- existing backup using data file copies
- there is one backup set per data file
- daily incremental backups are recovered into the data file copies, which are kept in the FRA

On the target server, Oracle 12.1.0.1 has been installed and configured with ASM. A new database with the same character set as the source database has been created as well.


Target database datasheet:
- version 12.1.0.1
- OS - Linux 64-bit
- storage - ASM

Transportable tablespaces (TTS) allow us to migrate data between databases, but it is the DBA's responsibility to migrate the rest of the objects, like views and PL/SQL code, using for example Data Pump. Before I started work on TTS I did the following preparation steps:
  1. On the source database, identify the list of tablespaces and their data files to move to the new server
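    A query like this can be used as a starting point (just a sketch):
    select tablespace_name, file_name
    from dba_data_files
    order by tablespace_name, file_name;
    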
  2. On the source database, identify the owners of the objects included in the TTS set
    select distinct owner from dba_tables where tablespace_name in ('LIST','OF','TABLESPACES','TO','MIGRATE');
    
  3. On the source database, verify that the tablespaces are self-contained
    begin
       SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'LIST,OF,TABLESPACES,TO,MIGRATE', full_check => TRUE);
    end;
    /
    
    select * from SYS.TRANSPORT_SET_VIOLATIONS;
    
  4. On the target database, create the owners of all objects included in the TTS set
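    A minimal sketch of that step (user name, password and tablespaces are placeholders; real grants and quotas should match the source):
    create user app_owner identified by "ChangeMe_1"
      default tablespace users
      temporary tablespace temp;
    grant create session, create table to app_owner;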

This is the list of steps I performed to achieve my goal.
  1. Copy the existing data file copies to the new server - if a different location is used on the new server, adjust the script in point 2
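    For example, with scp (host name and paths are placeholders matching my FRA layout):
    scp /oracle/fra/o1_mf_pocdb_rep_*.dbf oracle@newserver:/oracle/fra/
    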
  2. Create a script to convert each data file copy into a data file in the new location
    select 'convert datafile ''' || b.name || ''' format ''+DATA/POCDB/TTS/'
           || REGEXP_REPLACE(f.name,'(/disk\d/oradata/XXX/)','')
           || ''' from platform ''HP-UX IA (64-bit)'';'
    from V$BACKUP_COPY_DETAILS b, v$datafile f
    where f.file# = b.file#;
    
  3. Convert the files using the script from point 2. Example output:
    convert datafile '/oracle/fra/o1_mf_pocdb_rep_9x3xjcon_.dbf' format '+DATA/POCDB/TTS/reports01.dbf' from platform 'HP-UX IA (64-bit)';
    convert datafile '/oracle/fra/o1_mf_pocdb_rep_aas24412_.dbf' format '+DATA/POCDB/TTS/reports02.dbf' from platform 'HP-UX IA (64-bit)';
    convert datafile '/oracle/fra/o1_mf_pocdb_rep_22ee1445_.dbf' format '+DATA/POCDB/TTS/reports03.dbf' from platform 'HP-UX IA (64-bit)';
    convert datafile '/oracle/fra/o1_mf_pocdb_rep_34ddr545_.dbf' format '+DATA/POCDB/TTS/reports04.dbf' from platform 'HP-UX IA (64-bit)';
    
  4. Copy the daily incremental backup sets to the new server - if a different location is used on the new server, adjust the script in point 5
  5. Create a script to apply the incremental backup sets to the new files
    set linesize 600 pagesize 999 feedback off head off trimspool on
    select 'recover from platform ''HP-UX IA (64-bit)'' foreign datafilecopy ''' || name || ''' from backupset ''' || handle || ''';'
    from V$BACKUP_DATAFILE bd, v$datafile d, V$BACKUP_PIECE bp where bd.file# = d.file#
    and bp.set_count = bd.set_count and handle is not null
    and bp.COMPLETION_TIME > sysdate -1
    order by bp.set_count;
    
  6. Recover the data file copies
    recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports01.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppkp6w_.bkp';
    recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports02.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppkxg5_.bkp';
    recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports03.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppk4w9_.bkp';
    recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports04.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppkbws_.bkp';
    
  7. Repeat steps 4 to 6 until the cut-over date
  8. Run an incremental backup on the source
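    On the source this was the existing incrementally updated copy strategy, so the RMAN command looked more or less like this (the tag is a placeholder):
    backup incremental level 1 for recover of copy with tag 'DAILY_BKP' database;
    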
  9. Switch all required tablespaces to read-only mode
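    The ALTER statements can be generated with a simple query (the tablespace list is a placeholder):
    select 'alter tablespace ' || tablespace_name || ' read only;'
    from dba_tablespaces
    where tablespace_name in ('LIST','OF','TABLESPACES','TO','MIGRATE');
    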
  10. Export the transportable tablespaces using Data Pump with a parameter file like this
    directory=XXX_DPDUMP
    dumpfile=tts_aws1.dmp
    logfile=tts.log
    TRANSPORT_TABLESPACES='TABLESPACES', 'TO', 'EXPORT'
    TRANSPORT_FULL_CHECK=y
    
    EXPDP command
    expdp parfile=tts.par
    
  11. Run another incremental backup on the source (as in point 8)
  12. Copy the backup sets from points 8 and 11 to the new server
  13. Create a script to apply the incremental backup sets to the new files (as in point 5)
  14. Import the transportable tablespaces using the dump file from point 10 and all converted files. In my case the first attempt took very long, as I hadn't excluded statistics and Oracle was gathering stats during the import. This operation can be postponed to the next phase using the EXCLUDE option. Example IMPDP parameter file
    directory=AWS
    dumpfile=tts_aws1.dmp
    logfile=aws_tts_import1.log
    exclude=TABLE_STATISTICS,INDEX_STATISTICS
    TRANSPORT_DATAFILES=+DATA/POCDB/TTS/reports01.dbf,
    +DATA/POCDB/TTS/reports02.dbf,
    +DATA/POCDB/TTS/reports03.dbf,
    +DATA/POCDB/TTS/reports04.dbf
    
    Run IMPDP command
    impdp parfile=imp.par
    
  15. Export the source database code and users
    expdp directory=DPDUMP dumpfile=code.dmp exclude=TABLE_DATA full=y
    
  16. Import the PL/SQL code - a quick and dirty approach, but it was enough for this case
    impdp directory=AWS TABLE_EXISTS_ACTION=SKIP dumpfile=code.dmp logfile=code_import.log full=y
    
  17. Perform a backup of the new database and gather fresh statistics

 
Performing all the steps above allowed me to migrate a 1 TB database from HP-UX to Linux with only 30 minutes of downtime on the source database. As it was a PoC, I left the source database running as the main production database. For a real migration it's necessary to add the time needed to recover the last incremental backup and import the TTS on the new platform, and to resolve the issue of the time needed to gather statistics on the new platform. Copying the existing stats using the DBMS_STATS PL/SQL package will probably be the solution there, but it has to be checked in the next phase.
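A rough sketch of that idea using the DBMS_STATS export/import routines (schema and staging table names are placeholders):

    -- on the source: stage the statistics in a regular table,
    -- then move that table to the target (e.g. with Data Pump)
    exec dbms_stats.create_stat_table(ownname => 'APP_OWNER', stattab => 'MIG_STATS');
    exec dbms_stats.export_schema_stats(ownname => 'APP_OWNER', stattab => 'MIG_STATS');

    -- on the target: load the statistics from the staging table
    exec dbms_stats.import_schema_stats(ownname => 'APP_OWNER', stattab => 'MIG_STATS');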

This post is long enough, so I'll leave the lessons learned for the next one.

regards,
Marcin