It has been some time since I wrote the first post about the TTS migration, but I finished that project literally hours before my summer break. Now, after a couple of days enjoying the thermal waters and good wine of Hungary, it's time to write the next post.
As I described in my previous post, I had to migrate a database from HP-UX to Linux and also upgrade it from 10g to 12c. This was only a PoC, but my goal was to minimize the downtime of the production database.
Source database datasheet:
- version 10.2.0.4
- OS - HP-UX
- existing backup using data file copies
- there is one backup set per data file
- daily incremental backups are recovered into the data file copies and kept in the FRA
On the target server, a new version of Oracle (12.1.0.1) was installed and configured with ASM. A new database with the same character set as the source database was created as well.
Target database datasheet:
- version 12.1.0.1
- OS - Linux 64-bit
- storage - ASM
Transportable tablespaces (TTS) allow us to migrate data between databases, but it is the DBA's responsibility to migrate the rest of the objects, such as views and PL/SQL code, using for example DataPump. Before I started working on TTS, I performed the following preparation steps:
- On the source database, identify the list of tablespaces and their data files to move to the new server
- On the source database, identify the owners of the objects included in the TTS set
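The list of tablespaces and their data files can be read from the data dictionary - a minimal sketch, with the tablespace names kept as placeholders:

```sql
-- list candidate tablespaces and their data files on the source database
select tablespace_name, file_name, bytes/1024/1024 as size_mb
from dba_data_files
where tablespace_name in ('LIST','OF','TABLESPACES','TO','MIGRATE')
order by tablespace_name, file_id;
```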
select distinct owner from dba_tables where tablespace_name in ('LIST','OF','TABLESPACES','TO','MIGRATE');
- On the source database, verify that the tablespaces are self-contained
begin
  SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'LIST,OF,TABLESPACES,TO,MIGRATE', full_check => TRUE);
end;
/
select * from SYS.TRANSPORT_SET_VIOLATIONS;
- On the target database, create owners for all objects included in the TTS set
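Creating the owners can be scripted from the source data dictionary - a sketch in the spirit of the command-generating queries below; the password is a placeholder, and grants and quotas have to be added separately:

```sql
-- run on the source: generate skeleton CREATE USER statements
-- for every owner found in the TTS set (password is a placeholder)
select 'create user ' || username || ' identified by "ChangeMe_1";'
from dba_users
where username in (select distinct owner
                   from dba_tables
                   where tablespace_name in ('LIST','OF','TABLESPACES','TO','MIGRATE'));
```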
This is the list of steps I performed to achieve my goal.
- Copy the existing data file copies to the new server - if a different location is used on the new server, change the script in point 2
- Create a script to convert each data file copy into a data file in the new location
select 'convert datafile ''' || b.name || ''' format ''+DATA/POCDB/TTS/' ||
       REGEXP_REPLACE(f.name,'(/disk\d/oradata/XXX/)','') ||
       ''' from platform ''HP-UX IA (64-bit)'';'
from V$BACKUP_COPY_DETAILS b, v$datafile f
where f.file# = b.file#;
- Convert the files using the script from point 2. Example output:
convert datafile '/oracle/fra/o1_mf_pocdb_rep_9x3xjcon_.dbf' format '+DATA/POCDB/TTS/reports01.dbf' from platform 'HP-UX IA (64-bit)';
convert datafile '/oracle/fra/o1_mf_pocdb_rep_aas24412_.dbf' format '+DATA/POCDB/TTS/reports02.dbf' from platform 'HP-UX IA (64-bit)';
convert datafile '/oracle/fra/o1_mf_pocdb_rep_22ee1445_.dbf' format '+DATA/POCDB/TTS/reports03.dbf' from platform 'HP-UX IA (64-bit)';
convert datafile '/oracle/fra/o1_mf_pocdb_rep_34ddr545_.dbf' format '+DATA/POCDB/TTS/reports04.dbf' from platform 'HP-UX IA (64-bit)';
- Copy the daily incremental backup sets to the new server - if a different location is used on the new server, change the script in point 5
- Create a script to apply the incremental backup sets to the new files
set linesize 600 pagesize 999 feedback off head off trimspool on
select 'recover from platform ''HP-UX IA (64-bit)'' foreign datafilecopy ''' || name || ''' from backupset ''' || handle || ''';'
from V$BACKUP_DATAFILE bd, v$datafile d, V$BACKUP_PIECE bp
where bd.file# = d.file#
  and bp.set_count = bd.set_count
  and handle is not null
  and bp.COMPLETION_TIME > sysdate - 1
order by bp.set_count;
- Recover the data file copies
recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports01.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppkp6w_.bkp';
recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports02.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppkxg5_.bkp';
recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports03.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppk4w9_.bkp';
recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports04.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppkbws_.bkp';
- Repeat steps 4 to 6 until the cut-over date
- Run an incremental backup on the source
- Switch all required tablespaces into read-only mode
- Export the transportable tablespaces using DataPump
using a parameter file like this:
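The read-only switch is one command per tablespace; it can be generated the same way as the other scripts (tablespace names are placeholders):

```sql
-- generate the ALTER TABLESPACE statements for the migrated tablespaces
select 'alter tablespace ' || tablespace_name || ' read only;'
from dba_tablespaces
where tablespace_name in ('LIST','OF','TABLESPACES','TO','MIGRATE');
```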
directory=XXX_DPDUMP
dumpfile=tts_aws1.dmp
logfile=tts.log
TRANSPORT_TABLESPACES='TABLESPACES','TO','EXPORT'
TRANSPORT_FULL_CHECK=y
Run the EXPDP command:
expdp parfile=tts.par
- Run an incremental backup on the source
- Copy the backup sets from points 8 and 11 to the new server
- Create a script to apply the incremental backup sets to the new files (as in point 5)
- Import the transportable tablespaces using the dump file from point 9 and all the converted files. In my case the first attempt took very long because I hadn't excluded statistics, and Oracle was gathering stats during the import. Gathering statistics can be postponed to the next phase using the EXCLUDE option. Example IMPDP parameter file:
directory=AWS
dumpfile=tts_aws1.dmp
logfile=aws_tts_import1.log
exclude=TABLE_STATISTICS,INDEX_STATISTICS
TRANSPORT_DATAFILES=+DATA/POCDB/TTS/reports01.dbf, +DATA/POCDB/TTS/reports02.dbf, +DATA/POCDB/TTS/reports03.dbf, +DATA/POCDB/TTS/reports04.dbf
Run the IMPDP command:
impdp parfile=imp.par
- Export the source database code and users
expdp directory=DPDUMP dumpfile=code.dmp exclude=TABLE_DATA full=y
- Import the PL/SQL code - a quick and dirty approach, but it was enough for this case
impdp directory=AWS TABLE_EXISTS_ACTION=SKIP dumpfile=code.dmp logfile=code_import.log full=y
- Perform a backup of the new database and gather fresh statistics
Performing all the steps above allowed me to migrate a 1 TB database from HP-UX to Linux with 30 minutes of downtime on the source database. As this was a PoC, I left the source database working as the main production database. For a real migration it is necessary to add the time needed to recover the last incremental backup and import the TTS on the new platform, and also to resolve the issue of the time needed to gather statistics there. Copying the existing stats using PL/SQL will probably be the solution, but it has to be checked in the next phase.
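The statistics-copy idea could be sketched with the DBMS_STATS staging-table routines; the schema name and staging table name below are placeholders:

```sql
-- on the source: stage the schema statistics into a regular table,
-- which can then be moved to the target with DataPump
begin
  dbms_stats.create_stat_table(ownname => 'MIGUSER', stattab => 'MIG_STATS');
  dbms_stats.export_schema_stats(ownname => 'MIGUSER', stattab => 'MIG_STATS');
end;
/

-- on the target: after importing the MIG_STATS table, load the statistics
begin
  dbms_stats.import_schema_stats(ownname => 'MIGUSER', stattab => 'MIG_STATS');
end;
/
```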
This post is long enough, so I'll leave the lessons learned for the next one.
regards,
Marcin