Wednesday, November 7, 2007
Partitioning issues
If you have a system where you are adding a new partition daily, weekly, or monthly, do you gather statistics on the new partition only or on the whole table? A lot of people do it on the partition level only, and this is enough if your select statement accesses only one partition.
But what happens if your select statement accesses more than one partition? Which statistics does the CBO use to generate the execution plan?
Please visit my blog soon and I will provide some examples.
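For reference, the two approaches look like this with DBMS_STATS (schema, table, and partition names below are placeholders, not from a real system):

-- gather statistics for one partition only
exec dbms_stats.gather_table_stats( -
  ownname => 'SCOTT', tabname => 'SALES', -
  partname => 'SALES_2007_11', granularity => 'PARTITION');

-- gather global (table-level) statistics as well
exec dbms_stats.gather_table_stats( -
  ownname => 'SCOTT', tabname => 'SALES', -
  granularity => 'GLOBAL');

With GRANULARITY => 'PARTITION' only the partition-level statistics are refreshed; the global statistics, which the CBO needs for queries spanning several partitions, stay as they were.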
Saturday, October 20, 2007
New stuff
- a direct SGA access monitoring tool for Windows-based servers - 9i and 10g
- a raw trace file analyzer
I will publish the first beta version in 2-3 weeks.
Thursday, August 30, 2007
migration to UTF character set
This problem can appear when a table was created with a char or varchar2 column whose size was specified in bytes, not in characters. There is a parameter NLS_LENGTH_SEMANTICS
which has two values - byte or char. The default value is byte, and in that case, if you make an export from a database with the default parameter and then import this data into a new database, all tables will be created with byte sizes for character columns (e.g. varchar2(50 byte)).
My first thought was that if I changed this parameter to "char" in the new database, all tables would be created with "char" sizes. But this is only partially true: new tables are created with "char" column lengths, while the import process still creates columns with byte semantics.
My solution is to split import process into three steps.
1. Import table definitions only (import with parameter rows=n)
2. Change the column definitions from byte to char using a prepared script
3. Import the data.
How to create the script for the column conversion? Using a SQL query in SQL*Plus:
set pagesize 0
set linesize 100
spool convert.sql
select 'alter table ' || owner || '.' || table_name ||
       ' modify ' || column_name || ' ' || data_type ||
       '(' || char_length || ' char);'
from dba_tab_columns
where owner = 'schema'
and char_used = 'B'
and table_name not in
    (select view_name from dba_views where owner = 'schema');
spool off
@convert.sql
This query generates a script; after executing it, all char and varchar2 columns should use "char" as their length semantics.
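Each line spooled into convert.sql is a single ALTER TABLE statement; for an illustrative table and column (names invented here) it would look like:

alter table schema.interface_table modify intername varchar2(50 char);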
Friday, August 17, 2007
Where is my alert.log ?
A quick check of the background_dump_dest parameter shows it all. There is a new default location for
all trace files. I started looking for information in the Oracle docs, and I found that in 11g Oracle introduces a new diagnostic system, ADR (Automatic Diagnostic Repository). I'm just starting to read its documentation to find out how it works.
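In 11g the ADR base location is controlled by the new DIAGNOSTIC_DEST parameter, and there is a new view that lists the individual directories, including the one where the alert log now lives:

-- show the ADR base location
show parameter diagnostic_dest

-- 'Diag Trace' is the directory holding the text alert.log and trace files
select name, value from v$diag_info;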
Thursday, August 16, 2007
11g on Linux
The installation was nice and quite easy. I will post step-by-step instructions soon.
Order by - sort versus index full scan
In my case I have a table which is part of an interface between two systems, and once a week this table is filled with a lot of rows (about 1,000,000). When the rows are processed, the interface service removes them from the table. But there is another service which checks how many rows are in the table and runs two statements:
select count(*) from interface_table
select col1, col2, col3 from interface_table order by col1, col2
The first statement is quite easy to tune - it is enough to add an
index - but the second one is more complicated. One can smile here and
say "We just need to add one hint, that's all." Of course we can, if we have access to the source code.
In this case we do not.
Please find below an example of the problem:
Table definition:
CREATE TABLE marcin_test
(
ROWSTAMP number(16),
INTERNAME VARCHAR2(50 BYTE),
TRANSID NUMBER(16),
TRANSORDER NUMBER(16),
ERRORCOUNT INTEGER,
ERRORFILE VARCHAR2(50 BYTE),
EMAILSENT VARCHAR2(10 BYTE)
)
begin
for i in 1..50000 loop
insert into marcin_test values (i,'aaaa',i,i,1,'a','a');
end loop;
commit;
end;
/
Testing SQL statement:
select intername, transid, transorder, errorfile
from marcin_test
order by transid, transorder;
Results:
- no primary key – table with 50000 rows
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=523 Card=50000 Bytes=1000000)
1 0 SORT (ORDER BY) (Cost=523 Card=50000 Bytes=1000000)
2 1 TABLE ACCESS (FULL) OF 'MARCIN_TEST' (Cost=95 Card=50000 Bytes=1000000)
- no primary key – table with 100 rows – HWM was set by 50000 rows
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=108 Card=10 Bytes=160)
1 0 SORT (ORDER BY) (Cost=108 Card=10 Bytes=160)
2 1 TABLE ACCESS (FULL) OF 'MARCIN_TEST' (Cost=95 Card=10 Bytes=160)
- primary key – table with 50000 rows
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=523 Card=50000 Bytes=1000000)
1 0 SORT (ORDER BY) (Cost=523 Card=50000 Bytes=1000000)
2 1 TABLE ACCESS (FULL) OF 'MARCIN_TEST' (Cost=95 Card=50000 Bytes=1000000)
- primary key – table with 100 rows – HWM was set by 50000 rows
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=10 Bytes=160)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'MARCIN_TEST' (Cost=3 Card=10 Bytes=160)
2 1 INDEX (FULL SCAN) OF 'T1_PK_2' (UNIQUE) (Cost=2 Card=10)
These examples show that if we have an ORDER BY clause and we can add a primary key to the table, we can avoid a full scan on a table with a very high HWM and reduce the response time for this
query.
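For completeness, the primary key used in the tests above was presumably created on the ORDER BY columns (the constraint name below is taken from the index name in the plan; the column list is my assumption):

alter table marcin_test
  add constraint t1_pk_2 primary key (transid, transorder);

Because an index full scan of T1_PK_2 returns rows already ordered by (transid, transorder), the CBO can drop the SORT (ORDER BY) step, and the small index avoids reading all the empty blocks below the high water mark.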