peter
2005-11-16 21:27:01 UTC
Hey guys,
I thought I'd try to get some feedback on something I'm looking at.
I have a two-table query that is getting a bad execution plan, and one
thing I noticed is that the cardinality estimates for the driving table
are wrong, which then leads to the wrong join.
I've set up a little test to illustrate the problem. I've noticed that
when the MOD function is used in a predicate, it regularly throws off
the optimizer's ability to estimate the number of rows returned from
the table, which in turn causes execution plans to change.
Here is the test I did. I hope it illustrates the problem, and hopefully
you can try it yourselves. I have not yet looked at the 10053 trace.
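(For anyone who wants to look at the optimizer's arithmetic directly, the 10053 trace can be switched on per session before re-running the problem query; the trace file ends up in user_dump_dest. Standard event syntax:)

SQL> alter session set events '10053 trace name context forever, level 1';
SQL> select count(*) from TEST_T1 where product_id = 8000000000 and mod(user_id,2) = 0;
SQL> alter session set events '10053 trace name context off';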
My environment:
============
- Oracle 10.1.0.4 on Solaris 64bit.
- pga_aggregate_target = 1.5 GB
- workarea_size_policy = auto
- db_cache_size = 2GB.
- shared_pool_size = 1GB.
- undo = AUTO.
- optimizer_features_enable=10.1.0
- optimizer_mode = 'ALL_ROWS'
- optimizer_index_caching=80
- optimizer_index_cost_adj=30
- db_file_multiblock_read_count =128
/* Here I set up a table of 10,000 rows where product_id is
'8000000000' for 8,750 records and '4000000000' for the rest.
This is to mimic my real production problem. */
SQL> CREATE TABLE TEST_T1 AS
SELECT ROWNUM+1000000 USER_ID,
DECODE(MOD(ROWNUM,8),0,'4000000000','8000000000') PRODUCT_ID,
0 CONFIRMED, SYSDATE CREATED
FROM ALL_TABLES WHERE ROWNUM <= 10000;
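(As a sanity check on the generator, outside the database, the DECODE(MOD(ROWNUM,8),...) rule can be replayed in a few lines of Python to confirm the expected split:)

```python
from collections import Counter

# Replay DECODE(MOD(ROWNUM,8), 0, '4000000000', '8000000000')
# for ROWNUM 1..10000 to confirm the value distribution.
counts = Counter(
    '4000000000' if rownum % 8 == 0 else '8000000000'
    for rownum in range(1, 10_001)
)
print(counts['4000000000'], counts['8000000000'])  # 1250 8750
```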
/* Here is the distribution I get: */
SQL> select product_id, count(*) from TEST_T1 group by product_id;

PRODUCT_ID |  COUNT(*)
-----------|----------
4000000000 |      1250
8000000000 |      8750
/* Then I create an index, just to mimic my environment. */
SQL> CREATE UNIQUE INDEX TEST_T1_PUC_U
ON TEST_T1(PRODUCT_ID,USER_ID,CONFIRMED);
/* Then I run dbms_stats. */
SQL> exec
dbms_stats.gather_table_stats('MYSCHEMA','TEST_T1',cascade=>TRUE);
TEST QUERIES
=============
SQL> select count(*) from TEST_T1 where product_id = 8000000000;
- Oracle does a full table scan and estimates the cardinality at
about 5,000... I can live with that.
SQL> select count(*) from TEST_T1 where product_id = 8000000000 and
mod(user_id,2) = 0;
- Oracle estimates the cardinality at 50-80 records, depending on
the sample size used by dbms_stats. This estimate is very wrong:
the query really returns 3,750 records.
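(My working theory, and this is an assumption on my part, not something I've confirmed from the trace: with no statistics on the expression mod(user_id,2), the optimizer falls back to a fixed default selectivity of around 1% for an equality predicate on a function of a column. A back-of-the-envelope check in Python lands right next to the estimates I'm seeing:)

```python
# Back-of-the-envelope cardinality check.
# ASSUMPTION: a fixed ~1% default selectivity is applied to
# mod(user_id,2) = 0 because the optimizer has no stats on it.
num_rows = 10_000
sel_product = 8_750 / 10_000   # selectivity of product_id = 8000000000
sel_mod_default = 0.01         # assumed default for the mod() predicate

estimate = num_rows * sel_product * sel_mod_default
print(round(estimate))         # 88 -- same ballpark as the 50-80 I observe

# True selectivity of mod(user_id,2) = 0 among the qualifying rows:
actual_rows = 3_750
print(actual_rows / (num_rows * sel_product))  # ~0.43, vs the assumed 0.01
```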
Can someone help me out here, maybe by testing in a similar environment?
You can probably see that if the optimizer estimates incorrectly,
join orders can be (and probably are) altered.
Why does the optimizer guess the cardinality incorrectly when the
MOD function is used?
--peter