Discussion:
need help on statspack report, Parse CPU to Parse Elapsd %
Joe
2004-03-05 01:42:39 UTC
Permalink
Hi Oracle Experts,

I need some help on performance tuning. Hope someone can give me some hints.
I run the statspack report yesterday, the following is part of the report

Snap Id Snap Time Sessions
------- ------------------ --------
Begin Snap: 4 04-Mar-04 09:00:05 140
End Snap: 13 04-Mar-04 18:00:02 140
Elapsed: 539.95 (mins)

Cache Sizes
~~~~~~~~~~~
db_block_buffers: 60000 log_buffer: 1048576
db_block_size: 8192 shared_pool_size: 600000000

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 12,923.83 10,119.23
Logical reads: 24,560.24 19,230.43
Block changes: 91.95 72.00
Physical reads: 557.54 436.55
Physical writes: 51.18 40.08
User calls: 71.64 56.09
Parses: 25.66 20.09
Hard parses: 0.26 0.21
Sorts: 12.12 9.49
Logons: 0.17 0.14
Executes: 208.73 163.43
Transactions: 1.28

% Blocks changed per Read: 0.37 Recursive Call %: 81.57
Rollback per transaction %: 7.24 Rows per Sort: 62.75

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.99
Buffer Hit %: 97.73 In-memory Sort %: 99.97
Library Hit %: 99.83 Soft Parse %: 98.98
Execute to Parse %: 87.71 Latch Hit %: 99.95
Parse CPU to Parse Elapsd %: 1.80 % Non-Parse CPU: 98.94

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 21.78 72.57
% SQL with executions>1: 45.64 60.15
% Memory for SQL w/exec>1: 74.31 73.98

Everything seems fine (if it is not, please tell me), except the Parse CPU to Parse Elapsd %. It is only 1.80%, which is far below the target of 100%, and I think that is unreasonably low. I am wondering if this is somehow related to Oracle's high CPU utilization (I find the Oracle processes taking up an unreasonably large share of CPU on my system).

I want to know

1. What does Parse CPU to Parse Elapsd % mean?
2. How can I improve it?

Thanks for any input.

Best Regards,
Joe
srivenu
2004-03-05 10:48:36 UTC
Permalink
Joe,
The duration of your statspack snapshot (about 9 hrs) is high; normally
Statspack reports are generated for a half-hour window during peak
load.
I think you are running 8i.
A low value for Parse CPU to Parse Elapsd % means that during
parsing, the server process is waiting for something, most likely
contending for a latch (the shared pool and library cache latches).
But in your case this is not much of a disadvantage as you have a high
% Non-Parse CPU (98.94 %).
You should have also shown the Top 5 Wait Events (Top 5 Timed Events
from 9.2) and the latch information from the statspack.
You also have a high Soft Parse % - 98.98%.
So most of your parse calls are resulting in soft parses.
In such cases, setting the SESSION_CACHED_CURSORS parameter helps
(you can start with a value of 150).
You may also want to look at the code to see why there are so many
parse calls.
Ideally you should parse once and execute many times.
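A quick way to see which statements drive those parse calls is a sketch like this against V$SQLAREA (the 1,000-call threshold is arbitrary; adjust it for your load):

```sql
-- Statements whose parse count is close to their execution count are
-- re-parsed on nearly every call: good candidates for cursor caching.
select parse_calls,
       executions,
       substr(sql_text, 1, 60) sql_text
  from v$sqlarea
 where parse_calls > 1000
 order by parse_calls desc;
```

You can also try cursor caching in one session first (ALTER SESSION SET SESSION_CACHED_CURSORS = 150) before touching init.ora.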
regards
Srivenu
Joe
2004-03-05 16:50:57 UTC
Permalink
Hi Srivenu,

Thanks a lot for your reply.
Yes, I am using 8.1.7.4, and I am experiencing contention in the following areas

Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
enqueue 33,290 10,195,809 57.13
latch free 1,887,651 6,092,953 34.14
db file scattered read 1,788,217 537,596 3.01
db file sequential read 1,589,366 369,621 2.07
log file sync 32,896 164,777 .92
-------------------------------------------------------------

I understand wait events 3-5 and am able to handle them. However, for the enqueue and latch free waits, I don't have much idea how to improve them.
I did read on Metalink that setting spin_count may help reduce latch free waits. However, spin_count is an undocumented parameter, and I don't have much information on it. I want to reduce the value of this parameter, but I don't even know its current value; I can't find it in the v$system_parameter view.

Can you give me some advice on the latch free and enqueue waits?
Any input is appreciated.
Thanks a lot.

Regards,
Joe
Post by srivenu
[snip]
Ron
2004-03-05 18:35:35 UTC
Permalink
Hello Joe,

Can you please post whole Statspack report. This would make analysis much easier.

Regards,

Ron
DBA Infopower
http://www.dbainfopower.com
Standard disclaimer: http://www.dbainfopower.com/dbaip_advice_disclaimer.html

"Joe" <***@netvigator.com> wrote in message news:c2ab5g$***@imsp212.netvigator.com...
[snip]
Mark
2004-03-06 00:26:19 UTC
Permalink
I agree with Ron. You need to post the entire file. Under 8i the top
5 events don't mean much unless you can see the CPU time used later in
the report.

Mark Simmons
Sr. Oracle DBA
Sabre-Holdings, Southlake, TX
Post by Ron
[snip]
--
Brian Peasland
2004-03-05 19:55:15 UTC
Permalink
Post by Joe
[snip]
Joe,

What are you trying to tune? What goal do you have in mind? What is your
stopping condition? One should rarely tune just to tune. You mentioned
that Oracle was using large amounts of CPU. Have you tracked which
session(s) are using the CPU? Or is it a background process that is
using the CPU?

I just think there are too many possibilities here, and more information is
needed.

Cheers,
Brian
DJ
2004-03-06 17:54:42 UTC
Permalink
Post by Joe
Hi Oracle Experts,
Attached please find the statspack report I generated on 04-MAR-2004.
We are running Oracle eBusiness Suite on AIX.
The users complain about the performance of the system, and I have started to
investigate what is causing this.
In the investigation process, I found that the Oracle processes are taking
unreasonably high CPU resources.
My goal is to reduce this unreasonably high CPU usage.
I have already tried my best to spread the I/O.
Any input is appreciated.
Thanks.
Regards,
Joe
where is the statspack report from when it was running normally?
VC
2004-03-06 20:23:05 UTC
Permalink
Hello Joe,
Post by Joe
Hi Oracle Experts,
Attached please find the statspack report I generated on 04-MAR-2004.
We are running Oracle eBuisness Suite on AIX.
A cursory look at your statspack shows:

1. High enqueue wait time. The usual cause is that several sessions are
trying to lock the same row (SELECT FOR UPDATE, probably?), and the average
wait time is 3 seconds per enqueue. Other reasons may be bitmap index
updates/unindexed foreign keys/lack of ITL slots, in decreasing order of
probability.

2. The second highest event, latching, is probably caused by some SQLs not
using bind variables (library cache latching).

3. The percentage of rollbacks is quite high: 7.24%

4. The CPU used by the session is below waits for enqueue/latches (1,509,296
cs).

5. You probably have an 8-cpu machine.
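On the unindexed foreign keys: a sketch like this (run as the application owner) lists FK columns that have no index starting on the same column:

```sql
-- FK constraint columns with no matching leading index column
select c.table_name, cc.column_name
  from user_constraints c, user_cons_columns cc
 where c.constraint_type = 'R'
   and cc.constraint_name = c.constraint_name
   and not exists (select null
                     from user_ind_columns ic
                    where ic.table_name = cc.table_name
                      and ic.column_name = cc.column_name
                      and ic.column_position = cc.position)
 order by c.table_name;
```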

Val
DJ
2004-03-06 21:31:01 UTC
Permalink
Post by Ron
[snip]
how do you know that isn't perfectly normal in his system?

you need something to compare against, otherwise you are just guessing and
probably leading the OP up the garden path
VC
2004-03-06 22:03:45 UTC
Permalink
Hello,
Post by VC
1. High enqueue wait time. The usual cause is that several sessions are
trying to lock the same row (SELECT FOR UPDATE probably?) and the average
wait time is 3 seconds per enqueue. Other reasons may be bitmap indexes
update/unindexed foreign keys/lack of ITL slots, in the order of
decreasing probability.
This one (3 seconds per enqueue) is clearly abnormal. Wouldn't you agree?
Post by VC
3. The percentage of rollbacks is quite high: 7.24%
As a rollback is an expensive operation, this one should be taken care of
too, whatever the 'normal' baseline performance is.


VC
Joe
2004-03-07 03:42:05 UTC
Permalink
Thanks a lot for the information from VC and DJ.
It is the Oracle eBusiness Suite, and Oracle does not recommend that we make
changes to the whole system.
And in fact we don't have enough manpower to change it.
So I can only work at the database level, before I seriously consider
upgrading to a later version for performance improvement, if any........

Something that may not show up in the statspack report is the very high
CPU utilization by Oracle.
After searching Metalink, I found that latch free waits consume
CPU, and I suspect this is one of the reasons for the high CPU utilization. The
SPIN_COUNT parameter in init.ora may help to improve the situation a bit.
However, SPIN_COUNT is an undocumented parameter; I can't find much
information about it, and I don't even know how to get its current value (I
can't find it in the v$system_parameter view).

Any advice on reducing the enqueue waits and latch free waits? These two
numbers are making me uncomfortable.

Any input is greatly appreciated.
Thanks a lots.

Best Regards,
Joe




"VC" <***@hotmail.com> Šb¶l¥ó news:5Vr2c.129345$***@attbi_s52 €€
Œ¶Œg...
Post by VC
[snip]
VC
2004-03-07 06:34:59 UTC
Permalink
Joe,
Post by Joe
Thanks a lots for the information of DV and DJ.
It is the Oracle eBusiness Suite, and Oracle not recommend us to make
changes to the whole system.
And in fact we don't have enough manpower to change that.
Any advise for me to reduce the enqueue wait and latch free wait. These 2
numbers are making me uncomfortable.
Firstly, you need to produce a report for a smaller interval (10-15 min)
during a typical load. A nine-hour report is hardly useful.

Pls. re-read what I wrote about enqueues. They are not a problem but a
symptom of a possible problem with sessions locking the same
row(s). If this is the case, you need to fix the application. In order
to determine who blocks whom, you can use the following statement:

select (select username from v$session where sid = a.sid) blocker,
       a.sid,
       ' is blocking ',
       (select username from v$session where sid = b.sid) blocked,
       b.sid
  from v$lock a, v$lock b
 where a.block = 1
   and b.request > 0
   and a.id1 = b.id1
   and a.id2 = b.id2;

If you execute the above repeatedly during a typical load, you may catch
blocking/blocked sessions and decide what to do about it.
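To see what kind of enqueue the sessions are actually waiting on, the lock type and mode can be decoded from the wait parameters (the bitand arithmetic below is the standard decoding of P1 for the 'enqueue' event):

```sql
-- Decode the two-character lock type (e.g. TX, TM) and requested mode
select sid,
       chr(bitand(p1, -16777216) / 16777215) ||
       chr(bitand(p1, 16711680) / 65535) lock_type,
       bitand(p1, 65535) requested_mode
  from v$session_wait
 where event = 'enqueue';
```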

Latches, as I wrote, appear to be a symptom of some SQL statements not
using bind variables (library cache latches). If it's a packaged
application, you can hardly do much about it.
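Even in a packaged application it is worth confirming the diagnosis. A crude sketch that flags families of literal SQL: statements sharing their first 40 characters but occupying separate shared pool entries (the 20-copy threshold is arbitrary):

```sql
select substr(sql_text, 1, 40) sql_head,
       count(*) copies
  from v$sqlarea
 group by substr(sql_text, 1, 40)
having count(*) > 20
 order by copies desc;
```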

According to the statspack report you posted, CPU is not a bottleneck:

enqueue                    10,195,809
latch free                  6,092,953
CPU                         1,509,296   <-- that's your CPU
db file scattered read        537,596
db file sequential read       369,621

Again, pls. post data collected over a smaller period of time (10-15 min)
in order to see what's going on. The picture with CPU and the rest will
probably be different from what your nine-hour report shows.
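Regarding the SPIN_COUNT question earlier in the thread: in 8i the parameter is hidden (_spin_count), so it does not appear in V$SYSTEM_PARAMETER. As SYS, hidden parameter values can be read from the X$ tables (a sketch; changing them without Oracle Support's guidance is generally a bad idea):

```sql
-- Hidden parameter names and current values, as SYS
select i.ksppinm name,
       v.ksppstvl value
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm like '%spin_count%';
```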


VC
Joe
2004-03-07 06:45:03 UTC
Permalink
yes,
I got your point.
Thanks a lot for your advice :)

Joe


"VC" <***@hotmail.com> Šb¶l¥ó news:noz2c.196154$***@attbi_s51
€€Œ¶Œg...
Post by VC
[snip]
boa
2004-03-07 07:18:12 UTC
Permalink
Post by VC
1. High enqueue wait time. The usual cause is that several sessions are
trying to lock the same row (SELECT FOR UPDATE probably?) and the average
wait time is 3 seconds per enqueue. Other reasons may be bitmap indexes
update/unindexed foreign keys/lack of ITL slots, in the order of decreasing
probability.
Another reason may be a freelist shortage. We have an application where
lots of users insert new rows into the same tables. The PK is generated
from a sequence. We had lots of enqueue waits and some Buffer Busy
Waits/Data Block waits. Adding more freelists as well as more ITLs
solved most of the enqueue waits.

The PK indexes were also a problem. Adding ITLs (INITRANS), more
freelists and rebuilding the indexes as reverse-key indexes helped.
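For the record, the changes were along these lines (a sketch; the table and index names are made up, and the values depend on your insert concurrency):

```sql
-- more freelists on a hot insert table
alter table orders storage (freelists 8);

-- more ITL slots, and reverse-key to spread sequence-based PK inserts
alter index orders_pk rebuild reverse initrans 8;
```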

Bjørn
[snip]
VC
2004-03-07 13:41:59 UTC
Permalink
Hello Bjorn,
Post by boa
[snip]
Another reason may be freelist shortage. We have an application where
lots of users insert new rows to the same tables. The PK is generated
from a sequence. We had lots of enqueue waits and some Buffer Busy
Waits/Data Block waits. Adding more freelists as well as more ITL's
solved most of the enqueue waits.
The enqueue waits have got nothing to do with freelists. The OP does not
have buffer busy waits in his top 5 wait events, therefore tweaking freelists
appears to be quite unnecessary. ITLs may or may not be a problem, which will
become clear after checking for the more common causes I mentioned in my
earlier posting. The 3-second enqueue waits indicate a possible
application problem.
Post by boa
Bjørn
[snip]
boa
2004-03-07 14:41:01 UTC
Permalink
Post by VC
The enqueue waits have got nothing to do with freelists.
[snip]
I guess you're right; I just wanted to share my experience. Almost all of
our enqueue waits (and we had a lot) disappeared when we added more
freelists to a few busy tables. I tracked both the number of enqueue
waits and buffer busy/data block waits for weeks, as well as the
correlation between them. The events correlated a lot, even if we didn't
see buffer busy wait events on the Top 5 list.

Bjørn