Thursday, 10 March 2011

ORA-29770 LMHB Terminates Instance as LMON Waited for Control File IO for too Long

Applies to: [ID 1197674.1]

Oracle Server - Enterprise Edition - Version: 11.1.0.6 to 11.2.0.1 - Release: 11.1 to 11.2
Information in this document applies to any platform.

Symptoms

Instance crashes with messages like the following:
Wed Sep 09 03:24:14 2009
LMON (ospid: 31216) waits for event 'control file sequential read' for 88 secs.
Wed Sep 09 03:24:29 2009
Errors in file /oracle/base/diag/rdbms/prod/prod3/trace/prod3_lmhb_31304.trc (incident=2329):
ORA-29770: global enqueue process LMON (OSID 31216) is hung for more than 70 seconds
Incident details in: /oracle/base/diag/rdbms/prod/prod3/incident/incdir_2329/prod3_lmhb_31304_i2329.trc
Wed Sep 09 03:24:39 2009
ERROR: Some process(s) is not making progress.
LMHB (ospid: 31304) is terminating the instance.

OR:

Mon Jan 10 14:23:00 2011
LMON (ospid: 8594) waits for event 'control file sequential read' for 87 secs.
Mon Jan 10 14:23:31 2011
LMON (ospid: 8594) waits for event 'control file sequential read' for 118 secs.
ERROR: LMON is not healthy and has no heartbeat.
ERROR: LM** (ospid: 8614) is terminating the instance.


Cause

A critical RAC background process (LMON in this case) did not participate in the heartbeat for longer than the default threshold of 70 seconds because it was waiting on control file I/O.

Solution

Control file I/O can take longer than the default threshold under some circumstances. Bug 8888434, which affects 11.1 and 11.2.0.1 and is fixed in 11.2.0.2, prevents the instance from being terminated in this situation.
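
To confirm that you are hitting this scenario rather than a genuine hang, check the
alert log and the LMHB trace files for the long control file waits reported against
LMON. A minimal shell sketch; the ADR path below is taken from the example above and
must be adjusted to your own environment:

# Hedged sketch: confirm the long control file waits and the ORA-29770 incident.
ADR_TRACE=/oracle/base/diag/rdbms/prod/prod3/trace   # adjust to your ADR home
grep -n "waits for event 'control file" $ADR_TRACE/alert_prod3.log
grep -l "ORA-29770" $ADR_TRACE/*lmhb*.trc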

Troubleshoot the ORA-29740 error in a Real Application Clusters Environment

Applies to:

Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.1.0.8 - Release: 9.2 to 11.1
Information in this document applies to any platform.

Purpose

This note was created to troubleshoot the ORA-29740 error in a Real Application
Clusters environment.

Last Review Date

January 22, 2010

Instructions for the Reader

A Troubleshooting Guide is provided to assist in debugging a specific issue. When possible, diagnostic tools are included in the document to assist in troubleshooting.

Troubleshooting Details

Troubleshooting ORA-29740 in a RAC Environment
==============================================

An ORA-29740 error occurs when a member is evicted from the group by another
member of the cluster database. Possible reasons include a communications
error in the cluster, a failure to issue a heartbeat to the control file, and
others. This mechanism is in place to prevent problems that would affect the
entire database. For example, instead of allowing a cluster-wide hang to
occur, Oracle evicts the problematic instance(s) from the cluster. When an
ORA-29740 error occurs, a surviving instance removes the problem instance(s)
from the cluster.
When the problem is detected the instances 'race' to get a lock on the
control file (Results Record lock) for updating. The instance that obtains
the lock tallies the votes of the instances to decide membership. A member
is evicted if:

a) A communications link is down
b) There is a split-brain (more than 1 subgroup) and the member is
not in the largest subgroup
c) The member is perceived to be inactive

Sample message in Alert log of the evicted instance:

Fri Sep 28 17:11:51 2001
Errors in file /oracle/export/TICK_BIG/lmon_26410_tick2.trc:
ORA-29740: evicted by member %d, group incarnation %d
Fri Sep 28 17:11:53 2001
Trace dumping is performing id=[cdmp_20010928171153]
Fri Sep 28 17:11:57 2001
Instance terminated by LMON, pid = 26410

The key to resolving the ORA-29740 error is to review the LMON trace files
from each of the instances. On the evicted instance we will see something
like:

*** 2002-11-20 18:49:51.369
kjxgrdtrt: Evicted by 0, seq (3, 2)

The "Evicted by 0" indicates which instance initiated the eviction (here,
instance 0).

On the evicting instance we will see something like:

kjxgrrcfgchk: Initiating reconfig, reason 3
*** 2002-11-20 18:49:29.559
kjxgmrcfg: Reconfiguration started, reason 3

...
*** 2002-11-20 18:49:29.727
Obtained RR update lock for sequence 2, RR seq 2
*** 2002-11-20 18:49:31.284
Voting results, upd 0, seq 3, bitmap: 0
Evicting mem 1, stat 0x0047 err 0x0002

You can see above that the instance initiated a reconfiguration for reason 3
(see Note 139435.1 for more information on reconfigurations). The
reconfiguration is then started and this instance obtained the RR lock
(Results Record lock) which means this instance will tally the votes of the
instances to decide membership. The last lines show the voting results then
this instance evicts instance 1.
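
A quick way to locate these markers across every instance's LMON trace file is a
simple grep. This is only a sketch; the trace directory below is an example (bdump
in 9i/10g, the ADR trace directory in 11g) and must be adjusted:

# Hedged sketch: pull the reconfiguration and eviction markers from LMON traces.
for f in /oracle/admin/PROD/bdump/*lmon*.trc; do
  echo "== $f"
  grep -E "kjxgrdtrt|kjxgrrcfgchk|kjxgmrcfg|Obtained RR|Voting results|Evicting mem" "$f"
done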

For troubleshooting ORA-29740 errors, the 'reason' will be very important.
In the above example, the first section indicates the reason for the
initiated reconfiguration. The reasons are as follows:



Reason 0 = No reconfiguration
Reason 1 = The Node Monitor generated the reconfiguration.
Reason 2 = An instance death was detected.
Reason 3 = Communications Failure
Reason 4 = Reconfiguration after suspend

For ORA-29740 errors, you will most likely see reasons 1, 2, or 3.

-----------------------------------------------------------------------------

Reason 1: The Node Monitor generated the reconfiguration. This can happen if:

a) An instance joins the cluster
b) An instance leaves the cluster
c) A node is halted

It should be easy to determine the cause of the error by reviewing the alert
logs and LMON trace files from all instances. If an instance joins or leaves
the cluster or a node is halted then the ORA-29740 error is not a problem.

ORA-29740 evictions with reason 1 are usually expected when the cluster
membership changes. Very rarely are these types of evictions a real problem.

If you feel that this eviction was not correct, do a search in Metalink or
the bug database for:

ORA-29740 'reason 1'

Important files to review are:

a) Each instance's alert log
b) Each instance's LMON trace file
c) Statspack reports from all nodes leading up to the eviction
d) Each node's syslog or messages file
e) iostat output before, after, and during evictions
f) vmstat output before, after, and during evictions
g) netstat output before, after, and during evictions

There is a tool called "OS Watcher" that helps gather this information. For more
information on "OS Watcher" see Note 301137.1 "OS Watcher User Guide".


-----------------------------------------------------------------------------

Reason 2: An instance death was detected. This can happen if:

a) An instance fails to issue a heartbeat to the control file.

When the heartbeat is missing, LMON will issue a network ping to the instance
not issuing the heartbeat. As long as the instance responds to the ping,
LMON will consider the instance alive. If, however, the heartbeat is not
issued for the length of time of the control file enqueue timeout, the
instance is considered to be problematic and will be evicted.
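
Because this reason code revolves around the control file heartbeat, it is worth
checking whether the evicted instance's CKPT process was stalled on control file I/O
around the eviction time. A minimal sketch; the bdump path is only an example (use
the ADR trace directory in 11g):

# Hedged sketch: look for control file waits in the CKPT trace and the alert log.
BDUMP=/oracle/admin/PROD/bdump      # adjust to your environment
grep -n "control file" $BDUMP/*ckpt*.trc
grep -in "heartbeat" $BDUMP/alert_*.log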


Common causes for an ORA-29740 eviction (Reason 2):

a) NTP (Time changes on cluster) - usually on Linux, Tru64, or IBM AIX
b) Network Problems (SAN).
c) Resource Starvation (CPU, I/O, etc..)
d) An Oracle bug.



Important files to review are:

a) Each instance's alert log
b) Each instance's LMON trace file
c) Statspack reports from all nodes leading up to the eviction
d) The CKPT process trace file of the evicted instance
e) Other bdump or udump files...
f) Each node's syslog or messages file
g) iostat output before, after, and during evictions
h) vmstat output before, after, and during evictions
i) netstat output before, after, and during evictions

There is a tool called "OS Watcher" that helps gather this information. For more
information on "OS Watcher" see Note 301137.1 "OS Watcher User Guide".

-----------------------------------------------------------------------------

Reason 3: Communications Failure. This can happen if:

a) The LMON processes lose communication between one another.
b) One instance loses communications with the LMS or LMD process of
another instance.
c) The LCK processes lose communication between one another.
d) A process like LMON, LMD, LMS, or LCK is blocked, spinning, or stuck
and is not responding to remote requests.

In this case the ORA-29740 error is recorded when there are communication
issues between the instances. It is an indication that an instance has been
evicted from the configuration as a result of an IPC send timeout. A
communications failure between processes across instances will also generate an
ORA-29740 with reason 3. When this occurs, the trace file of the process
experiencing the error will print a message:

Reporting Communication error with instance:

If communication is lost at the cluster layer (for example, network cables
are pulled), the cluster software may also perform node evictions in the
event of a cluster split-brain. Oracle will detect a possible split-brain
and wait for cluster software to resolve the split-brain. If cluster
software does not resolve the split-brain within a specified interval,
Oracle proceeds with evictions.


Oracle Support has seen cases where resource starvation (CPU, I/O, etc...) can
cause an instance to be evicted with this reason code. The LMON or LMD process
could be blocked waiting for resources and not respond to polling by the remote
instance(s). This could cause that instance to be evicted. If you have
a statspack report available from the time just prior to the eviction on the
evicted instance, check for poor I/O times and high CPU utilization. Poor I/O
times would be an average read time of > 20ms.
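
A rough ongoing check of average data file read times can also be taken straight
from V$FILESTAT. A minimal sketch; note that READTIM is in hundredths of a second
and the counters are cumulative since instance startup, so compare two samples
taken a few minutes apart rather than trusting a single run:

# Hedged sketch: average read time per data file, in milliseconds.
sqlplus -s / as sysdba <<'EOF'
SET LINESIZE 120 PAGESIZE 100
SELECT file#,
       phyrds,
       ROUND(readtim * 10 / GREATEST(phyrds, 1), 2) AS avg_read_ms
FROM   v$filestat
ORDER  BY avg_read_ms DESC;
EXIT;
EOF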

Common causes for an ORA-29740 eviction (Reason 3):

a) Network Problems.
b) Resource Starvation (CPU, I/O, etc..)
c) Severe Contention in Database.
d) An Oracle bug.



Tips for tuning inter-instance performance can be found in the following note:

Note 181489.1
Tuning Inter-Instance Performance in RAC and OPS

Important files to review are:

a) Each instance's alert log
b) Each instance's LMON trace file
c) each instance's LMD and LMS trace files
d) Statspack reports from all nodes leading up to the eviction
e) Other bdump or udump files...
f) Each node's syslog or messages file
g) iostat output before, after, and during evictions
h) vmstat output before, after, and during evictions
i) netstat output before, after, and during evictions

There is a tool called "OS Watcher" that helps gather this information. For more
information on "OS Watcher" see Note 301137.1 "OS Watcher User Guide".


Tuesday, 8 March 2011

Troubleshooting 10g and 11.1 Clusterware Reboots


A node will reboot in a RAC cluster if there is an ocssd.bin problem or failure, if
the oprocd daemon detects a scheduling problem, or if some other fatal problem
occurs. This functionality is used for I/O fencing: it ensures that writes from
I/O-capable clients can be cleared, avoiding potential corruption scenarios in the
event of a network split, node hang, or some other fatal event.

1.0 - PROCESS ROLES FOR REBOOTS


OCSSD (aka CSS daemon) - This process is spawned in init.cssd. It runs in both
vendor clusterware and non-vendor clusterware environments and is armed with a
node kill via the init script. OCSSD's primary job is internode health monitoring
and RDBMS instance endpoint discovery. It runs as the Oracle user.

PS Output:
oracle 686 0.0 0.23207216608 ? S 11:42:42 0:12 /oracle/10g/crs/bin/ocssd.bin

INIT.CSSD - In a normal environment, init spawns init.cssd, which in turn spawns
OCSSD as a child. If ocssd dies or is killed, the node kill functionality of the
init script will kill the node. If the init.cssd script itself is killed, its ocssd
survives and continues operating. However, init has been instructed to respawn
init.cssd via inittab. When it does so, the second init.cssd will attempt to start
its own ocssd; that ocssd starts up, finds that its endpoint is owned by the first
ocssd, and fails, and the second init.cssd then kills the node.

PS Output:
root 635 0.0 0.0 1120 840 ? S 11:41:41 0:00 /bin/sh /etc/init.d/init.cssd fatal
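
To see exactly how init has been told to (re)spawn these daemons, look at the CRS
entries in inittab; a small sketch (entry names can differ slightly by platform and
release):

# Hedged sketch: list the CRS-related respawn entries in /etc/inittab.
grep -E "init\.(evmd|cssd|crsd)" /etc/inittab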

OPROCD - This process is spawned in any non-vendor clusterware environment, except
on Windows, where Oracle uses a kernel driver to perform the same actions, and on
Linux prior to version 10.2.0.4. If oprocd detects problems, it will kill the node
via C code. It is spawned in init.cssd and runs as root. This daemon is used to
detect hardware and driver freezes on the machine. If a machine were frozen for long
enough that the other nodes evicted it from the cluster, it needs to kill itself to
prevent any I/O from being reissued to the disk after the rest of the cluster has
remastered locks.

PS Output:
root 684 0.0 0.0 2240 968 ? S 11:42:42 0:00 /oracle/10g/crs/bin/oprocd start -t 1000 -m 50

OCLSOMON (10.2.0.2 and above) - This process monitors the CSS daemon for hangs or
scheduling issues and can reboot a node if there is a perceived hang.

2.0 - DETERMINING WHICH PROCESS IS RESPONSIBLE FOR A REBOOT


* Messages file locations:
Sun: /var/adm/messages
HP-UX: /var/adm/syslog/syslog.log
Tru64: /var/adm/messages
Linux: /var/log/messages
IBM: /bin/errpt -a > messages.out

** CSS log locations:
11.1 and 10.2: <CRS_HOME>/log/<node name>/cssd
10.1: <CRS_HOME>/css/log

*** Oprocd log locations:
In /etc/oracle/oprocd or /var/opt/oracle/oprocd depending on version/platform.

Note that oprocd only runs when no vendor clusterware is in use and, on Linux, only from 10.2.0.4 onward.

3.0 - TROUBLESHOOTING OCSSD REBOOTS


If you have encountered an OCSSD reboot, review common causes in section 3.1 below.
If the problem cannot be determined by reviewing the common causes, review and
collect the data from section 3.3.

3.1 - COMMON CAUSES OF OCSSD REBOOTS


- Network failure or latency between nodes. It would take at least 30 consecutive
missed checkins to cause a reboot, where heartbeats are issued once per second.

Example of missed checkins in the CSS log:

WARNING: clssnmPollingThread: node <node> (1) at 50% heartbeat fatal, eviction in 29.100 seconds
WARNING: clssnmPollingThread: node <node> (1) at 75% heartbeat fatal, eviction in 14.960 seconds
WARNING: clssnmPollingThread: node <node> (1) at 75% heartbeat fatal, eviction in 13.950 seconds

The first thing to do is find out if the missed checkins ARE the problem or are a
result of the node going down due to other reasons. Check the messages file to see
what exact time the node went down and compare it to the time of the missed checkins.

- If the messages file reboot time < missed checkin time then the node eviction was
likely not due to these missed checkins.

- If the messages file reboot time > missed checkin time then the node eviction was
likely a result of the missed checkins.
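
One way to line the two up is to pull the timestamps from both files side by side.
A sketch for Linux with a 10.2/11.1 CRS home; the CRS_HOME path and the assumption
that the node name matches `hostname` must be adjusted to your environment:

# Hedged sketch: compare missed-checkin warnings with OS reboot messages.
CRS_HOME=/u01/app/crs
NODE=`hostname`
grep -n "heartbeat fatal" $CRS_HOME/log/$NODE/cssd/ocssd.log | tail -20
grep -niE "restart|reboot|shutdown" /var/log/messages | tail -20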


- Problems writing to or reading from the CSS voting disk.

Example of a voting disk problem in the CSS log:

ERROR: clssnmDiskPingMonitorThread: voting device access hanging (160008 miliseconds)

- Lack of CPU resources. Some situations appear to be missed heartbeat
issues but turn out to be caused by a sustained high load average. When a
machine gets too heavily loaded, scheduling reliability can suffer. This can
cause CSS not to get scheduled in time, so CSS cannot get its work done. If
this happens, the node is declared not viable for cluster work and is evicted.

- A problem with the executables (for example, removing CRS Home files)

- Misconfiguration of CRS. Possible misconfigurations:

- Wrong network selected as the private network for CRS (confirm with CSS log,
/etc/hosts, and ifconfig output). Make sure it is not the public or VIP
address. Look in the CSS log for strings like...
clsc_listen: (*) Listening on
(ADDRESS=(PROTOCOL=tcp)(HOST=dlsun2046)(PORT=61196))

- Putting the CSS vote file on a Netapp that's shared over some kind of public
network or otherwise excessively loaded/unreliable network. If this is the
case, you are likely to see the following message in the CSS logfile:

ERROR: clssnmDiskPingThread(): Large disk IO timeout * seconds.

If you ever see this error, then it's important to investigate why the disk
subsystem is unresponsive.

See section 3.2 for information on how to correct common misconfiguration
problems.

- Killing the "init.cssd fatal" process or "ocssd" process.

- An unexpected failure of the OCSSD process; this can be caused by any of the
above issues.

- An Oracle bug. Known bugs that can cause CSS reboots:



Note 264699.1 - CSS Fails to Flush Writes After Installing 10.1.0.2 CRS on Linux with OCFS
Workaround: Put OCR and CSS Voting files on raw devices
Fixed in OCFS 1.0.11 and above.

Bug 3942568 - A deadlock can occur between 2 threads of the CSS daemon process.
Fixed in 10.1.0.4 and above.




SOLARIS ONLY: See these bugids that fixed the problem (in Solaris 9; the fixes were
backported to Solaris 8 Update 6):
Bug 4308370 cond_timedwait(), sigtimedwait(), poll() and /proc time out too soon
Bug 4391799 Fix for BugID 4308370 causes timeout failures when system time is reset


3.2 - FILES TO REVIEW AND GATHER FOR OCSSD REBOOTS


If logging a service request, please provide ALL of the following files to Oracle
Support if possible:

- All the files in the following directories from all nodes.

For 10.2 and above, all files under:

<CRS_HOME>/log

Recommended method for gathering these for each node would be to run the
diagcollection.pl script.
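
For reference, running the collection script typically looks like the sketch below.
Run it as root on each node, and check the script's usage output first, since the
exact flags vary between versions:

# Hedged sketch: package the <CRS_HOME>/log tree with diagcollection.pl (as root).
cd $CRS_HOME/bin        # substitute your actual CRS home
./diagcollection.pl --collect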

For 10.1:

<CRS_HOME>/crs/log
<CRS_HOME>/crs/init
<CRS_HOME>/css/log
<CRS_HOME>/css/init
<CRS_HOME>/evm/log
<CRS_HOME>/evm/init
<CRS_HOME>/srvm/log

Recommended method for gathering these for each node:

cd <CRS_HOME>
tar cf crs.tar crs/init crs/log css/init css/log evm/init evm/log srvm/log

- Messages or Syslog from all nodes from the time of the problem:

Sun: /var/adm/messages
HP-UX: /var/adm/syslog/syslog.log
Tru64: /var/adm/messages
Linux: /var/log/messages
IBM: /bin/errpt -a > messages.out

- If a core file was written, it would be useful to obtain a stack trace of the
core file using Note 1812.1 "TECH Getting a Stack Trace from a CORE file".
Core files are usually written in one of the following directories:

<CRS_HOME>/crs/init
<CRS_HOME>/css/init
<CRS_HOME>/evm/init

You should also check all threads of the core file and get a stack trace for each.
Note 118252.1 has information on gathering multiple threads.

- OCR dump file - To get this, cd to <CRS_HOME>/bin as the root user and issue
"ocrdump <unique filename>". This will generate two files (ocrdump.log and a
dump file with the name given).
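
For reference, a minimal sketch of the dump step (run as root; the dump file name is
only an example):

# Hedged sketch: dump the OCR contents for analysis.
cd $CRS_HOME/bin                    # substitute your actual CRS home
./ocrdump ocrdump_`hostname`.dmp    # also writes ocrdump.log alongside the dump
ls -l ocrdump.log ocrdump_*.dmp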

- 'opatch lsinventory -detail' output for the CRS home

- Back up the scls_scr directory, inittab, and hosts file for analysis with:

Sun, HP-UX, HP Tru64:

cd /
tar cf /var/backup/oraclecrs.tar var/opt/oracle/scls_scr etc/hosts etc/inittab

Linux, IBM-AIX:

cd /
tar cf /var/backup/oraclecrs.tar etc/oracle/scls_scr etc/hosts etc/inittab


- Ifconfig output from each node (ifconfig -a on unix platforms).
This can be useful in determining whether the private interconnect names
resolve to the same IP on all nodes, and whether the public IP is being used
as the private interconnect address. In some cases it may also be useful to
get nslookup output for each private interconnect name: if some layer is
attempting to connect across the private interconnect and /etc/hosts and the
nameserver can return different results, it can cause quite a bit of confusion.

If it looks like there are network issues, asking for a detailed map of the
network wiring is appropriate and can be handy.

- It would also be useful to get the following from each node leading up to the time
of the reboot:

- netstat -is (or equivalent)
- iostat -x (or equivalent)
- vmstat (or equivalent)
- ping -s (or equivalent) output of the private network

There is a tool called "OS Watcher" that helps gather this information. This tool
dumps netstat, vmstat, iostat, and other output at a set interval and saves a
configurable number of hours of archived data. For more information about this tool
see Note 301137.1.

4.0 - TROUBLESHOOTING OPROCD REBOOTS


If you have encountered an OPROCD reboot, review common causes in section 4.1 below.
If the problem cannot be determined by reviewing the common causes, review and
collect the data from section 4.2.

4.1 - COMMON CAUSES OF OPROCD REBOOTS


- A problem detected by the OPROCD process. This can be caused by 4 things:

1) An OS scheduler problem.
2) The OS is getting locked up in a driver or hardware.
3) Excessive amounts of load on the machine, thus preventing the scheduler from
behaving reasonably.
4) An Oracle bug.

OPROCD Bugs Known to Cause Reboots:


Bug 5015469 - OPROCD may reboot the node whenever the system date is moved
backwards.
Fixed in 10.2.0.3+

Bug 4206159 - Oprocd is prone to time regression due to current API used (AIX only)
Fixed in 10.1.0.3 + One off patch for Bug 4206159.


Diagnostic Fixes (VERY NECESSARY IN MOST CASES):

Bug 5137401 - Oprocd logfile is cleared after a reboot
Fixed in 10.2.0.4+

Bug 5037858 - Increase the warning levels if a reboot is approaching
Fixed in 10.2.0.3+

4.2 - FILES TO REVIEW AND GATHER FOR OPROCD REBOOTS


If logging a service request, please provide ALL of the following files to Oracle
Support if possible:

- Oprocd logs in /etc/oracle/oprocd or /var/opt/oracle/oprocd depending on version/platform.

- All the files in the following directories from all nodes.

For 10.2 and above, all files under:

<CRS_HOME>/log

Recommended method for gathering these for each node would be to run the
diagcollection.pl script.

For 10.1:

<CRS_HOME>/crs/log
<CRS_HOME>/crs/init
<CRS_HOME>/css/log
<CRS_HOME>/css/init
<CRS_HOME>/evm/log
<CRS_HOME>/evm/init
<CRS_HOME>/srvm/log

Recommended method for gathering these for each node:

cd <CRS_HOME>
tar cf crs.tar crs/init crs/log css/init css/log evm/init evm/log srvm/log

- Messages or Syslog from all nodes from the time of the problem:

Sun: /var/adm/messages
HP-UX: /var/adm/syslog/syslog.log
Tru64: /var/adm/messages
Linux: /var/log/messages
IBM: /bin/errpt -a > messages.out

- 'opatch lsinventory -detail' output for the CRS home

- It would also be useful to get the following from each node leading up to the time
of the reboot:

- netstat -is (or equivalent)
- iostat -x (or equivalent)
- vmstat (or equivalent)

There is a tool called "OS Watcher" that helps gather this information. This tool
dumps netstat, vmstat, iostat, and other output at a set interval and saves a
configurable number of hours of archived data. For more information about this tool
see Note 301137.1.


5.0 - TROUBLESHOOTING OCLSOMON REBOOTS


If you have encountered an OCLSOMON reboot, review common causes in section 5.1 below.
If the problem cannot be determined by reviewing the common causes, review and
collect the data from section 5.2.

5.1 - COMMON CAUSES OF OCLSOMON REBOOTS


- A problem detected by the OCLSOMON process. This can be caused by 4 things:

1) A thread(s) within the CSS daemon hung.
2) An OS scheduler problem.
3) Excessive amounts of load on the machine, thus preventing the scheduler from
behaving reasonably.
4) An Oracle bug.

5.2 - FILES TO REVIEW AND GATHER FOR OCLSOMON REBOOTS


If logging a service request, please provide ALL of the following files to Oracle
Support if possible:

- All the files in the following directories from all nodes. For a description of
these directories, see Note 259301.1 :

For 10.2, all files under:

<CRS_HOME>/log

Recommended method for gathering these for each node would be to run the
diagcollection.pl script.

- Messages or Syslog from all nodes from the time of the problem:

Sun: /var/adm/messages
HP-UX: /var/adm/syslog/syslog.log
Tru64: /var/adm/messages
Linux: /var/log/messages
IBM: /bin/errpt -a > messages.out

- 'opatch lsinventory -detail' output for the CRS home

- It would also be useful to get the following from each node leading up to the time
of the reboot:

- netstat -is (or equivalent)
- iostat -x (or equivalent)
- vmstat (or equivalent)

There is a tool called "OS Watcher" that helps gather this information. This tool
dumps netstat, vmstat, iostat, and other output at a set interval and saves a
configurable number of hours of archived data. For more information about this tool
see Note 301137.1.

Troubleshooting 10g and 11.1 Clusterware Reboots [ID 265769.1]



TroubleShoot Node Evictions in Oracle RAC


Troubleshooting 11.2 Clusterware Node Evictions (Reboots) [ID 1050693.1]

Troubleshooting 10g and 11.1 Clusterware Reboots [ID 265769.1]


What to Do if 11gR2 Clusterware is Unhealthy [1068835.1]


How to Troubleshoot Grid Infrastructure Startup Issues [1050908.1]





Monday, 7 March 2011

Oracle Clusterware Node Fencing – A Killing


CRS-01613: No I/O has completed after 90% of the maximum interval. Voting file string will be considered not functional in number milliseconds


Cause
Voting device I/O has not completed for a long time. This could be due to some error with the device the voting file is on, or with some element in the path of the I/O to the device.

Action
Verify that the device is working properly, including all elements in the I/O path. The voting file listed will be considered inactive in the number of milliseconds specified. Failure of a majority of devices will result in a node reboot.

Thursday, 7 October 2010

Oracle RAC Monitoring

Brief intro

Our last article was about the Extended RAC--a way to spread out our RAC nodes across the city. Whether your RAC is in the building or outside, a smart and prudent DBA always monitors RAC.

As the application is taken into production, it grows and flourishes. That can be a big burden for the DBA if he is not proactively monitoring his database. This growth shows up in several ways: disk usage might go up, network bandwidth might become a bottleneck, transactions may start taking too long to commit, and there may be more users and more aggressive usage. This may be good for the business, but the Service Level Agreements (SLAs) still need to be met. Proactively monitoring your Oracle RAC, or even a typical single-node Oracle database, will keep you ahead of problems. However, that means you need to know which tools you should be using to do just that.

What are we monitoring?

The questions below can assist a DBA to optimize his routine administration tasks as well as help management make timely decisions.

1.      Are we meeting our SLA (Service Level Agreements)?

2.      Are the High Availability (HA) objectives being met?

3.      Are all instances sharing the load evenly?

4.      What is the interconnect load (latency/overhead)?

5.      CPU: Are Oracle processes getting enough resources?

6.      Memory: Is there enough memory for the System Global Area (SGA), etc.?

Questions similar to these, broken up to monitor at all levels (Application, Database, OS, and HW), help a DBA to monitor the RAC environment efficiently and effectively.

At the hardware level: Disks, HBAs, NICs, cabling, backup devices, etc. need to be configured correctly and to function properly.

At the OS level: You need to monitor CPU, memory, disk performance, and network traffic.

At the database level: You have to monitor the clusterware logs, event logs, ASM logs, and RDBMS alert logs.
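
A simple way to keep an eye on these logs is to tail them continuously on each node.
A sketch only; every path below is an example from this test setup and must be
adjusted to your own install (the last entry assumes the 10.2-style clusterware
alert log under <CRS_HOME>/log/<node>):

# Hedged sketch: watch the RDBMS, ASM, and clusterware alert logs on one node.
tail -f \
  /u01/app/oracle/admin/esxrac/bdump/alert_esxrac1.log \
  /u01/app/oracle/admin/+ASM/bdump/alert_+ASM1.log \
  /u01/app/crs/log/`hostname`/alert`hostname`.log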

Application Level:

At the application level, we need to carefully sprinkle monitoring code through the application (for instance, if the application server is Tomcat or JBoss, then you would be interested in the Catalina logs, java.util.logging output, log4j logging, etc.). There are also more specialized tools, such as BIRT, that can be employed to monitor your application's performance.

The Toolkit

For the OS you will need the following tools:

Checking I/O stat

Let's quickly check if our cluster is online. In our test/development scenario, I changed the disk configuration from RAID5 to RAID 0 for optimal performance of my RAC nodes. If you have an enterprise version of ESX, you can put all of the VMs on SAN and not have to bother with the disk configurations, as you will not need DAS (Direct Attached Storage).

Upon restarting, do the cluster check.

[oracle@vm01 bin]$ crs_stat -t

Name Type Target State Host

------------------------------------------------------------

ora.esxrac.db application ONLINE ONLINE vm02

ora....c1.inst application ONLINE ONLINE vm01

ora....c2.inst application ONLINE ONLINE vm02

ora....serv.cs application ONLINE ONLINE vm02

ora....ac1.srv application ONLINE ONLINE vm01

ora....ac2.srv application ONLINE ONLINE vm02

ora....SM1.asm application ONLINE ONLINE vm01

ora....01.lsnr application ONLINE ONLINE vm01

ora.vm01.gsd application ONLINE ONLINE vm01

ora.vm01.ons application ONLINE ONLINE vm01

ora.vm01.vip application ONLINE ONLINE vm01

ora....SM2.asm application ONLINE ONLINE vm02

ora....02.lsnr application ONLINE ONLINE vm02

ora.vm02.gsd application ONLINE ONLINE vm02

ora.vm02.ons application ONLINE ONLINE vm02

ora.vm02.vip application ONLINE ONLINE vm02

Then do the iostat at intervals of 3 seconds. To see the full manual for this excellent tool, type man iostat.

[oracle@vm01 udump]$ iostat --help

sysstat version 5.0.5

(C) Sebastien Godard

Usage: iostat [ options... ] [ <interval> [ <count> ] ]

Options are:

[ -c | -d ] [ -k ] [ -t ] [ -V ] [ -x ]

[ { <device> [ ... ] | ALL } ] [ -p [ { <device> | ALL } ] ]

[oracle@vm01 udump]$ iostat

Linux 2.6.9-42.0.0.0.1.ELsmp (vm01.wolga.nl) 05/03/2007

avg-cpu: %user %nice %sys %iowait %idle

1.49 0.07 5.81 2.32 90.30

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn

sda 2.18 2.60 76.77 2220396 65637664

sda1 9.65 2.60 76.76 2219427 65637656

sdb 0.00 0.00 0.00 2635 0

sdb1 0.00 0.00 0.00 1186 0

sdc 2.75 44.07 33.41 37681732 28563403

sdc1 2.76 44.07 33.41 37678487 28563403

sdd 1.71 10.44 33.41 8926315 28563403

sdd1 1.73 10.44 33.41 8923070 28563403

sde 1.12 1.26 22.43 1079253 19182446

sde1 1.13 1.26 22.43 1076008 19182446

sdf 4.31 502.78 4.18 429902055 3570958

sdf1 4.31 502.78 4.18 429900486 3570958

sdg 7.24 1004.57 5.15 858957930 4407357

sdg1 7.24 1004.57 5.15 858956361 4407357

sdh 1.00 1.00 0.50 858776 428077

sdh1 1.00 1.00 0.50 857207 428077

[oracle@vm01 udump]$ iostat -t 3

Linux 2.6.9-42.0.0.0.1.ELsmp (vm01.wolga.nl) 05/03/2007

Time: 02:37:19 PM

avg-cpu: %user %nice %sys %iowait %idle

1.49 0.07 5.81 2.32 90.30

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn

sda 2.18 2.60 76.77 2220396 65645200

sda1 9.65 2.60 76.77 2219427 65645192

sdb 0.00 0.00 0.00 2635 0

sdb1 0.00 0.00 0.00 1186 0

sdc 2.75 44.07 33.40 37685412 28565854

sdc1 2.76 44.07 33.40 37682167 28565854

sdd 1.71 10.44 33.40 8926763 28565854

sdd1 1.73 10.44 33.40 8923518 28565854

sde 1.12 1.26 22.43 1079381 19183793

sde1 1.13 1.26 22.43 1076136 19183793

sdf 4.31 502.78 4.18 429949905 3571674

sdf1 4.31 502.78 4.18 429948336 3571674

sdg 7.24 1004.57 5.15 859052465 4408164

sdg1 7.24 1004.57 5.15 859050896 4408164

sdh 1.00 1.00 0.50 858870 428124

sdh1 1.00 1.00 0.50 857301 428124

Time: 02:37:22 PM

avg-cpu: %user %nice %sys %iowait %idle

1.00 0.08 6.91 1.00 91.01

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn

sda 0.00 0.00 0.00 0 0

sda1 0.00 0.00 0.00 0 0

sdb 0.00 0.00 0.00 0 0

sdb1 0.00 0.00 0.00 0 0

sdc 2.00 42.67 13.33 128 40

sdc1 2.00 42.67 13.33 128 40

sdd 0.67 0.00 13.33 0 40

sdd1 0.67 0.00 13.33 0 40

sde 0.67 0.00 13.33 0 40

sde1 0.67 0.00 13.33 0 40

sdf 3.33 343.00 1.00 1029 3

sdf1 3.00 343.00 1.00 1029 3

sdg 6.33 856.33 2.67 2569 8

sdg1 6.67 856.33 2.67 2569 8

sdh 1.33 1.33 0.67 4 2

sdh1 1.33 1.33 0.67 4 2

Time: 02:37:25 PM

avg-cpu: %user %nice %sys %iowait %idle

0.83 0.08 5.67 1.75 91.67

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn

sda 0.66 0.00 74.42 0 224

sda1 9.30 0.00 74.42 0 224

sdb 0.00 0.00 0.00 0 0

sdb1 0.00 0.00 0.00 0 0

sdc 1.33 21.26 13.29 64 40

sdc1 1.33 21.26 13.29 64 40

sdd 0.66 0.00 13.29 0 40

sdd1 0.66 0.00 13.29 0 40

sde 0.66 0.00 13.29 0 40

sde1 0.66 0.00 13.29 0 40

sdf 4.32 512.62 1.66 1543 5

sdf1 4.32 512.62 1.66 1543 5

sdg 6.64 1023.26 1.99 3080 6

sdg1 6.31 1023.26 1.99 3080 6

sdh 0.66 0.66 0.33 2 1

sdh1 0.66 0.66 0.33 2 1

Time: 02:37:28 PM

Here sda and sdb are the disks for the OS installation and swap. sdc, sdd, and sde are the disks used for the OCR, the voting disk, and the ASM spfile respectively. sdf and sdg are the disks we chose for oradata (where all of our Oracle data files reside), and sdh is for the flash recovery area.

You can clearly see that iowait is considerably low, which is a good thing; had it been higher, you would be looking at an I/O bottleneck. In the devices section you can see that our cluster is doing fine but that the oradata disks are working hard (and rightly so!). That is why I explained earlier that I optimized my test scenario on the DAS to get more benefit from spindle speed, seek time, and throughput; the oradata files are on separate disks.

After getting your disks and data evenly spread out, you can use the -x parameter to get additional useful information, such as the average request size, the average wait time for requests, and the average service time for requests.

Let's run it with the -x parameter.

[oracle@vm01 ~]$ iostat -x

Linux 2.6.9-42.0.0.0.1.ELsmp (vm01.wolga.nl) 05/04/2007

avg-cpu: %user %nice %sys %iowait %idle

2.86 0.00 14.76 5.79 76.59

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util

sda 8.02 13.87 13.55 7.73 1130.44 172.78 565.22 86.39 61.23 0.34 16.09 4.89 10.41

sdb 2.51 0.00 0.11 0.00 6.22 0.00 3.11 0.00 57.28 0.00 2.89 2.89 0.03

sdc 2.56 0.01 8.72 1.40 276.30 26.56 138.15 13.28 29.94 0.26 25.62 5.41 5.47

sdd 2.59 0.01 7.05 1.40 242.61 26.56 121.30 13.28 31.85 0.23 27.22 6.19 5.23

sde 2.56 0.12 0.55 1.00 19.11 75.21 9.55 37.60 60.91 0.02 15.33 13.81 2.14

sdf 10.18 0.00 16.07 3.45 472.50 21.01 236.25 10.51 25.29 0.06 3.20 3.06 5.97

sdg 10.18 0.00 17.19 3.98 748.88 21.55 374.44 10.77 36.40 0.07 3.30 3.02 6.39

sdh 10.18 0.00 1.01 0.42 87.56 0.69 43.78 0.35 61.40 0.01 7.00 6.97 1.00
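
To keep watching just the busy oradata devices over time, you can restrict iostat -x
to those disks and sample at an interval. A small sketch; sdf and sdg are the device
names from this particular setup:

# Hedged sketch: extended stats for the oradata disks every 5 seconds.
# Watch await (average time per request, in ms) and %util in particular.
iostat -x sdf sdg 5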

Conclusion

As you can see, all you need is a good toolkit to monitor your system. We took a brief look at the iostat tool, and in upcoming articles we will take a more detailed look at other tools. There is also a utility called OS Watcher (downloadable from Metalink), which bundles all of these tools together so you can run them in one place.