Thursday, 17 March 2011

Stopping the Oracle RAC 10g Environment

The first step is to stop the Enterprise Manager Database Console and the Oracle instance. Once the instance (and related services) is down, bring down the ASM instance. Finally, shut down the node applications (Virtual IP, GSD, TNS Listener, and ONS).

$ export ORACLE_SID=orcl1
$ emctl stop dbconsole
$ srvctl stop instance -d orcl -i orcl1
$ srvctl stop asm -n linux1
$ srvctl stop nodeapps -n linux1
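
To confirm that each component actually came down, the matching srvctl status commands can be run at any point; this quick check reuses the same database, instance, and node names as above:

$ srvctl status instance -d orcl -i orcl1    # should report that instance orcl1 is not running
$ srvctl status asm -n linux1                # should report that the ASM instance is not running
$ srvctl status nodeapps -n linux1           # status of the VIP, GSD, listener and ONS on linux1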

Starting the Oracle RAC 10g Environment

The first step is to start the node applications (Virtual IP, GSD, TNS Listener, and ONS). Once the node applications are up, bring up the ASM instance. Finally, bring up the Oracle instance (and related services) and the Enterprise Manager Database Console.

$ export ORACLE_SID=orcl1
$ srvctl start nodeapps -n linux1
$ srvctl start asm -n linux1
$ srvctl start instance -d orcl -i orcl1
$ emctl start dbconsole
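
For a quick overview of all cluster resources after startup, the 10g-era crs_stat utility prints one line per registered resource with its target and current state. This assumes crs_stat is on the PATH (it lives in the CRS home's bin directory, which is not shown in this post):

$ crs_stat -t    # tabular summary: each resource with its TARGET and STATE (ONLINE/OFFLINE)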

Start/Stop All Instances with SRVCTL

Start/stop all the instances and their enabled services. I have included this step just for fun, as a single command to bring all the instances down (or up)!

$ srvctl start database -d orcl
$ srvctl stop database -d orcl
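
If you are unsure which instances a database-level srvctl command will affect, srvctl can list the configuration and current status for the database (same database name as above):

$ srvctl config database -d orcl    # lists the nodes, instances and Oracle home registered for orcl
$ srvctl status database -d orcl    # shows which instances of orcl are currently running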

**************************************************************************************
A brief Summary of Steps to Start / Shutdown RAC/ASM Setup (Oracle 11g):

Shutdown

1. shutdown database
srvctl stop database -d <db_name>   "that will shutdown ALL running instances of <db_name>"
srvctl stop instance -d <db_name> -i <instance_name>   "that will shutdown only the specified instance of <db_name>, other running instances will continue to run"

2. shutdown ASM
srvctl stop asm -n <node_name>   "run it for every RAC node, one by one"

3. shutdown nodeapps
srvctl stop nodeapps -n <node_name>   "run it for every RAC node, one by one"

4. Shutdown crs processes
crsctl stop crs   "run it as root on every RAC node, one by one"

Startup

1. crsctl start crs -> run as root on each node; that will start the CRS processes (HA engine), then the node applications, then ASM, then the database instances, and finally any HA services. If you need to bring up components individually instead, see the sketch below.
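
If you prefer to start components one at a time once CRS is up, the per-component commands simply mirror the shutdown sequence above. This is only a sketch; <db_name> and <node_name> are placeholders for your own database and node names:

$ crsctl check crs                        # confirm the CRS, CSS and EVM daemons are up
$ srvctl start nodeapps -n <node_name>    # VIP, GSD, listener and ONS on that node
$ srvctl start asm -n <node_name>         # ASM instance on that node
$ srvctl start database -d <db_name>      # all instances of the database
$ srvctl status database -d <db_name>     # verify that every instance is running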

Thursday, 10 March 2011

Migrate OCR and Voting Disk from External to High Redundancy disk group in Oracle ASM

(Source: "Multiple OCRs and vote disks on ASM in Oracle 11gR2", Guenadi N Jilevski's Oracle BLOG)

Voting Disk and OCR in 11gR2 On ASM Storage

Having just delivered an Oracle Database 11gR2 RAC Admin course, I'd like to point out some remarkable changes in the way we now handle the important Clusterware components Voting Disk and Oracle Cluster Registry (OCR): amazingly, we can now store the two inside an Automatic Storage Management (ASM) Disk Group, which was not possible in 10g.

Note that you cannot create more than one voting disk, whether in the same or in another disk group, when using External Redundancy in 11.2.

The rules are as follows:
External redundancy = 1 voting disk
Normal redundancy = 3 voting disks
High redundancy = 5 voting disks

An external redundancy disk group relies on the storage hardware (a third-party vendor solution) to provide the redundancy.

If you want to have the voting disk multiplexed at ASM level then it is recommended you change your redundancy to Normal so that ASM provides this redundancy for you.

The OCR is striped and mirrored (if we have a redundancy other than external), just as ordinary database files are. So we can now leverage the mirroring capabilities of ASM for the OCR as well, without having to dedicate multiple raw devices to that purpose only. The Voting Disk (or Voting File, as it is now also referred to) is not striped but placed as a whole on an ASM disk – if we use a disk group with normal redundancy, 3 Voting Files are placed, each on its own ASM disk. This is a concern if our ASM disk group consists of only 2 ASM disks! Therefore, the new quorum failgroup clause was introduced:

create diskgroup data normal redundancy
failgroup fg1 disk 'ORCL:ASMDISK1'
failgroup fg2 disk 'ORCL:ASMDISK2'
quorum failgroup fg3 disk 'ORCL:ASMDISK3'
attribute 'compatible.asm' = '11.2.0.0.0';

The failgroup fg3 above needs only one small disk (300 MB should be on the safe side here, since the Voting File is only about 280 MB in size) to keep one mirror of the Voting File. fg1 and fg2 will each contain one Voting File plus all the other stripes of the database area as well, but fg3 will only get that one Voting File.

[root@uhesse1 ~]# /u01/app/11.2.0/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                 File Name        Disk group
--  -----    -----------------                 ---------        ----------
 1. ONLINE   511de6e64e354f9bbf4be318fc928c28  (ORCL:ASMDISK1)  [DATA]
 2. ONLINE   2f1973ed4be84f50bffc2475949b428f  (ORCL:ASMDISK2)  [DATA]
 3. ONLINE   5ed44fb7e79c4f79bfaf09b402ba70df  (ORCL:ASMDISK3)  [DATA]
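
The post title mentions migrating from an external redundancy disk group; the move itself is typically done as root with crsctl and ocrconfig once the target disk group exists. The disk group names +DATA_NEW and +DATA_EXT below are placeholders, not names from the environment above, so treat this as a sketch only:

# run as root, using the binaries in the Grid Infrastructure home (here /u01/app/11.2.0/grid/bin)
crsctl replace votedisk +DATA_NEW     # relocates all voting files into +DATA_NEW
ocrconfig -add +DATA_NEW              # adds an OCR location inside the new disk group
ocrconfig -delete +DATA_EXT           # removes the old external redundancy OCR location
ocrcheck                              # verifies OCR integrity and its configured locations
crsctl query css votedisk             # confirms where the voting files now live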

Another important change regarding the Voting File is that it is no longer supported to take a manual backup of it with dd. Instead, the Voting File is backed up automatically into the OCR. As a new feature, you can now take a manual backup of the OCR any time you like, without having to wait for the automatic backup – which still happens as well:

[root@uhesse1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup
uhesse1 2010/10/06 09:37:30 /u01/app/11.2.0/grid/cdata/cluhesse/backup00.ocr
uhesse1 2010/10/06 05:37:29 /u01/app/11.2.0/grid/cdata/cluhesse/backup01.ocr
uhesse1 2010/10/06 01:37:27 /u01/app/11.2.0/grid/cdata/cluhesse/backup02.ocr
uhesse1 2010/10/05 01:37:21 /u01/app/11.2.0/grid/cdata/cluhesse/day.ocr
uhesse1 2010/10/04 13:37:19 /u01/app/11.2.0/grid/cdata/cluhesse/week.ocr

Above are the automatic backups of the OCR as in earlier versions. Now the manual backup:

[root@uhesse1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup
uhesse1 2010/10/06 13:07:03 /u01/app/11.2.0/grid/cdata/cluhesse/backup_20101006_130703.ocr

I got a manual backup in the default location on my master node. We can define another backup location for the automatic as well as the manual backups – preferably on a shared device that is accessible by all the nodes (which is not the case with /home/oracle, unfortunately :-) ):

[root@uhesse1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -backuploc /home/oracle
[root@uhesse1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup
uhesse1 2010/10/06 13:10:50 /home/oracle/backup_20101006_131050.ocr
uhesse1 2010/10/06 13:07:03 /u01/app/11.2.0/grid/cdata/cluhesse/backup_20101006_130703.ocr

[root@uhesse1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup
uhesse1 2010/10/06 09:37:30 /u01/app/11.2.0/grid/cdata/cluhesse/backup00.ocr
uhesse1 2010/10/06 05:37:29 /u01/app/11.2.0/grid/cdata/cluhesse/backup01.ocr
uhesse1 2010/10/06 01:37:27 /u01/app/11.2.0/grid/cdata/cluhesse/backup02.ocr
uhesse1 2010/10/05 01:37:21 /u01/app/11.2.0/grid/cdata/cluhesse/day.ocr
uhesse1 2010/10/04 13:37:19 /u01/app/11.2.0/grid/cdata/cluhesse/week.ocr
uhesse1 2010/10/06 13:10:50 /home/oracle/backup_20101006_131050.ocr
uhesse1 2010/10/06 13:07:03 /u01/app/11.2.0/grid/cdata/cluhesse/backup_20101006_130703.ocr
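
Besides the physical backups shown above, ocrconfig can also take a logical export of the OCR, which is another supported way to protect it; the export file name below is arbitrary:

# as root
ocrconfig -export /home/oracle/ocr_export_20101006.dmp   # logical export of the current OCR contents
ocrcheck                                                  # reports OCR version, space used and an integrity check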

Conclusion: The way we handle the Voting Disk and OCR has changed significantly – in particular, they can now be kept inside an ASM disk group.

ORA-29770 LMHB Terminates Instance as LMON Waited for Control File IO for too Long

Applies to: [ID 1197674.1]

Oracle Server - Enterprise Edition - Version: 11.1.0.6 to 11.2.0.1 - Release: 11.1 to 11.2
Information in this document applies to any platform.

Symptoms

Instance crashes with messages like the following:
Wed Sep 09 03:24:14 2009
LMON (ospid: 31216) waits for event 'control file sequential read' for 88 secs.
Wed Sep 09 03:24:29 2009
Errors in file /oracle/base/diag/rdbms/prod/prod3/trace/prod3_lmhb_31304.trc (incident=2329):
ORA-29770: global enqueue process LMON (OSID 31216) is hung for more than 70 seconds
Incident details in: /oracle/base/diag/rdbms/prod/prod3/incident/incdir_2329/prod3_lmhb_31304_i2329.trc
Wed Sep 09 03:24:39 2009
ERROR: Some process(s) is not making progress.
LMHB (ospid: 31304) is terminating the instance.

OR:

Mon Jan 10 14:23:00 2011
LMON (ospid: 8594) waits for event 'control file sequential read' for 87 secs.
Mon Jan 10 14:23:31 2011
LMON (ospid: 8594) waits for event 'control file sequential read' for 118 secs.
ERROR: LMON is not healthy and has no heartbeat.
ERROR: LM** (ospid: 8614) is terminating the instance.


Cause

A critical RAC background process did not participate in the heartbeat for longer than the default threshold of 70 seconds because it was waiting on control file I/O.

Solution

Control file I/O can take longer than the default threshold under some circumstances. Bug 8888434, which affects 11.1 through 11.2.0.1 and is fixed in 11.2.0.2, prevents the instance from being terminated in this situation.
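
Since the trigger here is slow control file I/O, it can help to see how long control file waits are actually taking on each instance. The query below against GV$SYSTEM_EVENT is only a rough sketch (the figures are cumulative since instance startup, so compare snapshots over time rather than reading them as current latency):

$ sqlplus -s / as sysdba <<'EOF'
set pagesize 100 linesize 120
select inst_id, event, total_waits,
       round(time_waited_micro/1000/nullif(total_waits,0),1) as avg_wait_ms
  from gv$system_event
 where event like 'control file%'
 order by inst_id, event;
EOF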

Troubleshoot the ORA-29740 Error in a Real Application Clusters Environment

Applies to:

Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.1.0.8 - Release: 9.2 to 11.1
Information in this document applies to any platform.

Purpose

This note was created to troubleshoot the ORA-29740 error in a Real Application
Clusters environment.

Last Review Date

January 22, 2010

Instructions for the Reader

A Troubleshooting Guide is provided to assist in debugging a specific issue. When possible, diagnostic tools are included in the document to assist in troubleshooting.

Troubleshooting Details

Troubleshooting ORA-29740 in a RAC Environment
==============================================

An ORA-29740 error occurs when a member is evicted from the group by another
member of the cluster database for one of several reasons, which may include
a communications error in the cluster, failure to issue a heartbeat to the
control file, and other reasons. This mechanism is in place to prevent
problems from occurring that would affect the entire database. For example,
instead of allowing a cluster-wide hang to occur, Oracle will evict the
problematic instance(s) from the cluster. When an ORA-29740 error occurs, a
surviving instance will remove the problem instance(s) from the cluster.
When the problem is detected the instances 'race' to get a lock on the
control file (Results Record lock) for updating. The instance that obtains
the lock tallies the votes of the instances to decide membership. A member
is evicted if:

a) A communications link is down
b) There is a split-brain (more than 1 subgroup) and the member is
not in the largest subgroup
c) The member is perceived to be inactive

Sample message in Alert log of the evicted instance:

Fri Sep 28 17:11:51 2001
Errors in file /oracle/export/TICK_BIG/lmon_26410_tick2.trc:
ORA-29740: evicted by member %d, group incarnation %d
Fri Sep 28 17:11:53 2001
Trace dumping is performing id=[cdmp_20010928171153]
Fri Sep 28 17:11:57 2001
Instance terminated by LMON, pid = 26410

The key to resolving the ORA-29740 error is to review the LMON trace files
from each of the instances. On the evicted instance we will see something
like:

*** 2002-11-20 18:49:51.369
kjxgrdtrt: Evicted by 0, seq (3, 2)
^
|
This indicates which instance initiated the eviction.

On the evicting instance we will see something like:

kjxgrrcfgchk: Initiating reconfig, reason 3
*** 2002-11-20 18:49:29.559
kjxgmrcfg: Reconfiguration started, reason 3

...
*** 2002-11-20 18:49:29.727
Obtained RR update lock for sequence 2, RR seq 2
*** 2002-11-20 18:49:31.284
Voting results, upd 0, seq 3, bitmap: 0
Evicting mem 1, stat 0x0047 err 0x0002

You can see above that the instance initiated a reconfiguration for reason 3
(see Note 139435.1 for more information on reconfigurations). The
reconfiguration is then started and this instance obtained the RR lock
(Results Record lock) which means this instance will tally the votes of the
instances to decide membership. The last lines show the voting results then
this instance evicts instance 1.
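
To locate the LMON trace files to review on each node, something like the following can be used. The exact directories depend on the Oracle version and on background_dump_dest (or the ADR base in 11g), so the paths below are only assumptions:

# 9i/10g: trace files go to background_dump_dest
$ sqlplus -s / as sysdba <<< "show parameter background_dump_dest"
$ ls -lt /u01/app/oracle/admin/*/bdump/*lmon*.trc | head
# 11g: trace files live under the ADR
$ ls -lt $ORACLE_BASE/diag/rdbms/*/*/trace/*lmon*.trc | head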

For troubleshooting ORA-29740 errors, the 'reason' will be very important.
In the above example, the first section indicates the reason for the
initiated reconfiguration. The reasons are as follows:



Reason 0 = No reconfiguration
Reason 1 = The Node Monitor generated the reconfiguration.
Reason 2 = An instance death was detected.
Reason 3 = Communications Failure
Reason 4 = Reconfiguration after suspend

For ORA-29740 errors, you will most likely see reasons 1, 2, or 3.

-----------------------------------------------------------------------------

Reason 1: The Node Monitor generated the reconfiguration. This can happen if:

a) An instance joins the cluster
b) An instance leaves the cluster
c) A node is halted

It should be easy to determine the cause of the error by reviewing the alert
logs and LMON trace files from all instances. If an instance joins or leaves
the cluster or a node is halted then the ORA-29740 error is not a problem.

ORA-29740 evictions with reason 1 are usually expected when the cluster
membership changes. Very rarely are these types of evictions a real problem.

If you feel that this eviction was not correct, do a search in Metalink or
the bug database for:

ORA-29740 'reason 1'

Important files to review are:

a) Each instance's alert log
b) Each instance's LMON trace file
c) Statspack reports from all nodes leading up to the eviction
d) Each node's syslog or messages file
e) iostat output before, after, and during evictions
f) vmstat output before, after, and during evictions
g) netstat output before, after, and during evictions

There is a tool called "OS Watcher" that helps gather this information. For
more information on "OS Watcher" see Note 301137.1 "OS Watcher User Guide".


-----------------------------------------------------------------------------

Reason 2: An instance death was detected. This can happen if:

a) An instance fails to issue a heartbeat to the control file.

When the heartbeat is missing, LMON will issue a network ping to the instance
not issuing the heartbeat. As long as the instance responds to the ping,
LMON will consider the instance alive. If, however, the heartbeat is not
issued for the length of time of the control file enqueue timeout, the
instance is considered to be problematic and will be evicted.


Common causes for an ORA-29740 eviction (Reason 2):

a) NTP (Time changes on cluster) - usually on Linux, Tru64, or IBM AIX
b) Network Problems (SAN).
c) Resource Starvation (CPU, I/O, etc..)
d) An Oracle bug.



Important files to review are:

a) Each instance's alert log
b) Each instance's LMON trace file
c) Statspack reports from all nodes leading up to the eviction
d) The CKPT process trace file of the evicted instance
e) Other bdump or udump files...
f) Each node's syslog or messages file
g) iostat output before, after, and during evictions
h) vmstat output before, after, and during evictions
i) netstat output before, after, and during evictions

There is a tool called "OS Watcher" that helps gather this information. For
more information on "OS Watcher" see Note 301137.1 "OS Watcher User Guide".

-----------------------------------------------------------------------------

Reason 3: Communications Failure. This can happen if:

a) The LMON processes lose communication between one another.
b) One instance loses communications with the LMS or LMD process of
another instance.
c) The LCK processes lose communication between one another.
d) A process like LMON, LMD, LMS, or LCK is blocked, spinning, or stuck
and is not responding to remote requests.

In this case the ORA-29740 error is recorded when there are communication
issues between the instances. It is an indication that an instance has been
evicted from the configuration as a result of an IPC send timeout. A
communications failure between processes across instances will also generate an
ORA-29740 with reason 3. When this occurs, the trace file of the process
experiencing the error will print a message:

Reporting Communication error with instance:

If communication is lost at the cluster layer (for example, network cables
are pulled), the cluster software may also perform node evictions in the
event of a cluster split-brain. Oracle will detect a possible split-brain
and wait for cluster software to resolve the split-brain. If cluster
software does not resolve the split-brain within a specified interval,
Oracle proceeds with evictions.


Oracle Support has seen cases where resource starvation (CPU, I/O, etc...) can
cause an instance to be evicted with this reason code. The LMON or LMD process
could be blocked waiting for resources and not respond to polling by the remote
instance(s). This could cause that instance to be evicted. If you have
a statspack report available from the time just prior to the eviction on the
evicted instance, check for poor I/O times and high CPU utilization. Poor I/O
times would be an average read time of > 20ms.
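
Outside of Statspack, a quick way to eyeball average datafile read times on the evicted instance is V$FILESTAT; READTIM is in centiseconds, hence the factor of 10, and the numbers are cumulative since startup, so treat this only as a rough indicator:

$ sqlplus -s / as sysdba <<'EOF'
set pagesize 100
select file#, phyrds,
       round(readtim*10/nullif(phyrds,0),1) as avg_read_ms
  from v$filestat
 order by 3 desc;
EOF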

Common causes for an ORA-29740 eviction (Reason 3):

a) Network Problems.
b) Resource Starvation (CPU, I/O, etc..)
c) Severe Contention in Database.
d) An Oracle bug.



Tips for tuning inter-instance performance can be found in the following note:

Note 181489.1
Tuning Inter-Instance Performance in RAC and OPS

Important files to review are:

a) Each instance's alert log
b) Each instance's LMON trace file
c) each instance's LMD and LMS trace files
d) Statspack reports from all nodes leading up to the eviction
e) Other bdump or udump files...
f) Each node's syslog or messages file
g) iostat output before, after, and during evictions
h) vmstat output before, after, and during evictions
i) netstat output before, after, and during evictions

There is a tool called "OS Watcher" that helps gather this information. For
more information on "OS Watcher" see Note 301137.1 "OS Watcher User Guide".