root.sh failed on second node (Oracle, 10.2.0.1.0, RHEL 4)
root.sh failed on second node [message #497573] Sun, 06 March 2011 04:18
suresh.wst
Messages: 53
Registered: June 2008
Location: Hyderabad
Member
Hi,

I am trying to install RAC on RHEL4 using VMware for learning purposes. root.sh on node1 was successful, but on node2 the following error occurred:

The "/home/oracle/oracle/product/10.2.0/cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
The "/home/oracle/oracle/product/10.2.0/cfgtoollogs/configToolAllCommands" script contains all commands to be executed by the configuration assistants. This file may be used to run the configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.


The commands in the above script are:

/home/oracle/oracle/product/10.2.0/bin/racgons add_config rac1.tsb.com:6200 rac2.tsb.com:6200

/home/oracle/oracle/product/10.2.0/bin/oifcfg setif -global eth0/192.168.100.0:public eth1/200.200.100.0:cluster_interconnect

/home/oracle/oracle/product/10.2.0/bin/cluvfy stage -post crsinst -n rac1,rac2
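
For reference, the note quoted above means the generated script itself can also be re-run in one go rather than command by command. A minimal sketch, assuming the path quoted above, that any required passwords have been filled in first, and that it is run with whatever privileges the individual commands need:

sh /home/oracle/oracle/product/10.2.0/cfgtoollogs/configToolFailedCommands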


Please help me in finding the solution for this. Thanks in advance.

Suresh
Re: root.sh failed on second node [message #497578 is a reply to message #497573] Sun, 06 March 2011 04:41
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
What happened when you ran the commands?
Re: root.sh failed on second node [message #497582 is a reply to message #497578] Sun, 06 March 2011 05:00
suresh.wst
Messages: 53
Registered: June 2008
Location: Hyderabad
Member
I tried to run the individual commands; the following is the output:

[root@rac2 ~]# su - oracle
[oracle@rac2 ~]$ /home/oracle/oracle/product/10.2.0/bin/racgons add_config rac1.tsb.com:6200 rac2.tsb.com:6200
[oracle@rac2 ~]$ /home/oracle/oracle/product/10.2.0/bin/oifcfg setif -global eth0/192.168.100.0:public eth1/200.200.100.0:cluster_interconnect
PRIF-12: failed to initialize cluster support services
[oracle@rac2 ~]$ /home/oracle/oracle/product/10.2.0/bin/cluvfy stage -post crsinst -n rac1,rac2

Performing post-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "rac2".


Checking user equivalence...
User equivalence check failed for user "oracle".
Check failed on nodes:
rac2

WARNING:
User equivalence is not set for nodes:
rac2
Verification will proceed with nodes:
rac1

Checking Cluster manager integrity...


Checking CSS daemon...
Daemon status check failed for "CSS daemon".
Check failed on nodes:
rac1

Cluster manager integrity check failed.

Checking cluster integrity...


Cluster integrity check failed. This check did not run on the following nodes(s):
rac1


Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.


ERROR:
Unable to obtain OCR integrity details from any of the nodes.


OCR integrity check failed.

Checking CRS integrity...

Checking daemon liveness...
Liveness check failed for "CRS daemon".
Check failed on nodes:
rac1

Checking daemon liveness...
Liveness check failed for "CSS daemon".
Check failed on nodes:
rac1

Checking daemon liveness...
Liveness check failed for "EVM daemon".
Check failed on nodes:
rac1

CRS integrity check failed.

Post-check for cluster services setup was unsuccessful on all the nodes.


In the above output I found that there is a problem with user equivalence, but I am able to ssh to both nodes without a password as the oracle user.

on node1:
--------------
[oracle@rac1 ~]$ id oracle
uid=500(oracle) gid=2000(oinstall) groups=2000(oinstall),1000(dba)
[oracle@rac1 ~]$
[oracle@rac1 ~]$ ssh rac2 date
Sun Mar 6 16:28:04 IST 2011
[oracle@rac1 ~]$


on node2:
-----------------
[oracle@rac2 ~]$ id oracle
uid=500(oracle) gid=2000(oinstall) groups=2000(oinstall),1000(dba)
[oracle@rac2 ~]$
[oracle@rac2 ~]$ ssh rac1 date
Sun Mar 6 16:28:40 IST 2011
[oracle@rac2 ~]$
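
One hedged thing worth checking beyond the tests above: the 10.2 cluvfy equivalence check can still fail if passwordless ssh does not also work with the fully qualified hostnames and from each node to itself, or if login scripts print extra output. A minimal sketch of the additional checks, hostnames assumed from the commands earlier in the thread; each should print only the date, with no password prompt:

[oracle@rac2 ~]$ ssh rac2 date
[oracle@rac2 ~]$ ssh rac2.tsb.com date
[oracle@rac2 ~]$ ssh rac1.tsb.com date

(and the equivalent checks run from rac1.)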


Suresh
Re: root.sh failed on second node [message #497599 is a reply to message #497582] Sun, 06 March 2011 06:59
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
Suresh, these commands are part of the root.sh script. Therefore, they must be run as root. And let each one finish before running the next.
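
A minimal sketch of the above, using the commands quoted earlier in the thread, run one at a time as root on the node where root.sh failed, letting each finish before starting the next:

[root@rac2 ~]# /home/oracle/oracle/product/10.2.0/bin/racgons add_config rac1.tsb.com:6200 rac2.tsb.com:6200
[root@rac2 ~]# /home/oracle/oracle/product/10.2.0/bin/oifcfg setif -global eth0/192.168.100.0:public eth1/200.200.100.0:cluster_interconnect
[root@rac2 ~]# /home/oracle/oracle/product/10.2.0/bin/cluvfy stage -post crsinst -n rac1,rac2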
Re: root.sh failed on second node [message #499034 is a reply to message #497599] Sat, 12 March 2011 20:33
mkounalis
Messages: 147
Registered: October 2009
Location: Dallas, TX
Senior Member
It is also important to let the script completely finish on the node you are running it on before starting it on another node.
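
One hedged way to confirm that on 10.2 before moving to the next node, assuming the clusterware home path quoted earlier in the thread, is to check the daemons that cluvfy flagged above; CSS, CRS and EVM should all be reported as healthy on the first node before root.sh is run on the second:

[root@rac1 ~]# /home/oracle/oracle/product/10.2.0/bin/crsctl check crs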