installing Oracle database 10.2 on RHEL AS/ES 4 [message #220173] Mon, 19 February 2007 05:40
tanmoy7
Messages: 20
Registered: February 2007
Location: Dhaka, Bangladesh
Junior Member

Dear Viewers,

I have successfully configured and installed an Oracle 10.2 RAC (Real Application Clusters) database on RHEL AS/ES 3/4, using raw devices and ASM.

If anybody runs into problems doing this, let me know; maybe I can help.

Thanks

Tanmoy
Re: installing Oracle database 10.2 on RHEL AS/ES 4 [message #222087 is a reply to message #220173] Thu, 01 March 2007 13:10
tofy79
Messages: 2
Registered: March 2007
Junior Member
Dear Tanmoy,

I am going to install Oracle 10gR2 RAC with ASM on RHEL AS 4 Update 4. I have three nodes, all connected to the same SAN storage device.

My question is: what are the steps to configure the SAN device and partitions on RHEL on all nodes before running the CRS and database setup? Please tell me, step by step, how to set up and configure the SAN for this.

Thanks & Best Regards.

[Updated on: Thu, 01 March 2007 13:21]


Re: installing Oracle database 10.2 on RHEL AS/ES 4 [message #222400 is a reply to message #222087] Sat, 03 March 2007 22:42
tanmoy7
Messages: 20
Registered: February 2007
Location: Dhaka, Bangladesh
Junior Member

Dear Tofy,

You are using SAN storage; that's fine. But what type of storage method are you going to choose: ASM, raw devices, or OCFS2?

Do you see the SAN with the following command?

fdisk -l

If you see your SAN, partition it as you need.
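For example, a rough sketch of that check (/dev/sdb is just an assumed device name; yours may differ, so read the fdisk -l output first):

```shell
# Check whether the SAN LUN is visible as a block device (assumed /dev/sdb)
if [ -b /dev/sdb ]; then
  echo "/dev/sdb is visible"
  # fdisk /dev/sdb      # interactively create the partitions you need
  # partprobe /dev/sdb  # re-read the partition table without a reboot
else
  echo "/dev/sdb not visible - check SAN zoning and the HBA driver"
fi
```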


TANMOY
Re: installing Oracle database 10.2 on RHEL AS/ES 4 [message #222409 is a reply to message #222400] Sun, 04 March 2007 02:35
tofy79
Messages: 2
Registered: March 2007
Junior Member
Thanks, dear Tanmoy.

I will be using ASM with RAC. I need an example of how to configure the SAN partitions for the OCR devices, voting disk devices, and ASM for Oracle 10g R2 RAC on RHEL4 AS U4.

Regards,

Tofy
Re: installing Oracle database 10.2 on RHEL AS/ES 4 [message #259242 is a reply to message #220173] Tue, 14 August 2007 16:35
shiv8d
Messages: 4
Registered: August 2007
Junior Member
Hi Tanmoy,
We have installed Oracle RAC with two DB servers, Db1 and Db2, with instances Db1_instance1 and Db2_instance2 and ASM instances ASM1 and ASM2 respectively, connected to a SAN, with the RAC software installed and the voting and OCR disks on the SAN.

We were using ASM to manage raw devices on the SAN. Everything was running smoothly until one bad day Db1 had a hardware failure (hard disk) and had to be reinstalled from scratch.

So I have my RAC and the Db2 instances running. I want to know how to install Db1_instance1 and the ASM1 instance and add them back to the existing RAC, as it has every detail of Db1.

I want to create the Db1_instance1 and ASM1 instances by assigning them the existing database and diskgroups from the SAN. How can I do this? Can you please give me instructions on how to install Db1_instance1 and the ASM1 instance?

I will be very grateful to you.
Re: installing Oracle database 10.2 on RHEL AS/ES 4 [message #259292 is a reply to message #259242] Tue, 14 August 2007 23:21
tanmoy7
Messages: 20
Registered: February 2007
Location: Dhaka, Bangladesh
Junior Member

Dear shiv8d,

I understand your problem. Go through the following document; I hope it will help you.

Md. Shafiul Alam Tanmoy
DBA, SSD
LEADS Corporation limited,
Dhaka, Bangladesh
Phone : (+880-2) 9552145
Mobile : +88-01717459111
Fax : (880-2) 9565586
E-mail : shafiul@leads-bd.com; tanmoy7@yahoo.com
web : www.leads-bd.com
================================================================

Adding New Nodes to Your Oracle RAC 10g Cluster on Linux



A step-by-step guide to adding a node to an existing Oracle RAC 10g Release 2 cluster
Published September 2006

In most businesses, a primary business requirement for an Oracle Real Application Clusters (RAC) configuration is scalability of the database tier across the entire system—so that when the number of users increases, additional instances can be added to the cluster to distribute the load.

In Oracle RAC 10g, this task has become much easier: node addition is nearly plug-and-play, requiring only a few setup steps once the new node is brought to a usable state.

In this article, I will discuss the steps required to add a node to an existing Oracle RAC 10g Release 2 cluster.

Current Environment

For demonstration purposes, our environment here is a four-node Red Hat Linux cluster. The task is to add an additional node, making it a five-node cluster.

Database Name: SSKYDB
Number of Nodes: Four Nodes – oradb1, oradb2, oradb3 and oradb4
Database Version: 10.2.0.1
Number of Instances: Four Instances – SSKY1, SSKY2, SSKY3 and SSKY4
O/S Kernel Version: Red Hat Enterprise Linux AS 3 Linux sumsky.net 2.4.21-32.ELsmp
File System: OCFS 1.0 and ASM
Cluster Manager: Oracle Clusterware


The process will occur in the following seven steps:

1. Account for Dependencies and Prerequisites
2. Configure Network Components
3. Install Oracle Clusterware
4. Configure Oracle Clusterware
5. Install Oracle Software
6. Add New Instance(s)
7. Do Housekeeping Chores


Step 1: Account for Dependencies and Prerequisites

The first step in any software installation or upgrade is to ensure that a complete backup of the system, including the OS and data files, is available. The next step is to verify system requirements, operating system versions, and all application patch levels.

The new node should have the same version of the operating system as the existing nodes, including all patches required for Oracle. In this scenario, as the operating system residing on nodes 1 through 4 is Red Hat Enterprise Linux 3, the new node should have that version as well. And to maintain the current naming convention, you should call the new node oradb5.

Apart from the basic operating system, the following packages required by Oracle should also be installed:

[root@oradb5 root]# rpm -qa | grep -i gcc
compat-gcc-c++-7.3-2.96.128
compat-gcc-7.3-2.96.128
libgcc-3.2.3-42
gcc-3.2.3-42
[root@oradb5 root]# rpm -qa | grep -i openmotif
openmotif-2.2.3-3.RHEL3
openmotif21-2.1.30-8
[root@oradb5 root]# rpm -qa | grep -i glibc
glibc-2.3.3-74
glibc-utils-2.3.3-74
glibc-kernheaders-2.4-8.34.1
glibc-common-2.3.3-74
glibc-headers-2.3.3-74
glibc-devel-2.3.3-74
[root@oradb5 root]# rpm -qa | grep -i compat
compat-libstdc++-7.3-2.96.128
compat-gcc-c++-7.3-2.96.128
compat-gcc-7.3-2.96.128
compat-db-4.0.14-5
compat-libstdc++-devel-7.3-2.96.128
[root@oradb5 root]#

Update the kernel parameters with the following values.

kernel.core_uses_pid = 1
kernel.hostname = oradb5.sumsky.net
kernel.domainname = sumsky.net
kernel.shmall = 2097152
#kernel.shmmax = 536870912
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.shmseg = 4096
kernel.sem = 250 32000 100 150
kernel.msgmni = 2878
kernel.msgmnb = 65535
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_max = 262144
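These values persist across reboots only if placed in /etc/sysctl.conf and loaded with sysctl -p. A quick way to confirm the running kernel matches is a small check loop (a sketch; the three parameters shown are just a sample of the list above):

```shell
# Compare a few running kernel parameters against the target values
# (run after "sysctl -p" has loaded /etc/sysctl.conf).
for p in kernel.shmall=2097152 kernel.shmmax=2147483648 fs.file-max=65536; do
  name=${p%%=*}; want=${p#*=}
  have=$(sysctl -n "$name" 2>/dev/null || echo unknown)
  [ "$have" = "$want" ] && echo "$name OK" || echo "$name: have $have, want $want"
done
```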

Add the following parameters to /etc/security/limits.conf.

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
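After logging in again as oracle, the new soft limits can be spot-checked from the shell (a minimal sketch; expect 2047 processes and 1024 open files from the entries above):

```shell
# Show the soft limits the oracle login shell actually received
echo "nproc  (soft): $(ulimit -u)"
echo "nofile (soft): $(ulimit -n)"
```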

Add devices to /etc/fstab: copy the device definitions from one of the existing nodes to oradb5.

[root@oradb5 root]# more /etc/fstab
LABEL=/ / ext3 defaults 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0
/dev/sda2 swap swap defaults 0 0
/dev/cdrom /mnt/cdrom udf,iso9660 noauto,owner,kudzu,ro 0 0
/dev/fd0 /mnt/floppy auto noauto,owner,kudzu 0 0
/dev/sdb5 /u01 ocfs _netdev 0 0
/dev/sdb6 /u02 ocfs _netdev 0 0
/dev/sdb7 /u03 ocfs _netdev 0 0
/dev/sdb8 /u04 ocfs _netdev 0 0
/dev/sdb9 /u05 ocfs _netdev 0 0
/dev/sdb10 /u06 ocfs _netdev 0 0
/dev/sdb14 /u14 ocfs _netdev 0 0
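Once the entries are in place, the shared volumes can be mounted and spot-checked (a hedged sketch; mount -a requires root, and the ocfs label matches the fstab entries above):

```shell
# Mount everything listed in /etc/fstab, then confirm the OCFS volumes
mount -a 2>/dev/null || echo "mount -a needs root"
mount | grep ocfs || echo "no ocfs volumes mounted yet"
```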

Next, create the administrative user. Every installation of Oracle requires an administrative user account on each node. On all existing nodes the administrative owner is oracle, so the next step is to create an administrative user account on node oradb5. While creating this user account, it is important that the UID and GID of user oracle be identical to those on the other RAC nodes. This information can be obtained using the following command:

[oracle@oradb1 oracle]$ id oracle
uid=500(oracle) gid=500(oinstall) groups=501(dba), 502(oper)

Connect to oradb5 (Linux or Unix based environment) as root and create the following operating system groups.

groupadd -g 500 oinstall
groupadd -g 501 dba
groupadd -g 502 oper

Once the groups have been created, create the oracle user account, with oinstall as its primary group and dba and oper as secondary groups, using the following command; subsequently set the user's password using the passwd command.

useradd -u 500 -g oinstall -G dba,oper oracle

passwd oracle
Changing password for user oracle.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Once the groups and the user have been created, verify that the output from the following command is identical across all nodes in the cluster.

[root@oradb5 root]$ id oracle
uid=500(oracle) gid=500(oinstall) groups=501(dba), 502(oper)
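The comparison can also be scripted; a minimal sketch, assuming the uid/gid values shown above:

```shell
# Compare this node's oracle account uid/gid against the expected values
expected="uid=500(oracle) gid=500(oinstall)"
actual=$(id oracle 2>/dev/null | cut -d' ' -f1-2)
if [ "$actual" = "$expected" ]; then
  echo "oracle account matches the other nodes"
else
  echo "MISMATCH: got '$actual', expected '$expected'"
fi
```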

Step 2: Configure Network Components

Add all network addresses to the /etc/hosts file on node oradb5, and cross-register node oradb5's information on the other four nodes in the cluster.

[root@oradb5 root]# more /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.2.10 oradb1.sumsky.net oradb1
192.168.2.20 oradb2.sumsky.net oradb2
192.168.2.30 oradb3.sumsky.net oradb3
192.168.2.40 oradb4.sumsky.net oradb4
192.168.2.50 oradb5.sumsky.net oradb5

#Private Network/interconnect
10.168.2.110 oradb1-priv.sumsky.net oradb1-priv
10.168.2.120 oradb2-priv.sumsky.net oradb2-priv
10.168.2.130 oradb3-priv.sumsky.net oradb3-priv
10.168.2.140 oradb4-priv.sumsky.net oradb4-priv
10.168.2.150 oradb5-priv.sumsky.net oradb5-priv

# VIP
192.168.2.15 oradb1-vip.sumsky.net oradb1-vip
192.168.2.25 oradb2-vip.sumsky.net oradb2-vip
192.168.2.35 oradb3-vip.sumsky.net oradb3-vip
192.168.2.45 oradb4-vip.sumsky.net oradb4-vip
192.168.2.55 oradb5-vip.sumsky.net oradb5-vip

Establishing user equivalence with SSH. When adding nodes to the cluster, Oracle copies files from the node where the installation was originally performed to the new node in the cluster. This copy is performed either over the ssh protocol where available or by using remote copy (rcp). For the copy operation to succeed, the oracle user on the existing RAC nodes must be able to log in to the new RAC node without having to provide a password or passphrase.

Currently the existing four nodes are configured to use ssh. To configure the oracle account on the new node to use ssh without using any passwords, perform the following tasks:

Create the authentication key for user oracle. In order to create this key, change the current directory to the default login directory of the oracle user and perform the following operation:
[oracle@oradb5 oracle]$ ssh-keygen -t dsa -b 1024
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
b6:07:42:ae:47:56:0a:a3:a5:bf:75:3e:21:85:8d:30 oracle@oradb5.sumsky.net
[oracle@oradb5 oracle]$

Keys generated from the new node should be appended to the /home/oracle/.ssh/authorized_keys file on all nodes, meaning each node should contain the keys from all other nodes in the cluster.
[oracle@oradb5 oracle]$ cd .ssh
[oracle@oradb5 .ssh]$ cat id_dsa.pub >> authorized_keys
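A hedged sketch of the distribution step (node names taken from the /etc/hosts listing; run as oracle on the new node, expecting password prompts this one time):

```shell
# Exchange DSA public keys between the new node and each existing node
for node in oradb1 oradb2 oradb3 oradb4; do
  # push the new node's key to the existing node
  ssh "$node" 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_dsa.pub \
    || echo "could not reach $node"
  # pull the existing node's key into the local authorized_keys
  ssh "$node" 'cat ~/.ssh/id_dsa.pub' >> ~/.ssh/authorized_keys \
    || echo "no key fetched from $node"
done
echo "key exchange attempted for all nodes"
```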

Once the keys have been created and copied to all nodes, oracle user accounts can connect from one node to another oracle account on another node without using a password. This allows the Oracle Universal Installer to copy files from the installing node to the other nodes in the cluster.
The output below is the verification showing the ssh command from node oradb1 to node oradb5.

[oracle@oradb1 oracle]$ ssh oradb1 hostname
oradb1.sumsky.net
[oracle@oradb1 oracle]$ ssh oradb5 hostname
oradb5.sumsky.net
[oracle@oradb1 oracle]$ ssh oradb1-priv hostname
oradb1.sumsky.net
[oracle@oradb1 oracle]$ ssh oradb5-priv hostname
oradb5.sumsky.net

Note: When performing these tests for the first time, the operating system will display a key and request the user to accept or decline. Enter ‘Yes’ to accept and register the key. Tests should be performed on all other nodes across all interfaces in the cluster except for the VIP.
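These per-interface tests can be scripted in one loop; a sketch assuming the host names above (BatchMode makes a missing key fail fast instead of prompting):

```shell
# Test passwordless ssh to every node over both public and private names
for node in oradb1 oradb2 oradb3 oradb4 oradb5; do
  for host in "$node" "$node-priv"; do
    ssh -o BatchMode=yes "$host" hostname \
      || echo "user equivalence FAILED for $host"
  done
done
```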

Step 3: Install Oracle Clusterware

Oracle Clusterware is already installed on the cluster; the task here is to add the new node to the clustered configuration. This task is performed by executing the Oracle-provided addNode.sh script located in the oui/bin directory under the Clusterware home. Oracle Clusterware maintains two files (the Oracle Cluster Registry, OCR, and the Cluster Synchronization Services, CSS, voting disk) that contain information about the cluster and the applications managed by Oracle Clusterware. These files need to be updated with information about the new node. The first step in the clusterware installation process is to verify that the new node is ready for the install.

Cluster Verification. In Oracle Database 10g Release 2, Oracle introduced a new utility called the Cluster Verification Utility (CVU) as part of the clusterware software. Executing the utility with the appropriate parameters determines the status of the cluster. At this stage, before beginning installation of the Oracle Clusterware, you should perform two verifications:

Whether the hardware and operating system configuration is complete:
cluvfy stage -post hwos -n oradb1,oradb5
Performing post-checks for hardware and operating system setup
Checking node reachability...
Node reachability check passed from node "oradb1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking node connectivity...
Node connectivity check passed for subnet "192.168.2.0" with node(s) oradb5,oradb1.
Node connectivity check passed for subnet "10.168.2.0" with node(s) oradb5,oradb1.
Suitable interfaces for the private interconnect on subnet "192.168.2.0":
oradb5 eth0:192.168.2.50 eth0:192.168.2.55
oradb1 eth0:192.168.2.10 eth0:192.168.2.15
Suitable interfaces for the private interconnect on subnet "10.168.2.0":
oradb5 eth1:10.168.2.150
oradb1 eth1:10.168.2.110
Checking shared storage accessibility...
Shared storage check failed on nodes "oradb5".
Post-check for hardware and operating system setup was unsuccessful on all the nodes.

As highlighted, the above verification failed at the shared storage check; node oradb5 was unable to see the storage devices. In this specific case, the disks did not have sufficient permissions.

Had the installation been continued despite this error, the Oracle Clusterware installation would have failed. With the problem resolved prior to re-execution, however, the verification step succeeds, as illustrated below.

Checking shared storage accessibility...
Shared storage check passed on nodes "oradb5,oradb1".
Post-check for hardware and operating system setup was successful on all the nodes.

Perform appropriate checks on all nodes in the node list before setting up Oracle Clusterware.
[oracle@oradb1 cluvfy]$ cluvfy stage -pre crsinst -n oradb1,oradb5
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "oradb1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as Primary] failed.
Check failed on nodes:
oradb5,oradb1
Administrative privileges check passed.
Checking node connectivity...
Node connectivity check passed for subnet "192.168.2.0" with node(s) oradb5,oradb1.
Node connectivity check passed for subnet "10.168.2.0" with node(s) oradb5,oradb1.
Suitable interfaces for the private interconnect on subnet "192.168.2.0":
oradb5 eth0:192.168.2.50 eth0:192.168.2.55
oradb1 eth0:192.168.2.10 eth0:192.168.2.15
Suitable interfaces for the private interconnect on subnet "10.168.2.0":
oradb5 eth1:10.168.2.150
oradb1 eth1:10.168.2.110
Checking system requirements for 'crs'...
Total memory check passed.
Check failed on nodes:
oradb5,oradb1
Free disk space check passed.
Swap space check passed.
System architecture check passed.
Kernel version check passed.
Package existence check passed for "make-3.79".
Package existence check passed for "binutils-2.14".
Package existence check passed for "gcc-3.2".
Package existence check passed for "glibc-2.3.2-95.27".
Package existence check passed for "compat-db-4.0.14-5".
Package existence check passed for "compat-gcc-7.3-2.96.128".
Package existence check passed for "compat-gcc-c++-7.3-2.96.128".
Package existence check passed for "compat-libstdc++-7.3-2.96.128".
Package existence check passed for "compat-libstdc++-devel-7.3-2.96.128".
Package existence check passed for "openmotif-2.2.3".
Package existence check passed for "setarch-1.3-1".
Group existence check passed for "dba".
Group existence check passed for "oinstall".
User existence check passed for "nobody".
System requirement failed for 'crs'
Pre-check for cluster services setup was successful on all the nodes.

Step 4: Configure Oracle Clusterware

Using the OUI requires that the terminal from which the installer is run be X-windows compatible. If it is not, an appropriate X-windows emulator should be installed and the DISPLAY environment variable set using the following syntax.

export DISPLAY=<client IP address>:0.0

For example:

[oracle@oradb1 oracle]$ export DISPLAY=192.168.2.101:0.0

The next step is to configure the Clusterware on the new node oradb5. For this, as mentioned earlier, Oracle has provided a new executable called addNode.sh, located in the <Clusterware Home>/oui/bin directory.

1. Execute the script <Clusterware Home>/oui/bin/addNode.sh.
2. Welcome - click on Next.
3. Specify Cluster Nodes to Add to Installation - In this screen, OUI lists existing nodes in the cluster and in the bottom half of the screen lists the new node(s) information to be added in the appropriate columns. Once the information is entered click on Next.

Public Node Name : oradb5
Private Node Name: oradb5-priv
Virtual Host Name: oradb5-vip

4. Cluster Node Addition Summary - Verify the new node is listed under the “New Nodes” drilldown and click on Install.
5. Once all required clusterware components are copied from oradb1 to oradb5, OUI prompts to execute three files:

/usr/app/oracle/oraInventory/orainstRoot.sh on node oradb5

[root@oradb5 oraInventory]# ./orainstRoot.sh
Changing permissions of /usr/app/oracle/oraInventory to 770.
Changing groupname of /usr/app/oracle/oraInventory to dba.
The execution of the script is complete

[root@oradb5 oraInventory]#

/usr/app/oracle/product/10.2.0/crs/install/rootaddnode.sh on node oradb1. (The rootaddnode.sh script adds the new node's information to the OCR using the srvctl utility. Note the srvctl command with the nodeapps parameter at the end of the script output below.)

[root@oradb1 install]# ./rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 5: oradb5 oradb5-priv oradb5
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/usr/app/oracle/product/10.2.0/crs/bin/srvctl
add nodeapps -n oradb5 -A oradb5-vip/255.255.255.0/bond0
-o /usr/app/oracle/product/10.2.0/crs
[root@oradb1 install]#

/usr/app/oracle/product/10.2.0/crs/root.sh on node oradb5.

[root@oradb5 crs]# ./root.sh
WARNING: directory '/usr/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/usr/app/oracle/product' is not owned by root
WARNING: directory '/usr/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

OCR backup directory '/usr/app/oracle/product/10.2.0/crs/cdata/SskyClst'
does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/usr/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/usr/app/oracle/product' is not owned by root
WARNING: directory '/usr/app/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname oradb1 for node 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: oradb1 oradb1-priv oradb1
node 2: oradb2 oradb2-priv oradb2
node 3: oradb3 oradb3-priv oradb3
node 4: oradb4 oradb4-priv oradb4
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
oradb1
oradb2
oradb3
oradb4
oradb5
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
IP address "oradb-vip" has already been used.
Enter an unused IP address.

The error “oradb-vip has already been used” was raised because the VIPs were already configured on all nodes other than oradb5. It is important to execute VIPCA (Virtual IP Configuration Assistant) manually before proceeding.

Manually configuring VIP using VIPCA. As with the OUI, running VIPCA requires that the terminal from which it is run be X-windows compatible. If not, an appropriate X-windows emulator should be installed and the DISPLAY environment variable set using the following syntax:

export DISPLAY=<client IP address>:0.0
For example:

[oracle@oradb1 oracle]$ export DISPLAY=192.168.2.101:0.0

Immediately after executing root.sh from the command prompt on node oradb1 (or the node from which the add-node procedure is being executed), invoke VIPCA, also as root. (VIPCA will also configure the GSD and ONS resources on the new node.)

i) Welcome - click on Next.

ii) Step 1 of 2: Network Interfaces – A list of network interfaces is displayed; select the public network interface to which the VIP will be assigned/mapped. Normally it is the first interface in the list (eth0); however, in this specific situation, since bonding is enabled for the private interconnect and the list is displayed in alphabetical order, the bond0 interface will be at the top of the list. Click on Next when complete.

iii) Step 2 of 2: Virtual IPs for cluster nodes – For each node name in the list, provide the VIP alias name and the Virtual IP address in the appropriate columns. Click on Next when complete.

iv) Summary - A summary of the current selected configuration is listed. Click on Finish when all settings are correct.

v) Configuration Assistant Progress Dialog – This screen displays the progress of the VIP, GSD, and ONS configuration process. Click on OK when prompted by VIPCA.

vi) Configuration Results – This screen displays the configuration results. Click on Exit to end VIPCA.

On completion of the Oracle Clusterware installation, the following files are created in their respective directories.

Clusterware files:

[root@oradb5 root]# ls -ltr /etc/init.d/init.*
-r-xr-xr-x 1 root root 3197 Aug 13 23:32 /etc/init.d/init.evmd
-r-xr-xr-x 1 root root 35401 Aug 13 23:32 /etc/init.d/init.cssd
-r-xr-xr-x 1 root root 4721 Aug 13 23:32 /etc/init.d/init.crsd
-r-xr-xr-x 1 root root 1951 Aug 13 23:32 /etc/init.d/init.crs
[root@oradb5 root]#

The operating system's /etc/inittab file is updated with the following entries.

[root@oradb5 root]# tail -5 /etc/inittab
# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null

6. Click on OK after all the listed scripts have run on all nodes.

7. End of Installation – Click on Exit.

8. Verify that all the nodes are registered with the Clusterware, using the olsnodes command.
[oracle@oradb1 oracle]$ olsnodes
oradb1
oradb2
oradb3
oradb4
oradb5
[oracle@oradb1 oracle]$

9. Verify that the cluster services are started, using the crs_stat command.
[oracle@oradb1 oracle]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.oradb1.gsd application ONLINE ONLINE oradb1
ora.oradb1.ons application ONLINE ONLINE oradb1
ora.oradb1.vip application ONLINE ONLINE oradb1
ora.oradb2.gsd application ONLINE ONLINE oradb2
...
ora.oradb3.vip application ONLINE ONLINE oradb3
ora.oradb4.gsd application ONLINE ONLINE oradb4
ora.oradb4.ons application ONLINE ONLINE oradb4
ora.oradb4.vip application ONLINE ONLINE oradb4
ora.oradb5.gsd application ONLINE ONLINE oradb5
ora.oradb5.ons application ONLINE ONLINE oradb5
ora.oradb5.vip application ONLINE ONLINE oradb5

10. Verify that the VIP services are configured at the OS level. The virtual IP address is added to the OS network configuration and the network services are started. The VIP configuration can be verified using the ifconfig command at the OS level.
[oracle@oradb5 oracle]$ ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:90:27:B8:58:10
inet addr:192.168.2.50 Bcast:192.168.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:123 errors:0 dropped:0 overruns:0 frame:0
TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:583308844 (556.2 Mb) TX bytes:4676477 (4.4 Mb)

eth0:1 Link encap:Ethernet HWaddr 00:90:27:B8:58:10
inet addr:192.168.2.55 Bcast:192.168.3.255 Mask:255.255.252.0
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:14631 errors:0 dropped:0 overruns:0 frame:0
TX packets:21377 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8025681 (7.6 Mb) TX bytes:600 (600.0 b)
Interrupt:11 Base address:0x2400 Memory:41300000-41300038

Note: eth0:1 indicates a VIP address layered on the physical interface eth0. When the node fails, eth0:1 is moved to a surviving node in the cluster; on that node the migrated VIP appears as eth0:2 or higher, depending on how many other VIPs have already migrated there.

Step 5: Install Oracle Software

The next step is to install the Oracle software on the new node. As mentioned earlier, Oracle has provided a new executable called addNode.sh located in the $ORACLE_HOME/oui/bin directory.

1. Execute the script $ORACLE_HOME/oui/bin/addNode.sh.

2. Welcome - click on Next.

3. Specify Cluster Nodes to Add to Installation - In this screen, OUI lists the existing nodes in the cluster and, in the bottom half of the screen, the new node(s). Select the node oradb5. Once the information is entered, click on Next.

4. Cluster Node Addition Summary - Verify if the new node is listed under the “New Nodes” drilldown and click on the Install button.

5. When the copy of the Oracle software to node oradb5 is complete, OUI prompts you to execute the /usr/app/oracle/product/10.2.0/db_1/root.sh script from another window, as the root user, on the new node(s) in the cluster.
[root@oradb5 db_1]# ./root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /usr/app/oracle/product/10.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

6. Click on OK after root.sh has run on node oradb5.

7. End of Installation – Click on Exit.

Once the RDBMS software is installed, it is good practice to run netca before moving to the next step. The Net Configuration Assistant (netca) configures the required network files and parameters, such as the listener and the sqlnet.ora and tnsnames.ora files.
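As a purely hypothetical illustration (host, port, service, and instance names assumed from this environment, not taken from an actual netca run), the tnsnames.ora entry for the new instance typically resembles:

```
SSKY5 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradb5-vip.sumsky.net)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = SSKYDB)
      (INSTANCE_NAME = SSKY5)
    )
  )
```

Note that the VIP host name, not the physical host name, is what client entries should reference, so connections can fail over with the VIP.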

Step 6: Add New Instance(s)

DBCA has all the required options to add additional instances to the cluster.

Requirements:
i) Make a full cold backup of the database before commencing the process.

ii) Oracle Clusterware should be running on all nodes.


1. Welcome screen - select Oracle Real Application Cluster database and click on Next.

2. Step 1 of 7: Operations - The operations that can be performed using DBCA are listed. Select Instance Management and click on Next.

3. Step 2 of 7: Instance Management - The instance management operations that can be performed are listed. Select Add an Instance and click on Next.

4. Step 3 of 7: List of cluster databases - The clustered databases running on the node are listed. In this case the database running on node oradb1 is SSKYDB; select this database. In the bottom part of the screen, DBCA requests you to “Specify a user with SYSDBA system privileges”:
Username: sys
Password: < > and click on Next.

5. Step 4 of 7: List of cluster database instances - The DBCA lists all the instances currently available on the cluster. Verify if all instances are listed and click on Next.

6. Step 5 of 7: Instance naming and node selection - DBCA lists the next instance name in the series and requests the node on which to add the instance. In our example the next instance name is SSKY5 and the node name is oradb5. Click on Next after making the appropriate selection. At this stage there is a small pause before the next screen appears as DBCA determines the current state of the new node and what services are configured on the existing nodes.

7. Step 6 of 7: Database Services - If the current configuration has any database services configured, this screen will appear (otherwise it is skipped). In our example, the current configuration has two services defined, CRM and PAYROLL. This screen prompts you to configure them across the new instance. Make the appropriate selections and click on Next when ready.

8. Step 7 of 7: Instance Storage - In this screen, DBCA lists the instance-specific files, such as undo tablespaces, redo log groups, and so on. Verify that all required files are listed and click on Finish.

9. Database Configuration Assistant: Summary - After verifying the summary, click on OK to begin the software installation.

10. DBCA verifies the new node oradb5, and as the database is configured to use ASM, prompts with the message “ASM is present on the cluster but needs to be extended to the following nodes: [oradb5]. Do you want ASM to be extended?” Click on Yes to add ASM to the new instance.

11. In order to create and start the ASM instance on the new node, Oracle requires the listener to be present and started. DBCA prompts for permission to configure the listener using port 1521 and listener name LISTENER_ORADB5. Click on Yes if the default port is acceptable; otherwise click on No and manually run NetCA on oradb5 to create the listener using a different port.

12. Database Configuration Assistant progress screen - Once instance management is complete, the user is prompted with the message “Do you want to perform another operation?” Click on No to end.

13. At this stage, the following is true:
a. The clusterware has been installed on node oradb5, which is now part of the cluster.
b. The Oracle software has been installed on node oradb5.
c. The new ASM instance ASM5 and the new Oracle instance SSKY5 have been created and configured on oradb5.

14. Verify that the node addition was successful.
a. Verify that all instances in the cluster are started, using the V$ACTIVE_INSTANCES view from any of the participating instances. For example:
SQL> select * from v$active_instances;

INST_NUMBER INST_NAME
----------- -----------------------------------
          1 oradb1.sumsky.net:SSKY1
          2 oradb2.sumsky.net:SSKY2
          3 oradb3.sumsky.net:SSKY3
          4 oradb4.sumsky.net:SSKY4
          5 oradb5.sumsky.net:SSKY5

b. Verify that all ASM diskgroups are mounted and the datafiles are visible to the new instance.
SQL> SELECT NAME,STATE,TYPE FROM V$ASM_DISKGROUP;

NAME                           STATE       TYPE
------------------------------ ----------- ------
ASMGRP1                        CONNECTED   NORMAL
ASMGRP2                        CONNECTED   NORMAL

SQL> SELECT NAME FROM V$DATAFILE;

NAME
-----------------------------------------------------------------
+ASMGRP1/sskydb/datafile/system.256.581006553
+ASMGRP1/sskydb/datafile/undotbs1.258.581006555
+ASMGRP1/sskydb/datafile/sysaux.257.581006553
+ASMGRP1/sskydb/datafile/users.259.581006555
+ASMGRP1/sskydb/datafile/example.269.581007007
+ASMGRP1/sskydb/datafile/undotbs2.271.581029215

c. Verify that OCR is aware of:
The new instance in the cluster:

[oracle@oradb1 oracle]$ srvctl status database -d SSKYDB
Instance SSKY1 is running on node oradb1
Instance SSKY2 is running on node oradb2
Instance SSKY3 is running on node oradb3
Instance SSKY4 is running on node oradb4
Instance SSKY5 is running on node oradb5

The database services:
[oracle@oradb1 oracle]$ srvctl status service -d SSKYDB
Service CRM is running on instance(s) SSKY1
Service CRM is running on instance(s) SSKY2
Service CRM is running on instance(s) SSKY3
Service CRM is running on instance(s) SSKY4
Service CRM is running on instance(s) SSKY5
Service PAYROLL is running on instance(s) SSKY1
Service PAYROLL is running on instance(s) SSKY5

Step 7: Do Housekeeping Chores


For easy administration and navigation, you should define several different environment variables in the login profile. For example:

[oracle@oradb5 oracle]$ more .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
# User specific environment and startup programs
export ORACLE_BASE=/usr/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs

export PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin
export PATH=${PATH}:$ORA_CRS_HOME/bin
export PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin:/sbin
export ORACLE_ADMIN=$ORACLE_BASE/admin
export TNS_ADMIN=$ORACLE_HOME/network/admin

export LD_ASSUME_KERNEL=2.4.19
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORA_CRS_HOME/lib

export CLASSPATH=$ORACLE_HOME/JRE
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export THREADS_FLAG=native
export ORACLE_SID=SSKY5
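A quick way to confirm the profile takes effect is to source it in a new shell and check that the key variables expand as expected. The sketch below writes a trimmed copy of the settings above to /tmp so it can be run anywhere; the /tmp/profile_check file is only a stand-in for ~/.bash_profile on oradb5, where you would simply open a new login shell and run the echo commands.

```shell
#!/bin/sh
# Sanity check: source a trimmed copy of the profile settings shown above.
cat > /tmp/profile_check <<'EOF'
export ORACLE_BASE=/usr/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs
export ORACLE_SID=SSKY5
EOF
. /tmp/profile_check
# Each variable should expand to a full path, not the literal $ORACLE_BASE.
echo "ORACLE_HOME=$ORACLE_HOME"
echo "ORACLE_SID=$ORACLE_SID"
```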

Add the network addresses to DNS for lookup. For applications and clients connecting to the database through the VIP to translate the alias into the appropriate IP address, the VIP addresses must be added to the DNS.

Also add the new network address to the appropriate connect descriptors in the client tnsnames.ora file.

CRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradb1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradb2-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradb3-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradb4-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradb5-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = CRM)
    )
  )
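Because a missing ADDRESS line silently drops a node from client-side load balancing, it is worth checking that the edited descriptor really lists all five VIPs. A minimal sketch, where the heredoc copy in /tmp stands in for the client's actual tnsnames.ora:

```shell
#!/bin/sh
# Write a stand-in copy of the client tnsnames.ora and count the VIP entries
# in the CRAC descriptor (expect one ADDRESS line per node, i.e. 5).
cat > /tmp/tnsnames.ora <<'EOF'
CRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradb1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradb2-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradb3-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradb4-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradb5-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = CRM)
    )
  )
EOF
count=$(grep -c 'HOST = oradb[0-9]*-vip' /tmp/tnsnames.ora)
echo "VIP addresses in CRAC descriptor: $count"
```

On a real client, running `tnsping CRAC` afterwards confirms that the alias also resolves through Oracle Net.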

If the servers are configured to use FAN features, add the new server address to the ons.config files on all database servers. The ons.config file is located in $ORACLE_HOME/opmn/conf:
[oracle@oradb4 oracle]$ more $ORACLE_HOME/opmn/conf/ons.config
localport=6101
remoteport=6201
loglevel=3
useocr=on
nodes=oradb4.sumsky.net:6101,oradb2.sumsky.net:6201,oradb1.sumsky.net:6201,oradb3.sumsky.net:6201,oradb5.sumsky.net:6201,onsclient1.sumsky.net:6200,onsclient2.sumsky.net:6200
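A malformed entry in the nodes list (for example, a comma lost when the line wraps) can quietly stop ONS from notifying a node. The sketch below checks that every entry is a host:port pair; the heredoc copy in /tmp is only a stand-in for $ORACLE_HOME/opmn/conf/ons.config on a real server.

```shell
#!/bin/sh
# Validate the nodes list in ons.config: each comma-separated entry
# must be exactly host:port with a numeric port.
cat > /tmp/ons.config <<'EOF'
localport=6101
remoteport=6201
loglevel=3
useocr=on
nodes=oradb4.sumsky.net:6101,oradb2.sumsky.net:6201,oradb1.sumsky.net:6201,oradb3.sumsky.net:6201,oradb5.sumsky.net:6201,onsclient1.sumsky.net:6200,onsclient2.sumsky.net:6200
EOF
grep '^nodes=' /tmp/ons.config | sed 's/^nodes=//' | tr ',' '\n' |
awk -F: 'NF != 2 || $2 !~ /^[0-9]+$/ { bad = 1; print "malformed entry: " $0 }
         END { if (!bad) print "all node entries look well-formed" }'
```

After editing the file on each server, `onsctl ping` confirms the ONS daemon is up; the exact reload behaviour varies, so restarting ONS after the change is the safe assumption.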

Conclusion


Congratulations, you have just added a new node to an existing configuration of four nodes. The following table shows the new configuration.

Database Name: SSKYDB
Number of Nodes: Five Nodes – oradb1, oradb2, oradb3, oradb4 and oradb5
Database Version: 10.2.0.1
Number of Instances: Five Instances – SSKY1, SSKY2, SSKY3, SSKY4 and SSKY5
O/S Kernel Version: Red Hat Advanced Server 3.0 Linux sumsky.net 2.4.21-32.ELsmp
File System: OCFS 1.0 and ASM
Cluster Manager: Oracle Clusterware

=================================================================
Md. Shafiul Alam Tanmoy
DBA, SSD
LEADS Corporation limited,
Dhaka, Bangladesh
Phone : (+880-2) 9552145
Mobile : +88-01717459111
Fax : (880-2) 9565586
E-mail : shafiul@leads-bd.com; tanmoy7@yahoo.com
web : www.leads-bd.com




Re: installing Oracle database 10.2 on RHEL AS/ES 4 [message #259463 is a reply to message #259292] Wed, 15 August 2007 12:19
shiv8d
Messages: 4
Registered: August 2007
Junior Member
Wow, that was a wonderful tutorial.
Thanks a Lot Tanmoy
Shiva
Re: installing Oracle database 10.2 on RHEL AS/ES 4 [message #270716 is a reply to message #259463] Thu, 27 September 2007 17:47
shiv8d
Messages: 4
Registered: August 2007
Junior Member
Hi Tanmoy,
Your help worked out for rebuilding the corrupted RAC. I am now installing another fresh Oracle RAC on RHEL v3 using raw devices.

Can you please send me instructions (steps) for installing RAC and creating ASM and DB instances for a 2-node RAC running RHEL v3?

Thanks a lot.
Shiva