Oracle[11gR2] 2 nodes RAC installation on Linux 5.5
1.Download Oracle software from oracle.com.
2.Configure two VM machines as per the below configuration.
Note: minimum RAM should be 1.6 GB, but 2 GB is recommended.
Node 1:-
Hostname: db-rac1.localdomain
IP Address eth0: 192.168.148.130 (public address)
IP Address eth1: 192.168.217.11 (private address)
Node 2:-
Hostname: db-rac2.localdomain
IP Address eth0: 192.168.148.131 (public address)
IP Address eth1: 192.168.217.12 (private address)
3.Oracle Installation Prerequisites (perform the below steps in parallel on both nodes)
i) Install the below required RPMs, or upgrade them if already installed.
· rpm -Uvh binutils-2.*
· rpm -Uvh compat-libstdc++-33*
· rpm -Uvh elfutils-libelf-0.*
· rpm -Uvh elfutils-libelf-devel-*
· rpm -Uvh gcc-4.*
· rpm -Uvh gcc-c++-4.*
· rpm -Uvh glibc-2.*
· rpm -Uvh glibc-common-2.*
· rpm -Uvh glibc-devel-2.*
· rpm -Uvh glibc-headers-2.*
· rpm -Uvh ksh-2*
· rpm -Uvh libaio-0.*
· rpm -Uvh libaio-devel-0.*
· rpm -Uvh libgcc-4.*
· rpm -Uvh libstdc++-4.*
· rpm -Uvh libstdc++-devel-4.*
· rpm -Uvh make-3.*
· rpm -Uvh sysstat-7.*
· rpm -Uvh unixODBC-2.*
· rpm -Uvh unixODBC-devel-2.*
Example:-
[root@db-rac1 ~]# mkdir ~/Desktop/rhel_cd
[root@db-rac1 ~]# cd ~/Desktop/rhel_cd/Server
[root@db-rac1 Server]# rpm -Uvh binutils-2.*
warning: binutils-2.17.50.0.6-14.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...                ########################################### [100%]
package binutils-2.17.50.0.6-14.el5.x86_64 is already installed
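Rather than typing twenty separate commands, the package list above can be driven by a single loop. The sketch below is a dry run that only prints each rpm command (drop the echo, and run it as root from the RPM directory, to install for real); the package patterns are taken verbatim from the list above.

```shell
#!/bin/sh
set -f  # disable globbing so the package patterns are not expanded against the local directory

# Package name patterns required by Oracle 11gR2 on EL5, from the list above.
PKGS="binutils-2.* compat-libstdc++-33* elfutils-libelf-0.* elfutils-libelf-devel-* \
gcc-4.* gcc-c++-4.* glibc-2.* glibc-common-2.* glibc-devel-2.* glibc-headers-2.* \
ksh-2* libaio-0.* libaio-devel-0.* libgcc-4.* libstdc++-4.* libstdc++-devel-4.* \
make-3.* sysstat-7.* unixODBC-2.* unixODBC-devel-2.*"

# Dry run: print each rpm command instead of executing it.
for p in $PKGS; do
    echo "rpm -Uvh $p"
done
```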
ii) Add or amend the following lines in the "/etc/sysctl.conf" file.
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1054504960
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048586
Run the following command to apply the new kernel parameters.
# /sbin/sysctl -p
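After editing the file, it is easy to miss a parameter. The sketch below checks a copy of the fragment for every required key; here the content is written to a sample file so the check runs anywhere, but on a real node you would point CONF at /etc/sysctl.conf instead (the sample file path is an assumption for illustration).

```shell
#!/bin/sh
# Path of the file to check; set CONF=/etc/sysctl.conf on a real node.
CONF=${CONF:-/tmp/sysctl.conf.sample}

# For this sketch, write the expected fragment to the sample file.
cat > "$CONF" <<'EOF'
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1054504960
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048586
EOF

# Verify every required parameter is present.
missing=0
for key in fs.aio-max-nr fs.file-max kernel.shmall kernel.shmmax kernel.shmmni \
           kernel.sem net.ipv4.ip_local_port_range net.core.rmem_default \
           net.core.rmem_max net.core.wmem_default net.core.wmem_max; do
    grep -q "^$key" "$CONF" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all 11 kernel parameters present"
```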
iii)Add the following lines to the "/etc/security/limits.conf" file.
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
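The lines above cover only the oracle user. Since this installation also creates a grid user to own the Grid Infrastructure, you will likely want the same limits for grid as well; the entries below are an assumed addition mirroring the oracle ones, not part of the original steps.

```
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
```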
iv)Create the new groups and users.
groupadd -g 1000 oinstall
groupadd -g 1001 asmadmin
groupadd -g 1002 dba
groupadd -g 1003 oper
groupadd -g 1004 asmdba
groupadd -g 1005 asmoper
useradd -u 1101 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1102 -g oinstall -G dba,asmdba,oper oracle
[root@db-rac1 ~]# id oracle
uid=1102(oracle) gid=1000(oinstall) groups=1000(oinstall),1002(dba),1003(oper),1001(asmdba)
[root@db-rac1 ~]#
[root@db-rac1 ~]# id grid
uid=1101(grid) gid=1000(oinstall)
groups=1000(oinstall),1004(asmadmin),1001(asmdba),1005(asmoper)
v)Create the directories in which the Oracle software will be installed.
mkdir -p /u01/app/grid/product/11.2.0/grid
chown -R grid:oinstall /u01/app/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
passwd grid
passwd oracle
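The layout above can be rehearsed without root by building it under a scratch prefix first. This sketch uses a temporary directory instead of /u01 and skips chown and passwd, which need root and the real users; it only demonstrates the directory structure.

```shell
#!/bin/sh
# Use a temporary prefix instead of /u01 so the sketch runs without root.
PREFIX=$(mktemp -d)

# Same layout as the real installation, relative to the prefix.
mkdir -p "$PREFIX/u01/app/grid/product/11.2.0/grid"
mkdir -p "$PREFIX/u01/app/oracle/product/11.2.0/db_1"
chmod -R 775 "$PREFIX/u01"

# On a real node you would then run, as root:
#   chown -R grid:oinstall /u01/app/grid
#   chown oracle:oinstall /u01/app/oracle
echo "layout created under $PREFIX"
```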
vi)Modify the /etc/hosts file on both nodes as per below. If you are not using DNS, the "/etc/hosts" file must contain the following information.
Putting SCAN entries in /etc/hosts is not recommended: the SCAN should be resolved by DNS, round-robining between 3 addresses on the same subnet as the public IPs. For this installation we will compromise and use the hosts file.
[root@db-rac1 /]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
# Public
192.168.148.130 db-rac1.localdomain db-rac1
192.168.148.131 db-rac2.localdomain db-rac2
# Private
192.168.217.11  db-rac1-priv.localdomain db-rac1-priv
192.168.217.12  db-rac2-priv.localdomain db-rac2-priv
# Virtual
192.168.148.132 db-rac1-vip.localdomain db-rac1-vip
192.168.148.133 db-rac2-vip.localdomain db-rac2-vip
# SCAN
192.168.148.100 db-rac-scan.localdomain db-rac-scan
192.168.148.101 db-rac-scan.localdomain db-rac-scan
192.168.148.102 db-rac-scan.localdomain db-rac-scan
Check that both nodes can ping all the public and private IP addresses using the following commands.
ping -c 3 db-rac2
ping -c 3 db-rac2-priv
ping -c 3 db-rac2-vip
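A frequent source of installer failures is a malformed hosts file, e.g. the wrong number of SCAN entries. The sketch below checks a copy of the hosts content (written to a temp file here so it runs anywhere; on a real node you would check /etc/hosts itself) for exactly three SCAN lines and one VIP per node. The specific checks are illustrative assumptions, not part of the original steps.

```shell
#!/bin/sh
# Write the expected hosts content to a scratch file for checking.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
127.0.0.1       localhost.localdomain localhost
192.168.148.130 db-rac1.localdomain db-rac1
192.168.148.131 db-rac2.localdomain db-rac2
192.168.217.11  db-rac1-priv.localdomain db-rac1-priv
192.168.217.12  db-rac2-priv.localdomain db-rac2-priv
192.168.148.132 db-rac1-vip.localdomain db-rac1-vip
192.168.148.133 db-rac2-vip.localdomain db-rac2-vip
192.168.148.100 db-rac-scan.localdomain db-rac-scan
192.168.148.101 db-rac-scan.localdomain db-rac-scan
192.168.148.102 db-rac-scan.localdomain db-rac-scan
EOF

# Three SCAN addresses and one VIP per node are expected.
scan=$(grep -c "db-rac-scan" "$HOSTS")
vips=$(grep -c -- "-vip" "$HOSTS")
echo "SCAN entries: $scan, VIP entries: $vips"
[ "$scan" -eq 3 ] && [ "$vips" -eq 2 ] && echo "hosts file looks consistent"
```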
vii)If the Linux firewall is enabled, you will need to disable it. The following is an example of disabling the firewall.
[root@db-rac1 ~]# service iptables stop
[root@db-rac1 ~]# chkconfig iptables off
viii)Configure the ASM disks.
First, check for newly added storage (disks without a partition table) on your system.
[root@db-rac1 ~]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        2610    20860402+  8e  Linux LVM
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdd: 10.7 GB, 19226689536 bytes  -------> newly added storage
255 heads, 63 sectors/track, 2337 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table  -------> note: this disk has no valid partition table yet
[root@db-rac1 ~]#
[root@db-rac1 ~]# fdisk /dev/sdd  --------------------> Step 2
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only, until you decide to write them. After that, of course, the previous content won't be recoverable.
The number of cylinders for this disk is set to 2337.
There is nothing wrong with that, but this is larger than 1024, and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): m  --------------------> Step 3
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
Command (m for help): n  --------------------> Step 4
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1  --------------------> Step 5
First cylinder (1-2337, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2337, default 2337): 2337
Command (m for help): p ----------------- Step 6
Disk /dev/sdd: 10.7 GB, 19226689536 bytes
255 heads, 63 sectors/track, 2337 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        2337    18771921   83  Linux
Command (m for help): w  --------------------> Step 7
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
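The interactive session in Steps 2-7 can also be scripted by piping the same answers into fdisk. The sketch below is a dry run: it defines the answer sequence and prints the command that would be executed, since actual partitioning needs root and the real device.

```shell
#!/bin/sh
# The same answers given interactively in Steps 2-7, one per line:
# n = new partition, p = primary, 1 = partition number,
# two blank lines = accept default first/last cylinder, w = write and exit.
ANSWERS="n
p
1


w"

DEV=/dev/sdd   # the newly added disk from the fdisk -l output above

# Dry run: on a real node (as root) you would pipe the answers into fdisk.
echo "echo \"\$ANSWERS\" | fdisk $DEV"
```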
Once the disk is partitioned, we can provision it for ASM.
[root@db-rac1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [oracle]: grid
Default group to own the driver interface [dba]: asmadmin
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@db-rac1 ~]#
[root@db-rac1 ~]# /etc/init.d/oracleasm start
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@db-rac1 ~]#
[root@db-rac1 ~]#
[root@db-rac1 ~]# /etc/init.d/oracleasm createdisk OCR1 /dev/sdc1
Marking disk "OCR1" as an ASM disk: [ OK ]
[root@db-rac1 ~]# /etc/init.d/oracleasm createdisk DATA1 /dev/sdd1
Marking disk "DATA1" as an ASM disk: [ OK ]
[root@db-rac1 ~]# /etc/init.d/oracleasm createdisk ARCH1 /dev/sde1
Marking disk "ARCH1" as an ASM disk: [ OK ]
[root@db-rac1 ~]#
[root@db-rac1 ~]# /etc/init.d/oracleasm listdisks
ARCH1
DATA1
OCR1
[root@db-rac1 ~]#
[root@db-rac1 ~]# ls -ltr /dev/oracleasm/disks/*
brw-rw---- 1 grid asmadmin 8, 33 Feb 11 03:53
/dev/oracleasm/disks/OCR1
brw-rw---- 1 grid asmadmin 8, 49 Feb 11 03:53
/dev/oracleasm/disks/DATA1
brw-rw---- 1 grid asmadmin 8, 65 Feb 11 03:53
/dev/oracleasm/disks/ARCH1
[root@db-rac1 ~]#
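The three createdisk calls follow the same pattern, so they can be looped over label/device pairs. This is a dry-run sketch that only echoes each command (the labels and devices come from the steps above; remove the echo and run as root to actually mark the disks).

```shell
#!/bin/sh
# label:device pairs from the createdisk steps above.
DISKS="OCR1:/dev/sdc1 DATA1:/dev/sdd1 ARCH1:/dev/sde1"

for d in $DISKS; do
    label=${d%%:*}   # text before the colon
    dev=${d#*:}      # text after the colon
    # Dry run; remove echo to actually mark the disks for ASM.
    echo "/etc/init.d/oracleasm createdisk $label $dev"
done
```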
Create passwordless SSH connectivity between the nodes (for the grid and oracle users) and test it.
[grid@db-rac1 sshsetup]$ ssh db-rac2  -------> working from node 1 to node 2
[grid@db-rac2 ~]$ ssh db-rac1  -------> working from node 2 to node 1
Last login: Sat Feb 11 04:03:43 2017
[grid@db-rac1 ~]$
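The sshsetup directory in the prompt above suggests Oracle's sshUserSetup.sh helper, shipped with the grid media, was used to set up the equivalence. A typical invocation is sketched below as a dry run (the flags are assumptions; check the script's own usage output before running it as the grid user on node 1).

```shell
#!/bin/sh
# Hypothetical sshUserSetup.sh invocation; flags assumed, verify against
# the script's usage output. Run from the grid media's sshsetup directory.
CMD='./sshUserSetup.sh -user grid -hosts "db-rac1 db-rac2" -noPromptPassphrase'

# Dry run: print the command rather than executing it here.
echo "$CMD"
```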
Execute runcluvfy.sh to verify that all prerequisites are met.
[grid@db-rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n db-rac1,db-rac2 -verbose
Once runcluvfy completes successfully, move on to the clusterware installation.
cd /u01/Softwr/grid
./runInstaller
Select the "Install and Configure Grid Infrastructure for a Cluster" option, then click the "Next" button.
Select the "Typical Installation" option, then click the "Next" button.
On the "Specify Cluster Configuration" screen, enter the SCAN name and click the "Add" button.
Enter the details of the second node in the cluster, then click the "OK" button.
Click the "Identify network interfaces..." button and check that the public and private networks are specified correctly. Once you are happy with them, click the "OK" button and the "Next" button on the previous screen.
Enter "/u01/app/grid/product/11.2.0/grid" as the software location and "Automatic Storage Manager" as the cluster registry storage type. Enter the ASM password and click the "Next" button.
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/grid/product/11.2.0/grid
Set the redundancy to "External". If the ASM disks are not displayed, click the "Change Discovery Path" button, enter "/dev/oracleasm/disks/*" and click the "OK" button. Select the required disks and click the "Next" button.
· Accept the default inventory directory by clicking the "Next" button.
· Wait while the prerequisite checks complete. If you have any issues, either fix them or check the "Ignore All" checkbox and click the "Next" button.
· Check the summary information, then click the "Finish" button.
· Wait while the setup takes place.
When prompted, run the configuration scripts on each node.
[root@db-rac1 oracle]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@db-rac1 oracle]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'db-rac1'
CRS-2676: Start of 'ora.mdnsd' on 'db-rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'db-rac1'
CRS-2676: Start of 'ora.gpnpd' on 'db-rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'db-rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'db-rac1'
CRS-2676: Start of 'ora.gipcd' on 'db-rac1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'db-rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'db-rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'db-rac1'
CRS-2676: Start of 'ora.diskmon' on 'db-rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'db-rac1' succeeded
ASM created and started successfully.
Disk Group OCR created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 55563c311cc04f8ebfa695edd4e46465.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                 File Name                    Disk group
--  -----    -----------------                 ---------                    ----------
 1. ONLINE   55563c311cc04f8ebfa695edd4e46465  (/dev/oracleasm/disks/OCR1)  [OCR]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'db-rac1'
CRS-2676: Start of 'ora.asm' on 'db-rac1' succeeded
CRS-2672: Attempting to start 'ora.OCR.dg' on 'db-rac1'
CRS-2676: Start of 'ora.OCR.dg' on 'db-rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@db-rac1 oracle]#
Node 2:-
[root@db-rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@db-rac2 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Wait for the configuration assistants to complete.
We expect the verification phase to fail with an error relating to the SCAN, assuming you are not using DNS. Provided this is the only error, it is safe to ignore it and continue by clicking the "Next" button.
Click the "Close" button to exit the installer.
The grid infrastructure installation is now complete.
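After the installer closes, it is worth verifying the cluster state. The commands below are standard 11gR2 checks; they are printed here as a dry-run sketch since they require a live cluster, and the GRID_HOME path is taken from the root.sh output above.

```shell
#!/bin/sh
GRID_HOME=/u01/app/11.2.0/grid   # grid home from the root.sh output above

# Standard post-install checks; run them as grid or root on a cluster node.
for c in "crsctl check cluster -all" \
         "crsctl stat res -t" \
         "olsnodes -n" \
         "srvctl status nodeapps"; do
    echo "$GRID_HOME/bin/$c"
done
```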