


ELSA-2021-9006 - Unbreakable Enterprise kernel security update


Type: SECURITY
Severity: IMPORTANT
Release Date: 2021-01-12

Description


[5.4.17-2036.102.0.2uek]
- xen-blkback: set ring->xenblkd to NULL after kthread_stop() (Pawel Wieczorkiewicz) [Orabug: 32260252] {CVE-2020-29569}
- xenbus/xenbus_backend: Disallow pending watch messages (SeongJae Park) [Orabug: 32253409] {CVE-2020-29568}
- xen/xenbus: Count pending messages for each watch (SeongJae Park) [Orabug: 32253409] {CVE-2020-29568}
- xen/xenbus/xen_bus_type: Support will_handle watch callback (SeongJae Park) [Orabug: 32253409] {CVE-2020-29568}
- xen/xenbus: Add 'will_handle' callback support in xenbus_watch_path() (SeongJae Park) [Orabug: 32253409] {CVE-2020-29568}
- xen/xenbus: Allow watches discard events before queueing (SeongJae Park) [Orabug: 32253409] {CVE-2020-29568}
[5.4.17-2036.102.0.1uek]
- target: fix XCOPY NAA identifier lookup (David Disseldorp) [Orabug: 32248035] {CVE-2020-28374}
[5.4.17-2036.102.0uek]
- futex: Fix inode life-time issue (Peter Zijlstra) [Orabug: 32233515] {CVE-2020-14381}
- perf/core: Fix race in the perf_mmap_close() function (Jiri Olsa) [Orabug: 32233352] {CVE-2020-14351}
- intel_idle: Customize IceLake server support (Chen Yu) [Orabug: 32218858]
- dm crypt: Allow unaligned bio buffer lengths for skcipher devices (Sudhakar Panneerselvam) [Orabug: 32210418]
- vhost scsi: fix lun reset completion handling (Mike Christie) [Orabug: 32167069]
- vhost scsi: Add support for LUN resets. (Mike Christie) [Orabug: 32167069]
- vhost scsi: add lun parser helper (Mike Christie) [Orabug: 32167069]
- vhost scsi: fix cmd completion race (Mike Christie) [Orabug: 32167069]
- vhost scsi: alloc cmds per vq instead of session (Mike Christie) [Orabug: 32167069]
- vhost: Create accessors for virtqueues private_data (Eugenio Perez) [Orabug: 32167069]
- vhost: add helper to check if a vq has been setup (Mike Christie) [Orabug: 32167069]
- scsi: sd: Allow user to configure command retries (Mike Christie) [Orabug: 32167069]
- scsi: core: Add limitless cmd retry support (Mike Christie) [Orabug: 32167069]
- scsi: mpt3sas: Update driver version to 36.100.00.00 (Suganath Prabu S) [Orabug: 32242279]
- scsi: mpt3sas: Handle trigger page after firmware update (Suganath Prabu S) [Orabug: 32242279]
- scsi: mpt3sas: Add persistent MPI trigger page (Suganath Prabu S) [Orabug: 32242279]
- scsi: mpt3sas: Add persistent SCSI sense trigger page (Suganath Prabu S) [Orabug: 32242279]
- scsi: mpt3sas: Add persistent Event trigger page (Suganath Prabu S) [Orabug: 32242279]
- scsi: mpt3sas: Add persistent Master trigger page (Suganath Prabu S) [Orabug: 32242279]
- scsi: mpt3sas: Add persistent trigger pages support (Suganath Prabu S) [Orabug: 32242279]
- scsi: mpt3sas: Sync time periodically between driver and firmware (Suganath Prabu S) [Orabug: 32242279]
- scsi: mpt3sas: Bump driver version to 35.101.00.00 (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Add module parameter multipath_on_hba (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Handle vSES vphy object during HBA reset (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Add bypass_dirty_port_flag parameter (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Handling HBA vSES device (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Set valid PhysicalPort in SMPPassThrough (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Update hba_port objects after host reset (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Get sas_device objects using device's rphy (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Rename transport_del_phy_from_an_existing_port() (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Get device objects using sas_address & portID (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Update hba_port's sas_address & phy_mask (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Rearrange _scsih_mark_responding_sas_device() (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Allocate memory for hba_port objects (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Define hba_port structure (Sreekanth Reddy) [Orabug: 32242279]
- scsi: mpt3sas: Fix ioctl timeout (Suganath Prabu S) [Orabug: 32242279]
- icmp: randomize the global rate limiter (Eric Dumazet) [Orabug: 32227958] {CVE-2020-25705}
- perf/x86/intel/uncore: Add box_offsets for free-running counters (Kan Liang) [Orabug: 32020885]
- perf/x86/intel/uncore: Factor out __snr_uncore_mmio_init_box (Kan Liang) [Orabug: 32020885]
- perf/x86/intel/uncore: Add Ice Lake server uncore support (Kan Liang) [Orabug: 32020885]


Related CVEs


CVE-2020-14351
CVE-2020-25705
CVE-2020-14381
CVE-2020-29568
CVE-2020-29569
CVE-2020-28374

Updated Packages


Release/Architecture  Filename  MD5sum  Superseded By Advisory

Oracle Linux 7 (aarch64)
  kernel-uek-5.4.17-2036.102.0.2.el7uek.src.rpm  df05b59811bd461966bf9d109f7efca5  -
  kernel-uek-5.4.17-2036.102.0.2.el7uek.aarch64.rpm  72a401afb59c4367d12f0c3f33071a7d  -
  kernel-uek-debug-5.4.17-2036.102.0.2.el7uek.aarch64.rpm  c2d8aa4bff533597894591d3185297f3  -
  kernel-uek-debug-devel-5.4.17-2036.102.0.2.el7uek.aarch64.rpm  54c27c36ead23729683f5d88308e9b0f  -
  kernel-uek-devel-5.4.17-2036.102.0.2.el7uek.aarch64.rpm  d6c5f2c24220f7b42b2fc242d6120a26  -
  kernel-uek-doc-5.4.17-2036.102.0.2.el7uek.noarch.rpm  56fb97c6e4b8887f36538119b88fbc0e  -
  kernel-uek-tools-5.4.17-2036.102.0.2.el7uek.aarch64.rpm  caa2a941b45ecce8e8cefb66a3491f7d  -
  kernel-uek-tools-libs-5.4.17-2036.102.0.2.el7uek.aarch64.rpm  e6509b1e1459be7dbf56ea55aab488fd  -
  perf-5.4.17-2036.102.0.2.el7uek.aarch64.rpm  18ef801efab6db63bb50dd0375e6683e  -
  python-perf-5.4.17-2036.102.0.2.el7uek.aarch64.rpm  6f9aaff8b8415ecaf2d90690199c48e0  -

Oracle Linux 7 (x86_64)
  kernel-uek-5.4.17-2036.102.0.2.el7uek.src.rpm  df05b59811bd461966bf9d109f7efca5  -
  kernel-uek-5.4.17-2036.102.0.2.el7uek.x86_64.rpm  aa6f3fee6abe17d770ded7fae82fb1b8  -
  kernel-uek-debug-5.4.17-2036.102.0.2.el7uek.x86_64.rpm  defe49d3029421c7fbb9283f4ca33fa2  -
  kernel-uek-debug-devel-5.4.17-2036.102.0.2.el7uek.x86_64.rpm  fb8edf71199f518a3bfae790ce3cf494  -
  kernel-uek-devel-5.4.17-2036.102.0.2.el7uek.x86_64.rpm  62cfa1cb3c794664cbcc7377e6f4a438  -
  kernel-uek-doc-5.4.17-2036.102.0.2.el7uek.noarch.rpm  56fb97c6e4b8887f36538119b88fbc0e  -
  kernel-uek-tools-5.4.17-2036.102.0.2.el7uek.x86_64.rpm  80c0bc7d6fd46ff6e98658299913e68a  -

Oracle Linux 8 (aarch64)
  kernel-uek-5.4.17-2036.102.0.2.el8uek.src.rpm  96f25e1f998388fbe01cf4c8eb66893f  -
  kernel-uek-5.4.17-2036.102.0.2.el8uek.aarch64.rpm  1025e8ca2eb08d348fbb1130fb9ca1b9  -
  kernel-uek-debug-5.4.17-2036.102.0.2.el8uek.aarch64.rpm  a615775b6cfde7f1d726216bcb1bb576  -
  kernel-uek-debug-devel-5.4.17-2036.102.0.2.el8uek.aarch64.rpm  3a6607a70860c0bd107cfb421ab4d004  -
  kernel-uek-devel-5.4.17-2036.102.0.2.el8uek.aarch64.rpm  d47a20ce72398af688326642a7600beb  -
  kernel-uek-doc-5.4.17-2036.102.0.2.el8uek.noarch.rpm  6fdbedf3987d82bb9dd30e8cfc1404cc  -

Oracle Linux 8 (x86_64)
  kernel-uek-5.4.17-2036.102.0.2.el8uek.src.rpm  96f25e1f998388fbe01cf4c8eb66893f  -
  kernel-uek-5.4.17-2036.102.0.2.el8uek.x86_64.rpm  b0c97cade64198d9d398d3d1f55561af  -
  kernel-uek-debug-5.4.17-2036.102.0.2.el8uek.x86_64.rpm  ca603f9b29d48601a72e81ea7666ff87  -
  kernel-uek-debug-devel-5.4.17-2036.102.0.2.el8uek.x86_64.rpm  826e82078346f24f61e960c561b44b87  -
  kernel-uek-devel-5.4.17-2036.102.0.2.el8uek.x86_64.rpm  be07932a7584921735b1a90afa403150  -
  kernel-uek-doc-5.4.17-2036.102.0.2.el8uek.noarch.rpm  6fdbedf3987d82bb9dd30e8cfc1404cc  -

This page is generated automatically and has not been checked for errors or omissions. For clarification or corrections, please contact the Oracle Linux ULN team.

Install Oracle 10g Release 2 RAC using SCSI
Introduction

We will be installing Oracle 10g Release 2 RAC on two desktop computers using external SCSI as shared storage and will be using raw devices for the OCR and Voting Disk and ASM for the database files.
Linux Operating System specific requirements

Install Linux OS system with default options (or you can choose everything) on both the nodes.

Once RHEL5 has been installed, make sure you have the following RPMs; if not, install them on both nodes.
(Metalink Note 169706.1 has a list of RPMs required for Oracle software installation.)

For our 10g RAC setup, we must have the following RPMs. All of these RPMs are available on the RHEL5 media. (If you select the defaults while installing RHEL5, everything except the following is installed.)

# rpm -Uvh compat-gcc-34-3.4.6-4.i386.rpm
# rpm -Uvh compat-db-4.2.52-5.1.i386.rpm
# rpm -Uvh compat-gcc-34-c++-3.4.6-4.i386.rpm
# rpm -Uvh libXp-1.0.0-8.1.el5.i386.rpm
# rpm -Uvh libstdc++43-devel-4.3.2-7.el5.i386.rpm
# rpm -Uvh libaio-devel-0.3.106-3.2.i386.rpm
# rpm -Uvh sysstat-7.0.2-3.el5.i386.rpm
# rpm -Uvh unixODBC-2.2.11-7.1.i386.rpm
# rpm -Uvh unixODBC-devel-2.2.11-7.1.i386.rpm
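As a quick sanity check, a short loop like the following sketch (package base names taken from the list above) reports any required package that is still missing:

```shell
# Sketch: report any required RPM that is not yet installed (run on both nodes).
required="compat-gcc-34 compat-db compat-gcc-34-c++ libXp libstdc++43-devel \
libaio-devel sysstat unixODBC unixODBC-devel"
missing=""
for pkg in $required; do
  # rpm -q exits non-zero when a package is not installed
  rpm -q "$pkg" >/dev/null 2>&1 || missing="$missing $pkg"
done
if [ -n "$missing" ]; then
  echo "Missing packages:$missing"
else
  echo "All required packages are installed"
fi
```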
Download and Install Oracle ASMLib

Download the required RPMs from the Oracle ASMLib download page.

Search for “Intel IA32 (x86) Architecture” and download and install the following RPMs

• oracleasm-support-2.1.3-1.el5
• oracleasmlib-2.0.4-1.el5
• oracleasm-2.6.18-128.el5-2.0.5-1.el5

Login as ROOT
# rpm -Uvh oracleasm-support-2.1.3-1.el5.i386.rpm
# rpm -Uvh oracleasm-2.6.18-128.el5-2.0.5-1.el5.i686.rpm
# rpm -Uvh oracleasmlib-2.0.4-1.el5.i386.rpm

CVU

Here are the steps to install the cvuqdisk package (required to run the ‘cluvfy’ utility).

Download the latest version from the following link
http://www.oracle.com/technology/products/database/clustering/cvu/cvu_download_homepage.html

As the ORACLE user, unzip the downloaded file (cvupack_Linux_x86.zip) to the location from which you would like to run the ‘cluvfy’ utility (for example /u01/app/oracle).

Copy the rpm (cvuqdisk-1.0.1-1.rpm or the latest version) to a local directory. You can find the rpm in the rpm subdirectory of the directory in which you installed the CVU.

Set the CVUQDISK_GRP environment variable to the group that should own this binary; typically this is the “dba” group.

As ROOT user
# export CVUQDISK_GRP=dba

Erase any existing package
# rpm -e cvuqdisk

Install the rpm
# rpm -iv cvuqdisk-1.0.1-1.rpm

Verify the package
# rpm -qa | grep cvuqdisk

Disable SELinux (Metalink Note: 457458.1)
1. Edit /etc/selinux/config
• Change the SELINUX value to “SELINUX=disabled”.
• Reboot the server.
To check the status of SELinux, issue:
# /usr/sbin/sestatus

SCP and SSH Location

The installer requires scp and ssh to be located in /usr/local/bin on the Red Hat Linux platform.
If scp and ssh are not in this location, create symbolic links in /usr/local/bin to
the location where scp and ssh are found.

# ln -s /usr/bin/scp /usr/local/bin/scp
# ln -s /usr/bin/ssh /usr/local/bin/ssh

Configure /etc/hosts

Make sure both the nodes have the same /etc/hosts file.
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.0.101 rklx1.world.com rklx1 # Public Network on eth0
192.168.0.102 rklx1-vip.world.com rklx1-vip # VIP on eth0
10.10.10.1 rklx1-priv.world.com rklx1-priv # Private InterConnect on eth1
192.168.0.105 rklx2.world.com rklx2 # Public Network on eth0
192.168.0.106 rklx2-vip.world.com rklx2-vip # VIP on eth0
10.10.10.2 rklx2-priv.world.com rklx2-priv # Private InterConnect on eth1
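After editing /etc/hosts on both nodes, a quick loop (a sketch using the hostnames above) confirms that the public and private names answer. The VIP names are deliberately left out, since they must not be up before Clusterware is installed:

```shell
# Check the public and private names from /etc/hosts (run on both nodes).
# VIP names are excluded on purpose: they must NOT respond yet.
for host in rklx1 rklx1-priv rklx2 rklx2-priv; do
  if ping -c 1 -w 2 "$host" >/dev/null 2>&1; then
    echo "$host reachable"
  else
    echo "$host NOT reachable"
  fi
done
```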

Verify Memory and CPU Information
# grep MemTotal /proc/meminfo
# grep "model name" /proc/cpuinfo

Confirm if you are running 32 bit or 64 bit OS
# getconf LONG_BIT

Find the interface names
# /sbin/ifconfig -a

# ping ${NODE1}.${DOMAIN}
# ping ${NODE1}
# ping ${NODE1}-priv.${DOMAIN}
# ping ${NODE1}-priv
Configure the Network

Make sure files ‘ifcfg-eth0’ and ‘ifcfg-eth1’ on both nodes have the IPADDR, NETMASK, TYPE and BOOTPROTO set correctly (by default they are NOT)
NODE 1
[root@rklx1 network-scripts]# pwd
/etc/sysconfig/network-scripts

[root@rklx1 network-scripts]# cat ifcfg-eth0
# Intel Corporation 82801G (ICH7 Family) LAN Controller
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.0.101
NETMASK=255.255.255.0
HWADDR=55:19:12:DD:E6:12
ONBOOT=yes
TYPE=Ethernet

[root@rklx1 network-scripts]# cat ifcfg-eth1
# ADMtek NC100 Network Everywhere Fast Ethernet 10/100
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.10.10.1
HWADDR=55:24:KK:14:15:KO
ONBOOT=yes
TYPE=Ethernet
[root@rklx1 network-scripts]#
NODE 2

[root@rklx2 network-scripts]# pwd
/etc/sysconfig/network-scripts

[root@rklx2 network-scripts]# cat ifcfg-eth0
# Intel Corporation 82562ET/EZ/GT/GZ – PRO/100 VE (LOM) Ethernet Controller
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.0.105
NETMASK=255.255.255.0
HWADDR=00:12:70:0L:24:2H
ONBOOT=yes
TYPE=Ethernet

[root@rklx2 network-scripts]# cat ifcfg-eth1
# ADMtek NC100 Network Everywhere Fast Ethernet 10/100
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.10.10.2
HWADDR=00:56:DJ:18:17:FF
ONBOOT=yes
TYPE=Ethernet

Create the Groups and User Account

Create the user accounts and assign groups on all nodes. We will create a group ‘dba’ and user ‘oracle’.
(I prefer to use the OS GUI to create the accounts; alternatively, use the commands below.)
# /usr/sbin/groupadd -g 510 dba
# /usr/sbin/useradd -c "Oracle User" -u 520 -m -d /home/oracle -g dba oracle
# id oracle
uid=520(oracle) gid=510(dba) groups=510(dba)

The User ID and Group IDs must be the same on all cluster nodes.
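A quick way to confirm this (a sketch; it assumes ssh to both nodes already works, or run `id oracle` locally on each node instead) is to compare the id output across nodes:

```shell
# Compare the oracle account's numeric IDs across nodes; the uid/gid values
# printed for each node must be identical (520/510 in this setup).
for node in rklx1 rklx2; do
  ssh "$node" id oracle 2>/dev/null || echo "could not reach $node"
done
```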

# passwd oracle
Changing password for user oracle.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Create Mount Points

Now create the mount points where the Oracle software will be installed on both nodes.

Login as ROOT
# mkdir -p /u01/app/oracle
# chown -R oracle:dba /u01/app/oracle
# chmod -R 775 /u01/app/oracle

Configure Kernel Parameters

Login as ROOT and configure the Linux kernel parameters on both nodes.

cat >> /etc/sysctl.conf << EOF
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 658576
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 1048536
EOF

/sbin/sysctl -p

Setting Shell Limits for the oracle User

Oracle recommends setting limits on the number of processes and the number of open files each Linux account may use.

Login as ROOT

cat >> /etc/security/limits.conf << EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

cat >> /etc/pam.d/login << EOF
session required /lib/security/pam_limits.so
EOF

cat >> /etc/profile << EOF
if [ \$USER = "oracle" ]; then
  if [ \$SHELL = "/bin/ksh" ]; then
    ulimit -u 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi
EOF

cat >> /etc/csh.login << EOF
if ( \$USER == "oracle" ) then
  limit maxproc 16384
  limit descriptors 65536
  umask 022
endif
EOF

Configure the Hangcheck Timer

Login as ROOT

modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

cat >> /etc/rc.d/rc.local << EOF
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
EOF

Configure SSH for User Equivalence

During the installation of Oracle RAC 10gR2, OUI needs to copy files to and execute programs on the other nodes in the cluster. In order for OUI to do that, you must configure SSH to allow user equivalence. Establishing user equivalence with SSH provides a secure means of copying files and executing programs on other nodes in the cluster without requiring password prompts.

The first step is to generate public and private keys for SSH. There are two versions of the SSH protocol; version 1 uses RSA and version 2 uses DSA, so we will create both types of keys to ensure that SSH can use either version. The ssh-keygen program will generate public and private keys of either type depending upon the parameters passed to it.
When you run ssh-keygen, you will be prompted for a location to save the keys. Accept the default by pressing Enter. You will then be asked for a passphrase; enter a password and then enter it again to confirm. Once done, you will have four files in the ~/.ssh directory: id_rsa, id_rsa.pub, id_dsa, and id_dsa.pub. The id_rsa and id_dsa files are your private keys and must not be shared with anyone. The id_rsa.pub and id_dsa.pub files are your public keys and must be copied to each of the other nodes in the cluster.

Login as ORACLE for all the ssh setup actions below.

Do the following on both nodes:

$ mkdir ~/.ssh
$ chmod 755 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
7c:eg:72:71:82:da:61:ad:c2:f2:0d:h6:kf:20:fc:27 oracle@rklx1.world.com

$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
7c:eg:72:71:82:da:61:ad:c2:f2:0d:h6:kf:20:fc:27 oracle@rklx1.world.com

Now the contents of the public key files id_rsa.pub and id_dsa.pub on each node must be copied to the ~/.ssh/authorized_keys file on every other node. Use ssh to copy the contents of each file to the ~/.ssh/authorized_keys file. Note that the first time you access a remote node with ssh its RSA key will be unknown, and you will be prompted to confirm that you wish to connect to the node; type ‘yes’.

SSH will record the RSA key for the remote node and will not prompt for this on subsequent connections to that node.

From NODE 1 (copy the local account’s keys so that ssh to the local node will work):

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ ssh oracle@rklx2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host ‘rklx2 (192.168.0.105)’ can’t be established.
RSA key fingerprint is d1:23:a7:ef:c1:gc:2e:19:h2:13:30:59:65:h8:1b:71.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘rklx2,192.168.0.105’ (RSA) to the list of known hosts.
oracle@rklx2’s password:
$ ssh oracle@rklx2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@rklx2’s password:
$ chmod 644 ~/.ssh/authorized_keys

From the NODE 2. Notice that this time SSH will prompt for the passphrase you used when creating the keys rather than the oracle password. This is because the first node (rklx1) now knows the public keys for the second node and SSH is now using a different authentication protocol. Note, if you didn’t enter a passphrase when creating the keys with ssh-keygen, you will not be prompted for one here.

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ ssh oracle@rklx1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host ‘rklx1 (192.168.0.101)’ can’t be established.
RSA key fingerprint is bd:0e:39:2a:23:2d:ca:f9:ea:71:f5:3d:d3:dd:3b:65.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘rklx1,192.168.0.101’ (RSA) to the list of known hosts.
Enter passphrase for key ‘/home/oracle/.ssh/id_rsa’:
$ ssh oracle@rklx1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Enter passphrase for key ‘/home/oracle/.ssh/id_rsa’:
$ chmod 644 ~/.ssh/authorized_keys

Establish User Equivalence

Finally, after all of the generating of keys, copying of files, and repeatedly entering passwords and passphrases, you’re ready to establish user equivalence. Once user equivalence is established, you will not be prompted for a password again.

As oracle, on the node where the Oracle 10gR2 software will be installed (host rklx1). Note that user equivalence is established for the current session only.

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa:
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)

Test Connectivity

Once you are done with the above steps, you should be able to ssh to any node in the cluster (NODE1 and NODE2 in our case) without being asked for a password.

You MUST test connectivity in each direction from all servers (to itself and to the other servers in the cluster); otherwise your installation of CRS will fail.

$ ssh rklx1 date
$ ssh rklx1.world.com date
$ ssh rklx1-priv date
$ ssh rklx1-priv.world.com date

$ ssh rklx2 date
$ ssh rklx2.world.com date
$ ssh rklx2-priv date
$ ssh rklx2-priv.world.com date

Answer ‘yes’ when you see this message; it will occur only the first time an operation on a remote node is performed. By testing the connectivity, you not only ensure that remote operations work properly, you also complete the initial security key exchange.
The authenticity of host ‘rklx2 (192.168.0.101)’ can’t be established.
RSA key fingerprint is df:d3:52:16:df:2f:51:d5:df:gg:6a:aa:4b:43:a6:a5.
Are you sure you want to continue connecting (yes/no)? yes
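The same checks can be scripted. The loop below (a sketch over the hostnames used in this guide) uses ssh’s BatchMode option so that any host which would still prompt for a password or passphrase is flagged instead of hanging:

```shell
# Run from EACH node. BatchMode makes ssh fail instead of prompting,
# so any host that is not yet passwordless is reported.
for h in rklx1 rklx1.world.com rklx1-priv rklx1-priv.world.com \
         rklx2 rklx2.world.com rklx2-priv rklx2-priv.world.com; do
  ssh -o BatchMode=yes "$h" date 2>/dev/null || echo "$h: ssh still prompts or fails"
done
```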

Make sure the VIP addresses are NOT plumbed (i.e., not yet configured on any interface); they will be brought up later by Oracle Clusterware.

Login as ROOT
# ping rklx1-vip
# ping rklx2-vip

Example Output:

PING rklx1-vip.world.com (192.168.0.102) 56(84) bytes of data.
From rklx1.world.com (192.168.0.101) icmp_seq=1 Destination Host Unreachable

How to create partitions on SCSI Disk

Purpose: Creating RAW partitions for OCR and Voting Disk and
one big raw partition for ASM

Available Hardware: External SCSI Disk – size 147 GB

Create two 128MB RAW partitions for OCR (from Primary Partition)
and three 128MB RAW partitions for Voting Disk (from Extended Partition)
and leave the rest for ASM (from Extended Partition)

You need to run the following from NODE 1 only

[root@rklx1]# fdisk -l

Disk /dev/sdb: 147.0 GB, 147086327808 bytes
255 heads, 63 sectors/track, 17882 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 1217 9775521 83 Linux
/dev/sdb2 1218 7297 48837600 83 Linux
/dev/sdb3 7298 17882 85024012+ 83 Linux
[root@rklx1]# fdisk /dev/sdb

The number of cylinders for this disk is set to 17882.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

If there are any existing partitions, delete them all:

Command (m for help): d
Partition number (1-4): 1

Command (m for help): d
Partition number (1-4): 2

Command (m for help): d
Selected partition 3

Command (m for help): p

Disk /dev/sdb: 147.0 GB, 147086327808 bytes
255 heads, 63 sectors/track, 17882 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

We will create two small primary partitions for OCR and one extended partition; then, from the extended partition, we will create three small partitions (128 MB) for the voting disks and leave the rest for ASM.

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-17882, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-17882, default 17882): +128M

Command (m for help): p

Disk /dev/sdb: 147.0 GB, 147086327808 bytes
255 heads, 63 sectors/track, 17882 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 17 136521 83 Linux

Command (m for help): p

Disk /dev/sdb: 147.0 GB, 147086327808 bytes
255 heads, 63 sectors/track, 17882 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 17 136521 83 Linux

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (18-17882, default 18):
Using default value 18
Last cylinder or +size or +sizeM or +sizeK (18-17882, default 17882): +128M

Command (m for help): p

Disk /dev/sdb: 147.0 GB, 147086327808 bytes
255 heads, 63 sectors/track, 17882 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 17 136521 83 Linux
/dev/sdb2 18 34 136552+ 83 Linux

Command (m for help): n
Command action
e extended
p primary partition (1-4)
e
Partition number (1-4): 3
First cylinder (35-17882, default 35):
Using default value 35
Last cylinder or +size or +sizeM or +sizeK (35-17882, default 17882):
Using default value 17882

Command (m for help): p

Disk /dev/sdb: 147.0 GB, 147086327808 bytes
255 heads, 63 sectors/track, 17882 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 17 136521 83 Linux
/dev/sdb2 18 34 136552+ 83 Linux
/dev/sdb3 35 17882 143364060 5 Extended

Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (35-17882, default 35):
Using default value 35
Last cylinder or +size or +sizeM or +sizeK (35-17882, default 17882): +128M

Command (m for help): p

Disk /dev/sdb: 147.0 GB, 147086327808 bytes
255 heads, 63 sectors/track, 17882 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 17 136521 83 Linux
/dev/sdb2 18 34 136552+ 83 Linux
/dev/sdb3 35 17882 143364060 5 Extended
/dev/sdb5 35 51 136521 83 Linux

Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (52-17882, default 52):
Using default value 52
Last cylinder or +size or +sizeM or +sizeK (52-17882, default 17882): +128M

Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (69-17882, default 69):
Using default value 69
Last cylinder or +size or +sizeM or +sizeK (69-17882, default 17882): +128M

Command (m for help): p

Disk /dev/sdb: 147.0 GB, 147086327808 bytes
255 heads, 63 sectors/track, 17882 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 17 136521 83 Linux
/dev/sdb2 18 34 136552+ 83 Linux
/dev/sdb3 35 17882 143364060 5 Extended
/dev/sdb5 35 51 136521 83 Linux
/dev/sdb6 52 68 136521 83 Linux
/dev/sdb7 69 85 136521 83 Linux

Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (86-17882, default 86):
Using default value 86
Last cylinder or +size or +sizeM or +sizeK (86-17882, default 17882):
Using default value 17882

Command (m for help): p

Disk /dev/sdb: 147.0 GB, 147086327808 bytes
255 heads, 63 sectors/track, 17882 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 17 136521 83 Linux
/dev/sdb2 18 34 136552+ 83 Linux
/dev/sdb3 35 17882 143364060 5 Extended
/dev/sdb5 35 51 136521 83 Linux
/dev/sdb6 52 68 136521 83 Linux
/dev/sdb7 69 85 136521 83 Linux
/dev/sdb8 86 17882 142954371 83 Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
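The table was written from node 1 only; node 2’s kernel still holds the old partition table. A sketch of how to refresh it, assuming the shared disk appears as /dev/sdc on node 2 and that partprobe (from the parted package) is available; a reboot of node 2 achieves the same:

```shell
# Run on node 2 so its kernel re-reads the new partition table.
# /dev/sdc is an assumption for this setup; adjust to your device name.
partprobe /dev/sdc 2>/dev/null || echo "partprobe unavailable or device absent; reboot node 2 instead"
```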

RedHat 5 only: instead of creating raw device files directly, the udev system is used, via /etc/udev/rules.d/60-raw.rules.
{you need to add this on all nodes in the cluster}

NOTE: Make sure to replace “sdb” with your actual device. I have one local hard drive on node 1 (rklx1) and two local hard drives on node 2 (rklx2) plus one external SCSI disk, so in my case the SCSI disk shows up as /dev/sdb on node 1 and as /dev/sdc on node 2. Going by this example, you have to replace sdbX with sdcX on node 2.

vi /etc/udev/rules.d/60-raw.rules and add the following lines

ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdb5", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sdb6", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdb7", RUN+="/bin/raw /dev/raw/raw5 %N"

KERNEL=="raw[1]*", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="raw[2-5]*", OWNER="oracle", GROUP="dba", MODE="660"

The first part will create the raw devices, the second will set the correct ownership and permissions.

Run start_udev to create the devices from the definitions:

# /sbin/start_udev
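To confirm the bindings took effect, `raw -qa` lists every bound raw device, and ls shows the ownership set by the second pair of rules (run on both nodes):

```shell
# The five bindings above should be listed, owned by oracle:dba with mode 660.
raw -qa 2>/dev/null || echo "raw(8) not available on this host"
ls -l /dev/raw/ 2>/dev/null || echo "/dev/raw not present"
```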

[root@rklx1]# fdisk -l

Disk /dev/sdb: 147.0 GB, 147086327808 bytes
255 heads, 63 sectors/track, 17882 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 17 136521 83 Linux => /dev/raw/raw1: OCR
/dev/sdb2 18 34 136552+ 83 Linux => /dev/raw/raw2: OCR
/dev/sdb3 35 17882 143364060 5 Extended
/dev/sdb5 35 51 136521 83 Linux => /dev/raw/raw3: Voting Disk
/dev/sdb6 52 68 136521 83 Linux => /dev/raw/raw4: Voting Disk
/dev/sdb7 69 85 136521 83 Linux => /dev/raw/raw5: Voting Disk
/dev/sdb8 86 17882 142954371 83 Linux => ASM – Data files etc.

Configuring and Loading ASM

To load the ASM driver (oracleasm) and to mount the ASM driver filesystem, enter:

Login as ROOT

# /etc/init.d/oracleasm configure

Configuring the Oracle ASM library driver.


This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets (‘[]’). Hitting ENTER without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration [ OK ]
Creating /dev/oracleasm mount point [ OK ]
Loading module “oracleasm” [ OK ]
Mounting ASMlib driver filesystem [ OK ]
Scanning system for ASM disks [ OK ]
#

Creating ASM Disks

NOTE: Creating ASM disks is done on one RAC node only. The following commands should be executed only on NODE 1.

I executed the following commands to create my ASM disks (make sure to change the device names; in this example I used partition /dev/sdb8):

Login as ROOT

# /etc/init.d/oracleasm createdisk DISK1 /dev/sdb8
Marking disk “/dev/sdb8” as an ASM disk [ OK ]

# Replace "sdb8" with the name of your device. I used /dev/sdb8.

To list all ASM disks, enter:
# /etc/init.d/oracleasm listdisks
DISK1
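If you want to double-check which block device backs the label, oracleasm also provides a querydisk command:

```shell
# Optional: confirm the device behind the ASM disk label DISK1 (run as root).
/etc/init.d/oracleasm querydisk DISK1 2>/dev/null || echo "oracleasm not available on this host"
```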

On all other RAC nodes, you just need to notify the system about the new ASM disks:

Login as ROOT
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks [ OK ]
# /etc/init.d/oracleasm listdisks
DISK1

Restart the Network

Execute the following commands as root to reflect the changes.

# service network stop
# service network start

Run the cluvfy utility to verify all pre-requisites are in place

I will be using the “-osdba dba” and “-orainv dba” switches to suppress OS group ‘oinstall’ errors, as I have not created the ‘oinstall’ group. I prefer to keep just one group, ‘dba’, for the user ‘oracle’.

After you have configured the hardware and operating systems on the new nodes, you can use this command to verify node reachability. Run the CLUVFY command to verify your installation at the post hardware installation stage.
$ ./cluvfy stage -post hwos -n rklx1,rklx2

Run the following CLUVFY to check system requirements for installing Oracle Clusterware.
It verifies the following
• User Equivalence: User equivalence exists on all the specified nodes
• Node Reachability: All the specified nodes are reachable from the local node
• Node Connectivity: Connectivity exists between all the specified nodes through the public and private network interconnections
• Administrative Privileges: The oracle user has proper administrative privileges to install Oracle Clusterware on the specified nodes
• Shared Storage Accessibility: If specified, the Oracle Cluster Registry (OCR) device and voting disk are shared across all the specified nodes
• System Requirements: All system requirements are met for installing Oracle Clusterware software, including kernel version, kernel parameters, software packages, memory, swap directory space, temp directory space, and required users and groups
• Node Applications: VIP, ONS, and GSD node applications are created for all the nodes

./cluvfy stage -pre crsinst -osdba dba -orainv dba -r 10gR2 -n rklx1,rklx2

Run the following to perform a pre-installation check for an Oracle Database with Oracle RAC installation:
./cluvfy stage -pre dbinst -n rklx1,rklx2 -r 10gR2 -osdba dba -verbose

Run the CLUVFY command to obtain a detailed comparison of the properties of the reference node with all of the other nodes that are part of your current cluster environment:
./cluvfy comp peer -n rklx1,rklx2 -osdba dba -orainv dba

To verify the accessibility of the cluster nodes from the local node or from any other cluster node, use the component verification command nodereach as follows:
./cluvfy comp nodereach -n rklx1,rklx2

To verify that the other cluster nodes can be reached from the local node through all the available network interfaces or through specific network interfaces, use the component verification command nodecon as follows:
./cluvfy comp nodecon -n all

Install the Oracle Clusterware

Login as ORACLE

$ unzip 10201_clusterware_linux32.zip (for example to /u01/software)

$ export ORACLE_BASE=/u01/app/oracle

I will be installing the software in the following directories; you can follow the exact OFA structure if you prefer.
Oracle Clusterware: /u01/app/oracle/product/crs
Oracle ASM: /u01/app/oracle/product/asm
Oracle Database: /u01/app/oracle/product/10.2.0
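These homes can be captured in the oracle user's shell profile so they are set in every session. This is a sketch assuming bash; the variable names CRS_HOME and ASM_HOME are conveniences used in this guide, not names set by the Oracle installer:

```shell
# ~/.bash_profile additions for the oracle user (variable names assumed)
export ORACLE_BASE=/u01/app/oracle
export CRS_HOME=$ORACLE_BASE/product/crs
export ASM_HOME=$ORACLE_BASE/product/asm
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH
```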

Make sure you can see the 'xclock'. I log in as ROOT, run 'xhost +', then su to oracle, and the X display works.

rklx1:/u01/software/clusterware $ ./runInstaller -ignoreSysPrereqs

Click Next at the Welcome Screen

Accept the default for the Inventory Location and click Next

You can specify the location where you want to install CRS and click Next

Click Next at the ‘Product-Specific Prerequisite Checks’

The 'Specify Cluster Configuration' screen shows only RAC node 1. Click the "Add" button to add the second node.

Now input the Node 2 details and click OK

Make sure the Public, Private, and VIP information for both nodes is correct and click Next

By default, both interface types show as Private. Click the Edit button to change 'eth0' to Public

Specify the OCR location; in our case it is /dev/raw/raw1 and /dev/raw/raw2

Specify the voting disk location; in our case it is /dev/raw/raw3, /dev/raw/raw4, and /dev/raw/raw5
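The /dev/raw/rawN names above assume the raw device bindings already exist. On RHEL4-era systems this mapping lives in /etc/sysconfig/rawdevices (see Note 443996.1 in the list below for RHEL5/OEL5). A sketch, with example partition names that must be changed to match your shared SCSI storage:

```
# /etc/sysconfig/rawdevices (example partition names; adjust to your storage)
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
/dev/raw/raw3 /dev/sdb3
/dev/raw/raw4 /dev/sdb4
/dev/raw/raw5 /dev/sdb5
```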

Run the orainstRoot.sh script first on NODE 1, then on NODE 2, and STOP

BEFORE YOU RUN ROOT.SH, make the following changes to srvctl and vipca under $CRS_HOME/bin on both nodes.
# Edit the vipca and srvctl commands to add "unset LD_ASSUME_KERNEL"
# Documented as a workaround in Metalink Note 414163.1

Edit the vipca file under $CRS_HOME/bin:
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
    LD_ASSUME_KERNEL=2.4.19
    export LD_ASSUME_KERNEL
fi

unset LD_ASSUME_KERNEL ## << Line to be added

Edit the srvctl file under $CRS_HOME/bin

LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL

unset LD_ASSUME_KERNEL ## << Line to be added
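If you prefer not to edit the files by hand, the same one-line addition can be scripted. This is a sketch assuming GNU sed; keep backups and verify the result before running root.sh:

```shell
# Back up and patch vipca and srvctl under $CRS_HOME/bin (GNU sed assumed)
cd $CRS_HOME/bin
for f in vipca srvctl; do
  cp "$f" "$f.orig"
  # Append "unset LD_ASSUME_KERNEL" after the export line
  sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$f"
done
```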

Login to NODE 2 as ROOT and run the vipca
# cd /u01/app/oracle/product/crs/bin
# ./vipca

Click Next on the first screen

Make sure you highlight eth0 (if more than one interface shows up) and click Next

On the next screen, enter the IP alias name for both nodes; the IP address is filled in automatically, but verify that it is correct, then click Next.

Click ‘Finish’ at the Summary Screen

At this point, VIPCA is configured.

NOW run root.sh on NODE 1 and let it finish before you run root.sh on NODE 2

Now come back to NODE 1 and click 'OK' on the 'Execute Configuration scripts' screen

Clusterware installation is complete; now let's install Oracle ASM

rklx1:/u01/software/database $ ./runInstaller -ignoreSysPrereqs

On Select Installation Type screen select Enterprise Edition

On Specify Home Details, enter the location where you want to install ASM

On the cluster installation mode screen, check both nodes


You will see a warning above because I have not created the 'oinstall' group; this warning can be ignored.

Select ‘Configure Automatic Storage Management (ASM)’

Select External redundancy and check the disk(s) shown

Click Install

BEFORE YOU RUN ROOT.SH, make the following changes to srvctl under $ASM_HOME/bin on both nodes.
# Edit the srvctl command to add "unset LD_ASSUME_KERNEL"
# Documented as a workaround in Metalink Note 414163.1

Run root.sh on NODE 1 and let it finish before you run root.sh on NODE 2

Now Install the Oracle Database
rklx1:/u01/software/database $ ./runInstaller -ignoreSysPrereqs

BEFORE YOU RUN ROOT.SH, make the following changes to srvctl under $ORACLE_HOME/bin on both nodes.
# Edit the srvctl command to add "unset LD_ASSUME_KERNEL"
# Documented as a workaround in Metalink Note 414163.1

Run root.sh on NODE 1 and let it finish before you run root.sh on NODE 2
Run the cluvfy utility again to verify the installation.



Run the following to verify that your system is prepared to create the Oracle Database with Oracle RAC successfully.
./cluvfy stage -pre dbcfg -n rklx1,rklx2 -d /u01/app/oracle/product/10.2.0


Verifying the Existence of Node Applications
To verify the existence of the node applications, namely the virtual IP (VIP), Oracle Notification Service (ONS), and Global Services Daemon (GSD), on all the nodes, use the CVU comp nodeapp command, using the following syntax:

cluvfy comp nodeapp [ -n node_list] [-verbose]
Verifying the Integrity of Oracle Clusterware Components
To verify the existence of all the Oracle Clusterware components, use the component verification comp crs command, using the following syntax:

cluvfy comp crs [ -n node_list] [-verbose]
Verifying the Integrity of the Oracle Cluster Registry
To verify the integrity of the Oracle Cluster Registry, use the component verification comp ocr command, using the following syntax:

cluvfy comp ocr [ -n node_list] [-verbose]
Verifying the Integrity of Your Entire Cluster
To verify that all nodes in the cluster have the same view of the cluster configuration, use the component verification comp clu command, as follows:

cluvfy comp clu

How to delete everything from RAW Devices

Once the raw devices are created, use the dd command to zero out the devices and ensure no stale data remains on them:

# dd if=/dev/zero of=/dev/raw/raw1
# dd if=/dev/zero of=/dev/raw/raw2
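These commands write zeros over the entire device and can take a long time with dd's default 512-byte block size. A sketch with a larger block size to speed things up (the bs value is a suggestion, not an Oracle requirement; device names are from this guide):

```shell
# Zero the raw devices using a 1 MiB block size for speed
dd if=/dev/zero of=/dev/raw/raw1 bs=1M
dd if=/dev/zero of=/dev/raw/raw2 bs=1M
```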

Various Metalink Notes

Download Oracle Scsi & Raid Devices Driver Updater

How to map raw device on RHEL5 and OEL5 – Note 443996.1

Linux 2.6 Kernel Deprecation Of Raw Devices – Note 357492.1

How to Disable SELinux – Note 457458.1

Installation and Configuration Requirements – Note 169706.1

Configuring raw devices (singlepath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5 – Note 465001.1

How to Setup UDEV Rules for RAC OCR & Voting devices on SLES10, RHEL5, OEL5
– Note 414897.1

How to Configure Linux OS Ethernet TCP/IP Networking – Note 132044.1
How to Clean Up After a Failed Oracle Clusterware (CRS) Installation – Note 239998.1
Requirements For Installing Oracle 10gR2 On RHEL/OEL 5 (x86) – Note 419646.1