Create the authentication keys for the oracle user. To create the keys, change to the oracle user's default login directory and run the following:

[oracle@oradb5 oracle]$ ssh-keygen -t dsa -b 1024
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
b6:07:42:ae:47:56:0a:a3:a5:bf:75:3e:21:85:8d:30 oracle@oradb5.sumsky.net
[oracle@oradb5 oracle]$
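The output above stops at key generation. For the user equivalence checks below to pass, the public keys still have to be distributed; a minimal sketch of the usual follow-up, assuming the standard authorized_keys mechanism (these commands are not from the original output):

# Append each node's id_dsa.pub to authorized_keys on every node (repeat the
# exchange across all cluster nodes) so ssh works without a password:
[oracle@oradb5 oracle]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@oradb5 oracle]$ chmod 600 ~/.ssh/authorized_keys
[oracle@oradb5 oracle]$ ssh oradb1 date    # should return the date with no password prompt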
Verify the setup once the hardware and operating system configuration is complete:

cluvfy stage -post hwos -n oradb1,oradb5

Performing post-checks for hardware and operating system setup
Checking node reachability...
Node reachability check passed from node "oradb1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking node connectivity...
Node connectivity check passed for subnet "192.168.2.0" with node(s) oradb5,oradb1.
Node connectivity check passed for subnet "10.168.2.0" with node(s) oradb5,oradb1.
Suitable interfaces for the private interconnect on subnet "192.168.2.0":
oradb5 eth0:192.168.2.50 eth0:192.168.2.55
oradb1 eth0:192.168.2.10 eth0:192.168.2.15
Suitable interfaces for the private interconnect on subnet "10.168.2.0":
oradb5 eth1:10.168.2.150
oradb1 eth1:10.168.2.110
Checking shared storage accessibility...
Shared storage check failed on nodes "oradb5".
Post-check for hardware and operating system setup was unsuccessful on all the nodes.

As the shared storage lines above show, the verification failed at the storage check: node oradb5 cannot see the storage devices. In this particular example, the disks did not have sufficient permissions. If you ignore the error and continue, the Oracle Clusterware installation will fail. If instead you fix the error before rerunning, the verification step succeeds, as shown below.

Checking shared storage accessibility...
Shared storage check passed on nodes "oradb5,oradb1".
Post-check for hardware and operating system setup was successful on all the nodes.
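The article does not show the exact fix applied; a hypothetical remedy for the permission problem, assuming the shared LUN partitions are /dev/sdc1 and /dev/sdd1 (the device names are illustrative only, not from the original):

# Run as root on oradb5 so the device ownership matches the existing nodes:
[root@oradb5 root]# chown oracle:dba /dev/sdc1 /dev/sdd1
[root@oradb5 root]# chmod 660 /dev/sdc1 /dev/sdd1
# Then rerun the post-check:
[oracle@oradb1 oracle]$ cluvfy stage -post hwos -n oradb1,oradb5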
Before installing Oracle Clusterware, run the corresponding pre-installation check against all the nodes in the node list.

[oracle@oradb1 cluvfy]$ cluvfy stage -pre crsinst -n oradb1,oradb5

Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "oradb1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as Primary] failed.
Check failed on nodes: oradb5,oradb1
Administrative privileges check passed.
Checking node connectivity...
Node connectivity check passed for subnet "192.168.2.0" with node(s) oradb5,oradb1.
Node connectivity check passed for subnet "10.168.2.0" with node(s) oradb5,oradb1.
Suitable interfaces for the private interconnect on subnet "192.168.2.0":
oradb5 eth0:192.168.2.50 eth0:192.168.2.55
oradb1 eth0:192.168.2.10 eth0:192.168.2.15
Suitable interfaces for the private interconnect on subnet "10.168.2.0":
oradb5 eth1:10.168.2.150
oradb1 eth1:10.168.2.110
Checking system requirements for 'crs'...
Total memory check failed.
Check failed on nodes: oradb5,oradb1
Free disk space check passed.
Swap space check passed.
System architecture check passed.
Kernel version check passed.
Package existence check passed for "make-3.79".
Package existence check passed for "binutils-2.14".
Package existence check passed for "gcc-3.2".
Package existence check passed for "glibc-2.3.2-95.27".
Package existence check passed for "compat-db-4.0.14-5".
Package existence check passed for "compat-gcc-7.3-2.96.128".
Package existence check passed for "compat-gcc-c++-7.3-2.96.128".
Package existence check passed for "compat-libstdc++-7.3-2.96.128".
Package existence check passed for "compat-libstdc++-devel-7.3-2.96.128".
Package existence check passed for "openmotif-2.2.3".
Package existence check passed for "setarch-1.3-1".
Group existence check passed for "dba".
Group existence check passed for "oinstall".
User existence check passed for "nobody".
System requirement failed for 'crs'
Pre-check for cluster services setup was successful on all the nodes.
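The output above reports two failures: the primary-group membership check and the total memory check. A hypothetical fix for the membership failure, assuming the common convention that oinstall should be the oracle user's primary group (the usermod invocation is standard Linux, not from the article):

# Run as root on every node that cluvfy reports, then rerun the pre-check:
[root@oradb5 root]# usermod -g oinstall -G dba oracle
[root@oradb5 root]# su - oracle -c id    # confirm gid is now oinstall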
After all the required Clusterware components have been copied from oradb1 to oradb5, OUI prompts you to run three scripts:

/usr/app/oracle/oraInventory/orainstRoot.sh on node oradb5

[root@oradb5 oraInventory]# ./orainstRoot.sh
Changing permissions of /usr/app/oracle/oraInventory to 770.
Changing groupname of /usr/app/oracle/oraInventory to dba.
The execution of the script is complete
[root@oradb5 oraInventory]#

/usr/app/oracle/product/10.2.0/crs/install/rootaddnode.sh on node oradb1. (The rootaddnode.sh script uses the srvctl utility to add the new node's information to the OCR. Note the srvctl command with the nodeapps argument at the end of the script output below.)

[root@oradb1 install]# ./rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 5: oradb5 oradb5-priv oradb5
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/usr/app/oracle/product/10.2.0/crs/bin/srvctl add nodeapps -n oradb5 -A oradb5-vip/255.255.255.0/bond0 -o /usr/app/oracle/product/10.2.0/crs
[root@oradb1 install]#

/usr/app/oracle/product/10.2.0/crs/root.sh on node oradb5.

[root@oradb5 crs]# ./root.sh
WARNING: directory '/usr/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/usr/app/oracle/product' is not owned by root
WARNING: directory '/usr/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
OCR backup directory '/usr/app/oracle/product/10.2.0/crs/cdata/SskyClst' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/usr/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/usr/app/oracle/product' is not owned by root
WARNING: directory '/usr/app/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname oradb1 for node 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb1 oradb1-priv oradb1
node 2: oradb2 oradb2-priv oradb2
node 3: oradb3 oradb3-priv oradb3
node 4: oradb4 oradb4-priv oradb4
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
 oradb1
 oradb2
 oradb3
 oradb4
 oradb5
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
IP address "oradb-vip" has already been used. Enter an unused IP address.

The error "oradb-vip has already been used" arises because the VIP has already been configured on all the nodes other than oradb5. The important step is to run VIPCA (the Virtual IP Configuration Assistant) manually before continuing.

Configure the VIP manually using VIPCA. As with running OUI, running VIPCA requires that the terminal it is launched from be X Window-capable. If it is not, install an appropriate X Window emulator and point the session at it through the DISPLAY variable, using the following syntax:

export DISPLAY=<client IP address>:0.0

For example:

[oracle@oradb1 oracle]$ export DISPLAY=192.168.2.101:0.0

Immediately after root.sh completes, invoke VIPCA as root from the command prompt on node oradb1 (or whichever node you are running the add-node procedure from). (VIPCA also configures the GSD and ONS resources on the new node.)
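A minimal sketch of the manual VIPCA invocation (the binary lives under the CRS home used throughout this article; the VIP details themselves are entered interactively in the GUI):

# Run as root from the node driving the add-node procedure, with DISPLAY set:
[root@oradb1 root]# export DISPLAY=192.168.2.101:0.0
[root@oradb1 root]# /usr/app/oracle/product/10.2.0/crs/bin/vipca
# In the GUI, supply the new node's unused VIP address when prompted.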
After the Oracle software has been copied to node oradb5, OUI prompts you to run the /usr/app/oracle/product/10.2.0/db_1/root.sh script, as the root user in another window, on the new node (or nodes) of the cluster.

[root@oradb5 db_1]# ./root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
 ORACLE_OWNER= oracle
 ORACLE_HOME= /usr/app/oracle/product/10.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
 Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
 Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
 Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
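A quick, hypothetical sanity check once root.sh finishes (the file names come from the output above):

# Confirm the environment scripts and oratab were laid down on the new node:
[oracle@oradb5 oracle]$ ls -l /usr/local/bin/dbhome /usr/local/bin/oraenv /usr/local/bin/coraenv
[oracle@oradb5 oracle]$ cat /etc/oratab    # DBCA adds entries here when the new instance is created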
Verify that all the ASM disk groups are mounted and that the datafiles are visible to the new instance.

SQL> SELECT NAME, STATE, TYPE FROM V$ASM_DISKGROUP;

NAME                           STATE       TYPE
------------------------------ ----------- ------
ASMGRP1                        CONNECTED   NORMAL
ASMGRP2                        CONNECTED   NORMAL

SQL> SELECT NAME FROM V$DATAFILE;

NAME
-----------------------------------------------------------------
+ASMGRP1/sskydb/datafile/system.256.581006553
+ASMGRP1/sskydb/datafile/undotbs1.258.581006555
+ASMGRP1/sskydb/datafile/sysaux.257.581006553
+ASMGRP1/sskydb/datafile/users.259.581006555
+ASMGRP1/sskydb/datafile/example.269.581007007
+ASMGRP1/sskydb/datafile/undots2.271.581029215
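Two further hypothetical checks from the new instance; neither query appears in the original output, but both use standard dynamic views:

-- Confirm the redo threads are enabled and note which undo tablespace
-- the new instance is using:
SQL> SELECT THREAD#, STATUS, ENABLED FROM V$THREAD;
SQL> SHOW PARAMETER undo_tablespace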
Verify that the OCR is aware of:

The new instance in the cluster:

[oracle@oradb1 oracle]$ srvctl status database -d SSKYDB
Instance SSKY1 is running on node oradb1
Instance SSKY2 is running on node oradb2
Instance SSKY3 is running on node oradb3
Instance SSKY4 is running on node oradb4
Instance SSKY5 is running on node oradb5

The database services:

[oracle@oradb1 oracle]$ srvctl status service -d SSKYDB
Service CRM is running on instance(s) SSKY1
Service CRM is running on instance(s) SSKY2
Service CRM is running on instance(s) SSKY3
Service CRM is running on instance(s) SSKY4
Service CRM is running on instance(s) SSKY5
Service PAYROLL is running on instance(s) SSKY1
Service PAYROLL is running on instance(s) SSKY5
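As a final cross-check (not part of the original output), the Clusterware-level view of the same resources can be listed with crs_stat:

# Hypothetical extra verification: all ora.* resources, including the oradb5
# VIP, GSD, ONS, ASM, and SSKY5 instance resources, should report ONLINE.
[oracle@oradb1 oracle]$ /usr/app/oracle/product/10.2.0/crs/bin/crs_stat -t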