Create an authentication key for the oracle user. To create this key, change to the oracle user's default login directory and run the following:

[oracle@oradb5 oracle]$ ssh-keygen -t dsa -b 1024
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
b6:07:42:ae:47:56:0a:a3:a5:bf:75:3e:21:85:8d:30 oracle@oradb5.sumsky.net
[oracle@oradb5 oracle]$
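The user-equivalence check that cluvfy runs later depends on this key being trusted by every node. The mechanics can be sketched locally as follows (a throwaway directory stands in for each node's home; RSA is generated here only because modern OpenSSH builds reject DSA, which the article uses):

```shell
# Sketch of the key setup behind cluvfy's user-equivalence check.
# The article generates a DSA key; RSA is used here since newer
# OpenSSH releases reject DSA. All paths are throwaway examples.
KEYDIR="$(mktemp -d)/.ssh"
mkdir -p "$KEYDIR" && chmod 700 "$KEYDIR"
ssh-keygen -t rsa -b 2048 -N "" -f "$KEYDIR/id_rsa" -q
# On a real cluster, every node's public key is appended to every
# node's ~/.ssh/authorized_keys (copied between hosts with scp/ssh):
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"
chmod 600 "$KEYDIR/authorized_keys"
```

With one authorized_keys entry per node on every node, ssh between any pair of hosts as oracle proceeds without a password prompt, which is what cluvfy and OUI require.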
With the hardware and operating system configuration complete, verify it:

cluvfy stage -post hwos -n oradb1,oradb5
Performing post-checks for hardware and Operating system setup
Checking node reachability...
Node reachability check passed from node "oradb1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking node connectivity...
Node connectivity check passed for subnet "192.168.2.0" with node(s) oradb5,oradb1.
Node connectivity check passed for subnet "10.168.2.0" with node(s) oradb5,oradb1.
Suitable interfaces for the private interconnect on subnet "192.168.2.0":
oradb5 eth0:192.168.2.50 eth0:192.168.2.55
oradb1 eth0:192.168.2.10 eth0:192.168.2.15
Suitable interfaces for the private interconnect on subnet "10.168.2.0":
oradb5 eth1:10.168.2.150
oradb1 eth1:10.168.2.110
Checking shared storage accessibility...
Shared storage check failed on nodes "oradb5".
Post-check for hardware and operating system setup was unsuccessful on all the nodes.

As the output above shows, verification failed at the shared storage check: node oradb5 cannot see the storage devices. In this particular case, the disks did not have the correct permissions. If you ignore the error and continue, the Oracle Clusterware installation will fail. If instead you fix the error before rerunning, this verification step succeeds, as shown below:

Checking shared storage accessibility...
Shared storage check passed on nodes "oradb5,oradb1".
Post-check for hardware and operating system setup was successful on all the nodes.
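The failure here is simply a file-permission problem on the shared device files. A hedged sketch of the kind of fix involved, with a scratch file standing in for a real /dev entry so the steps can run anywhere (the device name and the oracle:dba ownership are illustrative, and chown on a real device requires root):

```shell
# Sketch: the shared storage check failed because the oracle user could
# not access the disks. On a real node the fix, run as root, is along
# the lines of:
#   chown oracle:dba /dev/sdc1 && chmod 660 /dev/sdc1
# A temporary file stands in for the device below.
DISK="$(mktemp)"        # stand-in for e.g. /dev/sdc1 (hypothetical name)
chmod 000 "$DISK"       # simulate the failing state: no access at all
chmod 660 "$DISK"       # owner/group read-write, as for oracle:dba
stat -c '%a' "$DISK"    # prints the resulting mode
```

After correcting the real devices on oradb5, rerun the same cluvfy stage to confirm the storage check passes.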
Before installing Oracle Clusterware, run the appropriate pre-installation checks against all the nodes in the node list.

[oracle@oradb1 cluvfy]$ cluvfy stage -pre crsinst -n oradb1,oradb5
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "oradb1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as Primary] failed.
Check failed on nodes: oradb5,oradb1
Administrative privileges check passed.
Checking node connectivity...
Node connectivity check passed for subnet "192.168.2.0" with node(s) oradb5,oradb1.
Node connectivity check passed for subnet "10.168.2.0" with node(s) oradb5,oradb1.
Suitable interfaces for the private interconnect on subnet "192.168.2.0":
oradb5 eth0:192.168.2.50 eth0:192.168.2.55
oradb1 eth0:192.168.2.10 eth0:192.168.2.15
Suitable interfaces for the private interconnect on subnet "10.168.2.0":
oradb5 eth1:10.168.2.150
oradb1 eth1:10.168.2.110
Checking system requirements for 'crs'...
Total memory check passed.
Check failed on nodes: oradb5,oradb1
Free disk space check passed.
Swap space check passed.
System architecture check passed.
Kernel version check passed.
Package existence check passed for "make-3.79".
Package existence check passed for "binutils-2.14".
Package existence check passed for "gcc-3.2".
Package existence check passed for "glibc-2.3.2-95.27".
Package existence check passed for "compat-db-4.0.14-5".
Package existence check passed for "compat-gcc-7.3-2.96.128".
Package existence check passed for "compat-gcc-c++-7.3-2.96.128".
Package existence check passed for "compat-libstdc++-7.3-2.96.128".
Package existence check passed for "compat-libstdc++-devel-7.3-2.96.128".
Package existence check passed for "openmotif-2.2.3".
Package existence check passed for "setarch-1.3-1".
Group existence check passed for "dba".
Group existence check passed for "oinstall".
User existence check passed for "nobody".
System requirement failed for 'crs'
Pre-check for cluster services setup was successful on all the nodes.
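Note the one failure in this transcript: oinstall is not the oracle user's primary group on either node. A sketch of the check and the usual remedy (the usermod command must run as root; the small helper function is illustrative, not part of cluvfy):

```shell
# Sketch: cluvfy flagged that "oinstall" is not oracle's primary group.
# The usual fix, run as root on each affected node, is:
#   usermod -g oinstall oracle
# The underlying check only compares the primary group name:
primary_group() { id -gn "$1" 2>/dev/null || echo "unknown"; }
if [ "$(primary_group oracle)" = "oinstall" ]; then
  echo "membership check would pass"
else
  echo "membership check would fail"
fi
```

After changing the primary group, log the oracle user out and back in on each node so the new group takes effect, then rerun the cluvfy stage.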
After all of the required Clusterware components have been copied from oradb1 to oradb5, OUI prompts you to run three scripts:

/usr/app/oracle/oraInventory/orainstRoot.sh on node oradb5

[root@oradb5 oraInventory]# ./orainstRoot.sh
Changing permissions of /usr/app/oracle/oraInventory to 770.
Changing groupname of /usr/app/oracle/oraInventory to dba.
The execution of the script is complete
[root@oradb5 oraInventory]#

/usr/app/oracle/product/10.2.0/crs/install/rootaddnode.sh on node oradb1. (The rootaddnode.sh script uses the srvctl utility to add the new node's information to the OCR. Note the srvctl command with the nodeapps argument at the end of the script output below.)

[root@oradb1 install]# ./rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 5: oradb5 oradb5-priv oradb5
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/usr/app/oracle/product/10.2.0/crs/bin/srvctl add nodeapps -n oradb5 -A oradb5-vip/255.255.255.0/bond0 -o /usr/app/oracle/product/10.2.0/crs
[root@oradb1 install]#

/usr/app/oracle/product/10.2.0/crs/root.sh on node oradb5.

[root@oradb5 crs]# ./root.sh
WARNING: directory '/usr/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/usr/app/oracle/product' is not owned by root
WARNING: directory '/usr/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
OCR backup directory '/usr/app/oracle/product/10.2.0/crs/cdata/SskyClst' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/usr/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/usr/app/oracle/product' is not owned by root
WARNING: directory '/usr/app/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname oradb1 for node 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node 1: oradb1 oradb1-priv oradb1
node 2: oradb2 oradb2-priv oradb2
node 3: oradb3 oradb3-priv oradb3
node 4: oradb4 oradb4-priv oradb4
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        oradb1
        oradb2
        oradb3
        oradb4
        oradb5
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
IP address "oradb-vip" has already been used. Enter an unused IP address.

The error "oradb-vip has already been used" appears because the VIP is already configured on all the nodes other than oradb5. It is important to run VIPCA (the Virtual IP Configuration Assistant) manually before continuing.

Configure the VIP manually with VIPCA. As when running OUI, running VIPCA requires that the terminal it runs from be X Window-capable. Otherwise, install an appropriate X Window emulator and point the session at it through the DISPLAY variable, using the following syntax:

export DISPLAY=<client IP address>:0.0

For example:

[oracle@oradb1 oracle]$ export DISPLAY=192.168.2.101:0.0

Immediately after root.sh completes at the command prompt on node oradb1 (or on whichever node the add-node procedure is being driven from), invoke VIPCA as root. (VIPCA also configures the GSD and ONS resources on the new node.)
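The manual VIPCA invocation can be sketched as follows (run as root on the driving node; the CRS home path matches the article's environment, the DISPLAY value is the earlier example address, and an X server must actually be listening there):

```shell
# Sketch: run VIPCA by hand after root.sh reports the vipca(silent)
# failure. Requires root and a reachable X display; the path is the
# article's CRS home. Illustrative only -- needs a live Clusterware stack.
export DISPLAY=192.168.2.101:0.0
/usr/app/oracle/product/10.2.0/crs/bin/vipca
```

VIPCA then prompts for an unused VIP for oradb5 and registers the node's VIP, GSD, and ONS resources in the OCR.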
After the Oracle software has been copied to node oradb5, OUI prompts you to run the /usr/app/oracle/product/10.2.0/db_1/root.sh script, as the root user in another window, on the new node (or nodes) of the cluster.

[root@oradb5 db_1]# ./root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /usr/app/oracle/product/10.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Verify that all of the ASM disk groups are mounted and that the datafiles are visible to the new instance.

SQL> SELECT NAME, STATE, TYPE FROM V$ASM_DISKGROUP;

NAME                           STATE       TYPE
------------------------------ ----------- ------
ASMGRP1                        CONNECTED   NORMAL
ASMGRP2                        CONNECTED   NORMAL

SQL> SELECT NAME FROM V$DATAFILE;

NAME
-----------------------------------------------------------------
+ASMGRP1/sskydb/datafile/system.256.581006553
+ASMGRP1/sskydb/datafile/undotbs1.258.581006555
+ASMGRP1/sskydb/datafile/sysaux.257.581006553
+ASMGRP1/sskydb/datafile/users.259.581006555
+ASMGRP1/sskydb/datafile/example.269.581007007
+ASMGRP1/sskydb/datafile/undots2.271.581029215
Verify that the OCR is aware of the following.

The new instance in the cluster:

[oracle@oradb1 oracle]$ srvctl status database -d SSKYDB
Instance SSKY1 is running on node oradb1
Instance SSKY2 is running on node oradb2
Instance SSKY3 is running on node oradb3
Instance SSKY4 is running on node oradb4
Instance SSKY5 is running on node oradb5

The database services:

[oracle@oradb1 oracle]$ srvctl status service -d SSKYDB
Service CRM is running on instance(s) SSKY1
Service CRM is running on instance(s) SSKY2
Service CRM is running on instance(s) SSKY3
Service CRM is running on instance(s) SSKY4
Service CRM is running on instance(s) SSKY5
Service PAYROLL is running on instance(s) SSKY1
Service PAYROLL is running on instance(s) SSKY5
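As a final OCR-level check, the nodeapps (VIP, GSD, and ONS) registered for the new node can also be queried, using the same 10g-style srvctl syntax as the rest of the article (illustrative only, since it needs a live Clusterware stack):

```shell
# Sketch: list the nodeapps resources registered for the new node.
# Run from any cluster node as the oracle user.
srvctl status nodeapps -n oradb5
```

If the VIP, GSD, or ONS is reported as not running on oradb5, revisit the manual VIPCA step before putting the new instance into service.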