Installing Oracle 11g RAC on RedHat 7.1: running root.sh during the grid software installation fails with CRS-4124 and CRS-4000

Oracle 11g R2 RAC: error running the root.sh script — looking for advice (Baidu Zhidao)

Accepted answer:
Is this the error you are seeing?

CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.

This is a bug in 11.2.0.1: the permissions on the /var/tmp/.oracle/npohasd pipe file are incorrect. To work around it:

1. Remove the previous configuration:
/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose

2. Re-run root.sh, and when the message "Adding daemon to inittab" appears, run:
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

3. If dd reports "/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory", the pipe file has not been created yet; keep retrying the dd command until it succeeds.
Oracle 11g GI installation: resolving CRS-4124
Source: Ask Oracle community
While installing 11.2.0.1 RAC on Oracle Linux 6.1, running the root.sh script during the grid installation fails as follows:
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.

The first time I installed 11gR2 RAC I hit this classic 11.2.0.1 problem. A quick search confirmed it is a known bug, and the workaround is simple: while root.sh runs, execute

/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

If dd reports the error below, the file has not been created yet; keep retrying until the command succeeds. In general, run the dd command as soon as the message "Adding daemon to inittab" appears.

/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory

Alternatively, run the following before executing root.sh:

chown root:oinstall /var/tmp/.oracle/npohasd

Before re-running root.sh, do not forget to remove the previous configuration first:

/u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose
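The retry-until-it-succeeds workaround above can be wrapped in a small helper run from a second terminal while root.sh executes. This is a minimal sketch, not Oracle tooling; the PIPE variable override exists only so the helper can be exercised outside a real installation (the real path on 11.2.0.1 is /var/tmp/.oracle/npohasd).

```shell
# Keep retrying the dd workaround until the npohasd pipe can be opened.
# PIPE defaults to the path used by the 11.2.0.1 workaround; it can be
# overridden for testing.
PIPE="${PIPE:-/var/tmp/.oracle/npohasd}"

wait_for_npohasd() {
  # root.sh creates the pipe around the "Adding daemon to inittab" step,
  # so loop until dd stops failing with "No such file or directory".
  until /bin/dd if="$PIPE" of=/dev/null bs=1024 count=1 2>/dev/null; do
    sleep 1
  done
  echo "npohasd pipe opened; ohasd should now start"
}
```

Run wait_for_npohasd just after starting root.sh; it returns as soon as one block has been read from the pipe.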
Building an Oracle 11g RAC 64-bit cluster on CentOS and VMware Workstation 10: 4. Oracle RAC installation FAQ - 4.6. Reconfiguring and uninstalling 11gR2 Grid Infrastructure - xuzhengzhu - cnblogs
The correct order, running each script on both nodes before moving on to the next, is:
1.[root@linuxrac1 ~]# /u01/app/oraInventory/orainstRoot.sh
2.[root@linuxrac2 ~]# /u01/app/oraInventory/orainstRoot.sh
3.[root@linuxrac1 ~]# /u01/app/11.2.0/grid/root.sh
4.[root@linuxrac2 ~]# /u01/app/11.2.0/grid/root.sh
When the clusterware was installed, the scripts were not run on both nodes in the order above; instead the following, incorrect order was used:
1. [root@linuxrac1 ~]# /u01/app/oraInventory/orainstRoot.sh
2. [root@linuxrac1 ~]# /u01/app/11.2.0/grid/root.sh
3. [root@linuxrac2 ~]# /u01/app/oraInventory/orainstRoot.sh
4. [root@linuxrac2 ~]# /u01/app/11.2.0/grid/root.sh
This caused the cluster installation to fail.
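The correct sequence can also be scripted so this ordering mistake cannot happen. A hedged sketch, assuming passwordless root ssh to the two node names used in this walkthrough; run_root_scripts is a hypothetical helper, not part of the Oracle installer:

```shell
# Run each root script to completion on every node before starting the
# next script anywhere. Paths and node names are the ones from this
# walkthrough.
run_root_scripts() {
  nodes="linuxrac1 linuxrac2"
  scripts="/u01/app/oraInventory/orainstRoot.sh /u01/app/11.2.0/grid/root.sh"
  for script in $scripts; do
    for node in $nodes; do
      echo "running $script on $node"
      ssh "root@$node" "$script" || return 1
    done
  done
}
```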
1. First, deconfigure. Deconfiguring Grid Infrastructure does not remove the binaries that were already copied; it only returns the system to the state it was in before CRS was configured.
a) Log in as root and run the following command on every node except the last one:
#perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force
[root@linuxrac1 ~]#perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force
00:20:37: Parsing the host name
00:20:37: Checking for super user privileges
00:20:37: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
PRCR-1035 : Failed to look up CRS resource ora.cluster_vip.type for 1
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.eons is registered
Cannot communicate with crsd
ADVM/ACFS is not supported on centos-release-5-4.el5.centos.1
ACFS-9201: Not Supported
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'linuxrac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'linuxrac1'
CRS-4548: Unable to connect to CRSD
CRS-2675: Stop of 'ora.crsd' on 'linuxrac1' failed
CRS-2679: Attempting to clean 'ora.crsd' on 'linuxrac1'
CRS-4548: Unable to connect to CRSD
CRS-2678: 'ora.crsd' on 'linuxrac1' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'linuxrac1' has failed
CRS-4687: Shutdown command has completed with error(s).
CRS-4000: Command Stop failed, or completed with errors.
You must kill crs processes or reboot the system to properly
cleanup the processes started by Oracle clusterware
Successfully deconfigured Oracle clusterware stack on this node
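The per-node deconfig runs in steps a) and b) can be driven from one node. A minimal sketch, assuming passwordless root ssh; pass the node list with the last node last, since the final run is the one that clears the OCR and voting disk (deconfig_nodes is a hypothetical helper):

```shell
# Deconfigure the clusterware stack on each node in turn, last node last.
# The grid home path is the one used throughout this walkthrough.
deconfig_nodes() {
  for node in "$@"; do
    echo "deconfiguring on $node"
    ssh "root@$node" \
      "perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force" \
      || return 1
  done
}
```

Usage: deconfig_nodes linuxrac1 linuxrac2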
b) Then, again as root, run the same command on the last node. This run also clears the OCR configuration and the voting disk:
#perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force
[root@linuxrac2 ~]#perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force
00:25:37: Parsing the host name
00:25:37: Checking for super user privileges
00:25:37: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
VIP exists.:linuxrac1
VIP exists.: /linuxrac1-vip/10.10.97.181/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 18049, multicast IP address 234.241.229.252, listening port 2016
PRKO-2439 : VIP does not exist.
PRKO-2313 : VIP linuxrac2 does not exist.
ADVM/ACFS is not supported on centos-release-5-4.el5.centos.1
ACFS-9201: Not Supported
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'linuxrac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'linuxrac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'linuxrac2'
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'linuxrac2'
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'linuxrac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'linuxrac2'
CRS-2677: Stop of 'ora.asm' on 'linuxrac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'linuxrac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'linuxrac2' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'linuxrac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'linuxrac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'linuxrac2'
CRS-2673: Attempting to stop 'ora.asm' on 'linuxrac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'linuxrac2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'linuxrac2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'linuxrac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'linuxrac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'linuxrac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'linuxrac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'linuxrac2'
CRS-2677: Stop of 'ora.cssd' on 'linuxrac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'linuxrac2'
CRS-2673: Attempting to stop 'ora.diskmon' on 'linuxrac2'
CRS-2677: Stop of 'ora.gpnpd' on 'linuxrac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'linuxrac2'
CRS-2677: Stop of 'ora.gipcd' on 'linuxrac2' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'linuxrac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'linuxrac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
c) If ASM disks were used, continue with the following steps to make the disks available as ASM candidates again (this wipes all ASM disk groups):
[root@linuxrac1 ~]#dd if=/dev/zero of=/dev/sdb1 bs=1024 count=10000
10000+0 records in
10000+0 records out
10240000 bytes (10M) copied, 0.002998 seconds, 34.2 MB/s
[root@linuxrac2 ~]#dd if=/dev/zero of=/dev/sdb1 bs=1024 count=10000
10000+0 records in
10000+0 records out
10240000 bytes (10M) copied, 0.00289 seconds, 35.4 MB/s
[root@linuxrac1 /]# /etc/init.d/oracleasm deletedisk OCR_VOTE /dev/sdb1
Removing ASM disk "OCR_VOTE":                              [  OK  ]
[root@linuxrac2 /]# /etc/init.d/oracleasm deletedisk OCR_VOTE /dev/sdb1
Removing ASM disk "OCR_VOTE":                              [  OK  ]
[root@linuxrac1 /]# /etc/init.d/oracleasm createdisk OCR_VOTE /dev/sdb1
[root@linuxrac2 /]#oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "OCR_VOTE"
[root@linuxrac2 /]# oracleasm listdisks
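Before re-running the grid installer it is worth confirming the relabeled disk is visible on every node. A hedged sketch; check_asm_disk is a hypothetical helper, and ORACLEASM defaults to the init-script path used in the transcript above but can be overridden:

```shell
# Rescan and check that a given ASM disk label is listed on this node.
ORACLEASM="${ORACLEASM:-/etc/init.d/oracleasm}"

check_asm_disk() {
  disk="$1"
  "$ORACLEASM" scandisks >/dev/null 2>&1
  if "$ORACLEASM" listdisks | grep -qx "$disk"; then
    echo "$disk visible"
  else
    echo "$disk missing"
    return 1
  fi
}
```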
2. Completely removing Grid Infrastructure
11gR2 Grid Infrastructure also ships a full uninstall utility, deinstall, which replaces the OUI-based method of removing the clusterware and ASM and returns the environment to its pre-installation state.
The command stops the cluster and removes the binaries together with all related configuration.
Command location: $GRID_HOME/deinstall
A concrete example of running the command follows. During the run you must supply some interactive input, and at the end run a script as root in a new session.
[root@linuxrac1 /]# cd /u01/app/11.2.0/grid/
[root@linuxrac1 /]# cd bin
[root@linuxrac1 bin]# ./crsctl check crs
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Check failed, or completed with errors.
[root@linuxrac1 bin]# cd ../deinstall/
[root@linuxrac1 deinstall]# pwd
[root@linuxrac1 deinstall]# su grid
[grid@linuxrac1 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall_06-18-10-PM/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################## CHECK OPERATION START ########################
Install check configuration START
Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for de-install is: CRS
Oracle Base selected for de-install is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: linuxrac1,linuxrac2
Install check configuration END
Traces log file: /tmp/deinstall_06-18-10-PM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "linuxrac1"[linuxrac1-vip]
The following information can be collected by running ifconfig -a on node "linuxrac1"
Enter the IP netmask of Virtual IP "10.10.97.181" on node "linuxrac1"[255.255.255.0]
Enter the network interface name on which the virtual IP address "10.10.97.181" is active
Enter an address or the name of the virtual IP used on node "linuxrac2"[linuxrac2-vip]
The following information can be collected by running ifconfig -a on node "linuxrac2"
Enter the IP netmask of Virtual IP "10.10.97.183" on node "linuxrac2"[255.255.255.0]
Enter the network interface name on which the virtual IP address "10.10.97.183" is active
Enter an address or the name of the virtual IP[]
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall_06-18-10-PM/logs/netdc_check0150519.log
Specify all RAC listeners that are to be de-configured [LISTENER,LISTENER_SCAN1]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall_06-18-10-PM/logs/asmcadc_check4710711.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Enter the OCR/Voting Disk diskgroup name []:
Specify the ASM Diagnostic Destination [ ]:
Specify the diskgroups that are managed by this ASM instance []:
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home exists are: (Please input nodes seperated by ",", eg: node1,node2,...)linuxrac1,linuxrac2
Oracle Home selected for de-install is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall_06-18-10-PM/logs/deinstall_deconfig_06-18-44-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall_06-18-10-PM/logs/deinstall_deconfig_06-18-44-PM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall_06-18-10-PM/logs/asmcadc_clean850558.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall_06-18-10-PM/logs/netdc_clean4092411.log
De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1
De-configuring listener: LISTENER
    Stopping listener: LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN1
    Stopping listener: LISTENER_SCAN1
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
----------------------------------------
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
Delete directory '/u01/app/11.2.0/grid' on the local node : Done
Delete directory '/u01/app/oraInventory' on the local node : Done
Delete directory '/u01/app/grid' on the local node : Done
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'linuxrac2' : Done
Delete directory '/u01/app/11.2.0/grid' on the remote nodes 'linuxrac2' : Done
Delete directory '/u01/app/oraInventory' on the remote nodes 'linuxrac2' : Done
Delete directory '/u01/app/grid' on the remote nodes 'linuxrac2' : Done
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
Oracle install clean START
Clean install operation removing temporary directory '/tmp/install' on node 'linuxrac1'
Clean install operation removing temporary directory '/tmp/install' on node 'linuxrac2'
Oracle install clean END
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
Oracle Clusterware was already stopped and de-configured on node "linuxrac2"
Oracle Clusterware was already stopped and de-configured on node "linuxrac1"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'linuxrac2'.
Successfully deleted directory '/u01/app/11.2.0/grid' on the remote nodes 'linuxrac2'.
Successfully deleted directory '/u01/app/oraInventory' on the remote nodes 'linuxrac2'.
Successfully deleted directory '/u01/app/grid' on the remote nodes 'linuxrac2'.
Oracle Universal Installer cleanup was successful.
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'linuxrac1,linuxrac2' at the end of the session.
Oracle install successfully cleaned up the temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
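The deinstall session above finishes by asking for one manual step: removing /etc/oraInst.loc as root on both nodes. A minimal sketch of that cleanup, assuming passwordless root ssh and the node names from this walkthrough (cleanup_orainst is a hypothetical helper):

```shell
# Remove the central inventory pointer file on each node, as instructed
# by the deinstall tool at the end of the session.
cleanup_orainst() {
  for node in linuxrac1 linuxrac2; do
    ssh "root@$node" 'rm -rf /etc/oraInst.loc' || return 1
  done
  echo "oraInst.loc removed on all nodes"
}
```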
All posts in the series "Building an Oracle 11g RAC 64-bit cluster on CentOS and VMware Workstation 10":
1. Resource preparation
2. Environment setup - 2.1 Creating the virtual machines
2. Environment setup - 2.2 Installing the OS (CentOS 5.4)
2. Environment setup - 2.3 Configuring shared disks
2. Environment setup - 2.4 Installing the JDK
2. Environment setup - 2.5 Configuring the network
2. Environment setup - 2.6 Installing the packages Oracle depends on
2. Environment setup - 2.7 Configuring resources and parameters
2. Environment setup - 2.8 Configuring user environments
2. Environment setup - 2.9 Configuring user equivalence (optional)
2. Environment setup - 2.10 Configuring the NTP service
3. Installing Oracle RAC - 3.1 Installing and configuring the ASM driver
3. Installing Oracle RAC - 3.2 Installing the cvuqdisk package
3. Installing Oracle RAC - 3.3 Pre-installation checks
3. Installing Oracle RAC - 3.4 Installing Grid Infrastructure
3. Installing Oracle RAC - 3.5 Installing the Oracle 11gR2 database software and creating a database
3. Installing Oracle RAC - 3.6 Cluster management commands
4. Oracle RAC installation FAQ - 4.1 Gnome errors in the system UI
4. Oracle RAC installation FAQ - 4.2 oracleasm createdisk fails: Instantiating disk: failed
4. Oracle RAC installation FAQ - 4.3 Connectivity failures between cluster nodes
4. Oracle RAC installation FAQ - 4.4 Unable to run the Grid Infrastructure graphical installer
4. Oracle RAC installation FAQ - 4.5 Insufficient space when creating the ASM disk group during grid installation
4. Oracle RAC installation FAQ - 4.6 Reconfiguring and uninstalling 11gR2 Grid Infrastructure
4. Oracle RAC installation FAQ - 4.7 Changing the public network IP in Oracle 11gR2 RAC
