[Oracle 12c Flex Cluster] Node Role Conversion
In my previous translated article on Flex Clusters, while introducing the leaf node I promised to cover how to convert a node's role between hub node and leaf node. Since my test environment already contains a leaf node, let's start with the leaf-to-hub conversion.

Initial state:

[root@rac1 ~]# crsctl get cluster mode status
Cluster is running in "flex" mode
[root@rac1 ~]# srvctl status srvpool -detail
Server pool name: Free
Active servers count: 0
Active server names:
Server pool name: Generic
Active servers count: 0
Active server names:
Server pool name: RF1POOL
Active servers count: 1
Active server names: rac3
NAME=rac3 STATE=ONLINE
Server pool name: ztp_pool
Active servers count: 2
Active server names: rac1,rac2
NAME=rac1 STATE=ONLINE
NAME=rac2 STATE=ONLINE
[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'leaf'
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'leaf'
Converting leaf to hub

A database named orcl runs on this cluster. Before converting the role, observe the state of the orcl database:

ora.orcl.db
1 ONLINE ONLINE rac3 Open,Readonly,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
2 ONLINE ONLINE rac2 Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
3 ONLINE ONLINE rac1 Open,STABLE
As expected, because rac3 is currently a leaf node, the database instance on rac3 can only be opened read-only. The following command converts rac3 from a leaf node to a hub node; the syntax is:

crsctl set node role {hub | leaf}

[root@rac3 ~]# crsctl set node role hub
CRS-4408: Node 'rac3' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.
Check each node's role information:

[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'hub', but active role is 'leaf'.
Restart Oracle High Availability Services for the new role to take effect.
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'leaf', but configured role is 'hub'.
Restart Oracle High Availability Services for the new role to take effect.
The command output makes clear that CRS on the node must be restarted before the new role takes effect; in other words, the role conversion cannot be done online.

Stop the CRS stack on rac3:

[root@rac3 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac3'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'rac3'
CRS-2677: Stop of 'ora.orcl.db' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac3'
CRS-2673: Attempting to stop 'ora.LISTENER_LEAF.lsnr' on 'rac3'
CRS-2677: Stop of 'ora.LISTENER_LEAF.lsnr' on 'rac3' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.rac3.vip' on 'rac3'
CRS-2677: Stop of 'ora.rac3.vip' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.rac3.vip' on 'rac2'
CRS-2676: Start of 'ora.rac3.vip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac3'
CRS-2677: Stop of 'ora.net1.network' on 'rac3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac3' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac3'
CRS-2673: Attempting to stop 'ora.crf' on 'rac3'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac3'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac3'
CRS-2677: Stop of 'ora.mdnsd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac3'
CRS-2677: Stop of 'ora.cssd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'rac3'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac3'
CRS-2677: Stop of 'ora.driver.afd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
With rac3's CRS stack down, check the node role information again (rac3 no longer appears in the output):

[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Start the CRS stack on rac3:

[root@rac3 ~]# crsctl start crs -wait
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'rac3'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac3'
CRS-2676: Start of 'ora.mdnsd' on 'rac3' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac3'
CRS-2676: Start of 'ora.gpnpd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac3'
CRS-2676: Start of 'ora.gipcd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac3'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac3'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac3'
CRS-2676: Start of 'ora.diskmon' on 'rac3' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac3'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac3'
CRS-2676: Start of 'ora.ctssd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac3'
CRS-2676: Start of 'ora.crf' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac3'
CRS-2676: Start of 'ora.crsd' on 'rac3' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac3'
CRS-2676: Start of 'ora.drivers.acfs' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac3'
CRS-2676: Start of 'ora.asm' on 'rac3' succeeded
CRS-6017: Processing resource auto-start for servers: rac3
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac3'
CRS-2673: Attempting to stop 'ora.rac3.vip' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac2'
CRS-2672: Attempting to start 'ora.ons' on 'rac3'
CRS-2677: Stop of 'ora.rac3.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.rac3.vip' on 'rac3'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rac2'
CRS-2677: Stop of 'ora.scan2.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'rac3'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac3' succeeded
CRS-2676: Start of 'ora.rac3.vip' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rac3'
CRS-2676: Start of 'ora.ons' on 'rac3' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac3'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rac3' succeeded
CRS-2679: Attempting to clean 'ora.asm' on 'rac3'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac3' succeeded
CRS-2681: Clean of 'ora.asm' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac3'
CRS-2676: Start of 'ora.asm' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac3'
CRS-2672: Attempting to start 'ora.FLEXDG.dg' on 'rac3'
CRS-2676: Start of 'ora.FLEXDG.dg' on 'rac3' succeeded
CRS-2676: Start of 'ora.DATA.dg' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.orcl.db' on 'rac3'
CRS-2672: Attempting to start 'ora.prod1.db' on 'rac3'
CRS-2676: Start of 'ora.orcl.db' on 'rac3' succeeded
CRS-2676: Start of 'ora.prod1.db' on 'rac3' succeeded
CRS-6016: Resource auto-start has completed for server rac3
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
After the startup completes, check each node's role information again:

[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'hub'
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'hub'
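The leaf-to-hub conversion above boils down to three commands plus a verification step. As a recap, here is a dry-run sketch that only prints the command plan (role_conversion_plan is a hypothetical helper name; the crsctl commands are the ones used above and must be run as root on a real cluster):

```shell
# Prints the command plan for converting a node's cluster role.
# Dry-run sketch: it only echoes the commands shown in this article;
# the printed crsctl commands have to be executed on the actual cluster.
role_conversion_plan() {
  local node="$1" role="$2"
  echo "# on $node, as root:"
  echo "crsctl set node role $role"
  echo "crsctl stop crs"
  echo "crsctl start crs -wait"
  echo "# on any cluster node, to verify:"
  echo "crsctl get node role status -all"
}

role_conversion_plan rac3 hub
```

Note that the stop/restart pair is unavoidable: as the CRS-4408 message said, the configured role only becomes the active role after Oracle High Availability Services is restarted on that node.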
Now observe the state of the whole cluster:

[root@rac1 ~]# crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.DATA.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.FLEXDG.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.OCR.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.net1.network
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.ons
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.proxy_advm
OFFLINE OFFLINE rac1 STABLE
OFFLINE OFFLINE rac2 STABLE
OFFLINE OFFLINE rac3 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac3 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac2 STABLE
ora.MGMTLSNR
1 OFFLINE OFFLINE STABLE
ora.asm
1 ONLINE ONLINE rac1 Started,STABLE
2 ONLINE ONLINE rac2 Started,STABLE
3 ONLINE ONLINE rac3 Started,STABLE
ora.cvu
1 ONLINE ONLINE rac2 STABLE
ora.gns
1 ONLINE ONLINE rac1 STABLE
ora.gns.vip
1 ONLINE ONLINE rac1 STABLE
ora.orcl.db
1 ONLINE ONLINE rac3 Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
2 ONLINE ONLINE rac2 Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
3 ONLINE ONLINE rac1 Open,STABLE
ora.prod1.db
1 ONLINE ONLINE rac1 Open,STABLE
2 ONLINE ONLINE rac2 Open,STABLE
3 ONLINE ONLINE rac3 Open,STABLE
ora.qosmserver
1 OFFLINE OFFLINE STABLE
ora.rac1.vip
1 ONLINE ONLINE rac1 STABLE
ora.rac2.vip
1 ONLINE ONLINE rac2 STABLE
ora.rac3.vip
1 ONLINE ONLINE rac3 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac1 STABLE
ora.scan2.vip
1 ONLINE ONLINE rac3 STABLE
ora.scan3.vip
1 ONLINE ONLINE rac2 STABLE
--------------------------------------------------------------------------------
The orcl instance on rac3 is now in the Open state rather than the earlier Open,Readonly.

Converting hub to leaf

In 12cR2, a node can be given the leaf role only if the cluster resolves its SCAN through GNS.

[root@rac3 ~]# crsctl set node role leaf
CRS-4408: Node 'rac3' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.
As before, CRS on rac3 must be restarted for the configuration to take effect (restart output omitted). After the restart, the node roles are as follows:

[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'leaf'
[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'leaf'
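When scripting a conversion like this, it helps to verify the active role from the `crsctl get node role status -all` output before proceeding. A minimal sketch (check_active_role is a hypothetical helper; here it is fed the sample output from above instead of live crsctl output):

```shell
# Reads 'crsctl get node role status -all' style output on stdin and
# confirms that the given node reports the expected active role.
check_active_role() {
  local node="$1" expected="$2"
  if grep -q "Node '$node' active role is '$expected'"; then
    echo "$node active role is $expected"
  else
    echo "$node is NOT $expected" >&2
    return 1
  fi
}

# Fed with the sample lines from above; on a real cluster you would pipe:
#   crsctl get node role status -all | check_active_role rac3 leaf
printf "Node 'rac1' active role is 'hub'\nNode 'rac2' active role is 'hub'\nNode 'rac3' active role is 'leaf'\n" \
  | check_active_role rac3 leaf
```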
The cluster state is now as follows:

[root@rac1 ~]# crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.DATA.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.FLEXDG.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE rac3 STABLE
ora.OCR.dg
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.net1.network
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ONLINE ONLINE rac3 STABLE
ora.ons
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.proxy_advm
OFFLINE OFFLINE rac1 STABLE
OFFLINE OFFLINE rac2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac2 STABLE
ora.MGMTLSNR
1 OFFLINE OFFLINE STABLE
ora.asm
1 ONLINE ONLINE rac1 Started,STABLE
2 ONLINE ONLINE rac2 Started,STABLE
3 ONLINE OFFLINE Instance Shutdown,STABLE
ora.cvu
1 ONLINE ONLINE rac2 STABLE
ora.gns
1 ONLINE ONLINE rac1 STABLE
ora.gns.vip
1 ONLINE ONLINE rac1 STABLE
ora.orcl.db
1 ONLINE ONLINE rac3 Open,STABLE
2 ONLINE ONLINE rac2 Open,STABLE
3 ONLINE ONLINE rac1 Open,STABLE
ora.prod1.db
1 ONLINE ONLINE rac1 Open,STABLE
2 ONLINE ONLINE rac2 Open,STABLE
ora.qosmserver
1 OFFLINE OFFLINE STABLE
ora.rac1.vip
1 ONLINE ONLINE rac1 STABLE
ora.rac2.vip
1 ONLINE ONLINE rac2 STABLE
ora.rac3.vip
1 ONLINE ONLINE rac3 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac1 STABLE
ora.scan2.vip
1 ONLINE ONLINE rac1 STABLE
ora.scan3.vip
1 ONLINE ONLINE rac2 STABLE
--------------------------------------------------------------------------------
After rac3 was switched to a leaf node, a new resource, ora.LISTENER_LEAF.lsnr, appeared. An important point: the read-only database instance on a leaf node registers its services with the LISTENER_LEAF listener, not with LISTENER.

[root@rac3 ~]# srvctl start listener -listener LISTENER_LEAF
[grid@rac3 ~]$ lsnrctl status

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 27-JUL-2017 16:46:01
Copyright (c) 1991, 2016, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                27-JUL-2017 16:24:27
Uptime                    0 days 0 hr. 21 min. 34 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/rac3/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.70.103)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.70.186)(PORT=1521)))
The listener supports no services
The command completed successfully

[grid@rac3 ~]$ lsnrctl status listener_leaf

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 27-JUL-2017 16:46:02
Copyright (c) 1991, 2016, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_LEAF)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_LEAF
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                27-JUL-2017 16:44:31
Uptime                    0 days 0 hr. 1 min. 31 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/rac3/listener_leaf/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_LEAF)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.70.103)(PORT=1525)))
Services Summary...
Service "5491bed1838610f0e05366460a0a5736" has 1 instance(s).
  Instance "orcl_1", status READY, has 1 handler(s) for this service...
Service "5507ca8c0abd4747e05365460a0a8d01" has 1 instance(s).
  Instance "orcl_1", has 1 handler(s) for this service...
Service "orcl" has 1 instance(s).
  Instance "orcl_1", has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
  Instance "orcl_1", has 1 handler(s) for this service...
Service "orclpdb" has 1 instance(s).
  Instance "orcl_1", has 1 handler(s) for this service...
Service "ztp" has 1 instance(s).
  Instance "orcl_1", has 1 handler(s) for this service...
The command completed successfully
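Since the leaf node's instance registers only with LISTENER_LEAF, a client that wants to reach it directly needs a connect descriptor pointing at that endpoint. A minimal tnsnames.ora sketch, assuming the ztp service and port 1525 shown in the lsnrctl output above (the alias ORCL_LEAF is made up for illustration):

```
ORCL_LEAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac3)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ztp)
    )
  )
```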
Finally, note that the default listener port on a leaf node is 1525.

Conclusion

In a 12c Flex Cluster, crsctl set node role switches a node between the hub and leaf roles, but the change only takes effect after CRS on that node is restarted, so the conversion cannot be done online. Converting a node to the leaf role additionally requires GNS-based SCAN resolution, and the read-only instance on a leaf node registers with LISTENER_LEAF, which listens on port 1525 by default.