Published February 9, 2024 (author:)
Solaris 11.3 + Sun Cluster 4.3 + VirtualBox
I. Preparation
One laptop: 8 GB RAM, Core i3 or better CPU; an SSD is recommended.
OS: Solaris 11.3 x86 installation media plus patches (repo + SRU).
HA software: Sun Cluster 4.3 installation package.
VM requirements:
Node1: 30 GB OS disk (dynamically allocated); IP 192.168.56.10/24; three NICs (1 for the OS network, 2 for heartbeats)
Node2: 30 GB OS disk (dynamically allocated); IP 192.168.56.11/24; three NICs (1 for the OS network, 2 for heartbeats)
Quorum disk: 1 GB
Data disks: 2 x 2 GB
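If you prefer to script the VM setup, the plan above maps onto a few VBoxManage calls. The sketch below is a dry run that only prints the commands it would issue; the VM names (solaris-node1/2), the OS type string, and the host-only adapter name vboxnet0 are assumptions, not taken from this guide. Remove the echo wrappers to actually execute.

```shell
#!/bin/sh
# Dry-run: print the VBoxManage commands implied by the VM plan above.
# VM names, OS type and "vboxnet0" are assumed, not from this guide.
for i in 1 2; do
    vm="solaris-node$i"
    ip="192.168.56.$((9 + i))"   # node1 -> .10, node2 -> .11
    echo "VBoxManage createvm --name $vm --ostype Solaris11_64 --register"
    echo "VBoxManage modifyvm $vm --memory 2048 --nic1 hostonly --hostonlyadapter1 vboxnet0   # OS NIC, $ip"
    echo "VBoxManage modifyvm $vm --nic2 intnet --nic3 intnet   # two heartbeat NICs"
    echo "VBoxManage createhd --filename $vm-os.vdi --size 30720 --format VDI   # 30 GB OS disk"
done
```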
II. Install the OS, apply patches, and install the cluster software
1. Install Solaris 11.3
This step is omitted here.
2. Apply the SRU patches and install the desktop
# pkg set-publisher -G '*' -g /sol-11-3-SRU/publisher/solaris/ solaris
# pkg set-publisher -g file:///mnt/repo/ solaris
# pkg update
# pkg install --accept solaris-desktop
3. Install Sun Cluster 4.3
# mount -F hsfs /root/osc-4_ /mnt
# pkg set-publisher -G '*' -g file:///mnt/repo ha-cluster
# pkg install ha-cluster-full
# pkg list ha-cluster-full
III. Add shared disks
1. Create the disks:
D:virtual+vmwareVirtualBox> VBoxManage createhd --filename D:virtual+ --size 1024 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 7d05bce2-7d62-4fdf-80cb-b37138c4e496
D:virtual+vmwareVirtualBox> VBoxManage createhd --filename D:virtual+ --size 2048 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 55b08a9b-6043-47c1-9ba4-4215c2680de5
D:virtual+vmwareVirtualBox> VBoxManage createhd --filename D:virtual+ --size 2048 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 51d55226-ac4f-4eed-b2ab-567d9c905312
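The three fixed-size shared disks can also be created in one loop instead of three separate invocations. This is a dry-run sketch that prints the commands rather than running them; the .vdi file names are placeholders, since the real paths are truncated in the text above.

```shell
#!/bin/sh
# Dry-run: one createhd per shared disk (1 GB quorum, 2 x 2 GB data).
# quorum.vdi/data1.vdi/data2.vdi are placeholder names, not from the guide.
set -- "quorum.vdi 1024" "data1.vdi 2048" "data2.vdi 2048"
for spec in "$@"; do
    name=${spec% *}       # file name part
    size=${spec#* }       # size in MB
    echo "VBoxManage createhd --filename $name --size $size --format VDI --variant Fixed"
done
```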
2. Attach the disks to VMs node1 and node2:
D:virtual+vmwareVirtualBox> VBoxManage storageattach solaris-node2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium D:virtual+ --mtype shareable
D:virtual+vmwareVirtualBox> VBoxManage storageattach solaris-node2 --storagectl "SATA" --port 2 --device 0 --type hdd --medium D:virtual+ --mtype shareable
D:virtual+vmwareVirtualBox> VBoxManage storageattach solaris-node2 --storagectl "SATA" --port 3 --device 0 --type hdd --medium D:virtual+ --mtype shareable
D:virtual+vmwareVirtualBox> VBoxManage storageattach solaris-node1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium D:virtual+ --mtype shareable
D:virtual+vmwareVirtualBox> VBoxManage storageattach solaris-node1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium D:virtual+ --mtype shareable
D:virtual+vmwareVirtualBox> VBoxManage storageattach solaris-node1 --storagectl "SATA" --port 3 --device 0 --type hdd --medium D:virtual+ --mtype shareable
3. Mark the disks shareable:
D:virtual+vmwareVirtualBox> VBoxManage modifyhd d:virtual+ --type shareable
D:virtual+vmwareVirtualBox> VBoxManage modifyhd d:virtual+ --type shareable
D:virtual+vmwareVirtualBox> VBoxManage modifyhd d:virtual+ --type shareable
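Because every shared disk must sit on the same SATA port of both VMs and then be marked shareable, the attach and modify steps condense into loops. A dry-run sketch with assumed .vdi names (the real paths are truncated in this guide):

```shell
#!/bin/sh
# Dry-run: attach each shared disk to the same SATA port on both VMs,
# then mark every disk image shareable. File names are placeholders.
for vm in solaris-node1 solaris-node2; do
    port=1
    for disk in quorum.vdi data1.vdi data2.vdi; do
        echo "VBoxManage storageattach $vm --storagectl SATA --port $port --device 0 --type hdd --medium $disk --mtype shareable"
        port=$((port + 1))
    done
done
for disk in quorum.vdi data1.vdi data2.vdi; do
    echo "VBoxManage modifyhd $disk --type shareable"
done
```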
IV. Configure the cluster
1. Initial configuration
# scinstall
*** Main Menu ***
Please select from one of the following (*) options:
* 1) Create a new cluster or add a cluster node
2) Configure a cluster to be JumpStarted from this install server
3) Manage a dual-partition upgrade
4) Upgrade this cluster node
* 5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 1
*** New Cluster and Cluster Node Menu ***
Please select from any one of the following options:
1) Create a new cluster
2) Create just the first node of a new cluster on this machine
3) Add this machine as a node in an existing cluster
?) Help with menu options
q) Return to the Main Menu
Option: 2
*** Establish Just the First Node of a New Cluster ***
This option is used to establish a new cluster using this machine as
the first node in that cluster.
Before you select this option, the Oracle Solaris Cluster framework
software must already be installed. Use the Oracle Solaris Cluster
installation media or the IPS packaging system to install Oracle
Solaris Cluster software.
Press Control-d at any time to return to the Main Menu.
Do you want to continue (yes/no) [yes]?
>>> Typical or Custom Mode <<<
This tool supports two modes of operation, Typical mode and Custom.
For most clusters, you can use Typical mode. However, you might need
to select the Custom mode option if not all of the Typical defaults
can be applied to your cluster.
For more information about the differences between Typical and Custom
modes, select the Help option from the menu.
Please select from one of the following options:
1) Typical
2) Custom
?) Help
q) Return to the Main Menu
Option [1]: 2
>>> Cluster Name <<<
Each cluster has a name assigned to it. The name can be made up of any
characters other than whitespace. Each cluster name should be unique
within the namespace of your enterprise.
What is the name of the cluster you want to establish? yx_cluster
>>> Check <<<
This step allows you to run cluster check to verify that certain basic
hardware and software pre-configuration requirements have been met. If
cluster check detects potential problems with configuring this machine
as a cluster node, a report of violated checks is prepared and
available for display on the screen.
Do you want to run cluster check (yes/no) [no]?
>>> Cluster Nodes <<<
This Oracle Solaris Cluster release supports a total of up to 16
nodes.
Please list the names of the other nodes planned for the initial
cluster configuration. List one node name per line. When finished,
type Control-D:
Node name (Control-D to finish): node1
Node name (Control-D to finish): node2
Node name (Control-D to finish): ^D
This is the complete list of nodes:
node1
node2
Is it correct (yes/no) [yes]?
>>> Authenticating Requests to Add Nodes <<<
Once the first node establishes itself as a single node cluster, other
nodes attempting to add themselves to the cluster configuration must
be found on the list of nodes you just provided. You can modify this
list by using claccess(1CL) or other tools once the cluster has been
established.
By default, nodes are not securely authenticated as they attempt to
add themselves to the cluster configuration. This is generally
considered adequate, since nodes which are not physically connected to
the private cluster interconnect will never be able to actually join
the cluster. However, DES authentication is available. If DES
authentication is selected, you must configure all necessary
encryption keys before any node will be allowed to join the cluster
(see keyserv(1M), publickey(4)).
Do you need to use DES authentication (yes/no) [no]?
>>> Minimum Number of Private Networks <<<
Each cluster is typically configured with at least two private
networks. Configuring a cluster with just one private interconnect
provides less availability and will require the cluster to spend more
time in automatic recovery if that private interconnect fails.
Should this cluster use at least two private networks (yes/no) [yes]?
>>> Point-to-Point Cables <<<
The two nodes of a two-node cluster may use a directly-connected
interconnect. That is, no cluster switches are configured. However,
when there are greater than two nodes, this interactive form of
scinstall assumes that there will be exactly one switch for each
private network.
Does this two-node cluster use switches (yes/no) [yes]? no
>>> Cluster Transport Adapters and Cables <<<
Transport adapters are the adapters that attach to the private cluster
interconnect.
Select the first cluster transport adapter:
1) net0
2) net1
4) net11
5) net3
6) net4
7) net5
8) net6
9) net7
10) net9
n) Next >
Option: 9
Adapter "net5" is an Ethernet adapter.
Searching for any unexpected network traffic on "net5" ... done
Verification completed. No traffic was detected over a 10 second
sample period.
The "dlpi" transport type will be set for this cluster.
Select the second cluster transport adapter:
1) net0
2) net1
4) net11
5) net3
6) net4
7) net5
8) net6
9) net7
10) net9
n) Next >
Option: 3
Adapter "net11" is an Ethernet adapter.
Searching for any unexpected network traffic on "net11" ... done
Verification completed. No traffic was detected over a 10 second
sample period.
The "dlpi" transport type will be set for this cluster.
>>> Network Address for the Cluster Transport <<<
The cluster transport uses a default network address of 172.16.0.0. If
this IP address is already in use elsewhere within your enterprise,
specify another address from the range of recommended private
addresses (see RFC 1918 for details).
The default netmask is 255.255.240.0. You can select another netmask,
as long as it minimally masks all bits that are given in the network
address.
The default private netmask and network address result in an IP
address range that supports a cluster with a maximum of 64 nodes, 10
private networks, and 12 virtual clusters.
Is it okay to accept the default network address (yes/no) [yes]?
Is it okay to accept the default netmask (yes/no) [yes]?
Plumbing network address 172.16.0.0 on adapter bge0 >> NOT DUPLICATE ... done
Plumbing network address 172.16.0.0 on adapter bge1 >> NOT DUPLICATE ... done
>>> Global Devices File System <<<
Each node in the cluster must have a local file system mounted on
/global/.devices/node@<nodeID> before it can successfully participate
as a cluster member. Since the "nodeID" is not assigned until
scinstall is run, scinstall will set this up for you.
You must supply the name of either an already-mounted file system or a
raw disk partition which scinstall can use to create the global
devices file system. This file system or partition should be at least
512 MB in size.
Alternatively, you can use a loopback file (lofi), with a new file
system, and mount it on /global/.devices/node@<nodeID>.
If an already-mounted file system is used, the file system must be
empty. If a raw disk partition is used, a new file system will be
created for you.
If the lofi method is used, scinstall creates a new 100 MB file system
from a lofi device by using the file /.globaldevices. The lofi method
is typically preferred, since it does not require the allocation of a
dedicated disk slice.
The default is to use /globaldevices.
Is it okay to use this default (yes/no) [yes]?
>>> Set Global Fencing <<<
Fencing is a mechanism that a cluster uses to protect data integrity
when the cluster interconnect between nodes is lost. By default,
fencing is turned on for global fencing, and each disk uses the global
fencing setting. This screen allows you to turn off the global
fencing.
Most of the time, leave fencing turned on. However, turn off fencing
when at least one of the following conditions is true: 1) Your shared
storage devices, such as Serial Advanced Technology Attachment (SATA)
disks, do not support SCSI; 2) You want to allow systems outside your
cluster to access storage devices attached to your cluster; 3) Sun
Microsystems has not qualified the SCSI persistent group reservation
(PGR) support for your shared storage devices.
If you choose to turn off global fencing now, after your cluster
starts you can still use the cluster(1CL) command to turn on global
fencing.
Do you want to turn off global fencing (yes/no) [no]?
>>> Quorum Configuration <<<
Every two-node cluster requires at least one quorum device. By
default, scinstall selects and configures a shared disk quorum device
for you.
This screen allows you to disable the automatic selection and
configuration of a quorum device.
You have chosen to turn on the global fencing. If your shared storage
devices do not support SCSI, such as Serial Advanced Technology
Attachment (SATA) disks, or if your shared disks do not support
SCSI-2, you must disable this feature.
If you disable automatic quorum device selection now, or if you intend
to use a quorum device that is not a shared disk, you must instead use
clsetup(1M) to manually configure quorum once both nodes have joined
the cluster for the first time.
Do you want to disable automatic quorum device selection (yes/no) [no]? yes
>>> Automatic Reboot <<<
Once scinstall has successfully initialized the Oracle Solaris Cluster
software for this machine, the machine must be rebooted. After the
reboot, this machine will be established as the first node in the new
cluster.
Do you want scinstall to reboot for you (yes/no) [yes]?
>>> Confirmation <<<
Your responses indicate the following options to scinstall:
scinstall -i
-C sap-cluster
-F
-T node=Node1,node=Node2,authtype=sys
-w
netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=32,maxprivatenets=10,numvirtualclusters=12,numxipvirtualclusters=3
-A trtype=dlpi,name=net7 -A trtype=dlpi,name=net11
-B type=direct
-P task=security,state=SECURE
Are these the options you want to use (yes/no) [yes]?
Do you want to continue with this configuration step (yes/no) [yes]?
Initializing cluster name to "sap-cluster" ... done
Initializing authentication options ... done
Initializing configuration for adapter "net7" ... done
Initializing configuration for adapter "net11" ... done
Initializing private network address options ... done
Setting the node ID for "node1" ... done (id=1)
Checking for global devices global file system ... done
Updating vfstab ... done
Verifying that NTP is configured ... done
Initializing NTP configuration ... done
Updating ... done
Adding cluster node entries to /etc/inet/hosts ... done
Configuring IP multipathing groups ...done
Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Oracle Solaris Cluster.
Please do not re-enable network routing.
Log file - /var/cluster/logs/install/.1686
Rebooting ...
Node 2 (node2):
root@Node2 # scinstall
*** Main Menu ***
Please select from one of the following (*) options:
* 1) Create a new cluster or add a cluster node
2) Configure a cluster to be JumpStarted from this install server
3) Manage a dual-partition upgrade
4) Upgrade this cluster node
* 5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 1
*** New Cluster and Cluster Node Menu ***
Please select from any one of the following options:
1) Create a new cluster
2) Create just the first node of a new cluster on this machine
3) Add this machine as a node in an existing cluster
?) Help with menu options
q) Return to the Main Menu
Option: 3
*** Add a Node to an Existing Cluster ***
This option is used to add this machine as a node in an already
established cluster. If this is a new cluster, there may only be a
single node which has established itself in the new cluster.
Before you select this option, the Oracle Solaris Cluster framework
software must already be installed. Use the Oracle Solaris Cluster
installation media or the IPS packaging system to install Oracle
Solaris Cluster software.
Press Control-d at any time to return to the Main Menu.
Do you want to continue (yes/no) [yes]?
>>> Typical or Custom Mode <<<
This tool supports two modes of operation, Typical mode and Custom.
For most clusters, you can use Typical mode. However, you might need
to select the Custom mode option if not all of the Typical defaults
can be applied to your cluster.
For more information about the differences between Typical and Custom
modes, select the Help option from the menu.
Please select from one of the following options:
1) Typical
2) Custom
?) Help
q) Return to the Main Menu
Option [1]: 2
>>> Sponsoring Node <<<
For any machine to join a cluster, it must identify a node in that
cluster willing to "sponsor" its membership in the cluster. When
configuring a new cluster, this "sponsor" node is typically the first
node used to build the new cluster. However, if the cluster is already
established, the "sponsoring" node can be any node in that cluster.
Already established clusters can keep a list of hosts which are able
to configure themselves as new cluster members. This machine should be
in the join list of any cluster which it tries to join. If the list
does not include this machine, you may need to add it by using
claccess(1CL) or other tools.
And, if the target cluster uses DES to authenticate new machines
attempting to configure themselves as new cluster members, the
necessary encryption keys must be configured before any attempt to
join.
What is the name of the sponsoring node? Node1
>>> Cluster Name <<<
Each cluster has a name assigned to it. When adding a node to the
cluster, you must identify the name of the cluster you are attempting
to join. A sanity check is performed to verify that the "sponsoring"
node is a member of that cluster.
What is the name of the cluster you want to join? crmjkdb_cluster
Attempting to contact "Node1" ... done
Cluster name "crmjkdb_cluster" is correct.
Press Enter to continue:
>>> Check <<<
This step allows you to run cluster check to verify that certain basic
hardware and software pre-configuration requirements have been met. If
cluster check detects potential problems with configuring this machine
as a cluster node, a report of violated checks is prepared and
available for display on the screen.
Do you want to run cluster check (yes/no) [no]?
>>> Autodiscovery of Cluster Transport <<<
If you are using Ethernet or Infiniband adapters as the cluster
transport adapters, autodiscovery is the best method for configuring
the cluster transport.
Do you want to use autodiscovery (yes/no) [yes]?
Probing ......................
The following connection was discovered:
Node2:bge1 - Node1:bge1
Probes were sent out from all transport adapters configured for
cluster node "Node1". But, they were only received on less than 2
of the network adapters on this machine ("Node2"). This may be due
to any number of reasons, including improper cabling, an improper
configuration for "Node1", or a switch which was confused by the
probes.
You can either attempt to correct the problem and try the probes again
or manually configure the transport. To correct the problem might
involve re-cabling, changing the configuration for "Node1", or
fixing hardware. You must configure the transport manually to
configure tagged VLAN adapters and non tagged VLAN adapters on the
same private interconnect VLAN.
Do you want to try again (yes/no) [yes]?
Probing .........
The following connections were discovered:
Node2:net3 - Node1:net3
Node2:net7 - Node1:net7
Is it okay to configure these connections (yes/no) [yes]?
>>> Global Devices File System <<<
Each node in the cluster must have a local file system mounted on
/global/.devices/node@<nodeID> before it can successfully participate
as a cluster member. Since the "nodeID" is not assigned until
scinstall is run, scinstall will set this up for you.
You must supply the name of either an already-mounted file system or a
raw disk partition which scinstall can use to create the global
devices file system. This file system or partition should be at least
512 MB in size.
Alternatively, you can use a loopback file (lofi), with a new file
system, and mount it on /global/.devices/node@<nodeID>.
If an already-mounted file system is used, the file system must be
empty. If a raw disk partition is used, a new file system will be
created for you.
If the lofi method is used, scinstall creates a new 100 MB file system
from a lofi device by using the file /.globaldevices. The lofi method
is typically preferred, since it does not require the allocation of a
dedicated disk slice.
The default is to use /globaldevices.
Is it okay to use this default (yes/no) [yes]?
>>> Automatic Reboot <<<
Once scinstall has successfully initialized the Oracle Solaris Cluster
software for this machine, the machine must be rebooted. The reboot
will cause this machine to join the cluster for the first time.
Do you want scinstall to reboot for you (yes/no) [yes]?
>>> Confirmation <<<
Your responses indicate the following options to scinstall:
scinstall -i
-C crmjkdb_cluster
-N Node1
-A trtype=dlpi,name=bge0 -A trtype=dlpi,name=bge1
-B type=direct
-m endpoint=Node2:bge0,endpoint=Node1:bge0
-m endpoint=Node2:bge1,endpoint=Node1:bge1
Are these the options you want to use (yes/no) [yes]?
Do you want to continue with this configuration step (yes/no) [yes]?
Checking device to use for global devices file system ... done
Adding node "Node2" to the cluster configuration ... done
Adding adapter "bge0" to the cluster configuration ... done
Adding adapter "bge1" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Copying the config from "Node1" ... done
Copying the postconfig file from "Node1" if it exists ... done
No postconfig file found on "Node1", continuing
Setting the node ID for "Node2" ... done (id=2)
Verifying the major number for the "did" driver with "Node1" ... done
Checking for global devices global file system ... done
Updating vfstab ... done
Verifying that NTP is configured ... done
Initializing NTP configuration ... done
Updating ... done
Adding cluster node entries to /etc/inet/hosts ... done
Configuring IP multipathing groups ...done
Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Oracle Solaris Cluster.
Please do not re-enable network routing.
Updating file ("r") on node node1 ... done
Updating file ("hosts") on node Node1 ... done
Log file - /var/cluster/logs/install/.1561
Rebooting ...
2. Register the resource types:
# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -t :6
Note: these resource types only need to be registered on one node.
# scrgadm -pv     (verify that the types are registered)
3. Add the quorum device
# devfsadm -C
# scdidadm -C
# scdidadm -r
# scdidadm -ui
# clquorum add d7
# scconf -c -q reset
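The rescan-and-add sequence of step 3 can be wrapped in a small script with a dry-run guard, which is safer to rehearse than typing the commands by hand. Note that d7 is the DID device seen in this particular walkthrough, not a value you can assume elsewhere; check "cldevice list -v" on your own cluster first.

```shell
#!/bin/sh
# Step 3 as a script. With DRY_RUN=1 (the default here) commands are only
# printed; set DRY_RUN=0 to execute them on a real cluster node.
# "d7" is the DID device from this walkthrough, not a portable value.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run devfsadm -C        # remove stale /dev device links
run scdidadm -C        # clean up unreferenced DID instances
run scdidadm -r        # rediscover devices and assign DID names
run scdidadm -ui       # update and load the DID driver configuration
run clquorum add d7    # register shared disk d7 as the quorum device
run scconf -c -q reset # reset quorum vote counts to the defaults
```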
4. Create the resource groups and IP resources
# clsetup
*** Main Menu ***
Please select from one of the following options:
1) Quorum
2) Resource groups
3) Data Services
4) Cluster interconnect
5) Device groups and volumes
6) Private hostnames
7) New nodes
8) Zone Cluster
9) Other cluster tasks
?) Help with menu options
q) Quit
Option:
Option: 2
*** Resource Group Menu ***
Please select from one of the following options:
1) Create a resource group
2) Add a network resource to a resource group
3) Add a data service resource to a resource group
4) Resource type registration
5) Online/Offline or Switchover a resource group
6) Suspend/Resume recovery for a resource group
7) Enable/Disable a resource
8) Change properties of a resource group
9) Change properties of a resource
10) Remove a resource from a resource group
11) Remove a resource group
12) Clear the stop_failed error flag from a resource
?) Help
s) Show current status
q) Return to the main menu
Option: 1
>>> Create a Resource Group <<<
Use this option to create a new resource group. You can also use this
option to create new resources for the new group.
A resource group is a container into which you can place resources of
various types, such as network and data service resources. The cluster
uses resource groups to manage its resource types. There are two types
of resource groups, failover and scalable.
Only failover resource groups may contain network resources. A network
resource is either a LogicalHostname or SharedAddress resource.
It is important to remember that each scalable resource group depends
upon one or more failover resource groups which contains one or more
SharedAddress network resources.
Is it okay to continue (yes/no) [yes]?
Select the type of resource group you want to add:
1) Failover Group
2) Scalable Group
Option: 1
What is the name of the group you want to add? data-rg
Do you want to add an optional description (yes/no) [yes]? no
Because this cluster has two nodes, the new resource group will be
configured to be hosted by both cluster nodes.
At this time, you may select one node to be the preferred node for
hosting this group. Or, you may allow the system to select a preferred
node on an arbitrary basis.
Do you want to specify a preferred node (yes/no) [yes]?
Select the preferred node or zone for hosting this group:
1) Node1
2) Node2
Option: 1
Some types of resources (for example, HA for NFS) require the use of
an area in a global file system for storing configuration data. If any
of the resources that will be added to this group require such
support, you can specify the full directory path name now.
Do you want to specify such a directory now (yes/no) [no]?
Is it okay to proceed with the update (yes/no) [yes]?
/usr/cluster/bin/clresourcegroup create -n Node1,Node2 data-rg
Command completed successfully.
Press Enter to continue: Jun 10 19:37:30 Node1 cl_runtime: NOTICE:
Received non-interrupt heartbeat on Node1:net7 - Node2:net7.
Do you want to add any network resources now (yes/no) [yes]?
Select the type of network resource you want to add:
1) LogicalHostname
2) SharedAddress
Option: 1
If a failover resource group contains LogicalHostname resources, the
most common configuration is to have one LogicalHostname resource for
each subnet.
How many LogicalHostname resources would you like to create [1]?
Each network resource manages a list of one or more logical hostnames
for a single subnet. This is true whether the resource is a
LogicalHostname or SharedAddress resource type. The most common
configuration is to assign a single logical hostname to each network
resource for each subnet. Therefore, clsetup(1M) only supports this
configuration. If you need to support more than one hostname for a
given subnet, add the additional support using clresourcegroup(1M).
Before clsetup(1M) can create a network resource for any logical hostname, that
hostname must be specified in the hosts(4) file on each node in the cluster. In
addition, the required network adapters must
be actively available on each of the nodes.
What logical hostname do you want to add? CFSAP
Is it okay to proceed with the update (yes/no) [yes]?
/usr/cluster/bin/clreslogicalhostname create -g data-rg -p
R_description="LogicalHostname resource for CFSAP" CFSAP
clreslogicalhostname: Failed to retrieve netmask for the given
hostname(s)/IP(s). Will try again when the resource is brought online.
Command completed successfully.
Press Enter to continue:
Do you want to add any additional network resources (yes/no) [no]?
Do you want to add any data service resources now (yes/no) [yes]? no
Do you want to manage and bring this resource group online now (yes/no) [yes]?
/usr/cluster/bin/clresourcegroup online -M data-rg
Command completed successfully.
Press Enter to continue:
*** Resource Group Menu ***
Please select from one of the following options:
1) Create a resource group
2) Add a network resource to a resource group
3) Add a data service resource to a resource group
4) Resource type registration
5) Online/Offline or Switchover a resource group
6) Suspend/Resume recovery for a resource group
7) Enable/Disable a resource
8) Change properties of a resource group
9) Change properties of a resource
10) Remove a resource from a resource group
11) Remove a resource group
12) Clear the stop_failed error flag from a resource
?) Help
s) Show current status
q) Return to the main menu
Option: 1
>>> Create a Resource Group <<<
Use this option to create a new resource group. You can also use this option to create
new resources for the new group.
A resource group is a container into which you can place resources of
various types, such as network and data service resources. The cluster
uses resource groups to manage its resource types. There are two types
of resource groups, failover and scalable.
Only failover resource groups may contain network resources. A network
resource is either a LogicalHostname or SharedAddress resource.
It is important to remember that each scalable resource group depends upon one or
more failover resource groups which contains one or more SharedAddress network
resources.
Is it okay to continue (yes/no) [yes]?
Select the type of resource group you want to add:
1) Failover Group
2) Scalable Group
Option: 1
What is the name of the group you want to add? oracle-rg
Do you want to add an optional description (yes/no) [yes]? no
Because this cluster has two nodes, the new resource group will be
configured to be hosted by both cluster nodes.
At this time, you may select one node to be the preferred node for hosting this group.
Or, you may allow the system to select a preferred node on an arbitrary basis.
Do you want to specify a preferred node (yes/no) [yes]?
Select the preferred node or zone for hosting this group:
1) Node1
2) Node2
Option: 2
Some types of resources (for example, HA for NFS) require the use of
an area in a global file system for storing configuration data. If any
of the resources that will be added to this group require such support, you can
specify the full directory path name now.
Do you want to specify such a directory now (yes/no) [no]? no
Is it okay to proceed with the update (yes/no) [yes]?
/usr/cluster/bin/clresourcegroup create -n Node2,Node1 oracle-rg
Command completed successfully.
Press Enter to continue:
Do you want to add any network resources now (yes/no) [yes]?
Select the type of network resource you want to add:
1) LogicalHostname
2) SharedAddress
Option: 1
If a failover resource group contains LogicalHostname resources, the
most common configuration is to have one LogicalHostname resource for
each subnet.
How many LogicalHostname resources would you like to create [1]?
Each network resource manages a list of one or more logical hostnames for a single
subnet. This is true whether the resource is a LogicalHostname or SharedAddress
resource type. The most common configuration is to assign a single logical hostname
to each network resource for each subnet. Therefore, clsetup(1M) only supports this
configuration. If you need to support more than one hostname for a given subnet,
add the additional support using clresourcegroup(1M).
Before clsetup(1M) can create a network resource for any logical hostname, that
hostname must be specified in the hosts(4) file on each node in the cluster. In
addition, the required network adapters must be actively available on each of the
nodes.
What logical hostname do you want to add? CFDB
Is it okay to proceed with the update (yes/no) [yes]?
/usr/cluster/bin/clreslogicalhostname create -g oracle-rg -p
R_description="LogicalHostname resource for CFDB" CFDB
clreslogicalhostname: Failed to retrieve netmask for the given
hostname(s)/IP(s). Will try again when the resource is brought online.
Command completed successfully.
Press Enter to continue:
Do you want to add any additional network resources (yes/no) [no]?
Do you want to add any data service resources now (yes/no) [yes]? no
Do you want to manage and bring this resource group online now (yes/no) [yes]?
/usr/cluster/bin/clresourcegroup online -M oracle-rg
Command completed successfully.
Press Enter to continue:
5. Disk group resource (data-rs)
Add the array volume to Sun Cluster and register the disk resource:
# clrs create -g data-rg -t SUNW.HAStoragePlus -p Zpools=datapool -p AffinityOn=True data-rs
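For reference, the whole clsetup dialogue of steps 4 and 5 collapses into a handful of commands. This dry-run recap prints the equivalent CLI calls (the group, hostname, and pool names are the ones used in this walkthrough); on a real cluster you would run each printed command directly.

```shell
#!/bin/sh
# Dry-run recap of steps 4-5: the CLI calls behind the clsetup dialogue.
# Names (data-rg, oracle-rg, CFSAP, CFDB, datapool) come from this guide.
for cmd in \
    "clresourcegroup create -n Node1,Node2 data-rg" \
    "clreslogicalhostname create -g data-rg CFSAP" \
    "clresourcegroup create -n Node2,Node1 oracle-rg" \
    "clreslogicalhostname create -g oracle-rg CFDB" \
    "clresource create -g data-rg -t SUNW.HAStoragePlus -p Zpools=datapool -p AffinityOn=True data-rs" \
    "clresourcegroup online -M data-rg oracle-rg"
do
    echo "$cmd"
done
```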