These instructions describe how to change an IP address in a pureScale cluster. The IP address change itself is straightforward for DB2; it is RSCT and GPFS that require special handling.
You change the IP addresses one host at a time in a pureScale cluster. DB2 must be stopped on all members and CFs before you begin.
- Stop DB2 on a host
$ db2stop <CF or Member number> force
- Stop db2 instance
$ db2stop instance on <hostname>
- As root, stop the RSCT node and remove it from the cluster.
# stoprpnode -f <hostname>
# rmrpnode <hostname>
- As root, stop GPFS and remove the node from the GPFS cluster.
# mmshutdown
If the host names or IP addresses of the primary or secondary GPFS cluster configuration server nodes must change, use the mmchcluster command to specify another node to serve as the primary or secondary GPFS cluster configuration server.
Example: Move the primary configuration server to node03
First, list which hosts are acting as the primary and secondary configuration servers.
/usr/lpp/mmfs/bin # mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         db2gpfscluster.ibm.local
  GPFS cluster id:           654993895500980647
  GPFS UID domain:           db2gpfscluster.ibm.local
  Remote shell command:      /usr/bin/rsh
  Remote file copy command:  /usr/bin/rcp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    node01.ibm.local
  Secondary server:  node02.ibm.local

 Node  Daemon node name    IP address   Admin node name     Designation
-----------------------------------------------------------------------
   1   node01.ibm.local    10.1.10.11   node01.ibm.local    quorum-manager
   2   node02.ibm.local    10.1.10.12   node02.ibm.local    quorum-manager
   3   node03.ibm.local    10.1.10.13   node03.ibm.local
   4   node04.ibm.local    10.1.10.14   node04.ibm.local
Move the primary configuration server from the current host to another host in the GPFS cluster
/usr/lpp/mmfs/bin # mmchcluster -p node03
mmchcluster: GPFS cluster configuration servers:
mmchcluster:   Primary server:    node03.ibm.local
mmchcluster:   Secondary server:  node02.ibm.local
mmchcluster: Propagating the new server information to the rest of the nodes.
mmchcluster: Command successfully completed
Ensure that the configuration server location is updated on all other hosts
/usr/lpp/mmfs/bin # mmchcluster -p LATEST
mmchcluster: GPFS cluster configuration servers:
mmchcluster:   Primary server:    node03.ibm.local
mmchcluster:   Secondary server:  node02.ibm.local
mmchcluster: Propagating the new server information to the rest of the nodes.
mmchcluster: Command successfully completed
Verify that the new configuration server is active
/usr/lpp/mmfs/bin # mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         db2gpfscluster.ibm.local
  GPFS cluster id:           654993895500980647
  GPFS UID domain:           db2gpfscluster.ibm.local
  Remote shell command:      /usr/bin/rsh
  Remote file copy command:  /usr/bin/rcp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    node03.ibm.local
  Secondary server:  node02.ibm.local

 Node  Daemon node name    IP address   Admin node name     Designation
-----------------------------------------------------------------------
   1   node01.ibm.local    10.1.10.11   node01.ibm.local    quorum-manager
   2   node02.ibm.local    10.1.10.12   node02.ibm.local    quorum-manager
   3   node03.ibm.local    10.1.10.13   node03.ibm.local
   4   node04.ibm.local    10.1.10.14   node04.ibm.local
If the host name or IP address of an NSD server node must change, temporarily remove the node's server role with the mmchnsd command. Then, after the node has been added back to the cluster, use mmchnsd again to restore the NSDs to their original configuration. Use the mmlsnsd command to obtain the NSD server node names. This step is not necessary in pureScale clusters, where the disks are directly attached; in a user-managed GPFS cluster, however, it may be necessary to move the NSD server roles.
Remove the node from the GPFS cluster
# mmdelnode <host>
On the affected host, remove any files under /var/db2 that match these patterns:
/var/db2/*gpfs_forced_offline*
/var/db2/*gpfs_failed*
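The cleanup of these marker files can be done in one command. A minimal sketch; the STATE_DIR variable is an assumption added here so the snippet can be exercised safely against a test directory, and in practice it is simply /var/db2:

```shell
# Remove stale GPFS state markers left under the DB2 state directory.
# STATE_DIR defaults to /var/db2; -f keeps rm quiet when nothing matches.
STATE_DIR=${STATE_DIR:-/var/db2}
rm -f "$STATE_DIR"/*gpfs_forced_offline* "$STATE_DIR"/*gpfs_failed*
```

Run this as root on the affected host only; the other hosts keep their state files.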
- Update the IP address on the host using your operating system's standard method.
- Update the /etc/hosts file on all hosts in the domain to reflect the new IP address.
- Update the DNS server so the host name resolves to the new IP address.
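With many hosts, the /etc/hosts edit is easy to script. A minimal sketch of the substitution, shown against a temporary sample file so it is safe to run as-is; the addresses and host name are illustrative, and in practice you would point it at /etc/hosts on each host (for example via ssh in a loop):

```shell
# Rewrite the entry for the changed host: old address -> new address.
# 10.1.10.13 / 10.1.20.13 / node03 are illustrative values.
OLD_IP=10.1.10.13
NEW_IP=10.1.20.13
HOSTS_FILE=$(mktemp)                     # stand-in for /etc/hosts
printf '%s node03.ibm.local node03\n' "$OLD_IP" > "$HOSTS_FILE"
# Anchor at line start so only the address field is replaced.
sed -i "s/^${OLD_IP}\([[:space:]]\)/${NEW_IP}\1/" "$HOSTS_FILE"
cat "$HOSTS_FILE"
```

Remember that every host in the domain needs the same change, not just the host whose address moved.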
- As root, run preprpnode on each host.
# preprpnode node01 node02 node03 node04
- Add the node back to the RSCT peer domain
# addrpnode <hostname>
- From another host in the cluster, start the node
# startrpnode <hostname>
- As root, add the node back to the GPFS cluster
# mmaddnode <host>:quorum-manager:
- Start the GPFS node
# mmstartup
- Ensure that the latest configuration is updated on all nodes
# mmchcluster -p LATEST
- As root on the host that was updated, run
# db2cluster -cfs -network_resiliency -repair
- Check that it was updated using the command
# db2cluster -cfs -list -network_resiliency
- If the IP address of the gateway also changed, you need to update /var/ct/cfg/netmon.cf
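RSCT's network monitor pings the targets listed in netmon.cf to decide whether an adapter is up, so a stale gateway entry can cause false network failures. A hedged sketch of what an updated entry might look like; the interface name en0 and the gateway address below are assumptions, not values from this document:

```
# One !REQD line per adapter that must be able to reach its target:
# !REQD <adapter> <target-ip>
!REQD en0 10.1.20.1
```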
- Restart the instance
$ db2start instance on <hostname>
- Stop the whole cluster
$ db2stop force
- Do not worry if this returns an error; the resource model still needs to be repaired
# db2cluster -repair -resources
# db2cluster -verify -resources
- Restart the instance on the host if it was stopped
$ db2start instance on <hostname>
- Start db2
$ db2start