When we install a DB2 instance as pureScale, we can specify the name of the shared file system mount point by using the option instance_shared_mount.

If we do not use this option, the db2icrt command generates a mount point name of the form /db2sd_<timestamp>. This may not be a good name if you are setting up HADR between two pureScale clusters, since both clusters should use the same mount point name.
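For illustration, the generated name embeds a yyyymmddHHMMSS timestamp. Here is a sketch of how such a default name is formed (the exact derivation inside db2icrt is an assumption):

```shell
# Sketch: build a db2icrt-style default mount point name from the
# current timestamp (the exact format used internally is an assumption).
ts=$(date +%Y%m%d%H%M%S)
default_mount="/db2sd_${ts}"
echo "$default_mount"    # e.g. /db2sd_20150226094553
```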

Here is what I did to get the same mount point name on both clusters.

Steps:

1. Stop DB2 on the cluster where you want to change the mount point name.

$ db2stop force

2. Stop GPFS as root

# /opt/IBM/db2/V10.5.5/bin/db2cluster -cfs -stop -all

3. Put the pureScale instance in maintenance mode as root

# /opt/IBM/db2/V10.5.5/bin/db2cluster -cm -enter -maintenance -all

4. The output of the lsrpdomain command should now show the RSCT peer domain as Offline.
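The check above can be sketched as follows; the output line is mocked to mimic lsrpdomain's format, since the real command must be run as root on the cluster:

```shell
# Mocked lsrpdomain output (format per RSCT; values are illustrative).
lsrpdomain_sample='Name       OpState RSCTActiveVersion MixedVersions TSPort GSPort
db2domain  Offline 3.1.5.4           No            12347  12348'

# Print the domain name and its state; OpState should read Offline.
echo "$lsrpdomain_sample" | awk 'NR > 1 { print $1, $2 }'
```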

5. Start GPFS as root using the mmstartup -a command (this does not mount the file system)

# /usr/lpp/mmfs/bin/mmstartup -a

6. Change the mount point name using the mmchfs command as root

# /usr/lpp/mmfs/bin/mmchfs db2fs1 -T /db2sd_20150226094553

-> Use the same mount point name as on the other cluster.

Note: find the GPFS file system name by looking at /etc/fstab. The default name is db2fs1.
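A small sketch of pulling that name out of /etc/fstab; the sample entry below mimics the line db2icrt writes (the field layout is an assumption), and on a real host you would point the awk at /etc/fstab itself:

```shell
# Sample fstab entry standing in for the real /etc/fstab line.
fstab_sample=$(mktemp)
cat > "$fstab_sample" <<'EOF'
/dev/db2fs1  /db2sd_20150226094553  gpfs  rw,mtime,atime,dev=db2fs1  0 0
EOF

# Take the device field of the gpfs entry and strip the /dev/ prefix.
fsname=$(awk '$3 == "gpfs" { sub("^/dev/", "", $1); print $1 }' "$fstab_sample")
echo "$fsname"    # db2fs1
rm -f "$fstab_sample"
```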

7. Check the name of the new mount point.

# /usr/lpp/mmfs/bin/mmlsfs db2fs1 -T

8. Mount all GPFS file systems on all nodes

# /usr/lpp/mmfs/bin/mmmount all -a

9. Run df -h and look at /etc/fstab; you should see the new mount point.

10. Drop the DB2 instance on all nodes.

# export CT_MANAGEMENT_SCOPE=2
# lsrpdomain

-> Note the domain name for the next step

# rmrpdomain -f <domainname>

-> Wait 1-2 minutes for the domain removal to propagate to all nodes.

# db2idrop_local <instance name>

-> Repeat this on all nodes
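Repeating the drop by hand is error prone; a dry-run loop like the one below prints the per-host command (the host names, the instance name db2sdin1, and the db2idrop_local path are placeholders, not values from this setup):

```shell
# Dry run: echo (rather than execute) the per-host drop commands.
# Replace the host list, instance name, and path with your own values.
for h in host1 host2 host3 host4; do
    echo ssh "$h" "/opt/IBM/db2/V10.5.5/instance/db2idrop_local db2sdin1"
done
```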

11. Rename the DB2 global.reg file on all nodes

# mv /var/db2/global.reg /var/db2/global.reg.bak

-> Repeat this on all nodes.
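The rename can be wrapped in a tiny function so the same command is easy to push to every host; the demonstration below runs it against a scratch directory instead of the real /var/db2 (the function name and the scratch path are assumptions):

```shell
# Move a global.reg aside, keeping a .bak copy, as in the step above.
backup_global_reg() {
    reg="$1"                        # on a real host: /var/db2/global.reg
    [ -e "$reg" ] && mv "$reg" "${reg}.bak"
}

# Local demonstration on a scratch copy.
d=$(mktemp -d)
touch "$d/global.reg"
backup_global_reg "$d/global.reg"
ls "$d"                             # global.reg.bak
rm -rf "$d"
```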

12. Create the DB2 pureScale instance using the option -instance_shared_dir

# ./db2icrt -d -s dsf -p 50001 -instance_shared_dir /db2sd_20150226094553 \
-tbdev /dev/sdd -m <hostname> -mnet <networkFQDN> -cf <cfhostname> -cfnet <cfnetworkFQDN> \
-u <fenced user> <instance user>

Note the option -instance_shared_dir /db2sd_20150226094553: we are reusing the existing GPFS mount point, whose name matches the mount point on the first cluster.

13. Create the remaining CFs and members.

# ./db2iupdt -d -add -cf <CFHost> -cfnet <NetworkFQDN> <instance name>
# ./db2iupdt -d -add -m <MemberName> -mnet <Member FQDN> <instance name>

In a nutshell, we used GPFS commands to change the mount point. However, since the mount point name is used in many symbolic links on each host, fixing those links and the RSCT resources can be very time consuming, so dropping and re-creating the instance is the easiest approach. You do not lose anything as far as the databases are concerned, since they live on the GPFS file system.