In a Linux environment, it is common practice to upgrade the kernel for security patches, vulnerability fixes, and so on. If you are running a DB2 pureScale environment, upgrading the kernel will almost certainly break the GPFS GPL (the GPFS portability layer), leaving GPFS unable to start. If this happens in a production environment, you have invited big trouble for yourself.
The reason: when you create a DB2 pureScale instance, it compiles the GPL and stores the resulting kernel modules in /lib/modules/`uname -r`/extra. When the kernel is upgraded, the value of uname -r changes, and GPFS can no longer find its modules because the new /lib/modules/`uname -r`/extra does not contain them. So what procedure should you follow to safely upgrade the Linux kernel and build the appropriate GPFS GPL for that kernel?
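The symptom can be spotted quickly by checking for the portability-layer modules under the running kernel's tree. A minimal sketch, assuming mmfslinux.ko is one of the files the GPL build produces (verify the names against your own build):

```shell
#!/bin/sh
# Check whether the GPFS portability layer exists for a given kernel tree.
# Takes the modules directory as an argument so any path can be checked;
# in real use pass /lib/modules/$(uname -r)/extra.
check_gpl() {
  kdir="$1"
  if [ -e "$kdir/mmfslinux.ko" ]; then
    echo "GPL present"
  else
    echo "GPL missing - run mmbuildgpl"
  fi
}

check_gpl "/lib/modules/$(uname -r)/extra"
```

After a kernel upgrade this prints the "missing" message, because the new `uname -r` directory has never had the GPL built into it.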
You need to ask your SA one question before they build a new kernel for whatever reason: is there a compatible GPFS for the Linux kernel you are upgrading to? Who can answer this? Go to the IBM GPFS FAQ at https://www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html and search for the latest supported Linux distributions; it lists the supported kernel versions. If you are on the latest and greatest kernel, it may happen that you do not see the kernel version you are looking for. How do you go forward? Here are the steps.
Get a test environment with the same kernel version as your pureScale environment; it does not need to have pureScale installed. Install the GPFS RPMs from your DB2 software directory. Sample commands:
# cd /root/download
# tar xvfz gpfs-4.1.tar.gz
# tar xvfz gpfs-4.1.1.11.tar.gz
# cd 4.1
# rpm -ivh gpfs.base-4.1.0-0.x86_64.rpm
# rpm -ivh gpfs.gskit-8.0.50-16.x86_64.rpm
# rpm -ivh gpfs.msg.en_US-4.1.0-0.noarch.rpm
# rpm -ivh gpfs.gpl-4.1.0-0.noarch.rpm
# cd ../4.1.1.11
# rpm -Uvh gpfs.base-4.1.1-11.x86_64.update.rpm
# rpm -Uvh gpfs.gpl-4.1.1-11.noarch.rpm
# rpm -Uvh gpfs.gskit-8.0.50-47.x86_64.rpm
# rpm -Uvh gpfs.msg.en_US-4.1.1-11.noarch.rpm
# rpm -qa | grep -i gpfs
gpfs.msg.en_US-4.1.1-11.noarch
gpfs.gpl-4.1.1-11.noarch
gpfs.base-4.1.1-11.x86_64
gpfs.gskit-8.0.50-47.x86_64

OR

# rpm -ivh 4.1/*.rpm
# rpm -Uvh 4.1.1.11/*.rpm
After installation, run /usr/lpp/mmfs/bin/mmbuildgpl to build the GPFS portability layer. The compiled GPFS kernel modules will be under /lib/modules/`uname -r`/extra.
Now do the kernel upgrade, reboot the machine, and run /usr/lpp/mmfs/bin/mmbuildgpl again. If it succeeds, you are good to go, as the new GPFS modules are now in the new /lib/modules/`uname -r`/extra.
But mmbuildgpl may fail for some reason, and it will if the kernel you upgraded to has not been validated for GPFS. What are the choices?
Check against the IBM link above whether your new kernel is supported. If it is not, open a PMR and request a special build of GPFS for the kernel you have to upgrade to, install or upgrade the new RPMs, and then rerun /usr/lpp/mmfs/bin/mmbuildgpl. If it succeeds, you are good to go.
If the kernel is supported but your DB2 software does not carry the newer GPFS version, look for the DB2 fix pack that ships the GPFS version supporting the new kernel. A web search for "DB2 Software Compatibility Matrix Report" will take you to an IBM page where you can find the DB2 fix pack with the desired GPFS version. Get that fix pack, go to its server/db2/linuxamd64/gpfs folder, and run:
# ./db2ckgpfs -v media
# ./db2ckgpfs -v install
These show the GPFS version on the media and the installed version. The base and fp directories contain the GPFS base and fix pack RPMs; run the appropriate rpm commands to install or upgrade them.
Your other option is to search IBM Fix Central for the desired GPFS fix packs and install or upgrade them. But this is not my preferred path; I would always rather get the GPFS software from the DB2 software directory.
How to patch a machine?
Here are the instructions that I sent to one of my customers.
Stop db2 on the machine being patched.
$ db2stop <member|CF number> quiesce 5   --> quiesce in 5 minutes
$ db2stop instance on <hostname>
1. First find out if the patch includes a kernel upgrade. If yes, find out whether it changes the minor release version: for example, whether RHEL 7.2 needs to upgrade to RHEL 7.3. An upgrade from RHEL 6.7 to 6.8 is OK, but not from RHEL 7.2 to RHEL 7.3, as that might break GPFS (as of this writing on 5/15/2017).
2. If the kernel needs to be upgraded, and it almost always does, test this out in a test cluster before you propagate the change to your production cluster.
3. Change /etc/yum.conf and comment out the exclude line to allow the kernel upgrade.
4. Log in as root.
5. Note the kernel version # uname -r
6. Note down the contents of the GPFS kernel modules directory:
# ls -l /lib/modules/`uname -r`/extra
7. Make sure you back up the following on the machine before patching, in case any directory contents are affected:
GPFS config backup: /var/mmfs/cfg
db2 config: /var/
Copy ~/sqllib/db2nodes.cfg
db2greg -dump > ~/db2greg.dump
db2hareg -dump > ~/db2hareg.dump
uname -r > /root/uname.log
8. Put the machine into maintenance:
# ./db2cluster -cm -enter -maintenance
# ./db2cluster -cfs -enter -maintenance
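The backup in step 7 can be sketched as a small script. Paths are parameterized so it can be pointed at the real locations (/var/mmfs/cfg, ~/sqllib/db2nodes.cfg); the destination directory name is illustrative:

```shell
#!/bin/sh
# Pre-patch backup sketch: copy the GPFS config, db2nodes.cfg, and
# record the current kernel version for comparison after the reboot.
backup_prepatch() {
  dest="$1"; cfgdir="$2"; nodesfile="$3"
  mkdir -p "$dest"
  cp -a "$cfgdir" "$dest/mmfs-cfg"       # GPFS config backup
  cp "$nodesfile" "$dest/db2nodes.cfg"   # member/CF layout
  uname -r > "$dest/uname.log"           # kernel version before patching
}

# Real use (as root):
# backup_prepatch /root/prepatch-backup /var/mmfs/cfg ~/sqllib/db2nodes.cfg
```

Keeping the saved uname.log next to the config copies makes the post-reboot comparison in the next section a one-liner.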
Now let the SA folks patch the machine and reboot it.
Once you get the machine back, run the following commands to make sure:
1. Did the kernel change? Compare the current uname -r with the one saved before patching.
2. If yes, compare the contents of ls -l /lib/modules/`uname -r`/extra; if the folder contents are the same as before, you are good.
3. If not, you need to recompile GPFS GPL.
4. To recompile the GPFS GPL, run:
/usr/lpp/mmfs/bin/mmbuildgpl
5. After a successful build, check the contents of /lib/modules/`uname -r`/extra; you should see the GPL modules built by the above step.
6. Bring the machine out of maintenance:
# ./db2cluster -cm -exit -maintenance
# ./db2cluster -cfs -exit -maintenance
(This might not be necessary, but run it anyway.)
7. Log in as the db2 instance owner and check the status of the cluster:
$ db2instance -list
$ db2start instance on <host>
$ db2start <member|CF name>
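The decision in steps 1-3 above can be sketched as a helper that compares the saved kernel version with the running one and checks that a portability-layer module still exists (the file name mmfslinux.ko is an assumption; check your own /lib/modules tree for the actual names):

```shell
#!/bin/sh
# Decide whether mmbuildgpl is needed after the reboot.
#   saved   - file holding the pre-patch `uname -r` (e.g. /root/uname.log)
#   modfile - a GPL module expected under the new kernel tree,
#             e.g. /lib/modules/$(uname -r)/extra/mmfslinux.ko
need_rebuild() {
  saved="$1"; modfile="$2"
  if [ "$(cat "$saved")" = "$(uname -r)" ] && [ -e "$modfile" ]; then
    echo no    # same kernel and modules still present
  else
    echo yes   # kernel changed or modules missing: run mmbuildgpl
  fi
}

# Real use:
# need_rebuild /root/uname.log "/lib/modules/$(uname -r)/extra/mmfslinux.ko"
```

If it answers yes, run /usr/lpp/mmfs/bin/mmbuildgpl before exiting maintenance mode.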