The open source iscsitarget provides a great way to build a SAN on a laptop using VMware. This is good for testing and learning purposes. We use iscsitarget to export devices so that they can be used as the shared database storage by multiple hosts in a DB2 pureScale environment. There is a lot of documentation on how to set up iscsitarget; this is how I am using it.
These instructions are for SLES 11 SP2; with minor changes they can also be applied to Red Hat Linux.
Install the latest open-iscsi, iscsitarget and iscsitarget-kmp-default packages. If you want to test SCSI-3 PR using the iscsitarget drivers, make sure that your minimum versions are:
iscsitarget-1.4.20-0.14.1.x86_64.rpm
iscsitarget-kmp-default-1.4.20_3.0.13_0.27-0.14.1.x86_64.rpm
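If the packages are available in your configured SLES repositories, they can be installed with zypper (a sketch; the exact versions you get depend on your repositories):
# zypper install open-iscsi iscsitarget iscsitarget-kmp-default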
On your SAN node (designate one VMware machine as the host for the SAN devices), add virtual disks to the VM using VMware Workstation and partition them into as many partitions as you want. For example:
I partitioned a 100GB disk into four partitions: /dev/sda1, /dev/sda2, /dev/sda3 and /dev/sda4. I use /dev/sda1 as swap space and /dev/sda2 for the operating system.
I set aside /dev/sda3 and /dev/sda4 as disks to be exported to the other nodes. I then added one more 100GB disk and created four primary partitions: /dev/sdb1, /dev/sdb2, /dev/sdb3 and /dev/sdb4. I also added a 1GB disk and created a single 1GB partition, /dev/sdc1, to be used as a tie-breaker disk.
We created seven such partitions and left them unformatted and unmounted. These seven partitions will be used as seven disks on the other nodes, where we can use them for our DB2 pureScale instance, logs, database storage and the tie-breaker disk.
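As an illustration, here is how the second 100GB disk and the 1GB tie-breaker disk could be partitioned with parted (a sketch; the device names depend on how VMware presents the disks to your VM):
# parted -s /dev/sdb mklabel msdos
# parted -s /dev/sdb mkpart primary 0% 25%
# parted -s /dev/sdb mkpart primary 25% 50%
# parted -s /dev/sdb mkpart primary 50% 75%
# parted -s /dev/sdb mkpart primary 75% 100%
# parted -s /dev/sdc mklabel msdos
# parted -s /dev/sdc mkpart primary 0% 100%
# fdisk -l /dev/sdb /dev/sdc      (verify the new partitions)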
Create the following entries in the /etc/ietd.conf file. Each Target stanza exports one partition as a LUN; the ScsiId and ScsiSN values just need to be unique for each LUN.
Target iqn.2012-06.com.ibm:pureScaleInstance
    Lun 0 Path=/dev/sda3,Type=fileio,ScsiId=1234567890,ScsiSN=345678901
Target iqn.2012-06.com.ibm:pureScaleDatabase01
    Lun 1 Path=/dev/sda4,Type=fileio,ScsiId=3456789012,ScsiSN=456789012
Target iqn.2012-06.com.ibm:pureScaleLogs
    Lun 2 Path=/dev/sdb1,Type=fileio,ScsiId=4567890123,ScsiSN=567890123
Target iqn.2012-06.com.ibm:pureScaleDatabase02
    Lun 3 Path=/dev/sdb2,Type=fileio,ScsiId=5678901234,ScsiSN=678901234
Target iqn.2012-06.com.ibm:pureScaleDatabase03
    Lun 4 Path=/dev/sdb3,Type=fileio,ScsiId=6789012345,ScsiSN=789012345
Target iqn.2012-06.com.ibm:pureScaleDatabase04
    Lun 5 Path=/dev/sdb4,Type=fileio,ScsiId=7890123456,ScsiSN=890123456
Target iqn.2012-06.com.ibm:tiebreaker
    Lun 6 Path=/dev/sdc1,Type=fileio,ScsiId=8901234567,ScsiSN=901234567
On the SAN node (designated VMware machine), run the following commands:
# chkconfig -a iscsitarget
# service iscsitarget start
Your SAN server is now ready.
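To confirm that the targets are exported, you can look at the proc interface of the iSCSI Enterprise Target (the LUN numbers and paths should match /etc/ietd.conf):
# cat /proc/net/iet/volume      (every exported target and its LUNs)
# cat /proc/net/iet/session     (initiators currently logged in; empty until the clients connect)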
Go to the other VM guests that are running SLES 11 SP2 (Kernel 3.0.31).
Install the same open-iscsi, iscsitarget and iscsitarget-kmp-default packages on all the nodes, but turn off the iscsitarget daemon by using:
# chkconfig -d iscsitarget
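Before installing the client script, you can check that a node can actually reach the targets with a manual discovery (assuming the initiator daemon is running and the SAN node's IP is 192.168.142.101, as used in the script below):
# service open-iscsi start
# iscsiadm -m discovery -t st -p 192.168.142.101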
Copy this file and put it in your /etc/init.d directory so that the nodes can connect to our Poor Man's SAN server and get the devices. The contents of the file are also reproduced below:
#! /bin/sh
### BEGIN INIT INFO
# Provides: iscsiclsetup
# Required-Start: $network $syslog $remote_fs smartd
# Required-Stop:
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Description: ISCSI client setup
### END INIT INFO
IPLIST="192.168.142.101"
# Shell functions sourced from /etc/rc.status:
# rc_check check and set local and overall rc status
# rc_status check and set local and overall rc status
# rc_status -v ditto but be verbose in local rc status
# rc_status -v -r ditto and clear the local rc status
# rc_failed set local and overall rc status to failed
# rc_reset clear local rc status (overall remains)
# rc_exit exit appropriate to overall rc status
. /etc/rc.status
# catch mis-use right here at the start
if [ "$1" != "start" -a "$1" != "stop" -a "$1" != "status" -a "$1" != "restart" -a "$1" != "rescan" -a "$1" != "mountall" ]; then
echo "Usage: $0 {start|stop|status|restart|rescan|mountall}"
exit 1
fi
# First reset status of this service
rc_reset
iscsimount() {
rc_reset
echo -n "Mounting $1: "
/usr/lpp/mmfs/bin/mmmount $1
rc_status -v
return $?
}
iscsiumount() {
rc_reset
echo -n "Umounting $1: "
/usr/lpp/mmfs/bin/mmumount $1
rc_status -v
return $?
}
iscsicheck() {
rc_reset
echo -n "Verify if $1 is mounted: "
mount | grep "on $1\b" > /dev/null
rc_status -v
return $?
}
iscsimountall() {
# Find all fstab lines with gpfs as fstype
overallstatus=0
for mountpoint in `grep "gpfs" /etc/fstab | awk '{print $2}'`
do
# Only try to mount filesystems that are not currently mounted
if ! mount | grep "on $mountpoint\b" > /dev/null
then
iscsimount $mountpoint || overallstatus=$?
fi
done
return $overallstatus
}
iscsiumountall() {
# Find all fstab lines with gpfs as fstype
overallstatus=0
for mountpoint in `grep "gpfs" /etc/fstab | awk '{print $2}'`
do
# Only try to umount filesystems that are currently mounted
if mount | grep "on $mountpoint\b" > /dev/null
then
iscsiumount $mountpoint || overallstatus=$?
fi
done
return $overallstatus
}
iscsicheckall() {
# Find all fstab lines with gpfs as fstype
overallstatus=0
for mountpoint in `grep "gpfs" /etc/fstab | awk '{print $2}'`
do
iscsicheck $mountpoint || overallstatus=$?
done
return $overallstatus
}
case "$1" in
start)
modprobe -q iscsi_tcp
iscsid
for IP in $IPLIST
do
ping -q $IP -c 1 -W 1 > /dev/null
RETURN_ON_PING=$?
if [ ${RETURN_ON_PING} == 0 ]; then
ISCSI_VALUES=`iscsiadm -m discovery -t st -p $IP \
| awk '{print $2}' | uniq`
if [ "${ISCSI_VALUES}" != "" ] ; then
for target in $ISCSI_VALUES
do
echo "Logging into $target on $IP"
iscsiadm --mode node --targetname $target \
--portal $IP:3260 --login
done
else
echo "No iscsitarget were discovered"
fi
else
echo "iscsitarget is not available"
fi
done
if [ ${RETURN_ON_PING} == 0 ]; then
if [ "${ISCSI_VALUES}" != "" ] ; then
/usr/lpp/mmfs/bin/mmstartup -a &> /dev/null
iscsimountall
fi
fi
;;
stop)
for IP in $IPLIST
do
ping -q $IP -c 1 -W 1 > /dev/null
RETURN_ON_PING=$?
if [ ${RETURN_ON_PING} == 0 ]; then
ISCSI_VALUES=`iscsiadm -m discovery -t st --portal $IP \
| awk '{print $2}' | uniq`
if [ "${ISCSI_VALUES}" != "" ] ; then
for target in $ISCSI_VALUES
do
echo "Logging out for $target from $IP"
iscsiadm -m node --targetname $target \
--portal $IP:3260 --logout
done
else
echo "No iscsitarget were discovered"
fi
fi
done
if [ ${RETURN_ON_PING} == 0 ]; then
if [ "${ISCSI_VALUES}" != "" ] ; then
iscsiumountall
fi
fi
;;
status)
echo "Running sessions"
iscsiadm -m session -P 1
iscsicheckall
rc_status -v
;;
rescan)
echo "Perform a SCSI rescan on a session"
iscsiadm -m session -r 1 --rescan
rc_status -v
;;
mountall)
iscsimountall
rc_status -v
;;
restart)
## Stop the service and regardless of whether it was
## running or not, start it again.
$0 stop
$0 start
;;
*)
echo "Usage: $0 {start|stop|status|restart|rescan|mountall}"
exit 1
esac
rc_status -r
rc_exit
A few things to note about the contents of the above script:
- Change IPLIST to the IP address of your iscsitarget server.
- You can use multipath for these devices if you configure multiple NICs between the machines. In that case, specify IPLIST="192.168.142.101 192.168.142.201".
- The script will then log in to the iscsitarget over each IP address automatically.
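Note also that the iscsimountall, iscsiumountall and iscsicheckall functions work off /etc/fstab entries whose filesystem type is gpfs. GPFS adds such an entry when you create a filesystem with mmcrfs; a typical line looks roughly like this (the device name and mount point are illustrative):
/dev/db2fs1    /db2fs1    gpfs    rw,mtime,atime,dev=db2fs1,noauto    0 0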
For multipath, turn on the multipathd service by using:
# chkconfig -a multipathd
# service multipathd start
Turn the iscsiclient service on as shown:
# chmod +x /etc/init.d/iscsiclient
# chkconfig -a iscsiclient
# service iscsiclient start
Repeat the above exercise to create the iscsiclient service on every node that will be using these disks, such as the machines used for the DB2 pureScale database.
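Once the service is running, the other actions of the script can be used at any time; they map directly to the case statement in the script above:
# service iscsiclient status       (show iSCSI sessions and verify GPFS mounts)
# service iscsiclient rescan       (perform a SCSI rescan on session 1)
# service iscsiclient mountall     (mount any GPFS filesystems that are not yet mounted)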
From the other nodes that are going to use our SAN server, run the lsscsi command to see the list of devices now available to this and the other nodes.
node02:/etc/init.d # lsscsi
[0:0:0:0]    disk    VMware,  VMware Virtual S  1.0   /dev/sda
[2:0:0:0]    cd/dvd  NECVMWar VMware IDE CDR10  1.00  /dev/sr0
[10:0:0:6]   disk    IET      VIRTUAL-DISK      0     /dev/sdb
[11:0:0:5]   disk    IET      VIRTUAL-DISK      0     /dev/sdc
[12:0:0:4]   disk    IET      VIRTUAL-DISK      0     /dev/sdd
[13:0:0:3]   disk    IET      VIRTUAL-DISK      0     /dev/sde
[14:0:0:2]   disk    IET      VIRTUAL-DISK      0     /dev/sdf
[15:0:0:1]   disk    IET      VIRTUAL-DISK      0     /dev/sdg
[16:0:0:0]   disk    IET      VIRTUAL-DISK      0     /dev/sdh

node02:/etc/init.d # multipath -ll
1494554000000000033343536373839303132000000000000 dm-5 IET,VIRTUAL-DISK
size=32G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 15:0:0:1 sdg 8:96  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 22:0:0:1 sdn 8:208 active ready running
1494554000000000037383930313233343536000000000000 dm-1 IET,VIRTUAL-DISK
size=5.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 11:0:0:5 sdc 8:32  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 18:0:0:5 sdj 8:144 active ready running
1494554000000000031323334353637383930000000000000 dm-6 IET,VIRTUAL-DISK
size=32G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 16:0:0:0 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 23:0:0:0 sdo 8:224 active ready running
1494554000000000035363738393031323334000000000000 dm-3 IET,VIRTUAL-DISK
size=5.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 13:0:0:3 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 20:0:0:3 sdl 8:176 active ready running
1494554000000000034353637383930313233000000000000 dm-4 IET,VIRTUAL-DISK
size=5.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 14:0:0:2 sdf 8:80  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 21:0:0:2 sdm 8:192 active ready running
1494554000000000036373839303132333435000000000000 dm-2 IET,VIRTUAL-DISK
size=5.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 12:0:0:4 sdd 8:48  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 19:0:0:4 sdk 8:160 active ready running
1494554000000000038393031323334353637000000000000 dm-0 IET,VIRTUAL-DISK
size=1023M features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 10:0:0:6 sdb 8:16  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 17:0:0:6 sdi 8:128 active ready running
All the nodes now see these disks from our SAN node, and we can set up DB2 pureScale using these shared disk devices. By using the latest iscsitarget and kernel 3.0.31, we are also able to use SCSI-3 PR; please see my earlier article for how to test SCSI-3 PR.
If you want to use multipath for high availability, use the /dev/dm-? devices; otherwise, use the /dev/sd? devices.
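To tie a /dev/dm-? name back to a particular LUN, note that the multipath -ll output above prints the dm-? name next to each WWID; the same maps are also visible under /dev/mapper (the exact output depends on your udev rules):
# ls -l /dev/mapper/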