How to set up iSCSI on Linux (Red Hat)

Definitions:

iSCSI initiator: the endpoint that initiates an iSCSI session by sending SCSI commands over an IP network. It’s the client endpoint.

iSCSI target: a storage resource located on an iSCSI server (most of the time a “storage array”). It’s the server endpoint.

LUN (Logical Unit Number): a number used to identify a logical unit, which is a device addressed by the SCSI protocol (thus Fibre Channel or iSCSI). It usually represents a slice of a large RAID disk array.

IQN (iSCSI Qualified Name): the iSCSI name of a target or initiator.
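
For reference, an IQN follows the format iqn.<year-month>.<reversed-domain>:<optional-identifier>. The QNAP target used below, iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b, follows this pattern.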

 

On the Storage Server:

Enable and configure the iSCSI Target on your storage server.

Mine is a QNAP Turbo NAS. I’ve got 1 target with 5 LUNs configured.

iSCSI Portal
 
[X] Enable iSCSI Target Service
 
iSCSI Service Port:           3260
 
mytarget (iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b)     Connected
 
id:0 - lun1 ( 2024.00 GB)               Enabled
 
id:1 - lun2 ( 2024.00 GB)               Enabled
 
id:2 - lun3 ( 2024.00 GB)               Enabled
 
id:3 - lun4 ( 2024.00 GB)               Enabled
 
id:4 - lun5 ( 1804.13 GB)               Enabled

I have two network interfaces:

1- QNAP management, IP: 10.0.0.5

2- iSCSI access, directly connected to the server: 192.168.0.1

 

For more security you can enable “LUN masking”. It restricts access to the iSCSI target to your client’s initiator only (the client initiator name, or IQN, can be found in /etc/iscsi/initiatorname.iscsi).
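
For example, to display your client’s initiator IQN (the value below is just an illustration):

# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:b579d2ca8f5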

 

On the Linux client (see the ** tip below for VMware configuration):

Install “iscsi-initiator-utils” on the server that will connect to the iSCSI volume:

# rpm -Uvh iscsi-initiator-utils-6.2.0.865-6.el5.x86_64.rpm

Set the iSCSI services to start automatically at boot, then start them:

# chkconfig iscsid on
# service iscsid start
# chkconfig iscsi on
# service iscsi start
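
To double-check the runlevel registration (output illustrative):

# chkconfig --list iscsid
iscsid          0:off   1:off   2:on    3:on    4:on    5:on    6:off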

 

Discover your iSCSI targets:

# iscsiadm -m discovery -t st -p 192.168.0.1

In my case it shows the same target reachable through two portals (one for each network connection):

192.168.0.1:3260 iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b
10.0.0.5:3260 iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b

I have 2 routes for the same target.
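
The discovered node records are saved by iscsiadm (under /var/lib/iscsi/ on Red Hat) and can be listed again at any time:

# iscsiadm -m node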

 

Log in to the target through IP 192.168.0.1:

# iscsiadm -m node -T iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b -p 192.168.0.1 -l

Enable automatic login at boot:

# iscsiadm -m node -T iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b -p 192.168.0.1 --op update -n node.startup -v automatic
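
To verify the change, dump the node record; node.startup should now read “automatic”:

# iscsiadm -m node -T iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b -p 192.168.0.1 | grep node.startup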

As I have a second path to the same target, I log out of it and disable automatic login so it does not disturb the configuration above:

# iscsiadm -m node -T iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b -p 10.0.0.5 --logout
# iscsiadm -m node -T iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b -p 10.0.0.5 --op update -n node.startup -v manual
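
You can confirm that only the 192.168.0.1 session remains logged in (output illustrative):

# iscsiadm -m session
tcp: [1] 192.168.0.1:3260,1 iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b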

 

At this point you will see the iSCSI LUNs as block devices on your client.

On my system the five iSCSI block devices are /dev/sdc, sdd, sde, sdf and sdg.
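
To map sessions to device names on your own system, the udev by-path links are handy (device names will differ):

# ls -l /dev/disk/by-path/ | grep iscsi
... ip-192.168.0.1:3260-iscsi-iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b-lun-0 -> ../../sdc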

 

You will have to create partitions and format them as either standard Linux partitions or LVM physical volumes.

I chose LVM because I need large file systems.

You can use parted, or fdisk if the disk is smaller than 2 TB; see article: “How To Make Partitions Larger Than 2To With Parted GPT Support”.
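
If you do need GPT, here is a minimal parted sketch (assuming the first iSCSI disk is /dev/sdc and one full-size LVM partition per disk):

# parted /dev/sdc
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) set 1 lvm on
(parted) quit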

So here is the result:

# fdisk -l
Disk /dev/sdc: 2173.2 GB, 2173253451776 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      264216  2122314988+  8e  Linux LVM
Disk /dev/sdd: 2173.2 GB, 2173253451776 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      264216  2122314988+  8e  Linux LVM
Disk /dev/sde: 2173.2 GB, 2173253451776 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1      264216  2122314988+  8e  Linux LVM
Disk /dev/sdf: 2173.2 GB, 2173253451776 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1      264216  2122314988+  8e  Linux LVM
Disk /dev/sdg: 1937.1 GB, 1937169711104 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1      235514  1891766173+  8e  Linux LVM

 

Then create your LVM physical volumes, volume group, and logical volumes:

# pvcreate /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
# vgcreate -s 256M vol_vg /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
# lvcreate -l 28672 vol_vg -n vol_lv1
# lvcreate -l 10924 vol_vg -n vol_lv2
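
With the 256 MB extent size passed to vgcreate, the math works out to 28672 extents × 256 MB = 7168 GB ≈ 7 TB for vol_lv1, and 10924 extents × 256 MB ≈ 2731 GB ≈ 2.7 TB for vol_lv2, which matches the df output below.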

File system creation:

# mkfs -t ext3 -b 4096 -N 100000 /dev/vol_vg/vol_lv1 -L VOL1
# mkfs -t ext3 -b 4096 -N 100000 /dev/vol_vg/vol_lv2 -L VOL2
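
Here -b 4096 selects a 4 KB block size, -N 100000 caps each file system at roughly 100,000 inodes (enough here since the volumes will hold a small number of very large files, and it reduces metadata overhead), and -L sets the labels reported by blkid below.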

Then mount the file systems:

# mkdir -p /VOL1 /VOL2
# mount -t ext3 /dev/vol_vg/vol_lv1 /VOL1
# mount -t ext3 /dev/vol_vg/vol_lv2 /VOL2
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vol_vg-vol_lv1 7.0T  6.3T  395G  95% /VOL1
/dev/mapper/vol_vg-vol_lv2 2.7T  1.5T  1.2T  57% /VOL2

If you want your iSCSI file systems mounted automatically at startup, use UUIDs (see article: How To Use UUID And Blkid To Manage Devices).

Get the UUID for each file system:

# blkid /dev/vol_vg/vol_lv1
/dev/vol_vg/vol_lv1: LABEL="VOL1" UUID="4a496f92-6840-4736-a0d5-5b9916113835" SEC_TYPE="ext2" TYPE="ext3"
# blkid /dev/vol_vg/vol_lv2
/dev/vol_vg/vol_lv2: LABEL="VOL2" UUID="cab5e3ec-4797-4227-98e8-e9bca3c3f766" SEC_TYPE="ext2" TYPE="ext3"

Then add the UUIDs to /etc/fstab:

UUID=4a496f92-6840-4736-a0d5-5b9916113835       /VOL1   ext3 _netdev    0 0
UUID=cab5e3ec-4797-4227-98e8-e9bca3c3f766       /VOL2   ext3 _netdev    0 0
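
The _netdev option makes the system wait for the network (and thus the iSCSI sessions) before attempting these mounts, instead of mounting them with the local file systems.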

 

** Tip:

If your Linux is a VM on ESXi:

– Dedicate a network adapter to connect the storage array directly to the VMware server, using a CAT 5e/6 RJ45 cable (through a dedicated hardware switch, if needed).

– Create a “vSwitch” using the dedicated network adapter with the vSphere Client.

[Screenshot: “iscsi” vSwitch configuration]

– Add a network adapter using the new vSwitch in your virtual machine configuration.

Now you have a direct iSCSI connection to your storage array. You can start the configuration.

AIX (5.3) NFS tuning

For an AIX migration I needed to maximize the transfer rate to an NFS share (the AIX iSCSI initiator does not work with the QNAP Turbo NAS).

An AIX expert (one of the names you can read on IBM Redbooks covers) gave me a tuning recommendation that is quite simple to put in place:


To display the current network and NFS settings:


nfso -a
no -a

Set the new parameters:


nfso -o nfs_use_reserved_ports=1
no -o rfc1323=1
no -o udp_recvspace=262144
nfso -o nfs_v3_pdts=8
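
Note that these settings are not persistent by default; both no and nfso accept -p to apply a value immediately and record it for the next boot, e.g.:

no -p -o rfc1323=1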

(this tuning only takes effect on the next NFS mount)


Then mount your share:


mount -o vers=3 qnap1:/Public /mnt
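
You can then check the options actually used by the mount with:

nfsstat -m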

Be sure to enable rfc1323 before increasing the TCP and UDP receive buffers (tcp_recvspace and udp_recvspace).
