How to use iperf for network speed testing

While solving lots of backup issues in my job, I noticed that a large number of them were directly related to network problems. Consequently, it became a habit to test the network performance of the servers I manage.

Why run network performance tests on your servers:

- Detect network issues on a specific server (wrong Ethernet port configuration, wrong switch configuration ...).

- Detect a global LAN issue (congestion, latency ...).

Why use iperf:

- Very quick to use: two commands and the test is done.

- Flexible: many options (see below).

- iperf exists on both Unix/Linux and Windows platforms.

- Accurate: iperf doesn't generate I/O or high CPU load that would distort your test results.

Handy options:

-p <port_number>: change the default port (5001), useful if you are behind a firewall.

-t <seconds>: duration of the test (default is 10 s).

-r: run a bidirectional test (upload, then download).

-d: run a bidirectional test simultaneously (upload and download at the same time).

You can even launch iperf as a daemon (-D option) on a server, so the iperf server is ready whenever you want to test a client.
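As a quick sketch combining these options (the port 5999 and the 30-second duration are arbitrary values chosen for illustration, and myserver1 is the same placeholder hostname used below):

On the server:

# iperf -s -D -p 5999

On the client:

# iperf -c myserver1 -p 5999 -t 30 -r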

 

Using iperf:

On one of the servers, launch iperf in server mode:

[root@myserver1 root]# iperf -s

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

 

Then run the test from the other server (in client mode, where myserver1 is the name of the server running iperf -s):

D:\>iperf -c myserver1

------------------------------------------------------------
Client connecting to myserver1, TCP port 5001
TCP window size: 63.0 KByte (default)
------------------------------------------------------------
[1852] local 192.168.1.101 port 36794 connected with 192.168.1.120 port 5001
[ ID] Interval       Transfer     Bandwidth
[1852]  0.0-10.0 sec  1.07 GBytes    921 Mbits/sec

 

Most of the time you will want to run the test with the -r option to detect any asymmetric rate issue:

[root@myserver3 root]# iperf -c myserver1 -r

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to myserver1, TCP port 5001
TCP window size:   128 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.102 port 34743 connected with 192.168.1.120 port 5001
[  4]  0.0-10.0 sec    112 MBytes  94.3 Mbits/sec
[  4] local 192.168.1.102 port 5001 connected with 192.168.1.120 port 53073
[  4]  0.0-10.0 sec    112 MBytes  94.1 Mbits/sec

 

You can easily find an iperf RPM for your distribution; for Windows and other operating systems, here is a download link:

iperf download

 

 

Glances: installing and tuning default thresholds on RedHat ES6

Glances is a free tool (under the LGPL license), written in Python by Nicolas HENNION (aka Nicolargo), that uses libstatgrab to monitor your system.

I like the concept, and it is very easy to install and to use.

I used to work a lot with top and sar on Linux, but now, whenever I have the opportunity to install Glances, I do it.

Conceptually, Glances is more like nmon on AIX or glance on HP-UX (almost the same name, what a coincidence?).

Installation:

Download Glances (1.3.7 as of this post):

Compile Glances (you will need gcc):

# tar zxf glances-1.3.7.tar.gz
# cd glances-1.3.7/
# ./configure
# make
# make install

Then install the libstatgrab and pystatgrab libraries.

# rpm -Uvh libstatgrab-0.15-1.el6.rf.x86_64.rpm
# rpm -Uvh pystatgrab-0.5-9.el6.x86_64.rpm

You will not find these two RPMs on the RedHat ES6 CDs, but you can find them on rpm.pbone.net, for instance.
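Once installed, a quick query of the RPM database (just a sanity check, nothing specific to Glances) confirms both packages are present:

# rpm -q libstatgrab pystatgrab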

Don't try to install pystatgrab from the sources: it won't work, as it relies on pkg-config, and pkg-config is not able to locate the libs on RedHat (I don't know why).

Run Glances:

# glances.py

[Screenshot: the Glances main screen]

From the screenshot one can see that the server’s main statistics are available on one screen:

CPU, load, memory, network, disk I/O, disk space, and processes.

For the monitoring part, colors highlight the thresholds that are currently exceeded, and a history of the last 3 to 10 warning and critical alerts is kept.

For more information see the man page or Nicolargo’s web site (in French):

nicolargo Glances doc

If you want to change the default limits, you will have to edit glances.py:

# locate glances.py
/usr/local/bin/glances.py
# vi /usr/local/bin/glances.py

Find the glancesLimits class:

class glancesLimits():
    """
    Manage the limit OK,CAREFUL,WARNING,CRITICAL for each stats
    """
    # The limit list is stored in an hash table:
    #  limits_list[STAT] = [ CAREFUL , WARNING , CRITICAL ]
    # Exemple:
    #  limits_list['STD'] = [ 50, 70 , 90 ]

    __limits_list = {   #           CAREFUL WARNING CRITICAL
        'STD':  [50,    70,     90],
        'LOAD': [0.7,   1.0,    5.0]
    }

And modify the default limits, for instance:

__limits_list = {   #           CAREFUL WARNING CRITICAL
    'STD':  [40,    50,     70],
    'LOAD': [0.2,   0.5,    1.0]
}

[Screenshot: Glances after tuning the default thresholds]

Now you can see that, while the CPU user percentage is still lower than 90, the color is now red, meaning CRITICAL, and the processes that were in the CAREFUL state before (<70) are now in the WARNING state. The same applies to the load.

How to use UUID and blkid to manage devices

Having trouble with device mapping at reboot (iSCSI device mapping can change at every reboot)?

Use UUIDs (Universally Unique Identifiers)!

 

Say you have two iSCSI targets on your Linux box. They show up, for example, as disks /dev/sdc and /dev/sdd the first time you discover them (with iscsiadm):

# sfdisk -s

/dev/sda:  20971520
/dev/sdb: 104857600
/dev/sdc: 2122317824
/dev/sdd: 2122317824
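For reference, the discovery and login steps might look roughly like this (the portal address 192.168.1.50 is only a placeholder, not taken from this setup):

# iscsiadm -m discovery -t sendtargets -p 192.168.1.50
# iscsiadm -m node --login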

Using fdisk or parted, you then create the partitions /dev/sdc1 and /dev/sdd1:

# sfdisk -l /dev/sdc

Disk /dev/sdc: 264216 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdc1          0+ 264215  264216- 2122314988+  83  Linux
/dev/sdc2          0       -       0          0    0  Empty
/dev/sdc3          0       -       0          0    0  Empty
/dev/sdc4          0       -       0          0    0  Empty

Disk /dev/sdd: 264216 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdd1          0+ 264215  264216- 2122314988+  83  Linux
/dev/sdd2          0       -       0          0    0  Empty
/dev/sdd3          0       -       0          0    0  Empty
/dev/sdd4          0       -       0          0    0  Empty

Then you can create the two file systems and mount them:

# mkfs -t ext3 /dev/sdc1

# mount -t ext3 /dev/sdc1 /VLS1

# mkfs -t ext3 /dev/sdd1

# mount -t ext3 /dev/sdd1 /VLS2

You need to add the file systems to /etc/fstab in order to mount them automatically at startup.

However, let's first reboot the system to check that everything is alright.

To your surprise, after the reboot you can't mount /dev/sdc1 and /dev/sdd1.

If you check with sfdisk -l, you can see that /dev/sdc1 and /dev/sdd1 still exist, but /dev/sde1 and /dev/sdf1 have appeared out of nowhere ...

Actually, your two iSCSI disks are now /dev/sde1 and /dev/sdf1 (and they could just as well show up as sdc1 and sdd1 again at the next reboot, and so on ...), thanks to the magic of Linux's dynamic device mapping (udev).

That's where UUID is your best friend!

A UUID (Universally Unique Identifier) uniquely identifies your device. Unlike device files (/dev/sdX, /dev/hdX ...), a UUID does not change at every reboot.

For instance, you may know that MAC addresses are unique identifiers for network cards; version 1 UUIDs are actually generated from the MAC address.

So if you can get the UUIDs of your two file systems, your problem is solved.

On Linux you can get a file system's UUID with the blkid command (the vol_id command can sometimes be used on old Linux versions).

blkid is part of the util-linux package.

# blkid /dev/sdc1

/dev/sdc1: UUID="01066206-c67c-47d1-83a9-d61791fff943" SEC_TYPE="ext2" TYPE="ext3"

# blkid /dev/sdd1

/dev/sdd1: UUID="cea28516-ca98-4ac4-954f-6710b6ac36c7" SEC_TYPE="ext2" TYPE="ext3"
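If you prefer to see every block device at once, running blkid with no argument lists all the devices it knows about, with their UUIDs and file system types:

# blkid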

 

Then add the following lines to /etc/fstab:

UUID=01066206-c67c-47d1-83a9-d61791fff943       /VLS1   ext3 _netdev    0 0
UUID=cea28516-ca98-4ac4-954f-6710b6ac36c7       /VLS2   ext3 _netdev    0 0

"_netdev" replaces the usual "defaults" because the network needs to be up before the iSCSI file systems can be mounted.
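To double-check the new entries without rebooting, you can ask mount to process fstab and then verify the result (a minimal sanity check, assuming the file systems are not already mounted):

# mount -a
# df -h /VLS1 /VLS2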

Now your iSCSI file systems will be mounted automatically after every reboot (well, as long as your iSCSI server is up!).