How to mount a shared folder from Buffalo LinkStation NAS on Windows 7/Vista/2008

If you have a Buffalo LinkStation (an LS model in my case) and try to mount an SMB network share from the LinkStation on Windows 2008/7/Vista, you will get the following error message:
Error code 0x80070035


NAS devices from QNAP, for instance, support NTLMv2 authentication, whereas Buffalo’s LinkStation only uses the old and weak LM authentication.
The network authentication level on Windows 2008/7/Vista is set to send NTLMv2 responses only. You will have to lower it to mount the LinkStation’s network share.


On Windows 2008, Windows 7 and Vista:

Go to “Administrative Tools > Local Security Policy“.
Then open “Security Settings > Local Policies > Security Options > Network security: LAN Manager authentication level“ and select either “Send LM & NTLM responses” or “Send LM & NTLM – use NTLMv2 session security if negotiated”, depending on your network type.
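If you prefer to script the change (or your Windows edition lacks the Local Security Policy snap-in), the same setting is stored in the LmCompatibilityLevel registry value (0 = “Send LM & NTLM responses”, 1 = “Send LM & NTLM – use NTLMv2 session security if negotiated”). A sketch to run from an elevated command prompt:

```shell
REM Set the LAN Manager authentication level to 1
REM ("Send LM & NTLM - use NTLMv2 session security if negotiated")
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f
```

A reboot (or at least a new SMB session) may be needed for the change to take effect.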


You will then be able to mount your Windows SMB share.

It might also work for other Buffalo products (I only have a LinkStation so I can’t test it)!

How to set up iSCSI on Linux (RedHat)


iSCSI initiator: the endpoint that initiates an iSCSI session by sending SCSI commands over an IP network. It’s the client endpoint.

iSCSI target: a storage resource located on an iSCSI server (most of the time a “storage array”). It’s the server endpoint.

LUN (Logical Unit Number): a number used to identify a logical unit, which is a device addressed by the SCSI protocol (thus Fibre Channel or iSCSI). It usually represents a slice of a large RAID disk array.

IQN (iSCSI Qualified Name): the iSCSI name of the target or initiator.


On the Storage Server:

Enable and configure the iSCSI Target on your storage server.

Mine is a QNAP Turbo NAS. I’ve got 1 target with 5 LUNs configured.

iSCSI Portal
X Enable iSCSI Target Service
iSCSI Service Port:           3260
mytarget (     Connected
id:0 - lun1 ( 2024.00 GB)               Enabled
id:1 - lun2 ( 2024.00 GB)               Enabled
id:2 - lun3 ( 2024.00 GB)               Enabled
id:3 - lun4 ( 2024.00 GB)               Enabled
id:4 - lun5 ( 1804.13 GB)               Enabled

I have two network interfaces:

1. QNAP management, IP:

2. iSCSI access, directly connected to the server:


For more security you can enable “LUN masking”. It restricts access to the iSCSI target to your client’s initiator only (the client initiator name, or IQN, can be found in /etc/iscsi/initiatorname.iscsi).
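The initiator name lives in /etc/iscsi/initiatorname.iscsi as a single InitiatorName= line. A small sketch (the IQN below is a made-up example) showing how to extract just the IQN for pasting into the NAS LUN-masking form:

```shell
# /etc/iscsi/initiatorname.iscsi contains one line such as (hypothetical IQN):
line='InitiatorName=iqn.1994-05.com.redhat:client01'
# On a real client you would read it with:
#   line=$(grep '^InitiatorName=' /etc/iscsi/initiatorname.iscsi)
# Strip the "InitiatorName=" key to keep only the IQN itself:
iqn="${line#InitiatorName=}"
echo "$iqn"
```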


On the Linux client (see tip ** below for the VMware configuration):

Install “iscsi-initiator-utils” on the server that will connect to the iSCSI volume:

# rpm -Uvh iscsi-initiator-utils-

Set up iscsi automatic start on boot and start iscsi services:

# chkconfig iscsid on
# service iscsid start
# chkconfig iscsi on
# service iscsi start


Discover your iSCSI targets:

# iscsiadm -m discovery -t st -p

In my case it shows two entries for the same target, one for each network connection: mytarget.c5884b mytarget.c5884b

In other words, I have two routes to the same target.


Log in to the target through IP:

# iscsiadm -m node -T -p -l

Enable automatic login at boot:

# iscsiadm -m node -T -p --op update -n node.startup -v automatic

As I have a second path to the same target, I will disable it so it does not interfere with the previous configuration:

# iscsiadm -m node -T -p --logout
# iscsiadm -m node -T -p --op update -n node.startup -v manual


At this point you will see the iSCSI LUNs as block devices on your client.

On my system the five iSCSI block devices are /dev/sdc, sdd, sde, sdf and sdg.
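To check which /dev/sdX device belongs to which session and LUN, open-iscsi can print the attached SCSI disks per session (this sketch requires an active session, so device names will differ on your system):

```shell
# Print session details, including the attached scsi disks (sdc, sdd, ...):
iscsiadm -m session -P 3 | grep -E 'Target|Attached scsi disk'

# The /dev/disk/by-path/ symlinks also encode the portal, target IQN and LUN:
ls -l /dev/disk/by-path/ | grep iscsi
```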


You will have to create partitions and format them as either standard Linux partitions or LVM partitions.

I chose LVM because I need large file systems.

You can use parted or fdisk (if < 2 TB); see article: ”How To Make Partitions Larger Than 2To With Parted GPT Support“.
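Since these LUNs are larger than 2 TB, a GPT label is needed. A minimal parted sketch for one of the disks (the device name /dev/sdc is an assumption; these commands are destructive, so double-check the target disk first):

```shell
# Put a GPT label on the disk, create one partition spanning it,
# and flag that partition for LVM use:
parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc mkpart primary 0% 100%
parted -s /dev/sdc set 1 lvm on

# Repeat for /dev/sdd ... /dev/sdg, then verify the layout:
parted -s /dev/sdc print
```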

So here is the result:

# fdisk -l
Disk /dev/sdc: 2173.2 GB, 2173253451776 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      264216  2122314988+  8e  Linux LVM
Disk /dev/sdd: 2173.2 GB, 2173253451776 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      264216  2122314988+  8e  Linux LVM
Disk /dev/sde: 2173.2 GB, 2173253451776 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1      264216  2122314988+  8e  Linux LVM
Disk /dev/sdf: 2173.2 GB, 2173253451776 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1      264216  2122314988+  8e  Linux LVM
Disk /dev/sdg: 1937.1 GB, 1937169711104 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1      235514  1891766173+  8e  Linux LVM


Then create your LVM volume group and logical volumes:

# pvcreate /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
# vgcreate -s 256M vol_vg /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
# lvcreate -l 28672 vol_vg -n vol_lv1
# lvcreate -l 10924 vol_vg -n vol_lv2
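With the 256 MiB physical-extent size chosen at vgcreate time, the -l extent counts above map directly to the volume sizes; a quick arithmetic check:

```shell
pe_mib=256                                          # extent size set with vgcreate -s 256M
echo "vol_lv1: $(( 28672 * pe_mib / 1024 )) GiB"    # 28672 extents -> 7168 GiB (~7 TiB)
echo "vol_lv2: $(( 10924 * pe_mib / 1024 )) GiB"    # 10924 extents -> 2731 GiB (~2.7 TiB)
```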

File system creation:

# mkfs -t ext3 -b 4096 -N 100000 /dev/vol_vg/vol_lv1 -L VOL1
# mkfs -t ext3 -b 4096 -N 100000 /dev/vol_vg/vol_lv2 -L VOL2

Then mount the file systems:

# mkdir -p /VOL1 /VOL2
# mount -t ext3 /dev/vol_vg/vol_lv1 /VOL1
# mount -t ext3 /dev/vol_vg/vol_lv2 /VOL2
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vol_vg-vol_lv1 7.0T  6.3T  395G  95% /VOL1
/dev/mapper/vol_vg-vol_lv2 2.7T  1.5T  1.2T  57% /VOL2

If you want to automatically mount your iSCSI file systems at startup, use UUIDs (see article: How To Use UUID And Blkid To Manage Devices).

Get the UUID for each file system:

# blkid /dev/vol_vg/vol_lv1
/dev/vol_vg/vol_lv1: LABEL="VOL1" UUID="4a496f92-6840-4736-a0d5-5b9916113835" SEC_TYPE="ext2" TYPE="ext3"
# blkid /dev/vol_vg/vol_lv2
/dev/vol_vg/vol_lv2: LABEL="VOL2" UUID="cab5e3ec-4797-4227-98e8-e9bca3c3f766" SEC_TYPE="ext2" TYPE="ext3"

Then add the UUIDs to /etc/fstab:

UUID=4a496f92-6840-4736-a0d5-5b9916113835       /VOL1   ext3 _netdev    0 0
UUID=cab5e3ec-4797-4227-98e8-e9bca3c3f766       /VOL2   ext3 _netdev    0 0


** Tip:

If your Linux is a VM on ESXi :

- Dedicate a network adapter to connect the storage array directly to the VMware server, with a CAT 5e/6 RJ45 cable (through a dedicated hardware switch, if needed).

- Create a “vSwitch” using the dedicated network adapter with the vSphere Client.


- Add a network adapter using the new vSwitch in your virtual machine’s configuration.

Now you have a direct iSCSI connection to your storage array. You can start the configuration.