With the enhancements to iSCSI devices over the years, it's not unusual to find environments choosing an iSCSI implementation over a Fibre Channel one.

I believe it's clear to everyone that FC provides the fastest and most reliable solution these days, and of course it's also the most expensive one, right?!

But iSCSI solutions have their merits. So, how do you get the best performance out of them?

Well, the most obvious approach would be to use faster connections.
If you are already using 10Gb connections, you probably won't see much difference from having more than one connection.
But if you have 1Gb connections and cannot migrate to 10Gb, add multiple NICs and paths to your configuration.

Configuring iSCSI Multipathing.
I won't try to cover all the aspects of how to accomplish that here, because different iSCSI storage vendors present storage to servers in different ways. Some vendors present multiple LUNs on a single target, while others present multiple targets with one LUN each.


My best advice is to check with your storage vendor on how to configure it for your specific environment; they all have documentation about it.

So, what's this post about?

My first thought was to point out a common misconception: some people tend to believe that if you just add more NICs to the virtual switch where the VMkernel port is configured, it will automatically provide load balancing and higher throughput, just like virtual machine connections do.

That's not true!!!

Since the vSphere iSCSI stack uses "port binding", you will end up with just one active connection per iSCSI initiator/target pair, regardless of how many NICs you have attached to your vSwitch.
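You can actually see this from the command line: before any port binding, a path listing shows a single path per device even with two NICs attached to the vSwitch. A quick way to check (the exact output depends on your array):

esxcfg-mpath -b    # brief listing of paths per device
esxcfg-mpath -l    # detailed listing, including adapter and target information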

To accomplish multipathing, you will need to configure additional VMkernel port groups and bind one NIC to each port group.

Let’s see how it works.

1 – Configure additional VMkernel port groups




Configure as many port groups as you will have NICs dedicated to iSCSI traffic.
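If you prefer the command line over the VI client, the same port groups and VMkernel interfaces can be created with the esxcfg tools. This is just a sketch, assuming a vSwitch named vSwitch1, uplinks vmnic1/vmnic2 and example IP addresses; adjust the names and addresses to your environment:

esxcfg-vswitch -L vmnic1 vSwitch1    # attach the first physical NIC to the iSCSI vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1    # attach the second physical NIC
esxcfg-vswitch -A iSCSI1 vSwitch1    # create one port group per NIC
esxcfg-vswitch -A iSCSI2 vSwitch1
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 iSCSI1    # one VMkernel interface on each port group
esxcfg-vmknic -a -i 10.0.0.12 -n 255.255.255.0 iSCSI2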

2 – Map each iSCSI port to just one active NIC.
By default all NICs are active; you will need to override the vSwitch failover order policy so that each port group maps to only one corresponding active NIC.
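In the VI client this is done per port group, under NIC Teaming, by checking the override failover order option and leaving a single NIC as active (move the others to Unused). On newer ESXi releases (5.x and later) the same mapping can also be scripted with esxcli; a sketch, assuming the port group and vmnic names from the sketch above:

esxcli network vswitch standard portgroup policy failover set -p iSCSI1 --active-uplinks vmnic1
esxcli network vswitch standard portgroup policy failover set -p iSCSI2 --active-uplinks vmnic2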


3 – Binding ports
Now the final piece: binding the VMkernel NICs to the iSCSI initiator.
First, identify the names of the iSCSI VMkernel ports (get them from the VI client, under the Networking option).
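From the command line, the same information is available with (vmk1 and vmk2 are just the names used in this example):

esxcfg-vmknic -l    # lists every VMkernel interface with its name (vmk#), port group and IP address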


Second, identify the vmhba name of your iSCSI adapter (get it from the VI client, under the Storage Adapters option).
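Again, the command line can help here; the software iSCSI adapter usually shows up as vmhba32 or higher, but the number varies from host to host:

esxcfg-scsidevs -a    # lists all storage adapters with their vmhba names and drivers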


Finally, just run the command that will bind them:

esxcli swiscsi nic add -n port_name -d vmhba

In our example it will be:

esxcli swiscsi nic add -n vmk1 -d vmhba32
esxcli swiscsi nic add -n vmk2 -d vmhba32
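To confirm that both VMkernel ports are now bound to the software iSCSI adapter, you can list them (vmhba32 is, again, just the adapter from this example):

esxcli swiscsi nic list -d vmhba32    # shows every vmknic currently bound to this adapter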

If you display the Paths view for the vmhba32 adapter through the vSphere Client, you see that the adapter uses two paths to access the same target. The runtime names of the paths are vmhba32:C1:T1:L0 and vmhba32:C2:T1:L0. C1 and C2 in this example indicate the two network adapters that are used for multipathing.

You can now configure discovery on your iSCSI initiator and rescan your datastores.
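Once the discovery addresses are in place, the rescan can also be triggered from the command line (same example adapter name):

esxcfg-rescan vmhba32    # rescans the adapter for new devices and paths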

AGAIN: this is more of a heads-up than a procedure to follow. Remember, there are several factors that influence how you set it up, such as whether your connections use software iSCSI or hardware-dependent iSCSI adapters, so check with your vendor.

If you want to read more:
VMware has a good guide: the iSCSI SAN Configuration Guide
The Virtual Geek blog also has very good information about it

Now it’s up to you ; )
