UnderGround Forums
 

ITGround >> VMWare networking


7/31/11 2:42 AM
Road Warrior Fin

Edited: 07/31/11 2:45 AM
Member Since: 1/1/01
Posts: 28960
 

Got a Dell M1000e chassis running 5 different blades with different operating systems.

The fucker has like 24 ports available, but we only have 5 NICs hooked up.  Each one hooked up seems to have created its own separate virtual switch, yet it looks like the VMs are all on one physical NIC.  WTF?

Why the fuck did this thing create all these virtual switches?  How can I team/EtherChannel/LACP them instead - hook up like 12 ports from my switch and bond those 12 switchports via VMware ESX?

What is the standard procedure to do this?  I just want to get as much bandwidth out of this as possible.

And one Cisco question - 10Gig SFPs.  Can a 2960G with the latest IOS handle these?  Could I just throw one in a 4507R-E (10GE-V sup, upgrading to the next model up in a few months), and could I use the existing MM fiber that is currently running through 1G SFPs from my core to my server stack?

 
8/1/11 8:31 AM
Anarkis

Edited: 08/01/11 8:31 AM
Member Since: 1/22/05
Posts: 2186
I wish I had more time to respond to your question, but I think your approach may be too much of a network perspective on the situation. Most of the changes you need can be made on the ESX server itself.
8/1/11 11:40 AM
Road Warrior Fin

Member Since: 1/1/01
Posts: 28967
I guess that is what I'm trying to figure out.

The Dell M1000e has 8 ports on its "A" fabric, which ties back into a network controller module.  I'm just trying to figure out how to hook up more ports, whether I need to trunk them (everything is on one VLAN) - or whether I can just hook them all up, team the network cards, and let the VM hosts access the network over 4-8 gig instead of the 1 gig they're on right now.
8/1/11 12:51 PM
big_slacker

Member Since: 1/1/01
Posts: 15024
My last ESX setup was a long time ago, but IIRC there was a NIC teaming selection that lets you set the load balancing to match Cisco EtherChannel and assign the EtherChannel to a vSwitch. At that point it can be either a trunk or a plain old L2 link.
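
For what it's worth, the Cisco-side piece that pairs with that ESX teaming mode (IP-hash load balancing) is a *static* EtherChannel - the classic standard vSwitch doesn't negotiate LACP, so the channel-group has to be mode "on" rather than "active"/"passive". A hypothetical sketch, assuming 4 ports and VLAN 10 (port and VLAN numbers are examples, not from this thread):

```
! Hypothetical sketch - port numbers and VLAN are examples.
! ESX IP-hash teaming pairs with a STATIC EtherChannel (mode on),
! not an LACP-negotiated one.
interface range GigabitEthernet0/1 - 4
 switchport mode access
 switchport access vlan 10
 channel-group 1 mode on
!
interface Port-channel1
 switchport mode access
 switchport access vlan 10
```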
8/8/11 12:23 PM
E

Member Since: 5/10/04
Posts: 604
You can just wire up as many NICs as needed and add them to the vSwitches as uplinks to team them or configure failover.

Just make sure they are on the same VLANs.
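
From the service console, adding uplinks to an existing vSwitch might look something like this (a rough sketch using the classic esxcfg tools; the vmnic names and vSwitch name are examples):

```
# Hypothetical sketch - vmnic names are examples.
esxcfg-vswitch -L vmnic2 vSwitch0   # link a second physical NIC as an uplink
esxcfg-vswitch -L vmnic3 vSwitch0   # link a third
esxcfg-vswitch -l                   # list vSwitches to verify the uplinks
```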
8/8/11 11:44 PM
Anarkis

Member Since: 1/22/05
Posts: 2195
If you're still having trouble with this, check out this link:

http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/VMware.html#wp696415

The most common and preferred option is to configure the vSwitch to color traffic from the VMs with a VLAN tag and to establish an 802.1Q trunk with the Cisco switch connected to the ESX server's NICs. VMware calls this method Virtual Switch Tagging (VST).
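
In practice that means the switchports facing the ESX NICs become dot1q trunks, and each port group on the vSwitch gets a VLAN ID. A rough sketch of both sides (port numbers, VLAN IDs, and the port-group name are examples, not from this thread):

```
! Cisco side: trunk the port facing the ESX NIC
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20
```

```
# ESX side: tag the port group so the vSwitch applies the VLAN (VST)
esxcfg-vswitch -p "VM Network" -v 10 vSwitch0
```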

I am not sure if you need VMotion with your hardware configuration, but if you plan on using it, it's better to put those ports in a dedicated VLAN to segregate that traffic so it doesn't affect your host network.
