VMware NSX

It's VMworld show time, which means awesome stuff being announced. VMware's Overlay Network and SDN sagas are clearly just starting, and so VMware today announced its Network Virtualization solution: NSX.

I haven't been able to get hold of much of VMware's own technical material yet, but from what I have understood so far we are looking at the first true signs of Nicira's integration/incorporation into the current VMware networking portfolio. VMware's Overlay Networking technology – VXLAN – has been integrated with Nicira's NVP platform. However, VXLAN is not mandatory: besides it you get two additional encapsulation flavors to choose from, GRE and STT.

As can be seen from the NSX datasheet, the key features are similar to Nicira's NVP platform:

• Logical Switching – Reproduce the complete L2 and L3
switching functionality in a virtual environment, decoupled
from underlying hardware
• NSX Gateway – L2 gateway for seamless connection to
physical workloads and legacy VLANs
• Logical Routing – Routing between logical switches,
providing dynamic routing within different virtual networks.
• Logical Firewall – Distributed firewall, kernel enabled line
rate performance, virtualization and identity aware, with
activity monitoring
• Logical Load Balancer – Full featured load balancer with
SSL termination.
• Logical VPN – Site-to-Site & Remote Access VPN in software
• NSX API – RESTful API for integration into any cloud
management platform

Ivan Pepelnjak gives a very interesting preview of what's under the hood. Though new, integration with networking partners is also being leveraged. Examples are: Arista, Brocade, Cumulus, Dell, and Juniper on the pure networking side, and Palo Alto Networks on the security side.

Exciting time for the Networking community!


I found this Packet Pushers podcast from last week about Avaya's Software Defined Datacenter & Fabric Connect with Paul Unbehagen really interesting. He points out some of the differences between VMware's VXLAN overlay approach and the SPB overlay approach of physical routing switches (which Avaya naturally uses). Among these were the different encapsulation methods (VXLAN adds considerably more header overhead) and the ability to support multicast environments (such as PIM). Most importantly, he raises the central question: where do you want to control your routing and switching, at the virtual layer or the physical layer? Even though I'm theoretically in favor of the virtual layer, the arguments for keeping some functions at the physical layer still make a lot of sense in many scenarios.

The podcast also features a lot of interesting Avaya automation features, the result of a healthy promiscuous relationship between VMware and OpenStack. Also, if you want more detail about SPB, Paul Unbehagen covers lots of the tech details on his blog.

Overlay Virtual Networks – VXLAN

Overlay Virtual Networks (OVN) are gaining a lot of attention, from virtual networking providers as well as physical networking providers. Here are my notes on a specific VMware solution: Virtual eXtensible LAN (VXLAN).


None of this would be an issue if large/huge datacenters hadn't come into the picture. We're talking about some big-ass companies as well as cloud providers: enterprises where the number of VMs scales beyond the thousands. This is when the ability to scale the network, to change it rapidly, and to isolate different tenants becomes crucial. So the main motivators are:

  • The requirement for L2 communication is an intransigent pusher, dragging the 4k 802.1Q VLAN-tagging limitation plus L2 flooding along with it
  • Ability to change configurations quickly without disturbing the rest of the network – e.g. easy isolation
  • Ability to change without being "physically" constrained by hardware limitations
  • Ability to scale to large numbers of VMs while isolating different tenants
  • Unlimited workload mobility
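The scale gap behind that first motivator is easy to quantify. Here's a quick back-of-the-envelope check in Python comparing the 12-bit 802.1Q VLAN ID space with the 24-bit segment ID space that overlay encapsulations such as VXLAN provide:

```python
# 802.1Q carries a 12-bit VLAN ID, while overlay encapsulations such as
# VXLAN carry a 24-bit segment ID -- the source of the "4k limitation".
VLAN_ID_BITS = 12
SEGMENT_ID_BITS = 24

usable_vlans = 2**VLAN_ID_BITS - 2        # IDs 0x000 and 0xFFF are reserved
overlay_segments = 2**SEGMENT_ID_BITS

print(usable_vlans)      # 4094
print(overlay_segments)  # 16777216, i.e. ~16M isolated tenant segments
```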

These requirements demand a change in architecture. They demand that one not be bound by physical hardware constraints and, as such, they demand an abstraction layer run by software, one that can be easily changed – in other words, a virtualization layer.

Network Virtualization

Professor Nick Feamster – who today wrapped up his SDN MOOC on Coursera – goes further and describes Network Virtualization as the "killer app" for SDN. As a side note, here is an interesting comment from a student of that course.

Thus it is no surprise that hypervisor vendors were the first to push such technologies.

It is also no wonder that their approach was to treat the physical network as a dumb network, unaware of the virtual segmentation done within the hypervisor.

In conclusion, the main goal is really to move away from the dumb VLAN-aware L2 vSwitch and build a smart edge (internal VM host networking) without having to rely on smart datacenter fabrics (supporting, for instance, EVB).

Overall solutions

There is more than one vendor using an OVN approach to solve the stated problems. VMware was probably one of the first hypervisor vendors, starting with its vCloud Director Networking Infrastructure (vCDNI), a MAC-in-MAC solution – L2 networks over L2. Unfortunately this wasn't a successful attempt, and so VMware quickly changed its solution landscape. VMware currently has two network virtualization solutions, namely VXLAN and the more advanced Nicira NVP. Though I present these two as OVN solutions, that is actually quite an abuse, as they are quite different from each other. In this post I will restrict myself to VXLAN.

As for Microsoft, shortly after VXLAN was introduced it proposed its own network virtualization solution, called NVGRE. Finally, Amazon uses an L3 core with IP-over-IP communications.


Virtual eXtensible LAN (VXLAN) was developed jointly by Cisco and VMware, and an IETF draft was published. It is supposed to have an encapsulation header similar to the Nexus 7k's OTV/LISP, allowing the Nexus 7k to act as a VXLAN gateway.

VXLAN introduces an additional kernel software layer between the ESX vSwitch – either the VMware Distributed vSwitch or Cisco's Nexus 1000v – and the physical network card. This kernel code can introduce additional L2 virtual segments, beyond the 4k 802.1Q limit, over standard IP networks. Note that these segments run solely within the hypervisor, which means that in order to have a physical server communicate with these VMs you will need a VXLAN gateway.

So the VXLAN kernel is aware of port groups on the VM side and introduces a VXLAN segment ID (VNI) for each, and on the NIC side it introduces an adapter for IP communications: the VXLAN Tunnel Endpoint (VTEP). The VTEP has an IP address and performs encapsulation/decapsulation of the L2 traffic generated by a VM, inserting a VXLAN header and a UDP header plus the traditional IP envelope to talk to the physical NIC. The receiving host where the destination VM resides does the exact same process in reverse.
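To make the encapsulation concrete, here is a minimal sketch of the 8-byte VXLAN header a VTEP prepends to the inner Ethernet frame (beneath the outer IP/UDP envelope). The field layout follows the VXLAN IETF draft; the helper function names are mine:

```python
import struct

I_FLAG = 0x08  # "I" bit: marks the VNI field as valid

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags(8) | reserved(24) | VNI(24) | reserved(8)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", I_FLAG << 24, vni << 8)

def unpack_vni(header: bytes) -> int:
    """Recover the VNI, as the receiving VTEP does before delivering the inner frame."""
    flags_word, vni_word = struct.unpack("!II", header[:8])
    if flags_word >> 24 != I_FLAG:
        raise ValueError("I flag not set: no valid VNI")
    return vni_word >> 8

# The full on-the-wire packet is then:
# outer Ethernet | outer IP (VTEP to VTEP) | UDP | VXLAN header | inner Ethernet frame
```

A segment ID survives the round trip: `unpack_vni(pack_vxlan_header(5000))` gives back `5000`.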

Also note that broadcast traffic within a segment is carried as multicast traffic across the IP network, preserving the segmentation.
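A common way to achieve that is to associate each VNI with an IP multicast group, so a segment's flooded traffic reaches only the VTEPs hosting that segment. Here is a hypothetical direct mapping, just to illustrate the idea (real deployments configure the VNI-to-group association explicitly, and several VNIs may share one group):

```python
import ipaddress

# Hypothetical administratively-scoped base group, chosen for this sketch.
BASE_GROUP = ipaddress.IPv4Address("239.1.0.0")

def vni_to_group(vni: int) -> ipaddress.IPv4Address:
    """Derive the multicast group carrying a segment's broadcast/flooded traffic."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Fold the 24-bit VNI into a /16 worth of groups; a collision simply
    # means two segments share a flood group (VTEPs still filter on VNI).
    return BASE_GROUP + (vni % 2**16)

print(vni_to_group(5000))  # 239.1.19.136
```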

It is thus a transparent layer between the VMs and the network. However, since there is no centralized control plane, VXLAN used to require IP multicast in the DC core for L2 flooding. This has changed with recent enhancements to the Nexus OS.

Here are VMware's VXLAN Deployment Guide and Design Guide.

Finally, please do note that not everyone is pleased with the VXLAN solution.


SDN Playground – getting started with OpenFlow

Almost every big networking company has announced something related to SDN. Whether simple marketing or concrete, legit solutions, it's a question of time until the market is filled with SDN-related products. It is thus essential to start getting familiar with it, and you know damn well there's nothing like getting your hands dirty. So here are some helper notes on getting started with sandboxing OpenFlow (OF) environments.

To do so I'm using Mininet – a VM created as part of an open-source project to emulate a complete environment with a switch, an OF controller, and even three Linux hosts. Also note that I'm using my desktop as the host, with VirtualBox.

So what you’ll need:

  • If you don't have it yet, download VirtualBox, or another desktop hypervisor software such as VMware Player. VirtualBox has the advantage of being free for Windows, Linux and Mac.
  • Download Mininet VM OVF image.
  • After decompressing the image, import the OVF.

VB Import Appliance

  • In order to establish a terminal session to your VM, you'll need to add a host-only adapter to the Mininet VM. So first (before adding the adapter on the VM itself) go to VirtualBox > Preferences, select the Network tab, and add an adapter.


  • Next edit the VM settings and add a host-only adapter. Save it and boot the VM.
  • User: mininet       Password: mininet
  • Type sudo dhclient eth1 (or, if you haven't added another adapter and simply changed the default adapter from NAT to host-only, type eth0 instead of eth1) to obtain an IP address via DHCP on that interface.
  • Type ifconfig eth1 to get the IP address of the adapter.
  • Establish an SSH session to the Mininet VM. Open a terminal and type ssh -X [user]@[IP-Address-Eth1], where the default user is "mininet" and the IP address is the one ifconfig reported. So in my case it was: ssh -X mininet@
  • Mininet has its own basics tutorial – the Walkthrough. Also interesting is the OpenFlow tutorial.

The Mininet Walkthrough is designed to take less than an hour. Here are some simple shortcuts to speed up your playing around:

  • Type sudo mn --topo single,3 --mac --switch ovsk --controller remote. This will fire up the emulated environment with the switch, the OF controller, and 3 Linux hosts.

OF topology

  • Type nodes to confirm it. "h" stands for host, "s" for switch and "c" for controller. If you want, for instance, to know the addresses of a specific node such as host 2, type h2 ifconfig. If you want to establish a terminal session to that host, type xterm h2. Note that the xterm command only works if you first established the SSH session by typing ssh -X

This should already get you started.

Have fun!