I found this Packet Pushers podcast from last week about Avaya’s Software Defined Datacenter & Fabric Connect with Paul Unbehagen really interesting. He points out some of the differences between VMware’s Overlay Network approach with VXLAN and the Overlay Network approach of physical routing switches with SPB (which, naturally, Avaya is using). Some of these are the different encapsulation methods (VXLAN stacks quite a few more headers), the ability to support multicast environments (such as PIM), and, most importantly, the central question the podcast raises: where do you want to control your routing and switching, in the virtual layer or in the physical layer? Even though I’m theoretically favorable to the virtual layer, the arguments for keeping some functions in the physical layer still make a lot of sense in a lot of scenarios.

The podcast also covers a lot of interesting Avaya automation features, the result of a healthy promiscuous relationship between VMware and OpenStack. Also, if you want to get into more detail about SPB, Paul Unbehagen covers lots of tech details on his blog.

Overlay Virtual Networks – VXLAN

Overlay Virtual Networks (OVN) are gaining a lot of attention, from virtual networking providers as well as from physical networking providers. Here are my notes on a specific VMware solution: Virtual eXtensible LAN (VXLAN).


None of this would be an issue if large/huge datacenters hadn’t come into the picture. We’re talking about some big-ass companies as well as Cloud providers, enterprises where the number of VMs scales beyond thousands. This is when the ability to scale the network, to rapidly change it, and to isolate different tenants becomes crucial. So the main motivators are:

  • First, the L2 communication requirement is an intransigent pusher, which drags the 4k 802.1Q VLAN-tagging limitation plus L2 flooding along with it (see the quick arithmetic after this list)
  • Ability to change configurations quickly and without disrupting the rest of the network, e.g. easy isolation
  • Ability to change without being “physically” constrained by Hardware limitations
  • Ability to scale to a large number of VMs while isolating different tenants.
  • Unlimited workload mobility.
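
To put the 4k limitation into perspective, here is a quick back-of-the-envelope comparison (just arithmetic on the header field sizes, nothing VMware-specific): the 802.1Q tag carries a 12-bit VLAN ID, while the VXLAN header discussed further down carries a 24-bit segment ID.

```python
# 802.1Q carries a 12-bit VLAN ID; VXLAN carries a 24-bit VNI.
vlan_ids = 2 ** 12    # 4096 VLANs (a couple of values are reserved)
vxlan_vnis = 2 ** 24  # 16,777,216 possible virtual segments

print(f"802.1Q segments: {vlan_ids}")
print(f"VXLAN segments:  {vxlan_vnis}")
```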

These requirements demand a change in architecture. They demand that one not be bound to physical hardware constraints and, as such, they demand an abstraction layer run by software and able to mutate: a virtualization layer, in other words.

Network Virtualization

Professor Nick Feamster (who today wrapped up his SDN MOOC on Coursera) goes further and describes Network Virtualization as the “killer app” for SDN. As a side note, here is an interesting comment from a student of that course.

Thus it is no surprise that hypervisor vendors were the first to push such technologies.

It is also no wonder that their approach was to treat the physical network as a dumb network, unaware of the virtual segmentation done within the hypervisor.

In conclusion, the main goal is really to move away from the dumb VLAN-aware L2 vSwitch and build a smart edge (internal VM host networking) without having to rely on smart datacenter fabrics (supporting, for instance, EVB, etc.).

Overall solutions

There is more than one vendor using an OVN approach to solve the stated problems. VMware was probably one of the first hypervisor vendors to do so, starting with its vCloud Director Networking Infrastructure (vCDNI), a MAC-in-MAC solution, so L2 networks over L2. Unfortunately this wasn’t a successful attempt, so VMware quickly changed its solution landscape. VMware currently has two Network Virtualization solutions, namely VXLAN and the more advanced Nicira NVP. Though I present these two as OVN solutions, this is actually quite an abuse, as they are quite different from each other. In this post I will restrict myself to VXLAN.

As for Microsoft, shortly after VXLAN was introduced it proposed its own Network Virtualization solution, called NVGRE. Finally, Amazon uses an L3 core with IP-over-IP communications.


Virtual eXtensible LAN (VXLAN) was developed jointly by Cisco and VMware, and an IETF draft was launched. It is supposed to have an encapsulation header similar to the Nexus 7k’s OTV/LISP header, allowing the Nexus 7k to act as a VXLAN Gateway.

VXLAN introduces an additional kernel software layer between the ESX vSwitch (either the VMware Distributed vSwitch or Cisco’s Nexus 1000v) and the physical network card. This kernel code is able to introduce additional L2 virtual segments, beyond the 4k 802.1Q limitation, over standard IP networks. Note that these segments run solely within the hypervisor, which means that in order to have a physical server communicating with these VMs you will need a VXLAN Gateway.

So the VXLAN kernel is aware of the port groups on the VM side and introduces a VXLAN segment ID, the VXLAN Network Identifier (VNI), as well as an adapter on the NIC side for IP communications, the VXLAN Tunnel End Point (VTEP). The VTEP has an IP address and performs encapsulation/decapsulation of the L2 traffic generated by a VM: it inserts a VXLAN header and a UDP header plus the traditional IP envelope to talk to the physical NIC. The receiving host where the destination VM resides does the exact same process in reverse.
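
To make the encapsulation step concrete, here is a minimal sketch of what a VTEP conceptually does, assuming the 8-byte VXLAN header layout from the IETF draft (later RFC 7348) and leaving the outer IP/UDP headers to an ordinary UDP socket. The addresses, the VNI value and the dummy inner frame are placeholders for illustration, not anything from VMware’s actual implementation.

```python
import socket
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port (early deployments often used 8472)

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    Header layout per the draft: flags byte 0x08 (valid VNI), 24 reserved
    bits, the 24-bit VNI, then 8 more reserved bits.
    """
    vxlan_header = struct.pack("!II", 0x08 << 24, vni << 8)
    return vxlan_header + inner_frame

# Hypothetical example: send a dummy L2 frame on VNI 5000 to a remote VTEP.
# The host's stack adds the outer UDP/IP headers for us via the UDP socket.
remote_vtep_ip = "192.0.2.10"   # placeholder address
inner_frame = b"\x00" * 64      # stand-in for a real Ethernet frame
packet = vxlan_encapsulate(inner_frame, vni=5000)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, (remote_vtep_ip, VXLAN_PORT))
```

The thing to notice is simply the ordering: inner Ethernet frame, then VXLAN header, then UDP, then the outer IP envelope added by the sending host.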

Also note that broadcast traffic inside a segment is transformed into multicast traffic on the physical network, so segmentation is preserved (a small illustration follows below).
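
As an illustration of that flooding behaviour, here is a small sketch assuming a hypothetical static VNI-to-multicast-group mapping (in practice the group is configured per VXLAN segment): known unicast MACs go straight to the owning VTEP, while broadcast and unknown unicast frames are flooded to the segment’s multicast group, so only the VTEPs that joined that group ever see them.

```python
# Hypothetical VNI -> multicast group mapping, configured per VXLAN segment.
VNI_TO_MCAST_GROUP = {
    5000: "239.1.1.1",
    5001: "239.1.1.2",
}

def outer_destination(vni: int, inner_dst_mac: bytes, mac_table: dict) -> str:
    """Pick the outer IP destination for an encapsulated frame.

    Known unicast MACs are sent to the VTEP that owns them; broadcast and
    unknown unicast are flooded to the segment's multicast group instead.
    """
    if inner_dst_mac in mac_table:
        return mac_table[inner_dst_mac]   # unicast to the remote VTEP
    return VNI_TO_MCAST_GROUP[vni]        # flood via IP multicast

# Example: an ARP broadcast (ff:ff:ff:ff:ff:ff) on VNI 5000 gets flooded.
print(outer_destination(5000, b"\xff" * 6, mac_table={}))  # -> 239.1.1.1
```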

It is thus a transparent layer between the VMs and the network. However, since there is no centralized control plane, VXLAN used to require IP multicast in the datacenter core for L2 flooding. This has changed with recent enhancements to Cisco’s Nexus OS.

Here are VMware’s VXLAN Deployment Guide and Design Guide.

Finally, please do note that not everyone is pleased with the VXLAN solution.