With the advent of compute virtualisation, it was necessary to extend the physical network from outside the host into the virtualised environment within the host. This is achieved with the virtualised network. Now, and in the near future, we will see the growth of network virtualisation.
Two words, and a big change in meaning if you swap their order. But what exactly is that change in meaning, and what does it empower in your (or your customers') business?
The virtualised network of the past is the extension of the physical network into the virtualised environment. It uses all the same “equipment” as the physical network, but in a virtual form embedded within a host.
So let’s look at what a physical network provides.
Back in the day, Ethernet was a bus network, using a co-axial cable to link the devices.
These days it’s normally only seen as RJ-45 connectors on devices, wired in a star to a centrally located device that still acts as the logical bus: originally a repeating “hub”, but now most likely a switch.
Different networks used to need additional hubs or switches to maintain physical separation and to keep the broadcast domains apart.
(Later these could be collapsed back into the same hardware environment using VLAN, or virtual-LAN, technology (the first virtualisation of the network on physical hardware), with 802.1Q header tagging to pass VLAN information between switches. Linking lots of physical switches together required common programming of VLAN information across each device, so Cisco provided VTP (VLAN Trunking Protocol) to help propagate the configurations amongst clusters of linked switches.)
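To make the tagging concrete, here is a minimal Python sketch of how an 802.1Q tag sits inside an Ethernet frame: a 4-byte tag (TPID 0x8100 plus priority and VLAN ID) inserted after the destination and source MAC addresses. The frame bytes here are illustrative only, not captured traffic.

```python
import struct

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the two 6-byte MAC addresses.

    The tag is: TPID 0x8100, then 3 bits of priority, 1 DEI bit,
    and a 12-bit VLAN ID.
    """
    tci = (priority << 13) | (vlan_id & 0x0FFF)
    tag = struct.pack("!HH", 0x8100, tci)
    # Dst MAC (6 bytes) + Src MAC (6 bytes), then the tag, then the rest.
    return frame[:12] + tag + frame[12:]

# Minimal illustrative frame: two zeroed MACs plus an IPv4 EtherType.
frame = bytes(6) + bytes(6) + struct.pack("!H", 0x0800)
tagged = tag_frame(frame, vlan_id=100)
assert tagged[12:14] == b"\x81\x00"   # TPID marks the frame as tagged
assert len(tagged) == len(frame) + 4  # the tag adds exactly four bytes
```

A switch trunk port uses exactly this tag to keep traffic from different VLANs separate while sharing one physical link.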
So how is this different in a virtualised network?
In short, it’s not. The virtualised network is simply an implementation of the same functions as provided by a hardware switch, but embedded within the core software of the host. (In VMware, this is the vSwitch, or virtual switch.)
VMware also allows vSwitches to be linked into a vSphere Distributed Switch (dvSwitch), which alleviates some of the configuration hassle of managing multiple vSwitches, in much the same manner as Cisco’s VTP. Devices connect to a virtual switch within a host, which then links with other virtual switches across the VMware environment to provide a “seamless” connectivity layer. (There is an element of abstraction here: the inter-switch communication is managed by the VMware ESXi hypervisors, and the actual network traffic to the physical environment is now managed via a dvUplink rather than the vmnic used for a vSwitch.)
So, in short, a virtualised network still uses the same constructs and capabilities as a physical network, and so presents all the same challenges that the physical network does.
So what are the challenges?
The challenges occur primarily when recovering environments in a disaster situation, particularly a partial recovery (e.g. restoring only a single application node when a portion of an application solution is down), or when using resources hosted in different data centers (or even the cloud).
These problems exist for both physical and virtual environments, and stem from the same issue: the need to provide the same network “cable”, physical or virtual, to the replacement application module, and have it connect to the network.
This is easier within the same data center, where it is comparatively easy to plug into the same switch port, extend the VLAN to a different switch (change control notwithstanding), or, in a virtualised environment, provision the “replacement” machine elsewhere in the same host cluster.
The challenge arises when things are being restored in the “back-up” or “dark” data center, or when the cloud is used as a restore location. In this case the Layer 2 network needs to be extended across boundaries that are typically managed with Layer 3 segregation (sites, areas, countries or regions). Restoration is then not just a simple restore process, but also a reconfiguration of the application environment so that the restored node can be integrated, using changes to DNS servers, load-balancers, or even the application configuration or the application code itself, significantly increasing the time to restore and the risk to successful completion.
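One common way to carry a Layer 2 segment across a Layer 3 boundary is an overlay encapsulation such as VXLAN (RFC 7348), which is the approach used by several network virtualisation platforms. The sketch below builds the 8-byte VXLAN header and wraps an inner Ethernet frame in it; in a real deployment the result would then travel inside UDP (port 4789) and IP between hypervisors.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header.

    Flags byte 0x08 marks the VNI as valid; the 24-bit VNI occupies
    bytes 4-6, with reserved bytes elsewhere.
    """
    return struct.pack("!B3xI", 0x08, vni << 8)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    # The encapsulated frame is opaque to the Layer 3 transport network:
    # the inner MAC addresses and VLANs never appear on the underlay.
    return vxlan_header(vni) + inner_frame

packet = encapsulate(b"\x00" * 14, vni=5001)
assert len(packet) == 8 + 14
assert int.from_bytes(packet[4:7], "big") == 5001  # VNI survives intact
```

Because the VNI is 24 bits, an overlay can carry around 16 million isolated segments, compared with the 4096 VLAN IDs of 802.1Q.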
And what does this Network Virtualisation bring?
Network virtualisation is a true virtualisation (as with the compute virtualisation earlier), abstracting the “network” from the underlying environment (physical or virtual).
Because Network Virtualisation builds a new network environment on top of the physical layer, it allows much more flexibility. There can be several “networks” virtualised within the same Network Virtualisation infrastructure (similar to the VLAN capability, but including Layer 3 routing, load-balancers and firewalls). Add the fact that these environments can be populated programmatically (either from a catalog, or via an API) and the real power of Network Virtualisation becomes apparent.
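As a sketch of what “populated programmatically” can look like, the snippet below constructs a request to create a logical switch through a REST API. The endpoint path and field names are hypothetical, modelled loosely on an NSX-style API rather than any real schema.

```python
import json

def logical_switch_request(name: str, transport_zone_id: str) -> dict:
    """Assemble a hypothetical REST request to create a logical switch.

    The path and body fields are illustrative, not a real product API.
    """
    return {
        "method": "POST",
        "path": "/api/v1/logical-switches",
        "body": {
            "display_name": name,
            "transport_zone_id": transport_zone_id,
            "admin_state": "UP",
        },
    }

req = logical_switch_request("app-tier", "tz-overlay-01")
assert req["body"]["display_name"] == "app-tier"
print(json.dumps(req, indent=2))
```

The point is that an entire network topology, switches, routers, firewall rules, becomes data that a catalog item or an orchestration tool can submit, rather than a manual change on physical hardware.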
It allows application nodes to be restored without changes, but in different locations. The ability for the Layer 2 environment to be extended over Layer 3 boundaries means that no changes are needed, no matter where the application node is being restored.
This reduces the time to restore an application to full service, and allows for the flexibility to cloud-burst, adding extra capacity via the cloud to cope with peaks in demand.
All of these things allow for increased speed of deployment (imagine never having to physically wire anything in the data center again, except when hardware is first provisioned), for flexibility (see the cloud-bursting example above), for security (imagine being able to provision individual network segments for applications, and per-node firewalls), and for agility in dealing not only with application failures, but also with growing capacity on demand when the business needs require.
Wouldn’t Network Virtualisation help you or your customers gain these advantages?