The SBHVL project - Part 4: Spanning a private network over multiple hosts


It's SBHVL time again! This acronym stands for "Small Budget Hosted Virtual Lab" and for a series of posts that I wrote about my vSphere lab that runs on a dedicated physical box at a hosting provider. So far I have covered the basic setup and networking, backup/DR, and management/remote access.

Until now backup and DR were somewhat limited, because there was only this one box available, and the best I could do was to back up the contents of one of its hard disks to the other. This concept worked out well when I once noticed that one of the hard drives was about to fail: it was the drive that contained the backup data, so I just had to reconfigure the backup after the drive was replaced. However, the drive replacement did not run as smoothly as hoped and caused an extended downtime of the whole box.

It had been clear from the start that this solution does not offer the availability you need to run "production systems", but in the meantime I had added VMs that run as mail and web servers offering publicly available services, so it was about time to rethink my DR strategy ...

I'm not done with this yet (the outcome will be part of a later post), but I started with the decision to add a second physical host to the lab. Regarding networking, I installed this second host in exactly the same way as the first one: using pfSense as an IPv4 NAT gateway and IPv6 router.

So now I had two isolated lab environments, living in different VLANs with Layer 3 connectivity only and no shared storage - not what I really wanted. The first thing I tackled was networking. IPv6 is the future, and if we lived in an IPv6-only world I wouldn't have any problems, because all of the VMs running on both hosts use public IPv6 addresses that are routed through the pfSense appliances. But we still need IPv4, and IPv4 addresses are becoming scarce and expensive. So I use only one additional public IPv4 address per host for the pfSense VM, and all other VMs use private addresses that are NATted through the pfSense box running on the same host.
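To make this concrete, the per-host addressing looks roughly like this (the private subnets and address numbering here are placeholder examples, not my real ones):

    Host 1: pfSense-1 WAN = public IPv4 #1, LAN = 192.168.10.0/24
            VMs: private IPv4 from 192.168.10.0/24 (NATted) + public routed IPv6
    Host 2: pfSense-2 WAN = public IPv4 #2, LAN = 192.168.20.0/24
            VMs: private IPv4 from 192.168.20.0/24 (NATted) + public routed IPv6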

So how can I make the lab VMs on one host talk to the lab VMs on the other host over IPv4? I have praised pfSense before as a very versatile and easy-to-configure appliance for all sorts of networking purposes ... and it didn't let me down with this new requirement either!

pfSense has OpenVPN built in and can leverage it not only to provide a connection endpoint for all sorts of VPN clients - you can also connect two pfSense boxes through OpenVPN to tunnel traffic between them. You basically have two choices:

1. You can connect two different private subnets on an IP / Layer 3 basis, and this is what I chose. In this scenario each box routes traffic from its own private subnet into the private subnet behind its counterpart. The pfSense documentation contains a detailed description of how to set this up, so I don't need to repeat it here; a rough sketch of the resulting configuration follows below.

2. The second option is to implement a Layer 2 bridge between the two boxes. This way you extend the private IP subnet of one host into the other. I didn't choose this setup, because it complicates the design with uncertainties about which box should be the default router and how to set up DHCP. There are better use cases for this option - one is mentioned in this post, which also describes how to set up this kind of tunnel.

The first choice provides a more symmetrical setup and makes it easier to ensure that the VMs on each ESXi host can survive independently of the other, no matter which of the two hosts goes down - and this is why I picked it.
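For reference, the routed (tun) site-to-site tunnel that pfSense builds in this scenario boils down to OpenVPN configurations along these lines. This is only a sketch using the placeholder subnets from above; the hostname, port and key path are also just examples, and in pfSense you configure all of this through the GUI rather than editing files:

    # pfSense-1 (OpenVPN server), shared key, routed mode
    dev tun
    proto udp
    port 1194
    ifconfig 10.0.8.1 10.0.8.2        # tunnel endpoint addresses
    route 192.168.20.0 255.255.255.0  # host 2's private subnet via the tunnel
    secret /var/etc/openvpn/site2site.key
    keepalive 10 60

    # pfSense-2 (OpenVPN client)
    dev tun
    proto udp
    remote pfsense1.example.net 1194
    ifconfig 10.0.8.2 10.0.8.1
    route 192.168.10.0 255.255.255.0  # host 1's private subnet via the tunnel
    secret /var/etc/openvpn/site2site.key
    keepalive 10 60

Once the tunnel is up, a VM in 192.168.10.0/24 should be able to reach a VM in 192.168.20.0/24 and vice versa, provided the OpenVPN and LAN firewall rules on both pfSense boxes allow the traffic. The bridged variant from option 2 would use "dev tap" and bridge the tunnel interface with the LAN instead of setting routes.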

To complete this post and add a decent visual experience ;-) here is the updated network architecture diagram of my hosted virtual lab:

SBHVL 2.0: The new two-host network architecture
