Hardware

Mk. I – 2011

vSandbox - Mk. 1

The first iteration of vSandbox was nothing more than a few cobbled-together whiteboxes that I built from spare hardware I had lying around. I purchased a couple of nice cases, some flashy LED case fans, a few USB memory sticks and some extra memory to max out the motherboard capacity. Each host had a Core 2 Duo CPU with 8GB of memory. I also threw a few Intel gigabit NICs into each host to give me a total of six network ports per host.

For storage, I purchased an Iomega ix2-200 NAS that would allow me to present both NFS and iSCSI to the ESXi hosts. I was also able to get my hands on a couple of Cisco 8-port gigabit managed switches to connect everything together. These in turn were connected to a Cisco 10-port L3 gigabit switch that would allow me to create all the VLANs necessary for the environment.
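For anyone curious what "presenting NFS to the ESXi hosts" actually looks like from the vSphere side, here's a minimal pyVmomi sketch of mounting an NFS export from a NAS as a datastore. The host names, export path and datastore name are placeholders, not the actual values from the lab.

```python
# Rough sketch (pyVmomi) of mounting an NFS export from the NAS as an ESXi
# datastore. Every name, address and path here is a placeholder.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # home lab, self-signed certs
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="changeme", sslContext=ctx)

# Grab the first host in the inventory (works when connected straight to ESXi).
content = si.RetrieveContent()
host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

# Describe the NFS export on the NAS and the datastore name ESXi should use.
spec = vim.host.NasVolume.Specification(
    remoteHost="192.168.10.20",    # the NAS
    remotePath="/nfs/vmstore",     # export on the NAS
    localPath="nas-nfs01",         # datastore name in vSphere
    accessMode="readWrite",
)
host.configManager.datastoreSystem.CreateNasDatastore(spec)
Disconnect(si)
```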

This setup allowed me to really start playing with, understanding and truly appreciating the capabilities and resiliency of vCenter, ESXi and virtualization in general.

I can't even count the number of times I built, broke and re-built this environment.

Check out the retro Linksys Wi-Fi router. (I was running DD-WRT to wirelessly bridge the lab to the internet through the upstairs router at the time.)


Mk. II – 2012

As expected, I very quickly outgrew the Mk. I version of the lab.

Four months later I set about upgrading a few critical pieces after discovering that having all my VMs on a single tiny NAS with 5400 rpm spinning disks didn't help with the latency issues I was experiencing. I also learnt very early on how to use the various memory allocation and reservation capabilities of the platform.
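As one example of the kind of knob I was turning, a memory reservation guarantees a VM a slice of physical RAM so it isn't endlessly ballooned or swapped when the hosts are oversubscribed. Here's a minimal pyVmomi sketch; the vCenter address, credentials and VM name are all made up for illustration.

```python
# Rough sketch (pyVmomi) of setting a 2GB memory reservation on a VM.
# The vCenter address, credentials and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # home lab, self-signed certs
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name using a container view over the whole inventory.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "dc01")
view.DestroyView()

# Reserve 2048MB of physical memory for this VM.
spec = vim.vm.ConfigSpec(
    memoryAllocation=vim.ResourceAllocationInfo(reservation=2048))
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)
```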

The two items on the top of the shopping list for this upgrade were more storage and more memory in the hosts. Being on a budget, I turned to good ol’ eBay to help me source the items I needed.

When the dust had settled I'd added a lightly used Iomega ix4-200D NAS for storage and sourced two pre-owned Intel Core 2 Quad motherboards with CPUs and 16GB of memory each.

The Core 2 Quad motherboards replaced the Core 2 Duos, and the ix4 found a place alongside the ix2.

Life was good… if only for a short while.


Mk. III – 2013

The next upgrade came in the form of a Synology DS1812+ NAS that was provided to me by Synology for the purpose of testing and showing their integration capabilities with vSphere.

I soon discovered, much to my horror, a small issue when looking to integrate this new piece of kit into my existing setup.

I’d run out of network ports!

Not being the kind of person to let a minor setback get in the way of geeking out with new hardware, I managed to convince the minister of finance (read: the wife) to allow me to “invest” in some additional networking gear to “future-proof” the lab.

Off I went to eBay… again, where I was able to procure a lightly used Cisco 52-port L3 gigabit managed switch.

After a few weeks of tweaking and re-plumbing the lab I realized that the Synology NAS, the two Iomega NASes, both ESXi hosts and the uplink to the Wi-Fi bridge didn't really consume all that many ports on the new-to-me Cisco switch. I also realized that it would be such a waste to have all those empty ports just sitting there not really doing anything.

In true geek fashion, I resurrected the two older Core 2 Duos, purchased a couple of cheap Intel NICs, two cheap cases and two USB memory sticks, and turned my two-node ESXi cluster into a four-node ESXi cluster, now complete with an L3 backbone, VLANs all over the place and three tiers of storage.

That's right, I had gold, silver and bronze storage tiers. Although when compared to enterprise-class storage, I probably had bronze, stone and mud tiers.


Mk. IV – 2014

I was able to put a few thousand cycles on the lab in its current state until around mid-2014. With all the software I had running and wanted to run in my lab, I simply didn't have enough horsepower available to me anymore.

The fourth version resulted in the permanent decommissioning of the Core 2 Duo boxes, which I ended up donating to a co-worker who was just getting his own home lab started.

These were replaced with two purpose-built Core i7s with 32GB of memory each. Again, no internal storage; just a couple of hosts with compute, memory and networking capacity.


Mk. V – 2015

Late in 2015, while browsing eBay of all places, I stumbled upon what can only be described as a treasure trove of older enterprise-class server hardware. I'd seen older rack-mount servers listed before, but the prices I was seeing now finally justified stepping up to the big league.

I sold off the Core i7s to help fund the re-build project and officially retired the Core 2 Quad hosts.

Over the 2015 holiday break, I tore down and re-built my entire lab environment, re-purposing hardware where I could and procuring new hardware as needed. While on a much smaller scale, it felt like the good old days back in the data center: racking and stacking hardware, cutting CAT 6 to the proper length and cabling up the rack so it looked like a work of art.

The end result looked as follows:

  • Eaton 12U rack
  • Eaton UPS
  • Cisco 52 Port L3 GB switch
  • Synology DiskStation
  • Synology RackStation
  • SuperMicro 2U 4-node server (2x 4-core CPUs & 48GB memory per node)

Mk. VI – 2016

Around mid-2016 I was offered a NetApp FAS2220 array for my lab. As you can no doubt guess, I jumped at this opportunity and once again set about plumbing this new-to-me piece of hardware into the rack.

The addition of the NetApp brings us to the current hardware build of the lab today.


While it is still relatively easy to max out memory and CPU with the current hardware setup, I have found a way to run all of the VMware software I need to effectively test and demo not only individual VMware products, but also customized, customer-specific VMware solutions. I'm also able to maintain these various environments and switch between them as needed.