
My Lab Environment

Below I describe the lab environment that I’ve set up for my own use. It’s not a recommendation, it’s just what I’ve done. This lab is located in my office at home. It’s a pretty serious lab environment for pretty serious testing.

I wanted something supported and close to what a customer might have, so that I could reproduce problems and troubleshoot them. I chose hardware from a major vendor; the models are fully supported on the VMware HCL and have the embedded ESXi hypervisor. All of the equipment was purchased new over a few years. I have recently added an EMC Clariion CX500 and a Cisco MDS9120 FC switch to my lab, which were donated by a very kind soul.

Before I give you all the technical details, here is a photo that I’ve recently taken to give you an idea of what this setup looks like.

A Serious Home Lab for Serious Home Testing

Compute:
1 x Dell PE1900, 16GB RAM, 1 x E5345 CPU, 7 x 1Gb/s NIC ports, 1 x single-port QLogic 2312 2Gb/s FC HBA – backup management host
4 x Dell T710, 72GB RAM each, three with 2 x X5650 CPUs and one with 2 x E5504 CPUs, 8 x 1Gb/s NIC ports (4 onboard, 4 on an add-on quad-port card), 2 x 10Gb/s NIC ports (dual-port card), 1 x dual-port Emulex LPe11002 4Gb/s FC HBA (the E5504 T710 is the primary management host)
2 x Dell R320, 32GB RAM each, single-socket E5-2430 CPU, 2 x 1Gb/s NIC ports onboard, 2 x 10Gb/s NIC ports (dual-port card), 1 x dual-port Emulex LPe11002 4Gb/s FC HBA
6 x vESXi (nested ESXi) hosts with 8GB RAM each (used for vShield, Lab Manager, vCloud Director, and Cisco Nexus 1000v testing)
All hosts running ESXi 5.
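
To put the scale in perspective, here’s a quick back-of-the-envelope tally of the physical resources listed above. It’s just a minimal Python sketch using the figures from the inventory; the nested vESXi hosts are left out because their RAM is carved out of the physical hosts.

# Rough tally of the physical compute inventory listed above.
# All figures come straight from the spec lines; nothing is measured.
hosts = [
    {"model": "PE1900", "count": 1, "ram_gb": 16, "nic_1g": 7, "nic_10g": 0},
    {"model": "T710",   "count": 4, "ram_gb": 72, "nic_1g": 8, "nic_10g": 2},
    {"model": "R320",   "count": 2, "ram_gb": 32, "nic_1g": 2, "nic_10g": 2},
]

total_ram = sum(h["count"] * h["ram_gb"] for h in hosts)
total_1g = sum(h["count"] * h["nic_1g"] for h in hosts)
total_10g = sum(h["count"] * h["nic_10g"] for h in hosts)

print(f"Physical RAM: {total_ram}GB")    # 16 + 4*72 + 2*32 = 368GB
print(f"1Gb/s NIC ports: {total_1g}")    # 7 + 4*8 + 2*2 = 43
print(f"10Gb/s NIC ports: {total_10g}")  # 4*2 + 2*2 = 12

That works out to 368GB of physical RAM plus 43 x 1Gb/s and 12 x 10Gb/s NIC ports across the seven physical hosts.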

Network:
vSS0 – VMkernel ports for management, iSCSI1, iSCSI2, and the N1KV VSMs (2 x 1Gb/s uplinks)
vDS0 – VM networking, multiple port groups and VLANs, including a promiscuous trunk VLAN for the vESXi servers and an AppSpeed port group (2 x 1Gb/s uplinks)
vDS1 – management vSwitch for vMotion, the two main iSCSI port groups, FT, and VM port groups (2 x 10Gb/s uplinks)
N1KV – VM networking, plus the N1KV packet, control, and management networks (4 x 1Gb/s uplinks in two uplink port profiles)

The physical network has two main 24-port 1Gb/s switches, across which the 1Gb/s uplinks are split, with a 24-port 10Gb/s Dell 8024 (full Layer 3, QoS, etc.) switch as the core, which also connects the hosts and the shared NAS storage. The management host is connected to a separate 24-port 1Gb/s switch with 2 uplinks to one of the edge switches (due to space constraints).
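
If you want to cross-check a layout like this against what’s actually configured, the easiest way is to ask the vSphere API. Below is a minimal sketch using pyVmomi (VMware’s Python SDK for the vSphere API); the vCenter hostname and credentials are placeholders, and it simply lists each host’s standard vSwitches and port groups plus any distributed switches.

# Minimal pyVmomi sketch: dump the virtual switch and port group layout
# so it can be compared with the design above. The hostname and credentials
# below are placeholders, not real values from this lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab vCenter uses self-signed certs
si = SmartConnect(host="vc-mgmt.lab.local", user="administrator",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    print(host.name)
    for vss in host.config.network.vswitch:      # standard vSwitches (vSS0)
        print(f"  {vss.name}: uplinks={len(vss.pnic or [])}")
    for pg in host.config.network.portgroup:     # standard port groups
        print(f"    {pg.spec.name} (VLAN {pg.spec.vlanId})")

# Distributed switches (vDS0, vDS1, and the Nexus 1000v) and their port groups
dvswitches = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for dvs in dvswitches.view:
    print(f"{dvs.name}: {[pg.name for pg in dvs.portgroup]}")

Disconnect(si)

The Nexus 1000v registers in vCenter as a distributed switch, so its port profiles show up in the same list as the vDS0 and vDS1 port groups.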

Storage:
Tier 0:
2 x Fusion-io ioDrive2 1.2TB SSDs – on loan temporarily for testing; I may make them permanent
Tier 1:
Thanks to a very kind donation I have a much-loved CX500 and a Cisco MDS9120 in my lab as my Tier 1 storage. The CX500 has 30 x 146GB 10K FC disks. Excluding the first 5 disks, which are used for FLARE (the vault), and 1 hot spare, I’ve configured the remaining 24 disks into two RAID groups of 12 disks each. Two RAID 1/0 LUNs are configured, one on each RAID group. I’m using single-target, single-initiator zoning on the fabric, which is split into two VSANs to create the two fabrics you’d normally have.
Tier 2:
Each server has 8 x 300GB 15K SAS disks locally, configured as a single RAID 5 datastore. On top of each local datastore I’ve placed an HP P4000 VSA (one per host), which consumes 80% of the datastore; the rest is used for local appliances and VMs. The VSAs are all in one management group. I have volumes configured and presented to the hosts as Network RAID 5 and Network RAID 10, all thin provisioned. Performance is OK, maxing out at about 300MB/s during performance tests. The VSAs are connected to the port groups with the software iSCSI initiators on the 10Gb/s vDS. I’ve sketched the rough capacity arithmetic for these tiers just after the tier list below.
Tier 3: QNAP 4-disk NAS, serving out NFS (test VMs and templates) and iSCSI (currently unused, as it’s not T10 compliant and vSphere 5 generates log spam as a result)
Tier 4: Openfiler on a desktop, only used for archiving and vCloud Director
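
For a rough idea of what those tiers give me in usable space, the arithmetic is simple enough to sketch out (raw GB, using only the disk counts and RAID layouts described above, and ignoring formatting overhead):

# Back-of-the-envelope usable capacity for Tiers 1 and 2, based purely on
# the disk counts and RAID layouts described above.

# Tier 1: CX500, two 12-disk RAID 1/0 groups of 146GB FC disks
cx500_per_group = (12 // 2) * 146       # mirrored pairs -> 876GB per RAID group
cx500_total = 2 * cx500_per_group       # ~1752GB across the two LUNs

# Tier 2: per host, 8 x 300GB in RAID 5, with 80% handed to the P4000 VSA
local_raid5 = (8 - 1) * 300             # ~2100GB usable per host
vsa_per_host = int(local_raid5 * 0.8)   # ~1680GB presented to each VSA
# Network RAID 10 then mirrors blocks across VSA nodes, so usable space in
# the management group ends up at roughly half the pooled VSA capacity.

print(cx500_total, vsa_per_host)        # 1752 1680

So Tier 1 gives roughly 1.75TB across the two LUNs, and each VSA contributes about 1.68TB to the management group before Network RAID overhead.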

Management:
3 x vCenter Server (all at 5.0); I use SRM (now at v5) along with the HP P4000 VSAs
1 x vSyslog (FT Protected)
SQL DB for Protected Site vCenter, Oracle DB for Recovery Site vCenter
VUM, VUM Download Service (VUMDS), View, View Security Server (with PCoIP)
vCenter Mobile Access (used with the iPad app)
vMA 4.0 for automatic UPS shutdown of the environment
vMA 5.0 for general management
vCOps Enterprise v5
CapacityIQ (still there, but now included in vCOps)
Virtual Infrastructure Navigator
vCenter Configuration Manager
vCloud Director
AppInsight (part of the new Application Performance Management suite)
AppSpeed
Lab Manager
vShield App
F5 LTM/VE
vDR for backup
vCloud Connector
vSphere Web Client (2 instances load balanced by the F5 LTM/VE)
Nexus 1000v (2 x VSMs)

Power:
2 x Dell 1920W UPS – core servers, core switches, storage
3 x APC 1500VA Smart-UPS – edge switches, presentation equipment, wireless network, management host, desktops

Notable omission – vCenter Heartbeat. I did have Heartbeat, but due to lack of resources I’ve removed it temporarily.

I’m sure many people could get away with a lot less, especially if they have access to a company lab or are just using it for functional testing. But I also wanted to be able to do performance testing and simulate real-world situations. I’ve used this setup to identify multiple bugs and design/config errors and then get them fixed for customers. I don’t have access to another company lab that is up to scratch, so I decided to invest significantly in building my own.

Some other home labs you should definitely check out are:

Jason Boche’s Lab

David Klee’s Lab

This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster. Copyright © 2012 – IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.

  1. arielantigua
    May 14, 2012 at 5:39 am

    Can you post some pictures of this? Looks like you are having so much fun in that LAB!!!
    Congrats!

  2. July 16, 2012 at 11:35 am

    Looking forward to comparing notes on the Fusion-io benefits.
    Get some VMware View goodness into your lab!

    • July 16, 2012 at 11:36 am

      I’ve got VMware View 5.0. I use it for remote access and testing.

  3. July 16, 2012 at 7:53 pm

    AWESOME lab man

  4. Anuj Modi
    July 17, 2012 at 12:39 am

    A lot of hard work has been done in planning such a beautiful lab… hats off!!

  5. July 17, 2012 at 5:23 am

    Awesome…

    • July 17, 2012 at 5:26 am

      “Server room” is in your bedroom.

  6. Joe
    July 19, 2012 at 11:31 pm

    Have fun replacing that every 3-4 years when the SAN and switches become obsolete.

    • July 20, 2012 at 12:03 pm

      There is a good chance it’ll last for more than 5 years given I bought enterprise class equipment, but in any case I replace and enhance some of it each year and spread the investment over multiple years to avoid a massive single year hit.

  7. July 22, 2012 at 2:46 pm

    That is an amazing lab! Very nice!

  8. Gert
    July 24, 2012 at 3:20 am

    Awesome lab….

  9. August 18, 2012 at 11:39 pm

    Great lab! Thanks for sharing ..

    How’s the noise level with all those 19″ Dells and so many 15K disks? That’s the one thing that bothers me most in my own lab.

    • August 19, 2012 at 12:09 am

      Actually not that bad since I put it in the rack, but if you’re used to quiet then it is pretty noisy. The FC switch and CX500 array are very noisy though, so I don’t have them turned on much. Plus they use heaps of power.

      • August 19, 2012 at 12:41 am

        Thanks for commenting!

        Mmmhhh, I’m thinking of moving the stuff over to a friend’s place, which has plenty of space. But I’m not quite sure I’d be happy working on the lab only from a remote location. Actually, that’s something I’ve never seen much of on the net: either it’s a “home” lab or a DC (co)location.

        Putting it in a DC seems too much hassle (limitations) and too expensive (around $149 to $169 a month).

        A remote (friend’s) location would only cost me an internet uplink, I guess about $39 a month, plus some electricity compensation.*

        * My power consumption is moderate, between 270 and 480 watts (at 230 volts, of course)

        I have 3 HP ML110 G7s as physical hosts, 1 big Supermicro, 2 x passively cooled Cisco gigabit switches, and one QNAP 8xx (Tier 4, like you). Main storage is a Nexenta ZFS VSA (with pass-through storage), and as secondary storage 2 x HP P4000 VSAs (mirrored Network RAID 10).

        You’re correct to think “that’s not 100% HCL stuff…”, although I’ve not had a single issue anywhere yet. At the time of starting the lab it was budget vs. HCL.

        Well, enough off-topic rambling for now I guess 🙂

  10. September 4, 2012 at 1:23 pm

    Fantastic lab! I’m super happy to see more people with great home setups. Great job!

  11. Alessandro
    November 23, 2012 at 11:01 am

    Hi mate,
    Do you know where I can get those vault disks for the Clariion CX500? I have a CX500 but no vault… Sad… 😦

    Cheers

    • November 26, 2012 at 6:36 am

      Hi Alessandro, I’m not sure where you can get the vault disks or how to create them. Have you tried looking for the procedure on EMC PowerLink? Perhaps you could find some spare parts on eBay or a similar site.

  12. November 24, 2012 at 5:57 pm

    Michael. Thanks for all the hard work and effort to support the virtualization community. I reference your blogs at least once per month.

