Azure is fun. It’s more than fun: it’s low-key kind of awesome. With just a few clicks, anyone can build pretty much anything they want without needing hardware on hand to prop it up. All you need is money (or a sponsored Azure account from your workplace *hint hint*).
While that’s pretty damn amazing, I missed having a kick-ass home lab. I still have one, don’t get me wrong, but it’s mostly fallen into neglect these days. It’s less of a lab and more of a home control appliance. It powers our home automation, turning lights, heat, etcetera on and off. It monitors the water level in the basement sump, reports effluent volume, and charts high and low activity dates. It stores our local backups and files before they’re shipped off to an Azure storage account for off-site backup. It’s become less of a playground and more of a real production system that we legitimately depend on.
I still wax nostalgic about building my lab, like any nerd reminiscing about building his own computer. There’s something about picking out parts that complement each other, have some exotic performance or capability, or perfectly match the requirements. In my head it affirms my nerd-cred; it’s a physical representation of my membership card. Haha.
I recently purchased a new gigabit PoE switch to power and connect our security and monitoring systems. Gigabit is typically fast enough for nearly everything we need, certainly for surveillance and media purposes, but not so much for some of the other things I do with networked storage. I’ve noticed that with multiple machines connecting to the file server, it’s easy to exceed the bandwidth of its single link. I toyed around with LACP and port channels (which work great, but still limit each client to 1 Gb/s), but thought it might be fun to have something in an entirely different class. I think it’s time to make the jump to 10 gig Ethernet.
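If you’re wondering why a port channel doesn’t simply double a client’s speed: LACP hashes each flow (roughly, a source/destination tuple) onto exactly one member link, so a single conversation can never exceed one link’s rate. Here’s a toy Python sketch of the idea; the hash and tuple fields are illustrative stand-ins, not Ubiquiti’s actual algorithm:

# Illustrative only: LACP-style hashing pins each flow to one member link.
# Real switches hash on MAC/IP/port fields; this just shows the consequence.
import zlib

LINKS = ["link0", "link1"]  # a 2 x 1 Gb/s port channel

def pick_link(src_ip: str, dst_ip: str, dst_port: int) -> str:
    """Hash the flow tuple and map it onto one member link."""
    key = f"{src_ip}-{dst_ip}-{dst_port}".encode()
    return LINKS[zlib.crc32(key) % len(LINKS)]

# One client talking to the file server: every packet of the flow hashes
# to the same link, so that client tops out at 1 Gb/s no matter what.
print(pick_link("192.168.1.50", "192.168.1.10", 445))

# Many clients, on the other hand, spread across both links, which is
# why the aggregate scales even though no single flow does.
for host in (50, 51, 52, 53):
    print(pick_link(f"192.168.1.{host}", "192.168.1.10", 445))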
I managed to snag a Ubiquiti Networks US-16-XG SFP+ 10 Gig Aggregation switch for fairly cheap. I have some existing UniFi gear that I’ve had pretty decent luck with, so I wanted something that could integrate with that. Luckily, they had just released this model (which is also a downside, because it’s a bit buggy; more on that in a bit). The DL380P I use as a file / virtualization / automation server didn’t have a 10GbE interface, so I had to procure one of those as well. I ended up with an HP CN1100E for ~$100. It’s a dual-port 10G PCIe converged network adapter that does Ethernet, iSCSI, and Fibre Channel over Ethernet (FCoE). I won’t need anything but Ethernet for this deployment, but the price was right!
The last major piece of the puzzle was cabling. I hadn’t forayed into the world of SFP+ before, so I had to learn about optics, DACs, transceivers, and a whole mess of things I was previously ignorant of. I already have Cat6 strung around the house, so I had hoped to use it to connect my desktop to this new 10G switch. I was very wrong. 10GBASE-T (RJ45) SFP+ transceivers are something like $300 each, and that’s on the low end of what I was able to find! It takes two per connection, which is more than I’m willing to spend. This particular connection requires a bit more thought. Perhaps a single 100′ run of fiber would be more economical. The jury is still out on this one.
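For the curious, here’s the back-of-the-envelope math pushing me toward fiber. The transceiver price is from my searching; the fiber figures are rough assumptions from casual browsing, not quotes:

# Rough cost comparison for one desktop-to-switch 10G link.
rj45_sfp_module = 300   # 10GBASE-T SFP+ transceiver, low end of what I found
sr_optic        = 20    # generic 10G SR optic (assumed price)
om3_run_100ft   = 40    # pre-terminated 100' OM3 duplex cable (assumed price)

reuse_cat6 = 2 * rj45_sfp_module           # one module at each end: $600
new_fiber  = 2 * sr_optic + om3_run_100ft  # two optics plus the cable: ~$80

print(f"Reuse Cat6 with 10GBASE-T modules: ${reuse_cat6}")
print(f"New 100' fiber run with SR optics: ${new_fiber}")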
Connecting the server and switches was relatively simple. I hopped over to the Ubiquiti community forums and checked the list of DACs and optics on the US-16-XG’s compatibility list before settling on these iPolex passive DACs I picked up from Amazon. At ~$24 each, they’re fairly cheap. I used four of them in the lab rack: two from the US-16-XG to the DL380P, and two to the gigabit PoE switch. Both the server connection and the inter-switch connection are aggregated, so in theory ~20 clients can access the server at their full gigabit line rate (real-world numbers are obviously lower). I’m planning for the desktop to connect directly to the XG as well, which will allow full 10Gb access to the server once I get the whole fiber thing sorted out.
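The ~20 client figure is just the theoretical line-rate math, ignoring protocol overhead:

# Theoretical capacity of the aggregated links, ignoring overhead.
server_lag   = 2 * 10  # Gb/s: two DACs from the US-16-XG to the DL380P
inter_switch = 2 * 10  # Gb/s: two DACs down to the gigabit PoE switch
client_rate  = 1       # Gb/s per gigabit client

# Clients that could, in theory, hit the server at full line rate:
print(min(server_lag, inter_switch) // client_rate)  # 20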
Now, about the bugginess of the switch. The US-16-XG was originally a bit finicky about the DACs; others have reported the same issue on the community forums. For whatever reason, the DACs would link up in some ports but not others, and I was only able to get everything working by shifting them to different ports. Here’s a snippet from the working switch showing the recognized modules:
(UBNT) >show fiber-ports optics-info all

                          Link    Link                                       Nominal
                          Length  Length                                     Bit
                          50um    62.5um                                     Rate
Port     Vendor Name      [m]     [m]     Serial Number    Part Number      [Mbps]  Rev  Compliance
-------- ---------------- ------  ------  ---------------- ---------------- ------  ---  ----------------
0/1      OEM              0       0       CSS31GB1516      SFP-H10GB-CU3M   10300   03   DAC
0/2      OEM              0       0       CSS31GB1523      SFP-H10GB-CU3M   10300   03   DAC
0/5      OEM              0       0       CSS31GC0602      SFP-H10GB-CU3M   10300   03   DAC
0/6      OEM              0       0       CSS31GC0601      SFP-H10GB-CU3M   10300   03   DAC
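If you end up shuffling DACs between ports like I did, it’s handy to track which ports actually recognize their module. Here’s a quick sketch that pulls the interesting columns out of that output, assuming you’ve dumped it to a text file:

# Parse saved `show fiber-ports optics-info all` output and report which
# ports detected a module. Column layout assumed to match the snippet above.
import re

with open("optics.txt") as f:
    for line in f:
        # Data rows start with a port ID like 0/1; headers and rules don't.
        m = re.match(r"(\d+/\d+)\s+(\S+).*\s(\S+)\s*$", line)
        if m:
            port, vendor, compliance = m.groups()
            print(f"port {port}: vendor={vendor}, type={compliance}")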
Overall, I’m happy with this configuration. Now that the server’s network connection is no longer the bottleneck, I can probably live with gigabit to my desktop – but I still dream of 10G! I’ll update the post with whatever I choose to do when I get it figured out. Until next time!