Since my last homelab update I've moved into a new apartment. I decided not to bring my entire server rack with me for several reasons: the cost of power, the noise, and the labor of disconnecting, reassembling, and carrying it up several flights of stairs. It was originally stored in the basement of my parents' house, where none of those issues were a problem. In this post I discuss why I decided to replace my homelab with a Synology NAS.
For the several months after I moved out, I could only access my lab through my WireGuard VPN or an SSH tunnel. Once connected, I used the Proxmox interface and noVNC/RDP/VNC to access my machines. The latency of these remote desktop services was decent, but not as good as being on the same LAN as my lab like I was before. I mainly connected via the WireGuard VPN and then SSH-ed into my VMs/containers. When performing research and needing to transfer files, I had to find creative ways to get them from my desktop/laptop to my lab. There were several options…
- VPN, then RDP, then transfer the file
- VPN, then SSH/SFTP to the machine
- Upload files to online storage such as Google Drive or Dropbox, then download them on the machine
- Open a port on my firewall, SSH to it, then tunnel through that connection to SSH into the machine I needed to access
All of these methods were slow, and I spent a lot of time waiting for files to upload/download before I could actually begin what I needed to do in the first place.
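For what it's worth, the firewall-plus-tunnel option can be made less painful with SSH's ProxyJump. A sketch of the client-side config, with hypothetical hostnames and addresses standing in for the real ones:

```
# ~/.ssh/config — hostnames/IPs here are placeholders for illustration
Host jumpbox
    HostName lab.example.com   # public address/port opened on the firewall
    Port 2222
    User me

Host labvm
    HostName 10.0.10.5         # internal address of the target VM
    User me
    ProxyJump jumpbox          # transparently tunnel through the jump box
```

With that in place, `ssh labvm` or `scp somefile.bin labvm:/tmp/` goes through the jump box in one step instead of manually chaining tunnels.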
Another issue I ran into was that a majority of the applications in my homelab weren't resource intensive and mainly ran in LXC containers, so I usually allocated 512 MB of RAM and 1 CPU core to each. Since the RAM and cores allocated to VMs and containers can be shared, this wasn't a huge issue, but some of the containers didn't need anywhere near 512 MB of RAM. The LXC containers also took up a lot of space compared to Docker containers. That said, with an LXC container it was nice to take a snapshot in case I needed to revert, which I can't do with a Docker container.
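Those per-container caps translate directly to Docker as well. A hypothetical Compose fragment (service and image names are illustrative) mirroring the 512 MB / 1-core allocations described above:

```yaml
# docker-compose.yml — hypothetical service; caps mirror the
# 512 MB RAM / 1 CPU core allocations used for the LXC containers
services:
  app:
    image: nginx:alpine
    mem_limit: 512m   # hard memory cap for the container
    cpus: 1.0         # limit the container to one CPU's worth of time
```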
When it comes to my Windows and Linux desktop VMs in Proxmox, I found myself leaving them running all the time, and they took up a lot of resources. Since I was the only one using my lab, I could only use one VM at a time, so there was no need to keep them all running. I found myself using VMs less and instead putting everything I could into LXC containers, since they were easier to maintain. But whenever I found something I wanted to self-host, there was more support for Docker than for compiling it or hosting it in an LXC. So using LXC was a bit cumbersome, though it gave me more control.
The network switches I set up used VLANs, and many of the VMs/containers were placed in their respective VLANs, but I didn't want to have to configure each host's firewall every time to allow only the ports I needed. I managed all the networking through my switches' management interface and pfSense.
When I added a VM/container to the network and something failed, there were multiple authentication pages and multiple points of failure I had to check to figure out why the VM/container wasn't connecting to the internet or couldn't communicate with another device on the network. All of this took me away from quickly setting up the VM/container and doing what I actually needed to test.
The hard drives my lab ran on were purchased used and could fail at any moment, so I didn't trust putting important files on my server. Instead, I kept a backup of all my important files on an offline backup hard drive or in a password-protected file in Google Drive. After setting up several RAID configurations and managing LVM, LVM-Thin, and directory storage, it also became cumbersome because I was constantly running out of space.
Since the Dell PowerEdges I was using ran on really old hardware with an old hardware RAID controller, I was afraid that at any moment the RAID controller or something else could fail and I would lose everything on the server. A solution to this would be a software RAID setup such as ZFS or Synology's SHR.
Now that I have explained all the points of failure in my previous lab and the issues I encountered with Proxmox, it's clear that in my circumstances it was no longer a viable option, since I relied on it so heavily. I needed something I could trust that was quiet, power efficient, and low maintenance, so I could focus on my work, research, and personal projects. I also needed redundancy and a lot of storage…
A Raspberry Pi with some SSDs/HDDs running something like OpenMediaVault would fail my requirements, since there is no redundancy. I looked into building my own TrueNAS (formerly FreeNAS) machine, but it would take a while to source all the exact parts I wanted and fit them into a Micro ATX or smaller case. When I priced out the parts I was ready to purchase, I noticed the build would be only a few hundred dollars less than buying a ready-made NAS like a Synology or QNAP.
So at this point I narrowed it down to Synology and QNAP. Synology has a much larger community and user base, and as a security professional I know QNAP has had considerably more security issues with their devices than Synology. Since my number one issue with my previous homelab was upload/download speed, I wanted to "future-proof" my server and choose the best hardware without going overboard on price, so I was looking for a NAS with a 10 Gb NIC. The QNAP TS-431KX-2G 4-bay has a 10 Gb NIC built in, and the NAS itself is small and cheap.
I also looked at Synology devices and noticed they didn't offer a 4-bay NAS with a 10 Gb NIC. The larger Synology models, however, have a PCIe slot, so I could always add a 10 Gb RJ45 NIC or even a NIC with an SFP+ port.
So I began to look into 10 Gb over RJ45 vs SFP+ with fiber. I learned that 10 Gb RJ45 tends to have more noise across the cable and is more limited in distance than SFP+ with fiber. When I compared a self-built solution with 10 Gb fiber against a Synology with 10 Gb, the difference was still only a couple hundred dollars. The option to expand my storage via the eSATA port was a plus, as was the possibility of setting up link aggregation. There were only a few cases to choose from if I built my own solution, and I really like the simplicity of the Synology case. So I decided to go with a Synology DS1821+.
After looking at all my options and comparing the cost and time needed to build and maintain my new homelab solution, I decided to go with Synology for the following reasons:
- Power efficient compared to my previous setup
- Small form factor
- 8 Bays
- PCIe Slot for an option to add 10 Gb NIC
- Little to no time needed to build or setup
- Docker support, so I can dockerize all the LXC workloads from my previous homelab
- One reliable central location for all my important files and lab files
- Options for Link Aggregation and expanding Synology via eSATA for future use cases.
- Software RAID instead of Hardware RAID solution
- Simple GUI and built-in apps and features such as a reverse proxy, firewall, and 2FA
- Well maintained OS, stable updates, and quick patches to security issues.
- Option for caching with NVMe
- Able to use ECC RAM
The final parts list:
- Synology DS1821+
- 4 GB DDR4 SODIMM ECC
- (2) Intel X520 NICs with dual SFP+ ports
- 4 meters of OM3 fiber cable
- (2) 10GBase SFP+ transceivers
- (4) 10 TB WD Red Plus / 7200 RPM / SATA 6 Gb/s / CMR / 256 MB cache
This Synology NAS supports Docker containers and has plenty of ports to expand if I need more space. A question that comes up a lot when building a NAS is "Is it better to have a few larger storage drives with fewer bays, or more hard drive bays with smaller drives?" The answer varies… it really depends on how much you want to spend and how much you plan to store. Since I wanted to future-proof my NAS, I went with larger drives and more bays. Using the Synology RAID calculator here, I have 30 TB of usable space with SHR and 4x 10 TB drives.
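That calculator result is easy to sanity-check: with equal-size drives, SHR behaves like RAID 5, reserving one drive's worth of capacity for redundancy. A minimal sketch (the function name is mine, and mixed drive sizes follow a more involved layout that this doesn't model):

```python
def shr_usable_tb(drive_sizes_tb):
    """Approximate SHR usable capacity for equal-size drives:
    total capacity minus one (largest) drive's worth for redundancy."""
    return sum(drive_sizes_tb) - max(drive_sizes_tb)

# 4x 10 TB drives under SHR (single-drive fault tolerance)
print(shr_usable_tb([10, 10, 10, 10]))  # -> 30
```

Filling all 8 bays with 10 TB drives would give 70 TB usable under the same scheme, which is far more than I need today.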
Why not fill all 8 bays? Because I don't have over 30 TB of data to store. If I purchased more drives and added them to my NAS, the disks would spin needlessly and there would be a greater chance of one of them failing. I also learned that the price per gigabyte increases dramatically past a certain capacity: anything over 10 TB nearly doubled in price, so I went with 10 TB drives until the price of larger drives comes down.
You may be wondering what happened to all the VMs I used and maintained in Proxmox. I moved or rebuilt those VMs on my desktop using VMware Workstation Pro. When I'm not using a VM, I can transfer it to my NAS for later. I could technically run my VMs directly from my NAS over the 10 Gb fiber, but that wouldn't be as fast as having the VM directly on my desktop's SSD or NVMe.
I have used Proxmox, network switches, Dell PowerEdge servers, and a custom-built pfSense box for over 5 years. During that time I didn't need to worry about noise, power, or form factor, and I don't regret having my previous homelab at all. Being able to physically play with real enterprise hardware helped prepare me for my current job and interviews, and solidified my overall understanding of hypervisors, computers, servers, and networking gear. If you are interested in how I decided to set up my new lab, see the following blog post.