DIY NAS
Revision as of 12:03, 27 August 2016
I've learned two things in the past: you can never have enough money, and you can never have enough disk space. Okay, there's a third thing I've learned: those two are mutually exclusive. My first NAS was a Conceptronic CH3SNAS with two 1TB drives in RAID1 and a Gbit NIC. The stock OS on there was pretty soon "extended" with FFP to work around some issues and also provide access to the system's shell.
Having shell access taught me a few things about off-the-shelf consumer NAS devices and their manufacturers:
- They're just using Linux, too
- They're stripping all the good stuff to optimize their system
- They're not good at optimizing
With these insights, the observation that the weak CPU of the CH3SNAS was not even remotely able to saturate its Gbit NIC (only ~20MByte/s), and the strong desire for a more flexible storage solution, I decided to build my own NAS from off-the-shelf computer parts - and have never looked back.
What OS should I use?
My original idea was to build a regular PC, just without any fancy graphics card: a cheap but reasonably powerful CPU and loads of large drives. Then put Debian Linux on there and configure all the services like Samba from scratch... well, actually I only needed Samba, but with decent performance.
No big deal and easily done, right? But then I discovered OpenMediaVault. Don't get me wrong, I'm more than capable of installing, configuring and maintaining my Linux systems, and happy to have something to tinker with, but OMV looked like it would be easy to use (compared to e.g. Openfiler) - and by that I mean "keep me from tinkering too much with it, losing terabytes of data in the process". And it was just a regular Debian Linux under the hood, so I'd still have all the freedom I want.
After 4 years of using it, I'm still convinced that going with OMV was the best decision compared to the alternatives out there - and I tried a couple of them - so this would be my OS recommendation for everybody looking into building their own NAS, too.
Current Build
Here's a list of the parts I'm using in my current NAS. The cost without the storage drives was ~400€, if I recall correctly. That's about half of what I would have paid for a 5-bay consumer NAS with a lot less performance, but a big brand name on it.
Blockbox (2012)
- Motherboard: Asus E35M1-I
- CPU: onboard AMD E-350
- RAM: 2x 2GB DDR3 1333MHz
- SSD: OCZ Agility 3 60GB, later replaced by a Samsung 840 128GB
- HDD: 6x WD Green 2TB (WD20EARX)
- Case: Lian-Li PC-Q08 black
- Add-on SATA2 controller for the SSD
The HDDs are configured in a RAID5 (mdadm) with ~9TB of usable storage space and have been running 24/7 for the last 4 years. No failures at all. That can't be said of the OCZ SSD I was using for the OS, though. The CPU is more than sufficient to saturate the Gbit link when reading and writing to the disks, and shows only moderate load when Syncthing hashes new files (like my nightly backups, more on that later).
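The ~9TB figure follows from RAID5's (n-1) capacity formula plus the decimal-TB-to-binary-TiB conversion; a quick sanity check in shell, using the disk count and size from the build above:

```shell
#!/bin/sh
# RAID5 usable capacity: (n - 1) disks' worth of space.
n=6            # number of disks in the array
size_tb=2      # size per disk in vendor (decimal) terabytes

raw_tb=$(( (n - 1) * size_tb ))
# Convert decimal TB to binary TiB, which is what the OS reports.
tib=$(( raw_tb * 1000000000000 / 1099511627776 ))

echo "${raw_tb} TB raw -> ~${tib} TiB reported"   # 10 TB raw -> ~9 TiB reported
```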
Overall this system is still more than sufficient, performance-wise. One could even say it is quite a bit over-powered for its purpose. Unfortunately it's also maxed out, storage-wise.
Upgrade Options?
Back in 2012 I was confident that going from 1TB of available space to ~10TB would last me a lot longer than 4 years, but now (2016) I'm down to ~500GB of space left, which brings me into the Danger Zone™ of <5%. So what are my options?
- More disks: not possible, only 6 SATA ports on the board
- Bigger disks: possible, but would require replacement of all 6 disks, which is quite expensive
- New NAS: possible, also quite expensive
So the choice is between replacing all 6 disks with 6TB ones, which would cost roughly 1700€, get me ~27TB in RAID5 and limit my options for further expansion - or building a new NAS with fewer drives (3x 6TB) for about the same amount of money, which means less available space to start with (~11TB in RAID5), but more SATA ports and thus a better upgrade path for the future.
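The same (n-1) RAID5 arithmetic makes the trade-off explicit (disk counts and sizes as stated above):

```shell
#!/bin/sh
# Option A: replace all 6 disks in the existing NAS with 6TB ones.
echo "Option A: $(( (6 - 1) * 6 )) TB raw RAID5 (~27 TiB), all SATA ports used"
# Option B: new NAS starting with 3x 6TB, room to grow to 8 drives.
echo "Option B: $(( (3 - 1) * 6 )) TB raw RAID5 (~11 TiB), 5 SATA ports free"
```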
Considering the urgent need for a bit more disk space, but not that much right away, I went for the new NAS.
New Build
It took me 2 days (or rather: nights) to come up with this build, and I'll only be able to really call it a good decision after it's installed, configured and running as expected, but for each component there's a good reason why I chose this particular one and not something else.
Blockbox (2016)
- Motherboard: Supermicro X11SSH-F (MBD-X11SSH-F)
- CPU: Intel i3 6100 (BX80662I36100)
- RAM: 2x Kingston 4GB DDR4 2133 ECC (KVR21E15S8/4)
- SSD: Samsung SM951 NVMe M.2 128GB (MZ-VPV128)
- HDD: 3x WD Red Pro 6TB (WD6002FFWX)
- Case: Fractal Design Node 804 (FD-CA-NODE-804-BL-W)
- PSU: be quiet! DARK POWER PRO 11 650W (BN251)
Motherboard
First off, the motherboard. Supermicro has become a quite respected manufacturer, at least in my opinion. They're not wasting their money on huge marketing campaigns and fancy graphics on the boxes their boards ship in, but rather on simple and well thought-out devices. I don't need airflow guides made from carbon fiber and more blinking lights than your average Christmas tree. Here's what I need, and what their board offers:
- 8 SATA ports, so i can keep adding drives
- M.2 NVMe port, so the OS (and Cache) SSD doesn't block a precious SATA port
- IPMI, so i can remotely manage it
- dual Gbit NICs, just because.
CPU
The CPU was probably the only purely rational decision: it was the smallest and cheapest one available that works with the motherboard. Considering that the 4+ year old CPU in the old NAS was mostly twiddling its thumbs, I don't think this was a bad choice. But hey, I can upgrade to a real Xeon if I feel like it.
RAM
Could I have added a bit more RAM, especially since it's currently dirt cheap? Sure. But why? There are still 2 additional slots available, so maybe later. But I really wanted to have ECC RAM! The markup compared to non-ECC RAM was insignificant, and with all the other components being geared towards reliability, this was an easy decision. And yes, the board and the CPU both support ECC RAM.
System/Cache Drive
Now regarding the SSD for the OS (and cache): this was initially just a "How cool would it be to have an M.2 SSD in there?" thought, especially considering that 128GB is a lot of wasted (and expensive) space when OpenMediaVault is happy with less than 5GB, and lots of people actually use cheap USB sticks for their system drive, which they replace every couple of months. On the one hand I wanted a reliable system, so a USB stick that wears out pretty quickly was definitely not an option; on the other hand I didn't want to give up a SATA port, which would mean one disk less for the RAID array. I could have added a PCIe SATA controller and plugged a regular SSD into that, but the combined cost of those two would be higher than just buying an M.2 NVMe SSD. And NVMe means that the drive is really fast compared to normal SATA SSDs.
But doesn't it make even less sense then to use it as the system drive, which doesn't even need to be fast, given that the system boots maybe once every couple of months? Yes, if I used it only as the OS drive! It does make a lot of sense if I also use it as a cache drive, via bcache. About 100GB of that NVMe SSD will be set aside for the cache, which should be plenty for most purposes, while OMV will use the remaining ~20GB for itself. By using a (really, really fast) NVMe SSD as a cache, I'll get the benefit of a prohibitively expensive hardware RAID controller with a battery-backed RAM cache (so that you don't lose any data in case of a power outage during a write operation), while still being able to pay my rent.
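Setting up bcache along those lines could look roughly like this. The device names (/dev/md0 for the RAID array, /dev/nvme0n1p2 for the ~100GB cache partition) are placeholders for illustration, not the actual layout of this build:

```shell
#!/bin/sh
# Sketch of a bcache setup: md RAID array as backing device, an NVMe
# partition as the cache. Device names are examples only -- adjust to
# your system. WARNING: make-bcache writes metadata and is destructive.

# Format the RAID array as the backing device...
make-bcache -B /dev/md0
# ...and the NVMe partition as the cache device.
make-bcache -C /dev/nvme0n1p2

# Attach the cache set to the backing device (UUID as printed by
# make-bcache, or read back with bcache-super-show).
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# Writeback mode gives the "battery-backed write cache" behaviour
# described above; the default is the safer writethrough.
echo writeback > /sys/block/bcache0/bcache/cache_mode
```

The filesystem then goes on /dev/bcache0 instead of /dev/md0.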
Storage Disks
On to the most important parts, the storage drives.
- Why Western Digital? Because I've had very few issues with their drives, and their RMA process is quick and comfortable. That alone seals the deal.
- Why Red Pro and not regular Red (or any other color) drives? Because Red Pro drives have a 5-year warranty, compared to 3 years for the Red ones. I know I'm pushing my luck with those Green drives that I've been using for 4 years already, and considering the size of the new drives, I want them to last (or be RMA'd) as long as possible.
- Speaking of size, why 6TB and not 8TB (or 4TB)? Red Pro drives in 8TB are not yet available, so the maximum is 6TB. Anything less than 6TB would only get me to the point of using all SATA ports even sooner.
The plan is to start with 3 drives in RAID5 and add drives regularly, but I will probably reshape the array to RAID6 when I need the 5th drive (so I'll be adding 2 drives then, bringing it to 6). Depending on price and availability, I may also start adding bigger drives, as long as it's not already the last 1-2 drives (because that would make it too expensive to upgrade all the existing ones, too).
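Growing the array along that plan could be sketched like this with mdadm; array and disk names (/dev/md0, /dev/sdX) are illustrative, and a reshape should only ever be started with a current backup:

```shell
#!/bin/sh
# Sketch: growing an mdadm RAID5, then reshaping it to RAID6.
# Device names are placeholders, not this build's actual devices.

# Add a 4th disk and grow the RAID5 from 3 to 4 active devices.
mdadm --add /dev/md0 /dev/sdd
mdadm --grow /dev/md0 --raid-devices=4

# Later: add the 5th and 6th disks and migrate RAID5 -> RAID6.
mdadm --add /dev/md0 /dev/sde /dev/sdf
mdadm --grow /dev/md0 --level=6 --raid-devices=6 \
      --backup-file=/root/md0-reshape.bak

# Once the reshape finishes, grow the filesystem on top, e.g.:
# resize2fs /dev/md0
```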
Case
I really liked the cube form factor of my old NAS, but just like the motherboard in there only has 6 SATA ports, the case only holds 6 3.5" drives. As there are not many other mATX cases with that form factor and enough space for 8 (or more) drives, and me being a big fan of Fractal Design's other cases, this decision was also rather easy. Their cases are not only solid and have good airflow, they also put a lot of effort into details like sleeved cables, silent fans and great cable routing. And they look awesome (even though I could have done without the side window).
PSU
Just like I'm using Fractal Design cases for both my workstation and server, I'll rely on the same PSU in all builds. The modular design of the Dark Power series allows me to connect just those cables I really need for the build, and leave the unused ones in the box instead of in the case. They are efficient, pretty much completely silent and absolutely reliable, which is probably even more important than having reliable drives. If you're going to cut corners because you're on a budget, skip the nice case and put it all in a cardboard box, but don't try to save money by going for a cheap PSU. Using something off-brand in your gaming rig may be alright if all you're losing is your Solitaire high score, but it's not acceptable if we're talking about your precious data. Is 650W maybe a bit much? Probably by a factor of 2-3, yes. Better safe than sorry.
Configuration
Some thoughts, reminders and decisions regarding the configuration of the OS and filesystem:
FakeRAID vs. mdadm
- FakeRAID can't be recovered on a system that's different from the original one.
- Performance advantage is negligible.
- Still needs mdadm to do the heavy lifting.
- Really only needed if the RAID set must be bootable.
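For reference, the plain-mdadm route those points argue for is a single command; device names are placeholders:

```shell
#!/bin/sh
# Sketch: creating a 3-disk software RAID5 with mdadm, no FakeRAID
# involved. Device names are examples only.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sda /dev/sdb /dev/sdc

# Persist the array definition so it assembles on boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

Because the metadata lives on the disks themselves, this array can be reassembled on any Linux box - unlike a FakeRAID set tied to its original controller.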
ZFS vs. btrfs
- ZFS's overhead only makes sense if you're using RAIDZ
- A RAIDZ vdev doesn't grow. You can only add new vdevs, or replace all(!) disks in a vdev.
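The two RAIDZ growth paths mentioned above, sketched with zpool (pool and device names are placeholders; this reflects ZFS as of the time of writing):

```shell
#!/bin/sh
# Sketch: the only ways to grow a RAIDZ-based pool.
# Pool and device names are placeholders.

# Option 1: add a whole new vdev (another RAIDZ group) to the pool.
zpool add tank raidz /dev/sde /dev/sdf /dev/sdg

# Option 2: replace every disk in the vdev with a bigger one, one at
# a time, letting each resilver finish; the extra capacity only
# appears after the last disk has been replaced.
zpool replace tank /dev/sda /dev/sdh
zpool status tank   # wait for the resilver, then repeat for the rest
```

Compare that with mdadm, where a single disk can be added to an existing RAID5 - the main reason this build stays with mdadm.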
bcache & RAID5
- See "Backing device alignment" in the kernel's bcache documentation: https://www.kernel.org/doc/Documentation/bcache.txt
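What that section boils down to: bcache puts its data at an offset behind the superblock, and on a RAID backing device that offset should be a multiple of the stripe width or every cached write straddles stripes. A hedged sketch - the numbers follow the example in the kernel docs, not this build's actual geometry, and the offset unit should be verified against your make-bcache man page:

```shell
#!/bin/sh
# Sketch: aligning bcache's data offset to the RAID stripe, per the
# "Backing device alignment" section of bcache.txt. The kernel docs
# suggest a 64k stripe times a product of small primes (e.g.
# 64k * 2520 = 161280k) to stay aligned for many spindle counts.
# 161280 KiB = 322560 sectors of 512 bytes (assumed unit).
make-bcache --data-offset 322560 -B /dev/md0
```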