DIY NAS
I've learned two things in the past: you can never have enough money, and you can never have enough disk space. Okay, there's a third thing i've learned: those two above are mutually exclusive. My first NAS was a Conceptronic CH3SNAS with two 1TB drives as RAID1 and a Gbit NIC. The existing OS on there was pretty soon "extended" with FFP to work around some issues and also provide access to the system's shell.
Having shell access taught me some things about off-the-shelf consumer NAS and their manufacturers:
- They're just using Linux, too
- They're stripping all the good stuff to optimize their system
- They're not good at optimizing
With these insights, the observation that the weak CPU of the CH3SNAS was not even remotely able to provide Gbit bandwidth (only ~20MByte/s) and the strong desire to have a more flexible storage solution, i decided to build my own NAS from off-the-shelf computer parts - and have never looked back.
What OS should i use?
My original idea was to basically just build a regular PC - without any fancy graphics card, with a cheap but reasonably powered CPU and loads of large drives - put Debian Linux on there and configure all the services like Samba from scratch... well, actually i only needed Samba, but with decent performance.
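For reference, the Samba part really is that small - a minimal sketch of what i had in mind on plain Debian (share name, path and user are made-up examples):

  # install Samba and add a minimal share to its config
  apt-get install samba
  # then, in /etc/samba/smb.conf:
  #   [storage]
  #       path = /srv/storage
  #       read only = no
  #       valid users = daniel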
No big deal and easily done, right? But then i discovered OpenMediaVault. Don't get me wrong, i'm more than capable of installing, configuring and maintaining my Linux systems, and happy to have something to tinker with, but OMV looked like it would be easy to use (compared to e.g. Openfiler) and by that i mean "keep me from tinkering too much with it, losing Terabytes of data in the process". And it was just a regular Debian Linux under the hood, so i'd still have all the freedom i want.
After 4 years of using it, i'm still convinced that going with OMV was the best decision, compared to the alternatives out there - and i tried a couple of them - so this would be my OS recommendation for everybody looking into building their own NAS, too.
Current Build
Here's a list of the parts i'm using in my current NAS. The cost without the storage drives was ~400€, if i recall correctly. That's about half of what i would have paid for a consumer NAS with only 5 bays, a lot less performance, but a big brand name on it.
Blockbox (2012)
- Motherboard: Asus E35M1-I
- CPU: onboard AMD E-350
- RAM: 2x 2GB DDR3 1333MHz
- SSD: Samsung 840 128GB (replaced the original OCZ Agility 3 60GB)
- HDD: 6x WD Green 2TB WD20EARX
- Case: Lian-Li PC-Q08 black
- Add-on SATA2 controller for the SSD
The HDDs are configured in a RAID5 (mdadm) with ~9TB usable storage space and have been running 24/7 for the last 4 years. No failures at all. That can't be said of the OCZ SSD i was using for the OS, though. The CPU is more than sufficient to saturate the Gbit link when reading and writing to the disks and shows only moderate load when Syncthing hashes new files (like my nightly backups, more on that later).
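Keeping an eye on an array like this doesn't take much - the checks i'd run look roughly like this (device names are examples):

  # overall state of all mdadm arrays
  cat /proc/mdstat
  # details of the RAID5 set (assuming it's /dev/md0)
  mdadm --detail /dev/md0
  # SMART health of an individual member drive (smartmontools)
  smartctl -H /dev/sda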
Overall this system is still more than sufficient, performance-wise. One could even say it is quite a bit over-powered for its purpose. Unfortunately it's also maxed out, storage-wise.
Upgrade Options?
Back in 2012 i was confident that going from 1TB of available space to ~10TB would last me a lot longer than 4 years, but now (2016) i'm down to ~500GB space left, which brings me into the Danger Zone™ of <5%. So what are my options?
- More disks: not possible, only 6 SATA ports on the board
- Bigger disks: possible, but would require replacement of all 6 disks, which is quite expensive
- New NAS: possible, also quite expensive
So the choice is between replacing all 6 disks with 6TB ones, which would cost roughly 1700€, get me ~27TB in RAID5 and limit my options for further expansion, or building a new NAS with fewer drives (3x 6TB), which means less available space to start with (11TB in RAID5), but more SATA ports and therefore a better upgrade path for the future, for about the same amount of money.
Considering the urgent requirement for a bit more disk space, but not that much right away, i went for the new NAS.
New Build
It took me 2 days (or rather: nights) to come up with this build, and i'll only be able to really call it a good decision after it's installed, configured and running as expected, but for each component there's a good reason why i chose this particular one and not something else.
Blockbox (2016)
- Motherboard: Supermicro X11SSH-F (MBD-X11SSH-F)
- CPU: Intel i3 6100 (BX80662I36100)
- RAM: 2x Kingston 4GB DDR4 2133 ECC (KVR21E15S8/4)
- SSD: Samsung SM951 NVMe M.2 128GB (MZ-VPV128)
- HDD: 3x WD Red Pro 6TB (WD6002FFWX)
- Case: Fractal Design Node 804 (FD-CA-NODE-804-BL-W)
- PSU: be quiet! DARK POWER PRO 11 650W (BN251)
Motherboard
First off, the motherboard. Supermicro has become a quite respected manufacturer, at least in my opinion. They're not wasting their money on huge marketing and fancy graphics on the boxes their boards ship in, but rather on simple and well thought-out devices. I don't need airflow guides made from carbon fiber and more blinking lights than your average Christmas tree, but here's what i need, and what their board offers:
- 8 SATA ports, so i can keep adding drives
- M.2 NVMe port, so the OS (and Cache) SSD doesn't block a precious SATA port
- IPMI, so i can remotely manage it (see the ipmitool sketch after this list)
- dual Gbit NICs, just because.
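The IPMI point deserves a quick illustration - a minimal sketch of remote management with ipmitool (host, user and password are placeholders):

  # power state and remote power control via the BMC
  ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P secret chassis power status
  ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P secret chassis power on
  # sensor readings (temperatures, fan speeds, voltages)
  ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P secret sensor list
  # serial-over-LAN console, handy when the OS won't come up
  ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P secret sol activate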
CPU
The CPU was probably the only rational decision: it was the smallest and cheapest one available that can be used with the motherboard. Considering that the 4+ year old CPU in the old NAS was mostly twiddling its thumbs, i don't think this was a bad choice. But hey, i can upgrade to a real Xeon if i feel like it.
RAM
Could i have added a bit more RAM, especially because it's currently dirt cheap? Sure - but why? I'm not going to use ZFS! And there are still 2 additional slots available, so maybe i'll add more RAM later. I really wanted to have ECC RAM, though! The markup compared to non-ECC RAM was insignificant, and with all the other components being geared towards reliability, this was an easy decision. And yes, the board and the CPU both support ECC RAM.
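As a note to self, whether ECC is actually active can be verified from the running system - a small sketch (requires the dmidecode and edac-utils packages):

  # reported error correction type of the installed memory
  dmidecode -t memory | grep -i 'error correction'
  # corrected/uncorrected error counters via the kernel's EDAC subsystem
  edac-util --report=full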
System/Cache Drive
Now regarding the SSD for the OS (and Cache), this was initially just a "How cool would it be to have an M.2 SSD in there?" thought. Those 128GB are a lot of wasted (and expensive) space for just the system drive, as OpenMediaVault is happy with less than 5GB. Lots of people actually just use cheap USB sticks and replace them every couple of months. So on the one hand i wanted a reliable system, which is why using a USB stick that wears out pretty quickly wouldn't be a good idea, but on the other hand i didn't want to give up a SATA port for a regular SSD, which would mean one disk less in the RAID set. I could have added a PCIe SATA controller and plugged a regular SSD into that, but the combined cost of such a controller and a regular SSD would be the same or even higher than just buying an M.2 NVMe SSD. NVMe means that the drive is really fast compared to normal SATA SSDs, as it's using the PCIe bus. We're talking about >2GByte/s here.
But doesn't it make even less sense then to use it as the system drive, which neither needs to be that fast when the system boots maybe once every couple of months, nor needs that much unused space? Yes, if i'd only use it as the OS drive! It does make a lot of sense if i also use it as a cache for the RAID array, using bcache. About 100GB of that NVMe SSD will be set aside for the cache, which should be plenty for most purposes, while OMV will use the remaining ~20GB for itself. By using a (really, really fast) NVMe SSD as a cache, i'll get the benefit of a prohibitively expensive hardware RAID controller with a battery-backed RAM cache (so that you don't lose any data in case of a power outage during a write operation), while also still being able to pay my rent.
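To make that split concrete, a rough sketch of the intended layout on the NVMe drive (device name and exact sizes are my assumptions, the system partition would be created by the installer anyway):

  # /dev/nvme0n1p1   ~28GB   OpenMediaVault system partition
  # /dev/nvme0n1p2  ~100GB   cache partition, later turned into the bcache caching device
  # e.g. carving out the cache partition on the remaining space (GPT, named 'cache'):
  parted /dev/nvme0n1 --script mkpart cache 28GiB 100%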
Storage Disks
On to the most important parts, the storage drives.
- Why Western Digital? Because i've had very few issues with their drives, and their RMA process is quick and painless. That alone seals the deal.
- Why Red Pro and not regular Red (or any other color) drives? Because Red Pro drives have a 5 year warranty, compared to 3 years for the Red ones. I know i'm pushing my luck with those Green drives that i've been using for 4 years already, and considering the size of the new drives, i want them to last (or be RMA'd) as long as possible.
- Speaking of size, why 6TB and not 8TB (or 4TB)? Red Pro drives in 8TB are not yet available, so the maximum is 6TB. Anything less than 6TB would only get me to the point of using all SATA ports even sooner.
The plan is to start with 3 drives in RAID5 and add drives regularly, but i will probably reshape the array to RAID6 when needing the 5th drive (so i'll be adding 2 drives then, bringing it to 6 drives). With all 8 ports filled that eventually gives me 6 drives' worth of usable space, a total of ~33TB, with double parity (up to 2 drives may fail at the same time). Depending on price and availability, i may also start adding bigger drives, as long as i'm not already down to the last 1-2 free ports (because then it would be too expensive to upgrade all the existing drives, too).
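For later reference, a rough sketch of how that growing and reshaping could look with mdadm (device names are examples, and a reshape takes many hours and shouldn't be done without a current backup):

  # add a 4th drive and grow the existing RAID5 set onto it
  mdadm --add /dev/md0 /dev/sde
  mdadm --grow /dev/md0 --raid-devices=4
  # later: add two more drives and convert RAID5 -> RAID6 in one reshape
  mdadm --add /dev/md0 /dev/sdf /dev/sdg
  mdadm --grow /dev/md0 --level=6 --raid-devices=6 --backup-file=/root/md0-reshape.bak
  # afterwards the layers on top (bcache backing device, btrfs) still need to be grown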
Case
I really liked the cube form factor of my old NAS, but just like the motherboard in there only has 6 SATA ports, the case only holds 6 3.5" drives. As there are not many other mATX cases with that form factor and enough space for 8 (or more) drives, and since i'm a big fan of Fractal Design's other cases anyway, this decision was also rather easy. Their cases are not only solid and have good airflow, they also put a lot of effort into details like sleeved cables, silent fans and great cable routing. And they look awesome (even though i could have done without the side window).
PSU
Just as i'm using Fractal Design cases for my workstation and server, i rely on the same PSU series in all my builds. The modular design of the Dark Power series allows me to connect just those cables i really need for the build, and leave the unused ones in the box instead of in the case. They are efficient, pretty much completely silent and absolutely reliable, which is probably even more important than having reliable drives. If you're going to cut corners because you're on a budget, skip the nice case and put it all in a cardboard box, but don't try to save money by going for a cheap PSU. Using something off-brand in your gaming rig may be alright if all you're losing is your Solitaire highscore, but it's not acceptable if we're talking about your precious data. Is 650W maybe a bit much? Probably by a factor of 2-3, yes. Better safe than sorry.
Configuration
Some thoughts, reminders and decisions regarding the configuration of the OS/FS.
FakeRAID vs. mdadm
- FakeRAID can't be recovered on a system that's different from the original one.
- Performance advantage is negligible.
- Still needs mdadm to do the heavy lifting.
- Really only needed if the RAID set must be bootable.
ZFS vs. btrfs
- ZFS overhead only makes sense if using RAIDZ, but a RAIDZ vdev can't be grown with new disks, only by adding additional RAIDZ vdevs to the pool.
- btrfs RAID5/6 is broken at the moment
- choice: mdadm RAID5/6 with btrfs on top
bcache & RAID5
- CTRL+F "Backing device alignment" in the kernel's bcache documentation: https://www.kernel.org/doc/Documentation/bcache.txt
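The gist of that section: bcache puts its metadata at the start of the backing device, so the start of the actual data should be aligned to the RAID stripe width to avoid expensive read-modify-write cycles. For a 3-drive RAID5 with mdadm's default 512KiB chunk that's a full stripe of 2 x 512KiB = 1MiB, i.e. a --data-offset of 2048 (512-byte) sectors for make-bcache - but those numbers are assumptions, so check the actual chunk size first:

  # confirm the chunk size of the array before picking the bcache data offset
  mdadm --detail /dev/md0 | grep -i chunk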
Hierarchy
1. HDDs via AHCI, not FakeRAID
2. mdadm RAID5 set across all HDDs
3. bcache (SSD partition as caching device, RAID set as backing device)
4. btrfs
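A minimal sketch of how this stack could be assembled, assuming the 3 data drives show up as /dev/sd[b-d], the ~100GB cache partition is /dev/nvme0n1p2, and using the --data-offset worked out above (all device names and numbers are examples, not the final setup):

  # 1) RAID5 set across the three WD Reds
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
  # 2) bcache: the RAID set becomes the (stripe-aligned) backing device, the NVMe
  #    partition the caching device; creating both in one go attaches them automatically
  make-bcache --data-offset 2048 -B /dev/md0 -C /dev/nvme0n1p2
  # writeback mode also caches writes (the default is writethrough)
  echo writeback > /sys/block/bcache0/bcache/cache_mode
  # 3) btrfs on top of the combined bcache device
  mkfs.btrfs -L storage /dev/bcache0
  mount /dev/bcache0 /srv/storage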
Coincidentally, this seems to be similar to how Synology is planning to use btrfs in DSM 6.0, with the exception that they are adding another layer before mdadm assembles the RAID set (splitting the disks into smaller partitions with LVM to be able to use differently sized disks without losing available space on the larger disks).