
It's the NAS (Network Attached Storage) Thread, yo.

Just finished my 3rd NAS/SAN.
Here are the Internals...

[Image: chassis internals]


It's a Supermicro 36-bay chassis with redundant 1280W Platinum PSUs and dual-channel SAS expanders connected to two LSI 9207 HBAs.

2x Xeon E5-2609 v2 CPUs with 128GB of ECC RAM. Intel X540 10GbE for networking.

Drives will be 4TB ES.3 SAS units alongside 6TB WD Reds, in various ZFS vdev/zpool configurations.

Let me know if anyone is interested in any more info.

Nice. This is the secret sauce of building cheaper and better storage than the name brand package solutions.

Supermicro has incredible enterprise hardware.
 
So, anyone have experience with catastrophic NAS failures? I have a Netgear ReadyNAS Ultra 6 (6x3TB, one-disk redundancy, X-RAID) and one disk died (without a doubt: clicking sounds, everything). I bought a new one, put it in, and during reconstruction another disk died. The volume disappeared. I was kind of in a panic and tried a lot of stupid things; nothing worked, but I finally came to the conclusion that maybe it was just a bad connection or some S.M.A.R.T. errors on disk 1. All 5 of the 6 disks are on and showing green status (at some point all were yellow, or listed as dead), but there's no way to get the volume recognized. Unfortunately it was the NAS that had all my personal stuff (pictures, etc.) on it, and I've mostly lost all of it. I used ReclaiMe (it has taken weeks!) and recovered some of the stuff, but is there anything else I can do? The disks are all working, they're just not recognized as a volume by the NAS.
Thanks.
 

I think with some proprietary RAID setups you can access the drives via Linux; at least I think you can with unRAID and/or Synology's SHR. Not sure about X-RAID though.

Edit: Found this http://www.readynas.com/forum/viewtopic.php?f=7&t=58709 It looks like this will let you mount the drives under Ubuntu with a proper folder structure versus just a straight data dump.
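For anyone wary of the Linux part, the general idea (assuming the Ultra's volume sits on standard Linux md RAID with LVM on top, which is what that Ubuntu approach relies on) looks roughly like this; the logical-volume path is a placeholder, so check what `lvscan` actually reports:

```python
#!/usr/bin/env python3
"""Rough sketch: assemble a ReadyNAS volume read-only from an Ubuntu live session.

Assumes the disks hold standard Linux md RAID with LVM on top (the premise of
the readynas.com thread above). The mount path and the logical-volume name are
placeholders -- substitute whatever `lvscan` reports on your own disks.
"""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["sudo", "mdadm", "--assemble", "--scan"])   # assemble any md arrays found
run(["sudo", "vgscan"])                          # find LVM volume groups on top
run(["sudo", "vgchange", "-ay"])                 # activate them
run(["sudo", "lvscan"])                          # list logical volumes (note the data LV's path)

run(["sudo", "mkdir", "-p", "/mnt/readynas"])
# Mount read-only so nothing is written to an already-degraded array.
# "/dev/VG/LV" is a placeholder for the path lvscan printed.
run(["sudo", "mount", "-o", "ro", "/dev/VG/LV", "/mnt/readynas"])
```

If the array is too degraded to assemble at all, it's probably safer to stop there and stick with file-recovery tools like ReclaiMe rather than forcing anything.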

Not to be a jerk, but use this as a lesson that RAID != backups.
 

I know, and I always said that to myself, but I was dumb enough to think one-disk redundancy was enough (it never is: someone can break in and steal my stuff, or a power failure can fry the entire RAID).

Thanks for the help, I'll try and see what I can do. I'm not really good with Linux; I was hoping there was some kind of easier command to make the NAS recheck for volumes. I guess I'll have to try to figure out how to connect to it via Linux, or maybe spend more money (that I already don't have) and let the Netgear guys telnet into it or something :\
 
You can install Linux to a USB drive, and there are instructions all over for how to do it. Once you're in, you can probably just put a drive into an external enclosure and access it that way with the tool mentioned in that Netgear thread. If it works and you can access the files, I would then think about the next step, which is either A) replacing all the drives or B) backing all that data up to an external drive.
 
Has anyone with an unRAID system upgraded their drives? I have 5x2TB drives for 8TB usable, and I'm considering upgrading to 4TB drives. Just wondering how long it takes to rebuild parity, etc. (I'd have to swap the parity drive too, which I'm a bit worried about.)
 

It's going to depend on the drive, I think. When I've done mine (1TB) it took 4-6 hours, IIRC.
 
Hey, I'm new to this NAS stuff and just got one of those Synology DiskStation DS214s. Working great so far. Pretty easy to set up.

My router is a big bottleneck right now, so can anyone recommend a decent but cheap gigabit router?
 
Still haven't bought a NAS yet, as I read some reports about Synology units' RAM failing after one year.

I'm hoping Microsoft comes out with a solution in Windows 10 (for home users), as I would feel comfortable with Storage Spaces and ReFS in a DIY box.
 
Hey, I'm new to this NAS stuff and just got one of those Synology DiskStation DS214s. Working great so far. Pretty easy to set up.

My router is a big bottleneck right now, so can anyone recommend a decent but cheap gigabit router?

How is everything connecting? Are you noticing slow speeds over WiFi, or is your computer wired to the same router as your NAS? What kind of drives did you put in? Did you do SHR or RAID?
 
I have the NAS hooked straight to the router. I've got two 1TB WD Reds in there.

Haven't managed to do many tests yet, but I noticed slow speeds over WiFi. I have a shit cheap router, so I thought I should upgrade to a gigabit one for an improvement.
 
Still haven't bought a NAS yet, as I read some reports about Synology units' RAM failing after one year.

I'm hoping Microsoft comes out with a solution in Windows 10 (for home users), as I would feel comfortable with Storage Spaces and ReFS in a DIY box.

If you're comfortable with DIY, just build your own ZFS-based NAS already.
 
I have the NAS hooked straight to the router. I've got two 1TB WD Reds in there.

Haven't managed to do many tests yet, but I noticed slow speeds over WiFi. I have a shit cheap router, so I thought I should upgrade to a gigabit one for an improvement.

Just use Server 2012 R2; it has great file server options.
 
I have the NAS hooked straight to the router. I've got two 1TB WD Reds in there.

Haven't managed to do many tests yet, but I noticed slow speeds over WiFi. I have a shit cheap router, so I thought I should upgrade to a gigabit one for an improvement.

Gigabit isn't going to improve anything if you're connecting over WiFi. Gigabit will only improve speed for devices connected directly to the router. If you want better speeds over WiFi you can try going to N (if you're not there already) or spending more for AC.

Even when I'm in the same room as my router and connected via N, I don't see huge jumps in speed, maybe a steady 1MB/s more.
 
This is not true. On N or AC you can go beyond 100 Mbit/s, so connecting the server via gigabit will gain you something. I currently get 35 MB/s over AC, which is roughly 280 Mbit/s, so there clearly is a gain from having a gigabit connection between the server and the router.
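A quick sanity check on the arithmetic (the whole argument is just the 8:1 bits-to-bytes conversion):

```python
# Is a 100 Mbit/s (Fast Ethernet) port the bottleneck for the Wi-Fi speeds above?
def mbytes_to_mbits(mb_per_s: float) -> float:
    return mb_per_s * 8  # 1 byte = 8 bits

wifi_ac = 35  # MB/s observed over 802.11ac, per the post above
print(mbytes_to_mbits(wifi_ac))        # 280 Mbit/s
print(mbytes_to_mbits(wifi_ac) > 100)  # True -> a 100 Mbit/s port would cap it
```

So even a Wi-Fi-only client can outrun a 100 Mbit/s uplink between the router and the NAS.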
 

My point was more that you shouldn't assume a gigabit router means gigabit WiFi bandwidth.
 

I think it's nearly impossible to buy a router that tops out at 802.11g these days, so if he gets one with 802.11n or 802.11ac, he's going to benefit from the gigabit port. He should assume there's likely a benefit to the WiFi bandwidth.
 

Can't discount reception though. I had to put a spare router on our main floor because the FiOS router in the basement is so damn weak we'd lose reception just by leaning forward.
 

Yeah. Generally I agree with Marty, but depending on location it's entirely possible for your WiFi gear to never hit 100Mb/s. But like he said, these days you'd have to try hard to find a modern WiFi router that doesn't support gigabit on at least one of its ports (which you can then hook up to a switch if needed).
 
Decided to pull the trigger on a DS415+ and 4x4TB Deskstars. I wanted to wait for a sale on the drives, but my NAS was down to around 200GB free, and I'm about to start a new job where it won't be as easy to tinker during the day.
 
Do you use a ZFS setup?

Yes. I started with 3x2TB disks in RAID-Z1 running off an old Athlon 64 on FreeBSD 8 many years ago. After two upgrade cycles I now have a Norco 20-bay 4U enclosure with a Xeon E3 v3 (Haswell), a Supermicro X10SL7-F (with a built-in LSI HBA), and a second LSI HBA (IBM M1115).

For drives, I currently have 8x3TB + 6x2TB in two RAID-Z2 vdevs for 26TB of usable space, plus 6 free bays for another RAID-Z2 vdev that would add 4 x nTB of usable space.

It's all running Ubuntu Server 14.04.1 LTS. ZFS on Linux is quite mature at this point and I've been using it without issue for years (aside from one botched upgrade that required a reinstall of ZoL). With a friendly Linux distro like Ubuntu I can have a fully functional file server (ZFS, Samba, NFS, Netatalk, rsync) set up in less than an hour, and the box also has the flexibility to take on any other purpose (Plex/XBMC, torrent/newsgroup downloaders, etc.) without relying on vendor-specific versions.
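For anyone following the capacity math: usable space in a RAID-Z2 vdev is roughly (drives - 2) × drive size, summed across the vdevs in the pool (ignoring ZFS overhead and the TB-vs-TiB difference):

```python
# Rough usable-capacity math for RAID-Z vdevs. Ignores ZFS metadata overhead
# and TB-vs-TiB, so treat the result as an approximation.
def raidz_usable_tb(drives: int, drive_tb: float, parity: int = 2) -> float:
    """Usable TB for one RAID-Z vdev (parity=1 for Z1, 2 for Z2, 3 for Z3)."""
    return (drives - parity) * drive_tb

pool_tb = raidz_usable_tb(8, 3) + raidz_usable_tb(6, 2)  # 18 + 8
print(pool_tb)  # 26 TB usable, matching the figure above
```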
 
So in an unexpected twist, I had an HGST Deskstar show up bad. Going to test it again before contacting Amazon, but I find it a bit amusing after Backblaze's report.
 
I have a desktop I built a few years ago with an R4 case (so lots of empty drive bays). I want to try to rip all my Blu-rays and stream them through Plex.

Would WD Reds suffice?
 

Reds carry a price premium for being designed to handle the constant running and the vibration that can happen in a NAS. If it's just going to be in your desktop, you can probably get away with a Black or even a Green. You don't need a ton of read/write power for Plex. If you think it's going to be transcoding (streaming to your phone, Roku, really any non-desktop/laptop device), I would recommend setting the transcode directory to an SSD.
 

I've been using Greens in my NAS for a couple of years now, no issues so far.
 
Wondering if anybody can give me some input on this. I've had a WHS V1 server (running on an Intel SS4200) for almost five years now, and have been increasingly wondering about how I should eventually upgrade it, since it's a bit long in the tooth and I'd really like to gain support for 4 TB drives.

I was initially thinking that I'd move to FreeNAS or something, but I've just realized that I can get Windows Server 2012 R2 Essentials for free from DreamSpark. Does anybody have any experience with that upgrade path? I've heard good things about it, and the ability to use Stablebit's DrivePool is kind of tempting (DriveExtender in WHS is a killer feature, I think).
 
Reds carry a price premium for being designed to handle the constant running and the vibration that can happen in a NAS. If it's just going to be in your desktop, you can probably get away with a Black or even a Green. You don't need a ton of read/write power for Plex. If you think it's going to be transcoding (streaming to your phone, Roku, really any non-desktop/laptop device), I would recommend setting the transcode directory to an SSD.

Agreed. Assuming your drives will be powering down when not in use, and you aren't watching Plex 24/7, I'd save the money and get something cheaper. Also, your content is quite failure-tolerant: if a drive fails you can re-rip (a pain, but doable). Alternatively, set it up as RAID with redundancy so you can recover if a drive fails.
 
Reds are the same drives as Greens, other than the firmware.

If you use WDIDLE3 to change how often the heads park (on a Green it's every 8 seconds), the only difference between a Green and a Red will be an extra year of warranty on the Red.
I was initially thinking that I'd move to FreeNAS or something, but I've just realized that I can get Windows Server 2012 R2 Essentials for free from DreamSpark. Does anybody have any experience with that upgrade path? I've heard good things about it, and the ability to use Stablebit's DrivePool is kind of tempting (DriveExtender in WHS is a killer feature, I think).

Don't move to FreeNAS unless you really have the hardware to support ZFS, as they have moved to ZFS-only. Unless you have at least 8GB of ECC RAM, look elsewhere.
 
Has anyone with an unRAID system upgraded their drives? I have 5x2TB drives for 8TB usable, and I'm considering upgrading to 4TB drives. Just wondering how long it takes to rebuild parity, etc. (I'd have to swap the parity drive too, which I'm a bit worried about.)

I have a 30+TB system and upgraded some drives to 4TB. Switching the parity drive is nothing to be worried about. Clearing the disks is the most annoying part, since the array has to be offline. One night was enough to clear three 4TB disks (I think they run in parallel, but I can't remember).

You can do preclearing and such, but it failed for me and it was just much easier to do everything from the console. Just budget a day of offline time. The parity reconstruction took about 6 hours, I think, but by then the array is back online and can be used normally, so it's not an issue.

All in all, I'd budget a day for the whole thing.
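Those numbers line up with a simple estimate: a parity sync is basically one full sequential pass over the largest disk, so time ≈ capacity ÷ sustained throughput. A rough calculator (the 130 MB/s figure is an assumed average for typical 4TB NAS drives, so plug in your own drive's real-world speed):

```python
# Back-of-the-envelope parity rebuild time: one full sequential pass over the
# largest data disk. 130 MB/s sustained is an assumption for a typical modern
# NAS drive; real rebuilds run slower if the array is being used at the same time.
def rebuild_hours(capacity_tb: float, mb_per_s: float = 130.0) -> float:
    return capacity_tb * 1e12 / (mb_per_s * 1e6) / 3600

print(f"{rebuild_hours(1):.1f} h")  # ~2.1 h minimum for a 1TB disk
print(f"{rebuild_hours(4):.1f} h")  # ~8.5 h for a 4TB disk, i.e. 'budget a day'
```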
 
Unless you have at least 8GB of ECC RAM, look elsewhere.

No, you don't NEED 8GB of ECC for ZFS. ECC is nice, and the more RAM the better, but that is far from being a requirement for ZFS.

Now, if you're going to be building a serious file server with a large number of high-capacity drives, then yes, for best performance and maximum reliability you want as much ECC RAM as possible. But for a home/SOHO setup with something like 4 drives and 6TB of usable space, you can absolutely get away with consumer hardware and 4GB of non-ECC.
 
Hey all,

So last year, after a magical TRIPLE hard-drive failure, my other half nearly lost all of her photography business work. Luckily we had a 6-month-old DVD backup set which saved her bacon, and we recovered a lot off the memory cards to fill the gaps. One hard drive died of old age, another (an external backup) was dropped (sigh), and the third backup of the backup was stolen from her work a week later.

I rebuilt her PC and bought a NAS enclosure (Synology DS214se) with a pair of 2TB WD Reds inside, set up to mirror (I forget which RAID level that is, 1?).

She then has a 2TB drive in her PC which, using FBackup, copies any changes to the NAS when she turns the PC on, and I've got her into the habit of running it manually when she's finished working, too.

Now the problem: the NAS has 200GB left but the internal drive has 1TB left, so I presume it's not actually mirroring the internal drive, just copying differences. So if she backs up, then moves a file, it backs that file up again without deleting the original copy on the NAS.

Any recommendations for a good program that will simply mirror an internal drive, including location changes, without clogging up the NAS?
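To make the distinction concrete: a true mirror also deletes files on the destination that no longer exist on the source, which is exactly what the current "copy the differences" job isn't doing. A minimal sketch of that behaviour (the paths are placeholders, and most backup tools have a "mirror" mode that does the same thing more safely):

```python
# Minimal one-way mirror: copy new/changed files to the destination, then
# remove anything on the destination that no longer exists on the source.
# SRC and DST are placeholders -- point them at the working drive and the NAS
# share, and test on a small folder first, since the delete pass is destructive.
import shutil
from pathlib import Path

SRC = Path(r"D:\Photos")            # internal working drive (placeholder)
DST = Path(r"\\NAS\backup\Photos")  # NAS share (placeholder)

def mirror(src: Path, dst: Path) -> None:
    dst.mkdir(parents=True, exist_ok=True)
    # Pass 1: copy anything new or changed (compared by size and mtime).
    for f in src.rglob("*"):
        target = dst / f.relative_to(src)
        if f.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        elif (not target.exists()
              or f.stat().st_size != target.stat().st_size
              or f.stat().st_mtime > target.stat().st_mtime):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
    # Pass 2: delete destination entries with no counterpart on the source,
    # deepest paths first so directories are empty before they're removed.
    for f in sorted(dst.rglob("*"), key=lambda p: len(p.parts), reverse=True):
        if not (src / f.relative_to(dst)).exists():
            f.unlink() if f.is_file() else f.rmdir()

mirror(SRC, DST)
```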
 
You could use CrashPlan; just ignore the online options and it's free. The data de-duplication checks should keep it from backing files up more than once, and it has compression settings too. Those are most useful when uploading over the web, but they should come in handy for local backups as well.
 
Okay, so some of you may remember that I gave my Synology DiskStation its own IP so that I could map my Movies folder. But why can't I map my other folders so that I can access them through My Computer?

It says this when I try mapping other drives via Synology Assistant:

 

Reboot your Synology and see what happens; that should clear any old connections. What are you using to connect to it?
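One thing worth trying while you're at it: map the share from Windows itself rather than through Synology Assistant, since accessing a shared folder through My Computer is just a standard SMB mapping underneath. The IP, share name, and account below are placeholders; substitute your DiskStation's address and a DSM user that actually has permissions on that folder:

```python
# Map a Synology shared folder to a drive letter using the standard Windows
# `net use` command. IP, share name, and user are placeholders for illustration.
import subprocess

subprocess.run(
    ["net", "use", "Z:", r"\\192.168.1.50\photo",
     "/user:dsm_user", "/persistent:yes"],
    check=True,  # raises if Windows refuses the mapping (e.g. bad credentials)
)
```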
 
No, you don't NEED 8GB of ECC for ZFS. ECC is nice, and the more RAM the better, but that is far from being a requirement for ZFS.

I completely disagree with this. Not having ECC would completely outweigh the benefits that ZFS provides. It's not about performance or reliability; it's the fact that ZFS does checksumming in RAM, unlike most filesystems.

There is a good chance that you will be fine without ECC RAM, but it's to the point that if you really care about the integrity of the data, I would say you need it.
 

You should have ECC if data integrity is mission-critical. That's not the same as saying you absolutely need it to run ZFS. ZFS is popular for home use precisely because it runs on cheap consumer-grade hardware; its competition there is consumer-grade turnkey solutions (from Synology et al.), nearly none of which offer ECC; and ECC can add a substantial amount of cost. So it's hardly a clear-cut rule.

If somebody is building a new dedicated system from scratch, I absolutely would recommend starting with server-grade components, and that includes ECC RAM. But for somebody who just needs a home NAS for storing their media files and wants more flexibility/control than what a consumer device offers, ZFS would be a perfectly legitimate solution without ECC.
 
It's all about the risk you are willing to accept. Bad RAM in a ZFS machine can possibly corrupt an entire pool (unlike NTFS/ext4, where it will just corrupt individual files).

The advantages of self-healing, parity, and checksumming can almost be offset by the use of non-ECC RAM. The "need" part should be up to the individual, but just like the use of RAID-5 in a traditional array, I would recommend against it.

If you are throwing old commodity hardware into a NAS, I would almost say use a traditional (older) filesystem. The ZFS filesystem itself won't provide much benefit to you, and will probably hinder performance.
 

I've been running a 12TB NAS in RAID 0 with ZFS and only 4GB of non-ECC RAM. It works fine; as the poster said, it depends on whether it's mission-critical or not.
 

RAID 0? You mean all your drives are striped?

God damn, people, do you care about your data at all?
 

If you'd read my previous posts in this thread, you would know that A) all my photography is backed up to an external drive and to online backup, B) all my movies (ripped Blu-rays) are replicated to another storage array on my XBMC computer, and C) those same movies are also on drives stored offsite.

Nothing wrong with RAID 0 in this instance.
 
It's all about the risk you are willing to accept.

Yes, and that's exactly why it's not a clear-cut, hard and fast rule. There are in fact use cases where that risk is considered acceptable.

Also, consider this: where is the data you're putting on your ZFS storage coming from? For a lot of people it's probably coming from the internet, likely transmitted over the air via Wi-Fi, through a computer that almost certainly doesn't use ECC RAM either. There are a LOT of opportunities for (rare) data corruption that people accept as a matter of course.
 
OMG I NEED this thread.

Check it, yo:

I got this older Google Search Appliance server that I want to turn into a NAS. It's a dual quad-core Xeon machine with 16GB RAM, two 250GB 15k drives, and four 1TB SATA drives.

What would be the easiest way to:
- set it up as a NAS that can be accessed remotely, and
- set it up as a media server?

Thanks!
 
If you'd read my previous posts in this thread, you would know that A) all my photography is backed up to an external drive and to online backup, B) all my movies (ripped Blu-rays) are replicated to another storage array on my XBMC computer, and C) those same movies are also on drives stored offsite.

Nothing wrong with RAID 0 in this instance.

Can I ask why? Why use the volatility of stripes and pay for off-site storage? Do you really need the random I/O of stripes? With 4GB of RAM your ARC would be killing performance anyway.

I guess you just want the absolute maximum storage space? Man, this is just weird to me.

Also, consider this: where is the data you're putting on your ZFS storage coming from? For a lot of people it's probably coming from the internet, likely transmitted over the air via Wi-Fi, through a computer that almost certainly doesn't use ECC RAM either. There are a LOT of opportunities for (rare) data corruption that people accept as a matter of course.
Except if the very rare occurrence of a bit-flip happens on said client computer, you might get a BSOD and lose some work. If that same bit-flip happens on a ZFS storage array, it can silently corrupt data during scrubs and possibly corrupt an entire zpool.
OMG I NEED this thread.

Check it, yo:

I got this older Google Search Appliance server that I want to turn into a NAS. It's a dual quad-core Xeon machine with 16GB RAM, two 250GB 15k drives, and four 1TB SATA drives.

What would be the easiest way to:
- set it up as a NAS that can be accessed remotely, and
- set it up as a media server?

Thanks!
Xeon? ECC, perhaps? This is begging for ZFS.

If you want something easy, check out FreeNAS and don't look back. Just use the wizard and set up a CIFS share, unless you really want to chase performance. Don't combine the different-sized drives into the same "array" (zpool), though. Do you have any use for the faster 15k drives?
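To make the "don't combine different sizes" point concrete, one way to lay out that box would be two separate pools: the four 1TB drives in a RAID-Z1 for bulk media storage, and the two 250GB 15k drives as a mirror for apps/scratch. A sketch that only prints the corresponding zpool commands (the disk names are placeholders; on a real system use stable /dev/disk/by-id paths, or let the FreeNAS wizard handle it):

```python
# Sketch: two separate ZFS pools for the mixed drive sizes in that box.
# Disk identifiers are placeholders; nothing is executed, the commands are
# just printed so you can review them before running anything by hand.
bulk_disks = ["da0", "da1", "da2", "da3"]  # 4 x 1TB SATA (placeholders)
fast_disks = ["da4", "da5"]                # 2 x 250GB 15k (placeholders)

commands = [
    ["zpool", "create", "tank", "raidz1", *bulk_disks],  # ~3TB usable, 1-disk parity
    ["zpool", "create", "fast", "mirror", *fast_disks],  # ~250GB usable mirror
]
for cmd in commands:
    print(" ".join(cmd))
```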
 

Yes. Two years ago, when I started ripping my Blu-rays, I had over 250 of them and nowhere to put them. WD Greens were ~140+ per drive, so I bought 3, then 4, and finally finished at 6 for 12TB of raw space. I know full well that if I lose a drive I lose the array, which is why everything is backed up in triplicate. I'm not using the NAS for anything other than raw storage of the movies and my photography. The movies are played from the other machine I mentioned, and my photos, even at a couple hundred megabytes each, aren't much load on the box when I'm editing them.

The box has been going for over 2 years now without a single problem or failure.
 
Except if the very rare occurrence of a bit-flip happens on said client computer, you might get a BSOD and lose some work. If that same bit-flip happens on a ZFS storage array, it can silently corrupt data during scrubs and possibly corrupt an entire zpool.

The bit-flip is going to be just as rare on the ZFS storage box, and the chance of it causing the entire pool to become corrupt is even smaller. Those are acceptable risks for non-mission-critical data for a lot of people.

And, as usual, RAID, ZFS or otherwise, is not backup.
 