It's Time to Consider Switching Your Network to 10 Gb/s

If you're a photographer or videographer, you probably already have some sort of external storage. As your workload ramps up, it may become more convenient to store all of your files on, and work directly from, a network attached storage (NAS) box. A NAS offers practically unlimited storage, redundant data, simultaneous access from multiple computers, and speeds that are now faster than ever.

Synology has been our NAS of choice for the last few years. We have been running the Fstoppers office off of an old Synology 8-bay NAS. Our unit has four 1 Gbps Ethernet jacks in the back that all run to a Netgear switch, which in turn connects to the eight computers we have in the office. Surprisingly, this tiny "server" has been powerful enough for five of us to work off of it all at once for years. As we have added more employees and started shooting 4K video, our NAS has finally started to slow down and fill up. As I am writing this, we only have 3 TB left, and we expect to fill that up in the next few weeks.

When we upgrade our NAS, we will be moving over to a 10 Gbps system, and you may want to consider it as well. 10 Gbps technology has been available for quite some time, but it never really took off in the consumer market. Luckily, that is beginning to change.

Synology has just announced the DS1817, the first ever consumer NAS with dual 10 Gbps Ethernet jacks standard, for just $849 (without the drives). This price is quite impressive considering that 10 Gb Ethernet cards for desktop computers can run over $500. With the right drives, this NAS is capable of 1,577 MB/s read speeds, which means that you shouldn't see any dip in performance working off this NAS compared to an internal SSD. When you need more storage, you can add 18 more drives by attaching two DX517 expansion units. If you have four computers or fewer, you could plug each of them directly into the DS1817, but if you have more, like us, you will need to purchase a 10 Gbps switch.
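To put those numbers in perspective, here is a quick back-of-the-envelope sketch in Python (the conversion math is standard; the 1,577 MB/s figure is Synology's quoted spec). A single 10 Gbps link tops out around 1,250 MB/s, so hitting the rated speed requires both Ethernet jacks working together:

```python
# Rough throughput math: bits vs. bytes, and why the DS1817's rated
# 1,577 MB/s read speed implies using both 10 GbE ports together.

def gbps_to_mbps(gbps: float) -> float:
    """Convert gigabits per second to megabytes per second (decimal units)."""
    return gbps * 1000 / 8

single_port = gbps_to_mbps(10)  # ~1,250 MB/s per 10 GbE port
dual_port = gbps_to_mbps(20)    # ~2,500 MB/s with both ports aggregated
rated_read = 1577               # Synology's quoted sequential read in MB/s

print(f"One 10 GbE port:   {single_port:,.0f} MB/s")
print(f"Both ports bonded: {dual_port:,.0f} MB/s")
print(f"Rated {rated_read:,} MB/s fits one port: {rated_read <= single_port}")
```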

Keep in mind that to take full advantage of this NAS, you will need a computer with a 10 Gbps Ethernet card. Apple has already announced that its new iMac Pro will come standard with 10 Gb Ethernet, and for many Apple users this may be reason enough to upgrade, but laptops do not have the option for 10 Gb Ethernet cards (at least not yet). Windows users should expect to spend around $250 on a 10 Gbps PCIe network card, but we have found many cheaper options that we will report on in the future.

I think all photographers and videographers should have a NAS for storage, but for many shooters, 10 Gbps may be overkill. If you don't need to work off of your NAS at speeds comparable to an internal SSD, then 10 Gb certainly isn't necessary. If you aren't batch culling thousands of raw files or editing 4K footage, standard 1 Gb Ethernet should be plenty fast for you. It has worked well for us for the last six years.

Moving to 10 Gbps certainly isn't going to be cheap, but for us it's a necessary investment in our business. Luckily, prices have come down significantly in the last few years, making 10 Gbps networks a realistic option for small studios rather than something reserved for gigantic corporations.

Lee Morris is a professional photographer based in Charleston SC, and is the co-owner of Fstoppers.com

51 Comments

This is exactly what I need, thanks for the article :)

There are options for those of you who have compatible USB Type-C laptops and can use this adapter to utilize the 10 Gbps. BTW, thanks Lee for the informative article! I've been a reader of Fstoppers for a few years now!
http://bhpho.to/2aHav5V

We have two of the AkiTio Thunder2 10G network adapters that we use with our older MacBook Pros, and I can confirm they work great with the latest Synology NAS units. USB-C is not needed, btw; these units connect to your machine over Thunderbolt 2.

At 10 Gbps (gigabits per sec), which is 1250 MB/s (megabytes per sec), it's basically 3x faster than your theoretical speed for what I assume is RAID 5, which is around 400 MB/s. Mind you, that 400 MB/s is sequential read in theory; in practice, it's much slower than advertised. I think it's a bit overkill.

Also, don't forget that your router is only capable of transmitting so fast on wire; it's a lot slower on wireless.

I think you'd be better off investing in other bottlenecks instead of the NAS. Your pipeline can only go as fast as your slowest link.

There isn't really much of a bottleneck when the article describes the need for a card, cables, and switch. No need to involve the router. While the router handles the IP addressing, the switch can handle the speed without issue, as the packets go straight from PC to switch to 10 Gb/s NAS. They never hit the router.

Thanks for this explanation; I never knew how to phrase it to Google it. I wasn't sure if all data hit the router. Currently, though, my NAS is plugged into the router, which WILL cause a bottleneck.

Tam, the router we will use is going to have probably 16 10Gb ports and 2 or even 4 ports connecting to the Synology (20-40GB total bandwidth between NAS and Switch). Just like our current 1GB box, this will be a bonded pair so all 4 connections should be seen as one.

From there each computer will have one Cat6a cable to transfer files to each computer.

At the moment we are able to get about 110mbps off our connection but it dips to about 25 when 4 computers all pull from the NAS at the same time.

I think the real bottleneck will not be the NAS, switch, or network adapters, but potentially the SATA drives; the fastest SATA interface can only do 6 Gb/s at the moment. Of course, we will test this new system in an open room before running cables everywhere, but I have to believe we can at least double our speed, if not triple it.
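For what it's worth, a naive way to sanity-check those expectations is to model each client's share of the NAS's aggregate link. This is just a sketch with illustrative numbers pulled from this thread; if measured speeds come in well below the network ceiling it computes, the drives are the likelier bottleneck:

```python
# Naive shared-bandwidth model: clients evenly splitting the NAS's
# aggregate link. Numbers are illustrative, taken from this thread.

def per_client(aggregate_mbps: float, clients: int, link_cap: float) -> float:
    """Even split of aggregate NAS bandwidth, capped by one client's link."""
    return min(link_cap, aggregate_mbps / clients)

# Today: 4 x 1 GbE bonded (~500 MB/s aggregate), clients seeing ~110 MB/s each.
# Proposed: 2 x 10 GbE bonded (~2,500 MB/s aggregate), 10 GbE clients.
for n in (1, 2, 4, 8):
    old = per_client(4 * 125, n, link_cap=110)
    new = per_client(2 * 1250, n, link_cap=1250)
    print(f"{n} clients: ~{old:.0f} MB/s each now vs ~{new:.0f} MB/s each on 10 GbE")

# The observed drop to ~25 MB/s with 4 clients is well below this network
# ceiling (~110 MB/s each), which points at the drives, not the wire.
```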

As for wireless, who does that? I know Lee dreams of a day we can edit wirelessly over Wi-Fi, but that is not a viable option anytime soon.

I think by router Tam means wireless router and when you are saying router you mean 10Gb switch. And you mean 110 "MBps" and not "mbps" (Bytes and bits)

son of a bitch....yes I mean all of those things

Thank you Lee. Yes.

If you work with photography only, this is probably overkill anyway, even if batch culling thousands of raw files. Even GbE is overkill (regular Ethernet is fine).

We do video with multiple users connecting to it. I think we all know the real photo bottleneck is Adobe's Lightroom.

Agreed. Speed, speed and more speed is always a good thing :)

I never understood the point of a NAS. Why not just take a 5 year old desktop you have sitting around, toss in a faster Ethernet card, some drives and make that the server? That has to be cheaper than $849. Feel free to school me on this.

If you have an unused desktop machine sitting around, this is definitely an option if you're on a budget and willing to spend a lot of time setting it up and maintaining it. However, modern NAS units come with several major advantages: super low maintenance requirements (stripped down OS, many tasks automated); mobile app connectivity and remote monitoring for when you're on the go; expandability well beyond the number of drives that even the biggest desktop machines can accommodate; cross-platform compatibility; and cutting-edge filesystems with error-correcting capability. In my experience, when you buy a top-end NAS, the value really comes from the built-in software, not the hardware.

After trying to go the desktop route myself in the past, and wasting a bunch of time trying to get AFP & SMB to work reliably in a multi-platform environment (you'll soon discover Apple's new "SMBX" client that replaced Samba on macOS is really broken), and discovering various limitations with ext4 or Microsoft's ReFS filesystems, I finally bought a Synology and let the DiskStation software do the hard part for me. No regrets.

We currently have 12 or more drives in our NAS... how can you fit all those in a PC tower?

My old motherboard only has 8 SATA slots, my old case has 9 3.5" bays. A new motherboard can have 10 SATA slots; plus 3 M.2 slots, plus the PCI slots if you really wanted to get above that (though M.2 and SATA are usually shared unless you spend >$400).
Jon G makes a good software argument, but the hardware cost still isn't there, even if you go brand new.

I'd be interested to see this build and its price. Our Synology NAS has four 1 Gb RJ-45 connectors, so your PC build would need those added to the PCI slots. The M.2 drives could be interesting; I'm not sure if the Synology systems use those drives, but keep in mind those things are crazy expensive for only 1 TB of data (we have started upgrading 4 and 6 TB drives to 10 TB drives now).

And speaking of upgrading, the Synology system makes it super easy to basically just pull a drive out and replace it with another one. I will say the process of rebuilding onto the new drive is taking up to five days on the 10 TB drives, but maybe it takes that long with a homemade build and software too? It's obviously not hard to open a PC tower and replace drives, but I wouldn't feel comfortable hot-swapping them while the computer is on, and it's not quite as easy as the push-and-pull trays Synology has.

Finally, I would also assume the Synology hardware will hold its value better than a homemade NAS computer. So if you ever decide to upgrade or sell your Synology, you will probably still get 50% of your money back on it.

Moving to Synology from Drobo has been the best networking decision we have made, and for us, photographers who don't understand computer networking very well, the self-managing software that comes with Synology boxes is easy to use and navigate.

One thing to keep in mind is that the limiting factor will be the I/O subsystem. A desktop system is great for a single user; the great advantage of NAS systems is that they can be used by multiple users at once. Moreover, while RAID can be done on a motherboard, to do it right really requires a dedicated card. Software RAID is truly sub-standard and, for heavily used systems, more prone to failure.

I couldn't figure out how to share a Newegg list (~$776), so I recreated it on Amazon (~$730 once you add in the OS):
https://www.amazon.com/gp/registry/wishlist/145AHBPJ92PR7
I'm sure you'd want to adapt things, Windows Pro maybe, beefier PSU if the HDDs need it, etc. Add $20 for low end mouse/keyboard combo if needed.
A new MB with built-in 10Gb bumped up the price by $200 since it also needed a newer CPU for the newer socket. You could then do M.2 drives for 'current/working' folders and 7200 rpm drives for storage.
We could get into RAID controllers with more SATA sockets, but that's over my head.
Buying a Synology seems very easy, but if I'm upgrading to a 4K editing rig, my 4 year old desktop will just be sitting there.

This is the route I have taken and always recommend to tech-savvy people, but the reality is that the ease of use of a Synology NAS system, as mentioned above, is all software. I built my own server with 64 TB of storage from leftover PC parts I had sitting around, so the only cost was the HDDs. I use NAS4Free to run the whole thing and love it.

I've installed several Synology systems for large studios and absolutely love the software they run. For a simple and straightforward experience, it can't be beat.

The same reason people buy Apple.

I only have one main concern about this article: how can the speed of the network give you "SSD-like" performance if you mount HDDs in your NAS? I know a 10 Gb network is theoretically faster than a SATA III connection, but your PC is connected to the NAS at 10 Gbit while the drives in the NAS are still connected via SATA, so the bottleneck is still 6 Gb, and they're HDDs, not SSDs, so there's really no way this NAS reaches "internal SSD-like" performance. Still, I approve of this migration for the kind of work you guys do.

I would like to know the technical answer to this question too. My understanding is that with the way a NAS RAID works, when you pull a file there is a good chance you aren't pulling it from one drive but rather from 2-5 drives at once. Since the redundancy is built into the entire system, I think you are theoretically using multiple drives to increase the speed (sort of like how a striped RAID is faster than a single drive).
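That intuition roughly matches how RAID 5 reads work. Here is a very rough sketch of the idea; the ~180 MB/s per-drive figure is an assumption for a modern 7,200 rpm disk, and real arrays land somewhere below this ceiling depending on the controller and workload:

```python
# Rough RAID 5 sequential-read ceiling: a file is striped across the
# member drives, so reads can pull from several disks at once.
# Estimates vary between (N-1)x and Nx a single drive; this uses (N-1)x
# to stay conservative. per_drive_mbps is an assumed HDD speed, not a spec.

def raid5_read_ceiling(n_drives: int, per_drive_mbps: float = 180.0) -> float:
    """Conservative sequential-read estimate for an N-drive RAID 5."""
    return (n_drives - 1) * per_drive_mbps

for n in (4, 8, 12):
    print(f"{n} drives: up to ~{raid5_read_ceiling(n):,.0f} MB/s sequential read")
```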

We have the smaller Synology NAS box that uses four SSDs (Synology DS416), and it is amazing for projects we do on the road, but the total storage is only about 4 TB, which fills up quickly. Oh, how I dream of a day we can get 6 TB SSDs at a reasonable price!

You should try striping SSDs in your desktop tower for scratch drives and running software; it really speeds everything up. Until I upgraded to an NVMe M.2 SSD for my OS, I had three SSDs striped running it, and it was so smooth.
However, I tend to build these things for fun more than needing the extra speed, haha.

Is it really that much faster? I've read that upgrading from a 7200 rpm HD to an SSD is a much more noticeable increase in speed than going from one SSD to two striped SSDs running your OS. I've had an SSD for my OS with a normal data drive in my tower for probably a decade now, but never went down the stripe route.

Well, not all SSDs are the same, and speed at that point is kinda relative. If one SSD is super fast, will you notice it being even faster in regular usage? Is it worth the extra cost? I'm a tech geek, so I do it more for fun than usefulness, haha. There are other bottlenecks. However, yes, the speed can be double an SSD's average speed.
http://www.pcworld.com/article/2365767/feed-your-greed-for-speed-by-inst...

You're right, I said a stupid thing regarding SATA being the limit. That said, RAID 5 doesn't always read from multiple drives because of the way it stores data, so it would not be constant. Anyway, I have a hard time seeing an HDD reach the speed of an SSD because of the access time.

What's the fastest you've seen files dragged to your computer? The fastest I've ever seen is an SD card dumped to an SSD through USB 3, and that was like 120 MBps. Maybe the bottleneck is in the computer itself. My goal is just to get my NAS to always pull 100 MBps, even when others are pulling too. At the moment it pulls 100 but drops to about 25-50 when multiple users are pulling at the same time.

Don't get me wrong, I think the solution explained in this article is the best you could use right now for a network drive while staying at a reasonable price.
It's fast, but not "SSD-fast" as I read in the article ("you shouldn't see any dip in performance working off this NAS compared to an internal SSD").

I'm curious to do tests now, because I know I can drop huge batches of files onto our 1 Gb NAS box through Cat6 cable at about 95 MBps, but I think my C: drive SSD can only do marginally better at 100 MBps. I need to do another test with local file transfers.
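If it helps, here is a quick-and-dirty way to time a transfer yourself and get a comparable MB/s number. This is just a sketch; the paths are hypothetical placeholders, and for the caching reasons discussed further down the thread, a single multi-gigabyte file gives a steadier reading than lots of small ones:

```python
# Time one big file copy and report throughput in MB/s.
# Paths are placeholders; point src at a multi-GB file and dst at the
# drive or NAS share you want to test. OS caching can inflate results,
# so use a file much larger than your RAM for an honest number.
import os
import shutil
import time

src = r"D:\test\big_video.mov"  # hypothetical source file
dst = r"Z:\test\big_video.mov"  # hypothetical destination (e.g., NAS share)

size_mb = os.path.getsize(src) / 1_000_000
start = time.perf_counter()
shutil.copyfile(src, dst)
elapsed = time.perf_counter() - start
print(f"{size_mb:,.0f} MB in {elapsed:.1f} s = {size_mb / elapsed:,.0f} MB/s")
```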

Your internal SSD will be way faster than that

I think the best transfer rate you can get between two different drives is between two PCIe SSDs in the same PC.

When you say "my C: drive SSD can only do marginally better" you mean transferring from what or to what? Please let me know the result

SD card through a USB 3 port. I've found that to be one of the quicker transfers. I could also test an SSD through USB 3 to the C drive as well. I'll keep you posted.

Okay, here is a quick test I ran. The image on the right shows files transferred from my C drive SSD to my traditional internal 7200 rpm hard drive. The moment I hit print screen, it captured the 486 MBps speed you see in the photo, but then it fluctuated greatly between 150-500 MBps.

The image on the left is the same files from the C drive SSD transferred to another SSD mounted internally. For some reason, the transfer rate was much slower than to the mechanical drive, but it did not fluctuate much at all.

Obviously something seems wrong here, because in no way do I believe the SSDs to be slower in transfer, but I was still shocked to see how fast the hard drive accepted files.

OK, my guess is that the peaks of the HDD transfer are due to its cache: while the cache is filling up, it goes really fast, then it slows down. The SSD has very little cache, if any at all, but being fast it's much more constant in its transfer rate. There's also the question of whether the drives were all on the same controller, and a single huge file would have been a more appropriate test, because with small files the speed varies a lot due to the nearly empty packets it has to send and the distribution over the surface. I would actually say you may have held back the potential of the HDD because of its slower access speed; with small files it has to seek many more times. Anyway, really interesting topic. I will do some tests myself, maybe using some benchmarks to compare the results with real-world transfers.

Patrick is correct above. When you set up a RAID 5, the data is never written sequentially to a single disk, but striped across all the drives, with one or two parity blocks per stripe.

http://thecloudcalculator.com/calculators/disk-raid-and-iops.html

http://wintelguy.com/raidperf.pl

I think people are missing a few things here. First, you don't get Gigabit or 10 Gigabit Ethernet performance "out of the box." You get it by tuning the TCP stack (OS dependent), the NICs, and your switch.

To get optimum performance, you will want to use "jumbo frames" and raise the MTU (Maximum Transmission Unit) from the standard 1,500 bytes to 9,000 bytes for either Gigabit or 10 Gigabit, or, if your network can support it, 16,384-byte frames. You will also need a managed switch that allows you to customize the settings for each port.
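As a quick sanity check on a Linux machine, you can read each interface's current MTU straight from sysfs; a minimal sketch (Linux-only, since it relies on /sys/class/net):

```python
# List each network interface's MTU on Linux by reading sysfs.
# Every hop (NIC, switch port, NAS) must agree on jumbo frames,
# or large packets get fragmented or dropped.
from pathlib import Path

for iface in sorted(Path("/sys/class/net").iterdir()):
    mtu = int((iface / "mtu").read_text())
    label = "jumbo" if mtu >= 9000 else "standard"
    print(f"{iface.name}: MTU {mtu} ({label})")
```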

The way I would deploy this is on a "backnet": a network that only connects to the other machines and storage configured in the same fashion. You would keep one interface for communicating with the outside world using 1,500-byte packets, to avoid the packet fragmentation that comes from using the larger MTU.

Performance of your storage depends on the RAID level, the number of drives used, the type of drives used, tuning options for the NAS, and available bandwidth. You can use faster drives in a RAID 10, or go for maximum storage with a RAID 5.
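To make that RAID 5 vs. RAID 10 tradeoff concrete, here is a small sketch of the usual capacity and theoretical read-scaling rules of thumb (the 10 TB drive size is just an example, and real throughput depends heavily on the controller and workload):

```python
# Rule-of-thumb capacity and read scaling for N identical drives.
# Read factors are theoretical ceilings, not measured numbers.

def raid_tradeoffs(n_drives: int, drive_tb: float = 10.0) -> dict:
    return {
        # RAID 5: one drive's worth of parity; reads span ~N-1 drives.
        "RAID 5": ((n_drives - 1) * drive_tb, n_drives - 1),
        # RAID 10: half the capacity mirrored; reads can span all N drives.
        "RAID 10": ((n_drives / 2) * drive_tb, n_drives),
    }

for level, (usable_tb, read_x) in raid_tradeoffs(8).items():
    print(f"{level}: {usable_tb:.0f} TB usable, ~{read_x}x single-drive reads")
```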

Some useful links:

https://studionetworksolutions.zendesk.com/hc/en-us/articles/201757549-1...

https://arstechnica.com/civis/viewtopic.php?t=1123534

http://www.storagereview.com/

Sounds like we need to have you set up our new network.

Lee, most of this isn't that hard to do. Tuning the TCP stack and the NICs can be a bit tedious but plenty of documentation is out there. Setting up the switch isn't that hard either as long as you don't want anything fancy like VLANs (Virtual LANs).

If you guys are putting this together yourselves, there is a lot of complicated backend work that goes into maximizing the components' potential, like Robert mentions. Just plugging everything in together, I think you'll be disappointed in the results you get.

We can hire a guy who does this locally but he won't be cheap.

Check out Linus Tech Tips. They set up some crazy servers for handling their 8K footage. Probably overkill for now, but hopefully you'll have the need one day.

We watch his show pretty regularly. Did you see his Deadmau5 walk through?

Yes, I thought Linus' setup was great, but DeadMau5's was insane.
I never factor in resale price for anything, I use old systems for backups or give to family members.

I'm just an amateur and not very rich, but I completely agree with you. Even for me as an amateur, a very fast NAS with a fast connection would be ideal. However, since the price is definitely a problem for me, this will have to wait.
Working off my NAS is too slow at the moment.

Thanks for the post. I am really looking forward to the upcoming posts on this; I am thinking of a similar setup, but I still find it a bit confusing. There are apparently different adapters depending on what kind of 10 Gbps you choose (a friend told me). So I am really looking forward to it.
If possible, please also tell us what kind of drives and what RAID you choose.

I am still in doubt about whether it would be possible to have a RAID 5 for storage and a RAID 0 for editing 4K on the same Synology NAS. It would be cool to work completely from the NAS instead of having an external work drive for 4K editing.

The adapter we want is the RJ-45 connection, which uses the same connector as regular Ethernet cables. They make other port types as well, but those are expensive and their cables don't do well in walls or over long distances.

We are currently moving from mainly 1080 footage to some 4K in our projects, and we edit directly off the NAS box now. It can be a little slow, but I bet if you had a strong desktop and just one user connected, you would be fine. We typically have 1-3 people pulling from it at all times, and it isn't too bad as is.

Can I connect my Synology NAS directly to my computer? I have been running it through a router and it's so slow.
