How to Upgrade Your Network to 10 Gb/s and Speed Up Your Workflow

We've finally done it: Fstoppers has moved over to a new 10 Gb/s network and server, and it is incredibly fast. Let me show you what we did and how you can create your own 10 Gb/s network for a fraction of the price.

For the past four years we have been using a Synology NAS (network attached storage) device to work from and to back up all of our photo and video content. It has served us well, but as we hire more people and begin shooting video in 4K, it has been filling up and slowing down. It was time for us to upgrade.

What Is a NAS Device and Why Do I Need One?

Do you own more than one computer? Do you own a stack of external hard drives? It's time to organize your data, and a NAS device is the answer. Think of a NAS as the ultimate external hard drive. The goal is to house all of your important data in a central location that all of your computers and devices can connect to. This gives every device access to the same files while keeping everything redundant and safe. In the past, working directly from a NAS device was noticeably slower than working from a local drive in your computer, but now that 10 Gb/s speed is becoming affordable, it is possible to work directly from the NAS without any dip in speed.

This means that your projects stay safe while you work on them, and if your computer fails there's no reason to worry: your data is always on at least two different drives. You'll never run out of hard drive space again, and you'll never have to worry about a single hard drive failure. If you own more than one computer, the NAS will allow you to access the same data from multiple computers at once (imagine one computer editing footage while another is exporting a project using those same files). This can be done locally over Ethernet, wirelessly over Wi-Fi, or on the road via the web. Now that you know why you might want a NAS, let's jump into our build.

Our Build

  1. NAS/Server: Synology RS18017xs+
  2. Storage: (12) 10 TB Seagate IronWolf Pro Hard Drives
  3. Switch: Netgear ProSAFE XS712T
  4. Cables: (30) Cat 7 Ethernet Cables
  5. 10 Gb/s Ethernet Adapter: (5) Intel X540-T1

Total: $14,000

The Server

We knew our next server was going to be 10 Gb/s, and although Synology just recently released some small business/home options with 10 Gb/s, we wanted a top-of-the-line unit that could handle any growth Fstoppers might see in the next 5 to 10 years.

We decided on the Synology RS18017xs+ because we wanted the extra horsepower and the almost endless upgradability. The average person reading this post does not need something this large, this loud, or this expensive to get almost identical performance in a home or small business. Check out the bottom of this article for a more reasonably priced (and sized) option.

The Storage

The first thing that we needed to do was install 12 hard drives. Because this server will be used around the clock, special drives are recommended. We decided on Seagate IronWolf Pro drives. These drives are specifically recommended by Synology because the two companies have partnered to create the IronWolf Health Management application, which communicates directly with proprietary sensors in the drives. This app can warn you if a drive is malfunctioning long before data is lost. Keep in mind that we set up our NAS with RAID 6, which allows two drives to fail before any data is lost. In the nearly impossible case that more than two drives fail at the exact same time, the Pro version comes with two years of data recovery service, which means Seagate will foot the bill whether a drive fails on its own, your server is struck by lightning (which happened to us a few months ago), or you have fire or water damage.

We installed all 12 of the 10 TB drives into the NAS for a total of 120 TB of raw storage (roughly 100 TB usable once two drives' worth of capacity goes to RAID 6 parity). Literally a week after our build, Seagate released 12 TB versions of these drives. If maximizing your storage is important, you may want to buy those instead.
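If you want to sanity-check capacity and speed for your own drive count, the back-of-the-envelope math is simple. The sketch below is a rough estimate, not a benchmark; the 214 MB/s per-drive figure is Seagate's rated sustained transfer rate for these drives (it comes up again in the comments), and the result ignores controller and network limits.

```python
# Rough RAID 6 capacity and read-throughput estimate (assumptions, not measurements)
drives = 12
drive_tb = 10
drive_mb_per_s = 214            # Seagate's rated sustained transfer for these drives

usable_tb = (drives - 2) * drive_tb          # RAID 6 spends two drives' worth on parity
peak_read = (drives - 2) * drive_mb_per_s    # very optimistic upper bound

print(f"Usable space: ~{usable_tb} TB")
print(f"Theoretical array read: ~{peak_read} MB/s (~{peak_read * 8 / 1000:.0f} Gb/s)")
```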

The Network Switch

We have a lot of computers and printers in this office that all need access to this network and server, but not all of them need 10 Gb/s speed. Our old 1 Gb/s switch will work fine for our laptops and printers, but for our five desktop computers we wanted the maximum 10 Gb/s speed. To accomplish this we purchased a Netgear 12-port 10 Gb/s switch.

We also kept our old switch and plugged all of our 1 Gb/s devices (like laptops, printers, and Wi-Fi access points) into it. This saved us a bunch of money by not having to buy a larger 10 Gb/s switch with more ports.

The Cables

There are two main types of connections that can carry 10 Gb/s: RJ45 (standard Ethernet) and SFP+ (typically used with fiber). We didn't want to complicate our office by switching to fiber, so we used standard Ethernet connections for everything. To keep 10 Gb/s speeds you'll need to purchase either Category 6a or Category 7 cable. We ended up purchasing around 30 Cat 7 cables from Amazon and quickly learned that many of the cables claiming to be Cat 7 were not capable of carrying 10 Gb/s. The cable we settled on was capable of 10 Gb/s, at least at runs up to 100 feet.

10 Gb/s Ethernet Adapter

The final piece of the puzzle is each computer's Ethernet adapter. Almost no computer has 10 Gb/s Ethernet out of the box (although Apple's new iMac Pro does, and it should for that price). We purchased five Intel 10 Gb/s PCI Express cards and installed them in our most powerful desktops.

Mounting the Server

There are two main issues I have with our server: it's loud and gigantic. It's louder than I would have ever expected, and it's probably twice as big as it looked in the pictures (I know, I know, I should have read the dimensions). Eventually, when the server fills up and we buy expansion units, we'll drop the money on a legitimate rack mount. But for now, we decided to move the server into a closet in Patrick's house (which is a separate structure from our office) and ran Cat 7 cables to it. The server has plenty of room to breathe in the closet and, with the door shut, we can't hear it humming away. Keep in mind that if you buy the Synology NAS recommended below, you won't have to worry about this, as it's made to sit on a desk.

Understanding Bits Versus Bytes

If you download something from the Internet, transfer a memory card, or move a file on a computer, the speed is measured in megabytes per second (millions of bytes per second). Other things, like network speeds, are measured in megabits per second, and there are 8 bits in 1 byte. If the "b" is lowercase (Mb) it means "megabits," and if the "B" is capitalized (MB) it means "megabytes." This means that a standard 1 Gb/s connection is capable of transferring data at a maximum of 125 MB/s. That may be sufficient with only a single computer pulling data, but if multiple devices are pulling data at once, or you are trying to maximize speeds (e.g., transferring five memory cards at once), a 1 Gb/s network will quickly max out.
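If it helps to see the conversion written out, here is a tiny sketch of the math. The 64 GB card size is just an example, and the results are theoretical ceilings that ignore protocol overhead.

```python
# Bits vs. bytes: convert link speed to MB/s and estimate a best-case copy time
def link_speed_mb_per_s(gigabits_per_second):
    return gigabits_per_second * 1000 / 8      # 8 bits per byte

CARD_GB = 64                                   # example memory card size
for gbps in (1, 10):
    mbps = link_speed_mb_per_s(gbps)
    seconds = CARD_GB * 1000 / mbps
    print(f"{gbps} Gb/s link = {mbps:.0f} MB/s max -> a {CARD_GB} GB card takes at least {seconds:.0f} s")
```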

Speed Tests

In our first test, our new server was able to upload and download data at a staggering 400-500 MB/s. That was approaching the limit of our motherboard's internal 6 Gb/s SATA connection to our SSD, but it wasn't maxing out our server at all. We then tried to download the same file on five computers at once and averaged around 300-400 MB/s on each computer, which is right around the 10 Gb/s maximum of 1,250 MB/s.

In short, our new server is capable of pushing internal-SSD speeds to multiple computers at the same time, and we have seen almost no difference between editing video off of our internal SSDs and editing off of the server.

Uploading multiple memory cards at the same time has also been a major upgrade for us. With our old server, a single memory card could transfer at around 100 MB/s, but if you tried to upload two at once the speed would be cut in half, and if we tried uploading four at once we would only get around 25 MB/s each. With our new server, we can literally upload 10 memory cards from multiple computers at the same time without seeing any sort of slowdown. If you shoot weddings or videos, this is a game changer.
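A toy model makes the old behavior easy to see: each card reader tops out around 100 MB/s, so the bottleneck is whichever is smaller, the reader or the per-card share of the network link. The sketch below uses assumed numbers (100 MB/s per reader, roughly 90% link efficiency), not our measured results.

```python
# Toy model: per-card transfer speed when several cards share one network link
CARD_READER_MB_S = 100            # assumed single-reader ceiling

def per_card_mb_s(cards, link_gbps, efficiency=0.9):
    usable = link_gbps * 1000 / 8 * efficiency      # usable MB/s on the wire
    return min(CARD_READER_MB_S, usable / cards)

for cards in (1, 2, 4, 10):
    old = per_card_mb_s(cards, 1)
    new = per_card_mb_s(cards, 10)
    print(f"{cards:2d} cards: ~{old:3.0f} MB/s each on 1 Gb/s, ~{new:3.0f} MB/s each on 10 Gb/s")
```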

An Affordable 10 Gb/s Home Network

  1. NAS/Server: Synology DS1817
  2. Storage: (8) 6 TB Seagate IronWolf Pro Hard Drives
  3. Switch: 10 Gb/s switch may not be necessary
  4. Cables: (1) Cat 7 Ethernet Cable
  5. 10 Gb/s Ethernet Adapter: (1) Intel X540-T1 UPDATE: This one is cheaper

Total: $2,700

Our setup ended up costing around $14,000, but I certainly wouldn't suggest you spend anywhere close to that much. The DS1817 8-bay NAS only costs $869, and it has two 10 Gb/s jacks just like our new monster. You may not need a switch at all: you can plug the NAS directly into your computer with a single Cat 7 cable (remember that you will still need a 10 Gb/s Ethernet card for your computer). For drives, if you don't need a ton of storage, you could purchase eight 6 TB drives for $284 each. This all comes out to a much more reasonable $2,700, and you could easily save more by choosing smaller drives.

If you're a professional photographer or videographer I would highly suggest buying a Synology NAS, and if you do, you might as well spend a few hundred dollars more to move up to 10 Gb/s. It's a very small price to pay to future-proof your office and workflow. Once you experience the performance and reliability of this system, you'll never want to go back.

If you're passionate about taking your photography to the next level but aren't sure where to dive in, check out the Well-Rounded Photographer tutorial, where you can learn eight different genres of photography in one place. If you purchase it now, or any of our other tutorials, you can save 15% by using the code "ARTICLE" at checkout.


Lee Morris is a professional photographer based in Charleston, SC, and the co-owner of Fstoppers.com.

Comments

That's a neat little setup you've got there, Lee. Looks like tax-writeoff season started early this year!

Correct me if I'm wrong, but 10Gbps (1.25GB/sec) is the network's maximum theoretical transfer rate. These theoretical rates are never achieved in practice, and looking at your graph there, it looks like you're actually getting somewhere around 469MB/sec.

Also, keep in mind that's the transfer rate to the NAS server itself, not the actual drive that's on the server.

Looking at the SATA drive links you've included, they seem to offer a maximum (theoretical) write speed of 214MB/sec. I'd be interested to see the actual drive write speed (which is usually half of the advertised max).

Lucky for us, we don't do any video, and run most of our studio on USB-3 attached SSDs, which yield actual write speeds of about 360MB/sec, and read at about 460MB/sec.

And because Adobe has decided to build the Lightroom database in such a way that it does not allow multiple users to use it at the same time, there is really no need for any photo studio to run a NAS (as far as I can see), unless it'd be for JPG transfer and general workflow, in which case a 10Gbps network is grossly over-spec in my opinion.

I am certainly no expert in this area so correct me if I am wrong but as I understand it, because we are using RAID 6 our data is now spread out over 12 drives and when we pull a file, it's spinning all 12 drives at once to send it. So yes, 1 hard drive can only send data at 214MB/s but 12 of them spinning at once could "in theory" send way more than 1.25GB/s. In the video at the top of the post you can see that we were getting close to 1.25GB/s when we downloaded the same file on 5 computers at once.

RAID 5 is n-1 and RAID 6 is n-2 drives. So if you have 12 drives in RAID 6 then you have the storage space and speed of 10 drives while two are always being used for redundancy. This converts to roughly 17Gbps in your case, more than enough to saturate a 10Gbps connection. 1.25GB/s is the theoretical limit, but because of protocol overhead etc., anything over 950MB/s is typically considered great.

Ah well, there you go. I didn't realize that's how RAID works (since I've never owned one).

RAID 5 is actually very simple tech. If you want to write to a 3-drive array then your data is split in half with one half on Drive A, one half on Drive B, and a backup on Drive C. This is done one block of data at a time, and each drive takes turns being the backup drive. The end result is you can rebuild any single drive by using the data on the other two. It also protects against data corruption, as you can verify the data in any file on any disk by looking at the other two.

RAID 6 + a ZFS file system (used by many NAS devices) creates not only physically redundant storage immune to a single drive failure but also protects against data corruption, a.k.a. bit rot, where your files won't open because a few 1s flipped over to 0s, an otherwise undetectable event.

This is why all of my video/photos are on a RAID 6 ZFS server. With a traditional storage solution you may have corrupted files that look fine, you back them up anyway, and your local and backup copies both end up toast. Converting images to DNG with LR and running the built-in LR DNG verification adds an additional layer of security, as the files themselves have built-in error detection. I have about 30k photos, and one of them was found to be corrupt using LR DNG verification. I was able to easily find the original file and restore it before backing up the bad file.

As long as we're talking shop here: technically, in your example of the ABC drives, the data doesn't get written on the C drive as a backup. Everything is striped, so the data gets written on A and B, and the parity is written on C. One can do the calculation to infer the whole data from the remaining drives if one goes out. If two drives go out at once, or at least before your parity is calculated, you're toast.

Enter RAID 6, with n-2 redundancy, which allows two drives to go out at once while you're still okay. Of course, now you have to write two parity copies for each bit that's written, so write speed takes a performance hit, but for exactly that reason you can now calculate the data back from two sets of parity, so your read speed will improve.

But then if you wanna improve write speed on RAID 6, make it a RAID 60, as in 6 and 0, and you'll get the best of both worlds.

Yep, you got it. I didn't think explaining XOR would be necessary for an overview though. I have also found that you can optimize your RAID 6 disk numbers to balance speed and security.
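For anyone who does want the XOR detail mentioned above, here is a minimal illustration of how parity lets an array rebuild a lost block. The byte values are arbitrary, and real RAID implementations stripe and rotate parity across all drives rather than dedicating one disk to it.

```python
# Minimal XOR-parity demo: rebuild a lost data block from the survivors
block_a = bytes([0x10, 0x22, 0x37, 0x4F])     # block stored on drive A
block_b = bytes([0xA1, 0x05, 0xEE, 0x90])     # block stored on drive B
parity  = bytes(a ^ b for a, b in zip(block_a, block_b))   # parity block on drive C

# Drive A dies: XOR the parity with drive B's block to get drive A's block back
rebuilt_a = bytes(p ^ b for p, b in zip(parity, block_b))
print("rebuilt block matches original:", rebuilt_a == block_a)   # True
```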

I have 6x4TB disks in RAIDZ2 (RAID 6) with an identical backup array. The double-edged sword of RAID arrays is that now you have a big pool of storage which you need to back up... with an identical pool of storage. RAID 6 != backup.

I don't run a photo studio, but all of my raw files are on a NAS so I could have multiple users accessing the same files without issue. LR catalogs and previews should be on a local PC with an SSD for max performance, ideally with the catalog and images on separate drives. So yes, no one else would have access to my edits, etc., but it would still be possible to have one pool of storage for redundancy and backup purposes.

Nice article! I'm getting ready to move to 10GbE shortly. Asus and Gigabyte now make $99 Ethernet adapters that appear to be just as good as more expensive options. https://www.amazon.com/dp/B072N84DG6/

If you already have a "server" and a desktop you can put one card in each and have a separate network with a direct connection between the two. This also works with 2-3 devices and NICs with dual 10GbE ports: you can pop a few NICs in your server and connect the workstations directly, or daisy chain a few. This can save you some serious bucks on a 10GbE switch. While the Synology devices are great, you can also grab your five-year-old workstation and run FreeNAS on it and achieve similar results.

10GbE and networking in general can get pretty complex quickly. It isn't always plug and play to get 10GbE speeds. There are a number of YouTube videos (Linus Tech Tips) about 10GbE showing how to tweak your network to get the full speed out of your setup. Basically, you create a temporary RAM disk on the server and on the client and transfer test data between them. This eliminates the SATA/PCIe/etc. interfaces and drives and lets you test only the network side. From there you can figure out whether your workstation or the server is slowing things down. My advice for this kind of networking setup is to spend the time to tweak it now and then leave it alone.
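In the same spirit as the RAM-disk test described above, here is a rough memory-to-memory throughput check written as a sketch: it sends data straight from RAM over a TCP socket so no drives are involved. The host address and port are placeholders, and the printed number is approximate (TCP buffering means a few megabytes may still be in flight when the timer stops).

```python
# Crude memory-to-memory network throughput test (run --serve on one machine first)
import argparse, socket, time

CHUNK = 4 * 1024 * 1024          # send/receive 4 MiB at a time

def serve(port):
    with socket.create_server(("", port)) as srv:
        conn, addr = srv.accept()
        with conn:
            total = 0
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
        print(f"received {total / 1e6:.0f} MB from {addr[0]}")

def send(host, port, megabytes):
    payload = bytes(CHUNK)                   # zeros straight from RAM, no disks
    remaining = megabytes * 1_000_000
    start = time.perf_counter()
    with socket.create_connection((host, port)) as conn:
        while remaining > 0:
            conn.sendall(payload[: min(CHUNK, remaining)])
            remaining -= CHUNK
    elapsed = time.perf_counter() - start
    print(f"sent {megabytes} MB in {elapsed:.2f} s -> ~{megabytes / elapsed:.0f} MB/s")

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("--serve", action="store_true", help="run as the receiving side")
    p.add_argument("--host", default="192.168.1.50")   # placeholder server address
    p.add_argument("--port", type=int, default=5201)
    p.add_argument("--mb", type=int, default=2000)
    args = p.parse_args()
    serve(args.port) if args.serve else send(args.host, args.port, args.mb)
```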

Thanks for the tip! I'm glad there are cheaper Ethernet adapters hitting the market. I've seen the Linus video about this and I was terrified that getting 10Gbps speeds was going to require us to become computer hackers. Luckily for us, once we got working Cat 7 cable, it literally was plug and play. I'm sure I could optimize our network for even faster speeds, but in all honesty I don't have any need to. This is way faster than I ever expected it to be.

Nice. I haven't watched your video yet, just read the article where you mentioned 400-500MB/s. This was the video I was referring to in case someone else is troubleshooting. https://www.youtube.com/watch?v=YJdopedfh9E

Hi everyone. I am getting ready to buy my first NAS and redo my backup system completely, and after reading this I've been convinced to go for a 10GbE-equipped NAS. I want to use this with my Dell XPS 15; I do not currently use a desktop computer. Can I achieve the speed benefit of the 10Gb Ethernet connection using a Thunderbolt-to-Ethernet adapter plugged directly into my XPS laptop? Do I need to pick a 10GbE-specific adapter? Can the Thunderbolt port on my laptop support these speeds?

I'm not sure but I am interested if you can get this to work. Report back if you figure it out.

Thunderbolt 2 gives 20Gbps and version 3 gives 40Gbps, so in either case they would work as far as connection speed goes. There are 10Gbps Thunderbolt 2 and 3 network adapters, but they are all pretty expensive right now. Here is an example: https://www.akitio.com/adapters/thunder2-10g-network-adapter Hopefully there will soon be cheaper alternatives. This is another option, but not cheap either: https://www.sonnettech.com/product/twin10g.html. You will always need some sort of converter between Thunderbolt and Ethernet, and it's important to note that your read/write speed will probably be limited by your internal storage drive/SSD.

I managed to snaffle a couple of Cisco switches a few years back; I use a 3560G 48-port for the bulk of my network and then use the SFP uplink to the other switch (Cisco SG350XG 12-port) to give a 10Gb backbone between the two areas. This is on its own VLAN so it can use jumbo packets and to keep it more isolated from the rest of the LAN. This VLAN is also used for the VM server (an HP DL385 G7), with a NAS attached to it using iSCSI targets, and the video render nodes. I did get a couple of SAN nodes but still haven't got around to populating them. Most of the hardware was getting scrapped and is totally overkill for what I need at the moment. I also have a Z820 with a dual 10Gb/e interface attached to the switch, which is what I rent out for people needing a render node (one 10Gb/e port goes to each switch).

You can pick up a lot of enterprise-grade hardware for very little on eBay and at auctions if you shop around and know what you're looking for. Sure, you don't get the nice warranty, but unless you're hammering it 24/7 this kit will outlast pretty much anything else you have.

10Gb/e is about to get a whole lot cheaper, and there are now 'affordable' 10Gb/e chips coming onto the market, which can be seen on some x299/x399 motherboards. The main problem with 10Gb/e over 1Gb/e is that the chips can run a lot hotter, so you have to pay more attention to heat management.

A final point: you can create 2Gb/e links with link aggregation. You get roughly 75-85% of the throughput depending on the LAN fabric and host adapter. This might be a cheaper option for many SMBs who can't afford the 10Gb/e infrastructure.

I understood about 20% of that ;). I'm excited for 10Gb to become the standard so that these prices can drop. It seems strange that it has taken this long, especially when we have USB 3.1 and Thunderbolt.

You'll see a marked improvement if you enable jumbo packets, but I'd advise doing research on the pitfalls of doing so. It's best to segregate 'normal' network traffic such as web browsing and video comms away from data-intensive traffic that benefits from jumbo packets, using a VLAN.

VLANs can be tricky to set up but worth the headaches.

If you've not set up your SAN with iSCSI targets then you've got all the normal network overheads, so again it's worth researching now, before you get too bedded in, why they make sense for certain scenarios.

The reason 10Gb/e hasn't taken off is that there's never been much need, and the controller chips ran hotter than 1Gb/e ones. They also consume more power, so it all adds up. There are few industries that can really take advantage of it outside of media design and data centres.

Link aggregation is a great option. I'd be really curious how 2 x 1Gb (aggregated) compares to 10Gb/e in some real-world tests (e.g., using CrystalDiskMark or the Blackmagic Disk Speed Test).

Our old NAS had 4 1Gb jacks that we aggregated together. We couldn't see any speed improvement. Our new machine has 2 10Gb jacks aggregated right now but we haven't tested using only one.

Yes, all of our prior tests were run on 4x1Gb connections aggregated together. I was told that they help with failover and redundancy but they don't actually give you as much performance as you would like. We have aggregated the 2x10Gb connections on this new NAS box, but again, it might not be doing all that much in terms of speed; theoretically it should be 10x faster than the 1Gb solution.

Depending on how it is done, link aggregation will not show any speed increase to any one machine. That is, you now have a 4Gbps connection between your NAS and your switch, but only a 1Gbps connection between your switch and your workstation.

Where the speed increase is seen is when four disparate connections are made to the NAS. The NAS can deliver data at 4Gbps to the switch, which can deliver the data at 1Gbps to each workstation (assuming the maximum throughput is not exceeded). Without aggregation, the NAS is still limited to 1Gbps to the switch, so the workstations cannot all simultaneously get 1Gbps of disparate data from the switch (because the data just is not there).

Think of it as four single-lane roads (the needed data) leading onto a four-lane highway (the aggregate to the switch), which then leads to four single-lane roads (the connections to the workstations), compared to four single-lane roads (the needed data) leading to a single-lane road (a single link to the switch), which then leads to four single-lane roads (the links to the workstations). In the first case, data gets in and out at max speed (1Gbps). In the second case, data has to slow down tremendously to merge into the 1Gbps link, then leaves at ¼ the rate.

But with only one workstation requesting data, whether with one or ten connections from NAS to the switch, the one workstation still gets data at 1Gbps from switch to NIC.

However, link aggregation typically works in one of two ways…
① Redundancy: any data request will go over the first link unless it is unavailable, then it goes over the next link, etc., so each data request is still over a 1Gbps connection, at best.
② Failover: all data requests go over the same link. This changes if and only if that link fails, at which point all data requests go over the next link, etc., so at any given point in time all data is limited to sharing a 1Gbps connection.
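To make the single-client limitation concrete, here is a toy sketch of the flow-hashing behavior most LACP-style bonds use: each source/destination pair is hashed onto one member link, so a single workstation never exceeds one link's speed, while several workstations spread across the links. The addresses and the four-link setup are made up for illustration.

```python
# Toy flow-hashing demo: why aggregation helps many clients but not one
import hashlib

MEMBER_LINKS = 4                  # e.g. 4 x 1 Gb/s links bonded together

def link_for_flow(src_ip, dst_ip):
    digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return digest[0] % MEMBER_LINKS

nas = "10.0.0.2"                  # made-up addresses
for i in range(11, 16):
    ws = f"10.0.0.{i}"
    print(f"{ws} -> NAS traffic pinned to member link {link_for_flow(ws, nas)}")
# One workstation always lands on the same 1 Gb/s link; five workstations
# scatter across the links, so only the aggregate throughput can exceed 1 Gb/s.
```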

This is why I grabbed the 48p Cisco switch, 5 of the ports are just for link aggregation just now. On a good day I can pull down around 200-210Mb/s if I remember rightly from the testing.

Lee, you didn't specify the OS used on the machines. I hope you tweaked out the cards before you ran your tests; the TX and RX buffers should be maxed out for better performance.

https://www.intel.com/content/www/us/en/support/articles/000005783/netwo...

https://www.intel.com/content/www/us/en/support/articles/000007205/netwo...

We're all Windows here. Is there even a Mac computer capable of accepting the 10Gb card? I didn't tweak out the cards. I've watched/read about different things you can do but this stuff is going beyond my comfort zone and after simply plugging everything in, and having it work perfectly, I'm not sure I'm going to try.

I don't spend that much time with Macs so I honestly don't know. The TX and RX buffers can be increased through the Intel driver. You should do it because you will definitely see some performance improvement over the default values.

You can now get a Thunderbolt 2 or 3 external PCIe enclosure and pop whatever NIC you want in there. Not the cheapest solution but it does work.

Cheaper? The ones I know (Akitio, SANLink2, Sonnet) all cost at least $400. Do you know of any cheaper ones available?

The new iMac Pros will have 10Gb Ethernet connections onboard.

Yes you can, however it will cost you around £400 a pop http://www.sonnettech.com/product/echoexpressse1tb3.html

A few things to point out….

① 12×10TB in RAID 6 does not equal 120TB of total space, but 100TB, prior to formatting. (Formatting takes up additional space, depending on whether it is GPT or not and what type of file system is used.)

② There is a new set of units of which one needs to be aware. Traditionally in the computer industry, 1kB=1024B, 1MB=1024kB, 1GB=1024MB, 1TB=1024GB, etc. However, to make HDD capacities seem larger than they really are, some HDD manufacturers started using the SI definitions of the k, M, G, T, P, E, prefixes, each one being ×1,000 greater than the previous.

Therefore, the industry has come up with the prefixes kibi (Ki), mebi (Mi), gibi (Gi), tebi (Ti), etc., where each one is ×1,024 the previous. Therefore, we may still speak of a 10TB (10,000,000,000,000-byte) HDD, but a 120MiB raw file, because file systems still use powers of two, not ten.

③ A 1Gbps network can theoretically transfer 1,000,000,000 bits per second, or 125,000,000 bytes per second, but that is raw bits/bytes. When transferring data we have stop bits and parity bits; we have packets, with descriptors, and ACK signaling (acknowledge/resend); and we have protocol overhead. Of actual user data sent, a typical Ethernet connection loses about 14% as total overhead, so that 125MBps transfer speed is about 107MB of user data per second. The problem is that different tools measure the data differently: some measure the rate of all data sent/received, others measure the rate of file data sent/received. At 300-400MBps on each of 5 computers, that's 1,500-2,000MBps, much faster than the theoretical limit.

I put it to you that the downloads were not actually simultaneous, and that the measurements were not from the time of data request to data received, but the average speed of each packet of data, ignoring lag times between packets. In other words, with 5 simultaneous transfers, you may well be moving slower than an internal HDD RAID or SSD.

④ Each switch has a theoretical “throughput,” meaning the maximum amount of data the switch can handle at one time. A 1Gbps 8-port switch may have a throughput of 440MBps. So even though each connection theoretically maxes out at 125MBps, four ports sending data to four other ports simultaneously means that the network slows down (now attempting to hit 1,000MBps: 500MBps into, and 500MBps out of, the switch).

A 16-port switch at the same 440MBps throughput will certainly bottleneck if eight ports attempt to send data to eight other ports. Fortunately, this scenario is rare.

⑤ Good switches have buffers. If multiple requests are made for the same data, a switch may pull data off the server once, then send it simultaneously to several connected NICs. Again, with buffering, if seven connections are attempting to get the same data from one server, that would be one 125MBps connection in, and 7×125MBps identical connections out, totaling only 250MBps of the 440MBps throughput limit. This, of course, will vary on the time difference between data requests, switching lag, and the buffer size.

So, if five connected persons are accessing five different projects —that is, different data sets— on the server simultaneously, on a 10Gbps network, that would be about 1,070MBps of file data up from the server to the switch, which could not possibly feed each of the five connections 300-400MBps file data downloads simultaneously. At best, about 214MBps of file data.
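As a quick sanity check of the arithmetic in this thread, here is a sketch that reuses the same assumed ~14% overhead figure from the comment above; these are assumptions, not measurements.

```python
# Back-of-envelope: usable file-data rate on a 10 Gb/s link shared by 5 clients
link_gbps = 10
raw_mb_s = link_gbps * 1000 / 8            # 1,250 MB/s of raw bits
overhead = 0.14                            # assumed Ethernet/TCP/SMB overhead
usable_mb_s = raw_mb_s * (1 - overhead)    # ~1,075 MB/s of actual file data

clients = 5
print(f"~{usable_mb_s:.0f} MB/s of file data total, "
      f"~{usable_mb_s / clients:.0f} MB/s per client if all five pull different data")
```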

Maybe we should do a test pulling 5 different folders to 5 different computers.

That would be a better test. Also, if your NAS has Caching or an SSD used as a Cache then testing the same folder over again might produce invalid results as the data would be in the cache for the 2nd request.

You should; those fluctuations in speed you are seeing could very well be due to caching, without even needing an SSD cache.

One computer asks for a file and transfers it at drive speed.
Second computer asks for same file and gets it at RAM speed.
They alternate and you see spikes in speed.

The example can get very complex with the 5 computers you used so I kept it simple.

Lee, nice article. One thing to point out to everyone is that even with the low-cost setup, the machines that get the 10GbE NICs require a PCIe x8 slot for the card. That puts the machine into the workstation/server class; you won't find those kinds of slots in your typical PC.

What you might consider is writing an article about using link aggregation to improve data transfer rates for the readers who don't have workstations and can leverage multiple Gigabit interfaces.

As you can probably tell, I'm just learning a lot of this stuff for myself. Our old server is set up with link aggregation but I'm not sure that we actually saw a speed boost.

As for the PCIe slot: don't most powerful desktops have this, not just workstations? And this card only requires PCIe x4: https://www.amazon.com/dp/B072N84DG6?tag=fstoppers-20

No. Depending on the system you might have a few PCIe x1 slots. The CPU determines the number of PCIe lanes available and thus the number and type of slots available. My current machine uses an eight core Ryzen and has 24 PCIe lanes which are used by the video card (16), the M2 slot (4) and the lone PCIe x4 slot.

Can you expand on this? It sounds like lanes have to do with the CPU and not with the PCIe ports on the mobo. How do you know how many lanes a 10Gbps PCIe card needs, and how do you know how many lanes you have available?

PCIe lanes can get confusing. It depends on both your processor and the chipset on your motherboard. In the case of the new i7-8700K Coffee Lake processor that just came out, it only has one PCIe x16 connection directly to the processor. It also has 24 PCIe 3.0 lanes that run through the Z370 chipset to the processor.

On my motherboard I can use either one PCIe slot at x16 (typically for a video card) or two of the PCIe slots at x8 each. The remaining connections must go through the chipset and are only available at up to x4 PCIe 3.0 speed. So if you have a NIC that uses x8, you need to make sure that it is in the correct slot on your motherboard and that you don't need those lanes for something else, like a video card.

In my case I am using the x16 port for a video card, which means every other connection is limited to x4 max. If I wanted to use a x8 NIC I would have to deal with my video card running at half the bandwidth. That is why I am going with one of these newer x4 cards. NVMe PCIe SSDs, USB 3.1, and all sorts of other devices can quickly eat up the PCIe lanes available to the chipset. Again, in the case of my motherboard, if I use all the PCIe SSD slots then other ports like SATA are disabled because there simply aren't enough PCIe lanes to send data through.

In the attached image you can see the x16 lanes connected directly to the chip and then the chipset with everything else running through it. As you step up to x299/x399 and other more advanced platforms you have more PCIe lanes available, both directly and via the chipset.

TL;DR: the average workstation with a normal video card (or onboard video) and a 10GbE PCIe network card should run at full speed on most modern platforms and i7 processors, but you really need to read the motherboard manual and check for processor compatibility.
https://images.anandtech.com/doci/11860/z370_chipset.png
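A rough bandwidth check explains why the newer x4 cards are enough. The per-lane figure below is the usual PCIe 3.0 number (~985 MB/s per lane after encoding overhead); it is my assumption for illustration, not something from this thread.

```python
# Does a PCIe 3.0 slot have enough bandwidth for a 10 GbE NIC?
PCIE3_MB_S_PER_LANE = 985          # ~8 GT/s with 128b/130b encoding
TEN_GBE_MB_S = 10 * 1000 / 8       # 10 Gb/s of Ethernet = 1,250 MB/s

for lanes in (1, 4, 8):
    slot = lanes * PCIE3_MB_S_PER_LANE
    verdict = "enough" if slot >= TEN_GBE_MB_S else "not enough"
    print(f"PCIe 3.0 x{lanes}: ~{slot} MB/s -> {verdict} for a 10 GbE card")
```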

Like Brett said, it is complicated.

You also have PCIe lane multipliers which give additional lanes but not directly to the CPU. Many Intel boards use these.

Today if you're looking to get the most PCIe lanes then you should be looking at the AMD Threadripper or Epyc processors (OEM only at the moment, as there's not much stock on the market). These give you 64 and 128 lanes respectively. They also work really well for a professional multitasking environment. Intel got caught flat-footed by the new AMD chips and has been playing catch-up since they were released earlier this year.

Daris, if I was building a server or killer workstation I would be looking at Threadripper hard. Especially if I was using Fibre Channel and 10 Gbe I would want to spread my I/O across multiple adapters.

Good article. I've been drinking the Synology Kool-Aid for a while now, and I'm up to three units. Two of them I have as mirrors of each other, in two different offices, running BitTorrent Sync. That way we have (i) an off-site mirror, and (ii) our own huge "Dropbox-style" shared storage where both facilities can work off the data "locally." I have had one of the Synologys fail a couple of times, but support was good and I got replacement units quickly with minimal hassle... and most importantly, the drives worked flawlessly in the new unit with no data loss.

As a 27-year WAN engineer and designer, I got a really good laugh out of this.

Thanks for that.

+1

If you had built your career around something and then watched a video of somebody doing that same thing without a clue and with so much wrong in it, it would be funny to you too.

Exactly Bob!!! I am an IT professional as well as a photographer and I get that not everyone knows how to do IT stuff and that there is a learning curve involved. Part of the reason why I got into IT is watching Navy photographers fumble around with IT issues because they were not trained on IT.

Maybe John and uno can provide some meaningful advice; "hiring a professional" is just one option. I don't mind sharing.

Looks like someone has never fired up server-class hardware (that surprise at the noise they generate is a big tell), nor would like to hire professionals for the work (not running cables to jacks is not going to be nice when you trip on a cable and have to run a whole new one through the ceiling)...

There are so many things that can go wrong with this build... please look for a local professional so you don't end up with lots of problems in the long run. From the video alone I can point out three, maybe four, big issues; who knows if there are more.

Just like you make good photo/video content and tell people to hire professionals for photo/video work, please apply the same advice to yourselves.

You are right. But they should look out for a professional, not a so-called one.

You are also right, and if you're talking about me, it was never my intention to mock or ridicule; maybe it was something about my culture, or something lost in translation. I only tried to offer my opinion and advice because Fstoppers could be facing a big potential problem.

My meaningful advice, as I can only speak for myself, was to seek out a local professional, to do a better benchmark, and to explain why it should not be benchmarked the way he did.

For any further advice I would bill for it. I make a living on this and will not give it away for free, just as I would not expect you to give away the rights to your work for free, or to do for free what you make your living on.
