Nvidia Announces New Graphics Cards: Should We Care?

If you've read any tech journalism over the last day or two, you're probably sick of seeing the word Nvidia paired with a 3000 series designation. Nvidia has clearly created some impressive technology, but is it actually going to change anything for photo and video editing? The answer might just surprise you.

First, let's talk briefly about the cards. Nvidia announced the RTX 3090, RTX 3080, and RTX 3070. These GPUs are the company's latest generation, featuring a host of performance improvements over the older architectures. All the usual upgrades are there, including more cores, faster memory, and the newest connectors. Even at the "lower" end of the stack, the RTX 3070 is supposedly faster than the previous flagship, the RTX 2080 Ti.

There's no question that these cards are going to offer a significant performance improvement for Nvidia's core market of gamers and machine-learning researchers. For photo and video editing, however, are they even going to be worth the upgrade? That question is a lot more difficult to answer owing to the highly fragmented nature of GPU acceleration in professional programs.

Fortunately, over the last couple of years, Adobe and other software makers have added a number of GPU-accelerated features to their programs. This has meant faster workflows, most notably where processes that used to require a render are now drawn in near real time; just look at scrubby zoom in Photoshop. To better understand how much a faster GPU would actually help, let's take a look at which workloads are accelerated on a per-program basis.

Photoshop

In Photoshop, the following tools either require a GPU or are dramatically accelerated by one:

  • Perspective Warp
  • Scrubby zoom
  • Smooth brush resizing
  • Lens blur
  • Camera Raw
  • Resizing with the preserve details option
  • Select Focus
  • Blur Gallery: Field Blur, Iris Blur, Tilt-Shift, Path Blur, Spin Blur
  • Smart Sharpen
  • Select and Mask

Looking at that list, I'm not seeing anything that is currently a pain point in my workflow. Probably the biggest benefit comes with the Select and Mask workflow, where some larger images can chug on my 2070, although I wouldn't upgrade just for that. I'd argue that any reasonably modern GPU is already more than sufficient for Photoshop.

Lightroom

In Lightroom, many of the adjustments are GPU accelerated, including the basic adjustments, the tone curve, HSL, split toning, detail, and other panels in the Develop module. Notably, the adjustment brush, loading raw images, generating previews, and a host of other time-consuming tasks are not GPU accelerated. Nor are some more niche but time-intensive processes like HDR and panorama generation.

Want to get an idea of what GPU acceleration impacts in Lightroom? Try toggling it off and browsing through your catalog.

Like many aspects of Lightroom, the situation is messy. GPU acceleration itself is buggy, with users reporting a variety of issues. Enabling GPU support on a card that's too weak for your files can counterintuitively make things slower than no GPU acceleration at all. There's the additional caveat of screen resolution: GPU acceleration makes more of a difference at higher resolutions. I didn't really see an impact until I moved to a 4K monitor, for example.

For Lightroom, the choice of GPU is highly dependent on your existing gear. If you have a high resolution monitor and an older, slower GPU, a new card could make a large difference to not only speed but stability. If instead you're using a relatively new card with updated drivers, your money can be put towards better storage or a CPU upgrade, which should provide a more significant boost to the user experience.

Video Editing

The video editing world has been enjoying GPU-accelerated effects and transitions for a while. Blending, scaling, some effects like color balance, and transitions like cross dissolves can all get a boost. Notably, Lumetri looks all play well with GPU acceleration in my experience. Since the complexity of different video projects can range far more widely than that of photo work (1080p vs. 4K, heavy effects use vs. simply cutting together some clips), you'll have to take a look at your own workflow. When you edit a project, keep a GPU monitoring program up and check things like VRAM usage and utilization to see if you've maxed out your current gear. One important note is that while these new cards support AV1 decoding, the hardware support for fast AV1 encoding still isn't there.
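If you'd rather log those numbers than eyeball them, here's a minimal sketch of that kind of monitoring, assuming an Nvidia card with the standard nvidia-smi tool on your PATH (the two-second interval and the gpu_log.csv output file are arbitrary choices):

    # poll_gpu.py: sample GPU utilization and VRAM while you edit (Nvidia cards only).
    # Assumes the nvidia-smi utility that ships with the driver is on your PATH.
    import csv
    import subprocess
    import time

    QUERY = "utilization.gpu,memory.used,memory.total"

    with open("gpu_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time", "gpu_util_pct", "vram_used_mib", "vram_total_mib"])
        while True:
            # Query the first GPU (-i 0) and get bare comma-separated numbers back.
            out = subprocess.run(
                ["nvidia-smi", "-i", "0",
                 "--query-gpu=" + QUERY,
                 "--format=csv,noheader,nounits"],
                capture_output=True, text=True, check=True,
            ).stdout.strip()
            util, used, total = [v.strip() for v in out.split(",")]
            writer.writerow([time.strftime("%H:%M:%S"), util, used, total])
            f.flush()
            time.sleep(2)  # sample every two seconds; stop with Ctrl+C

If utilization rarely pegs and VRAM stays well below the total during your heaviest timelines, a faster card probably isn't the bottleneck.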

Other Programs

Interestingly, a number of more niche programs offer better GPU benefits than the industry titans. Specialty tools for panorama stitching and focus stacking often support OpenCL acceleration, meaning these cards could deliver a big improvement in processing times. Additionally, photogrammetry users will appreciate the larger VRAM amounts on offer.

If you work with CGI programs that support GPU acceleration, these cards should be very appealing. A large bump in VRAM, previously unavailable below Quadro-level cards, combined with a claimed major boost in performance could offer significant benefits. The analysis of these specialty programs is beyond the scope of this piece, but if you find yourself compositing CG imagery with your photos or videos, keep an eye on program-specific benchmarks.

Beyond Speed Improvements

Looking beyond just the raw speed improvements, it's important to consider some of the features of the cards and what they mean for the visual industries. The first is the continued expansion of AI-powered features, implemented in products like NVIDIA Broadcast. The software takes input from regular webcams and mics, then performs software magic to drastically improve the quality and add features. For instance, Nvidia demoed high-quality real-time background removal without a green screen, and the existing RTX audio processing delivers amazing background noise reduction, capable of filtering out even a hairdryer running alongside vocals.

Renders like this have gotten to the point that they're truly lifelike - should photographers be worried?

Last but not least is RTX's namesake feature: ray tracing. First introduced on the 2000 series cards as something of a glorified tech demo, the hardware now seems to have reached the point of usability. Nvidia's demo, with hundreds of lights and a complex scene featuring one hundred million polygons, ran at 1440p at a reasonable frame rate. With these quality improvements to ray tracing, are more clients going to opt for a virtual photo shoot? Ikea already generates most of the imagery for its catalogs via CGI rather than traditional photography.

Conclusion

If you've been sitting on the sidelines for the last few GPU generations, I don't blame you. Between rising prices and diminishing performance improvements, there hasn't been much of a reason to upgrade. The messy state of hardware acceleration in the programs photographers and videographers use has made it an even tougher sell. In past articles covering hardware, I've mentioned a set of priorities for many users: a dollar spent on an NVMe SSD or a faster CPU typically offers more benefit than one spent on a GPU, and it seems that still hasn't changed. If, however, you've already maxed out your computer in those other areas and are looking to wring out more performance, or your specialty workflow benefits from the improvements discussed above, Nvidia's 3000 series cards should be at the top of your list.

Alex Coleman is a travel and landscape photographer. He teaches workshops in the American Southwest, with an emphasis on blending the artistic and technical sides of photography.

23 Comments

These new cards have my attention, as I'm still running a GeForce 760 2 GB card. What could also help with workflow, in conjunction with a new-series card, would be the addition of a PCIe 4.0 drive. Just to add, software like Topaz (and others) has also added graphics card acceleration.

Now, if photography software, or (take your pick) video software, could take advantage of the new architecture in these cards, which allows the [Drive > CPU > Memory > CPU > Graphics Card] path to be bypassed in favor of [Drive > Graphics Card], that would make for massive improvements in the efficiency of said software and also the hardware (CPU cycles). Hopefully the companies are taking notice and will ship patches for such efficiency gains.

The thing to note is that these cards are PCIe Gen 4, and nearly every (not all, but the majority of) system out there is Gen 3 (CPU/mainboard). The cards will still work, but remember they will not be in their full glory on a Gen 3 system. For general use, the difference is probably not even noticeable in everyday, casual workflows. If you are on "Team Blue" (Intel), it might still be a small wait for Gen 4.

Lots to consider and ponder, but it's looking like it might be a wonderful season to upgrade if you're running an older system. Not to mention memory and solid-state drive prices are projected to trend downward for the next several months.

Hopefully they have fixed their driver update process. My laptop got completely clogged up with old drivers with no normal way to delete them. It was very poor from them.

Weird. You can try DDU (Display Driver Uninstaller), which is a dedicated driver removal tool. I definitely trust Nvidia's drivers over AMD's, all said.

I agree with Alex. You can also try Revo Uninstaller, which will also dig through the registry for leftover keys.

If you're looking to buy a graphics card, I have a simple bit of advice: wait. AMD is about to release its RDNA 2 graphics cards in the next few months, which by all accounts take the fight back to nVidia. They probably won't beat nVidia on performance, but they'll beat them on power draw and most likely price. We know RDNA 2 is good from the next-gen consoles; we just don't know how good.

The nVidia presentation was a lot of smoke and mirrors, and compared to previous launches it felt rushed, with the lack of gameplay and benchmarks. These are solid cards, and arguably what should have been released instead of the 20 series, but they're not great cards.

It will definitely be interesting to see how the field plays out over the next several months. We can see the battle forming already; if it wasn't for AMD's pressure, I'm sure the 3000 series from nVidia would still carry the old 2000 series pricing (or higher).

But one benefit you now get from cheaper Nvidia gaming cards, and only from AMD's workstation cards, is 30-bit color in Photoshop.

I would certainly wait to see real-world performance, but as a creator, it looks very exciting.

RDNA 2 also looks promising, and at the very least, used 2000 series cards should be very inexpensive in the coming months; having a 2080, I can say they are more than capable. Good time for tech at the very least! Very exciting innovations happening.

I get the urge, but the Nvidia vs. AMD fight is not the same as the Intel vs. AMD fight. Nvidia has held the upper hand pretty easily in GPU power and innovation.

I love what AMD is doing in the CPU space (I'll likely be getting a Threadripper CPU for my next build), but I'm under no illusion that AMD is worth holding out for in the GPU space. I've not been impressed by their releases up to now. Even with the workstation cards they have out there (several of which you'd have to buy into a Mac Pro to even get), Nvidia has been throwing their weight around easily.

Seeing them just crash through the gates with the 3070 outperforming the 2080 Ti (pending benchmarks), I have a hard time believing AMD's next offerings will be a compelling choice unless the pricing is far under Nvidia's. And especially if you're holding out for performance: if they're not touching the 3070's performance, I'm not even looking at it.

Actually, interpolating the RDNA 2 architecture's performance is fairly easy even before you consider the consoles: you have the 5700 XT, to which you can add 25% performance and reduce power consumption by 50% (confirmed by both AMD and MS with the Series X). Also remember RDNA 2 is a completely new GPU, as it's removed any trace of GCN. RDNA 2 is a gaming chip first and foremost, as shown by the PS5 and Series X, and if you want a compute-based GPU, then it's CDNA that's due out later this year. Pretty much everyone is saying that RDNA 2 is going to be a damned impressive chip. Will it beat nVidia? Probably not, but it'll certainly be in the same ballpark as the 3080 on a far more power-efficient node, and it will likely clock better due to TSMC's process advantage over Samsung. In addition, we have a fairly clear idea of what RDNA 2 is bringing to the table through the Series X presentation at Hot Chips and what the PS5 is doing. The Series X is producing near 2080 to 2080 Ti performance in a 125 W power envelope, and that's a cut-down RDNA 2 with some custom silicon.

Regarding Samsung, nVidia is going to struggle to meet demand for the 3090 and possibly the 3080, as all the best dies will go to the Quadro division, leaving very little for the consumer market; this is because the Samsung node has poorer yields.

The real selling point for nVidia isn't their hardware, but rather their software stack with vendor lock-in.

The selling point for Nvidia really has been hardware and software: for years now, if you wanted the best performance in mid-market gaming or above, and for anything ML, Nvidia was the only option.

AMD's new cards might be a good option if you're on a budget, but I doubt they will genuinely challenge the 3080 or higher. If you need the performance, you gotta pay.

The RTX 3080 is roughly 20-30% faster going by what we've seen; most of the work has gone into the tensor engine. That's within the margins of what's expected from AMD. The RTX 3090 is unlikely to be beaten, but there are just too many unknowns about RDNA 2 due to the lack of leaks. Some things are certain, though: there are models with 80 CUs, an early engineering model was spotted in February beating the 2080 Ti by 17% in a VR benchmark, and it can clock higher than nVidia's chips (as per the PS5). The fact that nVidia rushed their cards to market and used their best silicon shows how concerned they are about AMD's challenge. A final concern for nVidia is console sales, which will sway developers to optimise more for RDNA 2 instead of nVidia, or at least that's their fear.

The weak spot for AMD isn't hardware, it's software, but that's down to the lack of R&D cash. Under Lisa Su, the company has been going through a real reformation, so past performance can't be used as a metric.

They do look impressive, but having just built a new AMD-based machine with a 2060 Super and bought a new AMD-based laptop with a 1660 Ti, I am not in the market for one of these.

Adobe can't even make use of current gen hardware so I would not expect them to use the power of these cards anytime soon.

My PC is used for editing as much as it is for gaming, and considering my GTX 1070 is slowing down, these new cards are too hard to resist. I think an RTX 3080 will be my next purchase.

I say wait for the new AMDs and then decide.

Is anyone even working to any significant extent given COVID-19?

Save your money...we could be in for a long haul. Would hate to see some of ya'll running GoFundMe to buy groceries or pay your rent.

Photo editing doesn't need much in the way of a video card. Video DOES. If you're processing out in NVENC, these RTX cards with their insane number of CUDA Cores will make exporting WICKED FAST.

I feel bad for anyone who bought a 2070 or 2080 recently.

And if you're using AMD... woof.

That being said, bottom line: these cards are designed for gaming. They're for people who want to play FS2020 at 4K and Red Dead Redemption with decent framerates.

I feel like if you're a photographer, you'd still be fine with a 10-series graphics card. If you're doing video, design, or animation work, I could easily see these new GPUs being highly beneficial. The step up is so significant, it's halted my desire to switch over to an Apple Silicon Mac when it comes around.

As a gamer, I always owned top-of-the-line (or close) video cards. In April I sold my 1080 Ti and just used the integrated GPU (Intel 630).
As a Lightroom CC user, I noticed some slowdowns, but I had no issues editing my photos. Photoshop I only used to automate logos on hundreds of images.
I did have a powerful computer (9900K at 5 GHz, 32 GB DDR4, fast M.2 SSDs).
The Ampere announcement is great for you non-gamers who need a new video card.
The new cards are too powerful for video/photo editing, but they are also "cheaper" than expected, which means old cards will go down in price, a lot, especially on the second-hand market.

LOL!!!! ...Alex calls a huge leap in performance from the 2000 series to 3000 series "diminishing performance improvements"

For photographers and most videographers, yes. A faster GPU beyond something like a 970 will yield little real-world improvement in workflows; PS, LR, and others just don't take full advantage of it. This isn't a review of the card's gaming or ML performance, so consider the context of the statement.

That's not necessarily true, because you also have hidden improvements that aren't in the headline specs. Take this, for example:

https://developer.nvidia.com/video-encode-decode-gpu-support-matrix

That shows the differences between generations for NVENC, which are useful for video encoding. You also have an extra FP pipeline on top of the INT pipeline (nVidia went back to a dual-function CUDA core) for the 3000 series. Also, both generations of RTX will support AV1 decoding, along with AMD's new 6000 series. Sadly, no encoding for either GPU until next gen.

Overall, the new 3000 series is overpriced, not just in cost but also TCO, especially if all the rumours about RDNA 2 are true. The real acid test will be how good RDNA 2 is for video encoding/compute compared to GCN, as they've focused the card on gaming, unlike nVidia, who basically copied GCN's own flaw of being compute-heavy.

Either way, we'll know in a few weeks how good the RDNA 2 arch will be and whether it fulfils the promise hinted at by both the PS5 and Xbox APUs.

The lack of improvement is more a damning indictment of the glacial pace of development from Adobe, especially when competitors like Capture One, Blackmagic, and many third-party plug-ins have added better GPGPU support at a quicker pace.

These are exceptions that prove the rule: AV1 decode isn’t useful to content creators because no capture device is outputting it. AV1 encode via NVENC isn’t here yet. NVENC more broadly is just a nice to have, as you’d probably want to CPU encode anyway for the quality benefit.
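For anyone curious what handing an export to NVENC versus the CPU actually looks like, here's a minimal sketch driving ffmpeg from Python. It assumes an ffmpeg build compiled with NVENC support, and the file names, bitrate, and CRF values are placeholders chosen just to illustrate the trade-off:

    # encode_compare.py: export the same clip with NVENC (GPU) and x264 (CPU) via ffmpeg.
    # Assumes an ffmpeg build with NVENC support; "input.mov" and output names are placeholders.
    import subprocess

    SOURCE = "input.mov"

    # Hardware encode on the GPU's NVENC block: very fast, slightly less efficient per bit.
    subprocess.run([
        "ffmpeg", "-y", "-i", SOURCE,
        "-c:v", "h264_nvenc", "-b:v", "20M",
        "-c:a", "copy", "nvenc_out.mp4",
    ], check=True)

    # Software encode on the CPU with x264: slower, generally better quality at a given size.
    subprocess.run([
        "ffmpeg", "-y", "-i", SOURCE,
        "-c:v", "libx264", "-preset", "slow", "-crf", "18",
        "-c:a", "copy", "x264_out.mp4",
    ], check=True)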

Is any photo or video software really saturating even a basic 1070 or 2070? The only workload I’ve found is photogrammetry, but that’s pretty niche.