Nvidia Announces New Graphics Cards: Should We Care?

If you've read any tech journalism over the last day or two, you're probably sick of seeing Nvidia's name next to a 3000 series designation. There's no question Nvidia has created some impressive technology, but is it actually going to change anything for photo and video editing? The answer might just surprise you.

First, let's talk briefly about the cards. Nvidia announced the RTX 3090, RTX 3080, and RTX 3070. These GPUs make up their latest generation, featuring a host of performance improvements over the older architectures. All the usual upgrades are there, including more cores, faster memory, and the newest connectors. Even at the "lower" end of the stack, the RTX 3070 is supposedly faster than their previous flagship, the RTX 2080 Ti.

There's no question that these cards are going to offer a significant performance improvement to Nvidia's core market of gamers and machine-learning researchers. For photo and video editing, however, are they even going to be worth the upgrade? That question is a lot more difficult to answer, owing to the highly fragmented nature of GPU acceleration in professional programs.

Fortunately, over the last couple of years, Adobe and other software makers have added a number of GPU-accelerated features to their programs. This has meant faster workflows, most notably where processes that used to require a render are now drawn in near real time; just look at scrubby zoom in Photoshop. To better understand how a faster GPU could help, let's take a look at which workloads are accelerated on a per-program basis.

Photoshop

For Photoshop, the following tools either require a GPU or are dramatically accelerated by the presence of one:

  • Perspective Warp
  • Scrubby zoom
  • Smooth brush resizing
  • Lens blur
  • Camera Raw
  • Resizing with the preserve details option
  • Select Focus
  • Blur Gallery: Field Blur, Iris Blur, Tilt-Shift, Path Blur, Spin Blur
  • Smart Sharpen
  • Select and Mask

Looking at that list, I'm not seeing anything that is currently a pain point in my workflow. Probably the biggest benefit comes with Select and Mask, where some larger images can chug on my 2070, although I wouldn't upgrade just for that. I'd argue that any reasonably modern GPU is already plenty for Photoshop.

Lightroom

In Lightroom, many of the adjustments are GPU accelerated, including the basic adjustments, tone curve, HSL, split toning, detail, and other panels in the Develop module. Notably, the adjustment brush, loading raw images, generating previews, and a host of other time-consuming tasks are not GPU accelerated. Neither are some more niche but time-intensive processes like HDR and panorama generation.

Want to get an idea of what GPU acceleration impacts in Lightroom? Try toggling it off and browsing through your catalog.

Like many aspects of Lightroom, the situation is messy. GPU acceleration itself is buggy, with users reporting a variety of issues. Enabling GPU support on a card that's too weak for your files can counterintuitively make things slower than no GPU acceleration at all. There's the additional caveat of screen resolution, with GPU acceleration making more of a difference at higher resolutions. I didn't really see an impact until I went to a 4K monitor, for example.

For Lightroom, the choice of GPU is highly dependent on your existing gear. If you have a high resolution monitor and an older, slower GPU, a new card could make a large difference to not only speed but stability. If instead you're using a relatively new card with updated drivers, your money can be put towards better storage or a CPU upgrade, which should provide a more significant boost to the user experience.

Video Editing

The video editing world has been enjoying GPU-accelerated effects and transitions for a while. Blending, scaling, some effects like color balance, and transitions like cross dissolves can all get a boost. Notably, Lumetri Looks all play well with GPU acceleration in my experience. Since the complexity of different video projects can range far more widely than that of photos (1080p vs 4K, heavy effects use vs cutting together some clips), you'll have to take a look at your own workflow. When you edit a project, keep a GPU monitoring program open and check things like VRAM usage and utilization to see if you've maxed out your current gear, as in the sketch below. One important note is that while these new cards support AV1 decoding, the hardware support for fast AV1 encoding still isn't there.
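If you don't already have a monitoring tool handy, here's a minimal sketch of the idea in Python, assuming Nvidia's nvidia-smi utility (installed alongside the driver) is on your PATH and you're watching a single GPU; it prints utilization and VRAM usage once a second while you scrub or export:

```python
import subprocess
import time

# Fields reported by nvidia-smi: GPU utilization (%), VRAM used and total (MiB).
QUERY = "utilization.gpu,memory.used,memory.total"

def sample_gpu():
    # One driver query; csv,noheader,nounits returns e.g. "37, 3120, 8192"
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    util, used, total = (int(x) for x in out.splitlines()[0].split(", "))
    return util, used, total

if __name__ == "__main__":
    while True:
        util, used, total = sample_gpu()
        print(f"GPU utilization: {util:3d}%  |  VRAM: {used}/{total} MiB")
        time.sleep(1)
```

If utilization regularly pins near 100% or VRAM sits at the card's limit during playback and exports, a faster card with more memory will actually be felt; if not, your bottleneck is probably elsewhere.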

Other Programs

Interestingly, a number of more niche programs offer better GPU benefits than the industry titans. Specialty programs for tasks like panorama stitching and focus stacking often support OpenCL acceleration, meaning these cards could give you a big improvement in processing times. Additionally, photogrammetry users will appreciate the larger VRAM amounts on offer.
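As a quick sanity check on that front, here's a minimal sketch, assuming the third-party pyopencl package is installed, that lists the OpenCL platforms and devices such programs would see, along with each device's memory:

```python
# Enumerate every OpenCL platform and device visible on the system.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        mem_gb = device.global_mem_size / (1024 ** 3)
        print(f"{platform.name}: {device.name} "
              f"[{cl.device_type.to_string(device.type)}] {mem_gb:.1f} GB")
```

If your new card shows up here with its full memory, OpenCL-based stitchers and stackers should be able to take advantage of it.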

If you work with CGI programs that support GPU acceleration, these cards should be very appealing. A large bump in VRAM that was previously unavailable below Quadro-level cards, combined with a claimed major boost to performance, could offer real benefits. The analysis of these specialty programs is beyond the scope of this piece, but if you find yourself compositing CG imagery with your photos or videos, keep an eye on program-specific benchmarks.

Beyond Speed Improvements

Looking beyond just the raw speed improvements, it's important to consider some of the features of the cards and what they mean for the visual industries. The first is the continued expansion of AI-powered features, implemented in products like NVIDIA Broadcast. The software takes input from regular webcams and mics, then performs software magic to drastically improve the quality and add features. For instance, they demoed high-quality real-time background removal without a green screen, and the existing RTX audio processing delivers amazing background noise reduction, capable of filtering out even a hairdryer running alongside vocals.

Renders like this have gotten to the point that they're truly lifelike - should photographers be worried?

Last but not least is the RTX's namesake feature of ray tracing. First introduced on the 2000 series cards as a glorified tech demo, it seems the hardware has made it to the point of usability. Their demo, with hundreds of lights and a complex scene featuring one hundred million polygons, ran at 1440p at a reasonable frame rate. With these quality improvements to ray tracing, are more clients going to opt for a virtual photo shoot? Ikea already generates most of the imagery for its catalogs via CGI rather than traditional photography.

Conclusion

If you've been sitting on the sidelines for the last few GPU generations, I don't blame you. Between rising prices and diminishing performance improvements, there hasn't been much of a reason to upgrade. The messy state of hardware acceleration in the programs photographers and videographers use has made it an even tougher sell. In past articles covering hardware, I've mentioned a set of priorities for many users: a dollar spent on an NVMe SSD or a faster CPU typically offers more benefit than one spent on a GPU, and it seems that still hasn't changed. If, however, you've already maxed out your computer in these other areas and are looking to wring out more performance, or your specialty workflow benefits from the improvements discussed above, Nvidia's 3000 series cards should be at the top of your list.

53 Comments

Joe Svelnys's picture

These new cards have my attention, as I'm still running a GeForce 760 2GB card. What could help with workflow in conjunction with a new-series card would be the addition of a PCIe 4.0 drive. Just to add, software like Topaz (and others) has also added graphics card acceleration.

Now if photography software, or (take your pick) video software, could take advantage of the new architecture in these cards, which allows bypassing [Drive > CPU > Memory > CPU > Graphics Card] in favor of [Drive > Graphics Card], that would mean massive improvements in the efficiency of said software and also the hardware (CPU cycles). Hopefully the companies are taking notice and will release patches for such efficiency gains.

The thing to note is that these cards are PCIe Gen 4, and nearly every (not all, but the majority) system out there is Gen 3 (CPU/mainboard). The cards will still work, but remember they will not be in their full glory on a Gen 3 system. For general use, the difference is probably not even noticeable; in everyday casual workflows, that is. If you are on "Team Blue" (Intel), it might still be a small wait for Gen 4.

Lots to consider and ponder, but it's looking like it might be a wonderful season to upgrade if you're running an older system. Not to mention memory and solid-state drive prices are projected to trend downward for the next several months.

Joe Svelnys's picture

I'm not confused but I thank you for the link and information.

I'm just at the "end of life" for this system. Lightroom runs surprisingly well and fast on this old potato; it takes a good 20s to load completely, but once loaded it runs reasonably well. Surprisingly, with one of my images loaded in Develop and after playing around for a bit, Lightroom is only utilizing 1GB of system memory. Each additional image I swap over to seems to use up an additional 500MB, stabilizing around 2.5GB. Ironically, Firefox is taking up 1.6GB (with just two windows open).

Sitting on 32GB of system memory. I keep the system lean and clean. It runs awesome for its age, but it is aged. Built in 2010, with the video card being from '12, or was it early '13... Either way, a potato by today's standards. :)

Joe Svelnys's picture

It doesn't run that badly as such; super insane blazing hyper bankai fast? No.

The slowest part is the startup (20 seconds on a 10-year-old computer); other than that, hiccups are hardly noticeable (if at all). If it's running so horribly that it's unusable, I'd suggest people do a complete system diagnostic and cleanup, do a Windows refresh, and clear out the bloat (which will also clear out any system configurations said user may have changed thinking it would help performance but actually hindered it)...

Some more memory, yes, but 128GB to 512GB of system memory is definitely overkill for Lightroom. People should be running 32GB these days regardless, especially in the creative fields (not talking about a monster studio-backed killing machine; that's a different story of course). 64GB would be a nice level to be at without breaking the bank (if the mainboard can handle it); most boards people buy cap at 32GB.

I stand by my original statement that going to a current-generation (Gen 4) drive will yield the greatest boost in performance (system-wide, including Lightroom), with the CPU and graphics card coming up right after.

That's not to say that Adobe (and other companies) couldn't always improve their code's performance... In the last week there was a Lightroom performance patch; I'm sure they are always working on such things...

Hector Belfort's picture

Hopefully they have fixed their driver update process. My laptop got completely clogged up with old drivers with no normal way to delete them. It was very poor from them.

Alex Coleman's picture

Weird. You can try DDU, which is a dedicated driver uninstaller. I definitely trust Nvidia's drivers over AMD's, all said.

Joe Svelnys's picture

I agree with Alex. You can also try Revo Uninstaller, which will also dig through the registry for leftover keys.

Daris Fox's picture

If you're looking to buy a graphics card, I have a simple bit of advice: wait. AMD is about to release their RDNA 2 graphics cards in the next few months, which by all accounts take the fight back to nVidia. They probably won't beat them on performance, but they'll beat them on power draw and most likely price. We know RDNA 2 is good from the next-gen consoles; we just don't know how good.

The nVidia presentation was a lot of smoke and mirrors, and compared to previous launches it felt rushed, with the lack of gameplay and benchmarks. These are solid cards, and arguably what should have been released instead of the 20 series, but they're not great cards.

Joe Svelnys's picture

It will definitely be interesting to see how the field plays out over the next several months. We can see the battle forming already; if it wasn't for AMD's pressure, I'm sure the 3000 series from nVidia would still carry their old 2000 series pricing (or higher).

Martin Peterdamm's picture

But one benefit you get now from cheaper Nvidia gaming cards, and previously only from AMD and workstation cards, is 30-bit in Photoshop.

Christoph .'s picture

I would certainly wait to see real-world performance, but as a creator it looks very exciting.

RDNA 2 also looks promising, and at the very least, used 2000 series cards should be very inexpensive in the coming months; having a 2080, I can say they are more than capable. A good time for tech at the very least! Very exciting innovations happening.

Errick Jackson's picture

I get the urge, but the Nvidia vs AMD fight is not the same as the Intel vs AMD fight. Nvidia has held the upper hand pretty easily in GPU power and innovation.

I love what AMD is doing in the CPU space (I'll likely be getting a Threadripper CPU for my next build), but I'm under no illusion that AMD is worth holding out for in the GPU space. I've not been impressed by their releases up to now. Even with the workstation cards they have out there (several of which you'd have to buy into a Mac Pro to even get), Nvidia has been throwing their weight around easily.

Seeing them just crash through the gates with the 3070 outperforming the 2080 Ti (pending benchmarks), I have a hard time believing AMD's next offerings will be a compelling choice unless the pricing is far under Nvidia's. Especially if you're holding out for performance: if they're not touching the 3070's performance, I'm not even looking at it.

Daris Fox's picture

Actually, extrapolating the RDNA 2 architecture is fairly easy even before you consider the consoles: you have the 5700 XT, to which you can add 25% performance and cut power consumption by 50% (confirmed by both AMD and MS with the Series X). Also remember RDNA 2 is a completely new GPU, as it's removed any trace of GCN. RDNA 2 is a gaming chip first and foremost, as shown by the PS5 and Series X, and if you want a compute-based GPU then it's CDNA that's due out later this year. Pretty much everyone is saying that RDNA 2 is going to be a damned impressive chip. Will it beat nVidia? Probably not, but it'll certainly be in the same ballpark as the 3080 on a far more power-efficient node and will likely clock better due to TSMC's process advantage over Samsung. In addition, we have a fairly clear idea of what RDNA 2 is bringing to the table through the Series X presentation at Hot Chips and what the PS5 is doing. The Series X is producing near 2080-2080 Ti performance in a 125W power envelope, and that's a cut-down RDNA 2 with some custom silicon.

Regarding Samsung, nVidia is going to struggle to meet demand for the 3090 and possibly the 3080, as all the best dies will go to the Quadro division, leaving very little for the consumer market; this is because the Samsung node has poorer yields.

The real selling point for nVidia isn't their hardware, but rather their software stack with vendor lock-in.

Alex Coleman's picture

The selling point for Nvidia really has been hardware and software: for years now, if you wanted the best performance in mid-market gaming or up, or for anything ML, Nvidia was the only option.

AMD's new cards might be a good option if you're on a budget, but I doubt they will genuinely challenge the 3080 or higher. If you need the performance, you gotta pay.

Daris Fox's picture

The RTX 3080 is roughly 20-30% faster going by what we've seen; most of the work has gone into the tensor engine. That's within the margins of what's expected from AMD. The RTX 3090 is unlikely to be beaten, but there are just too many unknown questions about RDNA 2 due to the lack of leaks. Some things are certain, though: there are models with 80 CUs, an early engineering model was spotted beating the 2080 Ti by 17% in a VR benchmark in February, and it can clock higher than nVidia's chips (as per the PS5). The fact that nVidia rushed their cards to market and used their best silicon shows how concerned they are about AMD's challenge. A final concern for nVidia is console sales, which will sway developers to optimise more for RDNA 2 instead of nVidia, or that's their fear.

The weak spot for AMD isn't hardware, it's software, but that's down to the lack of R&D cash. Lisa Su has been instrumental in a reformation of the company, so past performance can't be used as a metric.

Richard Bradbury's picture

They do look impressive, but having just built a new AMD-based machine with a 2060 Super and bought a new AMD-based laptop with a 1660 Ti, I am not in the market for one of these.

Adobe can't even make use of current-gen hardware, so I would not expect them to use the power of these cards anytime soon.

Daniel Lee's picture

My PC is used for editing as much as it is for gaming, and considering my GTX 1070 is slowing down, these new cards are too hard to resist. I think an RTX 3080 will be my next purchase.

Spy Black's picture

I say wait for the new AMDs and then decide.

LA M's picture

Is anyone even working to any significant extent given COVID-19?

Save your money... we could be in for a long haul. I would hate to see some of y'all running GoFundMe campaigns to buy groceries or pay rent.

Jon The Baptist's picture

Photo editing doesn't need much in the way of a video card. Video DOES. If you're processing out in NVENC, these RTX cards with their insane number of CUDA Cores will make exporting WICKED FAST.

I feel bad for anyone who bought a 2070 or 2080 recently.

And if you're using AMD... woof.

That being said, bottom line, these cards are designed for gaming. They're for people who want to play FS2020 at 4K and Red Dead Redemption with decent framerates.

Alex Coleman's picture

What?

The 3080 adds more cores, clocks them and the memory faster, and performs more FLOPS. The question of how much more real-world performance this actually translates to is still open, but it's disingenuous to claim this is just a 2080 Ti with DLSS. Perhaps an AI can break down technical specs in an easier-to-digest way for you :)

William Faucher's picture

CUDA cores are some of the main things that affect rendering speed. There's a reason we stack the cards when GPU rendering.

Joe Svelnys's picture

Not to mention the 2080 Ti (all RTX cards) had DLSS, with DLSS 2.0 out months ago...

Joe Svelnys's picture

Let's not confuse CUDA cores with RT or Tensor cores. A ton of software, including a ton of software creatives utilize on a daily basis, is accelerated by CUDA cores; RT and Tensor cores, on the other hand, may or may not have an impact depending on what you're doing in your workflow... When I was a 3D animator years back I would have killed for real-time ray tracing; but no, Photoshop and Lightroom, in that limited scope, will not do much (if anything) with these cores.

The 3000 series is getting a massive boost in CUDA cores, which will very much make a difference not only for creatives but also for gamers. When you click "graphics card accelerated" in (insert the software you use here), that's CUDA cores.

Joe Svelnys's picture

You're right, it's all about secondary processing and leveraging that for accelerated software and games; thank you for agreeing with everything I've been talking about.

I'll just end the conversation here, since we agree CUDA cores are most useful and worth having, with more being better depending on the creator's/gamer's use case.
