If you've read any tech journalism over the last day or two, you're probably sick of seeing the word Nvidia next to a 3000-series model number. There's no question Nvidia has created some impressive technology, but is it actually going to change anything for photo and video editing? The answer might just surprise you.
First, let's talk briefly about the cards. Nvidia announced the RTX 3090, RTX 3080, and RTX 3070. These GPUs are the company's latest generation, featuring a host of performance improvements over the older architectures. All the usual upgrades are there, including more cores, faster memory, and the newest connectors. Even at the "lower" end of the stack, the RTX 3070 is supposedly faster than the previous flagship, the RTX 2080 Ti.
There's no question that these cards will offer a significant performance improvement to Nvidia's core market of gamers and machine-learning researchers. For photo and video editing, however, are they even worth the upgrade? That question is much harder to answer owing to the highly fragmented nature of GPU acceleration in professional programs.
Fortunately, over the last couple of years, Adobe and other software makers have added a number of GPU-accelerated features to their programs. This has meant faster workflows, most notably where processes that used to require a render pass are now drawn in near real time; just look at scrubby zoom in Photoshop. To better understand how a faster GPU could help, let's take a look at which workloads are accelerated on a per-program basis.
Photoshop
In Photoshop, the following tools either require a GPU or are dramatically accelerated by the presence of one:
- Perspective Warp
- Scrubby zoom
- Smooth brush resizing
- Lens blur
- Camera Raw
- Resizing with the preserve details option
- Select Focus
- Blur Gallery: Field Blur, Iris Blur, Tilt-Shift, Path Blur, Spin Blur
- Smart Sharpen
- Select and Mask
Looking at that list, I'm not seeing anything that is currently a pain point in my workflow. Probably the biggest benefit comes with the Select and Mask workflow, where some larger images can chug on my 2070, although I wouldn't upgrade just for that. I'd argue that any reasonably modern GPU is already more than sufficient for Photoshop.
Lightroom
In Lightroom, many of the adjustments are GPU accelerated, including the basic adjustments, tone curve, HSL, split toning, detail, and other panels in the Develop module. Notably, the adjustment brush, loading raw images, generating previews, and a host of other time-consuming tasks are not GPU accelerated. Nor are some more niche but time-intensive processes like HDR and panorama generation.
Like many aspects of Lightroom, the situation is messy. GPU acceleration itself is buggy, with users reporting a variety of issues. Enabling GPU support on a card that's too weak for your files can, counterintuitively, make things slower than no GPU acceleration at all. There's the additional caveat of screen resolution, with GPU acceleration making more of a difference at higher resolutions; I didn't really see an impact until I moved to a 4K monitor, for example.
For Lightroom, the choice of GPU is highly dependent on your existing gear. If you have a high resolution monitor and an older, slower GPU, a new card could make a large difference to not only speed but stability. If instead you're using a relatively new card with updated drivers, your money can be put towards better storage or a CPU upgrade, which should provide a more significant boost to the user experience.
Video Editing
The video editing world has been enjoying GPU-accelerated effects and transitions for a while. Blending, scaling, some effects like color balance, and transitions like cross dissolves can all get a boost, and notably, Lumetri looks all play well with GPU acceleration in my experience. Since the complexity of video projects varies far more widely than photos (1080p vs 4K, heavy effects use vs cutting together a few clips), you'll have to take a look at your own workflow. When you edit a project, keep a GPU monitoring program open and check things like VRAM usage and utilization to see whether you've maxed out your current gear (a minimal sketch of that kind of check follows below). One important note: while these new cards support AV1 decoding, the hardware support for fast AV1 encoding still isn't there.
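Here's that monitoring sketch: a quick Python loop using the nvidia-ml-py (pynvml) bindings to log VRAM and utilization once per second while you scrub or export. The single-GPU index and the one-second interval are just my assumptions; adjust to taste.

```python
# Minimal sketch: poll VRAM usage and GPU utilization once per second.
# Assumes a single Nvidia GPU (index 0) and the nvidia-ml-py package
# (pip install nvidia-ml-py).
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(gpu)         # bytes used / total
        util = pynvml.nvmlDeviceGetUtilizationRates(gpu)  # percent busy
        print(f"VRAM: {mem.used / 1e9:.1f} / {mem.total / 1e9:.1f} GB | GPU load: {util.gpu}%")
        time.sleep(1)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```

If VRAM stays well under your card's capacity and utilization rarely pegs during playback or export, a faster GPU probably isn't where your money should go.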
Other Programs
Interestingly, a number of more niche programs benefit from the GPU more than the industry titans do. Specialty programs for panorama stitching and focus stacking often support OpenCL acceleration, meaning these cards could deliver a big improvement to processing times. Additionally, photogrammetry users will appreciate the larger VRAM amounts on offer.
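If you're curious whether your current card is even visible to those OpenCL-based tools, here's a minimal sketch using the pyopencl package (my assumption; the programs themselves do their own device detection) that simply lists the OpenCL platforms and devices on your system:

```python
# Minimal sketch: list the OpenCL platforms and devices a stitching or
# stacking app could potentially use. Assumes pyopencl is installed
# (pip install pyopencl) and a working OpenCL driver for your GPU.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(
            f"{platform.name}: {device.name} "
            f"({device.global_mem_size / 1e9:.1f} GB, {device.max_compute_units} CUs)"
        )
```

If your GPU doesn't show up here, the bottleneck is drivers, not the card, and a new GPU won't fix it.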
If you work with CGI programs that support GPU acceleration, these cards should be very appealing. A large bump in VRAM that was previously unavailable below Quadro-level cards, combined with a claimed major boost to performance, could pay real dividends. Analysis of these specialty programs is beyond the scope of this piece, but if you find yourself compositing CG imagery with your photos or videos, keep an eye on program-specific benchmarks.
Beyond Speed Improvements
Looking beyond the raw speed improvements, it's important to consider some of the features of these cards and what they mean for the visual industries. The first is the continued expansion of AI-powered features, implemented in products like NVIDIA Broadcast. The software takes input from regular webcams and mics, then performs software magic to drastically improve the quality and add features. For instance, Nvidia demoed high-quality real-time background removal without a green screen, and the existing RTX audio processing delivers impressive background noise reduction, capable of filtering out even a hairdryer running alongside vocals.
Last but not least is the RTX line's namesake feature: ray tracing. First introduced on the 2000-series cards as a glorified tech demo, the hardware now seems to have reached the point of usability. Nvidia's demo, with hundreds of lights and a complex scene featuring one hundred million polygons, ran at 1440p at a reasonable frame rate. With these quality improvements to ray tracing, are more clients going to opt for a virtual photo shoot? Ikea already generates most of the imagery for its catalogs via CGI rather than traditional photography.
Conclusion
If you've been sitting on the sidelines for the last few GPU generations, I don't blame you. Between rising prices and diminishing performance improvements, there hasn't been much reason to upgrade. The messy state of hardware acceleration in the programs we use has made it an even tougher sell. In past articles covering hardware, I've mentioned a set of priorities for many users: a dollar spent on an NVMe SSD or a faster CPU typically offers more benefit than one spent on a GPU, and that still hasn't changed. If, however, you've already maxed out your computer in those other areas and are looking to wring out more performance, or your specialty workflow benefits from the improvements discussed, Nvidia's 3000-series cards should be at the top of your list.
These new cards have my attention, as I'm still running a GeForce 760 2GB card. What could help with workflow in conjunction with a new-series card would be the addition of a PCIe 4.0 SSD. Just to add, software like Topaz (and others) has also added graphics card acceleration.
Now, if photography software or (take your pick) video software could take advantage of the new architecture in these cards, which allows the data path to go from [Drive > CPU > Memory > CPU > Graphics Card] to [Drive > Graphics Card], that would bring massive efficiency improvements to said software and to the hardware (CPU cycles) alike. Hopefully the companies are taking notice and will ship a patch for such efficiency gains.
The thing to note is that these cards are PCIe Gen 4, and nearly every system out there (not all, but the majority) is Gen 3 (CPU/mainboard). The cards will still work, but remember they will not be in their full glory on a Gen 3 system. For general use, the difference is probably not even noticeable in everyday casual workflows. If you are on "Team Blue" (Intel), it might still be a small wait for Gen 4.
Lots to consider and ponder... but it's looking like it might be a wonderful season to upgrade if you're running an older system. Not to mention memory and solid state drive prices are projected to trend downward for the next several months.
I'm not confused but I thank you for the link and information.
I'm just at the "end of life" for this system. Lightroom runs surprisingly well and fast on this old potato; though it takes a good 20s to load completely; once loaded it runs reasonably well. Surprisingly with one of my images loaded and in Develop, and playing around for a bit; Lightroom is only utilizing 1GB of system memory. Each additional image I swap over too seems to use up an additional 500GB, and stabilizing around 2.5GB. Ironically, Firefox is taking up 1.6GB (with just two windows open).
Sitting on 32GB of system memory. I keep the system lean and clean. Runs awesome for its age, but it is aged. Built in 2010, with the video card being from '12, or was it early '13... Either case, a potato by today's standards. :)
It doesn't run that awful as such; super insane blazing hyper bankai fast? No.
Slowest part is the startup (20 seconds on a 10-year-old computer); other than that, hiccups are hardly noticeable (if at all). If it's running that terribly, horribly, unusably, I'd suggest people do a complete system diagnostic and cleanup, do a Windows refresh, and clear out the bloat (which will also clear out any system configurations said user may have changed thinking it would help performance but which actually hindered it)...
Some more memory, yes, but 128GB to 512GB of system memory is definitely overkill for Lightroom. People should be running 32GB these days regardless, especially in the creative fields (not talking about a monster studio-backed killing machine; that's a different story of course)... 64GB would be a nice level to be at without breaking the bank (if the mainboard can handle it); most boards people buy cap at 32GB...
I stand by my original statement that going to a current-generation (Gen 4) SSD will yield the greatest boost in performance (system-wide, including Lightroom), with the CPU and graphics card coming up right after.
That's all not to say that Adobe (and other companies) couldn't always improve their code performance... Just in the last week there was a Lightroom performance patch; I'm sure they are always working on such things...
Hopefully they have fixed their driver update process. My laptop got completely clogged up with old drivers and no normal way to delete them. That was very poor of them.
Weird. You can try DDU (Display Driver Uninstaller), which is a dedicated driver removal tool. I definitely trust Nvidia's drivers over AMD's, all said.
I agree with Alex. You can also try Revo Uninstaller, which will also dig through the registry for leftover keys...
If you're looking to buy a graphics card, I have a simple bit of advice: wait. AMD is about to release its RDNA 2 graphics cards in the next few months, which by all accounts take the fight back to nVidia. They probably won't beat them on performance, but they'll beat them on power draw and most likely price. We know RDNA 2 is good from the next-gen consoles; we just don't know how good.
The nVidia presentation was a lot of smoke and mirrors, and compared to previous launches it felt rushed, with the lack of gameplay and benchmarks. These are solid cards, and arguably what should have been released instead of the 20 series, but they're not great cards.
It will definitely be interesting to see how the field plays out over the next several months. We can see the battle forming already; if it wasn't for AMD's pressure, I'm sure the 3000 series from nVidia would still have the old 2000-series pricing (or higher).
But one benefit you now get from cheaper Nvidia gaming cards, and only from AMD's workstation cards, is 30-bit color in Photoshop.
I would certainly wait to see real world performance but it looks very exciting as a creator.
RDNA 2 also looks promising, and at the very least, used 2000-series cards should be very inexpensive in the coming months; having a 2080, I can say they are more than capable. Good time for tech at the very least! Very exciting innovations happening.
I get the urge, but the Nvidia vs AMD fight is not the same as the Intel vs AMD fight. Nvidia has held the upper hand pretty easily in GPU power and innovation.
I love what AMD is doing in the CPU space (I'll likely be getting a Threadripper CPU for my next build), but I'm under no pretense that AMD is worth holding out for in the GPU space. I've not been impressed by their releases up to now. Even with the workstation cards they have out there (several of which you'd have to buy into a Mac Pro to even get), Nvidia has been throwing its weight around easily.
Seeing them just crash through the gates with the 3070 outperforming the 2080 Ti (pending independent benchmarks), I have a hard time believing AMD's next offerings will be a compelling choice unless the pricing comes in far under Nvidia's. And especially if you're holding out for performance: if they're not touching the 3070's performance, I'm not even looking at them.
Actually, extrapolating the RDNA 2 architecture is fairly easy even before you consider the consoles: take the 5700 XT, add 25% performance, and reduce power consumption by 50% (confirmed by both AMD and MS with the Series X). Also remember RDNA 2 is a completely new GPU, as it's removed any trace of GCN. RDNA 2 is a gaming chip first and foremost, as shown by the PS5 and Series X, and if you want a compute-based GPU, then it's CDNA that's due out later this year. Pretty much everyone is saying that RDNA 2 is going to be a damned impressive chip. Will it beat nVidia? Probably not, but it'll certainly be in the same ballpark as the 3080 on a far more power-efficient node, and it will likely clock better due to TSMC's process advantage over Samsung. In addition, we have a fairly clear idea of what RDNA 2 is bringing to the table through the Series X presentation at Hot Chips and what the PS5 is doing. The Series X is producing near 2080-2080 Ti performance in a 125W power envelope, and that's a cut-down RDNA 2 with some custom silicon.
Regarding Samsung, nVidia is going to struggle to meet demand for the 3090 and possibly the 3080, as all the best dies will go to the Quadro division, leaving very little for the consumer market; this is because the Samsung node has poorer yields.
The real selling point for nVidia isn't their hardware, but rather their software stack with vendor lock-in.
The selling point for Nvidia really has been hardware and software: for years now, if you wanted the best performance in mid-market gaming or up, and for anything ML, Nvidia was the only option.
AMD's new cards might be a good option if you're on a budget, but I doubt they will genuinely challenge the 3080 or higher. If you need the performance, you gotta pay.
The RTX 3080 is roughly 20-30% faster going by what we've seen; most of the work has gone into the tensor engine. That's within the margins of what's expected from AMD. The RTX 3090 is unlikely to be beaten, but there are just too many unknowns about RDNA 2 due to the lack of leaks. Some things are certain: there are models with 80 CUs, an early engineering model was spotted beating the 2080 Ti by 17% in a VR benchmark back in February, and it can clock higher than nVidia's chips (as per the PS5). The fact that nVidia rushed their cards to market and used their best silicon shows how concerned they are about AMD's challenge. A final concern for nVidia is console sales: they'll sway developers to optimise more for RDNA 2 instead of nVidia, or that's their fear.
The weak spot for AMD isn't hardware, it's software, but that's down to the lack of R&D cash. Lisa Su has been instrumental in a new reformation of the company, so past performance can't be used as a metric.
They do look impressive, but having just built a new AMD-based machine with a 2060 Super and bought a new AMD-based laptop with a 1660 Ti, I am not in the market for one of these.
Adobe can't even make use of current gen hardware so I would not expect them to use the power of these cards anytime soon.
My PC is used for editing as much as it is for gaming, and considering my GTX 1070 is slowing down, these new cards are too hard to resist. I think an RTX 3080 will be my next purchase.
I say wait for the new AMDs and then decide.
Is anyone even working to any significant extent given COVID-19?
Save your money... we could be in for a long haul. Would hate to see some of y'all running GoFundMe campaigns to buy groceries or pay your rent.
Photo editing doesn't need much in the way of a video card. Video DOES. If you're processing out in NVENC, these RTX cards with their insane number of CUDA Cores will make exporting WICKED FAST.
I feel bad for anyone who bought a 2070 or 2080 recently.
And if you're using AMD... woof.
That being said, bottom line: these cards are designed for gaming. They're for people who want to play FS2020 at 4K and Red Dead Redemption with decent framerates.
What?
The 3080 adds more cores, clocks them and the memory faster, and performs more FLOPS. The question of how much more real-world performance this actually translates to is still open, but it's disingenuous to claim this is just a 2080Ti with DLSS. Perhaps an AI can break down technical specs in an easier to digest way for you :)
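As a back-of-the-envelope illustration (using Nvidia's published core counts and boost clocks, so treat these as spec-sheet figures rather than measured performance), theoretical FP32 throughput is just cores times two ops per clock (FMA) times clock speed:

```python
# Rough theoretical FP32 throughput: shader cores x 2 ops/clock (FMA) x boost clock.
# Spec-sheet math only; real-world gains in editing apps will be smaller.
def theoretical_tflops(cuda_cores: int, boost_ghz: float) -> float:
    return cuda_cores * 2 * boost_ghz / 1000  # gigaFLOPS -> TFLOPS

print(f"RTX 2080 Ti: {theoretical_tflops(4352, 1.545):.1f} TFLOPS")  # ~13.4
print(f"RTX 3080:    {theoretical_tflops(8704, 1.71):.1f} TFLOPS")   # ~29.8
```

Part of that jump comes from Ampere counting its doubled FP32 units as CUDA cores, which is exactly why the real-world question is still open.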
CUDA cores are one of the main things that affect rendering speed. There's a reason we stack the cards when GPU rendering.
Not to mention the 2080 Ti (all RTX cards) had DLSS... DLSS 2.0 came out months ago...
Let's not confuse CUDA cores with RT or Tensor cores. A ton of software, including much of what creatives use on a daily basis, is accelerated by CUDA cores; RT and Tensor cores, on the other hand, may or may not have an impact depending on your workflow... When I was a 3D animator years back, I would have killed for real-time ray tracing; but no, Photoshop and Lightroom, in that limited scope, will not do much (if anything) with those cores.
The 3000 series is getting a massive boost in CUDA cores, which will very much make a difference not only for creatives but also for gamers. When you click "graphics card accelerated" in (enter software you use here), that's CUDA cores.
You're right, it's all about secondary processing and leveraging that for accelerated software and games; thank you for agreeing with everything I've been talking about.
I'll just end the conversation here since we agree CUDA cores are quite useful and worth having, with the more the better depending on the creator's/gamer's use case.
I still think you're confusing CUDA with RT or Tensor cores... but we will continue with the current narrative...
Offloading to CUDA makes perfect sense, since the graphics card's memory is several times faster than your (comparatively) slow system memory...
Your argument is that "CUDA cores sit there and do nothing for games or to help accelerate software; they are a waste of space." This is just a fundamentally incorrect statement. Big, bold statements require proof and documentation; so far you've only belittled people.
The "Graphics Core" utilizes the CUDA cores on the same die; The "Core" accesses and utilizes the CUDAs in -all- modern games; and same for hardware acceleration in software packages. In fact, I'm pretty sure your "Core" couldn't even function without the CUDAs they are so interconnected within the architecture of the overall chip and design.
Now there is room to debate if CUDA cores should be called, or counted as, "Cores" on their own, individually, but that is another topic. Regardless they are utilized all the time as part of the core architecture functionality of the "graphics core".
Email Nvidia and ask them how you can turn off your CUDA cores explaining they are of zero use and a waste of space; I'd love to see the response.
They don't publish triangles per second anymore because that's a largely irrelevant measure. Instead, consider FLOPS and real-world benchmarks of games, and you can clearly see that new cards are far more capable than older ones, even setting aside things like NVENC, Tensor cores, VRAM amounts, and more.
In many old games, you won't see increased performance because they're CPU bound; take CS:GO, for instance, where your frame rate at typical resolutions is primarily dictated by single-thread CPU performance.
GPUs have become important to a number of markets outside just gaming - AI, ML, and yes, for a while crypto-currency. Any company would be silly not to welcome more customers.
Yup, didn't even get to that point, so good catch.
1080Ti was shit?
It's one of the most popular cards ever released. It was so popular that Jensen (the CEO of Nvidia) addressed 1080Ti users directly, outright saying "Pascal friends, it's safe to upgrade now." Pascal being the 1xxx cards.
Most people who owned a 1080Ti stayed with it when the 2080Ti was released. The 2080Ti was shit. The 1080Ti wasn't.
They do make separate cards for compute offload that have no video outputs; Nvidia makes those as well. They're less consumer-oriented; you can read up on them by searching for "Nvidia Tesla" cards.
I don't know where you're getting your information, but it's false.
1080Ti was (approx):
60% faster than 980Ti at 1080p resolution.
80% faster than 980Ti at 1440p resolution.
90% faster than 980Ti at 4k resolution.
Source:
https://www.techpowerup.com/review/nvidia-geforce-gtx-1080-ti/30.html
Custom cards are even faster.
Google for more information, stop spreading misinformation.
1080Ti was a monster compared to 980Ti.
That's one of the more reputable review sites out there. I see you're just a troll. Good day.
I feel like if you're a photographer, you'd still be fine with a 10-series graphics card. If you're doing video, design, or animation work, I could easily see these new GPUs being highly beneficial. The step up is so significant, it's halted my desire to switch over to an Apple Silicon Mac when it comes around.
I wouldn't say they're in cahoots to hold things back, but I would say at this point, pure power on the GPU side could be less crucial if there were more focus on software optimization.
Perfect example: I was grading raw footage from the new BMD Ursa Mini Pro 12K, which, by most any normal metric, should have made my severely underpowered laptop burst into flames or go BSoD. And yet, I was able to play it back. Not perfectly, but usable. That's insane to me and shows how much hardware and software optimization can really push our creative capabilities.
As a gamer, I always owned top of the line (or close) video cards. In April I sold my 1080Ti and just used the integrated GPU (Intel 630).
As a Lightroom CC user, I noticed some slowdowns, but I had no issues editing my photos. Photoshop I only used to automate adding logos to hundreds of images.
I did have a powerful computer (9900K at 5GHz, 32GB DDR4, fast M.2 SSDs).
The Ampere announcement is great for you non-gamers who need a new video card.
The new cards are overkill for video/photo editing, but they are also "cheaper" than expected, which means old cards will go down in price, a lot, especially on the second-hand market.
LOL!!!! ...Alex calls a huge leap in performance from the 2000 series to 3000 series "diminishing performance improvements"
For photographers and most videographers, yes. A faster GPU beyond something like a 970 will yield little real-world improvement in workflows; PS, LR, and others just don't take full advantage of it. This isn't a review of the cards' gaming or ML performance, so consider the context of the statement.
That's not necessarily true, because you also have hidden improvements that aren't in the headline specs. Take this, for example:
https://developer.nvidia.com/video-encode-decode-gpu-support-matrix
That shows the differences between generations for NVENC, which is useful for video encoding. You also get an extra FP pipeline on top of the INT pipeline (nVidia went back to a dual-function CUDA core) for the 3000 series. The new 3000 series also supports AV1 decoding, along with AMD's new 6000 series. Sadly, no AV1 encoding for either GPU until next gen.
Overall the new 3000 series is over-priced, not just cost but also TCO especially if all the rumours are true about RDNA 2. The real acid test will be how good RDNA 2 will be for video encoding/compute compared to GCN as they've focused the card for gaming unlike nVidia who basically copied GCN's own flaws of being compute heavy.
Either way, we'll know in a few weeks how good the RDNA 2 architecture is and whether it fulfils the promise hinted at by both the PS5 and Xbox APUs.
The lack of improvements is more a damning indictment of Adobe's glacial pace of improvement over the decades, especially when competitors like CaptureOne, Black Magic, and many third-party plug-ins have better GPGPU support and add it at a quicker pace.
These are exceptions that prove the rule: AV1 decode isn't useful to content creators because no capture device is outputting it. AV1 encode via NVENC isn't here yet. NVENC more broadly is just a nice-to-have, as you'd probably want to CPU encode anyway for the quality benefit.
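If you want to check that trade-off on your own footage, here's a minimal sketch; it assumes ffmpeg is on your PATH with NVENC support, and "clip.mp4" is a hypothetical stand-in for your source file. Compare the encode times, file sizes, and how the two outputs look to your eye.

```python
# Minimal sketch: encode the same clip with NVENC (GPU) and x264 (CPU) for a
# side-by-side speed/quality check. Assumes ffmpeg with NVENC support on PATH;
# "clip.mp4" is a placeholder for your own footage.
import subprocess
import time

def encode(codec_args, outfile):
    start = time.time()
    subprocess.run(["ffmpeg", "-y", "-i", "clip.mp4", *codec_args, outfile], check=True)
    print(f"{outfile}: {time.time() - start:.1f}s")

encode(["-c:v", "h264_nvenc", "-preset", "slow", "-b:v", "16M"], "out_nvenc.mp4")
encode(["-c:v", "libx264", "-preset", "slow", "-crf", "18"], "out_x264.mp4")
```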
Is any photo or video software really saturating even a basic 1070 or 2070? The only workload I’ve found is photogrammetry, but that’s pretty niche.