The Little Lies and Big Problems of the Computer Industry: Lack of Innovation, Artificial Segmentation, and Inflated Prices

Photographers probably spend more time behind a computer than a camera. Yet technological progress in this industry is slowing down, and manufacturers hide the issue with marketing tricks. Here is what's going on.

CPU Hot Mess: Same but Different

Until 2015, processor manufacturers used to introduce a new generation of processors every two years or so, and the performance gains between releases were massive. Since then, Intel has hit a technological wall with the 10-nanometer chip and is still stuck with the old 14nm lithography process. The latest Intel 9th Generation Core family (Coffee Lake) is nothing more than a reheated Skylake iteration from 2015, and Intel's marketing experts desperately try to hide the development issues with endless variations, "++" suffixes, and "refresh" generations of the aging Skylake architecture. Indeed, the new Intel 9600K processor is only a few percentage points faster than the previous 8600K released in 2017.

AMD is confronted with similar difficulties but managed to shrink its CPUs from 14nm to 12nm last year with the Ryzen Threadripper 2 based on the Zen+ design. In reality, these chips are rebranded server processors (EPYC) with insane prices and power consumption. Nothing groundbreaking, except for the wallet and the power bill.

The AMD Ryzen Threadripper 2990WX processor: a $1,700 CPU filled with 32 cores. This derivative from the server industry is mostly useless for photo and video applications, which are too poorly optimized to take advantage of that many cores.

But this derivative of server hardware was enough to upset Intel's salespeople last year when they decided to pull a last-minute trick at the Computex trade show in Taiwan. Faced with the imminent AMD announcement of the 32-core Threadripper processor, Intel came up with a counterattack to avoid the dishonor of being left behind with its "little" 28-core Skylake-SP processor from the server industry. Since AMD had taken the lead on core count, Intel went after frequency instead and put together a "new" 28-core CPU able to reach 5 GHz and break the speed record. In reality, some engineers had overclocked an old Xeon CPU and kept this monstrosity under thermal control with the help of a 1,000-watt industrial chiller hidden underneath the table. Intel managed to steal the show, but the scam was uncovered the next day by a suspicious journalist from Tom's Hardware. Eventually, an Intel representative explained that "in the excitement of the moment," the company "forgot" to mention one little detail: the CPU was extremely overclocked.

This embarrassing anecdote illustrates the severe crisis faced by the company. Historically, Intel's architectures used to be ahead of the competition by one or two years, but the manufacturer is now trailing AMD, which might release the Zen 2 architecture produced in 7nm sometime this year, whereas Intel is still struggling to step down from 14nm to 10nm. The decline of the computer industry and the poor management of the previous CEO can explain the situation. Unfortunately, Intel is not done with little lies, as we discovered recently with the Core i9 9900K processor introduced a few months ago. Officially, the thermal design power (TDP) of this CPU is listed at 95W, but all benchmarks revealed a much higher thermal profile. To avoid instability, Intel actually sets the real thermal envelope at 210W, and motherboard manufacturers follow this value to scale the voltage regulation stage of their Coffee Lake boards accordingly.

The Intel Core i9-9900K. This eight-core beast consumes a lot of power and requires an advanced cooling system to function properly without thermal throttling. All reviews show that this CPU blows past the TDP value officially announced by Intel.

Essentially, Intel can't figure out a way to reach the 10nm node and only proposes endless variations of the 2015 Skylake CPU. Unfortunately, no amount of creative rebranding and inaccuracies on the specification sheet can hide the fact that these processors are nothing more than overclocked chips. Logically, the power consumption reaches new heights despite deceptive TDP values that fool no one. As for AMD, the situation is similar, if not worse, with the high-end Threadripper monsters that can pull up to 180W of power. Multiplying the number of cores generates additional electrical consumption, thus heat. As such, water cooling, which used to be an exotic accessory for overclocking nerds a few years ago, is becoming a standard requirement on performance computers.
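
To see why these overclocked chips blow past their official TDP, remember that dynamic CPU power scales roughly with frequency times voltage squared, and higher clocks demand higher voltage. Here is a rough back-of-envelope sketch in Python; the frequencies and voltages are illustrative assumptions, not official Intel or AMD figures:

```python
# Rough back-of-envelope sketch of CPU dynamic power scaling.
# Dynamic power is roughly P = C * V^2 * f (capacitance, voltage squared, frequency).
# All numbers below are illustrative assumptions, not official figures.

def dynamic_power(base_power_w, base_freq_ghz, base_volt, freq_ghz, volt):
    """Scale a known power figure to a new frequency/voltage operating point."""
    return base_power_w * (freq_ghz / base_freq_ghz) * (volt / base_volt) ** 2

# Hypothetical baseline: 95 W at 3.6 GHz and 1.0 V (the advertised TDP scenario).
base = dynamic_power(95, 3.6, 1.0, 3.6, 1.0)

# Pushing all cores toward 4.7 GHz typically requires more voltage as well.
boosted = dynamic_power(95, 3.6, 1.0, 4.7, 1.25)

print(f"Base: {base:.0f} W, all-core boost: {boosted:.0f} W")
# Roughly 95 W -> ~194 W: the frequency bump alone adds ~30%, but the voltage
# increase is squared, which is why real-world draw can approach twice the TDP.
```

With those made-up numbers, a 30% frequency bump combined with a modest voltage increase roughly doubles the power draw, which is consistent with a "95W" chip pulling around 200W under a sustained all-core load.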

To be fair, these processors are potent and capable, but they are not innovative. In the end, they are the equivalent of a doping syringe or a nitro-boost button: an unsustainable, short-term trick to reach a certain level of performance.

The Expensive but Useless and Crippled Motherboards

Once upon a time, motherboards were critical parts of a computer. But nowadays, most functionalities such as storage, USB, audio, and networking are embedded directly in the chipset and/or the CPU. Therefore, motherboards are nothing more than a base plate around the southbridge, designed to receive the main parts of the machine.

For this reason, the CPU's socket (physical connector) and chipset (the controller hub) govern the selection of the motherboard. Faced with the commoditization of the industry, motherboard and chipset manufacturers (Intel & AMD) reacted with various strategies. First, they try to justify premium prices by designing "aerodynamic" boards filled with LEDs, useless heat sinks, and a few extra connectors. Don't fall into this marketing trap: these gimmicks only inflate the price without providing any performance gain, unless you plan to overclock your CPU.

Nowadays, motherboards are nothing more than a base plate to accommodate the main parts of a computer. The CPU and chipset control most of the critical functionalities.

The constant change of socket and chipset is another classic way to force the upgrade and justify the acquisition of a new motherboard. Indeed, AMD and Intel have this horrible habit of changing the type of socket and chipset with every new generation of CPU. After each processor release, the choice of compatible motherboards is limited to a few options costing around $300-400. Then, the price usually goes back to a reasonable level after a year or so. Therefore, the choice of CPU dictates the motherboard selection, and both components must be considered together in terms of budget. For instance, Intel just released the Z390 chipset along with the latest 9th generation Coffee Lake CPUs (9600K, 9700K, 9900K).

On the hardware side, this chipset is practically identical to the previous ones dating from the 6th generation Skylake platform (2015). Essentially, the Z390 replaced the Z370, which replaced the Z270, which replaced the Z170. We must salute the performance here: four nearly identical but incompatible chipsets released in four years. In fact, the Z370 chipset can handle the new Coffee Lake processors via a BIOS firmware update, but the voltage regulation module of Z370 motherboards might be too limited for the real power requirements of the 9900K chips.

To be honest, these chipsets are not entirely identical. Beyond minor software changes, chipset makers tend to artificially limit critical functionalities of the component by locking some features in the BIOS, such as overclocking capacity and connectivity. The user is then invited to purchase the new chipset to unlock the full potential of the motherboard.

Graphics Cards: Premium Prices for Everyone

The GPU market is finally leaving the cryptocurrency nonsense behind thanks to the diminishing returns of mining. However, the high-end segment is currently dominated by Nvidia. Radeon cards offer similar performance and prices in the mid-range segment, but they tend to be less efficient and consume more power, hence more heat to evacuate with noisy fans.

The Nvidia RTX 2070 GPU represents the new mid-range in the graphics card market despite its high price. However, this physically huge card packs a lot of transistors and delivers serious performance.

The consequence of this lack of competition is a general price increase. The traditional middle-of-the-market x70 GPU series from Nvidia used to be found in the $350 range, but the latest GeForce RTX 2070 costs about $550 at the moment. Hence, the affordable mid-market GPU is now priced as a premium product, and this situation will continue as long as AMD Radeon is not able to propose a competitive alternative.

A Lucrative Market Concentration

The lack of innovation and inflated prices are clearly linked to the market concentration in the computer industry. The majority of sub-components are only produced by duopolies or oligopolies in which the dominant player naturally tends to raise prices and slow down innovation as soon as its competitor can't keep up with the pace of technological development. As we just saw, this situation is currently happening with Nvidia and Radeon (AMD), but the CPU market is also affected. For instance, the LGA11XX socket was artificially limited for many years by Intel to quad-core offerings, but the company finally opened this socket to 6- and 8-core CPUs to counter the resurrection of AMD with its Ryzen processors. Before that, buyers had to move to costly motherboards fitted with "advanced" sockets and chipsets if they ever wanted to install CPUs with more than four cores.

This example illustrates the positive effect of competition in the CPU market. Finally, the storage industry suffers from the same issue. WD, Toshiba, and Seagate dominate the hard drive industry, while the flash memory (SSD) and DRAM sector is concentrated among Samsung, Hynix, and Micron, with strong suspicions of price fixing, especially on DDR memory.

Software Optimization: Coding with Your Feet like Adobe

Market dominance is also the cause of the disastrous performance of Adobe software on modern computers. As if the hardware issues were not enough, Adobe programs are poorly optimized to take advantage of multi-processor architectures and powerful graphics cards. Except for a few effects, the GPU stays idle most of the time despite the huge reserve of power available.

As Lee Morris and many users have realized, entry-level CPUs can outperform expensive 10+ core CPUs. Why? It's all about parallelism, or the lack of it. Parallelism is the ability to distribute the processing load among several CPU cores. But more than a decade after the spread of multi-core CPUs in consumer computers, Adobe applications still can't handle multi-threaded tasks correctly. Instead, they mostly rely on CPU clock speed (frequency) to execute the computation. Unfortunately, the frequency race hit a thermal wall several years ago, which is why AMD and Intel now propose high core counts to increase the level of performance and escape the frequency dead end.
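
For readers who have never seen what parallelism looks like in practice, here is a minimal Python sketch comparing a single-threaded batch job with the same work spread over several cores using the standard multiprocessing module. It is only an illustration of the concept, not a representation of how any Adobe product is written:

```python
# Minimal sketch of serial vs. parallel batch processing.
# Purely illustrative: export_photo() is a stand-in for real raw-processing work.
from multiprocessing import Pool
import time

def export_photo(photo_id):
    """Simulate a CPU-heavy task such as demosaicing or exporting one raw file."""
    total = 0
    for i in range(2_000_000):
        total += i * i  # burn CPU cycles to stand in for real work
    return photo_id

if __name__ == "__main__":
    photos = list(range(32))

    start = time.perf_counter()
    for p in photos:                    # single-threaded: one core does everything
        export_photo(p)
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(processes=8) as pool:     # parallel: spread the batch over 8 cores
        pool.map(export_photo, photos)
    parallel = time.perf_counter() - start

    print(f"Serial: {serial:.1f}s, parallel: {parallel:.1f}s")
    # On an 8-core CPU the parallel run typically finishes several times faster,
    # which is exactly the gain left on the table by poorly threaded software.
```

On an eight-core machine, the parallel version usually finishes several times faster, which is precisely the kind of performance left unused when software relies on clock speed alone.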

We find ourselves in an absurd situation where the main creative software company ignores technological evolution and releases programs that massively underuse the processing power of modern CPUs and GPUs.

A typical case of underused processing power while working with Adobe Lightroom, Premiere Pro, or After Effects (even with non-GPU tasks): the task progresses very slowly while using less than half of the hardware capacity.

Ironically, one of the marketing arguments expressed by Adobe to justify the transition to the Creative Cloud subscription-based model was the "continuous improvements through frequent iteration," as the Product Vice President explained in a blog post in 2012. Six years later, Adobe finally revamped Lightroom to use more than a few CPU cores at a time. Other than that, the programs from this company are painfully slow, unable to use the hardware correctly, and plagued with bugs and instability.

In reality, as the CFO of Adobe said, the motivation behind the Creative Cloud introduction was financial: "the move to subscriptions just drives a bigger and bigger and bigger recurring revenue stream." And the strategy paid off big time with a record boost in revenue. Sadly, this stream of cash didn't translate into the "continuous improvements" initially promised to users. But for Adobe, shareholders must be pleased, and that's what matters.

Once again, market dominance is the reason behind this lack of innovation. Why would Adobe bother to optimize the Creative Cloud suite when it dominates the market and experiences a massive increase in revenue? Software development is expensive, and re-coding an old software core takes time. Introducing incremental side features during keynote shows is much easier than tackling years of negligent coding.

Conclusion: What Can You Do?

The so-called Moore's Law was never a law but an observation made in 1965 by a co-founder of Intel, who merely described the rate of growth of the semiconductor industry. Ten years later, Gordon Moore revised his observation: the pace of progress was slowing, but transistor counts still doubled roughly every two years. Fifty years later, the computer industry is confronted with many difficulties related to miniaturization as we approach the atomic scale. Traditionally, the main road to technological progress in the electronics industry was to shrink components, cramming more transistors onto processors and more bits onto media. But this scaling strategy is now hitting physical walls, and each generation of products demands significant investment to yield modest results, which may explain the concentration of the industry, especially in the memory business.

Sadly, certain companies rely on blatant lies and deceptive communication campaigns to hide an abyssal lack of innovation. They also resort to artificial market segmentation and product crippling. When the competition is trailing behind, there's always a manufacturer that takes advantage of the situation to raise prices to unreasonable levels. Finally, some software developers enjoying market dominance don't even try to optimize their apps, which results in poor efficiency and a waste of computing resources.

Here are a few bits of advice to get the best out of your money in this environment:

Processors and Motherboards 

First, consider the CPU and motherboard acquisition together, as the choice of CPU dictates the socket and chipset type. Try to find the sweet spot between price and performance, and don't hesitate to consider the previous generation of processors and motherboards, as progress has been extremely slow these past few years, especially with Intel. For instance, the Core 9600K is only a few percentage points faster than the old Core 8600K. Generally, avoid high-end processors above $500, as they tend to be a waste of money and offer poor performance gains per dollar. I would also ignore CPUs with more than eight cores, along with their exotic sockets and chipsets, for the same reason.

As a rule of thumb, Intel CPUs tend to perform better in single-threaded applications due to their higher frequency, while AMD Ryzen CPUs shine in multi-threaded tasks and cost less than Intel's. At the moment, the eight-core Intel Core i7 9700K and AMD Ryzen 7 2700X are solid performers with excellent price-performance ratios.

The AMD Ryzen 7 2700X and Intel Core i7 9700K: two good mid-range CPUs with excellent price-performance ratios.

But keep in mind that performance depends on software optimization and usage type. There is no such thing as the best CPU. The key is to find the right one for your needs and priorities. Therefore, proceed with a precise evaluation of your user profile. Which software are you going to use most, and which tasks do you perform most often? What bottleneck are you trying to address first? If your main editing program doesn't take advantage of multi-core processors, opt for a high-frequency CPU, or vice versa. To complicate the matter, a given piece of software can behave differently depending on the task: real-time editing, playback, pre-rendering, final export, and encoding each take a different toll on the processor. Some tasks will benefit from higher frequency while others will spread the load across multiple cores. In other instances, the program might prioritize the graphics card over the CPU.
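
One rough way to reason about this trade-off is Amdahl's law: extra cores only accelerate the fraction of a task that can actually run in parallel, while frequency accelerates everything. The Python sketch below compares a hypothetical high-clocked six-core chip with a lower-clocked sixteen-core chip; the clock speeds and parallel fractions are assumptions chosen purely for illustration:

```python
# Back-of-envelope comparison of a high-frequency CPU vs. a high-core-count CPU
# using Amdahl's law. Clock speeds and parallel fractions are hypothetical.

def relative_speed(freq_ghz, cores, parallel_fraction):
    """Estimated throughput: frequency scales everything, extra cores only help
    the parallel fraction of the workload (Amdahl's law)."""
    amdahl = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return freq_ghz * amdahl

for label, frac in [("poorly threaded (30% parallel)", 0.3),
                    ("well threaded (90% parallel)", 0.9)]:
    fast_6core = relative_speed(5.0, 6, frac)    # high clock, few cores
    wide_16core = relative_speed(3.5, 16, frac)  # lower clock, many cores
    winner = "6-core" if fast_6core > wide_16core else "16-core"
    print(f"{label}: {winner} wins ({fast_6core:.1f} vs {wide_16core:.1f})")

# A poorly threaded task favors the high-frequency chip; a well threaded export
# or encode favors the high core count, which is why "the best CPU" depends
# entirely on the software you run.
```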

Thankfully, there are plenty of benchmarks and reviews available on the internet and YouTube, even for niche applications. However, be careful with broad benchmarks, as they only give a general indication of CPU performance. Reviews based on dedicated benchmarking tools usually favor multi-threaded applications, while video game benchmarks are skewed in favor of high-frequency processors. One of the references for workstation reviews is Puget Systems.

Graphics Cards

Nvidia leads the graphics card market, and Radeon GPUs are not competitive at the moment. Things might change with the next release cycle, but there aren't many alternatives for now. As with the processor, you need to assess your needs in terms of graphics processing power. What software are you using, and which task are you trying to improve first? Then, is this particular software and task optimized for GPU rendering? Some video tasks can benefit from a high-end GPU, but others drop the load entirely on the CPU. Generally, regular photo editing software doesn't use the graphics card that much, while some effects in video editing, such as color grading, transitions, and 3D effects, can benefit from a powerful GPU.

Adobe Alternatives

Unlike the hardware world, the software industry is not an oligopoly, and the range of Adobe alternatives is expanding. One of the most serious Lightroom challengers is the great Capture One, with its advanced studio and tethering functionalities. DxO PhotoLab is also a good option. In the video department, Avid Media Composer has already been adopted by many productions. Final Cut Pro X is very popular with editors working on Apple computers. DaVinci Resolve is another rising star in the industry, especially for its advanced color correction features. Blackmagic Design also proposes a good After Effects alternative with Fusion. These two pieces of software are free and can be downloaded directly from the company's website; the advanced versions cost only $299.

The situation is more complex for Photoshop, which still reigns as the undisputed king in the professional industry. But you may want to take a look at GIMP (free), Pixelmator Pro, or Affinity Photo, to name a few. Please feel free to share other alternatives in the comments section below.

Oliver Kmia is specialized in time-lapse, hyperlapse, and aerial videography. He also works with several drone manufacturers as a marketing and technical consultant. He is the lead brand ambassador of Hello Kitty camera, his favorite piece of equipment. Most people think Oliver is an idiot and they are probably right.

19 Comments

Oliver,

I use a combo of Capture One, FCPX, and Affinity Photo, and I'm fairly happy. Great performance and ZERO monthly fees.

People laugh at FCPX... until they see how fast it cuts through 4K footage, even on older computers.

I sent you a private message with an in-depth article I wrote about the switch.

FCPX is terrific. Any time I have to use something else I can't wait to get back to FCPX.

a thoughtful well researched article, how refreshing

NextGen Zen 2/Epyc CPU's will be a generational shift, and arguably the Zen chip design was as well which forced Intel to sit up and pay attention. The next generation of CPUs from AMD will be what's called Chiplets. This design methodology separates the I/O from the actual CPU core leading to greater quality control (amongst other advantages). If you want to learn more about Chiplets and why Zen 2 is going to be a major shift in the industry then I'd suggest watching some of the informative videos over at AdoredTV on YouTube. We'll know in a few months how good these CPUs will be but early indications from AMD's demo's is hinting that they're sandbagging on the performance front.

Intel sat on their laurels milking the market for all they could stagnating the industry with the same chips and artificial segmentation over the last decade and we rewarded them for the privilege after their monopolistic practices during the Athlon days. AMD woke them up with the Zen chipset, something that for all intents was 'good enough' and Threadripper was a major blow to their cash cow HEDT market... At least for consumers. If you need to save power, just go Epyc which are clocked to balance the TCO whereas Threadripper was clocked for the HEDT market (which accounts for around 1-5% of the market depending on who you talk to).

There's also an unspoken fact about relying on benchmarks. It's measuring single application performance. Most modern systems are running dozens if not hundreds of processes, multiple applications (browsers, e-mail clients, image editors and so on). Having a multi-core CPU means you can run all those applications with minimal to no overheads, and that's as much why RAM/cores and fast sub-systems (such as PCIe drives or m.2 NVMe systems) are just as important. If you also dig into the benchmarks, Intel has had a hand in sponsoring/developing or creating many of the applications we rely on.

The final fact, and the real elephant in the room, PCs are dead men walking. The world has already shifted to mobile first and it's an increasingly diminishing market for desktop systems and this is only going to accelerate leaving serious games and content creators wanting to actually buy these platforms. I know a lot of people who do most of their day-to-day activities on a mobile and if they need a PC they'll use a laptop or a tablet of some form.

As to GPUs well that's down to history and again AdoredTV has a great series of videos discussing the GPU wars and the consequences of what AMD had to do to survive. It's the same familiar tale as to why Intel screwed us over, but nVidia is far, far worse than Intel....

Could be worse. Could be a Mac user.....

Seriously though I don't want a powerful desktop. I want a tablet type device. If it's not powerful enough perhaps Adobe could sell render time at a data centre?

You upload your raw footage/audio along with a recipe script. The server farm renders it into a video and then you download it or direct upload it to whatever platform...

You wouldn't be bothered to have your ability to work tied to the speed of your internet connection? I know I would...

Well I'm a bit of an optimist. Internet speeds are fairly dismal where I live. But the happy medium would be able to upload it fast enough that by the time you finish editing it you could render it...

thats what I do. buy amd. people dont realize that if amd goes, nvidia and intel will bump prices immediately

just like sony did when lexar left and no xqd maker was available.

Well written piece.
I'm waiting for Zen 2 to build a 4k editing machine this year. If the reviews are great, I'll bite, if not I'll go with a 2700x.
For software, I'll probably downgrade to Photoshop plan and go to Resolve for video. Though I read they don't support more than 2 screens, what's that about?

Wow, that is a chunky article, excellent coverage.

I bought a new desktop to replace my 6 YO laptop, thinking it'd easily outgrunt the old machine but I am disappointed, barely notice any difference.

This is probably a very significant issue because as you say, we're approaching the physical limits so that new approaches are needed. EG massively parallel, but that won't help us if the software industry doesn't step up.

As a former developer, I can tell you that coding for parallel adds a pile of complexity because not all computation can be easily split.

Gotta laugh too, how vendors jam in useless marketing gunk to stretch out their tired products. Reminds me of the last model Suzuki Katanas (great bikes) that came with flip-up headlights. So Adobe et al, don't bung in more features, Just Make the Current Stuff Work.

In general i agree with the article. It contains to my taste a few inaccurate details. AMD is more stable on the Socket than Intel - look at the AM4 socket, when you get bios-updates - you can upgrade the cpu. One of the main issues is the changing memory-dimms. Due to integration into the cpu that forces you to buy all new mainboards and design new sockets. Because 10cm of copper between cpu and ram is a lot of distance.
Your statement about HDD's is correct - but are these companies profitable (hardly)? The market is shifting to Flash - and hdd's become a backup medium - or have still to be used for big sequential processing (video and large quantities of photographs). The competition on Flash is huge.
One forgotten thing - due to integration into the cpu - we now have big security issues - and in that case lots of cpu manufacturers are hit - Intel is probably hit the worst (Spectre, Meltdown and there's a third new kid into the classroom right now).
Adobe's CC has always had issues, it was difficult software to get it running in the first place. Photoshop and lightroom where exceptions to the rules. I stopped using them because i dislike the renting extortion mechanisms (that's too much Al Capone alike for my taste). I'm using linux - and Gimp - it works for what i need it for. But i still have C1 on windows for the "better" work when it realy needs to be good - because these softwares save you time.
AMD is doing well at this moment - will it last? Not when chipzilla has it's say. They already are convicted as a monopolist badass in several countries - and they'll try it a 2nd time. In certain stores in my country, by certain brands we don't see any AMD-machines. Why is that? We know Dell has them in Germany, UK, US - but not for us. I had to import an HP notebook with a good screen - there are lots of lousy screens on notebooks for us photographers - to be able to buy a Ryzen machine (and i'm very happy with it).

This article is great. Worth every word and every pixel!
I'm a software engineer, and an amateur photographer. I could not agree more with the state of the computer software and hardware industry.
It's very refreshing to read a well thought out article. Much better than some of the other "opinion" fluff pieces that constantly (dis)grace blog pages everywhere.
Dumped Adobe about a year and a half ago and I don't miss them. Capture One has been excellent and I don't regret it.
Well done Mr. Kmia!
BTW - I wonder how many people read your bio and see the idiot bit :D.

"Faced with the commoditization of the industry, motherboards and chipsets manufacturers (Intel & AMD) reacted with various strategies."

AMD does not build its own motherboards. Intel hasn't built its own boards for several years. The OEM companies -- Asus, MSI, Gigabyte, many others --are responsible for the motherboard designs with the gimmicks you despise. Intel does not tell motherboard companies how many LEDs to put on their products.

"The constant change of socket and chipset is another classic way to force the upgrade and justify the acquisition of a new motherboard. Indeed, AMD and Intel have this horrible habit of changing the type of socket and chipset with every new generation of CPU.After each processor release, the choice of compatible motherboards is limited to a few options costing around $300-400. Then, the price usually goes back to a reasonable level after a year or so."

This is a much more accurate description of Intel behavior than AMD behavior. When AMD released Ryzen in 2017, it pledged to support the platform with all new CPU launches through 2020. Historically, AMD has supported its platforms for far longer and more robustly than Intel. AMD does launch new chipsets with its platforms, yes, but it generally has not required updates in the same fashion. There have been some specific platforms that were only in-use for one product generation, but for AMD, that's been the exception -- not the rule.

https://www.newegg.com/Product/ProductList.aspx?Submit=Property&Subcateg...

New AMD motherboards for past generations are available starting at $45.

i remember when all of the same things were being said about hardware 20 years ago, verbatim. from memory price fixing to the inability for chip lithography to break into the nanometer scale. this period of stagnation produced the Socket 370 Celeron, a budget chip series that was so successful that it continues to this day... it started as a new "budget" alternative to Pentium II and began the now common process of putting cache on-die.

transitioning from socket 7 to slot 1 and slot A, and AMD's resurgence with Athlon. hell, even the Nvidia GeForce underdog versus the dominant 3DFx VooDoo's gfx card or whether having a 3D card mattered once Intel's planned integration of graphics into the northbridge would make it redundant (still hasn't, though on-board gfx is now the norm).

even the idea of, and the subsequent difficulty, of producing software optimized for parallelism due to multi-chip 3D cards or the inclusion of specialty code hardcoded into processors, for example, Pentium MMX. the importance being hard to gauge if there's little software available to utilize it and thus a bit of a chicken or egg issue. even back then, clock speed was favored over parallelism since most applications weren't coded to take advantage of MMX or parallelism.

maybe some 16-bit versus 32-bit addressing arguments?

if we pretend Adobe were proactive, should they code for Intel's MMX SIMD's or AMD's FX SIMD's? should it work with Direct3D, OpenGL, parallel 3D cards or should it even include any 3D code at all? though the terms are 20 years old, they have their analogues today, as you've read above. and like back then, your answer is probably "all of the above" although that is not realistic due to time, complexity, expense and marketplace confusion. since Adobe isnt proactive in reality, we dont have to answer this question, because the truth is that you could end up with having to choose which version to buy based on the one that matches the video card you have and whether you're using Direct3D or OpenCL. because that's what you get with a "one size fits most" platform like wintel.

it's all happened before and it will happen again.

despite arguments to support the apparent market stagnation, that previous period gave rise to some of the most important changes that affect personal computing today. looking back, it was the latter part of an era of evolution rather than revolution and yet the advancements made hold just as much value as the first multi-core consumer processors. ok, maybe not to a consumer, but to those in the industry, MMX-style SIMD instructions are a game changer. for consumers, the Intel Celeron filtered CPUs to the essentials and created a lower price point that represents far greater value for the end user. before that, you paid through the nose for a Pentium II whether you were processing words or processing 3D graphics... or you paid for an AMD K6, or worse a Cyrix 5x86, and dealt with your applications not running well, if at all.

but the truth is that parallelism is the future. it's how neurons work to create an infinitely superior computing solution. developers, however, still arent convinced enough to rewrite their codebases to accommodate for this. the proof is in the applications that have been written with hyperthreaded code to take full advantage of multiple cores... they easily outperform versions optimized for single-thread while running on cryogenically stabilized, overclocked single core processors, and do so on all fronts.

and if there are still developers that aren't convinced to rewrite their code for parallelism at great cost, consumers definitely aren't convinced that multicore is the future and single core is dead-end tech, especially if they can't see it because of said developers refusing to rewrite their codebase. oh look, it's the chicken versus egg issue again.

technology is primed to make another revolutionary leap but not a moment before the market is ready for it.

While you have rightly pointed out that Adobe isn't using more CPU cores, software optimization is not a simple matter.

Some algorithms, such as encryption and compression, can be sequential by nature, and benefit far more by single thread performance increases vs multiple cores

Also, I think LR and PS uses GPU, so you may see those tasks not being registered as a load on the CPU, because it's mostly doing book keeping.

Lastly, it also depends on their software development process. They might start out writing the algorithms for correctness, and then try to optimize without breaking that. If they have insufficient automated tests covering those optimizations, they'll have a hard time optimizing quickly.

The easiest multi-core optimization are the ones where the different threads don't talk to one another at all, and I'll bet that those are already using many cores safely and efficiently.

Those that need to share some pieces of data would need to be locked properly, and humans are not very good at writing multi-threaded programs while managing locks. So if they forgot the lock, they'd have weirdly corrupt data, but only once in a while. Locking in the wrong order could lead to deadlock, which means you'd have to kill the processes.
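
A tiny, purely hypothetical Python sketch of the forgotten-lock case (not Adobe code, just an illustration of the failure mode):

```python
# Hypothetical sketch of a race condition: updates are silently lost without a lock.
import threading

counter = 0
lock = threading.Lock()

def add(times, use_lock):
    global counter
    for _ in range(times):
        if use_lock:
            with lock:
                counter += 1   # the lock serializes the read-modify-write
        else:
            counter += 1       # unsynchronized read-modify-write: updates can get lost

threads = [threading.Thread(target=add, args=(100_000, False)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # can land below 400,000: silent corruption, "but only once in a while"
```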

None of which are a good look for any software development company.

Fantastic article. The youtube boys who always push the latest builds should read this, and come clean about the little lies they tell supporting Intel, AMD, and Adobe. Thank you.

Thoughtful article, and it is even more meaningful to look at it in the winter of 2020, when we see a fanless Apple MacBook Air outperform an Intel i9.