Before Android powered modern smartphones, the company behind it, prior to the Google acquisition, was focused on making a universal operating system for cameras. It raises the question of an alternate reality that never happened: what if it had?
Old digital camera aficionado James Warner over at the Snappiness YouTube channel has a video that details the short but interesting history of attempts to unify cameras under the banner of one operating system. He talks about Digita OS, an operating system from 25 years ago that found a home in some Minolta and Kodak cameras but never saw widespread adoption. The second attempt at a camera operating system, pitched in 2004, didn't take off either; that one would eventually become the smartphone operating system Android. Interestingly, though, after Google bought the company and turned it into a smartphone operating system, it did make its way onto a camera in the form of the failed Samsung Galaxy NX in 2013, a mirrorless camera with an APS-C sensor and a phone-sized screen and operating system grafted onto the back.
It poses an interesting question: how much better would the photographic world be for photographers under the united banner of one operating system? Surely, having some consistency between interfaces would make it easier for new users to jump into a camera system and for experienced users to switch between brands. I know that my head hurts having to remember how a Fujifilm menu system works differently from a Canon menu system, which is different again from a Sony system. If nothing else, having an Android-based operating system right on the camera would at least encourage camera manufacturers to include some sort of built-in memory in their cameras, something that should be a no-brainer in 2023.
If I had to wager a guess, a lot of why this didn't take off is the pride often found in Japanese corporate culture. Anyone who has used a camera control app on their phone from any of the big camera companies knows that software isn't the strong suit of these companies, which focus on optics and camera sensors. While it would make sense for usability purposes for any of these manufacturers to adopt a well-designed third-party operating system to run a camera, it's often the pride of keeping the entire imaging pipeline in house that wins out. It's likely a large part of why we don't see iPhones co-branded with Canon or Nikon lenses. And given the current state of the camera industry, it doesn't look like such a feat will be attempted again.
It's a shame, really, as I'd love to be able to load up a round of Doom on my EOS R5 to play in my downtime.
What do you think of having an open source operating system powering your camera? Would you buy one that had such software? Leave your thoughts in the comments below.
I thought it seemed pretty clear that they're all Linux underneath, no?
"If I had to wager a guess, a lot of why this didn't take off is the pride often found in Japanese corporate culture. Anyone who has used a camera control app on their phone from any of the big camera companies knows that software isn't the strong suit of these companies, which focus on optics and camera sensors."
It's the same deal with car companies. GM looks like it's poised to remove CarPlay and Android Auto from its cars.
I'd wager that at some point, some other new player (say, DJI) which has a decent amount of software proficiency *and* respect for the optical side will buy up some remaining stake in an old optics player (e.g. the rest of Hasselblad), and maybe some new ones (e.g. Laowa for their cine lenses), and then they will try making their own sort of camera. It will be dismissed by the other old players at first because it will be e.g. content-creator focused. But over time they will move more and more into the space that Canon, Nikon, etc. used to claim as their exclusive domain, much as e.g. Shimano once did to Campagnolo.
If / when that happens, it'll be interesting to see how Sigma, Tamron, etc. do because they have had to make good, competitive lenses without the benefit of owning the mount, the way that the old players have.
The main reason it never took off is the sheer size of a mobile OS.
Do you want to wait a minute for your camera to start?
Do you want a much shorter battery life, because of the huge processor and tons of background tasks that run every minute?
All OSes have tons of security issues that need monthly patches; camera manufacturers would not keep up with the constant updates required to keep your device safe.
There are reasons why camera OSes are so stripped down: security, battery life, speed.
There are some technical issues with this that require a paradigm shift in thinking. The OS that you experience on your phone or your computer is not Linux itself. Linux for embedded devices can start up in much, much less than a minute, as little as a second, which is to say, on the same order of magnitude as my R5.
To really understand this, you need to know that the OS itself can be pared back to the kernel, glibc, and then whatever specialized graphical stack you want to run on it. That's it. GNU/Linux itself (i.e. the kernel and the C runtime) has a really small footprint, especially if you only compile the parts of the kernel that you actually need. That is how it can run on tiny SoCs that are even smaller than the ones in your phone. To give you an idea of how little it needs to run, you can read this:
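To sketch what "pared back" can mean in practice, here is a hypothetical fragment of an embedded kernel configuration, illustrative only and not taken from any real camera firmware; the specific trade-offs any vendor makes would differ:

```
# Hypothetical pared-back embedded kernel config fragment (illustrative)
CONFIG_EMBEDDED=y         # expose options for minimizing the kernel
CONFIG_MODULES=n          # compile in only the drivers you actually need
CONFIG_SWAP=n             # no swap on a camera
CONFIG_BLK_DEV_INITRD=n   # boot straight into the rootfs, no initrd
CONFIG_PRINTK=n           # drop kernel logging in production builds
CONFIG_PREEMPT=y          # low-latency preemption for a responsive UI
```

Every feature you leave out is code that never has to load, initialize, or be patched, which is where the second-scale boot times come from.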
What you should get for running a more flexible OS (in an ideal world) is a much more open and programmable platform.
Imagine that the image processing could be programmed much like how the Accelerate and Metal frameworks are called in iOS today.
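To make that concrete, here is a toy sketch in pure Python of the kind of demosaic step an open, programmable platform could expose. All names are hypothetical, and a real camera does this on dedicated silicon, not in an interpreted loop; this only illustrates the idea of a user-programmable pipeline stage.

```python
# Toy sketch: a naive bilinear demosaic over an RGGB Bayer mosaic.
# Hypothetical illustration only; real pipelines run on ASICs/GPUs.

def bayer_channel(x, y):
    """Which color the sensor recorded at pixel (x, y) in an RGGB mosaic."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def demosaic(raw):
    """Fill in each pixel's missing color channels by averaging the
    nearest neighbors (3x3 window) that did record that channel."""
    h, w = len(raw), len(raw[0])
    out = [[[0.0, 0.0, 0.0] for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sums = {"R": [0.0, 0], "G": [0.0, 0], "B": [0.0, 0]}
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        c = bayer_channel(nx, ny)
                        sums[c][0] += raw[ny][nx]
                        sums[c][1] += 1
            for i, c in enumerate("RGB"):
                total, n = sums[c]
                out[y][x][i] = total / n if n else 0.0
    return out

# A uniform gray mosaic demosaics to uniform gray in every channel:
image = demosaic([[100, 100], [100, 100]])
```

The point isn't this particular algorithm; it's that on an open platform the demosaic stage could be swapped out or tuned by the user, the way shaders are on a GPU.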
I think the real reason this concept hasn't quite taken off yet is that the public-domain software and hardware knowledge to, e.g., roll your own Neural Engine isn't quite there yet. Whatever company does something like this wouldn't just be a software company per se. It would need some degree of integration with the hardware. In particular, you need integration with specialized ASICs that handle image processing tasks like demosaicing the RAW data for the viewfinder/display, and then you'd want your own little "Tensor" unit in there to do the AI-powered object detection, autofocus, etc. With that said, some of this can probably be gleaned from Android already.
The big camera companies already do all of the above. It's just that it's all running on closed platforms of one sort or another. There is probably a specialized real-time OS underneath, but Linux can be made to run in real time as well. The key parts here are the graphical stack and the image processing that sit on top.
While it may sound tempting, any kind of "general" or "unified" software requires additional abstraction layers, which eventually lead to performance degradation and bugs. We can see that clearly in comparison with iOS. So, I am very happy that this idea didn't catch on :)