One of the first things I heard when I sat down at a large white table with Light CTO and Co-Founder Dr. Rajiv Laroia and VP of Marketing Bradley Lautenbach was that, when it comes to lenses, plastic is better than glass. I scratched my head for a bit, searching for some logic, but kept an open mind (I did ask for a meeting with the guy who decided to put 16 lenses in a small box and call it the future of photography). From there, the meeting proceeded to somewhat blow my mind… if it's all true.
Where It Started
Laroia, surprisingly, doesn't come from a background in optics. Instead, he comes from one of innovation, having served as the CTO of Flarion Technologies, which Qualcomm later bought for its LTE technology (odds are you have a Qualcomm chip in your phone).
But Laroia was frustrated with the fact that he shot photographs almost entirely with his iPhone, even after having invested thousands in full-frame digital gear. He wanted DSLR-quality photography in a package as small as his phone, so he could take it with him anywhere without hesitation. And the innovation in the world of optics was seriously lacking.
What does an electrical engineer do when he wants a problem solved? He solves it himself. After a year of teaching himself optics, Laroia concocted a plan that makes him equal parts the innovator-we've-been-waiting-for and mad scientist. The L16 is his Frankenstein's monster, albeit with a little more usefulness than the aimless wandering and growling that makes the character popular around this time of year.
Plastic Lenses and Smaller Sensors Mean a Compact Camera with Full-Frame Quality
So, this is the weird part. The lenses in the L16 aren't made of glass at all. In fact, they're made of plastic. In an age when cell phone manufacturers are touting sapphire crystal lens covers, Light stands behind the newest plastics technology, which it says is finally better than even glass. But how can that be?
Laroia notes that optics manufacturers have worked for years to improve the quality of glass optics, in many cases adding aspherical lens elements to reduce aberrations and increase resolution. These elements are carefully ground so that their curvature varies from center to edge, bringing light that passes through every part of the surface to a single focal point behind the glass.
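For the optically curious, aspheric surfaces are usually described by their "sag," i.e. how far the surface departs from a flat plane as you move away from the optical axis. The equation below is the standard textbook form of an even-aspheric surface, not anything specific to Light's designs: c is the curvature (one over the radius), k is the conic constant, and the alpha coefficients are what let a designer vary the curvature across the surface.

```latex
% Standard even-aspheric sag equation (general textbook form, not
% Light's proprietary design). z(r) is the surface height at radial
% distance r from the optical axis.
z(r) = \frac{c\,r^2}{1 + \sqrt{1 - (1 + k)\,c^2 r^2}}
       + \alpha_4 r^4 + \alpha_6 r^6 + \alpha_8 r^8 + \cdots
```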
In traditional lenses, often only one side of an element is aspherical. And only some of the elements in a given lens might feature this time-consuming, delicate process. Nikon’s newest 24-70mm f/2.8E ED VR has 20 elements, with only four aspherical elements. The L16’s lenses have five elements, but both surfaces of each are aspherical, giving 10 aspherical surfaces. The result is a lens of plastic elements that is diffraction-limited. And the added cost of this process for Light: zero.
Plastic elements can be molded and stamped out perfectly every time, which means each of the lenses in the L16 costs a single dollar. Add a few more dollars for each sensor bought at volume (thanks to cell phone cameras keeping small camera sensors cheap due to their extremely high production volumes), and you start to get the idea of what’s possible with very little manufacturing cost.
So What Is It, Really? And What Does It Do?
The L16 features 16 camera modules, each with its own lens and camera phone-sized image sensor. Housed in a single, inch-thick box roughly the size of an iPhone 6 Plus (just slightly larger), the L16 uses its camera modules to simultaneously grab multiple photographs of the same scene at different focal lengths and then stitches them together to create a higher-quality, higher-megapixel image.
The L16 carries five 35mm lenses, five 70mm lenses, and six 150mm lenses, with each group contributing in different shooting circumstances to cover a zoom range from 35mm to 150mm. The 70mm and 150mm lenses are actually laid into the camera horizontally, each using a mirror to bend incoming light roughly 90 degrees onto its sideways-mounted sensor.
When shooting a 35mm image, all 35mm modules fire, but so do the 70mm modules. Each 70mm lens uses its own mirror to direct its field of view to a different quadrant of a 35mm frame, with the fifth 70mm lens shooting the center of the frame to ensure sharp detail everywhere. Advanced algorithms do the rest of the work, stitching the image together, while deciding what to keep and what to dump. The final result is a 52-megapixel image that rivals the quality of full-frame DSLR sensors, in theory.
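To make that scheme a little more concrete, here's a deliberately naive sketch of the quadrant-layering idea in Python. Everything in it (the function names, the 50/50 blend, the 2x sampling assumption) is my own illustration, not Light's actual algorithm; the real pipeline also has to register, warp, and deghost the layers before merging anything, and that's where the hard work lives.

```python
import numpy as np

def upscale(img, factor):
    """Nearest-neighbor upsampling (a stand-in for real interpolation)."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def compose_frame(shot_35, quads_70, center_70):
    """Toy sketch of the quadrant-layering idea -- NOT Light's algorithm.

    shot_35   -- the wide 35mm-equivalent frame, shape (H, W)
    quads_70  -- four 70mm shots covering the TL, TR, BL, BR quadrants,
                 each shape (H, W): same pixel count, half the field of
                 view, so twice the sampling density on the scene
    center_70 -- a fifth 70mm shot covering the center of the frame
    """
    H, W = shot_35.shape
    # The telephoto shots sample the scene twice as densely as the wide
    # shot, so the merged canvas is 2H x 2W pixels.
    canvas = upscale(shot_35, 2).astype(np.float64)

    # Blend each telephoto quadrant into its region of the canvas.
    # (Real pipelines align and deghost first; we just average.)
    corners = [(0, 0), (0, W), (H, 0), (H, W)]
    for quad, (y, x) in zip(quads_70, corners):
        canvas[y:y + H, x:x + W] = 0.5 * canvas[y:y + H, x:x + W] + 0.5 * quad

    # The fifth shot reinforces detail where the four quadrants meet.
    cy, cx = H // 2, W // 2
    canvas[cy:cy + H, cx:cx + W] = (
        0.5 * canvas[cy:cy + H, cx:cx + W] + 0.5 * center_70)
    return canvas
```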
Does It Really Work?
In theory, computational photography is limitless. And in theory, the L16 can produce images on par with those coming from a Canon 5D Mark III. I saw some images taken from both cameras side-by-side in Light’s Palo Alto offices. It’s pretty incredible.
On one hand, the images taken by the L16 are still slightly noisy in certain circumstances. A shadow area of an image taken in a hotel room displayed a slightly odd magenta tinge, and color was clearly the 5D Mark III's strength over something like the L16. Meanwhile, what appear to be artifacts similar to those you'd find in JPEG compression seem to cover some detail in the images. But then you keep things in perspective and start reimagining the future.
Sharing early images from a product like this with the media is a risk for any company. How we discuss the product can turn away potential customers or kill any interest and hope in a miracle cure for our weighed-down camera bags. Yet there's a reason Light felt the time was right: there's plenty to complain about when looking at the L16's first images, but there are also plenty of reasons for cautious optimism.
First, forget comparing the L16’s images to those of a cell phone. It’s a night and day difference every time (there was a matching cell phone photo for every comparison of the L16 and DSLR photos that I saw) and it makes sense given the extra camera modules and advanced computing going on in the L16.
But when you compare the L16 images to the 5D Mark III images, something interesting occurs: in places, the L16 actually shows more detail than the 5D Mark III. In an image of a vase in a nearby hotel room (Light brought the vase and had it in the office for me to examine), small cracks were easily distinguishable in the L16 image while being almost invisible in the Canon image. Adjustable depth of field after the fact is another draw of the L16, thanks to its multiple-lens design.
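That after-the-fact depth of field is possible because the 16 modules sit at slightly different positions, so every capture sees the scene with a little parallax. Textbook two-view stereo geometry (my gloss, not something Light walked me through) turns that parallax into distance:

```latex
% Classic stereo relation: the distance Z to a scene point follows
% from the disparity d (in pixels) between two views, given the
% baseline B between the modules and the focal length f (in pixels).
Z = \frac{f \, B}{d}
```

With a depth estimate per pixel, applying a depth-dependent blur lets you pick your aperture look after the shot has already been taken.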
Now, there was slightly more noise and the color wasn't as "pretty" (but not necessarily less accurate in well-lit areas). But there was definitely more actual detail — you know, the kind that the NSA would be interested in for reading faraway signs. And when you consider that the L16 only began taking images about six weeks ago, there's no telling where this could go by the time its release comes around in mid-to-late 2016. As I saw it, the L16 still needed to be hooked up to a computer to even take a picture. What it could do when it's an all-in-one, functioning unit is very promising, but those implied promises will need to be realized by launch to make it worth the asking price.
The L16 runs on Android and features touch controls along with 128 GB of internal memory (no memory card slot in sight). If you want additional power beyond what’s in the internal battery, you’ll want the additional hand grip, which simultaneously improves the ergonomics. For the slightly further-off future, Light has an agreement with Foxconn (famously Apple's manufacturing partner) to manufacture the first run of L16s in return for a licensing agreement, so you might see this tech in your phone sooner than you think. In the meantime, Light is also working on a slimmed-down version for those who want even more mobility.
A $200 deposit reserves your L16 for a total of $1,300, the remainder of which will be due before shipment in Fall 2016. After pre-orders run out on November 5th, the price goes up to its full $1,700 MSRP.
Those "35mm" lenses look to be a lot closer to 5mm.
Definitely, but they are likely referring to full frame equivalent focal lengths. Typical cell phone sensors have crop factors around 6-8, so 5mm x 7 = 35mm makes sense.
Probably, but if you're "redefining the concept" or whatever, it might be time to move away from expressing focal length by its 135 field of view equivalent.
Yes, I was talking about full-frame equivalents. While certain optical properties do vary for smaller lenses with the same field of view/angle of view as larger lenses on their respective sensors, I think speaking in terms of 35mm-equivalent focal lengths is pretty much our only option, since that's how people judge angle of view...and that's what people care about (mostly) when discussing the focal length of a camera.
If it were up to me, we'd simply get reacquainted with actual angle of view in degrees (or radians, for that matter, who cares?) and would simply name our lenses based on that number.... It would be much more useful....
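For anyone who wants to play along, the conversion from focal length to angle of view is simple trigonometry. A minimal sketch (the 36.0 default is the full-frame sensor width; the function name and everything else here is my own illustration):

```python
import math

def angle_of_view(focal_mm, sensor_dim_mm=36.0):
    """Angle of view in degrees for a focal length and sensor dimension.

    36.0mm is the width of a full-frame (135-format) sensor; pass 24.0
    for the vertical angle or 43.3 for the diagonal angle.
    """
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

# The L16's quoted range, named by horizontal angle instead:
print(f"{angle_of_view(35):.1f} degrees")   # ~54.4 (the "35mm" end)
print(f"{angle_of_view(150):.1f} degrees")  # ~13.7 (the "150mm" end)
```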
I wonder what their profit margin is per unit. I think the $1,700 price range is a huge barrier to entry for the average enthusiast. I'm wondering what their target market is and who they anticipate having the disposable income to spend $1,700 on a pocket camera.
I imagine the tech will be acquired by a cell phone company at some point. Motorola, LG or Samsung?
If this were a phone + camera, that would be great.
I think the potential is EASILY there. It's a bit thick for a phone...but if it's going to be that thick, it may as well be one, too, right? Add a microphone, a sim card, and some radios, and there you go! It already runs on Android....
The price point lost me... my guess is the first generation will suck... after a year and a $799 price point, it might be of interest... I don't want to invest in R&D.
So it's just several photos from phone sensors stitched together; they will always have the same problems as phone sensor photos: noise in low light, color noise, low dynamic range.
Under good light, phone photos are already comparable to DSLRs for most uses, especially by the time this comes out.
Computational photography is a lot more than just stitching photos together. These photos "overlap" several times, so to speak. And each time, algorithms can use that data in different ways to drastically increase the quality relative to what you would normally get with a smaller sensor... It's really quite different from stitching, although it may seem that way, of course.
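One deliberately oversimplified example of why the overlap helps (my own toy demo, not Light's method): merely averaging N independent noisy captures of the same scene cuts the noise standard deviation by a factor of sqrt(N), and averaging is the bluntest tool in the computational toolbox.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 0.5)   # a flat gray "scene"
sigma = 0.1                        # per-capture sensor noise

# Ten noisy captures of the same scene, like overlapping camera modules.
captures = [scene + rng.normal(0, sigma, scene.shape) for _ in range(10)]
merged = np.mean(captures, axis=0)

print(f"single-capture noise: {np.std(captures[0] - scene):.3f}")  # ~0.100
print(f"merged noise:         {np.std(merged - scene):.3f}")       # ~0.032, i.e. 0.1/sqrt(10)
```

Real pipelines add alignment, deghosting, and super-resolution on top of that, which is why this is quite different from simple stitching.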
They didn't show how each sensor contributes to the final image? Like an overlap map?
They actually did show this in the meeting I had with them. A very simplified version of what they showed is that the wide-angle shots cover more or less the entire frame, while the telephoto lenses get quadrants plus a center shot (five shots) to "add data" and "overlap directly," if you will. It's much more complicated than stitching, even if some stitching might be used in the process. This is more of a "lasagna layering" of information... Each layer adds a new depth/possibility to contribute to the quality of an image. Potentially this includes certain images not even necessarily recording all colors at once... There are all kinds of ridiculous combinations of data that you can use these sensors for... and believe me, they're using every trick they can...