You may remember the Lytro camera from Patrick's earlier posts, "...An Image You Can Focus After You Capture It" and "The Focus Later Camera...". Now Lytro has announced its consumer light field camera, with an 8x f/2 lens and built-in storage. An 8GB model that stores 350 pictures will be priced at $400, while a 16GB model with a 750-image capacity will cost $500. Lytro cameras are currently available for pre-order, shipping in early 2012. The question now is, "Is this camera going to be developed for professional use, or is it destined to be little more than a consumer gimmick?"
"The Lytro is the only consumer camera that lets people instantly capture a scene just as they see it by recording a fundamentally richer set of data than ever before. Lytro cameras feature a light field sensor that collects the color, intensity, and the direction of every light ray flowing into the camera, capturing a scene in four dimensions. To process this additional information, Lytro cameras contain a light field engine that allows camera owners to refocus pictures directly on the camera. When the Lytro’s living pictures are shared online, the light field engine travels with each picture so anyone can interact with them on nearly any device, including web browsers, mobile phones, and tablets—without having to download special software."
"The Lytro’s sleek design was created with simplicity in mind. With no unnecessary modes or dials, the camera features just two buttons—power and shutter—and has an intuitive glass touchscreen that lets pictures be viewed and refocused directly on the camera. While the Lytro camera houses complex technology, it is fundamentally easy to use, opening new creative opportunities for anyone interested in sharing their favorite memories with friends and family."
via [LaughingSquid] [PetaPixel]
I'm going to pass on this product. I think the interactive files on the internet are gimmicky and don't really give me anything more to offer my customers. This technology would be really neat if included in, say, another Canon EOS camera for sports photographers. It would let sports photographers just focus on the action and never worry about missing their AF point.
It's the first one. Interactive files on the internet are the best way of showing us what it can do. Camera companies sell many millions more point and shoots than professional cameras - why would you expect a company to cater to professionals (who are perfectly happy with what they have already)?
I'd love to have one, just for the sake of making a collection of photos from a family outing genuinely interactive. No more casual bored flipping through the tattered pages of a photobook of the same-old same-old... rather an event where people will gather to interact with the memories. Should be interesting. I don't know if there's going to be much commercial use for this, but the technology is still 1.0 and who knows where it will go.
There's no way it would be a professional camera.
Too slow and no interchangeable lenses make it a point and shoot!
Don't throw away your DSLRs just yet, lads.
Smells like a load of BS. Captures the direction of every light ray? Records in four dimensions? Living pictures that you can refocus on your mobile phone without special software?
Is it April 1 already?
I won't be picking this up anytime soon, but I do see some interesting possibilities.
Any word on whether this camera's software could export depth data to be used in Photoshop to add some crazy bokeh? That's really the only thing I would see myself using this camera for. To me it seems like this would make it possible to create some post-processed bokeh effects (or even wide-angle, shallow f/0.5-style DOF).
Any thoughts on this?
The problem with that thought is that you need to know what is behind an object to be able to blur the object or the background significantly. Try taking a completely sharp photo, masking some object, and significantly blurring what's around it... at the edge of the mask, what happens to the blur? It's screwed up, because the blur operates on the flat picture data, causing the masked object's color to affect the blurring of the background. You don't just need "depth data" for what you see, you also need to record what you DON'T see: the objects behind other objects, or at least enough behind the edges of other objects to recreate the amount of blur you want. And I'm sorry, but there is no camera that records blocked, hidden, unseen rays of light.
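The edge-bleed problem described above can be sketched in a few lines of NumPy. This is a hypothetical toy, not anything from Lytro's pipeline: we blur a flat image containing a bright masked foreground square, and check that a "background" pixel right next to the mask edge gets contaminated by the foreground value.

```python
# Toy demo: why blurring flat image data bleeds foreground into background.
import numpy as np

def box_blur(img, k=5):
    """Naive k-by-k box blur with edge clamping."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# Dark background (0.1) with a bright foreground square (1.0).
img = np.full((32, 32), 0.1)
img[12:20, 12:20] = 1.0
mask = img > 0.5                 # the "depth" mask of the foreground

blurred = box_blur(img)
# A background pixel just above the mask edge is no longer 0.1: the
# bright foreground bled into it, exactly the artifact described above.
print(blurred[11, 16])           # noticeably brighter than 0.1
print(blurred[2, 2])             # far from the edge, still 0.1
```

Real depth-aware blurring has to invent (inpaint) the hidden background behind the mask edge before blurring, which is the commenter's point.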
For $100 more I would expect a little more than 8 extra GB.
Why are people so quick to dismiss the camera? The science behind it is sound in that it is designed to capture light fields from different angles and use all the information to process for the given focusing point computationally.
It will of course have limitations, but this is an exciting product and really the first true breakthrough we've had in digital photography that is not trying to mimic an analog model. Computational imaging is a very rich area of research, and although it may not be ready to be adopted in 'professional' shoots, dismissing it would look about as intelligent as those who dismissed digital cameras at their advent.
The science behind it is sound?? The science is bunk. "Captures in four dimensions"? Really? And what is the fourth dimension? Wouldn't it only need three to record depth? "11 mega-ray"? 11 megapixel isn't even that exciting, so this claim of 11 mega-ray capturing "all the light rays at a given point at a given time" is bogus... and as far as recording the "direction of every ray flowing into the lens", well, since they're all flowing into the lens, they all come from essentially the SAME DIRECTION. Furthermore, a standard sensor doesn't record direction - and they're claiming all they've done is attach a micro-lens array to a standard sensor (read the fine print in the "cut away"), something lots of regular digital cameras already do.
Actually, since almost all light we see is reflected light, it's coming from a near-infinite number of directions. Just because it's hitting the same place doesn't mean it's coming from the same source.
The prototype was a medium format dslr, so I wouldn't knock the lens or body. Scaling it up has been done already. The rendering artifacts are problematic but they will improve, and since this is pretty much 3d raw data, your old pics will look better too as the software gets better. I think the real killer app will be true 3d photography. It is coming, and these guys have a head start. Is it pro grade now? no, that's not the target market. Will selling to consumers fund improvements to the technology? You bet. It's the birth of a whole new kind of photography, and I for one am excited. I wish them the best, and yes, I do plan on buying one for the sheer pleasure of playing with it and having fun. That's a big reason why we do this right?
This would be awesome for those A$$hole street photographers that run up to people and shove their cameras in their faces without caring about focus.
While I've talked to a physics friend of mine who explained the many possible professional uses for such a technology, the current implementation is clearly a gimmicky camera aimed at consumers. While it does take advantage of one of the best points of the technology, a FAST constant f/2 aperture across a massive 8x zoom range, the rest of the implementation is clearly not targeted at pros.
Tried out their simulation of changing the focus in post... it looks as if it's only possible to change the focus point between the foreground and the obvious background... no fine tuning possible... say, one third into a shot...
I'm sure someone can hack the new 1D X to shoot 14 differently focused images, then compile those images in a focus-stacking-type app... there you have the same thing: pick one of 14 points of focus.
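The stack-and-pick idea above can be sketched in a few lines. This is a hypothetical toy (real focus-stacking software also aligns frames and smooths the selection): given several differently focused frames, each output pixel is taken from the frame with the highest local contrast, a common sharpness proxy.

```python
# Toy focus stacking: per-pixel, keep the sharpest frame's value.
import numpy as np

def local_contrast(img, k=3):
    """Sharpness proxy: absolute deviation from a local box mean."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    mean = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            mean += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    mean /= k * k
    return np.abs(img - mean)

def focus_stack(frames):
    """Pick each pixel from the frame where it has the most local contrast."""
    stack = np.stack(frames)                            # (n, H, W)
    sharpness = np.stack([local_contrast(f) for f in frames])
    best = np.argmax(sharpness, axis=0)                 # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Toy frames: each is "sharp" (checkerboard detail) in one half, flat elsewhere.
a = np.zeros((8, 8)); a[:, :4] = np.indices((8, 4)).sum(0) % 2  # sharp left
b = np.zeros((8, 8)); b[:, 4:] = np.indices((8, 4)).sum(0) % 2  # sharp right
result = focus_stack([a, b])
# result takes its left half from `a` and its right half from `b`.
```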
Cool Physics, amazing new idea, funny looking camera. :)
Looks like you can only view the photo via the software online. Did I get that right? No prints, no JPEGs.
It would be great if they had a JPEG format.
I've been hearing a lot about this for about 6 months now. I have watched all the videos about it, been on the website, and had a play around with the refocus photo examples/demos they show you (they look very Photoshopped/fake), and I have been trying my hardest to figure out if this is a hoax. I would love it to be true, but sadly I think it's a publicity stunt, and it will never come to light :(
I still think the majority of people are looking at this from the wrong angle.
As a photographer, I'm happy to choose my focus when I take the shot. Sure, I get the occasional shot where I screwed up the focus, but on the whole, I'm happy. You know what, as a photographer, I feel this offers me nothing... but I think that's the viewpoint everyone else is judging it from too.
As someone browsing images: when they first announced it, I looked through the shots on their website, and I spent far longer looking at every shot, because I could explore it myself.
So sod your opinion as a photographer, think about what it offers the viewer (you know, the people we make photos for...)
Here are some simple ideas for how this technology could be used (software permitting):
Adjust focus: Useful if accurate focus is difficult to achieve, for example with moving objects.
Refocus: Useful if you change your mind about what should be in focus.
Selective focus: Like painting with light (aka dodge and burn) you could paint with sharpness and blur.
All in focus: Like focus stacking, great for macro.
Non-parallel focus plane: Similar to tilting the lens, you could tilt the focus plane. Useful for architecture, portraiture, product shots etc.
Non-planar focus area: Have the surface of a curved or oddly shaped object in focus.
Focus masks: Use the depth information to create masks in Photoshop, for example to desaturate elements that are further away or to easily select and replace a background.
Displacement maps: Use depth information with Photoshop's displacement filter to wrap another picture around the image.
Follow focus: A killer app if the camera could shoot light field videos.
3D from one picture: Use depth information to create (limited) 3D objects, similar to what Helicon Focus can do as a side product of focus stacking.
Stereoscopic light fields: Use two light field cameras for 3D movies with additional depth information (which would allow you to move your head a bit and see a slightly different image) or to render an area sharp depending on what the viewer is looking at.
Lenticulars: Use depth information with lenticular prints.
Embosser: Combine the light field camera with a device that creates a relief of the scene. Useful if you want to share pictures with blind people. Or take a portrait, invert the depth information, use it with a 3D printer, and put it on the wall to have the person watch you.
Relight: Use the raw light field that has the directional information of the light to change the brightness of the light sources or adjust their colors in mixed light situations.
Compress: Use the depth information to compress the perspective as if the picture was taken with a longer lens.
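The "focus masks" idea in the list above is simple enough to sketch. Assuming the camera's software could export a per-pixel depth map (purely hypothetical), desaturating everything beyond a chosen distance is just a threshold and a blend:

```python
# Hypothetical sketch: use an exported depth map to desaturate far pixels.
import numpy as np

def desaturate_far(rgb, depth, threshold):
    """Convert pixels deeper than `threshold` to grayscale, keep the rest."""
    gray = rgb.mean(axis=-1, keepdims=True)    # crude luminance, (H, W, 1)
    far = (depth > threshold)[..., None]       # depth mask broadcast over channels
    return np.where(far, gray, rgb)

# Toy scene: a pure-red image; left half is near (depth 1), right half far (depth 5).
rgb = np.zeros((4, 4, 3)); rgb[..., 0] = 1.0
depth = np.ones((4, 4)); depth[:, 2:] = 5.0

out = desaturate_far(rgb, depth, threshold=2.0)
# Near pixels stay red; far pixels collapse to neutral gray.
```

The same thresholded mask could just as easily drive a background replacement instead of a desaturation.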
There are probably thousands of other ideas just waiting to be thought of. So I'm much more excited about what could be done with this technology than I'm concerned about what the initial technical shortcomings might be.
In my opinion, despite this being a brilliant invention, it's a little too late. My Canon Elph already sits in a drawer gathering dust. I either use my iPhone or I use the DSLR. This can't compete with the latter and would be another thing I'd have to carry with my phone... which means that I won't.
It is always suspicious when the manufacturer charges for a camera with built-in storage, and $100 more for an extra 8GB of storage. Why not use SD cards?
OK. For what it's worth.
I went to Lytro's site, clicked through their "Science Inside" stuff, saw the link to their CEO's PhD dissertation, went to the referenced Stanford University Dept of CS site, and poked around.
Unless Lytro's gone to _great_ lengths to stand up fake websites across the 'net, this does not look like BS.
In fact, Stanford has its own archive of examples online, which I think are actually better than Lytro's, as you can not only focus on any point of depth but also change the aperture to bring the entire picture (foreground and background) into focus at the same time. It also lets you slightly rotate the picture as if you were moving the camera location. (Which theoretically could allow you to construct 3D images after the fact.) Links follow:
http://lightfield.stanford.edu/aperture.swf?lightfield=data/chess_lf/pre...
http://lightfield.stanford.edu/lfs.html
Myth: Confirmed.
As to its value, time will tell. We already take RAW images that we post-process. This could end up being just an extension of that concept.
I just laugh at all the ignorant comments people make here..
No popup flash, and no hot shoe. If you can't expose for a shot, you don't have a shot. Looks fun for daytime shots, but if you go to the site and look at the photos they put up, they are pretty bad. This technology will probably get bought by a better camera maker and thus become very cool. The design is bad and tries too hard to be Apple-like; just look at the site. Cameras need to have form and function. I applaud the makers for their hard work and thinking outside the box. They should now sell this patent to Fuji or someone more capable of making good cameras that photographers want to use.