Photography is hard. It's even harder when you forget your camera. But, as Chase Jarvis likes to say, the best camera is the one you have with you, and with the rise of computational photography that is truer than ever.
Every week, I photograph an open mic for magicians here in Toronto. It's great fun, and while it doesn't pay super well, it's definitely a good way to spend a Tuesday evening. A few weeks ago I got out of my Lyft and realized that things were... lighter than I thought. I had forgotten my camera. After a half-second panic attack I took a deep breath and walked inside. I explained the situation and trusted that my phone, a Pixel 2 XL, would be "good enough" for this sort of event.
Thankfully the Pixel 2 XL has a feature called Night Sight, which works a little like Olympus's pixel-shift mode: it takes multiple photos and uses the small shifts of your hand, tracked by the accelerometer, to merge the frames together for more sharpness, more dynamic range, and less noise. Basically any time I shoot on my phone, I turn on Night Sight, even during the day. So in the dark theater, with Night Sight, I got some okay images of the local magicians doing their acts for the small audience.
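To see why merging a burst of frames cuts noise, here is a minimal sketch (my own toy simulation, not Google's actual pipeline, and it skips the alignment step entirely): averaging N aligned frames reduces random sensor noise by roughly a factor of √N.

```python
import numpy as np

rng = np.random.default_rng(42)

# A flat "scene" patch with true brightness 100 (on an 8-bit scale).
scene = np.full((64, 64), 100.0)

def capture(noise_sigma=20.0):
    """Simulate one short-exposure frame: scene plus random sensor noise."""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

# One frame vs. a burst of 15 already-aligned frames merged by averaging.
single = capture()
burst = np.mean([capture() for _ in range(15)], axis=0)

# Residual noise (std. dev. of the error vs. the true scene) drops by
# roughly sqrt(15) ≈ 3.9x in the merged burst.
single_err = np.std(single - scene)
burst_err = np.std(burst - scene)
print(single_err, burst_err)
```

The real feature is much cleverer, of course: it has to align the frames against hand shake (which is where the accelerometer data helps) and reject moving subjects before merging, but the noise math above is the core of the benefit.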
Being able to shoot a (small) event on my phone made me realize just how far computational photography has come, and I really feel this is where photography is headed. Computational photography covers things like Night Sight, Portrait Mode, and HDR+, and even Canon's Dual Pixel RAW, which lets you shift the bokeh a little. Olympus, Panasonic, and Pentax all have some form of pixel-shift technology, with Fujifilm and Phase One jumping on board as well in the larger-than-full-frame market.
With megapixels reaching a point of diminishing returns (show me a photo taken at 50 MP next to one at 26 MP and tell me which is which without zooming in to 100%), and even cheap computers able to run Photoshop and Lightroom, I can see computational photography really taking off in the enthusiast market. Sony's new AF algorithms, which can run face-detect on animals, are a big leap forward in applying machine learning to photography. I foresee things like better depth-of-field simulation on high-end cameras, so we can have the shallow DOF of f/1.2 while actually shooting stopped down at f/2.8, along with better noise reduction and smarter auto modes for street shooters and beginners.
While there will always be purists, I feel like we are reaching a point where lenses are sharp enough, cameras are good enough, and lights are bright enough that we are going to be shopping for computational features rather than just resolution or ISO performance. The Light L16 was purely computational: a whole lot of cellphone cameras put together for some bokeh, resolution, and zooming magic. These features are starting in the consumer market and slowly moving toward the professional market.
What do you think about computational photography? Flash in the pan? Or the future?
Uh, how little does it pay that you can show up and make do with your phone? Genuinely curious
100% agree. Once someone manages to take a FF sensor and feed the info into even an existing phone, I'm sure it'll blow some people's minds. There's no reason why we couldn't have it today, since phone resolution is pushing 20-30 MP already, so just getting the data from a decent sensor instead of that dinky one could truly be a game changer. Imagine what a Google Pixel Night Sight picture could look like using a wide-aperture APS-C or FF sensor, when it can already do so much using that tiny sensor.
Just curious - out of all the photos you captured during that event, how many keepers did you have? How many were accepted by your Client, and what was the Client's feedback on the quality of the photos? What distance did you shoot from (just wondering if one can shoot with a mobile phone from a telephoto lens range (let's say, 70-200mm) without disturbing the Clients)? It's hard to judge the results based on only the two images you have posted. The first one I would not deliver to a Client; it looks blurry and out of focus. The second one is ok-ish (the moving fingers also look blurry).
This is a small weekly event - I basically shoot from one corner of the stage (and can't really move from there), but a 70-200 might be a little too attention-grabbing. I usually shoot with a 24-70 equivalent (16-55 2.8 on Fuji), so the 28mm of the Pixel 2 was a little challenging but not too dissimilar from what I often use. I generally want the magician and the audience member in the same frame, so a 70-200 would be a little long anyway.
As for the number of photos, it varies week to week, and they are usually happy with whatever I give them. I only get paid like 50 bucks Canadian for a two-hour event. But I enjoy the people I work with, it's not like they make a ton of money from the show themselves, and it helps out some friends of mine (and has directly led to more paid gigs doing press shots that pay a much fairer wage).
I do not, have not, and probably will not brand myself as an "Event Photographer," but it's a fun evening and the photos are 'good enough' for their needs (basically promoting the night on Facebook, which helps them promote their 'big night' every month, which is a separate, non-paid event, etc.).
Thanks for your reply.
Every time you go out as a professional photographer you represent the profession. For the sake of your colleagues, please don’t show up to a gig with your phone.
Curious did you get your camera back?
So are you selling your gear on eBay? I have lost several video jobs to kids using their iPhone, because the client said it was good enough.
Great article. I received my new Huawei P20 Pro, and one of the first things I did was take a couple of photos at night in my home office. I was amazed at the results. Mind you, I had to be very steady, and if I were to photograph a person, they would also have to be very still. Regardless, the future of photography looks amazing.
Thanks again for sharing, hello from Montreal.
Ugh, ok. The quality justifies the pay rate.
You are missing the point; it's not about justifying your pay based on what you use. The article demonstrates how far small cameras have come and how the manufacturers design software to enhance the results of such a small device.
This technology will soon be on bigger cameras and we all benefit from it.
You photograph this event weekly yet didn't bother checking if the camera was in your bag?
I'm sorry but this story doesn't add up.
It can easily happen.
David said that he is using a Fuji camera ("16-55 2.8 on Fuji"). A small and light system. Now imagine he is running late for the event, he grabs the bag and runs out the door. In his haste he would not notice the weight difference in the bag.
A colleague I worked with was late for a shoot and ran out the door, put his tripod and camera bag in the boot (trunk) of the car, and drove to the event he was to cover. When he opened the boot, all he had put into it was the tripod. The camera bag was still on the ground in the office carpark. Luckily, he managed to blag a spare body and lens from another photographer who was also at the event.
It also happened to me. I was running between two shoots, and I dropped my MF camera (film days) back at the studio and grabbed my 35mm kit, then had to run to a company announcement shoot. Unfortunately, another photographer I worked with had borrowed my 16-35mm and 24-105mm lenses and left me with two bodies and a 70-200mm. I was so rushed I never noticed the weight difference, and yes, I was careless not to open the bag and check before I left.
In my defence, we were each covering about 8 to 10 shoots a day. So pressure, parking tickets and speeding fines were all part of the day.
Long story short - shit happens! If it hasn't happened to you... it will at some point!
This is really cool. Great concept on photography and stage work.
If you ever need another magician to perform for you guys try me at https://AhItsAllenHe.com/themagician