What you get IS NOT what you see

A friend of mine was in a terrible road accident about three months ago and has scarring on her face. She showed me a recent photograph of herself taken on her mobile phone.

‘Do I really look like that?’ she asked.

The answer was ‘no’, and the reason why reminded me that, even though photography is going through a golden age of accessibility thanks to mobile phones, people still don’t understand that a camera doesn’t see things the way your eyes do. Yet such is our touching faith in technology that we imagine the photo represents the truth better than the mirror, or even our own eyes.

Well, it ain’t so and here’s why.

It’s primarily down to the type of lens used and the processing that goes on in the background.

By the way, whenever I say ‘camera’ in this piece, I mean anything from the camera in a phone to a top-range DSLR and everything in between.

DIGITAL CAMERA PROCESSING

Let’s start by disproving a commonly held belief:

The camera records what I see accurately.

It doesn’t. There’s no end of image tweaking that goes on in modern cameras (or phones) BY DEFAULT.

The default is Vibrant

My new phone is a Motorola G6 Play. During the initial setup, one of the questions was: ‘Do you want to keep the display settings at their default, Vivid, or set them to Standard?’ – meaning less ‘poppy’ but more natural. The first thing I do when buying a modern camera is turn off all the automatic processing the manufacturer builds into it. I don’t want the shadows lightened, for example: I frame with shadow, as in this example:

Dark Waterfall from the Out and About collection

The default settings on any modern camera, whether a top-end DSLR or your humble camera phone, are devised to give you an image which makes everything visible, especially areas in deep shadow or very bright light.

Manufacturers have used lots of differing names for it, but the principle is the same: lighten shadows to expose more detail, and tone down over-bright areas to do the same. The result is an even exposure across all areas, light and dark.
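To make that idea concrete, here’s a toy sketch of that kind of tone curve in Python. This is purely illustrative – no manufacturer publishes its actual pipeline, and the gamma and knee values here are invented:

```python
import numpy as np

def lift_and_compress(v, gamma=0.8, knee=0.8):
    """Toy 'default' tone curve: gamma < 1 lightens the shadows,
    then values above the knee are rolled off so bright areas
    keep detail. v is an array of values in [0, 1]."""
    v = np.clip(np.asarray(v, dtype=float), 0.0, 1.0)
    out = v ** gamma                      # lifts the dark end the most
    bright = out > knee
    # map the [knee, 1] range down to [knee, 0.95] to tame highlights
    out[bright] = knee + (out[bright] - knee) * (0.95 - knee) / (1.0 - knee)
    return out
```

Feed it a deep shadow value of 0.1 and it comes back noticeably lighter; feed it pure white and it comes back slightly dimmed – exactly the ‘even exposure everywhere’ effect described above.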

For the majority of people this is a desirable thing, and it’s why you rarely see a ‘bad’ photo these days. Under- or over-exposed shots are a rarity thanks to this kind of processing, and even blurring is less common thanks to impressive camera-shake reduction technology.


DON’T YOU MEAN HDR?
Now, you may be thinking ‘he’s talking about HDR’ – and if you don’t know what HDR is, you can find an excellent description here.
I’m not talking about HDR, however. I’m talking about the default processing that goes on in your camera or – especially – your phone processor, before you even engage the HDR setting.


The truth is, however, that these default settings don’t convey an image accurately. Along with lightening shadows and toning down bright areas to expose more detail, they:

  1. Increase contrast (minimising mid-tones to sharpen the divide between light and dark areas – disastrous when applied to delicate shifts in skin tone);
  2. Increase saturation (pumping up colour); and
  3. Increase sharpness (boosting definition across the entire shot to bring out detail, which turns smooth areas into something with the texture of a gravel driveway. All that moisturising for nothing!).
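Here’s a rough sketch of what those three tweaks amount to, written as plain NumPy. This is my own illustration, not any camera maker’s actual code, and the factors are invented:

```python
import numpy as np

def punch_up(img, contrast=1.2, saturation=1.3, sharpen=0.5):
    """Apply invented 'default-style' tweaks to an RGB image
    (an H x W x 3 float array with values in [0, 1])."""
    out = img.astype(float)
    # 1. Contrast: push every value away from the overall mean.
    out = (out - out.mean()) * contrast + out.mean()
    # 2. Saturation: push each pixel's colour away from its grey value.
    luma = out @ np.array([0.299, 0.587, 0.114])
    out = luma[..., None] + (out - luma[..., None]) * saturation
    # 3. Sharpness: unsharp mask - add back the difference between
    #    the image and a 3x3 box blur of itself.
    pad = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode='edge')
    h, w = out.shape[:2]
    blur = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    out = out + sharpen * (out - blur)
    return np.clip(out, 0.0, 1.0)
```

Run a smooth gradient through it and the spread of tones widens – great for a punchy landscape, cruel to a face.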

For a landscape, or a group photo of people, these default effects are fine but for a selfie or portrait shot, such processing tends to ‘harden’ people’s faces and make them look spray-tanned brown or lobster red. The thing is, with mobiles, you mostly cannot turn these settings off. This is why phone manufacturers were quick to add a variety of filters to compensate.

Ever taken a selfie that makes your eyes bigger, smooths your skin and adds floral headbands and hearts flying around your head? Judging by my Facebook and Instagram feeds, most of you have. Well, stop it. You’re a person, not an elf or an anime character. That particular filter is at the extreme end of the spectrum, but with the ‘rise of the selfie’ (the working title for the next Star Wars film), phone manufacturers have added dozens of ‘portrait’ filters to compensate for the unflattering effect of their default settings.

So let’s return to my friend’s question: ‘Do I really look like that?’

Clearly the answer would be ‘no’, but there’s another, more compelling reason why portraits and selfies taken on a mobile are inaccurate: the lens.

HOW A LENS CHANGES EVERYTHING

What does the lens do? Without wishing to state the bleedin’ obvious, more than anything else on a camera, the lens affects what you see. Different types of lenses capture the image in different ways and are best suited to different types of photography.


Sports photography requires a zoom/telephoto lens so you can get close to the subject from far away, while close-up nature photography requires a macro lens, letting you focus from just millimetres away from your subject.

Skier by Johannes Waibel on Unsplash
Leaf by gil on Unsplash

You can take a photo with any lens but some are better suited to certain tasks than others.
Let’s avoid getting too technical here, because with photography it’s all too easy to do so. Bear in mind I’m:

  1. talking about smartphone lenses vs camera lenses; and
  2. talking about which lens gives you the most flattering portrait shot.

WHAT DIFFERENT LENSES DO

To understand what I mean, take a look at the range of photographs of a landscape below. For these photographs, the camera was mounted on a tripod and pointed at a fixed view; its position and the direction it was pointing remained unchanged throughout.

Each photograph was taken with a lens of a different focal length – nine in all, from an 18mm lens up to a 300mm lens. In the image, you can see the difference changing the lens makes to the photo you get.

Comparison photo from the Campbell Cameras website

Now let’s look at what those different lenses do when pointed at a face. Here’s a good example of how different focal lengths change the shape of a face, from a lighting tutorial on the Pro Video Coalition website.

Image from Pro Video Coalition

As a general rule of thumb, when it comes to portrait photography, most photographers would choose a lens in the 85mm-300mm range as it makes the face look closer to how it does in real life.
Anything below 50mm begins to distort an object close to the camera.
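Those focal lengths translate into very different angles of view, which is easy to check with the standard angle-of-view formula. Here it is for a full-frame (36mm-wide) sensor – a quick sketch, not anything from the tutorials linked above:

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm=36.0):
    """Horizontal angle of view, in degrees, for a given focal
    length on a sensor of the given width (36mm = full frame)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))
```

An 18mm lens takes in about 90 degrees; a 300mm lens a little under 7. A wide lens therefore forces you in close to fill the frame with a face, and it’s that short working distance that exaggerates the nose and chin.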

SMARTPHONE LENSES

An article on the Techspot website about smartphone camera hardware makes this statement:

‘All smartphones fall into the wide-angle lens bracket, typically somewhere around 24-30mm.’

And that, in a nutshell, is why they are not good for portraiture.
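Incidentally, those ‘24-30mm’ figures are 35mm-equivalent numbers. The physical lens in a phone is only a few millimetres long, but the tiny sensor behind it crops the view, so the industry quotes the focal length that would give the same framing on a full-frame camera. A sketch of the conversion – the 4mm lens and 6mm sensor diagonal below are typical ballpark figures, not any particular phone’s spec:

```python
def equivalent_focal_mm(actual_mm, sensor_diag_mm, full_frame_diag_mm=43.27):
    """35mm-equivalent focal length: the actual focal length scaled
    by the crop factor (the ratio of the sensor diagonals)."""
    crop_factor = full_frame_diag_mm / sensor_diag_mm
    return actual_mm * crop_factor
```

Plug in a 4mm lens over a 6mm-diagonal sensor and you get roughly 29mm equivalent – squarely inside the wide-angle bracket Techspot describes.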

However, rules are made to be broken. Look at this example by photographer Zulmaury Saavedra: an interesting portrait that uses the distortion caused by the wide-angle lens for creative effect, but certainly not a realistic interpretation of what the model looks like.

Photo by Zulmaury Saavedra on Unsplash

It’s really hard to find an example of a bad portrait caused by using the wrong lens because no photographer wants to put bad portrait photos online!

I hope you’re now closer to understanding why a camera on a phone produces poorer portrait shots: the lens is the wrong type for the job, distorting facial features, and the default processing in the camera is unflattering for portraiture.

As with many things, it’s a question of the right tool for the right job.

If you found this article interesting, here are links to some other websites that you might like, too.

How to Choose the Perfect Portrait Lens
5 of the Best Smartphone Camera Lenses Money Can Buy

Skater project

Ever wondered how photographers get that ‘super wide’ look to some of their images? They cheat, of course…

Essentially, you use a fabulous tool in Photoshop called ‘Photomerge’ to stitch two or more photos together.
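For the curious, the core blending idea is simple enough to sketch. The real Photomerge also detects matching features and warps the frames into alignment first; assuming two already-aligned images, the seam is just a cross-fade (my own toy version, nothing to do with Adobe’s actual code):

```python
import numpy as np

def blend_stitch(left, right, overlap):
    """Cross-fade two already-aligned images across an
    'overlap'-pixel-wide vertical seam, then join them."""
    # alpha fades from 1 (all left image) to 0 (all right image)
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
    seam = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    return np.concatenate(
        [left[:, :-overlap], seam, right[:, overlap:]], axis=1)
```

Two 10-pixel-wide images with a 4-pixel overlap come out as one seamless 16-pixel-wide image.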

You can find hundreds of tutorials online showing you how to do this so I won’t do that here.

Instead, here’s one I made earlier…

This first picture is one I’ve had on the website before…

This second one was never on the site. You can see my finger in the corner. Doh!

Photoshop automatically merges them together like this…
Note how it completely removes the skater in the pit and makes the shadow seamless.

Here they are separated…

If you look closely, you can see a mismatch on the edge of the path…

That can be fixed in just a couple of minutes with the Stamp tool and then you just crop the image…

That’s it. If you fancy, you can add a tint to give it a ‘polaroid’ feel.

There are still a few inconsistencies in the image above, but given the whole thing took less than 10 minutes, you can see how quick it is to do.

Photographers? Dastardly people…

More to come

Each of the photo sections only has 12 photos in it. This is deliberate, to give you a sample of the type of photos I took on the trip.

My photographic trips to China and California were longer than the others, however, so I will be adding new photos to these sections.

You can subscribe to receive an email telling you whenever new photos are added.

I also kept a blog of my 2013 California trip which you can find by clicking here or on the image below.