First of all, apologies for the click-bait title. Just wanted to see if it works.
Now, to the topic. I've been asked by a few people whether the scenes really looked the way they do in my photos, or whether I use Photoshop. Long story short, I do use image editing software, but not for the reasons you might expect. So why do I, and most photographers in general, photoshop their images?
Photoshop is bad!
You can argue that manipulating an image destroys the authenticity of the scene and that by doing so the photographer risks deceiving his audience. To an extent this is true, especially when we flip through fashion magazines and ponder how the women in them always look so "perfect".
You can see the same story in landscape photography. Have you ever looked at an image and thought, "Wow! I really wish I were there right now..."? And then, once you actually visit the place, you're left with a confused internal thought: "It doesn't look anything like it did in the pictures!"
And who is to say that it is the wrong thing to do? Maybe the photographer's intention was to reveal his imagination and vision of the landscape to the world. If the point is to show surreal scenes not in the way they are, but in the way they could be, then I am not going to stand in anyone's way. In fact, there are a lot of people out there who like to see images that way.
Whether or not photographers should tell their audience the extent to which their images have been manipulated is up to everyone's own ethical code. I personally don't see much value in turning something okay into "POWOW!" when there are real places on this planet that make your jaw drop without the help of artificial boosts.
Actually, Photoshop is not so bad after all
Here's the thing – I do use image editing software, both Lightroom and Photoshop to be exact. But I do it for different reasons than to gain the effect mentioned above.
There are two reasons which are very tightly related:
1. Our brains are wired to see the world differently than cameras do. Hence, compared to brains, cameras are pretty stupid creatures.
2. Sensor technology is limited compared to the human eye.
Brains vs Sensors
Let's take the first one. Cameras don't really know what you see or what you want to capture. Yes, you can point your glass at an interesting subject to capture exactly what's in the viewfinder, and autofocus is pretty good at making sure the point you're aiming at will be sharp – but that's about it. Everything else, as we all know, very much depends on the settings you use to capture the correct colours, sharpness, contrast between shadows and highlights and so on.
However, even if you use manual mode to make sure the shutter speed is correct (to eliminate blur from moving objects) and the aperture is at the right size (to control background blur), the camera still doesn't see what you can see. Once you click the shutter button, it uses electrical circuits to convert light into pixels and create an image. Rarely, however, does the result look like what your eye really sees, at least not straight out of the box.
That's why most photographers tend to shoot in RAW format. A RAW file captures more information than what you see on the screen, allowing the photographer, within limits, to recover from mistakes. The tools for this specific job? You guessed it – Photoshop and Lightroom.
Fun fact: smartphone makers know about this issue and tackle it by quickly processing every image you take, digitally adding some sharpness, contrast and saturation. So every time you take a selfie, the phone has secretly already done stuff to it so that the image looks more like what your eyes and brain perceive rather than what the camera sees.
You have probably taken a photograph in a high-contrast scene, say of a person standing in front of a window, and been left with only a silhouette. Sometimes that's an effect we desire; sometimes it's just frustrating to see black where you'd like to see emotion. It's one of the reasons we are discouraged from shooting against the sun: the photos are essentially doomed from the start.
The difference between the darkest and the lightest point in your image is called the dynamic range. Although modern cameras have gotten pretty good at capturing a wide dynamic range, they are still not as good as our eyes.
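Dynamic range is usually measured in stops, where each stop is a doubling of light. As a rough sketch (the function name here is mine, not a standard one), the number of stops between the brightest and darkest usable luminance is just a base-2 logarithm:

```python
import math

def dynamic_range_stops(brightest, darkest):
    """Number of stops (doublings of light) between two luminance values."""
    return math.log2(brightest / darkest)

# A sensor that distinguishes luminances from 1 to 4096 covers 12 stops:
print(dynamic_range_stops(4096, 1))  # 12.0
```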
To overcome this technical limitation, photographers either use filters or blend their exposures to expand the dynamic range of an image. In other words, they either put a darkened glass in front of the lens to dampen the strong highlights, or, instead of taking a single image, they take multiple – some exposed for the shadows and some for the highlights.
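To make the idea concrete, here is a toy sketch of exposure blending – not what any particular tool actually implements, and the function names are my own. Each bracketed frame is weighted per pixel by how well exposed it is (close to mid-grey), then the frames are averaged:

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Weight each pixel by how close it sits to mid-grey (0.5):
    # well-exposed pixels get a high weight, clipped shadows and
    # blown highlights get a weight near zero.
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def blend_exposures(frames):
    # frames: bracketed shots of the same scene, as float arrays in [0, 1].
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    weights = [well_exposedness(f) for f in frames]
    total = np.sum(weights, axis=0) + 1e-12  # avoid division by zero
    blended = sum(w * f for w, f in zip(weights, frames)) / total
    return np.clip(blended, 0.0, 1.0)

# Three bracketed "frames": underexposed, normal, overexposed.
dark, normal, bright = (np.full((2, 2), v) for v in (0.1, 0.5, 0.9))
result = blend_exposures([dark, normal, bright])
# The well-exposed middle frame dominates, so the result stays near 0.5.
```

Real blending tools work on luminosity masks or multi-scale pyramids rather than a single per-pixel weight, but the principle is the same: take the best-exposed parts of each frame.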
Now, there is a bunch of different software out there to blend these exposures together. If not used carefully, you end up with a #shittyHDR image that makes Ansel Adams want to wake up as a zombie and devour you alive.
You don't have to be a rocket scientist to realise that the photograph above has gotten too much love from image editing software. It may even make you wonder whether the photographer was trying to communicate the uniqueness of the scenery – or whether he wanted to recreate the Powerpuff Girls. And if for some reason you happen to like it, great!
My personal preference, however, is to capture reality rather than to create a new one. Now, I don't claim to have mastered the technique of proper exposure blending. Quite the opposite, in fact: I am just at the beginning of my journey, trying to follow in the footsteps of Jimmy McIntyre. But really, it only takes a bit of thought and logic to bring out the good sides of a photograph and reveal what the scene really looked like to your eye.
So there we have it. I do use image editing software to fine-tune my images. The purpose, however, is to overcome the technical limitations of the camera rather than to create an alternative reality.