The human eye is a wonderful instrument, relying on refraction and lenses to form images. There are many similarities between the human eye and a camera, including:
- a diaphragm to control the amount of light that gets through to the lens. This is the aperture diaphragm in a camera (the shutter, by contrast, controls how long the light is admitted), and the pupil, at the center of the iris, in the human eye.
- a lens to focus the light and create an image. The image is real and inverted.
- a method of sensing the image. In a camera, film or a digital sensor records the image; in the eye, the image is focused on the retina, and a system of rods and cones is the front end of an image-processing system that converts the image to electrical impulses and sends the information along the optic nerve to the brain.
A photograph is the illusion of a literal description of how the camera ‘saw’ a piece of time and space.
Photography is not about the thing photographed. It is about how that thing looks photographed.
OVERVIEW OF DIFFERENCES
1. ANGLE OF VIEW
With cameras, this is determined by the focal length of the lens (along with the sensor size of the camera). For example, a telephoto lens has a longer focal length than a standard portrait lens, and thus encompasses a narrower angle of view:
Unfortunately our eyes aren’t as straightforward. Although the human eye has a focal length of approximately 22 mm, this is misleading because (i) the back of our eye is curved, (ii) the periphery of our visual field contains progressively less detail than the center, and (iii) the scene we perceive is the combined result of both eyes.
Each eye individually has anywhere from a 120-200° angle of view, depending on how strictly one defines objects as being “seen.” Similarly, the dual eye overlap region is around 130° — or nearly as wide as a fisheye lens. However, for evolutionary reasons our extreme peripheral vision is only useful for sensing motion and large-scale objects (such as a lion pouncing from your side). Furthermore, such a wide angle would appear highly distorted and unnatural if it were captured by a camera.
Diagram: fields of view of the left eye, the dual-eye overlap region, and the right eye.
Our central angle of view — around 40-60° — is what most impacts our perception. Subjectively, this would correspond with the angle over which you could recall objects without moving your eyes. Incidentally, this is close to a 50 mm “normal” focal length lens on a full frame camera (43 mm to be precise), or a 27 mm focal length on a camera with a 1.6X crop factor. Although this doesn’t reproduce the full angle of view at which we see, it does correspond well with what we perceive as having the best trade-off between different types of distortion:
Too wide an angle of view and the relative sizes of objects are exaggerated, whereas too narrow an angle of view means that objects are all nearly the same relative size and you lose the sense of depth. Extremely wide angles also tend to make objects near the edges of the frame appear stretched.
By comparison, even though our eyes capture a distorted wide angle image, we reconstruct this to form a 3D mental image that is seemingly distortion-free.
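The focal-length equivalence above can be sanity-checked with the standard diagonal angle-of-view formula for a rectilinear lens, AOV = 2·atan(d / 2f). A minimal Python sketch, assuming the usual 36×24 mm full-frame sensor and a 22.5×15 mm sensor for the 1.6X crop camera:

```python
import math

def diagonal_aov_deg(focal_mm, sensor_w_mm, sensor_h_mm):
    """Diagonal angle of view for a rectilinear lens: 2 * atan(d / 2f)."""
    d = math.hypot(sensor_w_mm, sensor_h_mm)   # sensor diagonal
    return math.degrees(2 * math.atan(d / (2 * focal_mm)))

# The 43 mm "normal" lens on a 36 x 24 mm full-frame sensor:
full_frame = diagonal_aov_deg(43, 36, 24)

# The 27 mm equivalent on a 1.6X crop sensor (assumed 22.5 x 15 mm):
crop = diagonal_aov_deg(27, 22.5, 15)

print(round(full_frame, 1), round(crop, 1))   # both come out near 53 degrees
```

Both land around 53°, squarely inside the 40-60° central angle of view discussed above.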
2. RESOLUTION & DETAIL
Most current digital cameras have 5-20 megapixels, which is often cited as falling far short of our own visual system. This is based on the fact that at 20/20 vision, the human eye is able to resolve the equivalent of a 52 megapixel camera (assuming a 60° angle of view).
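The 52-megapixel figure follows from simple arithmetic; a sketch assuming 20/20 vision resolves one arcminute and that we sample at two pixels per arcminute (the Nyquist rate) over a 60° × 60° field:

```python
# 60 degrees = 3,600 arcminutes; sample at 2 pixels per arcminute:
arcmin = 60 * 60
pixels_per_side = arcmin * 2              # 7,200 pixels on a side
megapixels = pixels_per_side ** 2 / 1e6
print(megapixels)                         # 51.84, i.e. the ~52 MP figure
```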
However, such calculations are misleading. Only our central vision is 20/20, so we never actually resolve that much detail in a single glance. Away from the center, our visual ability decreases dramatically, such that by just 20° off-center our eyes resolve only one-tenth as much detail. At the periphery, we only detect large-scale contrast and minimal color:
Qualitative representation of visual detail using a single glance of the eyes.
Taking the above into account, a single glance by our eyes is therefore only capable of perceiving detail comparable to a 5-15 megapixel camera (depending on one’s eyesight). However, our mind doesn’t actually remember images pixel by pixel; it instead records memorable textures, color and contrast on an image-by-image basis.
In order to assemble a detailed mental image, our eyes therefore focus on several regions of interest in rapid succession. This effectively paints our perception:
The end result is a mental image whose detail has effectively been prioritized based on interest. This has an important but often overlooked implication for photographers: even if a photograph approaches the technical limits of camera detail, such detail ultimately won’t count for much if the imagery itself isn’t memorable.
Other important differences with how our eyes resolve detail include:
Asymmetry. Each eye is more capable of perceiving detail below our line of sight than above, and their peripheral vision is also much more sensitive in directions away from the nose than towards it. Cameras record images almost perfectly symmetrically.
Low-Light Viewing. In extremely low light, such as under moonlight or starlight, our eyes actually begin to see in monochrome. Under such situations, our central vision also begins to depict less detail than just off-center. Many astrophotographers are aware of this, and use it to their advantage by staring just to the side of a dim star if they want to be able to see it with their unassisted eyes.
Subtle Gradations. Too much attention is often given to the finest detail resolvable, but subtle tonal gradations are also important — and happen to be where our eyes and cameras differ the most. With a camera, enlarged detail is always easier to resolve — but counter-intuitively, enlarged detail might actually become less visible to our eyes.
3. SENSITIVITY & DYNAMIC RANGE
Dynamic range is one area where the eye is often seen as having a huge advantage. If we were to consider situations where our pupil opens and closes for different brightness regions, then yes, our eyes far surpass the capabilities of a single camera image (and can have a range exceeding 24 f-stops). However, in such situations our eye is dynamically adjusting like a video camera, so this arguably isn’t a fair comparison.
If we were to instead consider our eye’s instantaneous dynamic range (where our pupil opening is unchanged), then cameras fare much better. This would be similar to looking at one region within a scene, letting our eyes adjust, and not looking anywhere else. In that case, most estimate that our eyes can see anywhere from 10-14 f-stops of dynamic range, which definitely surpasses most compact cameras (5-7 stops), but is surprisingly similar to that of digital SLR cameras (8-11 stops).
On the other hand, our eye’s dynamic range also depends on brightness and subject contrast, so the above only applies to typical daylight conditions. With low-light star viewing our eyes can approach an even higher instantaneous dynamic range, for example.
*Quantifying Dynamic Range. The most commonly used unit for measuring dynamic range in photography is the f-stop, so we’ll stick with that here. This describes the ratio between the lightest and darkest recordable regions of a scene, in powers of two. A scene with a dynamic range of 3 f-stops therefore has a white that is 8X as bright as its black (since 2³ = 2×2×2 = 8).
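The footnote’s powers-of-two relationship is easy to tabulate; a minimal sketch:

```python
def stops_to_ratio(stops):
    """A scene spanning n f-stops has a white 2**n times as bright as its black."""
    return 2 ** stops

print(stops_to_ratio(3))    # 8, the footnote's example
print(stops_to_ratio(7))    # 128:1, roughly a compact camera's range
print(stops_to_ratio(14))   # 16384:1, the upper estimate of the eye's instantaneous range
```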
Sensitivity to light: Film in a camera is uniformly sensitive to light; the human retina is not. Therefore, with respect to image quality and capturing power, our eyes have greater sensitivity in dark locations than a typical camera.
There are lighting situations that current digital cameras cannot capture easily: the photos come out blurry, or buried in a barrage of digital noise. As an example, when observing a fluorescence image of cells under a microscope, the image you can see with your eyes would be nigh-on impossible to capture with an ordinary camera. This is mainly because the amount of light entering the camera (and your eyes) is so low.
What is ISO and why is it important?
ISO is a number signifying the light sensitivity of an imaging sensor, expressed in values like 100, 200, 400 and 800. This number is also known as an “ISO number” or, more commonly, the “film speed”. Historically, the lower the ISO number, the lower the sensitivity of the film and the finer the grain in the resulting pictures. This has translated pretty well into digital photography, too: higher ISO gives you higher sensitivity, but at the cost of more digital noise.
ISO indicates how sensitive a film is to light, and the same idea carries over to sensors: the higher the ISO setting, the more sensitive the camera sensor is to light. Accordingly, a picture taken at ISO 400 needs only 1/4 of the light required at ISO 100.
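That inverse relationship between ISO and required light is a one-liner; a sketch (the helper name is ours, not a standard API):

```python
def relative_light_needed(iso, base_iso=100):
    """Light required relative to the base ISO: doubling the ISO halves the light needed."""
    return base_iso / iso

print(relative_light_needed(400))   # 0.25, i.e. a quarter of the light, as above
print(relative_light_needed(800))   # 0.125
```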
Trying to track down the ISO of the human eye
The real issue with the human eye is that, unlike film and camera sensors, our eyes do not have any definite ISO levels. However, our eyes do have a great ability to naturally adjust to ambient light levels even under the most severe lighting conditions.
However, the human eye has a mighty trick up its sleeve: it can modify its own light sensitivity. After about 15 seconds in lower light, our bodies increase the level of rhodopsin in our retinas. Over the next half hour in low light, our eyes get more and more sensitive. In fact, studies have shown that our eyes are around 600 times more sensitive at night than during the day.
It should also be noted that the human eye is like the greatest, quickest automatic camera in existence. Every time we change where we’re looking, our eye (and retina) changes everything else to compensate: focus, iris and dynamic range are all constantly adjusting to ensure that our eyesight is as good as it can be.
In addition to straight-up light sensitivity (which we’ll get back to in just a minute), the dynamic range of the human eye is astonishing: a human can see objects in starlight or in the brightest of sunlight. The difference between the two extremes is enormous: in sunlight, objects receive 1,000,000,000 times more light than on a moonless night, and yet we are able to see under both circumstances.
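That billion-to-one ratio can be restated in the f-stop units used earlier; a quick sketch:

```python
import math

# A 1,000,000,000:1 brightness range expressed in f-stops (powers of two):
stops = math.log2(1_000_000_000)
print(round(stops, 1))   # 29.9, consistent with the adaptive range "exceeding 24 f-stops" above
```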
The spanner in the works: Shutter speed
Where our comparison gets complicated is when we mix in shutter speed. To do a like-for-like comparison between the human eye and a camera, we can quite easily compare apertures and ISO (which is the most interesting exercise, in my opinion). But shutter speed makes it complicated, because a camera’s shutter can stay open for as long as we need it to. In fact, there are examples of photos taken with a six-month exposure, something the human eye obviously cannot match.
Pinning down the shutter speed of the human eye is surprisingly complicated, but let’s look to animation for a start: if you have ever seen any simple animation, you will have noticed that if you don’t get enough frames per second, things look ‘stuttery’. If you were to watch a football game at 1 fps, for example, you would essentially be seeing one photo per second (each with at most a one-second shutter speed). Obviously that’s not going to do any good, so the human eye must have a ‘shutter speed’ faster than that.
For low light photography, however, we don’t need to know the minimum shutter speed of the human eye, but the maximum. We can sit perfectly still and stare at a forest in the pitch dark for half an hour, but we might not be able to ‘see’ anything, even though we have, in theory, had a half-hour exposure. A camera, by contrast, might be able to resolve something in that half hour (though it might not). When it comes to our own eyes, it becomes less meaningful to speak of a ‘shutter speed’ as such: our eyes see with an exponential decay, and our vision is a continuous process. In other words, our eyes take multiple ‘exposures’, and our brain combines them into a more meaningful image, much like you might do when taking a multi-exposure HDR photograph with your camera.
The human eye is extremely good at resolving images in bright light, and it becomes meaningless to speak of ‘noise’: not because our eyes aren’t misfiring every now and again, but because our brain simply filters out any problems our eyes encounter. Just think about how your brain constantly filters out the two blind spots you have (one in each eye), even when you close one eye and look with the other. If you have never experienced your blind spot, give it a shot; it’s rather astonishing.
So, for the sake of argument, let’s say that the minimum ISO of our eyes, on a bright sunny day, is ISO 25. Why 25? Because that’s the lowest-ISO film currently in common use, with the least grain and the highest quality around. If the lowest ISO of our eyes is 25, and our eyes are 600 times more sensitive in the dark, the maximum ISO of the human eye would land somewhere around ISO 15,000. If we instead choose ISO 100 as the base ISO for the human eye (which is equally fair, considering that we’re comparing eyes to digital cameras, and most digital SLRs these days start at ISO 100), our maximum ISO is around 60,000.
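The back-of-the-envelope arithmetic above is easy to reproduce (both base ISO values and the 600× gain are the rough assumptions stated in the text):

```python
# Rough estimate: daytime base ISO times the ~600x night-time sensitivity gain.
sensitivity_gain = 600
max_iso = {base: base * sensitivity_gain for base in (25, 100)}
print(max_iso)   # {25: 15000, 100: 60000}
```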
When we consider that the highest-ISO cameras (like the Nikon D3S) can take photos at up to ISO 102,400, it becomes clear that our built-in technology is starting to lag behind what the camera manufacturers are cooking up!
It is not really possible to pit human vision against cameras, since it would be an unfair battle, but you can quantify the differences and similarities, give or take.
If you understand yourself better, and if you understand your tool (the camera) better, it will improve the final product and make your workflow more efficient, and thus you’ll need less time to achieve better results.