Until very recently, photography had not undergone much change for decades. Most improvements to film emulsions, lens optics, and mechanics were quantitative rather than qualitative. Even the first switch to digital, which amounted to slapping a digital sensor in place of the film, was not revolutionary.
The art of photography is that of translating the image of the real world into a still picture on some medium (paper, screen, slide). It was never about fidelity, since a precise replication of what the human eye sees is physically impossible. The "art" part is in how the picture is transformed. The arsenal of effects in traditional photography was limited: depth of field, focal distance, motion blur, color curves, vignetting, and various optical distortions like chromatic aberrations and the effects of optical filters, to name a few. Post-processing adds a few more, but let us put those aside for now.
As a photographer, I've built a mental model of the camera system and learned to estimate how changing its parameters will affect the image I am taking. Fundamentally, there are very few of them: focus, sensor (ISO) sensitivity, focal length (zoom), aperture, and exposure. For more specialized cases you may also want to know the type of shutter (focal-plane or leaf) and the properties of your optical filters. Most cameras, from a $30 point-and-shoot to a $3,000 SLR, just help you manage these very same parameters.
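To make the arithmetic behind these trade-offs concrete, here is a minimal Python sketch using the standard exposure-value formula EV = log2(N²/t), adjusted for ISO. The function name and the example settings are purely illustrative, not taken from any camera API.

```python
import math

def exposure_value(f_number: float, shutter_seconds: float, iso: float = 100.0) -> float:
    """ISO-100-equivalent exposure value: EV = log2(N^2 / t) - log2(ISO / 100).
    Settings that yield the same value expose the same scene equally brightly."""
    ev = math.log2(f_number ** 2 / shutter_seconds)
    return ev - math.log2(iso / 100.0)

# Trading one parameter against another keeps the exposure (roughly) constant:
print(exposure_value(8.0, 1 / 125))            # ~13.0  (f/8, 1/125 s, ISO 100)
print(exposure_value(5.6, 1 / 250))            # ~12.9  (one stop wider, one stop faster)
print(exposure_value(8.0, 1 / 250, iso=200))   # ~13.0  (faster shutter, compensated by higher ISO)
```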
The early generations of mobile phone cameras worked exactly like that. On the top-of-the-line Samsung phone at the time of this writing, the Galaxy S9, you can switch to "Pro" mode and tweak focus, exposure, aperture, and sensitivity, just as some long-dead photographer did on his TLR camera 130 years ago!
But unlike cameras, modern phones have formidable computational resources. They can process data from the camera sensor in real time using their fast CPUs and GPUs, and they can do it while taking a picture. Suddenly, techniques like using imperceptible hand motion to capture several slightly offset images from which a higher-resolution image can be reconstructed [1] become possible. Another burst technique, HDR, used to achieve better dynamic range, has become so standard that it is enabled by default on the Galaxy S9 and the Google Pixel 3. The camera sensors are also evolving: companies like Lytro developed sensors that capture information about the direction light rays are traveling in space. Apple pioneered phones with dual cameras, which opens up a world of possibilities. Now you can simultaneously take two pictures at different focal lengths, with different parameters, and combine the information from them into a composite image. This can also be combined with burst techniques. Finally, since the two cameras are physically apart and provide two distinct vantage points, one can try to reconstruct some of the 3D features of the scene.
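As a toy illustration of the burst idea, here is a NumPy sketch that aligns a handful of noisy, slightly shifted frames by brute-force integer translation and averages them. It only demonstrates alignment and noise reduction, not the sub-pixel super-resolution reconstruction or the tile-based, robust merging that real pipelines like HDR+ use; all names and parameters are illustrative.

```python
import numpy as np

def merge_burst(frames: list[np.ndarray], max_shift: int = 4) -> np.ndarray:
    """Align every frame to the first by brute-force integer-pixel translation,
    then average the aligned stack. Averaging N registered frames reduces
    sensor noise by roughly a factor of sqrt(N)."""
    reference = frames[0].astype(np.float64)
    accumulator = reference.copy()
    for frame in frames[1:]:
        frame = frame.astype(np.float64)
        best_shift, best_error = (0, 0), np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                candidate = np.roll(frame, (dy, dx), axis=(0, 1))
                error = np.mean((candidate - reference) ** 2)
                if error < best_error:
                    best_error, best_shift = error, (dy, dx)
        accumulator += np.roll(frame, best_shift, axis=(0, 1))
    return accumulator / len(frames)

# Simulated burst: the same scene, randomly shifted by "hand shake" and corrupted by noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(64, 64))
burst = [scene + rng.normal(0.0, 0.05, size=scene.shape)]           # reference frame
burst += [np.roll(scene, (int(rng.integers(-3, 4)), int(rng.integers(-3, 4))), axis=(0, 1))
          + rng.normal(0.0, 0.05, size=scene.shape) for _ in range(7)]
print(np.mean((burst[0] - scene) ** 2), np.mean((merge_burst(burst) - scene) ** 2))
```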
Comparing digital cameras has become more difficult. Before, you would look at sensor resolution (megapixels), dynamic range, color response curves, focal length, and the range of apertures and exposures as the starting point for comparing two cameras. This is no longer enough. Phones now have multiple cameras, take multiple sensor readings for each picture, and are controlled by complex, proprietary, AI-driven algorithms that are sometimes smart enough to detect the types of objects in your viewfinder and choose the best way to render each of them in a shot.
My Google Pixel 3 camera does not have a "Pro" mode, and this is not because they chose to "dumb it down" for the consumer. The reason is that the set of controls we are used to is no longer applicable. The world has changed, and many of the skills I've learned over the years as a photographer no longer apply. Finally, the real disruption to the camera industry has arrived.