Physics & Photography Entwined


This article is the gateway to the ‘Physics of Photography’ series, a collection of pieces exploring the mechanics and principles underpinning this popular art form. We’ll journey from the fundamental nature of light, through the beginnings of photography with the camera obscura and pinhole cameras, to the intricate design of modern camera lenses and the principles of exposure. My goal is to deepen your appreciation for the technical side of photography and expand your understanding of how it translates the vivid array of life into stunning, memorable images. Whether you’re a professional photographer, an enthusiastic hobbyist, or a curious observer, this series offers a blend of science, technology, and creativity to fuel your passion for photography and physics!

Let There Be Light

Light, the central principle of photography, is more than a physical phenomenon – it’s deeply woven into our cultural, religious, and philosophical fabrics. From Buddhism’s interweaving of luminosity and enlightenment to Hinduism’s Diwali, the Festival of Lights, light symbolizes wisdom, victory, and self-realization across different societies. Even in the annals of religious literature, such as the Old Testament, light signifies the genesis of life and understanding. While not drawing direct links between physics and religion, the shared significance of light underscores its vital role in our collective consciousness. In photography and beyond, light shapes our experiences and reflects our deep-seated values and beliefs.

Seeing, a process intricately tied to light, is often taken for granted. This complex phenomenon involves rod and cone-shaped receptors in our eyes functioning similarly to a digital camera sensor. They absorb light, instigate a cascade of chemical reactions, and transmit electrical signals to the brain. The intricacy and speed of this process are truly astonishing. Simply opening your eyes triggers millions of chemical reactions each second from the nearly 100 million rods and cones in the human eye, sending nerve impulses that allow your brain to construct an image of the surrounding world. As you take in your surroundings, consider the remarkable nature of this process.

Photography, in many respects, is an extension of seeing. It’s a process of capturing, recording, and preserving the three-dimensional visual elements our eyes perceive but in a two-dimensional form. While it isn’t a one-to-one imitation of human sight, it does map the 3D world onto a 2D space. This mapping process often skews perspective in a photograph, which lends itself to the artistry of photography. Photographers can leverage this to convey cultural, spiritual, and personal perspectives, experiences, and emotions in a static image. Photography can be used as a tool to communicate stories, share experiences, and highlight aspects of our world that might otherwise go unnoticed, mimicking the often overlooked complexity of our natural vision.

The processes occurring within a digital camera sensor or on the surface of a film negative, while simpler than the workings of the eye, are nevertheless complex. They convert light into information. Whether this information is processed by a brain, a computer, or a darkroom technician, its origin is the same: light. This fundamental energy source underpins the creation of every image, embodying the intricate intersection of physics and photography.

The discussion of light inevitably leads us to its enigmatic nature. At times, it behaves like a particle, a notion reflected in the concept of photons – particles of light. This idea, which many of us can grasp, encapsulates the journey of a photon, whether emanating from a distant galaxy or the lamp on your desk, as it travels to reach your eye or camera sensor, where it is then visualized or recorded.



However, the particle-like behavior of light doesn’t paint the full picture. As we observe in everyday life, light traverses various materials – water, glass, air, and more. You’ve likely witnessed the phenomenon of refraction, where objects submerged in water appear distorted or skewed at the water’s surface. This bending of light is due to its wave-like behavior, a trait that seemingly stands at odds with its particle-like identity.

The peculiar dual nature of light, straddling the realms of both particles and waves, is a concept we will explore in detail in the forthcoming chapters of this series. This paradoxical behavior has piqued the curiosity of countless scientists and was instrumental in laying the foundations of quantum mechanics.

Use the Force

Light, as we will explore in-depth, is a form of electromagnetic wave. While we can draw certain parallels between electromagnetic waves and phenomena such as ripples in a pond or sound waves traveling through the air, these analogies are limited. For example, sound and water ripples necessitate a medium to propagate – air or water. Electromagnetic waves, on the other hand, are not bound by this requirement. They possess the unique ability to traverse the vacuum of space.

Moreover, the speed of sound and water waves is limited and primarily determined by the medium they travel through. For instance, sound moves over 17 times faster through certain metals than through air. So, what can we infer about the speed of light, especially when it moves through a vacuum?

Indeed, the speed of light does have a limit, but it’s an extraordinary one—it represents the maximum speed at which anything can move or information can be transmitted. This makes the speed of light a fundamental ‘speed limit’ in the universe. However, light doesn’t always travel at this maximum speed; it slows down or speeds up when it passes through various media. This variation in speed through different materials gives rise to refraction at the interface of those materials. This refraction, or ‘bending’ of light, when transitioning from air to glass or the eye’s protein-rich lens, for example, enables lenses in your camera or eye to focus light. This focusing process is foundational to both vision and photography.




The term ‘fundamental’ often surfaces when we discuss light, as I’m about to illustrate once more in the context of forces. It may surprise some to learn that physics identifies only four fundamental forces in nature. These forces, listed in order of decreasing strength, are:

  1. The Strong Nuclear Force
  2. The Electromagnetic Force
  3. The Weak Nuclear Force
  4. The Gravitational Force

The Electromagnetic Force, second in strength among the four fundamental forces, is intricately connected to the generation of light and electromagnetic waves. Charged particles, like electrons and protons, create electric fields. These fields, which represent distortions in the space around the charges, exert forces on any other charges within their range, leading to their acceleration. This is where things take an interesting turn.

Specifically, the acceleration of these charged particles gives rise to magnetic fields. In a reciprocal interaction, a changing magnetic field induces an electric field. It turns out that electric and magnetic fields are two facets of the same entity: the electromagnetic field. This interplay results in the formation of electromagnetic waves, a category that includes light itself.

Light and Electromagnetic Waves

In the previous section, we established that electromagnetic waves are composed of two interdependent fields—an electric field and a magnetic field. These fields oscillate synchronously, yet they are oriented perpendicularly to each other. This perpendicular oscillation happens in a plane that is also perpendicular to the direction of the wave’s propagation, classifying these waves as transverse. In essence, as an electromagnetic wave travels, the electric and magnetic fields ‘wave’ in different directions—up and down and side to side, respectively—while the wave itself continues to advance.

Visual representation of an electromagnetic field moving along an axis.

The diagram above provides a valuable visualization of how the fields oscillate in space. However, it’s essential to understand that this is a simplified, 2-dimensional representation. In reality, the wave is not confined to a single axis but propagates omnidirectionally, radiating outward from its source in all directions.

Waves of this nature are characterized by their wavelength (the distance from one peak to the next) and frequency (the number of waves passing a fixed point in space per unit of time). When discussing wavelength, we’re considering spatial distance, whereas frequency pertains to time. If we were to measure wavelength in meters [m] and frequency as the reciprocal of time [1/s]—since it’s waves per second—you might notice that the units derived from their multiplication (wavelength × frequency) yield a velocity, specifically, meters per second [m/s]. Without going into too much detail here, if you surmised that this resulting speed corresponds to the speed of light, you would be absolutely correct.

The speed of light in a vacuum, often denoted by the letter c, represents the fundamental speed limit of the universe. Wavelength and frequency, denoted by the Greek letters \lambda (lambda) and \nu (nu), respectively, are two defining characteristics of light. We can express their relationship with the speed of light using the following equation:

    \[c=\nu \cdot \lambda \]

Rearranging this equation algebraically, we can present the relationship between the wavelength and frequency as:

    \[ \lambda=\frac{c}{\nu}\]

    \[ \nu=\frac{c}{\lambda}\]

Since the speed of light in a vacuum, c, is a universal constant, the frequency and wavelength of light are intrinsically linked. In fact, they together define the identity of an electromagnetic wave. It is this identity that gives rise to the various colors of light that we perceive!
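The relationship above is easy to put to work numerically. Here is a minimal sketch (the function names are my own, and the speed of light is the standard defined value, not a figure from this article) converting between wavelength and frequency:

```python
# Relate wavelength and frequency of light via c = nu * lambda.
c = 299_792_458  # speed of light in a vacuum, in m/s (defined value)

def frequency_from_wavelength(wavelength_m: float) -> float:
    """Return the frequency (Hz) of light with the given wavelength (m)."""
    return c / wavelength_m

def wavelength_from_frequency(frequency_hz: float) -> float:
    """Return the wavelength (m) of light with the given frequency (Hz)."""
    return c / frequency_hz

# Green light, near the middle of the visible spectrum: ~550 nm.
nu = frequency_from_wavelength(550e-9)
print(f"{nu:.3e} Hz")  # roughly 5.45e14 Hz
```

Note how a single number pins down the other: specify the wavelength and the frequency follows, and vice versa, which is exactly why the pair together defines a wave’s identity.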

The concept I’ve just described underpins our understanding of the electromagnetic spectrum, wherein visible light forms a tiny part. This spectrum encompasses not only the familiar colors of the rainbow but also other forms of electromagnetic waves. These include radio waves, microwaves, X-rays, gamma rays, and infrared radiation, which may not be visible but are nonetheless familiar. Despite their differences, they all share the same fundamental nature as electromagnetic waves, sometimes called electromagnetic radiation.

Diagram depicting the broad electromagnetic spectrum in which the visible spectrum is only a tiny part

To emulate human vision, photographic sensors and film are optimized to capture light within the visual spectrum we perceive. However, sensors are capable of recording outside of this narrow range. Some cameras, equipped with infrared sensors, produce images influenced by the heat emitted by objects. Other devices capture X-rays, which easily pass through flesh, showing our internal bone structure. Some sensors detect highly harmful gamma or cosmic radiation, which can damage our DNA and lead to various forms of cancer. Still others capture the longer wavelengths of radio waves to transmit sound information through the atmosphere.

Furthermore, it’s important to note that not all organisms perceive the same part of the spectrum as humans. Bees, for example, can sense ultraviolet wavelengths, essential for their roles in pollinating flowers and navigating during daylight. This fact may lead us to realize that our vision only allows us to perceive a minute fraction of the world around us.

The Beginnings of Photography

Creating a photograph from the vast array of electromagnetic radiation that constantly surrounds us is no simple task. There must be a method to ‘transcribe’ that visual information onto a compact plane where it can be recorded, whether through a photographic plate, silver halide film, or a digital camera sensor.

The most basic of these methods is the concept of a camera obscura, a precursor to the pinhole camera. A camera obscura provides a straightforward way to produce a two-dimensional image from a real-world scene. And it’s as simple as making a hole in a wall.

A simple representation of a camera obscura [public domain image]

In a camera obscura, a hole is used to create an inverted image of an object or scene. Light emitted or reflected from an object radiates in all directions. Without a hole to serve as a focal point, this scattered light fills the room or box but doesn’t form a distinct image on the opposite wall because the rays from different points on the object overlap and mix, blurring the potential image. The hole in a camera obscura, however, restricts the light entering the room or box to rays from a single direction for each point on the object. Each of these rays follows a straight-line path to the opposite wall in the room, projecting an image on that wall that is inverted and flipped left-to-right. The small hole effectively filters out the ‘noise’ of the overlapping rays, allowing a coherent image to form.

Historically, camera obscuras were eventually built as smaller boxes. Often, they incorporated a mirror to redirect the incoming image onto a flat surface, where it could be traced for scientific studies or artistic endeavors. With the advent of photosensitive materials, these portable camera obscuras evolved into what we now know as pinhole cameras, placing a photosensitive plate or paper where the image formed.

Geometrical Optics and Lenses

The engineering behind modern photography extends far beyond the simplicity of a box with a hole. The minuscule amount of light that can pass through a small hole in a box is often insufficient to adequately expose film or a sensor without prolonged exposure times — up to 20 seconds even in bright sunlight and considerably longer in less luminous conditions. This limitation makes capturing action shots or taking handheld photos impossible. Moreover, the images produced by a pinhole camera lack sharpness; the larger the hole, the more blurred the image becomes. Therefore, a smaller hole is necessary to achieve sharper images, reducing light intake and lengthening the requisite exposure time on a stable platform.
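The trade-off between hole size and sharpness even has a well-known rule of thumb, often attributed to Lord Rayleigh (it is not given in this article, and the constant varies slightly between sources): the sharpest pinhole diameter is roughly 1.9 times the square root of the focal distance times the wavelength. A quick sketch:

```python
import math

def optimal_pinhole_diameter(focal_length_m: float,
                             wavelength_m: float = 550e-9) -> float:
    """Rule-of-thumb pinhole diameter (m) for the sharpest image.

    Uses d ~= 1.9 * sqrt(f * lambda), a formula often attributed to
    Lord Rayleigh; the leading constant varies slightly between sources.
    Defaults to green light (~550 nm), near the middle of the visible band.
    """
    return 1.9 * math.sqrt(focal_length_m * wavelength_m)

# A shoebox-sized camera with 100 mm from the hole to the paper:
d = optimal_pinhole_diameter(0.100)
print(f"{d * 1000:.2f} mm")  # about 0.45 mm
```

A hole much larger than this blurs the image geometrically; a hole much smaller blurs it through diffraction, a wave effect we’ll meet later in the series.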



We need a method to capture more light without compromising image sharpness to reduce exposure time, which is the duration of light exposure on a sensor, film, or photographic paper. To accomplish this, we must leverage a physical principle that was alluded to in the discussion about light: refraction.

When light passes through a convex lens, it undergoes refraction. This process, in essence, is the change of direction of light when it moves from one medium to another — in this case, from air to glass and then back to air. Refraction occurs twice: first, when the light enters the lens and then again when it exits on the other side. Because of the specific shape of a convex lens, these changes in direction cause the light rays to converge at a particular point.
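The amount of bending at each surface is governed by Snell’s law, a topic slated for later chapters. As a preview, here is a minimal sketch (the function name is my own; the refractive indices are standard textbook values for air and crown glass):

```python
import math

def refraction_angle(theta_incident_deg: float,
                     n1: float, n2: float) -> float:
    """Apply Snell's law, n1*sin(t1) = n2*sin(t2); angles in degrees."""
    sin_t2 = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    if abs(sin_t2) > 1:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(sin_t2))

# Air (n ~ 1.00) into crown glass (n ~ 1.52): the ray bends toward the normal.
print(refraction_angle(30.0, 1.00, 1.52))  # ~19.2 degrees
# Glass back into air: the ray bends away from the normal again.
print(refraction_angle(19.2, 1.52, 1.00))  # ~30 degrees
```

The two calls mirror the two refractions described above: once on entering the glass and once on exiting. The curved surfaces of a convex lens orient these bends so that the rays converge.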

This convergence of light is illustrated in the ray diagram below, where parallel light rays entering a perfect convex lens converge on a single point, known as the focal point of the lens.

A ray diagram of parallel light rays entering a perfect convex lens. For simplification, the refraction is only shown once but is actually occurring at the entrance of the lens and again at the exit.

Most of us have practically applied the concept of a lens’s focal point, perhaps by using a magnifying glass to start a fire or burn a hole in a piece of paper. You might recall adjusting the lens’s distance from the paper or kindling until you found the focal point, characterized by a tiny, intensely bright dot of light. This scenario serves as a real-world example of parallel rays entering a converging lens, just like in the diagram above. Given the Sun’s immense distance from us, the rays reaching the lens are nearly perfectly parallel for all measurable purposes.

This ray diagram-based approach to understanding lenses, known as geometrical optics, is incredibly valuable, especially in lens design. While it is an approximation, it proves to be remarkably accurate under certain conditions — specifically, when the wavelength of the light is much smaller than the size of the equipment being used. For visible light, which has a wavelength on the scale of a few hundred nanometers, this condition is easily met. The lenses and apertures used in photography are on a scale of centimeters or millimeters, making them thousands or even tens of thousands of times larger than the light’s wavelength.

A lens allows us to gather significantly more light from an object or scene than a pinhole camera. However, the process of forming an image with a lens differs from that of a camera obscura. Light from nearby objects doesn’t always enter the lens in a way that’s both parallel and perpendicular to the lens (parallel to the optical axis), as light from a single point on an object diverges in all directions and enters the lens at multiple points. This principle is depicted in the figure below.

Diagram of how a lens forms an image from an object in the real world.
(Adapted image by Jean Biz Hertzberg – CC BY-SA 4.0)

Observe how only the green line, which enters parallel to the optical axis, passes through the focal point. Any ray that passes through the center of the lens maintains its direction after traversing the lens (as shown by the red line). The blue line originates from the object parallel to the red line, but after passing through the lens, it intersects the red line at the focal plane. (Imagine a plane or line extending from the focal point perpendicular to the optical axis.) The image comes into being at the image plane, the two-dimensional surface where the image is in focus.

The image plane is precisely where the camera sensor or film is positioned. This depiction simplifies the actual process by which a camera lens generates an image from a real-world scene. Naturally, the process is accompanied by numerous caveats and complexities. For instance, aberrations, which are distortions in the image, can arise from the spherical shape of the lens. There are five types of monochromatic aberrations and one chromatic aberration, each causing different forms of distortion or blurring in the final image.
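For an ideal thin lens, the geometry in the figure is summarized by the thin-lens equation, 1/f = 1/d_o + 1/d_i, relating focal length, object distance, and image distance. We’ll derive it properly later in the series; for now, a sketch (function names and the example numbers are my own):

```python
def image_distance(focal_length: float, object_distance: float) -> float:
    """Solve the thin-lens equation 1/f = 1/d_o + 1/d_i for d_i.

    Distances in consistent units; assumes an ideal thin convex lens
    with the object outside the focal length.
    """
    if object_distance == focal_length:
        raise ValueError("object at the focal point: rays exit parallel")
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

# A 50 mm lens focused on a subject 2 m away:
d_i = image_distance(0.050, 2.0)
print(f"{d_i * 1000:.1f} mm behind the lens")  # ~51.3 mm
# Magnification; the negative sign indicates the image is inverted.
m = -d_i / 2.0
print(f"magnification {m:.3f}")
```

Notice that the image forms slightly beyond the focal length for any finite subject distance, which is why focusing a camera lens physically shifts glass elements relative to the sensor.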



Chromatic aberration, in particular, is highly conspicuous to photographers. It arises because light of different wavelengths (or colors) refracts at slightly varying amounts within materials. This variation leads to different colors having different focal points along the optical axis, as illustrated in the following figure.

Display of chromatic aberration in a lens
(Image credit: Bob Melish – CC BY-SA 3.0)

Many of these aberrations are rectified within a camera lens by integrating a series of convex and concave lenses, forming a compound lens. This process contributes significantly to the high cost of lenses for DSLR and mirrorless cameras. The degree of optical engineering and the necessity for precision grinding and high-quality glass is remarkable.

The multiple glass elements in a 50 mm Zeiss Sonnar lens.
(Image credit: Tamasflex – CC BY-SA 3.0)

In this section, we’ve scratched the surface of geometrical optics and lenses, introducing the foundational principles that make modern photography possible. However, this is just the beginning. These topics, along with the nuances of lens aberrations, lens types, and their specific applications in photography, are rich fields of study in their own right. Throughout this series, we’ll further explore these subjects, examining how they influence image quality, the creative decisions photographers make, and the ongoing innovations in lens technology.

The Camera and Exposure

As we’ve discussed, the lens’s role in a camera is to collect and direct light onto an image plane. This plane is where the sensor of a digital camera or the film of a traditional camera is positioned, capturing the light to create an image. But producing a photograph that truly encapsulates the essence of a scene involves more than just focusing light onto an image; it requires precise control over the volume of light that can reach the sensor or film and the duration for which it does. This light regulation is often called ‘exposure,’ a concept central to photography. While most hobbyist photographers may have a basic understanding of exposure, we must look deeper and establish a quantitative comprehension of it, given its significance as a genuine physical principle. So, let’s jump into the critical concept of exposure and the camera settings that govern it, to understand how these elements of photography coalesce to produce the images we appreciate.

When I initially set out to write the detailed exposure section of this series, I was quickly struck by the topic’s depth and breadth. As I drafted nearly 30,000 words, it became apparent that I was just beginning to scratch the surface, necessitating the creation of multiple side articles and prefatory chapters. This realization gave birth to these introductory articles designed to lay a firm foundation before delving into the intricacies of exposure.

This insight is shared to underscore the brevity of this introduction to the concept of exposure and the camera’s tools to manipulate it. This section aims to provide a foundational understanding and introduce essential terminology, equipping you for the forthcoming in-depth chapters of this series. In doing so, it puts you a step ahead as we embark on an exploration of the fascinating intersections of physics and photography.

A substantial portion of the text on the concept of exposure was dedicated to developing a working definition. For the sake of brevity, I’ll distill the first ten pages or so of that discussion down to its essence. In this context, exposure refers to the light intensity (or irradiance) at the sensor or film plane multiplied by the duration the light is allowed to hit it or the sensor is actively recording. For now, we’ll use a simplified working definition for light intensity at the sensor plane, as a detailed explanation is beyond the scope of this introductory article. Essentially, light intensity refers to the power of light per unit area.
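In symbols, this working definition is simply H = E · t: radiant exposure equals irradiance at the sensor plane times the exposure time. A tiny sketch (the numbers are illustrative, not measurements from the article) makes the key consequence concrete:

```python
def radiant_exposure(irradiance_w_per_m2: float, time_s: float) -> float:
    """Radiant exposure H = E * t, in joules per square meter (J/m^2).

    E is the irradiance (power of light per unit area, W/m^2) at the
    sensor or film plane; t is how long the light is allowed to strike it.
    """
    return irradiance_w_per_m2 * time_s

# The same exposure can come from bright light briefly or dim light longer:
print(radiant_exposure(10.0, 0.01))  # 0.1 J/m^2
print(radiant_exposure(0.1, 1.0))    # 0.1 J/m^2 -- an equal exposure
```

This reciprocity between intensity and time is the physical root of the exposure trade-offs we’ll meet shortly.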



Please note that various definitions of exposure exist, and the one used in this context aligns with the ‘Radiant exposure’ definition prevalent in radiometry—the science of measuring light. This definition may not coincide with those typically presented in photography texts (lux seconds). However, the radiometric definitions and quantities more accurately reflect the underlying physics—which is our primary focus as we strive to understand the physics of photography. As we progress, we’ll dig into how these quantities intimately link with the energy of light.

In photography, exposure is often evaluated based on its ultimate effect: how bright or dark the final image appears. This outcome is influenced by a multitude of factors, including but not limited to:

  • The luminosity of the subject or scene, which describes how brightly lit the subject is
  • The distance between the camera system and the subject or scene
  • The volume of light admitted into the optical system, determined by the size of the primary lens and the size of the entrance pupil or aperture
  • The shutter speed, which dictates the duration for which light is permitted to strike the sensor or film
  • The ISO setting, which indicates the sensitivity of the sensor

For photographers, the last three elements on the list – aperture, shutter speed, and ISO – are often referred to as the three pillars of exposure. These are the trio of in-camera tools at your disposal for controlling exposure.

Display of different aperture sizes in a camera lens
Image displaying the aperture sizes, or f-stops, in a 50 mm lens. The lens aperture controls the amount of light let into the camera.
(Image credit: KoeppiK – CC BY-SA 4.0)

The first two factors on the list above, the luminosity of the scene and the distance from the subject, are not always directly controllable by the photographer. While the photographer can sometimes modify the luminosity of the scene with artificial lighting, it is primarily determined by the amount of light emitted or reflected by the subject. Similarly, while the distance between the camera and the subject can often be adjusted, changes in distance alter the scene’s perspective. This distance also affects the amount of light entering the lens system, as the intensity of incident light decreases proportionally to the square of the distance from the subject, a principle known as the inverse square law \left(\frac{1}{r^2}\right), where r is the distance from the subject.
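The inverse square law is easy to verify numerically. A brief sketch (strictly speaking, the law holds for a small, point-like source radiating in all directions; the function name is my own):

```python
def relative_intensity(r: float, r_ref: float = 1.0) -> float:
    """Intensity at distance r relative to the intensity at r_ref,
    per the inverse square law I ~ 1/r^2 (assumes a point-like source)."""
    return (r_ref / r) ** 2

# Doubling the distance from a small light source quarters the intensity:
print(relative_intensity(2.0))  # 0.25
# Tripling it cuts the intensity to one ninth:
print(relative_intensity(3.0))  # ~0.111
```

This falloff is why moving a flash even slightly farther from a subject demands a substantial compensating change in exposure settings.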

Throughout this series, we’ll examine these three elements in detail. We’ll explore the physics underlying their roles in determining the exposure at the sensor plane. Our aim is to build a foundational definition of exposure, grounded in physics, as the total energy transferred from light to the sensor over the entire time that light strikes it (or that it is recording).

The exposure triangle illustrates the interplay between the three fundamental elements of a camera—aperture, shutter speed, and ISO—that work together to control exposure. Each side of the triangle represents one of these elements, highlighting how a change in one necessitates a compensatory adjustment in the others to maintain the same exposure. (Image credit: WClarke and Samsara – CC BY-SA 4.0)

Each controllable element – aperture, shutter speed, and ISO – significantly impacts the compositional quality and aesthetic of the final image. For instance, the aperture size largely determines the depth of field, which is the range within the photo that appears in sharp focus. Using a slow shutter speed can result in motion blur, which can be desirable for conveying movement but may also compromise image sharpness. A high ISO sensitivity can introduce random noise, typically undesirable due to its impact on image clarity. However, in traditional film photography, a higher ISO (or ASA) can produce a grainy texture that some photographers might seek for its artistic effect.
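The compensatory trade-off the exposure triangle describes can be sketched numerically using the conventional ‘stop’ system, where each stop doubles or halves the light (this system hasn’t been introduced yet in the article; the function and reference point below are my own illustrative choices):

```python
import math

def exposure_stops(f_number: float, shutter_s: float, iso: float) -> float:
    """Total exposure in stops, relative to f/1, 1 s, ISO 100.

    Each stop doubles the light (or the sensor's effective sensitivity):
    halving the shutter time costs one stop, each sqrt(2) increase in
    f-number costs one stop, and doubling the ISO gains one stop.
    """
    return (math.log2(shutter_s)        # longer shutter -> more light
            - 2 * math.log2(f_number)   # larger f-number -> less light
            + math.log2(iso / 100.0))   # higher ISO -> brighter image

# Two (nearly) equivalent settings: open the aperture one stop,
# then halve the shutter time to compensate.
a = exposure_stops(f_number=4.0, shutter_s=1/125, iso=100)
b = exposure_stops(f_number=2.8, shutter_s=1/250, iso=100)
print(round(a, 2), round(b, 2))  # the two values nearly coincide
```

The small residual difference comes from f/2.8 being a rounded marking for the exact value 2√2 ≈ 2.83; camera f-stop scales are nominal labels on an underlying √2 progression.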

The physics behind each of these effects is as fascinating as it is complex. From the optics that govern the behavior of light as it passes through the camera lens to the electromagnetic principles that guide the transformation of light into digital signals, the science of photography is a multidisciplinary field that touches upon numerous branches of physics. It offers a unique perspective of the world, allowing us to understand and manipulate light to create images that can be as scientifically intriguing as they are aesthetically pleasing. In this series, we will journey through these captivating realms of physics, unearthing the scientific principles that underpin the art of photography and discovering how they have been ingeniously harnessed to capture the world in a photograph.

In Conclusion – The Road Ahead

As I’ve reiterated throughout this article, this introductory piece only begins to tap into the depth and breadth of the topics we’ll explore in this series. There are several key concepts and phenomena that we haven’t touched upon yet, all of which significantly contribute to the intricate interplay between physics and photography. Here are a few of these topics, presented in no particular order:

  • The dielectric properties of matter
  • The physics of color and color perception
  • Focal length
  • Huygens’ Principle
  • Snell’s Law
  • Image formation
  • Lens power
  • The photoelectric effect
  • Bokeh
  • Camera priority modes
  • Image sensors and pixel size
  • Resolution and diffraction limits
  • Astrophotography and the physics of capturing stars and galaxies
  • Polarized light and polarizing filters
  • The science of film
  • UV and haze

Moreover, I’ll be tracing the historical pathways that have shaped these areas of physics and photography, providing you with a rich and comprehensive perspective.

In the upcoming final article of this introductory series, we’ll reverse our gaze to explore how photography itself is employed in diverse scientific fields.

Stay tuned as we journey together through the enthralling intersections of science and art!

Back To The Physics of Photography Homepage – Next article coming soon!

 
