Photographic Technology

  • Single Photons Actually Matter in Photography

    As anyone with a little background in science will know, light can be measured as discrete quanta called photons. These particles behave as both a wave AND a particle (as an aside, the wave-like behaviour of light is responsible for diffraction, which softens photographs taken at small apertures). For now, I shall focus on the particle side of things.

    Photons of different wavelengths from a light source bounce off various objects in a scene before eventually being captured and counted in a pixel on a camera sensor. The number of red, green, and blue photons that make it onto the corresponding coloured pixels on the sensor determines the brightness of those pixels. There are many, many trillions of these photons bouncing around, and the overwhelming majority never make it anywhere near the sensor of the camera that is trying to capture the scene. What is important to note, though, is that while the number of photons bouncing around is very large, it is also finite.
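
    To make that 'finite number of photons' point concrete, here is a minimal Python sketch (purely illustrative – the photon counts are made up) of how the number of photons a pixel collects can be modelled as a Poisson process, which is why dimly lit pixels look proportionally noisier:

    import numpy as np

    def simulate_pixel_counts(mean_photons, n_pixels=10000, seed=0):
        # Photon arrivals are well modelled as a Poisson process, so a pixel
        # that 'should' collect mean_photons actually collects a count that
        # fluctuates around that mean with standard deviation sqrt(mean_photons).
        # That fluctuation is photon shot noise.
        rng = np.random.default_rng(seed)
        return rng.poisson(mean_photons, size=n_pixels)

    for mean in (10, 100, 10000):
        counts = simulate_pixel_counts(mean)
        # Relative noise falls as 1/sqrt(mean): bright pixels look clean,
        # dim pixels look grainy, even with a perfect sensor.
        print(mean, counts.mean(), counts.std(), counts.std() / counts.mean())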

  • Is The Megapixel Race Over?

    Resolution has largely reached a point where any megapixel increase no longer matters. There have been many discussions about what resolution the human eye can perceive. Generally, for a sharp picture – either on a screen or in print – 300dpi/ppi is required (Apple’s first ‘Retina’ display was 326ppi). For a pin-sharp image, 600dpi/ppi is needed, and some 4K phones have exceeded this.


    Does pixel size matter?

    Yes and no. It’s not the size of the pixels that counts – it’s the area of your vision they take up. However, all these resolutions assume a close viewing distance of less than an arm’s length. As pictures are printed at whatever size is needed, and displayed on screens of all sorts of sizes, the actual resolution required for them to be perceived as ‘sharp’ varies. Some roadside billboards have a print resolution of less than 100dpi, yet they still look sharp because of the distance they are viewed from. If you walk right up close to a large poster, you will often see fairly coarse dots.
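
    To put rough numbers on this, here is a small Python sketch, assuming the common rule of thumb that the eye resolves detail down to about one arcminute (the viewing distances are just examples):

    import math

    def required_ppi(viewing_distance_inches, eye_resolution_arcmin=1.0):
        # Pixels per inch needed so that one pixel subtends no more than the
        # eye's resolution limit at the given viewing distance.
        pixel_inches = viewing_distance_inches * math.tan(math.radians(eye_resolution_arcmin / 60.0))
        return 1.0 / pixel_inches

    print(round(required_ppi(12)))    # phone at ~12 inches: ~286 ppi, hence ~300 ppi displays
    print(round(required_ppi(120)))   # billboard at ~10 feet (usually much further): ~29 ppi

    The further away you stand, the fewer pixels per inch you need – which is exactly the billboard effect described above.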

  • Using Image Stabilisation

    Image stabilisation, in still photography, refers to a lens or camera’s ability to limit the effect that camera movement has while a picture is being taken. Unless the camera is on a tripod, if an exposure takes a long time (anything from 1/100th of a second to longer than a second), your hands will move the camera while it is taking the picture. Hey presto – unwanted camera shake!

    Canon usually refers to the technology as Image Stabilisation, whereas Nikon usually calls it Vibration Reduction (though I prefer the acronym VR to be reserved for Virtual Reality). Canon and Nikon (as well as other camera manufacturers) have lenses that stabilise an image by moving a group of lens elements (this is optical image stabilisation – it can also be done by moving the image sensor, which some smaller cameras and phones do). Digital image stabilisation is mainly used to stabilise video, and is not really suited to still image capture. I shall refer to all shake-reduction technology as IS.
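
    As a rough illustration of why slow hand-held exposures blur, here is a sketch that estimates the blur in pixels from hand shake; the tremor rate, sensor figures and the ‘one stop of IS halves the shake’ rule are all assumptions for the sake of the example, not manufacturer data:

    import math

    def shake_blur_pixels(focal_length_mm, exposure_s, tremor_deg_per_s=0.5,
                          sensor_width_mm=36.0, sensor_width_px=6000,
                          is_stops=0):
        # Very rough estimate of motion blur (in pixels) caused by hand shake.
        # Each stop of stabilisation is treated as halving the effective shake,
        # which is how IS effectiveness is usually quoted.
        effective_rate = tremor_deg_per_s / (2 ** is_stops)
        angle = math.radians(effective_rate * exposure_s)
        blur_mm = focal_length_mm * math.tan(angle)   # displacement on the sensor
        return blur_mm * sensor_width_px / sensor_width_mm

    # 100mm lens at 1/25s: noticeable blur without IS, far less with 3 stops of IS.
    print(shake_blur_pixels(100, 1/25, is_stops=0))
    print(shake_blur_pixels(100, 1/25, is_stops=3))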


  • Lightfield based Computational Photography

    Traditional photography has had a good run. Technological advancements have been made allowing ever greater amounts of image data to be captured (greater numbers of pixels, higher dynamic range, and better low light capture). Now, with these improvements slowing in pace, and with the way we use photographs changing, a flat image with fixed attributes no longer fits how we will consume media in the future. Lightfield photography will change all that.

    Adjustments after the fact

    Lightfield cameras (also called plenoptic cameras) capture all the light that hits a volume instead of a plane. This means that information about the angle of the light is collected, as well as its brightness and colour. More important than the technology behind lightfields is how they can be used. This blog post focuses on the benefits they bring and why they are the future.
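
    Because the angle of each ray is recorded as well as its brightness and colour, the image can be re-assembled in different ways after capture. Here is a minimal ‘shift and add’ refocusing sketch in Python; the grid of sub-aperture views is a stand-in for whatever a real plenoptic camera would produce:

    import numpy as np

    def refocus(subaperture_views, shift_px):
        # subaperture_views: 4D array (U, V, H, W) of greyscale images, one per
        # viewing direction. shift_px sets the depth of the synthetic focal
        # plane: each view is shifted in proportion to its offset from the
        # central view, then all views are averaged. Objects at the matching
        # depth line up and stay sharp; everything else is blurred.
        U, V, H, W = subaperture_views.shape
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy = int(round((u - cu) * shift_px))
                dx = int(round((v - cv) * shift_px))
                out += np.roll(subaperture_views[u, v], (dy, dx), axis=(0, 1))
        return out / (U * V)

    # Tiny synthetic lightfield (5x5 views of a 64x64 scene) just to show usage.
    lf = np.random.rand(5, 5, 64, 64)
    focused_far = refocus(lf, shift_px=0)
    focused_near = refocus(lf, shift_px=2)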

    All the amazing things that can be done are possible because a depth map of the scene has been created, and the photo is now essentially a 3d model that must be viewed from a fixed angle. While the position the photo was shot from is fixed, anything that can be done with a 3d rendered model can also be done with a photograph that has been computationally generated from a lightfield. Here are the three big game changers:

    Focus – Lytro’s main selling point, and while undeniably cool, still a bit of a gimmick. Also, the abilities of a tilt/shift lens can be recreated, and the focal plane doesn’t even need to be flat (as with Lensbaby selective focus lenses). Variable focus is also possible, so the photographer can control how much of the scene before and after the focal plane is in focus, and how quickly it recedes.


    Refocusing a lightfield image. Image source – www.lytro.com

    Aperture – Closely related to focus, but changes the look of a photograph quite dramatically. The ability to edit the same image at both f2.8 and f16 is amazing.

    Lighting – A shot can be lit differently after it has been taken, which is pretty revolutionary. Unfortunately, different surfaces will not behave as they should, so there are limitations. Reflections are one – if you add a light to your scene in post-production, it will flatly illuminate a glass window or coffee table rather than reflecting off it as a real light would. Knowing how the light should bounce within the scene requires the capture of polarisation information. So far, to my knowledge, only one company – Ricoh – has created a camera capable of detecting the polarisation of light, and it only has specialist applications.

    If we want to realistically light a scene after a photo has been taken, then we need to know the reflective properties of the surfaces in that scene (as well as in some cases, the transmissive properties of objects – think about the slight translucency of skin and hair when a light is placed behind a person). This is not something that will be solved overnight, but there are technologies (laser, radar, etc) that can help.
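
    As an illustration of what relighting from depth alone can and cannot do, here is a sketch that relights a scene assuming every surface is perfectly matte (Lambertian); it is exactly the reflections and translucency mentioned above that this simple model leaves out:

    import numpy as np

    def lambertian_relight(depth_map, light_dir, albedo=1.0):
        # Estimate surface normals from the depth gradient, then shade each
        # pixel as albedo * max(N . L, 0) (Lambert's law). This is the kind of
        # relighting a depth map alone allows: it cannot reproduce reflections
        # off glass or light glowing through skin and hair, because those
        # depend on surface properties the camera does not record.
        dz_dy, dz_dx = np.gradient(depth_map)
        normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth_map)))
        normals /= np.linalg.norm(normals, axis=2, keepdims=True)
        light = np.asarray(light_dir, dtype=float)
        light /= np.linalg.norm(light)
        shading = np.clip(normals @ light, 0.0, None)
        return albedo * shading

    # A toy sloped surface, lit from the upper left after the fact.
    depth = np.fromfunction(lambda y, x: 0.01 * x, (64, 64))
    relit = lambertian_relight(depth, light_dir=(-1.0, -1.0, 1.0))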


    Using depth

    There are many reasons to be excited about photos that contain depth information, a couple of major ones being compositing and movie special effects (there is no need for green screen or mask cutting to extract an object from the background).
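
    Here is a minimal sketch of depth-based compositing; the hard threshold and toy data are just for illustration, and a real pipeline would feather the matte edges:

    import numpy as np

    def composite(foreground, fg_depth, background, max_depth):
        # With per-pixel depth there is no need for a green screen or manual
        # masking: the matte is simply 'depth less than the chosen threshold'.
        matte = (fg_depth < max_depth)[..., np.newaxis]   # (H, W, 1) boolean
        return np.where(matte, foreground, background)

    # Toy example: a 4x4 RGB frame with the 'subject' in the nearest pixels,
    # dropped onto a plain black background plate.
    img = np.random.rand(4, 4, 3)
    depth = np.linspace(1.0, 10.0, 16).reshape(4, 4)
    plate = np.zeros((4, 4, 3))
    out = composite(img, depth, plate, max_depth=5.0)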


    The extent to which parallax can be achieved with a single lightfield camera. Image source – www.lytro.com

    My favourite reason is parallax for stereoscopic viewing and Virtual Reality. The stereoscopic baseline can only be as wide as the distance from the far left to the far right of the lens, and a person’s interpupillary distance is usually 55–65mm, so that is how big your lens’s front element needs to be. Also, I said earlier that the ability to focus after the fact wasn’t important, but with lightfield displays, it most certainly is. Current virtual reality displays are fixed focus (at around 2 metres), and there is no question that variable focus will take immersive VR to the next level. Lightfield displays are the emerging technology that will enable this to happen, the most notable company being Magic Leap, which has secured hundreds of millions of dollars of investment to fund its research and development.
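
    As a sketch of the front-element argument, here is how a left/right pair one interpupillary distance apart might be pulled out of a grid of sub-aperture views; the even view spacing and array layout are assumptions for illustration, not how any particular camera works:

    import numpy as np

    def stereo_views(subaperture_views, aperture_width_mm, ipd_mm=63.0):
        # The widest stereo baseline a single lightfield camera can offer is
        # the width of its front element, so the element must be at least one
        # interpupillary distance (roughly 55-65mm) across. Views are assumed
        # to be spaced evenly across the aperture along the horizontal axis.
        n_views = subaperture_views.shape[1]              # (U, V, H, W): V is horizontal
        mm_per_view = aperture_width_mm / (n_views - 1)
        offset_views = int(round(ipd_mm / mm_per_view))   # how many views apart the eyes sit
        if offset_views >= n_views:
            raise ValueError("front element too small for the requested IPD")
        centre = (n_views - 1) // 2
        left = centre - offset_views // 2
        right = left + offset_views
        mid_row = subaperture_views.shape[0] // 2
        return subaperture_views[mid_row, left], subaperture_views[mid_row, right]

    # Toy 3x9 grid of views behind an 80mm-wide front element.
    lf = np.random.rand(3, 9, 64, 64)
    left_eye, right_eye = stereo_views(lf, aperture_width_mm=80.0)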

    Lightfields and increasing computing power

    Processing power has been steadily increasing in line with Moore’s Law, doubling about every 2 years. Many of the recent improvements in digital photography have been down to better processing of the information captured, rather than better capture of the information itself.
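
    For a sense of scale, doubling every 2 years compounds quickly; a tiny worked example:

    def moores_law_factor(years, doubling_period_years=2.0):
        # Processing power multiplies by 2 every doubling period.
        return 2 ** (years / doubling_period_years)

    print(moores_law_factor(10))   # ~32x more processing power in a decade
    print(moores_law_factor(20))   # ~1000x in two decades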


    Computational photography will allow amazing advances in both the taking and editing of photos. Once photography becomes more dependent on software than on hardware, many of the limitations on progress are removed: the only constraints on software (aside from the hardware it draws data from and the processing power required to run it) are the imagination and creativity of the people who design it. Hardware is an enabler for software. The future can just be seen on the horizon, slowly appearing as though shrouded in mist. Many of its more prominent features can already be made out, but as we get closer, we are going to be blown away by the full scope of what will be possible.

    I’m excited.

    Find out more about this awesome tech at the Lightfield Forum.

  • Future DSLR CMOS Improvements

    Digital cameras have been steadily improving for many years now, but recently progress has started to stagnate. One area I am particularly interested in is the low light (high ISO) performance of digital SLRs. There is enough light captured at high ISOs to generate decent images, but the main problem comes in the form of noise. While advances are still being made, the pace of change seems to be slowing. This is evidenced by the difference in capabilities between successive generations of cameras.

    The Canon EOS 5D was a pretty ground-breaking camera when it was launched back in 2005. It was the first ‘affordable’ full frame DSLR, with a 12 megapixel sensor, and was capable of shooting at up to ISO 3200. In reality (as is always the case, even today), only images a stop or two below the maximum ISO were really usable. Three years later a massive upgrade came in the form of the 5D mkII, improving low light shooting by a stop and allowing the same quality of shots with half the light. In 2012 the 5D mkIII added another half stop. Now, in 2016, the law of diminishing returns continues, with the 5D mkIV showing virtually no improvement in low light performance over the mkIII. Below are what I consider to be the main areas of current tech where there is room for improvement, and by how much.
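
    To put those stop figures in context (using the rough estimates above, which are my own rather than measured values), each stop of high-ISO improvement halves the light needed for the same image quality:

    def light_needed_factor(stops_gained):
        # One stop of improvement means the same image quality can be achieved
        # with half as much light, so the light required falls by 2x per stop.
        return 0.5 ** stops_gained

    # 5D -> 5D II: ~1 stop, 5D II -> 5D III: ~0.5 stop, 5D III -> 5D IV: ~0 stops.
    total_stops = 1.0 + 0.5 + 0.0
    print(light_needed_factor(total_stops))   # ~0.35: about a third of the light the original 5D needed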

