Lightfield-based Computational Photography

Traditional photography has had a good run. Technological advances have allowed ever greater amounts of image data to be captured: more pixels, higher dynamic range, and better low-light performance. Now that these improvements are slowing, and given the changing ways we use photographs, capturing a flat image with fixed attributes no longer fits how we will consume media. Lightfield photography will change all that.

Adjustments after the fact

Lightfield cameras (also called plenoptic cameras) capture all the light that hits a volume instead of a plane. This means that information about the angle of the light, as well as its brightness and colour, is collected. More important than the technology behind lightfields is how they can be used. This blog post focuses on the benefits they bring and why they are the future.

All these amazing things are possible because a depth map of the scene has been created: the photo is now essentially a 3D model that must be viewed from a fixed angle. While the position the photo was shot from is fixed, anything that can be done with a 3D-rendered model can also be done with a photograph computationally generated from a lightfield. Here are the three big game changers:

Focus – Lytro’s main selling point, and while undeniably cool, still a bit of a gimmick. The abilities of a tilt/shift lens can also be recreated, and the focal plane doesn’t even need to be flat (as with Lensbaby selective-focus lenses). Variable focus is also possible, so the photographer can control how much of the scene before and after the focal plane is in focus, and how quickly sharpness falls off.


Refocusing a lightfield image.
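Conceptually, refocusing works by shifting each sub-aperture view in proportion to its offset from the lens centre and averaging the results. Below is a minimal NumPy sketch, assuming a toy 4D lightfield indexed [u, v, y, x]; the function name, the layout, and the integer-pixel shifts are illustrative simplifications, not Lytro’s actual pipeline.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocus by shift-and-add.

    lightfield: 4D array of sub-aperture views, indexed [u, v, y, x]
    (an assumed layout for this sketch).
    alpha: relative focal depth; each view is shifted in proportion to
    its (u, v) offset from the central view, then all are averaged.
    """
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            # np.roll gives integer-pixel shifts; a real refocuser
            # would interpolate sub-pixel shifts.
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Toy 3x3-view lightfield, 8x8 pixels per view
lf = np.random.rand(3, 3, 8, 8)
img_near = refocus(lf, alpha=1.0)    # focus on one depth plane
img_far = refocus(lf, alpha=-1.0)    # focus on another
```

With alpha = 0 no view is shifted, so the result is just the average of all views, i.e. the image focused at the plane the views are already aligned to.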

Aperture – Closely related to focus, but it changes the look of a photograph quite dramatically. The ability to edit the same image at both f/2.8 and f/16 is amazing.
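Synthetic aperture change falls out of the same representation: averaging all sub-aperture views imitates a wide aperture, while averaging only the views near the lens centre imitates a stopped-down one. A sketch under the same assumed [u, v, y, x] layout (the f-numbers in the comments are loose analogies, not calibrated values):

```python
import numpy as np

def synthetic_aperture(lightfield, radius):
    """Average only the views within `radius` of the central view.

    A small radius imitates a stopped-down lens (deep depth of field);
    a large radius imitates a wide-open one (shallow depth of field).
    """
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    n = 0
    for u in range(U):
        for v in range(V):
            if (u - cu) ** 2 + (v - cv) ** 2 <= radius ** 2:
                out += lightfield[u, v]
                n += 1
    return out / n

lf = np.random.rand(5, 5, 8, 8)
wide = synthetic_aperture(lf, radius=2.9)    # all 25 views: "f/2.8" look
narrow = synthetic_aperture(lf, radius=0.5)  # centre view only: "f/16" look
```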

Lighting – A shot can be lit differently after it has been taken, which is pretty revolutionary. Unfortunately, different surfaces will not behave as they should, so there are limitations. Reflections are one: you can’t add a light to your scene in post-production and have it correctly illuminate a glass window or coffee table. Knowing how the light should bounce within the scene requires the capture of polarisation information. So far, to my knowledge, only one company – Ricoh – has created a camera capable of detecting the polarisation of light, and it only has specialist applications.

If we want to realistically light a scene after a photo has been taken, then we need to know the reflective properties of the surfaces in that scene (as well as in some cases, the transmissive properties of objects – think about the slight translucency of skin and hair when a light is placed behind a person). This is not something that will be solved overnight, but there are technologies (laser, radar, etc) that can help.
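To make the limitation concrete, here is roughly what software can already do with a depth map alone: estimate surface normals from depth and apply diffuse (Lambertian) shading. Everything listed above as missing (reflections, translucency, and also cast shadows) is exactly what this sketch cannot do. The function names and toy data are illustrative assumptions.

```python
import numpy as np

def normals_from_depth(depth):
    """Estimate per-pixel surface normals from a depth map using
    finite differences (a rough approximation near depth edges)."""
    dzdy, dzdx = np.gradient(depth)
    n = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def relight(albedo, depth, light_dir):
    """Diffuse relighting: brightness proportional to the cosine
    between the surface normal and the light direction. Reflections,
    translucency, and shadows are ignored entirely."""
    n = normals_from_depth(depth)
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    shading = np.clip(n @ l, 0.0, None)
    return albedo * shading

depth = np.tile(np.linspace(0, 1, 16), (16, 1))  # a slanted plane
albedo = np.ones((16, 16))
lit = relight(albedo, depth, light_dir=[0, 0, 1])  # light from the camera
```

On a flat slanted plane the shading comes out uniform, as it should; the interesting (and unsolved) part is everything this model leaves out.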


Using depth

There are many reasons to be excited about photos that contain depth information, a couple of major ones being compositing and movie special effects (there is no need for green screen or mask cutting to extract an object from the background).
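A sketch of that depth-based extraction, assuming a per-pixel depth map (in metres here); a production matte would need soft edges and sub-pixel accuracy, but the principle is just a threshold.

```python
import numpy as np

def composite_by_depth(foreground, depth, background, threshold):
    """Cut out everything nearer than `threshold` and paste it over a
    new background: no green screen or hand-drawn masks needed."""
    mask = depth < threshold  # True where the subject is
    return np.where(mask[..., None], foreground, background)

# Toy 4x4 RGB scene: subject at 1.5 m in the left half, wall at 4 m
fg = np.zeros((4, 4, 3)); fg[..., 0] = 1.0  # red subject layer
bg = np.zeros((4, 4, 3)); bg[..., 2] = 1.0  # blue replacement backdrop
depth = np.full((4, 4), 4.0)
depth[:, :2] = 1.5
out = composite_by_depth(fg, depth, bg, threshold=2.0)
```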


The extent to which parallax can be achieved with a single lightfield camera.

My favourite reason is parallax for stereoscopic viewing and virtual reality. The stereoscopic field can span from the far left to the far right of the lens, and a person’s interpupillary distance is usually 55–65 mm, so that’s how big your lens’s front element needs to be. Also, I called after-the-fact refocusing a bit of a gimmick earlier, but with lightfield displays it most certainly isn’t. Current virtual reality displays are fixed-focus (at around 2 metres), and there is no question that variable focus will take immersive VR to the next level. Lightfield displays are the emerging technology that will enable this to happen, the most notable company being Magic Leap, which has secured hundreds of millions of dollars of investment to fund its research and development.
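The geometry can be sketched directly: pick the two sub-aperture views whose horizontal separation matches the viewer’s interpupillary distance, and you have a stereo pair. This reuses the hypothetical [u, v, y, x] layout from earlier, with the v axis assumed to span the lens’s front element from left to right.

```python
import numpy as np

def stereo_pair(lightfield, lens_width_mm, ipd_mm=63.0):
    """Select the two sub-aperture views separated horizontally by
    roughly the viewer's interpupillary distance (IPD)."""
    U, V, H, W = lightfield.shape
    mm_per_view = lens_width_mm / (V - 1)  # spacing between views
    half = (ipd_mm / mm_per_view) / 2
    cv = (V - 1) / 2
    left, right = int(round(cv - half)), int(round(cv + half))
    if left < 0 or right >= V:
        raise ValueError("front element narrower than the IPD")
    cu = (U - 1) // 2  # stay on the lens's horizontal midline
    return lightfield[cu, left], lightfield[cu, right]

# 9 views across a hypothetical 64 mm front element: 8 mm between views
lf = np.random.rand(9, 9, 8, 8)
left_img, right_img = stereo_pair(lf, lens_width_mm=64.0, ipd_mm=63.0)
```

Note the `ValueError`: if the front element is narrower than the IPD, no pair of views is far enough apart, which is exactly why the lens size matters.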

Lightfields and increasing computing power

Processing power has been increasing steadily in line with Moore’s Law, doubling about every two years. Many of the recent improvements in digital photography have come from better processing of the captured information, rather than from improved capture of the information itself.
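The compounding implied by that doubling is easy to underestimate; a one-line sketch:

```python
def moore_factor(years, doubling_period=2.0):
    """Growth factor implied by doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Doubling every two years compounds quickly: a decade is five
# doublings, i.e. a 32x increase in processing power.
decade_growth = moore_factor(10)  # 32.0
```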


Computational photography will allow amazing advances in both the taking and editing of photos. Once photography becomes more dependent on software than on hardware, many limits on progress fall away: the only constraints on software (aside from the hardware it draws data from and the processing power required to run it) are the imagination and creativity of the people who design it. Hardware is an enabler for software. The future can just be seen on the horizon, slowly appearing as though shrouded in mist. Many of its more prominent features can already be made out, but as we get closer, we are going to be blown away by the full scope of what will be possible.

I’m excited.

Find out more about this awesome tech at the Lightfield Forum.