
Is The Megapixel Race Over?

Resolution has largely reached the point where further megapixel increases no longer matter. There have been many discussions about what resolution the human eye can perceive. Generally, for a sharp picture – either on a screen or in print – 300dpi/ppi is required (Apple’s first ‘retina display’ was 326ppi). For a pin-sharp image, 600dpi/ppi is needed, and some 4K phones have exceeded this.
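As a quick sanity check, pixel density falls straight out of a screen’s resolution and diagonal size. A minimal Python sketch (the iPhone 4 figures are from public specs):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal resolution divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

# Apple's first 'retina display' (iPhone 4): 960x640 on a 3.5in diagonal.
print(round(ppi(960, 640, 3.5)))  # ~330; Apple quotes 326ppi
```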


Does pixel size matter?

Yes and no. It’s not the size of the pixels that counts – it’s the area of your vision they take up. However, all these resolutions assume a close viewing distance of less than an arm’s length. As pictures are printed at whatever size is needed, and displayed on screens of all sizes, the actual resolution required for them to be perceived as ‘sharp’ varies. Some roadside billboards have a print resolution of less than 100dpi, yet they still look sharp due to the distance they are viewed from. If you walk right up close to a large poster, you will often see fairly coarse dots.
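The trade-off between viewing distance and required resolution is easy to put numbers on. A rough sketch, assuming the common one-arcminute rule of thumb for 20/20 visual acuity (the distances are illustrative):

```python
import math

ONE_ARCMINUTE = math.radians(1 / 60)  # rough resolving limit of 20/20 vision

def required_dpi(viewing_distance_in: float) -> float:
    """The print resolution at which each dot subtends one arcminute."""
    return 1 / (viewing_distance_in * math.tan(ONE_ARCMINUTE))

print(round(required_dpi(12)))   # book at ~12in:      ~286dpi
print(round(required_dpi(600)))  # billboard at ~50ft: ~6dpi
```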

So resolution is relative. One thing is constant, though – if you want to look at a picture, the chances are you’ll want to fit the whole thing comfortably in your field of view. More specifically, you’ll want it in the centre – in the portion where you have stereoscopic vision. Now, I am avoiding being nerd-sniped by the temptation to do the full calculations behind the relationship between field of view and visual acuity. This article on Filmic Worlds does it better than I could anyway.

Suffice to say, around 20 megapixels seems to be the most information that most people can resolve in a photo that fits within 90 degrees of their field of view. Is this a coincidence? Maybe it is the reason we haven’t developed cameras with much higher resolution – it is certainly possible, but we have passed the point of diminishing returns.
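For the curious, a back-of-envelope version of that calculation (one pixel per arcminute of 20/20 acuity and a 3:2 photo are my assumptions) does land in the same place:

```python
PIXELS_PER_DEGREE = 60   # one pixel per arcminute of 20/20 acuity
H_FOV_DEGREES = 90       # the photo fills 90 degrees horizontally
ASPECT = 2 / 3           # standard 3:2 still-photo aspect ratio

width_px = H_FOV_DEGREES * PIXELS_PER_DEGREE   # 5400
height_px = round(width_px * ASPECT)           # 3600
print(width_px * height_px / 1e6)              # 19.44 megapixels
```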

Currently, some phone cameras can resolve more detail than high-speed 35mm colour film. Even Canon’s top-end SLR – the 1D X Mark II – has a resolution of ‘only’ 20mp, despite cameras with over double that resolution being available. The reason is that the extra resolution (as found in the 5DS/5DS R) would not noticeably improve the photos the camera outputs. What the 1D X Mark II sacrifices in resolution, it makes up for with better low-light capability, a faster continuous shooting rate, and less pronounced image noise – all desirable features in a professional camera.


Sensor resolution

Almost all camera sensors have Bayer filters, meaning their pixels are laid out in an RGBG grid. As a consequence, a 20mp CMOS sensor gives a total of 20 million pixels’ worth of image data split between three colours. To output a 20mp image from a 20mp sensor, a camera must therefore combine a 5mp red image, a 10mp green image and a 5mp blue image, and guess the values for all the pixels no data was recorded for – two thirds of the colour values in the final image. This guessing step is known as demosaicing. There is more green because the human eye is better at picking out green than red or blue (I’d recommend reading this article about how your eyes suck at blue).
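To make the demosaicing step concrete, here is a minimal sketch of the simplest method – bilinear interpolation – assuming an RGGB pixel layout; real cameras use considerably smarter algorithms:

```python
import numpy as np
from scipy.ndimage import convolve

# Normalised interpolation kernels for bilinear demosaicing: measured
# values pass through unchanged, missing ones become neighbour averages.
K_G = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
K_RB = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """Turn an RGGB Bayer mosaic (H x W floats) into an H x W x 3 RGB image.
    Two thirds of the output values are interpolated guesses."""
    rows, cols = np.mgrid[0:raw.shape[0], 0:raw.shape[1]]
    r = np.where((rows % 2 == 0) & (cols % 2 == 0), raw, 0.0)
    g = np.where((rows % 2) != (cols % 2), raw, 0.0)  # two greens per 2x2
    b = np.where((rows % 2 == 1) & (cols % 2 == 1), raw, 0.0)
    return np.dstack([convolve(r, K_RB), convolve(g, K_G), convolve(b, K_RB)])
```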


Resizing (up-sampling)

There have been many up-sampling algorithms over the years: nearest neighbour, bilinear, bicubic, and countless others. Lanczos resampling used to be one of the best; now AI has enabled smart interpolation, something I will cover more in a post on the future of smart image editing.
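For the classical filters, Pillow exposes them all through one call. A short sketch (photo.jpg is a placeholder filename, Pillow 9.1+ assumed):

```python
from PIL import Image

img = Image.open("photo.jpg")
doubled = (img.width * 2, img.height * 2)

# The classic filters, roughly in ascending order of quality (and cost).
filters = {"nearest": Image.Resampling.NEAREST,
           "bilinear": Image.Resampling.BILINEAR,
           "bicubic": Image.Resampling.BICUBIC,
           "lanczos": Image.Resampling.LANCZOS}
for name, resample in filters.items():
    img.resize(doubled, resample=resample).save(f"upsampled_{name}.jpg")
```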

Scalable images used to be the preserve of vector graphics, but recently deep neural networks such as Google’s RAISR have made up-sampling far more viable. The technology is now at the point where it can upscale an image by 200% and, in most cases, be virtually undetectable (when near the limit of visual acuity).


Super-resolution hallucinated images

So, a fairly sharp image can now be made into a very sharp one, but a low-resolution image is still always going to look pixelated, right? Well, not exactly. My favourite technology – neural networks – can now be used to hallucinate (make an educated guess at) the detail that would be present in a high-resolution image but has been lost to a low pixel count. In particular, face hallucination can reconstruct recognisable faces from very low-resolution inputs, which is useful for applications such as identifying someone in CCTV footage.
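As a flavour of how such networks are built, here is a minimal SRCNN-style model in PyTorch (the 9-1-5 architecture from Dong et al.’s 2014 super-resolution paper, not the face-hallucination model pictured below): the input is first upscaled conventionally, and the network learns to restore the detail that plain interpolation cannot.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Minimal SRCNN: refine a conventionally upscaled (e.g. bicubic) image."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# A bicubically upscaled RGB batch in, a (hopefully) sharper one out.
print(SRCNN()(torch.rand(1, 3, 128, 128)).shape)  # torch.Size([1, 3, 128, 128])
```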

Super-resolution hallucinated faces by a deep neural network. Image Credit: https://people.csail.mit.edu/celiu/FaceHallucination/fh.html


Other uses for higher megapixel resolutions

Just because traditional cameras and 4K screens have reached the point where any improvement in resolution is largely unnoticeable, that doesn’t mean progress in the area has to stop. Lightfield (plenoptic) cameras use the same CMOS sensors as regular cameras, but capture three-dimensional image data. As a result, they can only output an image of around a tenth of their sensor resolution, so a sharp image from a lightfield camera would require a 200mp sensor. Lytro have actually made a 755mp camera array for shooting 360-degree cinema-quality footage at up to 300 frames per second, although at its peak it produces 400 gigabytes of data per second!

The Lytro Immerge camera array is 755 megapixels, and produces so much data that it requires its own dedicated server. It would fill multiple 64GB memory cards every second. Image Credit: Lytro.
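Those data-rate figures check out on the back of an envelope, assuming roughly two bytes per raw sensor sample:

```python
MEGAPIXELS = 755
FRAMES_PER_SECOND = 300
BYTES_PER_PIXEL = 2  # assumption: ~16-bit raw samples

bytes_per_second = MEGAPIXELS * 1e6 * FRAMES_PER_SECOND * BYTES_PER_PIXEL
print(f"{bytes_per_second / 1e9:.0f} GB/s")      # ~453 GB/s at peak
print(f"{bytes_per_second / 64e9:.1f} cards/s")  # ~7 x 64GB cards per second
```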


Another technology that demands more resolution than we currently have access to is virtual reality. Currently, the displays in most VR headsets are re-appropriated phone screens, and hence have the same resolutions (between 1080p and 4K). 8K per eye will be good enough for most people, but a few will see an improvement between 8K and 16K. An 8K display has 33 megapixels, and a 16K display 133 megapixels. Aside from the technical difficulty of manufacturing such a display, the processing power required to render images for it is mind-boggling…
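The megapixel counts above, and the scale of the rendering problem, fall out of simple arithmetic (90fps and four bytes per rendered pixel are my assumptions):

```python
def megapixels(width: int, height: int) -> float:
    return width * height / 1e6

for name, (w, h) in {"8K": (7680, 4320), "16K": (15360, 8640)}.items():
    mp = megapixels(w, h)
    # Two eyes, 90 frames per second, ~4 bytes per rendered RGBA pixel.
    gb_per_second = mp * 1e6 * 2 * 90 * 4 / 1e9
    print(f"{name}: {mp:.0f}MP per eye, ~{gb_per_second:.0f}GB/s of pixels")
```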

