• The most important pictures of someone’s life

    This month’s post is a more personal one than usual, and is about why I enjoy wedding photography – I’ll be back to writing about the latest tech next month. Until then, I hope you enjoy the change of pace.

    I have the privilege of being an occasional wedding photographer. I often feel honoured that I get to witness someone I have never met before having possibly the most emotional moment of their life.

    It is my job to capture it in a way that they can look back on for many years to come. My pictures will (hopefully) make them smile for the rest of their days. I realise that it’s more the event itself than my pictures, but my pictures still become the lens through which the event is remembered. The sands of time will eventually bury any memory not preserved through media, so the couple’s memory of the event eventually becomes my experience of it. Which is why it’s so important for me to document as much of their big day as possible. read more

  • Single Photons Actually Matter in Photography

    As anyone with a little background science knowledge will know, light can be measured as discrete quanta called photons. Light behaves as both a wave AND a particle (as an aside, the wave-like behaviour of light is responsible for diffraction, which softens photographs taken at small apertures). For now, I shall focus on the particle side of things.

    Photons of different wavelengths from a light source bounce off various objects in a scene before eventually being captured and counted in a pixel on a camera sensor. The number of red, green, and blue photons that make it onto the corresponding coloured pixels on the sensor determines the brightness of those pixels. There are many, many trillions of these photons bouncing around, and the overwhelming majority of these never make it anywhere near the sensor of a camera that is trying to capture a scene. What is important to note, though, is that while the number of photons bouncing around is very large, it is also finite. read more
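    That finiteness is what gives rise to shot noise: photon arrivals are statistical, so for an average of N photons per pixel the random variation is roughly the square root of N. Here is a minimal sketch of that relationship – the function name and example photon counts are illustrative, not figures for any particular sensor:

```python
import math

# Photon arrivals follow Poisson statistics: for a mean of N photons
# per pixel per exposure, the standard deviation (shot noise) is sqrt(N),
# so the signal-to-noise ratio is N / sqrt(N) = sqrt(N).
def shot_noise_snr(mean_photons: float) -> float:
    return mean_photons / math.sqrt(mean_photons)

for photons in (100, 10_000, 1_000_000):
    print(f"{photons:>9} photons -> SNR {shot_noise_snr(photons):.0f}:1")
```

    This is why dark areas of a photo (few photons per pixel) look noisier than bright ones: the relative uncertainty shrinks as the photon count grows.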

  • Is The Megapixel Race Over?

    Resolution has largely reached a point where any megapixel increase no longer matters. There have been many discussions about what resolution the human eye can perceive. Generally, for a sharp picture – either on a screen or in print – 300dpi/ppi is required (Apple’s first ‘retina display’ was 326ppi). For a pin-sharp image, 600dpi/ppi is needed, and some 4K phones have exceeded this.


    Does pixel size matter?

    Yes and no. It’s not the size of the pixels that counts – it’s the area of your vision they take up. However, all these resolutions assume a close viewing distance of less than an arm’s length. As pictures are printed at whatever size is needed, and displayed on screens of all sorts of sizes, the actual resolution required for them to be perceived as ‘sharp’ varies. Some roadside billboards have a print resolution of less than 100dpi, yet they still look sharp due to the distance they are viewed from. If you walk right up close to a large poster, you will often see fairly coarse dots. read more
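    The distance-dependence can be put into numbers using the common rule of thumb that the eye resolves about one arcminute of detail. The function and example viewing distances below are illustrative assumptions, not figures from this post:

```python
import math

# Rule of thumb: the eye resolves about 1 arcminute of detail, so the
# pixel density needed for a print or screen to look "sharp" depends
# only on how far away it is viewed from.
def required_ppi(viewing_distance_inches: float, acuity_arcmin: float = 1.0) -> float:
    # Size of the smallest resolvable detail at this distance, in inches.
    pixel_size = viewing_distance_inches * math.tan(math.radians(acuity_arcmin / 60))
    return 1 / pixel_size

print(f'Phone at 12 inches:      {required_ppi(12):.0f} ppi')
print(f'Print at 24 inches:      {required_ppi(24):.0f} ppi')
print(f'Billboard at 50 feet:    {required_ppi(50 * 12):.0f} ppi')
```

    At a 12-inch viewing distance this comes out near the 300ppi figure above, while at billboard distances single-digit ppi is already ‘sharp’ – which matches the coarse dots you see up close.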

  • Using Image Stabilisation

    Image stabilisation, in still photography, refers to a lens or camera’s ability to limit the effect that camera movement has while taking a picture. Unless on a tripod, if an exposure takes a long time (anything from 1/100th of a second to longer than a second), then your hands will move the camera while it is taking the picture. Hey presto – unwanted camera shake!

    Canon usually refers to the technology as Image Stabilisation, whereas Nikon usually refers to it as Vibration Reduction (but I prefer the acronym VR to be used for Virtual Reality). Canon and Nikon (as well as other camera manufacturers) have lenses that stabilise an image by moving a group of lens elements (this is optical image stabilisation – it can also be done by moving the image sensor, which some smaller cameras and phones do). Digital image stabilisation is mainly used to stabilise video, and is not really suited to still image capture. I shall refer to all shake-reduction technology as IS.

    read more

  • Improving Your Digital Organisation

    I hate mess.

    Mess slows me down. Untidiness is something that I loathe, yet I can’t always find the motivation to tidy up. I like it when everything is neat and in its place, but it often feels like too much of a chore to put things where they should be. Bank statements, utility bills, and pay slips had been building up on my desk for some time before I decided to do something about it. The final straw was when I had to do my tax return and I couldn’t find my latest P60 (I’m both employed and self-employed). It was then I realised I needed to improve my organisation skills.
    It was a kick up the backside, and got me to make a change.

    My physical surroundings and my digital space (files, folders, etc.) are fairly similar in regards to the mess that accumulates, and how I deal with it. I have cupboards and drawers next to my desk to keep all my spare camera gear and batteries, photo papers, envelopes and stamps, blank DVDs, and so on. On my computer, I have my files in various folders and sub-folders in a hierarchical filing system.


    Digital Organisation and Filing

    I’d argue that one of the most important elements of organisation is filing. Tidying up your folders is a worthwhile time investment if you wish to keep your files easy to locate. I find that limiting directories to 5-10 sub-folders each is helpful, as more than 10 folders makes it too difficult to find anything. There is only so much you can sort through at any one time, and being presented with a hundred folders is just too much information to process. If each of your folders contains 10 sub-folders, then each level down you go increases your filing capacity by an order of magnitude. I try to keep each folder to around 20-50 files.
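    As a quick sanity check of the order-of-magnitude claim, here is a small sketch – the 10-sub-folder and 50-file limits are the rough figures suggested above:

```python
# With at most 10 sub-folders per directory and ~50 files per folder,
# each extra level of depth multiplies capacity by 10 (an order of
# magnitude), while you never browse more than ~10 items at a time.
MAX_SUBFOLDERS = 10
MAX_FILES_PER_FOLDER = 50

def capacity(depth: int) -> int:
    # Files reachable if every folder `depth` levels down is full.
    return MAX_SUBFOLDERS ** depth * MAX_FILES_PER_FOLDER

for depth in range(4):
    print(f"depth {depth}: up to {capacity(depth):,} files")
```

    Three levels of nesting is already enough for tens of thousands of files, which covers the directory totals in the table below without any single folder becoming overwhelming.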

    I do not use the Windows filing system of My Documents, Downloads, Pictures, etc… My filing system has main directories in a folder under my main drive, D:\. Each folder has sub-folders, sub-sub-folders, etc. So if I need to get to a vector graphic I created of the Empire State Building, I can find it in D:\Graphics\Vector Graphics\Photo Based.

    Directory    Sub-folders    Files    Size (GB)    Avg. files per folder
    Blog                   5       47         0.3                        9
    Music              1,486    9,712        57.4                        7
    Contracts            440   19,746       144                         45
    Fitness               43      723       127                         17
    Flat                  53    1,061         7.3                       20
    Graphics             199    3,457        14.6                       17
    Photos               327    6,659        41.6                       20
    Portfolio            200    2,628         7.8                       13
    Stuff                484   10,204         6.5                       21
    Total              3,237   54,237       406
    Average              359    6,026        45.1                       19


    Work in Progress

    One strategy I find very useful is having a go-to folder for all my work-in-progress, and having this folder load when my computer starts up. The title of this folder is ‘New Art’. An important thing to remember is to not keep too many files in that folder – only those you are actively working on (or intend to work on in the very near future). The more items in that folder, the more divided between them your attention becomes. When presented with three items, you’ll likely work on one of them – however, if confronted with thirty possible things to work on, it can take a while to choose which one is deserving of your time. The best way around procrastination is not to confront yourself with so much every time you start your computer.
    Try archiving off all the photos/artwork that you intend to get around to at a later date, but aren’t a priority for now. I have a folder called ‘Not Important’ for this stuff. You can now focus your attention on the few pieces that you really want to finish.


    To Do Lists

    Checklists are invaluable for productivity. I don’t even really like that word. It’s nice when you can say ‘I’ve been productive today’, but otherwise employing productivity methods feels a bit too much as if you’re treating a creative process as ‘work’.
    Here’s the truth – creating is work.

    However, it can be less of a chore and more enjoyable with the right approach to workflow management. The way you work may not have changed in years, and you’ll be surprised at how a small tweak here and there – such as prioritising future tasks in a to do list – can help. As with my work-in-progress folder, having a To Do list that automatically pops up during start-up helps a lot. I have ‘Graphic Ideas’ and ‘Blog Ideas’ notes on my phone that I add to whenever I think of something. That way, none of my ideas ever goes to waste. Whether they are good ideas or not gets determined in due course, but at least I don’t forget them. Inspiration can strike anytime, and anywhere, so it’s fortunate that 99% of the time I have my phone with me.
    I was very productive while trying to write this post – my procrastination led me to brainstorm a whole load of new topics for other blog posts, as well as fleshing out others I’d partly written. Procrastination is not necessarily a bad thing – it’s what you do while you procrastinate that counts. Having a document with all my part-written ideas in it helped me to just get typing, adding more and more until bullet points became paragraphs, which in turn became fully explored topics.
    If you find you keep putting things off when you know you really should be getting on with them, take a look at Tim Urban’s article on Wait But Why about why procrastinators procrastinate, as well as his TED talk.

    While I will readily admit that there’s nothing ground-breaking in what I have written in this blog post, I hope that it has given you a few new ideas or reminded you of things you knew already but had stopped putting into practice. At the very least, I hope you now have some motivation to get a little more organised. Whatever you do – go create something you can be proud of. I wish you well.

  • Lightfield based Computational Photography

    Traditional photography has had a good run. Technological advancements have been made allowing ever greater amounts of image data to be captured (greater number of pixels, higher dynamic range, and better low light capture). Now, with these improvements slowing in pace, coupled with the changing way we use photographs, the capturing of a flat image with fixed attributes does not fit with the future of the way we consume media. Lightfield photography will change all that.

    Adjustments after the fact

    Lightfield cameras (also called plenoptic cameras) capture all the light that hits a volume instead of a plane. This means that information about the angle of the light, as well as its brightness and colour, is collected. More important than the technology behind lightfields is how they can be used. This blog post focuses on the benefits they have and why they are the future.

    All the amazing things that can be done are possible because a depth map of the scene has been created, and the photo is now essentially a 3d model that must be viewed from a fixed angle. While the position the photo was shot from is fixed, anything that can be done with a 3d rendered model can also be done with a photograph that has been computationally generated from a lightfield. Here are the three big game changers:

    Focus – Lytro’s main selling point, and while undeniably cool, still a bit of a gimmick. Also, the abilities of a tilt/shift lens can be recreated, and the focal plane doesn’t even need to be flat (as with Lensbaby selective focus lenses). Variable focus is also possible, so the photographer can control how much of the scene before and after the focal plane is in focus, and how quickly it recedes.


    Refocusing a lightfield image. Image source – www.lytro.com

    Aperture – Closely related to focus, but changes the look of a photograph quite dramatically. The ability to edit the same image at both f2.8 and f16 is amazing.

    Lighting – A shot can be lit differently after it has been taken, which is pretty revolutionary. Unfortunately, different surfaces will not behave as they should, so there are limitations. Reflections are one such limitation – you can’t add a light to your scene in post-production and expect it to reflect correctly off a glass window or coffee table. Knowing how the light should bounce within the scene requires the capture of polarisation information. So far, to my knowledge, only one company – Ricoh – has created a camera capable of detecting the polarisation of light, and it only has specialist applications.

    If we want to realistically light a scene after a photo has been taken, then we need to know the reflective properties of the surfaces in that scene (as well as in some cases, the transmissive properties of objects – think about the slight translucency of skin and hair when a light is placed behind a person). This is not something that will be solved overnight, but there are technologies (laser, radar, etc) that can help.


    Using depth

    There are many reasons to be excited about photos that contain depth information, a couple of major ones being compositing and movie special effects (there is no need for green screen or mask cutting to extract an object from the background).


    The extent to which parallax can be achieved with a single lightfield camera. Image source – www.lytro.com

    My favourite reason is parallax for stereoscopic viewing and Virtual Reality. The stereoscopic field can span from the far left to the far right of the lens, and a person’s interpupillary distance is usually 55-65mm, so that’s roughly how big your lens’s front element needs to be. Also, I said earlier that the ability to focus after the fact was a bit of a gimmick, but with lightfield displays, it most certainly is not. Current virtual reality displays are fixed focus (at around 2 metres), and there is no question that variable focus will take immersive VR to the next level. Lightfield displays are the emerging technology that will enable this to happen, the most notable company being Magic Leap, which has secured hundreds of millions’ worth of investment to fund its research and development.

    Lightfields and increasing computing power

    Processing power has been steadily increasing in line with Moore’s Law, and doubling about every 2 years. Many of the improvements in digital photography recently have been down to the processing of the information captured, rather than improved capture of the information itself.


    Computational photography will allow amazing advances in both the taking and editing of photos. Once photography becomes more software- than hardware-dependent, many limitations on progress are removed, as the only constraints on software (aside from the hardware it draws data from and the processing power required to run it) are the imagination and creativity of the people who design it. Hardware is an enabler for software. The future can just be seen on the horizon, slowly appearing as though shrouded in mist. Many of the more prominent features are possible to make out, but as we get closer, we are going to be blown away by the full scope of what will be possible.

    I’m excited.

    Find out more about this awesome tech at the Lightfield Forum.

  • Future DSLR CMOS Improvements

    Digital cameras have been steadily improving for many years now, but recently progress has started to stagnate. One area I am particularly interested in is the low light (high ISO) performance of digital SLRs. There is enough light captured at high ISOs to generate decent images, but the main problem comes in the form of noise. While advances are still being made, the pace of change seems to be slowing. This is evidenced by the difference in capabilities between successive generations of cameras.

    The Canon EOS 5D was a pretty ground-breaking camera when it was launched back in 2005. It was the first ‘affordable’ full frame DSLR, with a 12 megapixel sensor, and was capable of shooting at up to 3200 ISO. In reality (as is always the case, even today), only images a stop or two below the max ISO were really usable. Three years later, a massive upgrade came in the form of the 5D mkII, improving low light shooting by a stop and allowing the same quality of shots with half the light. In 2012 the 5D mkIII added another half stop. Now, in 2016, the law of diminishing returns continues with the 5D mkIV showing virtually no improvement in low light performance over the mkIII. Below are what I consider to be the main areas of current tech where there is room for improvement, and by how much.

    read more

  • The advantages of a dual screen setup

    A couple of years ago, I decided it was time to buy a new computer and kick my photography and graphics work up a notch. I made the move from working on a laptop with a 17 inch screen to a desktop PC with various display options. The first decision I had to make was whether to have one large 4K monitor or two smaller 1080p ones. It sounds simple, but there were many things to consider. Admittedly, 4K has come down in price since then, but I still think a 1080p dual screen setup is the way to go at present. In this post, I shall go through a few of the considerations, as well as the pros and cons.

    Monitor – price vs performance

    Getting two regular-sized 1080p monitors is still much cheaper than a single large 4K one. There is a massive choice of 1080p panels out there, and even the more premium models are affordable. For the same sort of cost, you’re looking at the ‘budget’ end of the 4K market. Even the cheapest 4K monitors are pretty good, but just not quite as good as the (still cheaper) upmarket 1080p models. The dynamic range, colour gamut, coatings, and refresh rate you get can all improve if you’re willing to spend a little extra.

    Currently, high quality 1080p monitors will set you back around £150 each. Budget 4K panels, while a little larger, are around £500. That’s a £200 difference. read more

  • Deep Learning: Creating graphics from computer vision

    The Painting Fool is a deep learning computer program created with the aim of ‘being taken seriously as an artist’. The question is, can a program following instructions given to it by a human ever be considered truly creative? After all, surprisingly complex behaviours can come from the simplest of rules. One such example is the flocking of starlings, where there is no overall control of the group but instead a ‘hivemind’ is in operation – each bird keeps track of its 6 or 7 closest neighbours in the flock, and changes direction in sync with them. It is easy to imagine how, in a similar way, a few simple rules in a software program can create unforeseen images. This is known as machine generated imaging, and is not to be confused with computer generated imaging (CGI). read more

  • High ISO Shooting

    You often want to capture the best images you can with the light you have available. Most of the time a tripod isn’t practical, and even when it is, a long exposure can be undesirable. When there isn’t much light, setting a high ISO allows for fast shutter speeds in low light, but with increased image noise and less detail in the highlights and shadows. Remember the golden rule for eliminating camera shake:

    Shutter speed must be at least 1/(2 × focal length in mm) of a second
    This means that with a 50mm lens, you’ll need a shutter speed of 1/100sec or faster. Image stabilising lenses help things a lot, but just remember this rule and you’ll be fine.
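    The rule can be written as a one-liner. The optional stabilisation parameter is my own illustrative extension, based on the common claim that each stop of IS roughly doubles the usable exposure time – check your own lens’s rating before relying on it:

```python
def min_shutter_speed(focal_length_mm: float, stabilisation_stops: int = 0) -> float:
    # Rule of thumb: shutter no slower than 1/(2 x focal length) seconds.
    # Each stop of image stabilisation roughly doubles the usable
    # exposure time (an assumption, not a guarantee).
    return (1 / (2 * focal_length_mm)) * (2 ** stabilisation_stops)

print(min_shutter_speed(50))      # 50mm lens: 0.01 s, i.e. 1/100 s
print(min_shutter_speed(200))     # 200mm lens: 1/400 s
print(min_shutter_speed(200, 4))  # 200mm with 4 stops of IS: 1/25 s
```

    Note the rule assumes a full frame camera; on a crop sensor you would multiply the focal length by the crop factor first.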


    Why not simply use a high ISO all the time?

    If a fast shutter speed eliminates camera shake, and a high ISO allows for a fast shutter speed, why not simply always shoot at a high ISO? Well, there are two main reasons, the most visible of which is loss of detail. When working out how much perceived detail is lost at high ISOs, I find it helpful to think of my usable image size halving with each stop from 3200 upwards, at least when shooting JPEGs. A bit more detail can be recovered from RAW files. read more
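    The halving rule of thumb can be sketched as follows – the 24-megapixel base and the ISO values are illustrative numbers, not figures from a specific camera:

```python
import math

# Rule of thumb: usable image size roughly halves for each stop above
# ISO 3200 when shooting JPEG. The 24 MP base is an illustrative figure.
def usable_megapixels(base_mp: float, iso: int, base_iso: int = 3200) -> float:
    stops_over = max(0.0, math.log2(iso / base_iso))
    return base_mp / (2 ** stops_over)

for iso in (1600, 3200, 6400, 12800, 25600):
    print(f"ISO {iso:>5}: ~{usable_megapixels(24, iso):.1f} MP usable")
```

    In other words, by ISO 25600 a notional 24 MP camera behaves more like a 3 MP one in terms of perceived detail, which is why high ISO is a tool for when you need it rather than a default.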