Tech Review: The L-16

Posted February 13, 2018

Since its invention in the 19th century, photography has pretty much been based on the same principle: light is gathered and focused through a lens and cast onto a plane that records the image.

This held true whether the capture medium was a chemically treated glass plate, a chemically treated plastic sheet or roll of film, or a digital sensor such as a CCD.

This also held true whether the image-maker was recording a still photo or moving images.  

The system worked pretty well (it would have had to, to hold up for nearly two centuries - find another technology that has done that. OK... trains).

One of the few inherent problems in photography done this way was related to depth of field. There was, and remains, a fundamental relationship between depth of field and the amount of light let in. The amount of light reaching the film or sensor was, and is, controlled by the aperture, set with an aperture ring.

If you have a DSLR (or similar camera or device), you have no doubt seen the markings on the lens - f2, f4, f5.6 and so on. These relate to how large the aperture opening is, that is, how much light is allowed to enter the camera ('camera' being the Latin word for 'room').

The lower the f-stop number, the larger the opening of the aperture and the more light that gets in.

This is great, in some ways, but there is an inherent problem in the physics of it. The larger the aperture, the shallower the depth of field. Hence, at f16 or f32 you get great depth of field, but you need a far slower exposure to let in enough light for a good image. Open the aperture, and depth of field collapses.
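
To put rough numbers on that tradeoff (a back-of-the-envelope sketch of my own, not anything from a camera maker): the f-number is the focal length divided by the diameter of the aperture, so the area of the opening, and with it the light gathered, falls off with the square of the f-number. A few lines of Python show how quickly the light disappears as you stop down:

    # Rough illustration: relative light admitted at each f-stop.
    # The f-number N is focal length / aperture diameter, so the
    # aperture area (and the light let in) scales as 1 / N**2.
    F_STOPS = [2, 2.8, 4, 5.6, 8, 11, 16, 22]

    def relative_light(n, reference=2.0):
        """Light admitted at f/n, relative to the reference f-stop."""
        return (reference / n) ** 2

    for n in F_STOPS:
        print(f"f/{n}: {relative_light(n):.3f} of the light at f/2")

Each full stop roughly halves the light, which is why those deep-depth-of-field settings force long exposures.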

If you have a good lens on your DSLR, there are actually markings on the barrel that will tell you roughly what is in focus and what is not. This is the depth-of-field scale, which can also be used to set the lens to its hyperfocal distance.
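
The math behind those markings is the hyperfocal distance: focus there, and everything from half that distance out to infinity is acceptably sharp. Here is a minimal sketch of the standard formula (the 0.03 mm circle of confusion is a common full-frame assumption of mine, not a number taken from any particular lens):

    def hyperfocal_mm(focal_length_mm, f_number, coc_mm=0.03):
        """Hyperfocal distance H = f**2 / (N * c) + f, in millimetres.
        Focus at H and everything from H/2 to infinity looks sharp."""
        return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

    # Example: a 50 mm lens stopped down to f/16 versus wide open at f/2.
    for n in (16, 2):
        print(f"f/{n}: hyperfocal distance ~{hyperfocal_mm(50, n) / 1000:.1f} m")

At f16 the 50 mm lens only has to be focused about five metres out to hold everything to infinity; wide open at f2 that point jumps past forty metres, which is the collapse described above.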

Of course, most people have no idea what this does, and just sort of spin the aperture ring until they get a red dot in the center, or more likely just leave it all in automatic.

This was the immutable law of optics. There was no escape.

Until now.

A few years ago, Light co-founder Dr. Rajiv Laroia sold his previous company, Flarion Technologies, to Qualcomm for a lot of money. A lot. He had always wanted to try out photography, so he went down to B&H and bought the best DSLR camera he could find, with all the toys and all the lenses.

Suddenly, he realized he was dragging a lot of heavy and bulky lenses around with him, and having to change them all the time. This made no sense to him, so, brilliant scientist that he was, he set out to build a different kind of camera.  The first fundamentally different kind of camera since, oh, 1850 or so.

The result is the L-16, made by his company, Light.

Instead of having one lens, the L-16 has 16 small camera modules built into it. When you click the button, they all capture images of the same scene at several different focal lengths at the same instant.

Then, through the miracle of processing vast amounts of information very fast, the camera (which is more computer than camera) merges all of that data into a single 'image' in which zoom, focus, and depth of field can be adjusted after the shot.
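
Light has not published the details of its pipeline, so treat the following as nothing more than a loose conceptual sketch of the general idea behind merging several aligned captures of the same scene: measure local sharpness in every frame and keep, pixel by pixel, whichever frame is sharpest there. The real L-16 processing also has to handle parallax, alignment, and different focal lengths, and is far more sophisticated than this NumPy toy:

    import numpy as np

    def local_sharpness(img, radius=2):
        """Crude sharpness map: local variance of the grayscale image."""
        gray = img.mean(axis=-1)
        h, w = gray.shape
        pad = np.pad(gray, radius, mode="edge")
        windows = np.stack([
            pad[dy:dy + h, dx:dx + w]
            for dy in range(2 * radius + 1)
            for dx in range(2 * radius + 1)
        ])
        return windows.var(axis=0)

    def merge_by_sharpness(frames):
        """Per pixel, keep the frame whose neighbourhood is sharpest."""
        stack = np.stack(frames)                                    # (n, h, w, 3)
        sharpness = np.stack([local_sharpness(f) for f in frames])  # (n, h, w)
        best = sharpness.argmax(axis=0)                             # (h, w)
        rows, cols = np.indices(best.shape)
        return stack[best, rows, cols]

    # Toy usage: three random arrays standing in for aligned captures.
    frames = [np.random.rand(120, 160, 3) for _ in range(3)]
    print(merge_by_sharpness(frames).shape)   # (120, 160, 3)

Pick the sharpest contributor per pixel and you have, in effect, chosen your depth of field after the shot.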

Each image delivers the equivalent of 81 megapixels.
You can see an example here.

This, of course, is mind-blowing.

What is more mind-blowing is that the camera costs $1950.

With lenses.

My Hasselblad H4D, which delivers only 40 megapixels (still pretty good), costs about $40,000, with lenses. 

Now, the L-16 is still in the beta test phase, but you can see the potential.

The company promises many upgrades, including video... one day.

I can't wait.