The latest addition to reconnaissance is a new kind of camera that takes a new kind of picture: the plenoptic camera, or light-field camera. Unlike a normal camera, which takes a flat 2D snapshot of a scene, the plenoptic camera uses a microlens array to capture a 4D light field. The idea actually dates back to 1992, when Adelson and Wang first proposed the design. Back then the image was captured on film with limited success, but it did prove the concept. More recently, a Stanford University team built a 16-megapixel electronic camera with a 90,000-microlens array and demonstrated that an image could be refocused after the picture was taken. This technology has already made its way into affordable consumer products and, as you might expect, it has also been extensively studied for military applications.
To appreciate the importance and usefulness of this device, you need to understand what it can do. When you take a normal picture of a scene, the camera captures one fixed set of image parameters: focus, depth of field, light intensity, perspective, and a single specific point of view. None of these can be changed afterward; the end result is a two-dimensional (2D) image. The light-field camera instead captures the physical characteristics of the light of a given scene, so that a computer can later recreate the image in such detail that it is as if the original view were reconstructed inside the computer. In technical terms, it measures the radiance, in watts per steradian per square meter, along each ray of light. In practice this means it captures and can quantify the wavelength, polarization, angle, radiance, and other scalar and vector properties of the light. The result is a five-dimensional function that a computer can use to recreate the scene as if you were looking at it at the moment the photo was taken.
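To make the idea concrete, the light field is often stored as a discrete 4D array in the two-plane parameterization: two coordinates index a point on the aperture, two index a point on the sensor. The sketch below is purely illustrative; the array shapes and names are hypothetical, not taken from any particular camera.

```python
import numpy as np

# Illustrative two-plane parameterization of a 4D light field:
# light_field[u, v, s, t] = radiance along the ray through aperture
# point (u, v) and sensor point (s, t). Sizes here are hypothetical.
U, V, S, T = 9, 9, 64, 64                    # 9x9 angular, 64x64 spatial
rng = np.random.default_rng(0)
light_field = rng.random((U, V, S, T))       # stand-in for captured data

# A conventional 2D photograph is the integral of the light field over
# the whole aperture (all angular samples), collapsing 4D down to 2D.
photo = light_field.mean(axis=(0, 1))        # shape (S, T)

# A single sub-aperture image keeps one angular sample -- a pinhole
# view from one point on the lens, with its own slight perspective.
sub_aperture = light_field[4, 4]             # center view, shape (S, T)
```

This is why a plenoptic sensor trades spatial resolution for angular resolution: every angular sample consumes sensor pixels that a conventional camera would spend on spatial detail.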
This means that after the picture is taken, you can refocus on different parts of the image, and you can zoom in on them without a significant loss of resolution. If the light-field camera captures moving video of a scene, the computer can render a nearly perfect three-dimensional (3D) representation of it. For instance, with a state-of-the-art light-field camera taking aerial video of a city from a UAV at 10,000 feet, the data could be used to zoom in on details such as the text of a newspaper someone is reading or the face of a pedestrian. You could recreate the city as a dimensionally accurate 3D rendering and then traverse it from a ground-level perspective in a computer model. The possibilities are endless.
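The refocusing described above is usually done with a shift-and-sum algorithm: each sub-aperture view is translated in proportion to its offset from the aperture center, then all views are averaged; varying the shift scale moves the synthetic focal plane. The sketch below uses integer-pixel shifts for simplicity (a real implementation interpolates sub-pixel shifts); all names and parameters are illustrative.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum synthetic refocus of a 4D light field.

    light_field has shape (U, V, S, T): (U, V) index the aperture,
    (S, T) the sensor. Each sub-aperture image is shifted in proportion
    to its angular offset from the aperture center, scaled by `alpha`,
    then all views are averaged. Different `alpha` values place the
    synthetic focal plane at different depths.
    """
    U, V, S, T = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            # np.roll gives a cheap integer-pixel shift; production code
            # would use sub-pixel interpolation and edge handling.
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Hypothetical captured data: 9x9 angular samples, 64x64 spatial.
lf = np.random.default_rng(1).random((9, 9, 64, 64))
baseline = refocus(lf, 0.0)   # alpha = 0: no shift, plain aperture average
shifted = refocus(lf, 0.5)    # nonzero alpha: different focal plane
```

With `alpha = 0` every shift is zero, so the result reduces to the simple aperture-averaged photograph; nonzero `alpha` refocuses the same captured data after the fact.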
As usual, it was NRL that started earliest and has developed the most useful and sophisticated applications for the light-field camera. Because the camera produces its most useful results when used as a video camera, NRL focused on that aspect early on. The end result was the IBAL (pronounced "eyeball"), for Imaging Ballistic Acquisition of Light.
The IBAL is a micro-miniature focused plenoptic camera that uses a masked synthetic aperture in front of an array of 240,000 microlenses, each capturing a 24-megapixel video image. A massively overclocked processor records just 8 seconds of video at a frame rate of 800 frames per second. The entire device fits into the nose of an 80 mm mortar round or an M777 155 mm howitzer shell, and it can also be fired from a number of other artillery and shoulder-launched weapons as a sabot round. The shell is packed with a powerful lithium battery designed to provide up to 85 watts for up to two minutes, from ballistic firing to impact.

The round has gyro-stabilized fin control that keeps the camera pointed at the target in one of two modes. In the first mode, the round is fired at a very high angle, 75 to 87 degrees of elevation. This steep trajectory lets it capture its imagery as it descends from a few thousand feet of altitude; because the resolution is very high, it begins capturing as soon as it is aligned and pointed at the ground. In the second mode, the IBAL is fired on a low trajectory, 20 to 30 degrees of elevation, and the gyro keeps the camera pointed at the ground through a prism as the round traverses the battle zone. In both modes, the round uses the last few seconds of flight to transmit a compressed data burst on a UHF frequency to a nearby receiver. The massive amount of data is compressed with the same kind of algorithm the intelligence community uses for satellite reconnaissance imagery. Finally, the round carries a small C4 charge in the back that ensures it is completely destroyed on impact, backed up by a phosphorus envelope that will ignite and melt all of the electronics and optics if the C4 does not go off.
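The capture and power figures quoted above imply a few back-of-envelope numbers. This is plain arithmetic on the values in the text, not a specification:

```python
# Frame budget: 8 seconds of capture at 800 frames per second.
frame_rate_fps = 800
capture_seconds = 8
frames_captured = frame_rate_fps * capture_seconds      # 6,400 frames

# Energy budget: 85 watts sustained for the two-minute flight.
battery_watts = 85
flight_seconds = 2 * 60
energy_joules = battery_watts * flight_seconds          # 10,200 J
energy_watt_hours = energy_joules / 3600                # about 2.8 Wh
```

Roughly 2.8 watt-hours is within reach of a compact lithium primary cell, which is consistent with the claim that the battery fits inside the shell alongside the optics and processor.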
Since the object is reconnaissance, not attack, the explosive charge is quite small, and when it goes off the explosion is almost entirely contained inside the metal casing of the round. In the second, low-trajectory mode, the round passes over the battle zone and lands far beyond it without attracting much attention. In a more active combat environment, the high-trajectory mode would likewise attract little attention; if noticed at all, the round would appear to be a dud.
The data is received by a special encrypted digital receiver that decodes it and feeds it into the IBAL processor station, a powerful laptop that can be integrated with a number of other visual-representation systems, including 3D imaging projectors, 3D rendering tables, and virtual-reality goggles. The data can be used to recreate the captured imagery as a 3D model so detailed and accurate that measurements taken from it are good to within one-tenth of an inch.
The computer can also overlay any necessary fire-control grid onto the image so that precise artillery fire can be vectored onto a target. The grid can be a locally created reference or simply very detailed latitude and longitude from GPS. As might be expected, this imagery is fully integrated into the CED (combat environmental data) information network and into the DRS (digital rifle system) described in my other reports. This means that within seconds of firing the IBAL, a 3D image of the combat zone is available on the CED network for every soldier in the field. It is also available to snipers planning their kill zones and to the artillery fine-tuning their fire control. Because it sees the entire combat zone from the front, overhead, and behind, it can be used to identify, locate, and evaluate priority targets such as vehicles, mortar positions, communications centers, and enemy headquarters.
Used in combination with all the other advances in surveillance and reconnaissance that I have described here, and others that I have not yet told you about, this new imaging system leaves an enemy virtually no opportunity to hide from our weapons.