Monthly Archives: December 2011

IBAL – The latest in Visual Recon


The latest addition to reconnaissance is a new kind of camera that takes a new kind of picture.  The device is called a plenoptic camera or a light-field camera.  Unlike a normal camera that takes a snapshot of a 2D view, the plenoptic camera uses a microlens array to capture a 4D light field.  This is a whole new way of capturing an image that actually dates back to 1992 when Adelson and Wang first proposed the design.  Back then, the image was captured on film with limited success but it did prove the concept.  More recently, a Stanford University team built a 16 megapixel electronic camera with a 90,000-microlens array that proved that the image could be refocused after the picture is taken.   Although this is technology that has already made its way into affordable consumer products, as you might expect, it has also been extensively studied and applied to military applications.


To appreciate the importance and usefulness of this device, you need to understand what it can do.  If you take a normal picture of a scene, the camera captures one set of image parameters – focus, depth of field, light intensity, perspective and a very specific point of view.  These parameters are fixed and cannot be changed after the fact.  The end result is a 2-dimensional (2D) image.  What the light field camera does is capture the physical characteristics of the light of a given scene so completely that a computer can later recreate the image in such detail that it is as if the original scene were reconstructed inside the computer.  In technical terms, it captures the radiance – watts per steradian per square meter – along each ray of light.  This means that it captures and can quantify the wavelength, polarization, angle, radiance and other scalar and vector values of the light.  The result is a five-dimensional function that a computer can use to recreate the image as if you were looking at the original scene at the time the photo was taken.


This means that after the picture is taken, you can refocus on different aspects of the image, shift the perspective, and zoom in on different parts of it without a significant loss of resolution.  If the light field camera captures a moving video of a scene, the computer can render an accurate 3-dimensional representation of what was imaged.  For instance, using a state-of-the-art light field camera to take an aerial light field video of a city from a UAV drone at 10,000 feet altitude, the data could be used to zoom in on details within the city, such as the text of a newspaper that someone is reading or the face of a pedestrian.  You could recreate the city in a dimensionally accurate 3D rendering that you could then traverse from a ground-level perspective in a computer model of the city.  The possibilities are endless.
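
The refocus-after-the-fact trick is easier to see in code.  The sketch below is my own illustration, not anything from actual camera firmware: it treats the light field as a stack of sub-aperture views and refocuses by the classic shift-and-sum method, shifting each view in proportion to its position in the aperture and then averaging.

```python
import numpy as np

def refocus(subviews, offsets, alpha):
    """Synthetic refocus by shift-and-sum.

    subviews: list of 2D arrays (one image per viewpoint in the aperture)
    offsets:  list of (du, dv) aperture offsets, one per subview
    alpha:    refocus parameter; each subview is shifted by alpha * offset
    """
    acc = np.zeros_like(subviews[0], dtype=float)
    for img, (du, dv) in zip(subviews, offsets):
        # np.roll performs an integer-pixel shift; real systems interpolate
        acc += np.roll(np.roll(img, int(round(alpha * du)), axis=0),
                       int(round(alpha * dv)), axis=1)
    return acc / len(subviews)
```

With the refocus parameter at zero the views simply average into a blur; choosing it to match the parallax of a given depth snaps objects at that depth back into sharp focus.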


As usual, it was NRL that started earliest and has developed the most useful and sophisticated applications for the light field camera.  Because this camera produces its most useful results when used as a video camera, NRL focused on that aspect early on.  The end result was the “IBAL” (pronounced “eyeball”) – for Imaging Ballistic Acquisition of Light.


The IBAL is a micro-miniature focused plenoptic camera that uses a masked synthetic aperture in front of an array of 240,000 microlenses that each capture a 24-megapixel video image.  This is accomplished by a massively overclocked processor that captures just 8 seconds of video at a frame rate of 800 frames per second.  The entire device fits into the nose of an 80mm mortar round or into the M777 155mm howitzer.  It can also be fired from a number of other artillery and shoulder-launched weapons as a sabot round.  The shell is packed with a powerful lithium battery designed to provide up to 85 watts of power for up to two minutes, from ballistic firing to impact.  The round has gyro-stabilized fin control that keeps the camera pointed at the target in one of two modes.  The first mode is to fire the round at a very high angle – 75 to 87 degrees up.  This gives the round a very steep trajectory that allows it to capture its image as it descends from a few thousand feet of altitude.  Since the resolution is very high, it begins capturing images as soon as it is aligned and pointed at the ground.  The second mode is to fire the IBAL at a low trajectory – 20 to 30 degrees of elevation.  In this mode the gyro keeps the camera, looking through a prism, pointed at the ground as the round traverses the battle zone.  In both cases, the round uses the last few seconds of flight to transmit a compressed data burst on a UHF frequency to a nearby receiver.  The massive amount of data is transmitted using the same kind of compression algorithm the intelligence community uses for satellite reconnaissance imagery.  One final aspect of the ballistic round is a small explosive in the back that assures it is completely destroyed upon impact.  It even has a backup phosphorous envelope that will ignite and melt all of the electronics and optics if the C4 does not go off.
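
Some back-of-the-envelope arithmetic shows why that compression matters.  The frame rate and capture time are from the description above; the bit depth, burst window and link rate are purely my assumptions for illustration:

```python
# Back-of-envelope link budget for the IBAL data burst.
FRAME_RATE_FPS = 800           # from the description above
CAPTURE_SECONDS = 8            # from the description above
BITS_PER_FRAME = 24e6 * 12     # assume one 24 MP frame at 12 bits/pixel
BURST_SECONDS = 3              # "last few seconds of flight" (assumption)
LINK_RATE_BPS = 100e6          # assumed UHF burst-link throughput

frames = FRAME_RATE_FPS * CAPTURE_SECONDS         # total frames captured
raw_bits = frames * BITS_PER_FRAME                # uncompressed payload
deliverable_bits = LINK_RATE_BPS * BURST_SECONDS  # what the link can carry
compression_ratio = raw_bits / deliverable_bits

print(f"{frames} frames, {raw_bits / 8e9:.0f} GB raw, "
      f"need ~{compression_ratio:,.0f}:1 compression")
```

Even with these conservative guesses the raw capture dwarfs what a short radio burst can carry, which is why a satellite-imagery-class compression algorithm is the enabling piece.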


Since the object is to recon and not to attack, the actual explosive is quite small, and when it goes off, the explosion is almost entirely contained inside the metal casing of the round.  Using the second, low-trajectory firing mode, the round passes over the battle zone and lands far beyond it without attracting much attention.  In a more active combat environment, the high-trajectory mode also attracts little attention.  If noticed at all, the round would appear to be a dud.


The data is received by a special encrypted digital receiver that decodes it and feeds it into the IBAL processor station – a powerful laptop that can be integrated into a number of other visual-representation systems, including 3D imaging projectors, 3D rendering tables and virtual-reality goggles.  The data can be used to recreate the captured images in a highly detailed 3D model so accurate that measurements taken from the image are accurate to within one-tenth of an inch.


The computer is also able to overlay any necessary fire-control grid onto the image so that precise artillery fire can be vectored to a target.  The grid can be a locally created reference or simply very detailed latitude and longitude using GPS measurements.  As might be expected, this imagery information is fully integrated into the CED (combat environmental data) information network and into the DRS (digital rifle system) that I described in other reports.  This means that within seconds of firing the IBAL, the 3D image of the combat zone is available on the CED network for all the soldiers in the field to use.  It is also available for snipers to plan out their kill zones and for the artillery to fine-tune their fire control.  Since it sees the entire combat zone from the front, overhead and back, it can be used to identify, locate and evaluate potential targets such as vehicles, mortar positions, communications centers, enemy headquarters and other priority targets.


Using this new imaging system in combination with all the other advances in surveillance and reconnaissance that I have described here and others that I have not yet told you about, there is virtually no opportunity for an enemy to hide from our weapons.

“SID” Told Me! The Newest Combat Expert

Sensor fusion is one of those high-tech buzzwords that the military has been floating around for nearly a decade. It is supposed to describe the integration and use of multiple sources of data and intelligence in support of decision management on the battlefield or in the combat environment. You might think of a true sensor fusion system as a form of baseline education. As with primary school education, the information is not gathered to support a single job or activity but to give the end user the broad awareness and knowledge to adapt and make decisions about a wide variety of situations that might be encountered in the future. As you might imagine, providing support for “a wide variety of situations that might be encountered in the future” takes a lot of information, and the collation, processing and analysis of that much information is one of the greatest challenges of a true sensor fusion system.


One of the earliest forms of sensor fusion was the Navy Tactical Data System, or NTDS. In its earliest form, it allowed every ship in the fleet to see on its radar scopes the combined view of every other ship in the fleet. Since the ships might be separated by many miles, this effectively created a radar umbrella that extended hundreds of miles in every direction – much farther than any one ship could attain. It got a big boost when they added the radar of aircraft flying Combat Air Patrol (CAP) at 18,000 feet altitude. Now every ship could see as if it had radar that looked out hundreds of miles and covered thousands of square miles.

In the latest version, now called the Cooperative Engagement Capability (CEC), the Navy has also integrated fire-control radar so that any ship, aircraft or sub can fire on a target that can be seen by any other ship, aircraft or sub in the fleet, including ships with different types of radars – X-band, MMWL, pulsed Doppler, phased array, aperture synthesis (SAR/ISAR), FM-CW, even sonar. This allows a guided missile cruiser to fire a missile at a target that it physically cannot see but that can be seen by some other platform somewhere else in the combat arena. Even if a ship has no radar of its own, it can benefit from the CEC system and “see” what any other ship can see with its radar.  That is sensor fusion.
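
The core idea – every platform reporting its local contacts into one shared picture – can be sketched in a few lines. This is an illustrative toy, not the CEC track-association algorithm: each ship converts its bearing/range contacts into a common grid, and reports that land close together are treated as the same target.

```python
import math

def to_common_frame(ship_xy, bearing_deg, range_m):
    """Convert a local bearing/range contact into shared grid coordinates.
    Bearing is measured clockwise from north: x = east, y = north."""
    th = math.radians(bearing_deg)
    return (ship_xy[0] + range_m * math.sin(th),
            ship_xy[1] + range_m * math.cos(th))

def fuse(reports, merge_radius_m=500.0):
    """Merge contact reports from many platforms into one track list.
    Reports that fall within merge_radius_m of an existing track are
    assumed to be the same target (a crude stand-in for real track
    association).  reports: list of (ship_xy, bearing_deg, range_m)."""
    tracks = []
    for ship_xy, bearing, rng in reports:
        pt = to_common_frame(ship_xy, bearing, rng)
        for t in tracks:
            if math.hypot(pt[0] - t[0], pt[1] - t[1]) < merge_radius_m:
                break  # duplicate of an existing track
        else:
            tracks.append(pt)
    return tracks
```

Two ships looking at the same contact from opposite sides thus produce one fused track, which any third platform can then engage without ever seeing the target itself.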


The end result, however, is a system that supports a wide variety of situations, from the obvious combat defensive tactics and weapons fire control to navigation and air-sea rescue. Each use takes from the CEC system the portion of the total available information that it needs for its specific situation.


The Army has been trying to incorporate that kind of sensor integration for many years. So far, they have made strides in two areas: the use of UAVs (unmanned aerial vehicles) and helmet-mounted systems.  Both gather observed information at some remote command post, where it is manually processed, analyzed, prioritized and then selectively distributed to other forces in the combat area. There are dozens of minor efforts that the Army is calling sensor fusion, but each is really just a single set of sensors with a dedicated objective: feeding a specific system with very specific data. An example is the Guardian Angel program, designed to detect improvised explosive devices (IEDs) in Iraq and Afghanistan. Although it mixed several different types of detection devices that overlaid various imagery data, each sensor was specifically designed to support the single objective of the overall system. A true sensor fusion system gathers and combines data that will be used for multiple applications and situations.


A pure and fully automated form of this technology is sometimes referred to as multi-sensor data fusion (MSDF) and had not been achieved – until now. MSDF has been a goal of the DoD for a long time – so much so that there is a Department of Defense (DoD) Data Fusion Group within the Joint Directors of Laboratories (JDL). The JDL defined MSDF as the “multilevel, multifaceted process of dealing with the automatic detection, association, correlation, estimation and combination of data and information from multiple sources with the objective to provide situation awareness, decision support and optimum resource utilization by and to everyone in the combat environment”. That means the MSDF must be useful not just to the command HQ and the generals or planners but to the soldiers on the ground, the tank drivers and the helo pilots who are actively engaged with the enemy in real time – not filtered or delayed by processing or collating the data at some central information hub.


There are two key elements of MSDF that make it really hard to implement. The first is the ability to make sense of the data being gathered. Tidbits of information from multiple sensors are like tiny pieces of a giant puzzle. Each one can, by itself, provide virtually no useful information; they become useful only when combined with hundreds or even thousands of other data points to form the ultimate big picture. It takes time and processing power to do that kind of collating and processing, and therein lies the problem. If that processing power is centrally located, then the resulting big picture is no longer available in real time and useful to an actively developing situation. Alternatively, if the processing power is given to each person in the field who might need the data, then it becomes a burden for every soldier to carry, maintain and interpret the big picture in the combat field environment. As the quantity, diversity and complexity of the data being integrated rises, the required processing power and complexity increase at an exponential rate. The knowledge and skills demanded of the end user also rise, to the point that only highly trained experts are able to use such systems.


The second problem is the old paradox of information overload. On the one hand, it is useful to have as much information as possible to fully analyze a situation and be ready for any kind of decision analysis that might be needed. On the other hand, any single situation might actually need only a small portion of the total data available. For instance, imagine a powerful MSDF network that can provide detailed information about everything happening everywhere in the tactical environment. If every end user had access to all of that data, they would have little use for most of it, because they are interested only in the portion that applies to them. But not knowing in advance what they will need makes it important that they have the ability to access all of it. If you give them that ability, you complicate the processing and the training needed to use it. If you limit them to what they might need, then you limit their ability to adapt and make decisions.  A lot of data is a good thing, but too much is a bad thing, and the line between the two is constantly changing.


I was a consultant to the Naval Research Labs (NRL) in a joint assignment to the JDL to help the Army develop a new concept for MSDF. When we first started, the Army had visions of a vast MSDF system that would provide everything to everyone, but when we began to examine some of the implications and limitations of such a system, it became clear that we would need to redefine their goals. After listening to them for a few weeks, I was asked to present my ideas and advice. I thought about it for a long time and then created just three slides. The first one showed a graphic depiction of the GPS system. In front of two dozen generals and members of the Army DoD staff, I put up the first slide and asked them to just think about it. I waited a full five minutes. They were a room of smart people, and I could see the look on their faces when they realized that what they needed was a system like GPS.  It provides basic and relatively simple information in a standardized format that is then used for a variety of purposes, from navigation to weapons control to location services.  The next question came quickly: “What would a similar system be like for the Army in a tactical environment?” That’s when I put up my next slide and introduced them to “CED” (pronounced “SID”).


Actually, I called it the CED (Combat Environmental Data) network. In this case, the “E” for Environment means the physical terrain, atmosphere and human construction in a designated area – the true tactical combat environment. It uses an array of sensors that already existed, which I helped develop at the NRL for the DRS – the Digital Rifle System. As you might recall, I described that system and its associated rifle, the MDR-192, in two other reports that you can read. The DRS uses a specially designed sensor called the “AIR,” for autonomous information recon device. It gathers a variety of atmospheric data (wind, pressure, temperature, humidity) as well as a visual image, a laser range-finder scan of its field of view and other data such as vibrations, RF emissions and infrared scans. It also has an RF data transmitter and a modulated-laser transmission capability. All this is crammed into a device 15 inches long and about 1 inch (2.5 cm) in diameter that is scattered, fired, air-dropped or hidden throughout the target area. The AIRs are used to support the DRS processing computer in the accurate aiming of the MDR-192 at ranges out to 24,000 feet, or about 4.5 miles.


The AIRs are further enhanced by a second set of sensors called Video Camera Sights, or VCS. The VCS consists of high-resolution video cameras combined with scanning laser beams whose outputs are combined in the DRS processing computer to render a true and proportional 3D image of the field of view.  The DRS computer integrates the AIR and VCS data so that an entire objective area can be recreated in fine 3D detail in computer imagery.  Since the area is surrounded by VCS systems and AIR sensors are scattered throughout it, the target area can be recreated so accurately that the DRS user can see almost everything in the area as if he were able to stand at almost any location within it.  The DRS user is able to accurately see, measure and ultimately target the entire area – even if he is on the other side of a mountain from it.  The power of the DRS is the sensor fusion of this environment for the purpose of aiming the MDR-192 at any target anywhere in the target area.


My second slide showed the generals that, using the AIR and VCS sensor devices combined with one new sensor of my design, an entire tactical zone could be fully rendered in a computer. The total amount of data available is massive, but the end user treats it like the GPS or DRS systems, pulling down only the data needed at that moment for a specific purpose.  That data can support a wide variety of situations that may be encountered, now or in the future, by a wide variety of end users.


My third slide was simply a list of what the CED network would provide to the Army generals as well as to each and every fielded decision maker in the tactical area. I left this list on the screen for another five minutes and began hearing comments like “Oh my god,” “Fantastic!” and “THAT’S what we need!”


Direct and Immediate Benefits and Applications of the CED Network

  ·        Autonomous and manned weapons aiming and fire control

  ·        Navigation, route and tactical planning, attack coordination

  ·        Threat assessment, situation analysis, target acquisition

  ·        Reconnaissance, intelligence gathering, target identity

  ·        Defense/offense analysis, enemy disposition, camouflage penetration


My system was immediately accepted, and I spent the next three days going over it again and again with different levels of the Army and DoD. The only new information I added in those three days was the nature of the third device that joins the AIR and VCS sensors.  I called it the “LOG” – for Local Optical Guide.


The LOG gets its name mostly from its appearance: it looks like a small log or a cut, dried-up branch of a tree.  In fact, great effort has gone into making it look like a natural log so that it will blend in.  There are actually seven different LOGs in appearance, but the insides are all the same.  Each contains four sensor modules. (1) A data transceiver that connects to the CED network and responds to input signals.  The transceiver sends a constant flow of images and other data, but it will also collect and relay data received from other nearby sensors.  To handle the mixing of data, all the transmitters are FM and frequency-agile – meaning they transmit a tiny fraction of the data on one VHF frequency and then hop to another frequency for the next few bits.  The embedded encryption keeps all the systems synchronized, and the effect is that it is nearly impossible to intercept, jam or even detect the presence of these signals. (2) Six high-resolution cameras with night-vision capability.  The cameras are positioned so that no matter how the LOG comes to rest on the ground, at least two of them will be useful for gathering information.  The lenses can be commanded to zoom from a panoramic wide angle to a 6X telephoto, but default to wide angle. (3) An atmospheric module that measures wind, temperature, humidity and pressure. (4) Finally, an acoustic and vibration sensing module with six microphones, one on each surface, accurate enough to give precise intensity and crude directionality for sensed sounds.  There is also a fifth, self-destruct module that is powerful enough to completely destroy the LOG and injure anyone trying to dismantle it.
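
The frequency-agile scheme can be illustrated with a short sketch: both ends derive the same hop sequence from a shared secret, so transmitter and receiver stay synchronized while an eavesdropper sees what looks like noise scattered across the band.  The channel count and key-stretching method here are my own assumptions, not the real system:

```python
import hashlib

def hop_sequence(key: bytes, n_hops: int, channels: int = 200):
    """Derive a channel-hop sequence from a shared key.

    Both transmitter and receiver run this with the same key and so land
    on the same channel for every burst; without the key the sequence is
    unpredictable.  Uses SHA-256 of key + counter as a keyed PRNG (the
    per-byte modulo is slightly biased, fine for a sketch)."""
    seq, counter = [], 0
    while len(seq) < n_hops:
        digest = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        for b in digest:
            seq.append(b % channels)
            if len(seq) == n_hops:
                break
        counter += 1
    return seq
```

Because the next channel is a function of the secret key rather than any pattern in the signal itself, a listener who captures one burst learns nothing about where the next one will appear.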


The LOG works in conjunction with the AIR for acoustic sensing of gunfire. Using the same technology applied in the Boomerang gunfire locator developed by DARPA and BBN Technologies, the CED system can locate the direction and distance to gunfire within one second of the shot.  Because the target area is covered with numerous LOG and AIR sensors, the CED gunfire locator is significantly more accurate than DARPA’s Boomerang system.
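
Acoustic gunfire location of this kind rests on time-difference-of-arrival (TDOA) multilateration: each sensor hears the muzzle blast at a slightly different time, and those differences pin down the source.  Here is a minimal brute-force sketch of the principle – my own illustration, not the Boomerang or CED algorithm:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def locate_shot(sensors, arrival_times, area=2000, step=5):
    """Brute-force TDOA fix over a square grid.

    Finds the grid point whose predicted arrival-time differences best
    match the measured ones.  sensors: list of (x, y) in metres;
    arrival_times: matching list of absolute arrival times in seconds."""
    t0 = arrival_times[0]
    dt_meas = [t - t0 for t in arrival_times]
    best, best_err = None, float("inf")
    for x in range(0, area, step):
        for y in range(0, area, step):
            d = [math.hypot(x - sx, y - sy) for sx, sy in sensors]
            dt_pred = [(di - d[0]) / SPEED_OF_SOUND for di in d]
            err = sum((a - b) ** 2 for a, b in zip(dt_pred, dt_meas))
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

Real systems solve the same geometry in closed form or by least squares rather than a grid search, and more sensors spread around the area tighten the fix – which is why saturating the zone with LOGs and AIRs pays off.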


The total CED system consists of these three modules – LOG, AIR and VCS – and a receiving/processing module that can take the form of a laptop, a handheld or a backpack system. Although the computer processor (laptop) used in the DRS was a very sophisticated analyzer of that system’s sensor inputs, the computer processors for the CED system are substantially more advanced in many ways.  The most important difference is that the CED system is a true network that places all of the sensory data on the air in an RF-transmitted cloud of information that saturates the target area and nearby areas.  It can be tapped into by any CED processor anywhere within range of the network.  Each CED or DRS processor pulls out of the network just the information it needs for the task at hand.  To see how this works, here are some examples of the various uses of the CED system:
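
The pull-only model can be sketched simply: the broadcast cloud carries everything, and each processor filters it down to its task.  The field names and filter criteria below are illustrative assumptions, not the real CED protocol:

```python
def pull(feed, kinds=None, area=None):
    """Filter a broadcast feed down to what one processor needs.

    feed:  list of dicts like {'kind': 'wind', 'pos': (x, y), ...}
    kinds: optional set of sensor-data types to keep
    area:  optional (xmin, ymin, xmax, ymax) box of interest
    """
    out = []
    for item in feed:
        if kinds and item["kind"] not in kinds:
            continue  # not a data type this user cares about
        if area:
            x, y = item["pos"]
            xmin, ymin, xmax, ymax = area
            if not (xmin <= x <= xmax and ymin <= y <= ymax):
                continue  # outside this user's patch of the battlefield
        out.append(item)
    return out
```

A sniper's processor might pull only atmospheric readings near his line of fire, while an HQ terminal pulls imagery for the whole area – the same cloud serves both without either carrying the full load.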



Either a DRS or a CED processor can be used to support the sniper. More traditional snipers using standard rifles will tap into the CED network to obtain highly accurate wind, temperature, pressure and humidity data as well as precise distance measurements.  Using the XM25-style HEAB munitions that are programmed by the shooter, nearly every target within the CED combat area can be hit and destroyed.  The CED computers can input data directly into the XM25/HEAB system so that the sniper does not have to use his laser range-finder to sight in the target.  He can also be directed to aim using the new Halo Sight System (HSS).  This is a modified XM25 fire-control sight that uses a high-resolution LCD thin-film filter to place a small blinking dot at the aim point of the weapon.  This is possible because the CED processor can precisely place the target and the shooter, and can calculate the trajectory based on sensor inputs from the LOG, AIR and VCS sensor grid of the network.  It uses lasers from the AIRs to locate the shooter and images from the VCS and LOG sensors to place the target.  The rest is just mathematical calculation of the aim point to put an HEAB or anti-personnel 25mm round onto the target.  It is also accurate enough to support standard sniper rifles, the M107/M82 .50 caliber rifle or the MDR-192.  Any of these can be fitted with the HSS sight for automated aim-point calculations.
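
Those aim-point calculations boil down to computing a holdover from range, ballistics and wind.  Here is a deliberately simplified flat-fire sketch of my own – vacuum gravity drop plus a crude crosswind term; real fire control also models drag, air density, spin drift and more:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def aim_offset(range_m, muzzle_mps, crosswind_mps):
    """Flat-fire holdover sketch for an automated aim dot.

    Returns (up, into-wind) corrections in milliradians.  Time of
    flight is approximated as range / muzzle velocity (no drag)."""
    tof = range_m / muzzle_mps            # time of flight, seconds
    drop = 0.5 * G * tof ** 2             # gravity drop over flight, metres
    drift = crosswind_mps * tof           # crude wind drift, metres
    up_mrad = drop / range_m * 1000.0     # angular holdover
    wind_mrad = drift / range_m * 1000.0  # angular wind correction
    return up_mrad, wind_mrad
```

The point of a networked sight is that every input here – range, wind along the bullet's path, air data – arrives from the sensor grid instead of the shooter's own estimates.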


In the case of the MDR-192, the rifle is mounted on a digitally controlled tripod that is linked directly to the DRS or CED computer. The effect is to create an autonomous small-caliber artillery weapon.  That means that an operator of a CED (or DRS) computer who has tapped into the CED network can identify a target somewhere in the covered combat arena and send that data to any one of several MDR-192 rifles that have been placed around the combat area.  Each autonomous MDR-192 has an adjustment range of 30 degrees left and right of centerline and 15 degrees up and down.  Since the range of the typical MDR-192 is up to 24,000 feet, four rifles can very effectively cover a target area of up to four square miles.  The computer will instruct the selected MDR-192 to move to the required aim point – accounting for all of the ballistic and environmental conditions – and fire.  As described in the reports on the MDR-192 and DRS, the system can be accessed by an operator located well away from the rifles and the target area – as much as 5 miles.


Recent tests of the CED system and the MDR-192 have proven their effectiveness. The only defense that the enemy has is to stay in an underground bunker.



The CED network is the ultimate forward observer for artillery placement of smart weapons. Using the visual sensors of the LOG and VCS and the gunfire-locator sensors of the LOG and AIR, any target within the entire combat arena can be very precisely located.  It can then be identified with GPS coordinates for the dropping of autonomous weapons such as a cruise missile, or it can be illuminated with a laser from a nearby AIR or MDR-192 as the aim point for smart-weapon fire control.


Even standard artillery has been linked into the CED system. A modified M777 howitzer (155mm) uses a set of sensors strapped to the barrel that can sense its aim point to within 0.0003 degrees in three dimensions.  The CED network data is sent to a relay transmitter and then up to 18 miles away to the M777 crew.  The M777 is moved in accordance with simple arrows and lights until a red light comes on, indicating that the aim point for the designated target has been achieved – then the crew fires.  In tests, this system has placed as many as 25 rounds within a 10-foot (3-meter) radius from 15 miles away.
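
The arrows-and-lights guidance is just a feedback loop comparing the sensed barrel orientation to the commanded aim point.  A minimal sketch of the cueing logic, using the 0.0003-degree sensor tolerance mentioned above (the cue labels themselves are my invention):

```python
def aim_cue(sensed_az, sensed_el, target_az, target_el, tol=0.0003):
    """Return the crew cue for a sensor-guided gun lay.

    Emits arrow cues while either axis is out of tolerance and 'FIRE'
    once both the azimuth and elevation errors are within tol degrees."""
    d_az = target_az - sensed_az
    d_el = target_el - sensed_el
    if abs(d_az) <= tol and abs(d_el) <= tol:
        return "FIRE"  # the red light: aim point achieved
    cues = []
    if abs(d_az) > tol:
        cues.append("RIGHT" if d_az > 0 else "LEFT")
    if abs(d_el) > tol:
        cues.append("UP" if d_el > 0 else "DOWN")
    return "+".join(cues)
```

The crew never sees coordinates at all – they just follow the arrows until the light turns red, which is what makes the scheme usable under fire.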


Intelligence and Reconnaissance

The CED system is also ideally suited to completely defining the enemy’s distribution and activity and to covertly pre-identifying targets for a later assault or barrage. The AIR and LOG systems can pick up sounds that can be matched to the LOG and VCS images and video to place and identify points of activity, vehicles and radios.  The VCS and AIR imaging capability can map movements and identify specific types of equipment, weapons and vehicles in the area.  During the battle, snipers and other gunfire can be located with the acoustic gunfire locator using the AIR and LOG sensors.  The LOG and VCS systems also have gun-flash identifiers that can distinguish muzzle flash in images – even in complete darkness or the brightest daylight.


One of the remarkable additions to the CED processors is the ability to recreate an accurate 3D animation of the target area. This is a 3D rendering of the area that is accurate enough that measurements taken from the image will be within fractions of an inch of the real-world layout.  This makes it possible to pass the 3D rendering back to an HQ or forward planning area for use in the planning, training and management of an assault.


The CED network has just finished field testing in several isolated combat areas in Afghanistan and has proven to be most effective. Work has already begun on improving the AIR, LOG and VCS sensors in an effort to consolidate, miniaturize and conceal them to a greater degree.  Work is also underway on an interface to an autonomous UAV that will add aerial views using laser, IR and visual sensors.


The troops that have used this system consider it the smartest and most advanced combat information system ever devised, and the comment “CED told me” is becoming recognized as the best possible source of combat information.