
Whack-a-Mole comes to the Battlefield

Whack-a-Mole comes to real world combat

An old idea has been updated and brought back in the latest military weapon system.  Back in Vietnam, firebases and forward positions were under constant sneak attack from the Vietcong under the cover of night.  The first response to this was what they called the Panic Minute.  Several times each day and night, a random minute was chosen in which every soldier would fire his weapon into the jungle for one full minute, without any particular target.  We know it worked sometimes because patrols would find bodies just beyond the edge of the clearing.  But it failed often enough that firebases were still being overrun on a regular basis.

The next response was Agent Orange.  It was originally called a “defoliant,” designed simply to make the trees and bushes drop their leaves.  In practice, it killed all plant life and often left the soil infertile for years afterward.  It was stopped when people began to notice that it was not particularly good for humans either, acting as a neurotoxin that caused all kinds of problems in soldiers who were sprayed with it or walked through it.

The third and most successful response to these sneak attacks was a top secret program called Sentry.  Remember when this was: the mid-to-late ’60s and early ’70s.  Electronics was not what it is now.  The Walkman, a portable cassette player, was not introduced until 1979.  We were still using 8-track cartridge tapes and reel-to-reel recorders.  All TVs used tubes, and the concept of the integrated circuit was in its infancy.  Really small spy cameras were about the size of a pack of cigarettes, and really small covert voice transmitters were about half that size.  Of course, like now, the government and the military had access to advances that had not yet been introduced to the public.

One such advance was the creation of the sensors used in the Sentry program.  They started with a highly sensitive vibration detector.  We would call them geophones now, but back then they were just vibration detectors.  To each they attached a small VHF transmitter that would send a clicking sound whenever the detector was set off by vibrations.

The first version of this was called the PSR-1 Seismic Intrusion Detector, which is fully described on several internet sites.  It was a backpack-sized device connected to geophones the size of “D” cell batteries.  It worked and proved the concept, but it was too bulky and required the sensors to be connected to the receiver by wires.  The next version was much better.

  

What was remarkable about the next attempt was that they were able to embed the sensor, transmitter, and batteries inside a hard plastic package coated on the outside with a flat tan or brown irregular finish.  All of this was about the size of a single penlight battery.  That gave the devices the outward appearance of just another rock or dirt clod, and it was surprisingly effective.  These “rocks” were molded into a number of distinct shapes depending on the transmitting frequency.

  

The batteries were encased in the plastic as well, and the whole unit was totally sealed.  It was “on” from the moment of manufacture until the batteries died about two months later.  A box contained 24 units, each on a different frequency and with a different click pattern, and the boxes were shipped in crates of 48.  The receiver was a simple radio with what looked like a compass needle on it.  It was an adaptation of the RFDF (radio frequency direction finder) equipment used on aircraft.  The needle would point toward an active transmitter, and the clicking would be fed to the receiver’s speaker.

  

In the field, a firebase would scatter these rocks in the jungle around the perimeter, keeping a record of the direction in which each frequency’s rocks were thrown from the base.  All of the No. 1 rocks from 6 to 10 boxes were thrown in one direction.  All of the No. 2 rocks were thrown in the next direction, and so on.  The vibration detectors picked up the slightest movement within a range of 10 to 15 meters (30 to 50 feet).  The firebase guards would set up the receiver near the middle of the sensor deployment and monitor it 24 hours a day.  When it began clicking and pointing in the direction of the transmitting sensors, the guard would call for a Panic Minute directed that way.  It was amazingly effective.
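A minimal sketch of the bookkeeping this describes, in Python; the box numbers and bearings below are invented for illustration and are not taken from any fielded system:

```python
# Hypothetical illustration of the firebase record-keeping: each sensor number
# (i.e., each frequency/click pattern) is thrown in a known compass direction,
# so an active transmitter maps straight back to a bearing for a directed
# Panic Minute. All numbers here are invented.

THROW_LOG = {
    1: 0,      # all No. 1 rocks thrown due north of the wire
    2: 45,     # all No. 2 rocks thrown to the northeast
    3: 90,     # ... and so on around the perimeter
    4: 135,
    5: 180,
    6: 225,
    7: 270,
    8: 315,
}

def directed_fire_bearings(active_sensor_numbers):
    """Given the sensor numbers currently clicking, return the compass
    bearings (degrees) the guards should concentrate fire on."""
    return sorted({THROW_LOG[n] for n in active_sensor_numbers if n in THROW_LOG})

# e.g. rocks 2 and 3 both report movement -> fire toward 45 and 90 degrees
print(directed_fire_bearings([2, 3]))   # [45, 90]
```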

  

In today’s Army, this is called geophysical MASINT (measurement and signature intelligence), and the devices have not actually changed much.  The “rocks” still look like rocks, but now they carry more than just seismic sensors.  They can detect specific sounds, chemicals, and light, and they transmit far more than clicks.  The received data is fed into powerful laptop computers and can be displayed as fully analyzed, in-context information with projections of what is happening.  The system can even recommend what kind of response to take.

  

These sensor “rocks” are dispersed at night by UAVs or dropped by recon troops and are indistinguishable from local rocks.  Using reception from several different rocks, it is possible to locate the source of the sensor readings, much the way phone companies track your location by triangulation from multiple cell towers.  Using only the rocks, accuracy is within about ten feet; when all of this data is integrated into the Combat Environmental Data (SID) network, targets can be identified, confirmed, and located to within two or three feet.
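The location step itself is ordinary surveying math.  Below is a minimal sketch that intersects bearing lines from several known sensor positions by least squares; the positions and bearings are invented for illustration:

```python
import numpy as np

def triangulate(positions, bearings_deg):
    """Least-squares intersection of bearing lines from known sensor positions.

    positions    : list of (x, y) sensor coordinates in metres
    bearings_deg : angle from each sensor toward the target, in degrees,
                   measured counter-clockwise from the +x axis
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for (x, y), theta in zip(positions, np.radians(bearings_deg)):
        d = np.array([np.cos(theta), np.sin(theta)])   # unit direction of the bearing line
        P = np.eye(2) - np.outer(d, d)                  # projector onto the line's normal
        A += P
        b += P @ np.array([x, y])
    return np.linalg.solve(A, b)                        # point closest to all bearing lines

# three hypothetical sensor rocks and the bearings they report
sensors  = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
bearings = [45.0, 135.0, -45.0]                         # all pointing at roughly (50, 50)
print(triangulate(sensors, bearings))                   # ~[50. 50.]
```

With noisy bearings the same least-squares step simply returns the point that best fits all the lines, which is why more rocks in view means a tighter fix.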

  

What the Army has done with all this data is create a near-automated version of Whack-a-Mole by integrating artillery and the Digital Rifle System (DRS) into the SID and rock-sensor network.  The result is the ability to set up a kill zone (KZ) that can be as big as 30 miles in diameter.  The KZ is sprinkled with sensor rocks and the AIR systems of the DRS, all linked by the SID network to strategically placed DRS rifles and digitally controlled artillery.  When these systems and sensors are all in place, the Army calls it a WAK zone (pronounced “Whack”), hence the nickname Whack-a-Mole.

  

The WAK zone computers are programmed with recognition software for the specific people, sounds, chemicals, and images that constitute a confirmed kill target.  When the WAK zone computers make that identification, they automatically program the nearest DRS rifle or the appropriate artillery piece to fire on the target.  For now, the actual fire command is still left to a person, but the system is fully capable of a completely automatic mode.  In several tests in Afghanistan, it has not made any identification errors, and the computerized recommendation to shoot has always been confirmed by a manual entry from a live person.
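A hypothetical sketch of that dispatch logic, with invented names and coordinates: the computer picks the nearest shooter and stages a recommendation, but the fire authorization stays with a person.

```python
import math

# Invented illustration of the WAK-zone flow described above: a confirmed
# identification automatically selects and programs the nearest weapon, while
# the actual fire command waits for human confirmation. IDs, positions, and
# field names are made up.

WEAPONS = [
    {"id": "DRS-01",  "kind": "rifle",     "pos": (1200.0, 400.0)},
    {"id": "DRS-02",  "kind": "rifle",     "pos": (-300.0, 2500.0)},
    {"id": "ARTY-07", "kind": "artillery", "pos": (9000.0, -1500.0)},
]

def stage_fire_recommendation(target_pos, confirmed):
    """Return a fire recommendation for the nearest weapon, pending human approval."""
    if not confirmed:
        return None
    distance_to = lambda w: math.dist(w["pos"], target_pos)
    weapon = min(WEAPONS, key=distance_to)
    return {
        "weapon": weapon["id"],
        "target": target_pos,
        "range_m": round(distance_to(weapon), 1),
        "fire_authorised": False,   # a live person still has to confirm
    }

print(stage_fire_recommendation((1500.0, 900.0), confirmed=True))
```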

  

Study groups and contractors are already working on integrating UAVs into the sensor grids so that KZs hundreds of miles in diameter can be defined.  The UAVs would provide not only aerial visual, IR, and RF sensors but would also carry the kill weapon.

  Whack-a-Mole comes to the battlefield!

 

Untethered Planets Are Not What They Seem

  

Two seemingly unrelated recent discoveries were analyzed by a group at NASA with some surprising and disturbing implications.  These discoveries came from a new trend in astronomy and cosmology of looking at “voids”.

The trend is to look at areas of the sky that appear to have nothing in them.  This is being done for three reasons.

  

(1) In 1995, the Hubble was trained on what was thought to be an empty hole in space in which no objects had ever previously been observed.  The picture used the recently installed Wide Field and Planetary Camera 2 to make a Deep Field image.  The image covered 2.5 arc minutes, the width of a tennis ball as seen from 100 meters away.  The 140.2-hour exposure resulted in an image containing more than 3,000 distinct galaxies at distances going out to 12.3 billion light years.  All but three of these were unknown before the picture was taken.  This was such an amazing revelation that this one picture has its own Wikipedia page (Hubble Deep Field), and it altered our thinking for years to come.
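The tennis-ball comparison checks out with small-angle arithmetic, taking a tennis ball to be about 6.7 cm across:

```latex
\theta \;\approx\; \frac{d}{D}
       \;=\; \frac{0.067\ \text{m}}{100\ \text{m}}
       \;\approx\; 6.7\times 10^{-4}\ \text{rad}
       \;=\; 6.7\times 10^{-4}\times\frac{180^{\circ}}{\pi}\times 60
       \;\approx\; 2.3\ \text{arcmin}
```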

  

(2) The second reason is that this image, and every other image or closer examination of a void, has produced new and profound discoveries.  Observations using radio frequencies, infrared, UV, and every other wavelength we have cameras, filters, and sensors to detect have yielded new findings every time they have been turned on these “voids.”

  

(3) In general, the fields of astronomy and cosmology have grown crowded, with many more researchers than there are telescopes and labs to support them.  Hundreds of scientists in these fields do nothing but comb through images and data from past collections looking for something worth studying, and much of that data has been reexamined hundreds of times, leaving very little to discover in it.  These examinations of voids have created a whole new body of raw data that can be studied from dozens of different perspectives by all those extra scientists trying to make a name for themselves.

  

To that end, Takahiro Sumi and his team at Osaka University recently examined one of these voids and found ten Jupiter-sized planets.  The remarkable aspect is that these planets were “untethered” to any star or solar system; they were not orbiting anything.  In fact, they seem to be moving in random directions at relatively high speeds, and eight of the ten are actually accelerating.  Sumi speculates that the planets might be the result of a star that exploded or collided, but that is just a guess.

  

In an unrelated study at the radio telescope array in New Mexico, Albert Swenson and Edward Pillard announced that they had found a number of anomalous RF and infrared emissions coming from several areas of space that fall into the category of voids.  One of the void areas with the strongest signals was the same area Takahiro Sumi had studied.  Their study was unique because they cross-indexed measurements of the same area at a number of different wavelengths and found very weak, moving points of infrared emission that also appeared to be stronger sources of RF, with an unidentified emission in the 1.5 to 3.8 MHz region.  The study produced a great deal of measurement data but drew very few conclusions about what it meant.

  

This abundance of raw data was ripe for one of those many extra grad students and scientists to examine and correlate to something.  The first to do so was Eric Vindin, a grad student doing his doctoral thesis on the arctic aurora.  He was examining something called MF bursts in the auroral roar, an attempt to find the explicit cause of certain kinds of auroral emissions.  What he kept coming back to was a high-frequency component present in the spectrograms of magnetic field fluctuations that were themselves expressed at significantly lower frequencies.  Here is part of his conclusion:

  

“There is evidence that such waves are trapped in density enhancements in both direct measurements of upper hybrid waves and in ground-level measurements of the auroral roar for an unknown fine frequency structure which qualitatively matches and precedes the generation of discrete eigenmodes when the Z-mode maser acts in an inhomogeneous plasma characterized by field-aligned density irregularities.  Quantitative comparison of the discrete eigenmodes and the fine frequency structure is still lacking.”

  

To translate that for real people: Vindin is saying he found a highly modulated high-frequency (HF) signal (what he called a “fine frequency structure”) embedded in the fluctuations of the earth’s magnetic field that make up and drive the background emissions we know as Auroral Kilometric Radiation (AKR).  He can cross-index these modulations of the HF RF to changes in the magnetic field on a gross scale, but he has not been able to identify the exact nature or source of the higher frequencies.  He did rule out the HF RF coming from Earth or its atmosphere, and he found that the signals fell in the range from 1.5 to 3.8 MHz.  Vindin also noted that the HF RF emissions were very low power compared to the AKR and occurred slightly in advance of the changes in the AKR.  His study, published in April 2011, won him his doctorate and a job at JPL in July of 2011.

  

Vindin did not extrapolate his findings into a theory or even a conclusion, but the obvious implication is that these very weak HF RF emissions are causing the very large magnetic field changes seen in the AKR.  If that is true, it is a cause-and-effect relationship with no known parallel in any other theory, experiment, or observation.

  

Now we come back to NASA and two teams of analysts, led by Yui Chiu and Mather Schulz, working as hired consultants to the Deep Space Mission Systems (DSMS) within the Interplanetary Network Directorate (IND) at JPL.  Chiu’s first involvement was to publish a paper critical of Eric Vindin’s work.  He went to great lengths to point out that the 1.5 to 3.8 MHz range is so low in frequency, and therefore in energy, that it is highly unlikely to have an extraterrestrial origin, and even more unlikely to have any effect on the earth’s magnetic field.  This was backed by a lot of math and physics showing that such a low-frequency signal could not travel from outside the earth and still have enough energy to do anything, much less alter a magnetic field.  He showed that there is no known science that would explain how an RF emission could alter a magnetic field at all.  Chiu pointed out that NASA uses UHF and SHF frequencies with narrow-beam antennas and extremely slow modulation to communicate with satellites and space vehicles, because it takes those much higher frequencies to carry usable power across the vast distances of space, and very slow modulation to send any reliable intelligence over them.  That is why it often takes several days to send a single high-resolution picture back from a space probe.  Chiu also argued that the received energy from our own planetary vehicles is about as strong as a cell phone transmitting from 475 miles away, a power level in the nanowatt range.  Unless the HF RF signal originated from an unknown satellite, it could not have come from some distant source in space.
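The narrow-beam part of that argument tracks the standard gain formula for a dish antenna: gain scales with the square of the dish diameter measured in wavelengths, so usefully narrow beams are only practical at short wavelengths.  A rough worked sketch, with an assumed 3 m dish at 60% efficiency:

```latex
G \;\approx\; \eta\left(\frac{\pi D}{\lambda}\right)^{2}
\qquad\Longrightarrow\qquad
G_{2.3\,\text{GHz}} \approx 0.6\left(\frac{\pi \cdot 3\,\text{m}}{0.13\,\text{m}}\right)^{2} \approx 3\times 10^{3}\ (\approx 35\ \text{dBi}),
\qquad
G_{3\,\text{MHz}} \approx 0.6\left(\frac{\pi \cdot 3\,\text{m}}{100\,\text{m}}\right)^{2} \approx 5\times 10^{-3}
```

At 3 MHz the same dish is a tiny fraction of a wavelength across and has essentially no directivity at all, which is why nobody builds narrow-beam deep-space links at HF.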

  

The motivation for Chiu’s paper appears to have been a professional disagreement with Vindin shortly after Vindin came to work at JPL.  In October of 2011, Vindin published a second paper on his earlier study in which he addressed most of Chiu’s criticisms.  He was able to show that the HF RF signal was received by a polar-orbiting satellite before it was detected at an earth-bound antenna array.  The antenna he was using was a modified facility that had once been part of the Distant Early Warning (DEW) Line of massive (200-foot) movable dish antennas installed in Alaska.  The DEW Line signals preceded, but appeared to be synchronized with, the auroral field changes.  This effectively proved that the signal was extraterrestrial.

  

Vindin also tried to address the nature of the HF RF signal and its modulations.  What he described was a unique kind of signal that the military has been playing with for years.

  

In order to reduce the possibility of a radio signal being intercepted, the military uses something called “frequency agility.”  This is a complex technique that breaks the signal being sent into hundreds of pieces per second and transmits each piece on a different frequency.  The transmitter and receiver are synchronized so that the receiver jumps its tuning to match the transmitter’s changes in frequency.  If you could follow the jumps, they would appear random, but they actually follow a coded algorithm.  Someone listening on any one frequency hears only background noise with minor, meaningless blips, clicks, and pops.  Because a listener has no way of knowing where the next bit of the signal will be transmitted, it is impossible to rapidly tune a receiver to intercept these transmissions.  Frequency-agile systems are actually in common use; you can even buy cordless phones that use the technique.
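The synchronization trick is that both ends derive the same pseudo-random hop sequence from a shared secret, so neither has to tell the other where to jump next.  A toy sketch of that idea (the channel plan, key, and hop rate are invented; real frequency-hopping radios are far more involved):

```python
import random

# Toy model of frequency agility: transmitter and receiver seed identical
# pseudo-random generators with a shared secret, so both independently
# compute the same hop schedule. Channel plan and hop rate are invented.

CHANNELS = [30.0 + 0.025 * k for k in range(240)]   # 30-36 MHz band in 25 kHz steps
SHARED_SECRET = "whiskey-tango-7"                   # hypothetical pre-shared key
HOPS_PER_SECOND = 200

def hop_schedule(secret, n_hops):
    """Deterministic hop sequence derived from the shared secret."""
    rng = random.Random(secret)
    return [rng.choice(CHANNELS) for _ in range(n_hops)]

tx_hops = hop_schedule(SHARED_SECRET, HOPS_PER_SECOND)   # transmitter's schedule
rx_hops = hop_schedule(SHARED_SECRET, HOPS_PER_SECOND)   # receiver's schedule

assert tx_hops == rx_hops            # both ends land on the same frequencies
print(tx_hops[:5])                   # first five hop frequencies, in MHz
```

An eavesdropper parked on any single channel sees only the brief moments when the schedule happens to land there, which is exactly the “blips, clicks and pops” described above.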

  

As complex as frequency agility is, there are very advanced, very wide-band receivers and computer processors that can reconstruct an intelligent signal out of the chopped-up emission.  For that reason, the military has been working on the next version of agility.

  

In a much more recent and more complicated development, they are attempting to combine frequency agility with agile modulation.  This method breaks both the frequency and the modulation of the transmitted signal into agile components.  The agile modulation shifts from the base frequency to each of several sidebands and to first- and second-tier resonance frequencies, while also shifting the intermediate frequency (IF) up and down.  The effect is to make it completely impossible to locate or detect any signal intelligence at all in an intercepted transmission; it all sounds like random background noise.
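Under that description, the only change to the toy hopping sketch above would be that the shared schedule picks a modulation scheme along with each carrier.  This is purely an illustration of the idea as described, since no working FMA system is claimed to exist; the modulation menu is invented.

```python
import random

# Hypothetical extension of the hopping sketch: each dwell gets both a carrier
# frequency and a modulation scheme from the same seeded generator, so only a
# receiver holding the shared secret knows how to demodulate each slice.
MODULATIONS = ["USB", "LSB", "NBFM", "BPSK", "QPSK"]   # invented menu of schemes
CHANNELS = [30.0 + 0.025 * k for k in range(240)]

def fma_schedule(secret, n_hops):
    rng = random.Random(secret)
    return [(rng.choice(CHANNELS), rng.choice(MODULATIONS)) for _ in range(n_hops)]

# three (frequency in MHz, modulation) pairs, identical at both ends
print(fma_schedule("whiskey-tango-7", 3))
```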

  

Although it is impossible to reconstruct a frequency-agile signal that is also modulation agile (called “FMA”), it is possible, with very advanced processors, to detect that an FMA signal is present.  This relies on powerful math algorithms running over massive amounts of recorded data, and the analysis is only resolved many hours after the transmission ends.  Even then, it can only confirm to a high probability that an FMA signal is present, without providing any indication of what is being sent.
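The simplest possible form of “presence without content” detection is an energy test: decide whether a recording holds more power than noise alone should, while saying nothing about what is in it.  The toy below is only a stand-in for whatever the actual processing involves; every number in it is invented.

```python
import numpy as np

# Toy illustration of detecting that *something* is present without recovering
# any content: compare the average energy of a recording against what pure
# noise should produce. All signal levels and thresholds are invented.

rng = np.random.default_rng(0)
N = 100_000
noise_only = rng.normal(0.0, 1.0, N)
hidden_sig = rng.normal(0.0, 1.0, N) + 0.2 * np.sign(rng.normal(size=N))  # faint structured content

def energy_stat(x):
    return np.mean(x ** 2)

# For unit-variance Gaussian noise the statistic averages 1.0 with standard
# deviation sqrt(2/N); flag anything more than 3 sigma above that.
threshold = 1.0 + 3.0 * np.sqrt(2.0 / N)
print(energy_stat(noise_only) > threshold)     # expected False: nothing detectable
print(energy_stat(hidden_sig) > threshold)     # expected True: something is there, content unknown
```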

  

This makes it ideal for encrypted messages, but even our best labs have been able to make it work only when the transmitter and receiver are physically wired together so that they can synchronize their agile reconstruction correctly.  The NRL is experimenting with mixes of FMA and non-FMA, digital and analog emissions all sent at the same time, but it is years away from deploying a functional FMA system.

  

I mention all this because, as part of his rebuttal, Vindin was able to secure the use of NASA’s powerful signal processing computers to analyze the signals he had recorded, and he confirmed a 91% probability that the signal is FMA.  This has, of course, been a huge source of controversy, because it appears to indicate that we are detecting a signal we do not have the technology to create.  The NRL and NSA have been following all of this with great interest and have independently confirmed Vindin’s claims.

  

What all this means is that we may never be able to reconstruct the signal to the point of understanding or even seeing text, images or other intelligence in it but what it does absolutely confirm is that the signal came from an intelligent being and was created specifically for interstellar communications.  There is not even a remote chance that anything in the natural world or in the natural universe could have created these signals out of natural processes.  It has to be the deliberate creation of intelligent life.

  

What came next was a study by Mather Schulz that is, and has remained, classified.  I had access to it because of my connections at NRL and because I have a long history in R&D in advanced communications techniques.  Schulz took all of these different reports and assembled them into a very logical, sequential argument that the untethered planets are not only the source of the FMA signals but are not planets at all.  They are planet-sized spaceships.

  

Once he came to this conclusion, he went back to each of the contributing studies looking for further confirming evidence.  In the Takahiro Sumi study from Osaka University and in the Swenson and Pillard study, he found that the infrared emissions were much stronger on the side facing away from the line of travel, and that there was a faint trail of infrared emission behind each of the untethered planets.

  

This would be consistent with heat emissions from some kind of propulsion system pushing the spaceship along.  What form of propulsion could move a planet-sized spaceship is unknown, but the fact that we can detect the IR trail at such great distances indicates that it produces a very large stream of heated or ionized particles extending far behind the moving planets.  Finding this on eight of the ten untethered planets was encouraging, and he also noted that the two without these IR emissions are the only ones that are not accelerating, which would be consistent with a propulsion system that has been shut down while the spaceship coasts.

  

The concept of massive spaceships has always been one of the leading solutions to sub-light-speed interstellar travel.  The idea is usually called the “generation ship”: a vessel capable of supporting a population large enough, for long enough, that multiple generations of people can survive in space for the decades or centuries needed to travel between star systems or even galaxies.  Once a planet is free from its gravitational tether to its star, it is free to move through open space.  Replacing the light and heat of a sun is not a difficult technological problem when you consider the possible use of thermal energy from the planet’s core, and a civilization that has achieved this level of science would probably find numerous other viable solutions.

  

Schulz used a combination of the Very Large Array of interferometric antennas at Socorro, New Mexico, along with the systems at Pune, India, and Arecibo, Puerto Rico, to collect data, and then had the bank of Panther Cray computers at NSA analyze it to determine that the FMA signals were coming from a region of space that exactly matched the void measured and studied by Takahiro Sumi.  NSA was more than happy to let Schulz use its computers to prove that it had not dropped the ball and allowed someone else on earth to develop a radio signal it could not intercept and decipher.

  

Schulz admitted that he could not narrow the detection down to a single untethered planet (or spaceship), but he could isolate it to the immediate vicinity where they were detected.  He also verified the Swenson and Pillard finding that other voids had similar but usually weaker readings.  He pointed out that there may be many more signal sources from many more untethered planets, but that outside of these voids the weak signals are deflected or absorbed by intervening objects.  He conceded that finding the signals in other voids does not confirm that those voids also hold untethered planets, but noted that it does not rule out the possibility either.

  

Finally, Schulz set up detection apparatus to measure the FMA signals with the worldwide network of radio telescopes while simultaneously recording magnetic, visual, and RF signals from the Auroral Kilometric Radiation (AKR).  He obtained the visual images from synchronized high-speed video recordings made by ISIS in cooperation with the Laboratory for Planetary Atmospherics at the Goddard Space Flight Center.

  

With NSA’s help again, he was able to identify a very close correlation among these three streams of data, showing that it was indeed the FMA signal originating from these untethered planets that preceded, and apparently caused, corresponding changes in the lines of magnetic force made visible in the AKR.  The visual confirmation was not based on changes in the shape or form of the AKR but on color changes that occurred at a much higher frequency than the apparent movements of the aurora lights.  What was being measured was the increase and decrease in the flash rate of individual visible-spectrum frequencies.  Even with high-speed imaging, they were only able to pick up momentary fragments of the signal, rather like catching a single frame of a movie every 100 or 200 frames.  Despite the intermittent nature of the visual measurements, what was observed synchronized exactly with the magnetic and RF signals, giving a third source of confirmation.  Schulz offered only shallow speculation that the FMA signal is a combined frequency- and modulation-agile signal using frequencies and modulation methods far beyond our ability to decipher.
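The lead/lag claim is the kind of thing a cross-correlation test makes concrete: slide one stream against the other and see where the correlation peaks.  A toy sketch with synthetic stand-in streams (the real analysis obviously involved far more than this):

```python
import numpy as np

# Toy version of the lead/lag test: cross-correlate two streams and find the
# offset where they line up best. The streams here are synthetic stand-ins,
# not real AKR or RF data.

rng = np.random.default_rng(1)
n = 5000
driver = rng.normal(size=n)                                       # stand-in for the FMA-derived stream
true_lag = 40                                                     # samples by which the response trails
response = np.roll(driver, true_lag) + 0.5 * rng.normal(size=n)   # stand-in for the AKR stream

xcorr = np.correlate(response - response.mean(),
                     driver - driver.mean(), mode="full")
lags = np.arange(-n + 1, n)                  # lag of `response` relative to `driver`
print(lags[np.argmax(xcorr)])                # ~40: the driver stream leads by about 40 samples
```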

  

This detection actually supports a theory that has been around for years: that a sufficiently high frequency, modulated in harmonic resonance with the atomic-level vibrations of the solar wind (the charged particles streaming out of the sun that create the aurora at the poles), can be used to create harmonics at very long wavelengths, essentially producing slow condensations and rarefactions in the AKR.  This is only a theory, based on math models that seem to make it possible, and the control of the frequencies involved is far beyond any known or even speculated technology, so it is mostly dismissed.  Schulz mentions it only because it is the only known reference to a possible explanation for the observations, and it gains some credibility because the theory’s math model maps exactly onto what was observed.
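The “slow condensations and rarefactions” part of that idea is at least mathematically familiar: two high frequencies that are close together superpose into a fast oscillation under a slow envelope.  A standard identity, offered here only to illustrate the kind of math such a model leans on:

```latex
\sin(2\pi f_1 t) + \sin(2\pi f_2 t)
  \;=\; 2\,\cos\!\bigl(\pi (f_1 - f_2)\,t\bigr)\,\sin\!\bigl(\pi (f_1 + f_2)\,t\bigr)
```

With, say, f1 = 3.800 MHz and f2 = 3.799 MHz, the envelope beats at 1 kHz, far below either component; the theory’s claim is essentially that something like this could be driven at the much larger scales of the AKR.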

  

Despite the low energy and low frequency of the signal, and despite the fact that we have no theory or science that can explain it, the evidence was conclusive and irrefutable.  Those untethered planets appear to be moving under their own power and are emitting some unknown kind of signal that is somehow able to modulate our entire planet’s magnetic field.  The conclusion that they are actually very large spaceships, carrying intelligent life capable of creating these strange signals, seems unavoidable.

  

The most recent report from Schulz was published in late December 2011.  The fallout and reaction to all of this is still in its infancy.  I am sure they will not make this public for a long time, if ever.  I have already seen and heard about efforts to work on this at several DoD and private classified labs around the world.  I am sure this story is not over.

  

We do not yet know how to decode the FMA signals, and we don’t have a clue how they affect the AKR, but our confirmed and verified observations point to only one possible conclusion: we are not alone in the universe, and whoever is out there has vastly more advanced technology and intelligence than we do.