Category Archives: Future History

Science or not, there are aspects of the future that are absolutely certain to happen. That the sun will rise tomorrow is simply history that has not yet happened. So it is with these stories.

Investing Computer

One of the most interesting applications of computer technology is in the field of investing.  It is curious that, with all the sophisticated systems available and all the monetary rewards at stake, there has never been a program that could guide a broker to foolproof investment predictions… until now.  Out of all the investors and resources on Wall Street, none of them performs much better than slightly above random selection in picking the optimum investment portfolio.  Numerous studies on this subject show that the very best investment advisors achieve perhaps a 10% or 15% improvement over random selection, and that even the best analysts cannot sustain their success for very long.

There are lots of people who can see very near-term trends (on the order of a few days or a week or two, at most) and invest accordingly, but no one has figured out how to consistently predict stock rises and falls over the long term (more than 3 or 4 weeks out).  That was the problem I set out to solve – not because I want to be rich but because it seemed like an interesting challenge.  It combines the mathematics of finance and social psychology with computer logic.

I did a lot of research and determined that, in fact, no one knows how to do it, but there is a lot of mathematical research suggesting that it should be predictable using complex math, like chaos theory.  That means I would have to create the math, and I am not that good at math.  However, I do know how to design analytical software, so I decided to take a different approach and create a tool that would create the math for me.  That I could do.

Let me explain the difference.  In college, I took programming, and one assignment was to write a program that would solve a six-by-six numeric matrix multiplication problem, but we had to do it in 2,000 bytes of computer core memory.  The exercise used machine code and taught optimal, efficient coding.  It is actually very difficult to fit all the operations needed into just 2K of memory, and most of my classmates either did not complete the assignment or worked hundreds of hours on it.  I took a different approach.  I determined that the answer was going to be a whole positive number, so I wrote a program that asked if “1” was the answer and checked to see if that solved the problem.  When it didn’t, I added “1” to the answer and checked again.  I repeated this until I got to the answer.  My code was the most accurate and by far the fastest the instructor had ever seen.
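The shape of that trick is easy to show.  Here is a minimal Python sketch of the increment-and-test idea; the check function and target value are invented stand-ins (the real assignment was written in machine code against a six-by-six matrix):

```python
# Toy illustration of increment-and-test: instead of deriving the answer,
# guess 1, check it, add 1, and repeat until the check passes.

def solves_problem(candidate):
    # Invented stand-in check; the real assignment verified a 6x6 matrix result.
    return candidate * candidate == 144

def brute_force_answer(limit=1_000_000):
    guess = 1
    while guess <= limit:
        if solves_problem(guess):
            return guess
        guess += 1
    return None

print(brute_force_answer())  # -> 12
```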

I got the answer correct and fast but I didn’t really “solve” the problem.  That is how I decided to approach this investment problem.  I created a program that would take an educated guess at an algorithm that would predict future stock values.  If it was wrong, then I altered the algorithm slightly and tried again.  The initial guessed algorithm needed to be workable and the method of making the incremental changes had to be well thought out.

The answer is something called forward-chaining neural nets with an internal learning, or evolving, capability.  I could get really technical, but the gist of it is this – I first created a placeholder program (No. 1) that allows for hundreds of possible variables but has many of them set to one or zero.  It then selects inputs from available data and assigns that data to the variable placeholders.  It then defines a possible formula that might predict the movements of the stock market.  This program has the option to add additional input parameters, constants, variables, input data and computations to the placeholder formula.  It seeks out data to insert into the formula.  In a sense, it allows the formula to evolve into totally new algorithms that might include content that has never been considered before.
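Program No. 1 is described above only in prose, so here is a hedged toy sketch, in Python, of the kind of thing it does: hold a placeholder formula as data, assign inputs to its variable slots, and let it mutate.  The feature names, weights and mutation rule are all invented for illustration, not taken from the real system.

```python
import random

# Toy stand-in for Program No. 1: a "formula" is a weighted sum of input
# features, with most weights starting at 0 or 1 as placeholders.
FEATURES = ["price_lag_1", "volume", "pe_ratio", "consumer_sentiment", "rate_spread"]

def new_formula():
    """Create an initial placeholder formula with weights set to 0 or 1."""
    return {f: random.choice([0.0, 1.0]) for f in FEATURES}

def mutate(formula, step=0.1):
    """Alter the formula slightly: nudge the weight of one randomly chosen term."""
    child = dict(formula)
    f = random.choice(FEATURES)
    child[f] += random.uniform(-step, step)
    return child

def predict(formula, row):
    """Apply the evolved formula to one row of input data."""
    return sum(w * row.get(f, 0.0) for f, w in formula.items())
```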

Then I created a program (No. 2) that executes the formula created by program No. 1, using all the available input data and the selected parameters or constants, and generates specific stock predictions.  This program uses a Monte Carlo style of iteration in which all the parameters are varied over a range in various combinations and the calculations are repeated.  It can also place any given set of available data into various or multiple positions in the formula.  This can take hundreds of thousands (up to millions) of repetitions of executing the formulas to examine all the possible combinations of all the possible variations of all the possible variables in all the possible locations in the formula.
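As a rough illustration only, and not the actual Program No. 2, a Monte Carlo style sweep that re-runs a candidate formula across many random parameter combinations might look like this (the two-parameter formula and the data are made up):

```python
import random

def formula(row, a, b):
    # Invented two-parameter placeholder formula.
    return a * row["momentum"] + b * row["sentiment"]

def monte_carlo_sweep(rows, a_range, b_range, samples=1000):
    """Try many (a, b) combinations and collect the predictions each produces."""
    results = []
    for _ in range(samples):
        a = random.uniform(*a_range)
        b = random.uniform(*b_range)
        preds = [formula(r, a, b) for r in rows]
        results.append(((a, b), preds))
    return results

# Fifty rows of synthetic input data, just to make the sketch runnable.
rows = [{"momentum": random.gauss(0, 1), "sentiment": random.gauss(0, 1)} for _ in range(50)]
sweep = monte_carlo_sweep(rows, (-1.0, 1.0), (-1.0, 1.0))
```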

Then I created a program (No. 3) that evaluates the results against known historical data.  If the calculations of program No. 2 are not accurate, this third program notifies the first program, which changes its inputs and/or its formula, and the process repeats.  The third program also keeps track of trends that might indicate that the calculations are getting more accurate and makes appropriate edits to the previous programs.  This allows the process to focus in on any algorithm that begins to show promise of leading to an accurate prediction capability.
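Taken together, the three roles form an evolutionary search loop: guess, score against known history, keep the promising candidates, mutate, repeat.  The self-contained sketch below compresses that loop into a few lines with an invented one-parameter “formula” and synthetic history, purely to show the feedback structure; it is not the real three-program system.

```python
import random

def score(candidate, history):
    """Program No. 3's role: how well does the candidate predict known outcomes?"""
    err = sum((candidate * h["signal"] - h["actual"]) ** 2 for h in history)
    return -err  # higher is better

def evolve(history, generations=200, pop_size=30):
    # Program No. 1's role: initial guessed "formulas" (here, single coefficients).
    population = [random.uniform(-2, 2) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda c: score(c, history), reverse=True)
        parents = ranked[: pop_size // 5]            # keep the most promising candidates
        # Feedback to No. 1: mutate the survivors to form the next generation.
        population = [p + random.gauss(0, 0.05) for p in parents for _ in range(5)]
    return max(population, key=lambda c: score(c, history))

# Synthetic "history" whose true relationship is actual = 0.7 * signal + noise.
history = [{"signal": s, "actual": 0.7 * s + random.gauss(0, 0.1)}
           for s in (random.gauss(0, 1) for _ in range(100))]
print(evolve(history))  # should converge near 0.7
```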

I then created a sort of super command-and-override program that first replicates this entire three-program process and then manages the outputs of dozens of copies of the No. 2 and No. 3 programs, treating them as if they were one big processor.  This master executive program can override the other three by injecting changes that have been learned in other sets of the three programs.  This allowed me to set up multiple parallel versions of the three-program analysis and speed the overall analysis many times over.

As you might imagine, this is a very computer-intensive system.  The initial three programs were relatively small, but as the system developed, they expanded into first hundreds and then thousands of parallel copies, all of them reading from data sets placed in a bank of DBMSs holding hundreds of gigabytes of historical data.  As the size of the calculations and data grew, I began to divide the data and processing among multiple computers.

I began with input financial performance data that was known during the period from 1980 through 2010.  Those 30 years of data include the full details of millions of data points about tens of thousands of stocks, as well as huge databases of socio-economic data about the general economy, politics, international news, and research papers and surveys on the psychology of consumers, the general population and world leaders.  I was surprised to find that a lot of this data had already been accumulated for dozens of previous studies.  In fact, most of the input data I used came from previous research studies, and I was able to use it in its original form.

Program No. 1 used data that was readily available from these historical research records.  Program No. 3 uses slightly more recent historical stock performance data.  In this way, I can generate possible predictive calculations and then check them against real-world performance.  For instance, I input historical 1980 data and see if it predicts what actually happened in 1981.  Then I advance the inputs and the predictions by a year.  Since I have all this data, I can see whether the 1980-based calculations accurately predict what happened in 1981.  By repeating this for the entire 30 years of available data, I can try out millions of variations of the analysis algorithms.  Once I find something that works on this historical data, I can advance it forward to input current data and predict future stock performance.  If that works, then I can try using it to guide actual investments.
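That year-by-year test is what is usually called walk-forward validation.  A small sketch of the loop, with placeholder fit/predict/actual functions standing in for the three programs (they are not the real ones), shows the structure:

```python
def walk_forward(years, fit, predict, actual):
    """Fit on year N, test the prediction against year N + 1, then slide forward.

    fit, predict and actual are caller-supplied stand-ins that mark where the
    three programs would plug in.
    """
    results = []
    for year in years[:-1]:
        model = fit(year)                      # e.g. evolve the formula on 1980 data
        results.append((year + 1, predict(model, year + 1), actual(year + 1)))
    return results

# Trivial usage with dummy functions, just to show the loop over 1980-2010:
report = walk_forward(
    list(range(1980, 2011)),
    fit=lambda year: {"weight": 1.0},
    predict=lambda model, year: model["weight"] * year,
    actual=lambda year: float(year),
)
```

Only after the loop scores well across all of the historical year-pairs would the same machinery be pointed at current data.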

This has actually been attempted before.  Back in 1991, a professor of logic and math at MIT created a neural net to do just what I have described above.  It was partially successful, but the software, the input data and the computer hardware back then were far less capable than what I used.  In fact, I found that even my very powerful home computer systems were much too slow to process the massive volumes of data needed.  To get past this problem, I created a distributed-processing version of my programs that allowed me to split up the calculations among a large number of computers.  I then wrote a sort of computer virus that installed these various computational fragments on dozens of college and university computers around the country.  Such programs are not uncommon on campus computers, and I was only using 2 or 3% of each system’s total assets, but collectively it was like using 500 high-end PCs, or about three-quarters of one supercomputer.

Even with all that processing power, it took more than 18 months and more than 9,700 hours of processing time on 67 different computers before I began to see a steady improvement in the predictive power of the programs that were evolving.  By then, the formula and data inputs had evolved into a very complex algorithm that I would never have imagined, but it was closing in on a more and more accurate version.  By early 2011, I was getting up to 85% accurate predictions of both short-term and long-term fluctuations in the S&P and Fortune 500 indexes as well as several other mutual fund indexes.

Short-term predictions were upwards of 95% accurate, but only out 24 to 96 hours.  The long-term accuracy dropped off from 91% for one week out to just under 60% for one year out… but it was slowly getting better and better.

By June of last year, I decided to put some money into the plan.  I invested $5,000 in a day-trader account and then allowed my software to direct my trades.  I limited the trades to one every 72 hours, and the commissions ate up a lot of the profits on such a small investment, but over a period of 6 months I had pushed that $5,000 to just over $29,000.  This partially validated the predictive quality of the formulas, but it is just 2.5% of what it should have been if my formulas were exactly accurate.  I have since done mock investments with much larger sums and longer investment intervals and had some very good success.  I have to be careful, because if I show too much profit I’ll attract a lot of attention and get investigated or hounded by news people, neither of which I want.

The entire system was steadily improving in its accuracy, but more and more of my distributed programs on the college systems were being caught and erased.  These were simply duplicate parallel systems, but it began to slow the overall advance of the processing.  I was at the point of making relatively minor refinements to a formula that had evolved from all of this analysis.  Actually, it was not a single formula.  To my surprise, what evolved was a sort of sequence of interactive formulas that used a feedback loop of calculated data to analyze the next step in the process.

I tried once to reverse-engineer the whole algorithm, but it got very complex and there were steps that were totally baffling.  I was able to figure out that it looked at a stock’s fundamentals, then it looked at the state of the economy, which it applied to the stock’s performance.  All that seems quite logical, but then it processed dozens of “if-then” statements related to micro, macro and global economics in a sort of logical scoring process that was then used to modify parameters of the stock performance.  This looping and scoring repeated several times and seemed to be the area that was being refined in the final stages of my analysis.

By June of 2012, I was satisfied that I had accomplished my goal.  I had a processing capability that was proving to be accurate in the 89 to 95% range for predictions out two to six weeks, and it was still learning and evolving when I took it offline.  I had used the system enough to cover all the costs of the hardware and software I invested in this project, plus a little extra for a much-needed vacation.  I never did this for the money, but it is nice to know that it works and that if I ever need a source of funding for a project, I can get it.

Whack-a-Mole comes to the Battlefield

Whack-a-Mole comes to real world combat

An old idea has been updated and brought back in the latest military weapon system.  Back in Vietnam, the firebases and forward positions were under constant sneak attack from the Vietcong under the cloak of night.  The first response to this was what they called the Panic Minute.  This was a random minute, chosen several times per day and night, in which every soldier would shoot his weapon for one full minute.  They would shoot into the jungle without any particular target.  We know it worked sometimes because patrols would find bodies just beyond the edge of the clearing.  But it also failed many times, and firebases were being overrun on a regular basis.

The next response was Agent Orange.  It was originally called a “defoliant” and was designed just to make the trees and bushes drop their leaves.  Of course, the actual effect was to kill all plant life, often leaving the soil infertile for years afterward.  They stopped using it when they began to notice that it was not particularly good for humans either.  It acted as a neurotoxin, causing all kinds of problems in soldiers who were sprayed or who walked through it.

The third and most successful response to these sneak attacks was a top secret program called Sentry.  Remember when this was – in the mid-to-late 60’s and early 70’s.  Electronics was not like it is now.  The Walkman, which was simply a battery-operated portable cassette player, was not introduced until 1979.  We were still using 8-track cartridge tapes and reel-to-reel recorders.  All TVs used tubes, and the concept of integrated circuits was in its infancy.  Really small spy cameras were about the size of a pack of cigarettes, and really small spy-type voice transmitters were about half that size.  Of course, like now, the government and the military had access to advances that had not yet been introduced to the public.

One such advance was the creation of the sensors used in the Sentry program.  They started with a highly sensitive vibration detector.  We would call them geophones now, but back then they were just vibration detectors.  Then they attached a VHF transmitter that would send a clicking sound in response to the detectors being activated by vibrations.

The first version of this was called the PSR-1 Seismic Intrusion Detector, and it is fully described on several internet sites.  This was a backpack-sized device connected to geophones the size of “D” cell batteries.  It worked and proved the concept, but it was too bulky and required the sensors to be connected by wires to the receiver.  The next version was much better.

  

What was remarkable about the next attempt was that they were able to embed the sensor, transmitter and batteries inside a package of hard plastic coated on the outside with a flat tan or brown irregular surface, all of it about the size of one penlight battery.  This gave them the outward appearance of being just another rock or dirt clod, and it was surprisingly effective.  These “rocks” were molded into a number of unique shapes depending on the transmitting frequency.

  

The batteries were also encased in the plastic, and the unit was totally sealed.  It was “on” from the moment of manufacture until the batteries died about two months later.  A box contained 24 sensors on 24 different frequencies with 24 different click patterns, and the boxes were shipped in crates of 48.  The receiver was a simple radio with what looked like a compass needle on it.  It was an adaptation of the RFDF (radio frequency direction finder) used on aircraft.  It would point the needle toward an active transmitter and feed the clicking to its speaker.

  

In the field, a firebase would scatter these rocks in the jungle around the perimeter, keeping a record of the direction in which each different frequency of rock was thrown.  All of the No. 1 rocks from 6 to 10 boxes were thrown in one direction, all of the No. 2 rocks were thrown in the next direction, and so on.  The vibration detectors picked up the slightest movement within a range of 10 to 15 meters (roughly 30 to 50 feet).  The firebase guards would set up the receiver near the middle of the sensor deployment and monitor it 24 hours a day.  When it began clicking and pointing in the direction of the transmitting sensors, the guard would call for a Panic Minute directed that way.  It was amazingly effective.

  

In today’s Army, they call this geophysical MASINT (measurement and signature intelligence), and the devices have not actually changed much.  The “rocks” still look like rocks, but now they carry sensors other than just seismic ones.  Now they can detect specific sounds, chemicals and light, and they can transmit more than just clicks to computers.  The received quantitative data is fed into powerful laptop computers and can be displayed as fully analyzed, in-context information with projections of what is happening.  It can even recommend what kind of response to take.

  

These sensor “rocks” are dispersed at night by UAVs or dropped by recon troops and are indistinguishable from local rocks.  Using multiple sensors and reception from several different rocks, it is possible to locate the source of the sensor readings to within a few feet, much the same way the phone companies can track your location by triangulation from multiple cell towers.  Using only these rocks, the position error can be brought down to ten feet or less, but when all this data is integrated into the Combat Environmental Data (CED, pronounced “SID”) network, targets can be identified, confirmed and located to within 2 or 3 feet.
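This kind of triangulation is, at its core, least-squares multilateration: several known sensor positions plus estimated ranges pin down one unknown point.  Here is a generic sketch of that math (the sensor positions and ranges are invented, and this is not the actual CED algorithm):

```python
import numpy as np

def locate(sensor_xy, ranges):
    """Least-squares position fix from several sensors' estimated ranges.

    Linearize by subtracting the first sensor's range equation from the rest,
    then solve the resulting A x = b system for the unknown (x, y).
    """
    p0, r0 = sensor_xy[0], ranges[0]
    A, b = [], []
    for p, r in zip(sensor_xy[1:], ranges[1:]):
        A.append([2 * (p[0] - p0[0]), 2 * (p[1] - p0[1])])
        b.append(r0**2 - r**2 + p[0]**2 - p0[0]**2 + p[1]**2 - p0[1]**2)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x

# Three "rocks" at known spots, with ranges to a target near (12, 7) meters:
sensors = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0)]
target = np.array([12.0, 7.0])
ranges = [np.linalg.norm(target - np.array(s)) for s in sensors]
print(locate(sensors, ranges))   # approximately [12.  7.]
```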

  

What the Army has done with all this data is create a near-automated version of Whack-a-Mole by integrating artillery and the Digital Rifle System (DRS) into the CED and rock sensor network.  The result is the ability to set up a kill zone (KZ) that can be as big as 30 miles in diameter.  This KZ is sprinkled with the sensor rocks and the AIR sensors of the DRS, and linked by the CED network to strategically placed DRS rifles and digitally controlled artillery.  When these various systems and sensors are all in place, the Army calls it a WAK zone (pronounced “Whack”) –  hence the nickname Whack-a-Mole.

  

The WAK zone computers are programmed with recognition software for specifically targeted people, sounds, chemicals and images that constitute a confirmed kill target.  When the WAK zone computers make that identification, they automatically program the nearest DRS rifle or the appropriate artillery piece to fire on the target.  For now, the actual fire command is still left to a person, but the system is fully capable of a fully automatic mode.  In several tests in Afghanistan, it has not made any identification errors, and the computerized recommendation to shoot has always been confirmed by a manual entry from a live person.

  

Studies and contractors are already working on integrating UAVs into the sensor grids so that KZs hundreds of miles in diameter can be defined.  The UAVs would provide not only aerial visual, IR and RF detection but would also carry the kill weapon.

  Whack-a-Mole comes to the battlefield!

 

Untethered Planets Are Not What They Seem

  

Two seemingly unrelated recent discoveries were analyzed by a group at NASA with some surprising and disturbing implications.  These discoveries came from a new trend in astronomy and cosmology of looking at “voids”.

The trend is to look at areas of the sky that appear to have nothing in them.  This is being done for three reasons.

  

(1) In 1995, the Hubble was trained on what was thought to be an empty hole in space in which no objects had ever been observed.  The picture used the recently installed Wide Field and Planetary Camera 2 to take a Deep Field image.   The image covered 2.5 arc minutes – the width of a tennis ball as seen from 100 meters away.  The 140.2-hour exposure resulted in an image containing more than 3,000 distinct galaxies at distances going out to 12.3 billion light years away.  All but three of these were unknown before the picture was taken.  This was such an amazing revelation that this one picture has its own Wikipedia page (Hubble Deep Field), and it altered our thinking for years to come.

  

(2) The second reason is that this image, and every other image or closer examination of a void, has produced new and profound discoveries.  Observations using radio frequencies, infrared, UV, and all the other wavelengths for which we have cameras, filters and sensors have resulted in new findings every time they are turned on “voids”.

  

(3) In general, the fields of astronomy and cosmology have been getting crowded, with many more researchers than there are telescopes and labs to support them.  Hundreds of scientists in these fields do nothing but comb through the images and data of past collections to find something worth studying.  Much of that data has been reexamined hundreds of times, and there is very little left to discover in it.  These examinations of voids have created a whole new set of raw data that can be examined from dozens of different perspectives, giving all these extra scientists a chance to make a name for themselves.

  

To that end, Takahiro Sumi and his team at Osaka University recently examined one of these voids and found 10 Jupiter-sized planets, the remarkable aspect being that these planets were “untethered” to any star or solar system.  They were not orbiting anything.  In fact, they seem to be moving in random directions at relatively high speeds, and 8 of the 10 are actually accelerating.  Takahiro Sumi speculates that these planets might be the result of a star that exploded or collided, but that is just a guess.

  

In an unrelated study at the radio telescope array in New Mexico, Albert Swenson and Edward Pillard announced that they had found a number of anomalous RF and infrared emissions coming from several areas of space that fall into the category of voids.  One of the void areas with the strongest signals was the same area that Takahiro Sumi had studied.  Their study was unique because they cross-indexed a number of different wavelength measurements of the same area and found very weak, moving points of infrared emission that appeared to be stronger sources of RF emission, with an unidentified energy emission in the 1.5 to 3.8 MHz region.   The study produced a great deal of measurement data but made very few conclusions about what it meant.

  

The abundance of raw data was ripe for one of those many extra grad students and scientists to examine and correlate to something.  The first to do so was Eric Vindin, a grad student doing his doctoral thesis on the arctic aurora.  He was examining something called the MF-bursts in the auroral roar – an attempt to find the explicit cause of certain kinds of auroral emissions.  What he kept coming back to was that there was a high frequency component present in the spectrograms of the magnetic field fluctuations that were expressed at significantly lower frequencies.  Here is part of his conclusion:

  

“There is evidence that such waves are trapped in density enhancements in both direct measurements of upper hybrid waves and in ground-level measurements of the auroral roar for an unknown fine frequency structure which qualitatively matches and precedes the generation of discrete eigenmodes when the Z-mode maser acts in an inhomogeneous plasma characterized by field-aligned density irregularities.  Quantitative comparison of the discrete eigenmodes and the fine frequency structure is still lacking.”

  

To translate that into terms real people can understand, Vindin is saying that he found a highly modulated high frequency (HF) signal (what he called a “fine frequency structure”) embedded in the magnetic field fluctuations that make up and cause the background emissions we know as the Auroral Kilometric Radiation (AKR).  He can cross-index these modulations of the HF RF against changes in the magnetic field on a gross scale but has not been able to identify the exact nature or source of these higher frequencies.   He did rule out that the HF RF was coming from Earth or its atmosphere.  He found that the signals were in the range from 1.5 to 3.8 MHz.  Vindin also noted that the HF RF emissions were very low power compared to the AKR and occurred slightly in advance of (sooner than) changes in the AKR.  His study, published in April 2011, won him his doctorate and a job at JPL in July of 2011.

  

Vindin did not extrapolate his findings into a theory or even a conclusion, but the obvious implication is that these very weak HF RF emissions are causing the very large magnetic field changes in the AKR.  If that is true, then it is a cause-and-effect relationship that has no known correlation in any other theory, experiment or observation.

  

Now we come back to NASA and two teams of analysts, led by Yui Chiu and Mather Schulz, working as hired consultants to the Deep Space Mission Systems (DSMS) within the Interplanetary Network Directorate (IND) of JPL.   Chiu’s first involvement was to publish a paper critical of Eric Vindin’s work.  He went to great lengths to point out that the relatively low frequency range of 1.5 to 3.8 MHz is so low in energy that it is highly unlikely to have extraterrestrial origins, and it is even more unlikely that it would have any effect on the earth’s magnetic field.  This was backed by a lot of math and physics showing that such a low frequency could not travel from outside the earth and still have enough energy to do anything – much less alter a magnetic field.  He showed that there is no known science that would explain how an RF emission could alter a magnetic field.  Chiu pointed out that NASA uses UHF and SHF frequencies with narrow-beam antennas and extremely slow modulations to communicate with satellites and space vehicles because it takes the higher energy in those much higher frequencies to travel the vast distances of space.  It also takes very slow modulation to send any reliable intelligence on those frequencies; that is why it often takes several days to send a single high-resolution picture from a space probe.  Chiu also argued that the received energy from our planetary vehicles is about as strong as a cell phone transmitting from 475 miles away – a power level in the nanowatt range.  Unless the HF RF signal originated from an unknown satellite, it could not have come from some distant source in space.

  

The motivation for this paper by Chiu appears to have been a professional disagreement with Vindin shortly after Vindin came to work at JPL.  In October of 2011, Vindin published a second paper about his earlier study in which he addressed most of Chiu’s criticisms.  He was able to show that the HF RF signal was received by a polar-orbiting satellite before it was detected at an earth-bound antenna array.  The antenna he was using was a modified facility that was once part of the Defense Early Warning (DEW) Line of massive (200-foot) movable dish antennas installed in Alaska.  The DEW line signals preceded but appeared to be synchronized with the auroral field changes.  This effectively proved that the signal was extraterrestrial.

  

Vindin also tried to address the nature of the HF RF signal and its modulation.  What he described was a very unusual kind of signal that the military has been playing with for years.

  

In order to reduce the possibility of a radio signal being intercepted, the military uses something called “frequency agility”.  This is a complex technique that breaks the signal being sent into hundreds of pieces per second and then transmits each piece on a different frequency.  The transmitter and receiver are synchronized so that the receiver jumps its tuning to match the transmitter’s changes in transmission frequency.  The jumps appear random, but they actually follow a coded algorithm.  If someone is listening to any one frequency, they will hear only background noise with very minor and meaningless blips, clicks and pops.  Because a listener has no way of knowing where the next bit of the signal is going to be transmitted, it is impossible to rapidly tune a receiver to intercept these kinds of transmissions.  Frequency-agile systems are actually in common use; you can even buy cordless phones that use this technique.
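A crude way to picture frequency agility is two radios sharing a secret seed that drives the same pseudo-random hop sequence.  The toy Python sketch below shows only that idea; the channel plan and seed are invented, and real military hoppers are far more elaborate:

```python
import random

# Invented channel plan: 64 channels, 0.5 MHz apart.
CHANNELS_MHZ = [round(902.0 + 0.5 * k, 1) for k in range(64)]

def hop_sequence(shared_seed, hops):
    """Both ends derive the same pseudo-random channel order from the shared seed."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS_MHZ) for _ in range(hops)]

tx = hop_sequence("shared-secret", 10)
rx = hop_sequence("shared-secret", 10)
assert tx == rx        # synchronized: the receiver always knows the next frequency
print(tx)              # an eavesdropper parked on one channel hears only fragments
```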

  

As complex as frequency agility is, there are very advanced, very wide-band receivers and computer processors that can reconstruct an intelligent signal out of the chopped-up emission.  For that reason, the military has been working on the next version of agility.

  

In a much more recent and much more complicated use of frequency agility, they are attempting to combine it with agile modulation.  This method breaks up both the frequency and the modulation of the transmission’s signal intelligence into agile components.  The agile frequency modulation (FM) shifts from the base frequency to each of several sidebands and to first- and second-tier resonance frequencies, as well as shifting the intermediate frequency (IF) up and down.  The effect is to make it completely impossible to locate or detect any signal intelligence at all in an intercepted signal.  It all sounds like random background noise.

  

Although it is impossible to reconstruct a frequency-agile signal that is also modulation agile (called “FMA”), it is possible, with very advanced processors, to detect that an FMA-modified signal is present.  This uses powerful math algorithms running over massive amounts of recorded data, and the computers resolve the analysis only many hours after the end of the transmission.  And even then it can only confirm, to a high probability, the presence of an FMA signal, without providing any indication of what is being sent.

  

This makes it ideal for encrypted messages, but even our best labs have been able to do it only when the transmitter and the receiver are physically wired together to allow them to synchronize their agile reconstruction correctly.  The NRL is experimenting with mixes of FMA and non-FMA and digital and analog emissions all being sent at the same time, but it is years away from being able to deploy a functional FMA system.

  

I mention all this because, as part of Vindin’s rebuttal, he was able to secure the use of the powerful NASA signal-processing computers to analyze the signals he recorded and was able to confirm that there is a 91% probability that the signal is FMA.  This has, of course, been a huge source of controversy because it appears to indicate that we are detecting a signal that we do not have the technology to create.  The NRL and NSA have been following all this with great interest and have independently confirmed Vindin’s claims.

  

What all this means is that we may never be able to reconstruct the signal to the point of understanding or even seeing text, images or other intelligence in it, but what it does absolutely confirm is that the signal came from an intelligent being and was created specifically for interstellar communications.  There is not even a remote chance that anything in the natural world or the natural universe could have created these signals through natural processes.  It has to be the deliberate creation of intelligent life.

  

What came next was a study by Mather Schulz that is and has remained classified.  I had access to it because of my connections at NRL and because I have a long history of R&D in advanced communications techniques.  Schulz took all these different reports and assembled them into a very logical and sequential argument that these untethered planets were not only the source of the FMA signals but are not planets at all.  They are planet-sized spaceships.

  

Once he came to this conclusion, he went back to each of the contributing studies to look for further confirming evidence.  In the Takahiro Sumi study from Osaka University and in the Swenson and Pillard study, he discovered that the infrared emissions were much stronger on the side away from the line of travel and that there was a faint trail of infrared emission behind each of the untethered planets.

  

This would be consistent with the heat emissions from some kind of propulsion system pushing the spaceship along.  What form of propulsion would be capable of moving a planet-sized spaceship is unknown, but the fact that we can detect the IR trail at such great distances indicates that it is producing a very large trail of heated or ionized particles extending a long distance behind the moving planets.  The fact that he found this on 8 of the 10 untethered planets was encouraging, but he also noted that the two that do not show these IR emissions are the only ones that are not accelerating.  This would also be consistent with a propulsion system that has been turned off while the spaceship coasts.

  

The concept of massive spaceships has always been one of the leading solutions to sub-light-speed interstellar travel.  The idea has been called the “generation ship”: a vessel capable of supporting a population large enough, and for long enough, to allow multiple generations of people to survive in space.  This would allow survival for the decades or centuries needed to travel between star systems.  Once a planet is free from its gravitational tether to its star, it would be free to move in open space.  Replacing the light and heat from their sun is not a difficult technological problem when you consider the possible use of thermal energy from the planet’s core.  Of course, a technology that has achieved this level of advanced science would probably find numerous other viable solutions.

  

Schulz used a combination of the Very Large Array of interferometric antennas at Socorro, New Mexico along with the systems at Pune, India and Arecibo, Puerto Rico to collect data, and then had the bank of Panther Cray computers at NSA analyze it to determine that the FMA signals were coming from the region of space that exactly matched the void measured and studied by Takahiro Sumi.  NSA was more than happy to let Schulz use their computers to prove that they had not dropped the ball and allowed someone else on earth to develop a radio signal that they would not be able to intercept and decipher.

  

Schulz admitted that he cannot narrow the detection down to a single untethered planet (or spaceship), but he can isolate it to the immediate vicinity of where they were detected.  He also verified the Swenson and Pillard finding that other voids had similar but usually weaker readings.  He pointed out that there may be many more signal sources from many more untethered planets but that, outside of these voids, the weak signals are being deflected or absorbed by intervening objects.  He admitted that finding the signals in other voids does not confirm that they also contain untethered planets, but he pointed out that it does not rule out the possibility either.

  

Finally, Schulz set up detection apparatus to measure the FMA signals using the network of worldwide radio telescopes while simultaneously taking magnetic, visual and RF readings of the Auroral Kilometric Radiation (AKR).  He got the visual images from synchronized high-speed video recordings from the ISIS in cooperation with the Laboratory for Planetary Atmospherics out of the Goddard SFC.

  

Getting NSA’s help again, he was able to identify a very close correlation between these three streams of data, showing that it was indeed the FMA signal originating from these untethered planets that preceded, and apparently was causing, corresponding changes in the lines of magnetic force made visible in the AKR.  The visual confirmation was not based on shape or form changes in the aurora but on color changes that occurred at a much higher frequency than the apparent movements of the aurora lights.  What was being measured was the increase and decrease in the flash rate of individual visual-spectrum frequencies.  Despite the high-speed nature of the images, they were still only able to pick up momentary fragments of the signal – sort of like catching a single frame of a movie every 100 or 200 frames.  Despite this intermittent nature of the visual measurements, what was observed synchronized exactly with the magnetic and RF signals – giving a third source of confirmation.  Schulz offered only shallow speculation that the FMA signal is, in fact, a combined agile frequency and modulation signal that includes both frequencies and modulation methods far beyond our ability to decipher.

  

This detection actually supports a theory that has been around for years – that a sufficiently high frequency, modulated in harmonic resonance with the atomic-level vibrations of the solar wind (the charged particles streaming out of the sun that create the aurora at the poles), can be used to create harmonics at very large wavelengths – essentially creating slow condensations and rarefactions in the AKR.  This is only a theory, based on some math models that seem to make it possible, but the control of the frequencies involved is far beyond any known or even speculated technology, so it is mostly dismissed.  Schulz mentions it only because it is the only known reference to a possible explanation for the observations.  It gains some credibility from the fact that the theory’s math model maps exactly to the observations.

  

Despite the low energy and low frequency of the signal, and despite the fact that we have no theory or science that can explain it, the evidence was conclusive and irrefutable.  Those untethered planets appear to be moving under their own power and are emitting some unknown kind of signal that is somehow able to modulate our entire planet’s magnetic field.  The conclusion that these are actually very large spaceships, containing intelligent life capable of creating these strange signals, seems unavoidable.

  

The most recent report from Schulz was published in late December 2011.  The fallout and reactions to all this are still in their infancy.  I am sure they will not make this public for a long time, if ever.  I have already seen and heard about efforts to work on this at several DoD and private classified labs around the world.  I am sure this story is not over.

  

We do not yet know how to decode the FMA signals, and we don’t have a clue how they affect the AKR, but our confirmed and verified observations point to only one possible conclusion – we are not alone in the universe, and whoever is out there has vastly better technology and intelligence than we do.

  

IBAL – The latest in Visual Recon

  

The latest addition to reconnaissance is a new kind of camera that takes a new kind of picture.  The device is called a plenoptic camera, or light-field camera.  Unlike a normal camera that takes a snapshot of a 2D view, the plenoptic camera uses a microlens array to capture a 4D light field.  This is a whole new way of capturing an image, one that actually dates back to 1992, when Adelson and Wang first proposed the design.  Back then, the image was captured on film with limited success, but it did prove the concept.  More recently, a Stanford University team built a 16-megapixel electronic camera with a 90,000-microlens array that proved the image could be refocused after the picture is taken.   Although this technology has already made its way into affordable consumer products, as you might expect, it has also been extensively studied and applied to military applications.

 

To appreciate the importance and usefulness of this device, you need to understand what it can do.  If you take a normal picture of a scene, the camera captures one set of image parameters that includes focus, depth of field, light intensity, perspective and a very specific point of view.  These parameters are fixed and cannot change.  The end result is a 2-dimensional (2D) image.  What the light field camera does is capture the physical characteristics of the light of a given scene in enough detail that a computer can later recreate the image as if the original scene were totally recreated in the computer.  In technical terms, it captures the radiance – watts per steradian per square meter – along each ray of light.  This basically means that it captures and can quantify the wavelength, polarization, angle, radiance, and other scalar and vector values of the light.   This results in a five-dimensional function that can be used by a computer to recreate the scene as if you were looking at the original image at the time the photo was taken.

 

This means that after the picture is taken, you can refocus on different aspects of the image, and you can zoom in on different parts of it without a significant loss of resolution.  If the light field camera is capturing moving video of a scene, then the computer can render an accurate 3-dimensional representation of what was captured.  For instance, using a state-of-the-art light field camera to take an aerial light-field video of a city from a UAV drone at 10,000 feet, the data could be used to zoom in on details within the city, such as the text of a newspaper someone is reading or the face of a pedestrian.  You could recreate the city in a dimensionally accurate 3D rendering that you could then traverse from a ground-level perspective in a computer model.  The possibilities are endless.
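The after-the-fact refocusing works because the light field keeps each ray’s angle as well as its position, so the computer can re-project the rays onto a different focal plane.  A common way to approximate that is shift-and-add over the sub-aperture views, sketched below with NumPy; the array sizes and the shift rule are simplified assumptions for illustration, not the camera’s actual pipeline:

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocus of a 4D light field L[u, v, y, x] by shift-and-add.

    u, v index the sub-aperture (angular) views; y, x are pixels.  Shifting each
    view in proportion to its (u, v) offset and averaging brings a different
    depth plane into focus; alpha selects that plane.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Toy 4D light field: 5x5 angular views of a 64x64 scene.
lf = np.random.rand(5, 5, 64, 64)
near_focus = refocus(lf, alpha=1.0)
far_focus = refocus(lf, alpha=-1.0)
```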

 

As usual, it was NRL that started the soonest and has developed the most useful and sophisticated applications for the light field camera.  Because this camera produces its most useful results when used as a video camera, the NRL focused on that aspect early on.  The end result was the “IBAL” (pronounced “eyeball”) – for Imaging Ballistic Acquisition of Light.

 

The IBAL is a micro-miniature focused plenoptic camera that uses a masked synthetic aperture in front of an array of 240,000 microlenses that each capture a 24-megapixel video image.  This is accomplished by a massively overclocked processor that takes just 8 seconds of video at a frame rate of 800 frames per second.  The entire device fits into the nose of an 80 mm mortar round or the M777 155 mm howitzer, and it can also be fired from a number of other artillery and shoulder-launched weapons as a sabot round.  The shell is packed with a powerful lithium battery designed to provide up to 85 watts of power for up to two minutes, from ballistic firing to impact.  The round has gyro-stabilized fin control that keeps the camera pointed at the target in one of two modes.  The first mode is to fire the round at a very high angle – 75 to 87 degrees up.  This gives the round a very steep trajectory that allows it to capture its image as it descends from a few thousand feet of altitude.  Since the resolution is very high, it captures its images as soon as it is aligned and pointed at the ground.  The second mode is to fire the IBAL at a low trajectory – 20 to 30 degrees of elevation.  In this mode the gyro keeps the camera, pointing through a prism, aimed at the ground as the round traverses the battle zone.  In both cases, it uses the last few seconds of flight to transmit a compressed data burst on a UHF frequency to a nearby receiver.  The massive amount of data is transmitted using the same kind of compression algorithm used by the intelligence community for satellite reconnaissance imagery.   One final aspect of the ballistic round is that it has a small explosive in the back that ensures it is completely destroyed on impact.  It even has a backup phosphorous envelope that will ignite and melt all of the electronics and optics if the C4 does not go off.

 

Since the object is to recon and not attack, the actual explosive is quite small, and when it goes off, the explosion is almost entirely contained inside the metal casing of the round.  Using the second, low-trajectory mode of firing, the round passes over the battle zone and lands far beyond it without attracting much attention.  In a more active combat environment, the high-trajectory mode would attract little attention; if noticed at all, it would appear to be a dud.

 

The data is received by a special encrypted digital receiver that decodes it and feeds it into the IBAL processor station, a powerful laptop that can be integrated with a number of other visual representation systems, including 3D imaging projectors, 3D rendering tables and virtual-reality goggles.  The data can be used to recreate the captured images in a highly detailed 3D model so accurate that measurements can be taken from the image to within one-tenth of an inch.

 

The computer is also able to overlay any necessary fire-control grid onto the image so that precise artillery fire can be vectored to a target.  The grid can be a locally created reference or simply very detailed latitude and longitude using GPS measurements.  As might be expected, this imagery information is fully integrated into the CED (Combat Environmental Data) network and into the DRS (Digital Rifle System) described in my other reports.   This means that within seconds of firing the IBAL, a 3D image of the combat zone is available on the CED network for all the soldiers in the field to use.  It is also available for snipers to plan out their kill zones and for the artillery to fine-tune their fire control.  Since it sees the entire combat zone from the front, overhead and back, it can be used to identify, locate and evaluate potential targets such as vehicles, mortar positions, communications centers, enemy headquarters and other priority targets.

 

Using this new imaging system in combination with all the other advances in surveillance and reconnaissance that I have described here and others that I have not yet told you about, there is virtually no opportunity for an enemy to hide from our weapons.

“SID” Told Me! The Newest Combat Expert

Sensor fusion is one of those high-tech buzzwords that the military has been floating around for nearly a decade. It is supposed to describe the integration and use of multiple sources of data and intelligence in support of decision management on the battlefield or in a combat environment. You might think of a true sensor fusion system as a form of baseline education. As with primary school education, the information is not gathered to support a single job or activity but to give the end user the broad awareness and knowledge to adapt and make decisions about a wide variety of situations that might be encountered in the future. As you might imagine, providing support for “a wide variety of situations that might be encountered in the future” takes a lot of information, and the collation, processing and analysis of that much information is one of the greatest challenges of a true sensor fusion system.

  

One of the earliest forms of sensor fusion was the Navy Tactical Data System, or NTDS. In its earliest form, it allowed every ship in the fleet to see on its radar scopes the combined view of every other ship in the fleet. Since the ships might be separated by many miles, this effectively gave a radar umbrella that extended hundreds of miles in every direction – much further than any one ship could attain. It got a big boost when they added the radar of airborne aircraft flying Carrier Air Patrol (CAP) from 18,000 feet. Now every ship could see as if it had radar that looked out hundreds of miles and covered thousands of square miles.

In the latest version, now called the Cooperative Engagement Capability (CEC), the Navy has also integrated fire-control radar so that any ship, aircraft or sub can fire on a target that can be seen by any other ship, aircraft or sub in the fleet, including ships with different types of radars – such as X-band, MMWL, pulsed Doppler, phased array, aperture synthesis (SAR/ISAR), FM-CW, even sonar. This allows a guided missile cruiser to fire a missile at a target that it physically cannot see but that can be seen by some other platform somewhere else in the combat arena. Even if a ship has no radar at all of its own, it can benefit from the CEC system and “see” what any other ship can see with its radar.  That is sensor fusion.

  

The end result, however, is a system that supports a wide variety of situations, from obvious combat defensive tactics and weapons fire control to navigation and air-sea rescue. Each use takes from the CEC system the portion of the total available information that it needs for its specific situation.

  

The Army has been trying to incorporate that kind of sensor integration for many years. So far, they have made strides in two areas: the use of UAVs (unmanned aerial vehicles) and helmet-mounted systems.  Both of these gather observed information at some remote command post where it is manually processed, analyzed, prioritized and then selectively distributed to other forces in the combat area. There are dozens of minor efforts that the Army is calling sensor fusion, but each is really just a single set of sensors with a dedicated objective: to feed a specific system with very specific data. An example is the Guardian Angel program, which was designed to detect improvised explosive devices (IEDs) in Iraq and Afghanistan. Although it mixed several different types of detection devices that overlaid various imagery data, each sensor was specifically designed to support the single objective of the overall system. A true sensor fusion system gathers and combines data that will be used for multiple applications and situations.

  

A pure and fully automated form of this technology is sometimes referred to as multi-sensor data fusion (MSDF) and had not been achieved, until now. MSDF has been a DoD goal for a long time – so much so that there is a Department of Defense (DoD) Data Fusion Group within the Joint Directors of Laboratories (JDL). The JDL defined MSDF as the “multilevel, multifaceted process of dealing with the automatic detection, association, correlation, estimation and combination of data and information from multiple sources with the objective to provide situation awareness, decision support and optimum resource utilization by and to everyone in the combat environment”. That means that the MSDF must be useful not just to the Command HQ and the generals or planners but to the soldiers on the ground and the tank drivers and the helo pilots who are actively engaged with the enemy in real time – not filtered or delayed by processing or collating the data at some central information hub.

  

There are two key elements of MSDF that make it really hard to implement. The first is the ability to make sense of the data being gathered. Tidbits of information from multiple sensors are like tiny pieces of a giant puzzle. Each one, by itself, provides virtually no useful information and becomes useful only when combined with hundreds or even thousands of other data points to form the ultimate big picture. It takes time and processing power to do that kind of collating and processing, and therein lies the problem. If that processing power is centrally located, then the resulting big picture is no longer available in real time and useful to an actively developing situation. Alternatively, if the processing power is given to each person in the field who might need the data, then it becomes a burden to carry, maintain and interpret the big picture in the combat field environment for every soldier who might need it. As the quantity, diversity and complexity of the data being integrated rises, the required processing power and complexity increase at an exponential rate, and the knowledge and skills demanded of the end user rise to the point that only highly trained experts are able to use such systems.

  

The second problem is the old paradox of information overload. On the one hand, it is useful to have as much information as possible to fully analyze a situation and to be ready for any kind of decision analysis that might be needed. On the other hand, any single situation might actually need only a small portion of the total amount of data available. For instance, imagine a powerful MSDF network that can provide detailed information about everything happening everywhere in the tactical environment. If every end user had access to all of that data, they would have little use for most of it, because they are interested only in the portion that applies to them. But since they cannot know what they will need now or in the future, it is important that they have the ability to access all of it. If you give them that ability, you complicate the processing and the training needed to use it. If you limit what they might need, then you limit their ability to adapt and make decisions.  A lot of data is a good thing, but too much is a bad thing, and the line between the two is constantly changing.

  

I was a consultant to the Naval Research Labs (NRL) on a joint assignment to the JDL to help the Army develop a new concept for MSDF. When we first started, the Army had visions of a vast MSDF system that would provide everything to everyone, but when we began to examine some of the implications and limitations of such a system, it became clear that we would need to redefine their goals. After listening to them for a few weeks, I was asked to make a presentation on my ideas and advice. I thought about it for a long time and then created just three slides. The first one showed a graphic depiction of the GPS system. In front of two dozen generals and members of the Army DoD staff, I put up the first slide and asked them to just think about it. I waited a full five minutes. They were a room of smart people, and I could see the look on their faces when they realized that what they needed was a system like GPS: it provides basic and relatively simple information in a standardized format that is then used for a variety of purposes, from navigation to weapons control to location services.  The next question came quickly: what would a similar system look like for the Army in a tactical environment? That’s when I put up my next slide. I introduced them to “CED” (pronounced “SID”).

  

Actually, I called it the CED (Combat Environmental Data) network. In this case, the “E” for Environment means the physical terrain, atmosphere and human construction in a designated area – the true tactical combat environment. It uses an array of sensors that already existed, which I helped develop at the NRL for the DRS – the Digital Rifle System. As you might recall, I described this system and its associated rifle, the MDR-192, in two other reports that you can read. The DRS uses a specially designed sensor called the “AIR”, for autonomous information recon device. It gathers a variety of atmospheric data (wind, pressure, temperature, humidity) as well as a visual image, a laser range-finder scan of its field of view and other data such as vibrations, RF emissions and infrared scans. It also has an RF data transmitter and a modulated laser beam transmission capability. All this is crammed into a device that is 15 inches long and about 2.5 cm in diameter and that is scattered, fired, air-dropped or hidden throughout the target area. The AIRs are used to support the DRS processing computer in the accurate aiming of the MDR-192 at ranges out to 24,000 feet, or about 4.5 miles.

  

The AIRs are further enhanced by a second set of sensors called the Video Camera Sights, or VCS. The VCS consists of high-resolution video cameras combined with scanning laser beams whose outputs are merged in the DRS processing computer to render a true, proportional 3D image of the field of view.  The DRS computer integrates the AIR and VCS data so that an entire objective area can be recreated in fine 3D detail in computer imagery.  Since the area is surrounded by VCS systems and AIR sensors are scattered throughout it, the target area can be recreated so accurately that the DRS user can see almost everything in the area as if he were able to stand at almost any location within it.  The DRS user is able to accurately see, measure and ultimately target the entire area – even if he is on the other side of the mountain from it.  The power of the DRS is the sensor fusion of this environment for the purpose of aiming the MDR-192 at any target anywhere in the target area.

  

My second slide showed the generals that, using the AIR and VCS sensor devices combined with one new sensor of my design, an entire tactical zone can be fully rendered in a computer. The total amount of data available is massive, but the end user would treat it like the GPS or the DRS system, pulling down only the data that is needed at that moment for a specific purpose. That data and purpose can support a wide variety of situations that may be encountered, now or in the future, by a wide variety of end users.

   

My third slide was simply a list of what the CED Network would provide to the Army generals as well as to each and every fielded decision maker in the tactical area. I left this list on the screen for another five minutes and began hearing comments like, “Oh my god”, “Fantastic!” and “THAT’S what we need!”

  

Direct and Immediate Benefits and Applications of the CED Network

  ·        Autonomous and manned weapons aiming and fire control

  ·        Navigation, route and tactical planning, attack coordination

  ·        Threat assessment, situation analysis, target acquisition

  ·        Reconnaissance, intelligence gathering, target identity

  ·        Defense/offence analysis, enemy disposition, camouflage penetration

  

My system was immediately accepted and I spent the next three days going over it again and again with different levels within the Army and DoD. The only additional bit of information I added in those three days was the nature of the third device that joins the AIR and VCS sensors. I called it the “LOG” for Local Optical Guide.

  

The LOG mostly gets its name from its appearance. It looks like a small log or a cut branch of a tree that has dried up. In fact, great effort has gone into making it look like a natural log so that it will blend in. There are actually seven different LOGs in appearance, but the insides are all the same. Each contains four sensor modules:

(1) a data transceiver that can connect to the CED network and respond to input signals. The transceiver sends a constant flow of images and other data, but it also collects and relays data received from other nearby sensors. To handle the mixing of data, all the transmitters are FM and frequency agile – meaning they transmit a tiny fraction of the data on one VHF frequency and then hop to another frequency for the next few bits. The embedded encryption keeps all the systems synchronized, and the effect is that it is nearly impossible to intercept, jam or even detect the presence of these signals;

(2) six high resolution cameras with night vision capability. The cameras are located so that no matter how the LOG lands on the ground, at least two of them will be useful for gathering information. The lenses can be commanded to zoom from a panoramic wide angle to telephoto with a 6X zoom, but default to wide angle;

(3) an atmospheric module that measures wind, temperature, humidity and pressure;

(4) finally, an acoustic and vibration sensing module with six microphones distributed over its surface, accurate enough to give precise intensity and a crude directionality to sensed sounds.

There is also a fifth, self-destruct module that is powerful enough to completely destroy the LOG and do damage to anyone trying to dismantle it.
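For readers who like to see how the pieces fit, here is a minimal sketch of how a shared key could keep the frequency-agile transmitters and receivers hopping together without ever broadcasting the schedule. The channel plan, key handling and slot timing below are my own assumptions for illustration, not the actual CED design.

# Minimal sketch: deriving a synchronized frequency-hop schedule from a shared key.
# The channel plan, key handling and timing here are illustrative assumptions,
# not the actual CED implementation.
import hashlib
import hmac
import struct

VHF_CHANNELS = [30.0 + 0.025 * n for n in range(4000)]  # assumed 30-130 MHz plan, 25 kHz steps

def hop_frequency(shared_key: bytes, slot_number: int) -> float:
    """Return the channel (MHz) to use for a given time slot.

    Both ends compute the same HMAC over the slot counter, so they hop
    together without ever transmitting the schedule itself.
    """
    digest = hmac.new(shared_key, struct.pack(">Q", slot_number), hashlib.sha256).digest()
    index = int.from_bytes(digest[:4], "big") % len(VHF_CHANNELS)
    return VHF_CHANNELS[index]

if __name__ == "__main__":
    key = b"example-shared-key"
    # A few bits of data go out on each slot's frequency, then both ends move on.
    for slot in range(5):
        print(f"slot {slot}: transmit on {hop_frequency(key, slot):.3f} MHz")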

  

The LOG works in conjunction with the AIR for sound sensing of gunfire. Using the same technology that is applied in the Boomerang gunfire locator that was developed by DARPA and BBN Technologies, the CED system can locate the direction and distance to gunfire within one second of the shot.  Because the target area is covered with numerous LOG and AIR sensors, the accuracy of the CED gunfire locator is significantly more accurate than DARPA’s Boomerang system.  
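As a rough illustration of the gunfire locator idea, the sketch below estimates a shot position from the differences in arrival times at a few scattered sensors. The sensor layout, timing precision and brute-force grid search are illustrative assumptions; they are not the Boomerang or CED algorithms.

# Minimal sketch of acoustic gunfire location from time-of-arrival differences
# at scattered sensors. Sensor positions, timings and the grid search are
# illustrative assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, nominal

def locate_shot(sensor_xy: np.ndarray, arrival_times: np.ndarray,
                search_half_width: float = 2000.0, step: float = 10.0) -> tuple:
    """Grid-search the (x, y) point whose predicted arrival-time differences
    best match the measured ones (least squares on pairwise differences)."""
    xs = np.arange(-search_half_width, search_half_width, step)
    ys = np.arange(-search_half_width, search_half_width, step)
    best, best_err = (0.0, 0.0), np.inf
    measured_diff = arrival_times - arrival_times[0]
    for x in xs:
        for y in ys:
            dists = np.hypot(sensor_xy[:, 0] - x, sensor_xy[:, 1] - y)
            predicted_diff = (dists - dists[0]) / SPEED_OF_SOUND
            err = np.sum((predicted_diff - measured_diff) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best

if __name__ == "__main__":
    sensors = np.array([[0.0, 0.0], [800.0, 50.0], [400.0, 900.0], [-600.0, 700.0]])
    true_shot = np.array([350.0, 420.0])
    times = np.hypot(*(sensors - true_shot).T) / SPEED_OF_SOUND
    print("estimated shot position:", locate_shot(sensors, times))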

  

The total CED system consists of these three sensor modules – LOG, AIR and VCS – plus a receiving and processing module that can take the form of a laptop, a handheld or a backpack system. Although the computer processor (laptop) used in the DRS was a very sophisticated analyzer of that system’s sensor inputs, the computer processors for the CED system are substantially more advanced in many ways. The most important difference is that the CED system is a true network that places all of the sensory data on the air in an RF-transmitted cloud of information that saturates the target area and nearby areas. It can be tapped into by any CED processor anywhere within range of the network. Each CED or DRS processor pulls out of the network just the information it needs for the task at hand. To see how this works, here are some examples of the various uses of the CED system:

  

SNIPER

  Either a DRS or a CED processor can be used to support the sniper. The more traditional snipers using standard rifles will tap into the CED network to obtain highly accurate wind, temperature, pressure and humidity data as well as precise distance measurements. Using the XM25 style HEAB munitions that are programmed by the shooter, nearly every target within the CED combat area can be hit and destroyed. The CED computers can directly input data into the XM25/HEAB system so that the sniper does not have to use his laser range-finder to sight in the target. He can also be directed to aim using the new Halo Sight System (HSS). This is a modified XM25 fire control sight that uses a high resolution LCD thin-film filter to place a small blinking dot at the aim-point of the weapon. This is possible because the CED processor can precisely place the target and the shooter and can calculate the trajectory based on sensor inputs from the LOG, AIR and VCS sensor grid of the network. It uses lasers from the AIRs to locate the shooter and images from the VCS and LOG sensors to place the target. The rest is just mathematical calculation of the aim point to put an HEAB or anti-personnel 25mm round onto the target. It is also accurate enough to support standard sniper rifles, the M107/M82 .50 cal. rifle or the MDR-192. Any of these can be fitted with the HSS sight for automated aim point calculations.
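To give a feel for what that aim point calculation involves, here is a toy sketch that offsets the aim point for gravity drop and wind drift using the range and wind data a shooter would pull from the network. It assumes a constant average bullet velocity and a simple full-value wind model; a real fire-control solution also accounts for drag, air density and spin drift.

# Toy sketch of how a fire-control computer might offset the aim point using
# range and wind data pulled from the sensor network. Constant average velocity
# and linear wind drift are simplifying assumptions, not the actual HSS math.
import math

G = 9.81  # m/s^2

def aim_offsets(range_m: float, avg_velocity_ms: float, crosswind_ms: float) -> tuple:
    """Return (holdover_m, windage_m): how far above and upwind of the target
    the bore must point in this simplified flat-fire model."""
    time_of_flight = range_m / avg_velocity_ms
    drop = 0.5 * G * time_of_flight ** 2          # gravity drop over the flight
    drift = crosswind_ms * time_of_flight         # crude full-value wind drift
    return drop, drift

if __name__ == "__main__":
    # Example: 1,200 m shot, 800 m/s average velocity, 4 m/s crosswind.
    holdover, windage = aim_offsets(1200.0, 800.0, 4.0)
    print(f"hold {holdover:.1f} m high, {windage:.1f} m into the wind")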

  

In the case of the MDR-192, the rifle is mounted on a digitally controlled tripod that is linked directly to the DRS or CED computer. The effect is to create an autonomous small caliber artillery weapon.  That means that an operator of a CED (or DRS) computer that has tapped into the CED network can identify a target somewhere in the covered combat arena and send that data to any one of several MDR-192 rifles that have been placed around the combat area.  Each autonomous MDR-192 has an adjustment range of 30 degrees, left and right of centerline and 15 degrees up and down.  Since the range of the typical MDR-192 is up to 24,000 feet, four rifles could very effectively cover a target area of up to four square miles.  The computer data will instruct the selected MDR-192 to aim the rifle to the required aim point – accounting for all of the ballistic and environmental conditions – and fire.  As described in the report of the MDR-192 and DRS, the system can be accessed by an operator that is remotely located from the rifles and the target area – as much as 5 miles away. 
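Here is a sketch of how a CED processor might decide which emplaced rifle can take a given target, using the 30-degree traverse limit and 24,000-foot range described above. The rifle positions and flat-earth geometry are invented for illustration.

# Minimal sketch of choosing which emplaced rifle can engage a given target,
# using the traverse limit (±30° of centerline) and maximum range described above.
# Positions and the flat geometry are illustrative assumptions.
import math

MAX_RANGE_FT = 24000.0
TRAVERSE_LIMIT_DEG = 30.0

def can_engage(rifle_xy, centerline_deg, target_xy):
    """True if the target is inside the rifle's range and traverse fan."""
    dx, dy = target_xy[0] - rifle_xy[0], target_xy[1] - rifle_xy[1]
    rng = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0      # 0 degrees = north
    offset = (bearing - centerline_deg + 180.0) % 360.0 - 180.0
    return rng <= MAX_RANGE_FT and abs(offset) <= TRAVERSE_LIMIT_DEG

if __name__ == "__main__":
    rifles = {  # name: (position in feet, centerline bearing in degrees)
        "R1": ((0.0, 0.0), 45.0),
        "R2": ((20000.0, 0.0), 315.0),
    }
    target = (10000.0, 9000.0)
    shooters = [name for name, (pos, cl) in rifles.items() if can_engage(pos, cl, target)]
    print("rifles able to engage:", shooters)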

  

Recent tests of the CED system and the MDR-192 have proven their effectiveness. The only defense that the enemy has is to stay in an underground bunker.

  

Artillery

  The CED network is the ultimate forward observer for artillery placement of smart weapons. Using the visual sensors of the LOG and VCS and the gunfire locator sensors of the LOG and AIR, any target within the entire combat arena can be very precisely located. It can then be identified with GPS coordinates for the dropping of autonomous weapons such as a cruise missile, or it can be illuminated with a laser from a nearby AIR or MDR-192 to provide the aim point for smart weapon fire control.

  

Even standard artillery has been linked into the CED system. A modified M777 Howitzer (155mm) uses a set of sensors strapped to the barrel that can sense its aim point to within 0.0003 degrees in three dimensions. The CED network data is sent to a relay transmitter and then sent up to 18 miles away to the M777 crew. The M777 is moved in accordance with some simple arrows and lights until a red light comes on, indicating that the aim point has been achieved for the designated target – then they fire. Tests have been able to place as many as 25 rounds within a 10-foot (3-meter) radius from 15 miles away using this system.
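A minimal sketch of that arrows-and-red-light logic: compare the sensed barrel orientation against the computed firing solution and drive simple crew cues. The 0.0003-degree tolerance comes from the description above; everything else is an assumption of mine.

# Minimal sketch of the crew display: compare sensed barrel orientation against
# the computed firing solution and drive simple indicators. The tolerance comes
# from the text above; the cue format is an illustrative assumption.
TOLERANCE_DEG = 0.0003

def crew_indicators(sensed_az_deg, sensed_el_deg, solution_az_deg, solution_el_deg):
    """Return (azimuth cue, elevation cue, red_light) for the gun crew."""
    az_err = solution_az_deg - sensed_az_deg
    el_err = solution_el_deg - sensed_el_deg
    az_cue = "steady" if abs(az_err) <= TOLERANCE_DEG else ("traverse right" if az_err > 0 else "traverse left")
    el_cue = "steady" if abs(el_err) <= TOLERANCE_DEG else ("elevate" if el_err > 0 else "depress")
    red_light = az_cue == "steady" and el_cue == "steady"
    return az_cue, el_cue, red_light

if __name__ == "__main__":
    print(crew_indicators(123.4500, 41.2005, 123.4510, 41.2000))  # still off: cues shown
    print(crew_indicators(123.4510, 41.2000, 123.4510, 41.2000))  # on solution: red light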

  

Intelligence and Reconnaissance

  The CED system is also ideally suited to completely define the enemy distribution and activity and covertly pre-identify targets for a later assault or barrage. The AIR and LOG systems can pick up sounds that can be matched to the LOG and VCS images and video to place and identify points of activity, vehicles and radios.  The VCS and AIR imaging capability can map movements and identify specific types of equipment, weapons and vehicles in the area.  During the battle, snipers and other gunfire can be located with the acoustic gunfire locator using the AIR and LOG sensors.  The LOG and VCS systems also have gun flash identifiers that can distinguish muzzle flash in images – even in complete darkness or the brightest daylight.

  

One of the remarkable additions to the CED processors is the ability to recreate an accurate 3D animation of the target area. This 3D rendering of the area is accurate enough that measurements taken from it match the real-world layout to within fractions of an inch. This makes it useful to pass the rendering back to an HQ or forward planning area for use in the planning, training and management of an assault.

  

The CED network has just finished field testing in several isolated combat areas in Afghanistan and it has proven to be most effective. Work has already begun on improving the AIR, LOG and VCS sensors in an effort to consolidate, miniaturize and conceal them to a greater degree. They are also working on an interface to an autonomous UAV that will add aerial views using laser, IR and visual sensors.

  

The troops that have used this system consider it the smartest and most advanced combat information system ever devised, and the comment that “CED told me” is becoming recognized as the best possible source of combat information.

The Fuel you have never heard of….

 

I have always been fascinated by the stories of people that have invented some fantastic fuel only to have the major oil companies suppress the invention by buying the patent or even killing the inventor.  The fascination comes from the fact that I have heard these stories all my life but have never seen any product that might have been invented by such a person.  That either proves that the oil companies have been successful at suppressing the inventors….or it proves that such stories are simply lies.  Using Plato – my research software tool – I thought I would give it a try.  The results were far beyond anything I could have imagined.  I think you will agree.

 

I set Plato to the task of finding what might be changed in the fuel of internal combustion engines to produce higher miles per gallon (MPG).  It really didn’t take long to return a conclusion: if the fuel released more energy when it burned, it would give better MPG for the same quantity of fuel.  It further discovered that if the explosion of the fuel releases its energy in a shorter period of time, it works even better, but it warned that the engine timing becomes very critical.

 

OK, so what I need is a fuel or a fuel additive that will make the spark plug ignite a more powerful but faster explosion within the engine.  I let Plato work on that problem for a weekend and it came up with nitroglycerin (Nitro).  It turns out that Nitro works precisely because its explosion is so fast.  It also is a good chemical additive because it is made of carbon, hydrogen, nitrogen and oxygen, so it burns without smoke and releases only those elements or their compounds into the air.

 

Before I had a chance to worry about the sensitive nature of Nitro, Plato provided me with the answer to that also.  It seems that ethanol or acetone will desensitize Nitro to workable safety levels.  I used Plato to find the formulas and safe production methods of Nitro and decided to give it a try.

 

Making Nitro is not hard but it is scary.  I decided to play it safe and made my mixing lab inside of a large walk-in freezer.  I only needed to keep it below 50F and above 40F so the freezer was actually off most of the time and it stayed cool from the ice blocks in the room.  The cold makes the Nitro much less sensitive but only if you don’t allow it to freeze.  If you do, it can go off just as a result of thawing out.  My plan was to make a lot of small batches to keep it safe, until I realized that even a very small amount was enough to blow me up if it ever went off.  So I just made up much larger batches and ended up with about two gallons.

 

I got three gas engines – a lawn mower, a motorcycle and an old VW Bug.  I got some gas of 87 octane but with 10% ethanol in it.  I also bought some pure ethanol additive and put that in the mix.  I then added the Nitro.  The obvious first problem was to determine how much to add.  I decided to err on the side of caution and began with a very dilute mixture – one part Nitro to 300 parts gas.   I made up just 100 ml of the mixture and tried it on the lawn mower.  It promptly blew up.  It didn’t actually explode, but the mixture was so hot and powerful that it burned a hole in the top of the cylinder, broke the crankshaft and burned off the valves.  That took less than a minute of running.

 

I then tried a 600:1 ratio in the motorcycle engine and it ran for 9 minutes on the 100 ml.  It didn’t burn up but I could tell very little else about the effects of the Nitro.  I tried it again with 200 ml and determined that it was running very hot and probably would have blown a ring or head gasket if I had run it any longer.  I had removed the motorcycle engine from an old motorcycle to make this experiment but now I regretted that move.  I had no means to check torque or power.  The VW engine was still in the Bug so I could actually drive it.  This opened up all kinds of possibilities.

 

I gassed it up and drove it with normal gas first.  I tried going up and down hills, accelerations, high speed runs and pulling a chain attached to a tree.  At only 1,400 cc, it was rated at 40 HP when it was in new condition, but now it had much less than that using normal gas.

 

I had a Holley carb on the engine and tweaked it to a very lean mixture and lowered the Nitro ratio to 1,200 to 1.   I had gauges for oil temp and pressure and had vacuum and fuel flow sensors to help monitor real-time MPG.  It ran great and outperformed all of the gas-only driving tests.  At this point I knew I was onto something but my equipment was just too crude to do any serious testing.  I used my network of contacts in the R&D community and managed to find some guys at the Army vehicle test center at the Aberdeen Test Center (ATC).  A friend of a friend put me in contact with the Land Vehicle Test Facility (LVTF) within the Automotive Directorate, where they had access to all kinds of fancy test equipment and tons of reference data.  I presented my ideas and results so far and they decided to help me using “Special Projects” funds.  I left them with my data and they said come back in a week.

 

A week later, I showed up at the LVTF.  They welcomed me to my new test vehicle – a 1998 Toyota Corona.  It is one of the few direct injection engines with a very versatile air-fuel control system.  They had already rebuilt the engine using ceramic-alloy tops on the cylinder heads that gave them much greater temperature tolerance and increased the compression ratio to 20:1.  This is really high, but they said that my data supported it.  Their ceramic-alloy cylinder tops actually form the combustion chamber and create a powerful vortex swirl for the injected ultra-lean mixture gases.

 

We started out with the 1,200:1 Nitro ratio I had used and they ran the Corona engine on a dynamometer to test and measure torque (ft-lbs) and power (HP).  The test pushed the performance almost off the charts.  We repeated the tests with dozens of mixtures, ratios, air-fuel mixes and additives.  The end results were amazing.

 

After a week of testing, we found that I could maintain higher than normal performance using a 127:1 air-fuel ratio and a 2,500:1 Nitro-to-gas ratio if the ethanol blend is boosted to 20%.  The mixture was impossible to detonate without the compression and spark of the engine, so the Nitro formula was completely safe.  The exhaust gases were almost totally gone – even the NOx emissions were so low that a catalytic converter was not needed.  Hydrocarbon exhaust was down in the range of a hybrid.  The usual problem of slow burn in ultra-lean mixtures was gone, so the engine produced improved power well up into high RPMs and the whole engine ran at lower temperatures for the same RPM across all speeds.  The real thrill came when we repeatedly measured MPG values in the 120 to 140 range.
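For anyone who wants to reproduce the arithmetic, here is a simple blend calculator using the ratios reported above: 20% ethanol in the base fuel and one part Nitro per 2,500 parts of blend. The blending order and the tank size are just illustrative assumptions.

# Worked-arithmetic sketch of mixing a tank to the ratios reported above.
# The blending order and rounding are illustrative assumptions.
def blend_for_tank(tank_liters: float, ethanol_fraction: float = 0.20,
                   nitro_ratio: float = 2500.0) -> dict:
    nitro = tank_liters / (nitro_ratio + 1.0)      # 1 part nitro per 2,500 parts blend
    base = tank_liters - nitro
    ethanol = base * ethanol_fraction
    gasoline = base - ethanol
    return {"gasoline_l": gasoline, "ethanol_l": ethanol, "nitro_ml": nitro * 1000.0}

if __name__ == "__main__":
    # A 50-liter (~13 gal) tank works out to roughly 20 ml of Nitro.
    for component, amount in blend_for_tank(50.0).items():
        print(f"{component}: {amount:.1f}")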

 

The rapid release and fast burn of the Nitro allowed the engine to run an ultra-lean mixture that gave it great mileage without any of the usual limitations of lean mixtures.  At richer mixtures, the power and performance were well in excess of what you’d expect of this engine.  It would take a major redesign to make an engine strong enough to withstand the torque and speeds possible with this fuel in a normal 14:1 air-fuel mixture.  Using my mix ratio of 120+:1 gave me slightly improved performance but at better than 140 MPG.  It worked.  Now I am waiting for the buyout or threats from the gas companies.

 

July 2010 Update:

 

The guys at ATC/LVTF contacted my old buddies at DARPA and some other tests were performed.  The guys at DARPA have a test engine that allows them to inject high energy microwaves into the combustion chamber just before ignition and just barely past TDC.  When the Nitro ratio was lowered to 90:1, the result was a 27-fold increase in released energy.  We were subsequently able to reduce the quantity of fuel used to a level that created the equivalent of 394 miles per gallon in a 2,600 cc 4-cylinder engine.  The test engine ran for 4 days at a speed and torque load equal to 50 miles per hour – and did that on 10 gallons of gas – a test equivalent of just less than 4,000 miles!  A new H-2 Hummer was rigged with one of these engines and the crew took it for a spin – from California to Maine – on just over 14 gallons of gas.  They are on their way back now by way of northern Canada and are trying to get 6,000 miles on less than 16 gallons.

 

The government R&D folks have pretty much taken over my project and testing but I have been assured that I will be both compensated and protected.  I hope Obama is listening.

The Government knows Everything You have Ever Done!

Sometimes our paranoid government wants to do things that technology does not allow or that it does not know about yet. As soon as they find out or the technology is developed, then they want it and use it. Case in point is the paranoia that followed 11 Sept 2001 (9/11), in which Cheney and Bush wanted to be able to track and monitor every person in the US. There were immediate efforts to do this with the so-called Patriot Act, which bypassed a lot of constitutional and existing laws and rights – like FISA. They also instructed NSA to monitor all domestic radio and phone traffic, which was also illegal and against the charter of NSA. Lesser known monitoring was the hacking into computer databases and monitoring of emails, voice mails and text messaging by NSA computers. They have computers that can download and read every email or text message on every circuit from every Internet or phone user, as well as every form of voice communication.

Such claims of being able to track everyone, everywhere have been made before and it seems that lots of people simply don’t believe that level of monitoring is possible. Well, I’m here to tell you that it not only is possible, but it is all automated and you can read all about the tool that started it all online. Look up “starlight” in combination with “PNNL” on Google and you will find references to a software program that was the first generation of the kind of tool I am talking about.

This massive amount of communications data is screened by a program called STARLIGHT, which was created by the CIA and the Army and a team of contractors led by Battelle’s Pacific Northwest National Lab (PNNL) at a cost of over $10 million. It does two things that very few other programs can do. It can process free-form text and images of text (scanned documents) and it can display complex queries in visual 3-D graphic outputs.

The free-form text processing means that it can read text in its natural form as it is spoken, written in letters and emails, and printed or published in documents. For a database program to be able to do this as easily and as fast as it handles the formally defined records and fields of a relational database is a remarkable design achievement. Understand, this is not just a word search – although that is part of it. It is not just a text-scanning tool; it can treat the text of a book as if it were an interlinked, indexed and cataloged database in which it can recall every aspect of the book (data). It can associate, cross-link and find any word or phrase in relation to any parameter you can think of related to the book – page numbers, nearby words or phrases, word use per page, chapter or book, etc. By using the most sophisticated voice-to-text processing, it can perform this kind of expansive searching on everything written or spoken, emailed, texted or said on cell phones or landline phones in the US!
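To make the idea concrete, here is a toy sketch of treating free-form text like an indexed database: build an inverted index of word positions and answer "this word near that word" queries. It is a deliberately simplified illustration, not the STARLIGHT engine.

# Toy sketch of treating free-form text like an indexed database: build an
# inverted index of word positions and answer proximity queries.
import re
from collections import defaultdict

def build_index(text: str) -> dict:
    index = defaultdict(list)
    for position, word in enumerate(re.findall(r"[a-z']+", text.lower())):
        index[word].append(position)
    return index

def near(index: dict, word_a: str, word_b: str, window: int = 5) -> bool:
    """True if the two words ever occur within `window` words of each other."""
    return any(abs(i - j) <= window
               for i in index.get(word_a, [])
               for j in index.get(word_b, []))

if __name__ == "__main__":
    sample = "The pilot reported ground lights near the runway at dusk."
    idx = build_index(sample)
    print(near(idx, "lights", "runway"))   # True
    print(near(idx, "pilot", "dusk"))      # False (more than 5 words apart)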

The visual presentation of that data is the key to being able to use it without information overload and to have the software prioritize the data for you. It does this by translating the database query parameters into colors and dimensional elements of a 3-D display. To view this data, you put on a special set of glasses similar to the ones that put a tiny TV screen in front of each eye. Such eye-mounted viewing is available for watching video and TV – giving the impression you are looking at a 60-inch TV screen from 5 feet away. In the case of STARLIGHT, it gives a completely 3-D effect and more. It can sense which way you are looking, so it shows you a full 3-D environment that can be expanded into any size the viewer wants. And then it adds interactive elements. You can put on a special glove that can be seen in the projected image in front of your eyes. As you move this glove in the 3-D space you are in, the glove moves in the 3-D computer images that you see in your binocular eye-mounted screens. Plus, this glove can interact with the projected data elements. Let’s see how this might work with a simple example:

The first civilian (unclassified) application of STARLIGHT was for the FAA, to analyze private aircraft crashes over a 10-year period. Every scrap of information was scanned from accident reports, FAA investigations and police records – almost all of it in free-form text. This included full specs on the aircraft, passengers, pilots, type of flight plan (IFR, VFR), etc. It also entered geospatial data that listed departure and destination airports, peak flight plan altitude, elevation of impact, distance and heading data. It also entered temporal data for the times of day, week and year that each event happened. This was hundreds of thousands of documents that would have taken years to key into a computer if a conventional database were used. Instead, high-speed scanners were used that read in reports at a rate of 200 double-sided pages per minute. A half dozen of these scanners completed the data entry in less than two months.

The operator then assigns colors to a variety of ranges of data. For instance, he first assigned red and blue to male and female pilots and then looked at the data projected on a map. What popped up were hundreds of mostly red (male) dots spread out over the entire US map. Not real helpful. Next he assigned a spread of colors to all the makes of aircraft – Cessna, Beechcraft, etc. Now all the dots changed to a rainbow of colors with no particular concentration of any given color in any given geographic area. Next he assigned colors to hours of the day – doing 12 hours at a time – midnight to noon and then noon to midnight. Now something interesting came up. The colors assigned to 6AM and 6PM (green) and shades of green (before and after 6AM or 6PM) were dominant on the map. This meant that the majority of the accidents happened around dusk or dawn.  Next the operator assigned colors to distances from the departing airport – red being within 5 miles, orange 5 to 10 miles…and so on, with blue being the longest (over 100 miles). Again a surprise in the image. The map showed mostly red or blue with very few in between. When he refined the query so that red was either within 5 miles of the departing or the destination airport, almost the whole map was red.
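That color-binning step can be sketched in a few lines: map a numeric field, such as distance from the departure airport, onto display colors. The bin edges and colors below are my own assumptions for illustration.

# Minimal sketch of the operator's color-binning step: map a numeric field
# (distance from the departure airport, in miles) onto display colors.
# Bin edges and colors are illustrative assumptions.
import bisect

BIN_EDGES = [5, 10, 25, 50, 100]                               # miles
COLORS = ["red", "orange", "yellow", "green", "cyan", "blue"]  # one more color than edges

def color_for_distance(distance_miles: float) -> str:
    return COLORS[bisect.bisect_right(BIN_EDGES, distance_miles)]

if __name__ == "__main__":
    for d in [2, 7, 30, 250]:
        print(f"{d} miles -> {color_for_distance(d)}")
    # Refined query from the text: red if within 5 miles of either airport.
    def refined_color(dist_departure, dist_destination):
        return "red" if min(dist_departure, dist_destination) <= 5 else "blue"
    print(refined_color(80, 3))  # red: close to the destination airport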

Using these simple techniques, an operator was able to determine in a matter of a few hours that 87% of all private aircraft accidents happen within 5 miles of the takeoff or landing runway. 73% happen in the twilight hours of dawn or dusk. 77% happen with the landing gear lowered or with the landing lights on and 61% of the pilots reported being confused by ground lights. This gave the FAA information they needed to improve approach lighting and navigation aids in the terminal control areas (TCAs) of private aircraft airports.

This highly complex data analysis was accomplished by a programmer, not a pilot or an FAA investigator, and it collated hundreds of thousands of reports into useful data in a matter of hours.  This had never been done before.

As new and innovative as this was, it was a very simple application that used a limited number of visual parameters at a time. But STARLIGHT is capable of so much more. It can assign things like direction and length of a vector, color of the line or tip, curvature, width and taper to various elements of a search. It can give one shape to one result and a different shape to another. This gives significance to “seeing” a cube versus a sphere, or to seeing rounded corners on a flat surface instead of square corners on an egg-shaped surface. Everything visual can have meaning, but what is important is to spot anomalies – things that are different – and nothing does that faster than a visual image.

Having 80+ variables at a time that can be interlaced with geospatial and temporal (historical) parameters allows the program to search an incredible amount of data. Since the operator is looking for trends, anomalies and outliers, the visual representation of the data is ideal for spotting them without the operator actually scanning the underlying data. Since the operator is seeing an image that is devoid of the details of numbers or words, he can easily spot some aspect of the image that warrants a closer look.

In each of these trial queries, the operator can, using his gloved hand to point to any given dot, line or object, call up the original source of the information in the form of a scanned image of the accident report or reference source data. He can also touch virtual screen elements to bring out other data or query elements. For instance, he can merge two queries to see how many accidents near airports (red dots) had more than two passengers or were single engine aircraft, etc. Someone looking on would see a guy with weird glasses waving his hand in the air but in the eyes of the operator, he is pressing buttons, rotating knobs and selecting colors and shapes to alter his room-filling graphic 3-D view of the data.

In its use at NSA, they add one other interesting capability: pattern recognition. It can automatically find patterns in the data that would be impossible for any real person to find by looking at the tons of data. For instance, they put in a long list of words that are linked to risk assessments – such as plutonium, bomb, kill, jihad, etc. Then they let it search for patterns.  Suppose there are dozens of phone calls being made to coordinate an attack but the callers are from all over the US. Every caller is calling someone different, so no one number or caller can be linked to a lot of risk words. STARLIGHT can collate these calls and find the common linkage between them, and then it can track the calls, callers and discussions in all other media forms.  If the callers are using code words, it can find those words and track them.  It can even find words that are not used in a normal context, such as referring to an “orange blossom” in an unusual manner – a phrase that was once used to describe a nuclear bomb.
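Here is a toy sketch of that linkage idea: callers scattered across the country each use a risk word with a different contact, but the contacts all touch one common number, and a simple graph walk surfaces that hub. The call records, risk words and scoring are invented for illustration.

# Minimal sketch of linking otherwise unrelated flagged callers through a
# shared contact. Data and scoring are invented for illustration.
from collections import defaultdict

RISK_WORDS = {"plutonium", "bomb", "jihad"}

def flag_calls(call_records):
    """call_records: list of (caller, callee, transcript). Return flagged callers."""
    return {caller for caller, _, text in call_records
            if RISK_WORDS & set(text.lower().split())}

def common_contacts(call_records, flagged):
    """Map each contact to the flagged callers who reached them directly or
    through one intermediary, then return contacts shared by several."""
    reach = defaultdict(set)
    edges = defaultdict(set)
    for caller, callee, _ in call_records:
        edges[caller].add(callee)
    for caller in flagged:
        for hop1 in edges[caller]:
            reach[hop1].add(caller)
            for hop2 in edges[hop1]:
                reach[hop2].add(caller)
    return {contact: callers for contact, callers in reach.items() if len(callers) > 1}

if __name__ == "__main__":
    calls = [
        ("555-0101", "555-0201", "the bomb parts are ready"),
        ("555-0102", "555-0202", "jihad begins friday"),
        ("555-0201", "555-0300", "confirm the schedule"),
        ("555-0202", "555-0300", "confirm the schedule"),
    ]
    flagged = flag_calls(calls)
    print("flagged callers:", flagged)
    print("shared contacts:", common_contacts(calls, flagged))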

Now imagine the list of risk words and phrases to be hundreds of thousands of words long. It includes phrases and code words and words used in other languages. It can include consideration for the source or destination of the call – from public phones or unregistered cell phones. It can link the call to a geographic location within a few feet and then track the caller in all subsequent calls. It can use voice print technology to match calls made on different devices (radio, CB, cell phone, landline, VOIP, etc.) by the same people. This is still just a sample of the possibilities.

STARLIGHT was the first generation and was only as good as the data that was fed into it through scanned documents and other databases of information. A later version, code named Quasar, was created that used advanced data mining and ERP (enterprise resource planning) system architecture that integrated the direct feed from legacy system information gathering resources as well as newer technologies.

(ERP is a special mix of hardware and software that allows a free flow of data between different kinds of machines and different kinds of software and data formats.  For instance, the massive COBOL databases at the IRS loaded on older model IBM mainframe computers can now exchange data easily with NSA Cray computers using the latest and most advanced languages and database designs.  ERP also has resolved the problem that each agency has a different encryption and data security format and process.  ERP does not change any of the existing systems but it makes them all work smoothly and efficiently together.)

For instance, the old STARLIGHT system had to feed recordings of phone calls into a speech-to-text processor and then the text data that was created was fed into STARLIGHT. In the Quasar system, the voice monitoring equipment (radios, cell phones, landlines) is fed directly into Quasar as is the direct feed of emails, telegrams, text messages, Internet traffic, etc.  Quasar was also linked using ERP to existing legacy systems in multiple agencies – FBI, CIA, DIA, IRS, and dozens of other federal and state agencies.

So does the government have the ability to track you? Absolutely! Are they doing so? Absolutely! But wait, there’s more!

Above, I said that Quasar was a “later version”. It’s not the latest version. Thanks to the Patriot Act and Presidential Orders on warrantless searches and the ability to hack into any database, NSA now can do so much more. This newer system is miles ahead of the relatively well known Echelon program of information gathering (which was dead even before it became widely known). It is also beyond another older program called Total Information Awareness (TIA). TIA was compromised by numerous leaks and died because the technology was advancing so fast.

The newest capability is made possible by the new bank of NSA Cray computers and memory storage that are said to make Google’s entire system look like an abacus.  NSA combined that with the latest integration (ERP) software and the latest pattern recognition and visual data representation systems.  Added to all of the Internet and phone monitoring and screening are two more additions into a new program called “Kontur”. Kontur is the Danish word for Profile. You will see why in a moment.

Kontur adds geospatial monitoring of every person’s location to their database. Since 2005, every cell phone broadcasts its GPS location at the beginning of every transmission as well as at regular intervals, even when you are not using it to make a call. This was mandated by the Feds, supposedly to assist in 911 emergency calls, but the real motive was to be able to track people’s locations at all times. For the few people still using older model cell phones, they employ “tower tracking”, which uses the relative signal strength and timing of the cell phone signal reaching each of several cell phone towers to pinpoint a person within a few feet.  Of course, landlines are easy to locate, as are all internet connections.
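As a rough illustration of position estimation from tower data, the sketch below uses a weighted centroid of relative signal strengths, a much cruder stand-in for the timing methods described above. The tower positions and the weighting are assumptions.

# Rough sketch of "tower tracking": estimate a handset's position from the
# relative signal strength seen at several towers using a weighted centroid.
# Real systems rely on precise signal timing; this only illustrates the idea.
def weighted_centroid(towers):
    """towers: list of (x_m, y_m, received_power_mw). Stronger signal pulls the
    estimate closer to that tower."""
    total = sum(p for _, _, p in towers)
    x = sum(xm * p for xm, _, p in towers) / total
    y = sum(ym * p for _, ym, p in towers) / total
    return x, y

if __name__ == "__main__":
    towers = [
        (0.0, 0.0, 8.0),      # strong: handset is nearby
        (3000.0, 0.0, 1.0),
        (0.0, 3000.0, 1.0),
    ]
    print("estimated position (m):", weighted_centroid(towers))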

A holdover from the Quasar program was the tracking of commercial data which included every purchase made by credit cards or any purchase where a customer discount card is used – like at grocery stores. This not only gives the Feds an idea of a person’s lifestyle and income but by recording what they buy, they can infer other behaviors. When you combine cell phone and purchase tracking with the ability to track other forms of transactions – like banking, doctors, insurance, police and public records, there are relatively few gaps in what they know about you.

Kontur also mixed in something called geofencing that allows the government to create digital virtual fences around anything they want. Then when anyone crosses this virtual fence, they can be tracked. For instance, there is a virtual fence around every government building in Washington DC. Using predictive automated behavior monitoring and cohesion assessment software combined with location monitoring, geofencing and sophisticated social behavior modeling, pattern mining and inference, they are able to recognize patterns of people’s movements and actions as threatening. Several would-be shooters and bombers have been stopped using this equipment.  You don’t hear about them because they do not want to explain what alerted them to the bad guys’ presence.
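A minimal sketch of a circular geofence check: flag a position report that falls within a set radius of a watched location. The coordinates and radius below are invented for illustration.

# Minimal sketch of a circular geofence check using great-circle distance.
# Coordinates and radius are invented for illustration.
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_geofence(position, fence_center, radius_m):
    return haversine_m(*position, *fence_center) <= radius_m

if __name__ == "__main__":
    watched = (38.8899, -77.0091)            # example watched location
    report = (38.8905, -77.0102)             # phone position report
    print(inside_geofence(report, watched, radius_m=250.0))  # True: inside the fence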

To talk about the “Profile” aspect of Kontur, we must first talk about why and how it is possible, because it became possible only when the Feds were able to create very, very large databases of information and still make effective use of that data. It took NSA 35 years of computer use to get to the point of using a terabyte of data. That was back in 1990 using ferrite core memory. It took 10 more years to get to a petabyte of storage – that was in early 2001 using 14-inch videodisks and RAID banks of hard drives. It took four more years to create and make use of an exabyte of storage. With the advent of quantum memory using gradient echo and EIT (electromagnetically induced transparency), the NSA computers now have the capacity to store and rapidly search a yottabyte of data and expect to be able to raise that to 1,000 yottabytes within two years.  A yottabyte is 1,000,000,000,000,000 gigabytes (10 to the 24th power bytes), or roughly 2 to the 80th power bytes.

This is enough storage to store every book that has ever been written in all of history…..a thousand times over.  It is enough storage to record every word of every conversation by every person on earth for a period of 10 years.  It can record, discover, compute and analyze a person’s life from birth to death in less than 12 seconds and repeat that for 200,000 people at the same time.
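A rough sanity check of that claim, using loose assumptions of my own (about 7 billion people, 16,000 spoken words per person per day, 6 bytes per transcribed word):

# Back-of-the-envelope check of the storage claim above. The per-person word
# count and bytes-per-word figures are loose assumptions for illustration.
PEOPLE = 7e9
WORDS_PER_DAY = 16_000
BYTES_PER_WORD = 6
YEARS = 10

total_bytes = PEOPLE * WORDS_PER_DAY * BYTES_PER_WORD * 365 * YEARS
yottabyte = 1e24
print(f"~{total_bytes:.2e} bytes, or {total_bytes / yottabyte:.6f} of a yottabyte")

Even with generous numbers, ten years of transcribed speech works out to only a few exabytes, a tiny sliver of a single yottabyte.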

To search this much data, they use a bank of 16 Cray XT Jaguar computers that do nothing but read and write to and from the QMEM – quantum memory. The look-ahead and read-ahead capabilities are possible because of the massively parallel processing of a bank of 24 other Crays that gives an effective speed of about 270 petaflops. Speeds are increasing at NSA at a rate of about 1 petaflop every two to four weeks. This kind of speed is necessary for things like pattern recognition and making use of the massive profile database of Kontur.

In late 2006, it was decided that NSA and the rest of the intelligence and right wing government agencies would stop this idea of real-time monitoring and begin developing a historical record of what everyone does. Being able to search historical data was seen as essential for back-tracking a person’s movements to find out what he has been doing and whom he has been seeing or talking with. This was so that no one would ever again accuse the government or the intelligence community of not “connecting the dots”.

But that means what EVERYONE does! As you have seen from the above description, they already can track your movements and all your commercial activities as well as what you say on phones or emails, what you buy and what you watch on TV or listen to on the radio. The difference now is that they save this data in a profile about you. All of that and more.

Using geofencing, they have marked out millions of locations around the world, including obvious things like stores that sell pornography, guns, chemicals or lab equipment. Geofenced locations include churches and organizations like Greenpeace and Amnesty International. They have moving geofences around people they are tracking, like terrorists, but also political opponents, left wing radio and TV personalities and leaders of social movements and churches. If you enter their personal space – close enough to talk – then you are flagged, and then you are geofenced and tracked.

If your income level is low and you travel to the rich side of town, you are flagged. If you are rich and travel to the poor side of town, you are flagged. If you buy a gun or ammo and cross the wrong geofence, you will be followed. The pattern recognition of Kontur might match something you said in an email with something you bought and somewhere you drove in your car to determine you are a threat.

Kontur is watching and recording your entire life. There is only one limitation to the system right now. The availability of soldiers or “men in black” to follow up on people that have been flagged is limited, so they are prioritizing whom they act upon. You are still flagged and recorded but they are only acting on the ones that are judged to be a serious threat now.  It is only a matter of time before they can find a way to reach out to anyone they want and curb or destroy them. It might come in the form of a government mandated electronic tag that is inserted under the skin or implanted at birth. They have been testing these devices on animals under the disguise of tracking and identification of lost pets. They have tried twice to introduce these to all the people in the military or in prisons. They have also tried to justify putting them into kids for “safety”. They are still pushing them for use in medical monitoring. Perhaps this will take the form of a nanobot.  So small that you won’t even know you have been “tagged”.

These tags need not be complex electronic devices.  Every merchant knows that RFID tags are so cheap that they are now installed at the manufacturing plant for less than 1 cent per item.  They consist of a special coil of wire or foil cut to a very specific length and folded into a special shape.  It can be activated and deactivated remotely.  This RFID tag is then scanned by an RF signal.  If it is active and you have taken it out of the store, it sounds an alarm.  Slightly more sophisticated RFID tags can be scanned to reveal a variety of environmental, location, time and condition data.  All of this information is gathered by a device that has no power source other than the scanning beam from the tag reader.  A 1 cubic millimeter tag – 1/10th the size of a TicTac – can collect and relay a huge amount of data, will have a nearly indefinite operating life and can be made to lodge in the body so you would never know it.

If they are successful in getting the population to accept these devices and then they determine you are a risk, they simply deactivate you by remotely popping open a poison capsule using a radio signal. Such a device might be totally passive in a person that is not a threat, but it might be lethal, or it can be programmed to inhibit the motor-neuron system or otherwise disable a person that is deemed to be high-risk.

Certainly this sounds like paranoia and you probably say to yourself, that can never happen in a free society.  If you think that, you have just not been paying attention.  Almost everything in this article can be easily researched online.  The code names of Quasar and Kontur are not public knowledge yet but if you look up the design parameters I have described, you will see that they are in common usage by NSA and others.  There is nothing in this article that cannot be verified by independent sources.

As I said in the beginning of this article, if the technology exists and is being used by the government or corporate America and it is public knowledge, then you can bet your last dollar that there is some other technology that is much more effective that is NOT public knowledge that is being used.

Also, you can bet that the public talk of “protecting privacy” and “civil rights” places absolutely no limitations or restrictions on the government if they want to do something. The Bush/Cheney assault on our rights is a most recent example but is by no means rare or unusual.  If they want the information, laws against gathering it have no effect.  They will claim National Security or classified necessity, or simply do it illegally, and if they get caught, they will deny it.

Here are just a few web links that might convince you that this is worth taking seriously.

http://www.democracynow.org/2006/3/1/how_major_corporations_and_government_plan

http://www.spychips.com/

http://starlight.pnl.gov/
http://en.wikipedia.org/wiki/Starlight_Information_Visualization_System
http://www.google.com/#q=starlight+pnnl&hl=en&prmd=v&source=univ&tbs=vid:1&tbo=u&ei=1zZGTNGkNYSBlAfTsISTBA&sa=X&oi=video_result_group&ct=title&resnum=4&ved=0CC8QqwQwAw&fp=d706bc2a5dba00d4

http://gizmodo.com/5395095/the-nsa-to-store-a-yottabyte-of-your-phone-calls-emails-and-other-big-brothery-stuff

http://www.greaterthings.com/News/Chip_Implants/index.html

http://computer.howstuffworks.com/government-see-website1.htm

http://www.newsweek.com/2010/02/18/the-snitch-in-your-pocket.html

http://venturebeat.com/2010/06/25/government-sites-to-track-behavior-target-content/

http://www.seattlepi.com/local/269969_nsaconsumer12.html

http://www.usatoday.com/news/washington/2006-05-11-nsa-reax_x.htm

Invisible Eyes – The Army can see EVERYTHING

As an advisor to the Dept. of Defense (DoD) on issues of advanced technology, I have been called in to observe, test or evaluate a number of advanced weapons systems and other combat related new technology equipment. Let me tell you about the latest I investigated in Iraq and Afghanistan.

I was asked to evaluate the combat durability of a new multi-use sensor and communication system that can be deployed from an aircraft. I was flown to Baghlan and, after a day’s rest, I was invited on a flight in a C-130. We flew northeast over the mountains near Aliabad and approached an outpost base near Khanabad. Just before we landed, we were vectored to a large flat area just northwest of the base. The ramp on the C-130 was lowered and we all put on harnesses. A man in combat fatigues carried a large canvas bag to the rear of the ramp and pulled out one of several devices from the bag. It looked like a small over-inflated inner tube with two silver colored cylinders on top. It had several visible wires and smaller bumps and boxes in the hub and around the cylinders. It looked like it was perhaps 16 to 18 inches in diameter and perhaps 6 inches thick. The man pulled a tab, which drew out what looked like a collapsible antenna, and tossed the device out the ramp. He then took others out and did the same as we flew in a large circle – perhaps 20 miles in diameter – over this flat plain near the camp, tossing out 12 of these devices and then a final one that looked different. We then landed at the base.

I was taken to a room where they gave me a slide show about this device. It was called Solar Eye or SE for short. The problem they were addressing is the collection of intelligence on troop movements over a protracted period of time, over a large geographic area. The time periods involved might be weeks or months and the areas involved might be 10 to 25 square miles. It is not cost effective to keep flying aircraft over these areas and even if we did, that covers only the instant that the plane is overhead. Enemy troops can easily hide until the plane or drone is gone and then come out and move again. Even using very small drones gives only a day or two at most of coverage. The vast areas of Afghanistan demanded some other solution.

Stationary transmitters might work but the high mountains and deep valleys make reception very difficult unless a SATCOM dish is used and that is so large that it is easily spotted and destroyed. What was needed was a surveillance system that could monitor movements using visual, RF, infrared and vibration sensors. It had to be able to cover a large area which often meant that it had to be able to look down behind ridge lines and into gullies. It had to be able to operate for weeks or months but not cost much and not provide the enemy any useful parts when and if they found it. This was a tall order but those guys at NRL figured it out. Part of why I was called in is because I worked at NRL and a few of the guys there knew me.

After lunch, we got back to the lecture and I was finally told what this device is. When the device is tossed out, a tiny drogue chute keeps it stable and reduces its speed enough so it can survive the fall. The extended antenna helps to make it land on its bottom or on its side. If it lands on its side, it has a righting mechanism that is amazing. The instructor demonstrated. He dropped an SE on the floor and then stepped back. What I thought was a single vertical antenna was actually made up of several rods that began to bend and expand outward from a single rod left in the center. These other rods began to look like the ribs of an umbrella as they slowly peeled back and bent outward. The effect of these rods was to push the SE upright so that the one center rod was pointing straight up.

When I asked how it did that, I was told it uses memory wire – a special kind of wire that bends to a predetermined shape when it is heated, in this case by an internal battery. After the SE was upright, the wires returned to being straight and aligned around the center vertical rod.

“OK, so the device can right itself – now what?” I said. The instructor referred me back to the slide show on the computer screen. I was shown an animation of what looked like a funny looking balloon expanding from the center of the SE and inflating with a gas that made it rise into the air. He was pointing to the two cylinders and the inflatable inner tube I had seen earlier. The balloon rises into the air and the animation made it appear that it rose very high into the air – thousands of feet high.

The funny looking balloon was shaped like a cartoon airplane with wings and a tail, with some odd panels on the top of the wings and tail. I finally said I was tired of being spoon fed these dog and pony shows and I wanted to get to the beef of the device. They all smiled and said, OK, here is how it works.

The SE lands and rights itself, and then those rods which were used to right it are rotated and sent downward through the center of the SE into the ground. They have a small amount of threaded pitch on them and, when rotated, they screw into the soil. While they are screwing into the hard ground, they are also being bent again by an electrical current that makes them curve in the soil as they penetrate. The end result looks like someone opened an umbrella underground beneath the SE. Since these rods are nearly 3 feet long, they anchor the SE to the ground very firmly.

The cylinders then inflate a special balloon that is made of some very special material. The Mylar is coated with a material that makes it act as a solar panel, creating electricity. The special shape of the balloon not only holds it facing into the wind but it also keeps it from blowing too far downwind. Sort of like the way a sailboat can sail into the wind, this balloon can resist the upper level winds by keeping the tether as vertical as possible. The balloon rises to between 5,000 and 15,000 feet – depending on the terrain and the kind of surveillance they want to do. It is held by a very special tether.

I was handed a tangled wad of what looked like the thin fiberglass threads that make up the cloth used for fiberglass boats. It was so lightweight that I could barely feel it. I had a wad about the size of a softball in my hand and the instructor told me I had nearly 2,000 feet in my hand. This tether is made from a combination of carbon fibers and specially made ceramics and it is shaped like an over-inflated triangle. What is really amazing is that it is less than one centimeter wide and made with an unusual color that made it shimmer at times and at other times it seemed to just disappear. The material was actually very complex as I was to learn.

The unique shape and material of the tether uses the qualities of the carbon fiber coating and metallic ceramic core to provide some unusual electromagnetic qualities. The impedance of the tether as seen by the RF signal in it is a function of the time-phased signal modulation. In other words, the modulation of the signal can cause the tether to change its antenna tuning aspects to enhance or attenuate the RF signal being sent or received. Using the central network controller, all of the SEs can be configured to act as alternating transmitters to other SEs and receivers from other SEs. This antenna tuning also comes in handy because every SE base unit also can function as a signal intelligence (SIGINT) receiver – collecting any kind of radiated signal from VLF to SHF. Because the antenna can be tuned to exact signal wavelengths and can simulate any size antenna at any point along its entire length, it can detect even very weak signals. The networking analysis system monitor and processor (SMP) records these signals and sends them via satellite for analysis when instructed to do so by the home central command.

The system combines the unique properties of this tether line with three other technologies. The first is an ultra wide-band (UWB) high frequency, low power and exceptionally long range transceiver that uses the UWB in a well controlled time-phase pulsed system that makes the multiple tethered lines act as a fixed linear array despite their movement and vertical nature. This is sometimes called WiMax using a standard called 802.16 but in this case, the tether functions as a distributed antenna system (DAS) maximizing the passive re-radiation capability of WiMax and making maximum use of the dynamic burst algorithm modulation. This means that when the network controlling system monitor determines that it is an optimum time for a specific SE to transmit, it uses a robust burst mode that enhances the power per bit transmitted while maintaining an optimum signal strength to noise ratio. By using this burst mode method in a smart network deployment topology, the SE overcomes the limitations of WiMax by providing both high average bit rates and long distance transmissions – allowing the SEs to be spaced as much as 100 miles apart. The SE tethers function as both a horizontal and vertical adaptive array antenna in which MIMO is used in combination with a method called Time Delayed Matrix-Pencil method (TDMP) to distinguish direct from reflected signals and to quantify phase shifts between different SE tethers connected to the system monitor. This creates a powerful and highly accurate Direction of Arrival (DOA) capability in very high resolution from nano-scale signal reflections.

Combining the precision DOA capability with an equally precise range capability is accomplished using the time-phased pulse which creates powerful signals that are progressively sent up the tether and then systematically cancelled out at certain distances along the tether using destructive echo resonance pulses. The effect is to move the emitted signal from the bottom of the tether along the tether as if it were a much shorter antenna but was traveling up and down the height of the tether. Since effective range is directly proportional to the height of the transmission, this has the effect of coordinating the emitted signal to distance. Using the range data along with the DOA, every detail of the surrounding topography can be recreated in the computer’s imaging monitor and the processor can accurately detect any movement or unusual objects in the field of coverage.

The second adapted technology is loosely based on a design sometimes referred to as the Leaky Coax or ported coax detector. The unique metallic Mylar and conductive ceramics in the tether give the electrical effect of being a large diameter conductor – making insertion losses almost zero – while allowing for an optimum pattern of non-uniformly spaced slots arranged in a periodic pattern that maximizes and enhances the radiating mode of the simulated leaky coax. The idea is that the emitted signal from one SE is coupled to the receiver in adjacent SEs in a manner that can be nulled out unless changes are made in the area in which the emitted signal is projected. The advantage of using the ported coax coupling method is that the signal needed for this detection process is very low power partly because the system makes use of the re-radiation of the signal in sort of an iterative damper wave that maximizes the detection of any changes in the received direct and reflected signals. In simple terms, the system can detect movement over a very large area by detecting changes in a moving temporal reference signal if anything moves in the covered area. In combination with the ultra wide band, spread spectrum transceiver, this detection method can reach out significant distances with a high degree of accuracy and resolution.
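The "null it out unless something changes" idea can be sketched simply: keep a baseline sweep of the received signal, subtract it from each new sweep and flag motion when the residual energy crosses a threshold. The sample data and threshold below are illustrative assumptions, not the actual ported-coax processing.

# Simplified sketch of change detection against a stored baseline sweep.
# Sample data and threshold are illustrative assumptions.
import numpy as np

def motion_detected(baseline: np.ndarray, current: np.ndarray,
                    threshold: float = 0.05) -> bool:
    residual = current - baseline
    residual_energy = float(np.mean(residual ** 2))
    baseline_energy = float(np.mean(baseline ** 2)) or 1.0
    return (residual_energy / baseline_energy) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = np.sin(np.linspace(0, 20 * np.pi, 1000))        # quiet-area reference sweep
    quiet = baseline + rng.normal(0, 0.01, 1000)               # nothing moving
    disturbed = baseline.copy()
    disturbed[400:500] += 1.0                                  # a reflector appears mid-sweep
    print("quiet sweep:", motion_detected(baseline, quiet))         # False
    print("disturbed sweep:", motion_detected(baseline, disturbed)) # True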

The third adapted technology is loosely based on magnetic resonance imaging (MRI). MRI’s are used to detect non-metallic and soft tissue in the body by using a method that blankets the subject in a time-phased magnetic field and then looks for minute timed changes to reflections of that magnetic field. In the case of the SE, the magnetic field is the WiMax, ultra wideband time-phased signal emitted by the tethers. It can blanket a large area with an electromagnetic field that senses changes in the signal reflection, strength and phase so that it can detect both metal and non-metal objects, including humans.

Variations on these three technologies are combined with a networking analysis system monitor and processor (SMP) that can receive signals and control the emissions from multiple SEs and process them into intelligence data. The system uses a combination of wires and lasers to speed communications to and from the SMP and the SMP can use any one or all of the SEs for selective analysis of specific geographic or electromagnetic signals.

Finally there is the balloon. It rises up above the clouds and sits in the bright sun. It has a surface that performs several functions. The outer layer acts sort of like the reverse of automatic dimming sunglasses. That is, it turns a pale blue under bright direct sunlight but it gets darker and darker as the light dims, so that by the time the sun is down completely, the balloon is almost black. Although moonlight does cause it to slightly brighten in color, the moonlight is so direct that it only affects the top, and most of the bottom half remains black. During the day, the balloon is one to three miles up and is almost impossible to see without binoculars and knowing exactly where to look. During the night, the only way to know it is there is to see the stars that it blocks, but at that distance it only blocks a very few stars at a time, so again it is nearly impossible to see. Since the tether is also nearly invisible, you have to be standing right next to the SE to be able to see any of it.

Just under this outer coating is a layer of flexible solar sensitive material that acts as a giant solar panel. It produces about 25 watts of power at peak performance but the SE system uses only about half that so the rest charges a Lithium-Cobalt Ion battery in the SE base unit. This is more than enough to power the system at night with enough left over to cover several cloudy days.

The bottom half of the balloon is coated with a reflective Mylar facing the inside of the balloon while the upper half does not have this coating. This creates a reflective collection surface for RF signals being sent to and from satellites and high-flying planes. Inside the balloon, antenna elements sit within this semi-parabolic reflector, which is several feet wide – making it easy to send and receive signals at very low energy levels. The SHF signals being sent are brought to the balloon's internal antenna by superimposing them on top of the UWB signals on the carbon fiber Mylar surface of the tether. This is done with remarkable efficiency and hardly any signal loss.

Now that I had gotten the entire presentation, I was taken back into the C-130 where there was a small desk with a computer monitor and other equipment. The screen showed a map with the 12 SEs marked with red blinking dots. An internal GPS provided exact positions for the SE base units, the central networking SMP and the balloons. Beside each red dot was a blue dot off to one side showing the relative position of the balloon. Around each red dot was a light-blue circle that represented the coverage area – each light-blue circle overlapped two or more other coverage circles. Finally, there was a larger light-yellow circle around all of the SEs showing the coverage area of the central networking SMP, which had been dropped near the center of the SEs. Altogether, these circles covered an area of about 100 square miles but were capable of covering more than three times that area.

The operator then flipped a few switches and the screen changed over to what looked like an aerial monochrome view of a 3-D topographical map – showing the covered terrain in very good detail, using shading and perspective to convey the 3-D effect. Then the circles on the screen began to pulsate and small lights appeared. These lights were different colors – red for metal objects, blue for animals or people and green for anything else that was moving or was inconsistent with the topography. It was programmed to flag anything that MIGHT be unusual, such as objects that had sharp corners, smooth rounded edges or a symmetrical geographic pattern. When the operator moved a circular cursor (trackball) over any of these objects, the data lines on the bottom of the screen would fill with all kinds of information such as its speed, direction, height above ground, past and projected paths, etc. Once an object was "hooked" by the trackball, it was given a bogie number and tracked continuously. The trackball also allowed for zooming in on the bogie to get increased detail. We spotted one blue dot, hooked it and then zoomed in on it. It was about 4 miles outside the SE perimeter but we were able to zoom in until it looked like a grainy picture from a poor signal on an old TV set. Even at that level of detail, it was clear that the object was a goat – actually a ram, because we could see his horns. Considering it was about 1 AM, and this was a goat 69 miles from where we were and 4 miles from the nearest SE, that resolution was incredible.

We zoomed out again and began a systematic screening of all of the red, blue and green dots on the screen. For objects the size of cars, we could reach out more than 40 miles from the ring of SEs. For people, we could reach out about 15 miles outside the ring, but inside the ring we could see down to rabbit-sized animals and could pick out individual electrical power poles and road signs.

I was shown a map of the other locations where the other Solar Eye arrays were deployed and their coverage areas. This is the primary basis for the upcoming Marjah campaign into the Helmand Province – a huge flat plateau that is ideal for the invisible Solar Eyes.

I May Live Forever!

This story is unlike any other on this blog. For one thing, I am not a medical researcher and have had very little exposure to or interest in the medical sciences, so I approached this subject from an engineer's perspective and as an investigator who thinks outside the box. I did not and could not follow some of the intricate details of many of the hardcore medical research reports I read. I mostly jumped to the conclusions and stitched together the thoughts and ideas that made sense to me. In retelling it, I have quoted parts of the medical studies for those of you who understand them and then provided a translation based on my own interpretation. This story is also different because it may well change my life significantly: when all my research ended, I began experimenting on myself and, as a result, I may live forever. Here's the whole story from the beginning….

Some time ago, I became interested in life extension and began reading about it, in all its forms. I'm old and getting older, so this was something that directly applied to my life. My research began with the known and leading edge of the science of experimental and biomedical gerontology. I read about the actual biology of senescence – the process of aging and what it is that actually ages. I learned the role of telomeres in the cell cycle and how some cells are immortal (germ and keratinocyte stem cells). I also learned that the telomerase enzyme, present in every cell, could turn a mortal cell into an immortal cell by stopping the telomere clock (the Hayflick Limit) that puts a limit on the length of the cells' telomeres. I learned that stem cells exist in many forms and types and have a wide range of capabilities and effects.

The above is a one-paragraph summary of a huge amount of study and research over a period of a year or more, and it included tons more detail about all aspects of the science and the current R&D taking place in labs all over the world. The aging populations of most of the world's wealthier nations have increased the interest in and the funding for such studies. One estimate is that more than 25 experimental biomedical gerontology research studies are concluded and published somewhere every month.

After a year of reading and study, my research into this subject reminded me of someone who spends two weeks climbing a mountain, then looks up and realizes he has covered only about 10% of its height. I could see that there was an enormous amount of material to study and that I would never really be able to learn it all….but I wanted to reach beyond what was being done and see what else I could find, so I changed direction in my studies.

I decided to think outside the box and try to jump directly into the areas of controversial medical research. To do this, I began by looking at history. I have been a student of history all my life. I love the subject. I also love finding that there is almost always some truth to ancient legends and myths. Everything from Noah's Great Flood to Atlantis to the Yeti has some basis in fact or in history that has been embellished over the years by countless retellings. If you look hard enough, you can find the tidbit of truth that started it all.

So I began to look for some connection in ancient myths and legends about immortality and life extension. As you might guess, there are thousands of such references. Stories of the Fountain of Youth, the source of life and the miracle of birth get all mixed up in thousands of references to various aspects of immortality and recovery of youth. To sift thru all this and to gather and collate these old stories, I used my own concept search engine called Plato.

Plato is simply a search engine like Google or Bing, but it uses a unique searching technique that I invented: it combines a thesaurus search, advanced data mining techniques and pattern recognition with a powerful neural network that provides predictive modeling and computational EDA methods. These modules pass the search syntax back and forth in an iterative, Monte Carlo statistical manner to quantify how the data it finds relates to applicable concepts, without relying on simple key-word searches.

It doesn't just research my key-word search syntax; it can search for a concept. A simple example is searching for "Houses in the Arctic". It will use a thesaurus lookup to find all substitutes for House and Arctic. It will then extend its search into the cultures in which House may mean something different in the context of the Arctic, so that House will relate to igloo, tent, ice cave or snow burrow, and Arctic might include Antarctic, polar north, polar south or "above the Arctic Circle". It will then collate the findings into a list of the most logical and best-documented responses to my original query.
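Here is a toy illustration of that kind of concept expansion – my own sketch, not Plato's actual code. The thesaurus table, the scoring rule and the sample documents are all made-up assumptions.

```python
from itertools import product

# Toy concept search: widen each query term with synonyms and culture-specific
# substitutes, then score documents against every expanded phrase.
THESAURUS = {
    "house":  ["house", "home", "dwelling", "shelter", "igloo", "tent", "ice cave", "snow burrow"],
    "arctic": ["arctic", "antarctic", "polar north", "polar south", "above the arctic circle"],
}

def expand_query(terms):
    """Return every combination of substitutes for the query terms."""
    pools = [THESAURUS.get(t.lower(), [t]) for t in terms]
    return [" ".join(combo) for combo in product(*pools)]

def concept_search(terms, documents):
    """Rank documents by how many expanded phrases they satisfy."""
    phrases = expand_query(terms)
    scored = []
    for doc in documents:
        hits = sum(1 for p in phrases if all(w in doc.lower() for w in p.split()))
        if hits:
            scored.append((hits, doc))
    return sorted(scored, reverse=True)

docs = ["The Inuit built an igloo above the arctic circle.",
        "A brick house in the suburbs."]
print(concept_search(["house", "arctic"], docs))   # only the igloo document matches
```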

Plato has been my research tool for more than two decades and I have been enhancing its capabilities almost continuously for most of that time as new methods, software and algorithms become available. I often use commercially available software matched to ERP-style data exchanges or simple macros to interlink the applications with my own coded algorithms. I recently added a module that does a new kind of pattern searching called NORA – non-obvious relationship analysis – which finds links between facts and data that would otherwise be missed. NORA can find links between references that use nicknames, alternate spellings, foreign word substitutes and seemingly unrelated data by applying nonintuitive matching and disambiguation algorithms. NORA is actually just the next logical, incremental advance from my original simple Bayesian classifier, to my newer neural-net pattern recognition and k-nearest neighbor (KNN) algorithm, to a more sophisticated combination of all of those methods that makes up NORA.
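As a flavor of what a NORA-style link looks like, here is a minimal sketch that links records referring to the same person under a nickname or an alternate spelling. The nickname table and the similarity threshold are assumptions of mine; the real module is far more elaborate.

```python
from difflib import SequenceMatcher

# Toy non-obvious-relationship check: normalize nicknames, then compare the
# normalized names with a fuzzy string similarity score.
NICKNAMES = {"bill": "william", "bob": "robert", "peggy": "margaret"}

def normalize(name: str) -> str:
    parts = [NICKNAMES.get(p, p) for p in name.lower().replace(".", "").split()]
    return " ".join(sorted(parts))

def linked(a: str, b: str, threshold: float = 0.8) -> bool:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

print(linked("Bill Jonson", "William Johnson"))   # True  - nickname plus misspelling
print(linked("Bill Jonson", "Margaret Smith"))    # False - unrelated records
```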

Using NORA, Plato often finds interrelationships that would never have occurred to me and then it documents and prioritizes them and presents to me why they are important. Such searches are often done by mainframes and supercomputers, but I don't have all that, so I have to rely on my own version of distributed processing in which I use my own bank of PCs plus some others that I "borrow" by farming work out in an N-tier architecture of commercial, university and government mainframes and other PCs. This is particularly useful when searches can be performed independently and the results can then be collated and evaluated on my own computers.
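The farm-out-and-collate pattern itself is simple; here is a bare-bones sketch of it, assuming each query can be run independently. The worker function is a placeholder for whatever corpus or remote machine would actually be queried.

```python
from concurrent.futures import ProcessPoolExecutor

def run_search(query: str) -> list[str]:
    # Placeholder worker: in practice this would query one corpus or one
    # borrowed machine and return its partial result list.
    return [f"result for {query!r}"]

def distributed_search(queries: list[str]) -> list[str]:
    """Run the independent searches in parallel, then merge the partial results."""
    with ProcessPoolExecutor() as pool:
        partial_results = pool.map(run_search, queries)
    merged: list[str] = []
    for part in partial_results:
        merged.extend(part)
    return merged

if __name__ == "__main__":
    print(distributed_search(["apples immortality", "fountain of youth", "golden apple myth"]))
```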

As you might expect, when I turned Plato onto this study, it did its usual job of searching, often for more than 5 or 6 days and nights (using my six interlinked computers and 18 terabytes of HDD space plus all the other systems that I could make use of). Each search gave me new insights and allowed me to make the next search more specific and productive. When it was done, it found something very interesting……..apples.

It found that when you condense thousands of ancient myths, legends and pieces of folklore, apples come up an extraordinary number of times in relation to stories of immortality and anti-aging. Oh, and not just any apples. It seems that only Golden Delicious and Braeburn apples have the most consistent connection to life-giving effects. Obviously, I had to follow this new idea and read many of the stories and links that Plato had documented. Norse, Greek, Chinese, American Indian and Australian Aboriginal mythologies all have detailed references to stories that relate apples to immortality. Links like that simply cannot be a total coincidence. There has to be more to this than just a common fruit.

This was enough to go on, so I went back to the hard sciences of experimental and biomedical gerontology to see if there was any link to apples. I really got frustrated because for months I found virtually no connection to apples and I was beginning to think I might have gone too far in the wrong direction. It took more than a year and hundreds of searches consuming months of dedicated processing time with Plato's help – but I finally found it. It turns out the reason it took so long is that one of the critical research papers that made the connection was only published in November of 2009. That paper was essentially the keystone of the whole story and provided the final piece of the puzzle that made everything else work and make sense. Here is the connection, but I have to give you some of the other findings first so you can see the series of links that leads to apples.

First the hard science: Fibrocyte is a term used to identify inactive mesenchymal multipotent stem cells (MSC), that is, cells that can differentiate into a variety of cell types. The term "Fibrocyte" contrasts with the term "fibroblast." Fibroblasts are connective tissue cells characterized by synthesis of proteins of the fibrous matrix, particularly the collagens. When tissue is injured – which includes damaged, worn out, aged or destroyed – the predominant mesenchymal cells, the fibroblasts, are able to repair or create new replacement tissues, cells or parts of cells. These fibroblast MSCs are derived from Fibrocytes and from muscle cells and glands.

Recently, the term "Fibrocyte" has also been applied to a blood-borne cell able to leave the blood, enter tissue and become a fibroblast. As part of the more general topic of stem cell biology, a number of studies have shown that the blood contains marrow-derived cells that can differentiate into fibroblasts. These cells have been reported to express the hematopoietic cell surface markers, as well as collagen. These cells can migrate to wound sites, exhibiting a role in wound healing. There are several studies showing that Fibrocytes mediate wound healing and fibrotic tissue repair.

Time to translate: the above says that one form of stem cell is called a Fibrocyte, which can express (a genetics term meaning to create or manifest) as a fibroblast, which is a powerful cell capable of healing or even creating other body cells or cell parts. Fibroblasts can be created from Fibrocytes and from muscle cells. A special form of fibroblast has recently been found in blood and is called the myofibroblast (essentially a fibroblast with muscle-like contractile properties). Myofibroblasts appear to also be created by bone marrow and have been found to be critical to wound healing and tissue repair. Myofibroblasts are a blood-borne stem cell that can give rise to all the other blood cell types, but as you will see, they can do more.

OK, now let's jump to another researcher, who found that Myofibroblasts in wound tissue are implicated in wound strengthening by extracellular collagen fiber deposition and then in wound contraction by intracellular contraction and concomitant alignment of the collagen fibers through integrin-mediated pulling on the collagen bundles. A Myofibroblast can contract by using a muscle-type actin-myosin complex, rich in a form of actin called alpha-smooth muscle actin. These cells are then capable of speeding wound repair by contracting the edges of the wound. More recently it has been shown that the production of fibroblasts can be enhanced with photobiomodulation.

The translation of the above is that Myofibroblasts exist almost everywhere in the body but not in large quantities. Under certain conditions, muscle tissues, bone marrow and other surfaces within the body can create Myofibroblasts. Since Myofibroblasts move within the blood, they can reach everywhere in the body, but Fibrocytes and fibroblasts are confined to specific sites within the body.

Myofibroblasts are also a sort of universal repair kit for cells and organs. They can strengthen organs and cells down to the cell wall using collagen, actin and intracellular contraction, along with constructive rebuilding using special fibers that reinforce and rebuild cells and parts of cells.

Perhaps the most important finding is that photobiomodulation can cause the level of fibroblasts in the body to increase. Fibroblasts have a self-renewal capacity to maintain their own population at an approximately constant level within the body but under special conditions created by photobiomodulation, that population can be made to grow larger. Under certain light conditions, fibroblasts increase in the blood for many hours or days before returning to their preset but relatively low constant level.

Low-level laser therapy (LLLT, also known as photobiomodulation, cold laser therapy and laser biostimulation) has long been known as a medical and veterinary treatment that uses low-level lasers or light-emitting diodes to stimulate or inhibit cellular function. This is a really hot topic in the medical community because of its implications for non-pharmacological and non-invasive healing. Clinical and laboratory research investigating optimal wavelengths, power densities, treatment durations and treatment intervals is being performed in dozens of labs all over the world, and these labs are publishing numerous papers on the subject. Among these papers, I (and Plato) found several studies showing that the density of fibroblast cells and phytochemicals increases significantly under LLLT.

As a universal repair kit, Myofibroblasts would be more desirable than fibroblasts because Myofibroblasts can move throughout the body and repair many more kinds of cells. However, since fibroblasts are more abundant than Myofibroblasts and are continually being created by Fibrocytes, the stimulation of fibroblast production using a special form of light therapy is a major discovery. The challenge now is to get the fibroblasts to create more Myofibroblasts.

It is a well-established fact that apples exhibit strong antioxidant and antiproliferative activities and that the major part of their total antioxidant activity comes from their combination of phytochemicals. Phytochemicals, including phenolics and flavonoids, are the bioactive compounds in apples.

A remarkable finding was made in November 2009. While experimenting with the variables in LLLT treatments and measuring the production of stem cells, it was discovered by accident that apples significantly increased the conversion of fibroblasts cells into Myofibroblasts cells. Further research narrowed the effect to just two types of apples, showing that Golden Delicious and Braeburn apples had the best impact on the health and growth of new Myofibroblasts cells.

Further research has shown that this amazing apple effect on the morphology of the cells – they became larger and stronger in the presence of the selected apples – is nearly identical to the effect of Human Growth Hormone (HGH), which is used to stimulate the growth of cells. This means that apples could be the missing piece of the puzzle for growing stronger and longer-lasting cells and could possibly be substituted for HGH therapy.

The net effect of LLLT on patients who also have a daily diet of at least one Golden Delicious or Braeburn apple is a significant improvement in cell morphology (structure) and in the quantity of fibroblast cells, and those cells are converted into Myofibroblast cells in significant quantities.

There is one more piece to the puzzle. Even though Myofibroblasts have these great healing and regenerative powers and can travel anywhere in the body in the blood, we need to direct that effect onto the telomeres so that repairs to that one aspect of the cell can allow the normal cell reproduction and renewal process to continue and not die out with age. To do that, we have to change to another line of scientific inquiry.

Regenerative medicine is the study of how to combine cells, engineer cells and develop suitable biochemical stimulation to improve or replace biological functions, tissues and physio-chemical functions. This is a new field of study that most often makes use of stem cells as the major construction and repair tool. By using stem cells or progenitor cells, researchers have developed methods to induce regeneration with biologically active molecules. The focus of this work has been on the use of the most concentrated and powerful stem cells from embryonic and umbilical cord blood, primarily because they want the fastest and most effective response possible on large repairs – like rebuilding the spine or liver. Although Myofibroblasts are less versatile than embryonic stem cells, they are also multipotent stem cells – meaning that they can repair or rebuild other cells.

Regenerative medicine applies the stem cells directly to damaged areas in the hope that they will repair them. But this method will not work if the problem to be fixed is every cell in the body. Site-specific injections won't work, so we have to rely on the body's natural systems to deliver the stem cells where we want them. The only way to reach them all is by the blood, and the only stem cells that travel effectively in the blood are the Myofibroblasts.

But even moving thru the blood will not automatically make the Myofibroblasts find and repair the telomeres. I had to find a way to make the Myofibroblasts specifically address the repair of the telomeres. To do so, they must somehow be told where to look and what to fix. I found this is done with a method called telomere-targeting agents (TTA). TTAs were developed to tag the telomeres of cancer cells and then use a small molecule called BIBR1532, a telomerase inhibitor, to shorten the cancer cells' telomeres, thus destroying the cancer. TTA has only rarely been used to identify a point of repair rather than a point to inhibit or destroy, but the difference between the two methods is relatively minor.

So, up to this point, we know that Myofibroblasts are stem cells that possess the ability to rebuild, recreate and reproduce a variety of body cells. Such cells are called multipotent progenitor cells (MPC). However, the most recent research has shown that certain specific light frequencies, pulse durations and repeat treatments, when using LLLT in the presence of the essential elements of apples, create not just multipotent stem cells but pluripotent stem cells (PSC). These are cells that can essentially become or repair any cell in the body. It would appear that we have not yet perfected the transformation of all of the fibroblasts into pluripotent stem cells, but many are converted, and many more are converted into MPCs. Both PSCs and MPCs are then applied to rebuild, recreate and produce a variety of body cells through a process known as transdifferentiation. This is not just repair but wholesale recreation or replacement of damaged cells.

These MPC’s and PSC’s can give rise to other cell types that have already been terminally differentiated. In other words, these stem cells can rebuild, recreate and reproduce any other cells. By using the special tagging process called TTA, we can direct these stem cells to seek out and repair the telomeres of cells everywhere in the body.

The next big advance in my research was finding a research project funded by the National Institute on Aging (NIA), which is a part of the National Institutes of Health (NIH). What was odd about this study is that it was taking place at Fort Detrick in Frederick, Maryland. This is somewhat unusual because Ft. Detrick is where the Dept. of Defense does a lot of its classified and dangerous medical research. You would not normally think of aging as being in that group. It got more confusing when I discovered that the labs being used were a part of the National Interagency Confederation for Biological Research (NICBR). This implied that the program was funded across all of the government medical community and that they had enormous resources to pull from. It also spoke to how important this program was. As I looked into this group, more out of curiosity than for content, I discovered that a senior director at the NICBR was an old buddy of mine from my days at DARPA and NRL. He was now a senior ranking officer who managed the funding and scientific direction of multiple programs. I am being somewhat secretive because I don't want his identity to be known. Suffice it to say that I called my old buddy, we met several times and I eventually got a full rundown on the project that I had uncovered.

In brief, what the NICBR is working on is how to enhance the health and recovery of our soldiers and sailors by natural-process means. The program builds upon civilian studies and well-known biological facts, such as the fact that our bodies have a powerful defense against the growth of cancers, tumors and other defects like leukemia and lymphoma. Among these defenses are tumor suppressors, and they work as described above by inhibiting the telomerase of the cancer cells. In the process, they also have the same but slower effect on normal cells – thus contributing to our aging. That means that if these Myofibroblast stem cells are enhanced and are TTA-tagged to repair and lengthen the telomeres of cells, they will do the same for cancer cells, making people highly susceptible to tumors and cancers. If, on the other hand, they enhance the telomerase inhibitor in the cancer cells, they will also accelerate the aging process by reducing the length of telomeres in normal cells. We don't want either one of these.

This joint NICBR team of researchers found that the accelerated lengthening of telomeres (ALT) is a process that can be enhanced using a tumor suppressor called ataxia-telangiectasia mutated kinase, or ATM. Using ATM with Myofibroblast stem cells and TTA tagging gave a marginal benefit of reducing cancers while producing a slightly smaller reduction in cell aging. There was, however, another one of those amazing accidental discoveries. During the testing they had to use various cultured and collected Myofibroblast batches in their attempt to differentiate the effects of the TTA on normal cells from those on cancer cells.

Quite by accident, it was discovered that one batch of the Myofibroblast cells produced an immediate and profound differentiation of normal and cancer cells. This one batch of stem cells had the simultaneous effect of tagging the telomeres to cause the repair and lengthening of the telomeres of normal cells while inhibiting the telomerase of the cancer cells. Upon closer examination, it was found that the only variable was that the Myofibroblasts of that batch were collected from the researcher who had been able to enhance Myofibroblast production using photobiomodulation in the presence of enhanced phytochemicals, including phenolics and flavonoids – the bioactive compounds in ….apples.

The NICBR team immediately zeroed in on the active mechanisms and processes and discovered that when ATM is lacking, aging accelerates, but they found that by manipulation of the p53/p16ink4a expression in the presence of the photobiomodulated Myofibroblast cells, they can differentiate the effects of the TTA on normal cells from those on cancer cells. The method involves using the catalytic subunit telomerase reverse transcriptase (TERT) of the telomerase enzyme to direct the Myofibroblasts to repair the telomere ends by tagging and using the TTAGGG repeat with manipulated p53/p16ink4a-Rb-mediated checkpoints and a very complicated process that involves bmi-1 and p10arf. I am not really sure what that means, but I am assured that this differentiated TTA coding process works and can be used to tag very specific repair sites.

In other words, using differentiated TTA with the photobiomodulated Myofibroblasts, the specially created stem cells can be essentially programmed to rebuild the telomeres back to what they were in childhood without any (known) serious side effects. It was, however, the presence of apples in the process that made the difference between success and failure….again.

Once the differentiated TTA coding is combined with the photobiomodulated Myofibroblasts using TERT, these special stem cells will seek out and perform an endless repair of the normal cell telomeres while suppressing the telomeres on cancer and tumor cells. That will have the effect of stopping or greatly slowing the aging process.

There is just one problem. There are not enough Myofibroblasts in the blood to affect aging sufficiently over a long period of time. But, as described above, fibroblasts are not only created within the bone marrow; their numbers can also be stimulated and expanded in the presence of photobiomodulation (LLLT). AND we know from the above studies that the conversion of fibroblasts into Myofibroblasts is greatly enhanced by the unique biochemicals in apples in the presence of LLLT. Soooo….

In Summary:

I found that I could use a combination of certain apples and low-level laser therapy (LLLT), also known as photobiomodulation, to stimulate both the production of fibroblasts and the conversion of fibroblasts into Myofibroblasts. These light-stimulated Myofibroblast cells, when used with a special cell-tagging process called TTA, can be made to enhance the telomeres of normal healthy cells while suppressing the growth of telomeres of cancer and tumor cells. The end result is that the cells of the body approach immortality.

This has been a long research effort on my part and has led me down many dead-end paths. Plato helped me with a lot of this research and led me into areas that I would not have otherwise pursued. My former Dept. of Defense contacts gave me access to research databases and research findings that are not all available to the public. Many of the medical studies I read were published in obscure journals or newsletters that are only available from a few sources. Many of the links I followed were connections I found between two different studies whose authors were not aware of each other. In other words, I didn't do any of the actual research described here, but I (and Plato) did make a lot of logical connections between the various research papers I found. To my knowledge, no one has taken this as far as I have, except the few medical researcher friends of mine who helped me get access to the LLLT and performed the TTA coding for me.

But this is not just another story; I can tell you today that it works. At least, I am pretty sure it works as I have done this on myself.

The use of the LLLT was easy. I have been using 3.75 watts/cm2 (which is a relatively powerful setting that I gradually worked up to over a year of therapy). I have been experimenting with 640 to 720 nanometers for the scan wavelength (just below the infrared range) in bursts of 370 nanoseconds. All of these are settings that have evolved over time and will continue to be refined. Of course, I also have been playing around with various aspects of using the apples – eating them, cooking them, juice, pulp, skins, seeds, etc. The problem is that any change in the treatment does not have an immediate result. I have to wait weeks to see if there are any changes and then they are often very small changes that can easily go unnoticed. Despite this, the results have been subtle but have accumulated into significant changes.
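For anyone who wants to sanity-check the dosage, here is a back-of-the-envelope calculation. Only the 3.75 W/cm² peak irradiance and the 370-nanosecond burst length come from my settings above; the repetition rate and session length in this sketch are assumptions I picked purely for illustration.

```python
# Back-of-envelope LLLT dose check. Stated values: peak irradiance and burst
# length. ASSUMED values: repetition rate and session length.
PEAK_IRRADIANCE = 3.75      # W/cm^2 (stated above)
BURST_LENGTH    = 370e-9    # seconds (stated above)
REP_RATE        = 10_000    # bursts per second (assumed)
SESSION         = 20 * 60   # seconds per treatment session (assumed)

energy_per_burst   = PEAK_IRRADIANCE * BURST_LENGTH      # ~1.4 microjoules/cm^2
average_irradiance = energy_per_burst * REP_RATE         # ~0.014 W/cm^2
dose_per_session   = average_irradiance * SESSION        # ~17 J/cm^2

print(f"{energy_per_burst * 1e6:.2f} uJ/cm^2 per burst")
print(f"{average_irradiance * 1e3:.1f} mW/cm^2 average irradiance")
print(f"{dose_per_session:.1f} J/cm^2 per session")
```

The point is only to show how the numbers combine, not to recommend a dose.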

I have always been in good health but until 6 months ago, I had bad arthritis in my hands and knees. That is gone. My skin has lost most of that pixelated look that old people get and many of my "age spots" have disappeared. I used to have morning pains and had trouble stretching out or touching the floor. Now I can keep my knees straight and put my hands flat on the floor. I have not been sick despite several exposures to the flu – including H1N1. I have more energy and have stopped wearing my glasses.

As with all of my articles on this blog, I make no representations about this story except to challenge you to research it for yourself. Most of the basics are readily available on the web and a lot of the details can be found in some good medical databases like Medline, EMBASE, PubMed and WISDOM. I also used ARL, NRL and NIH resources.

This all has taken place over the past 38 months, but the self-treatments have only been over the past 7 months, so I have a long way to go to show I will live longer, let alone live forever, but right now I am feeling pretty good.

Just so you'll know, I have submitted several patent applications for various processes and methods that I have described above. In several cases, a patent already exists, but thru a process called drug repositioning, I can apply for a patent for an alternative use of an existing patented drug. This is only necessary for the TTA tagging chemical, and the patent on that chemical expired in 2007. I have applications in on the LLLT treatment settings that I have found to be most successful (not listed in this article) and on the optimum "application" of the apples to the process – I didn't exactly tell the whole story above. There are a few details that make it significantly more effective than what I described above, and those are the parts that are being patented. I say this just so everyone knows that anyone attempting to duplicate my processes will be sued by me for patent infringement. I have a lawyer who will not charge me anything unless he wins, and he is convinced that he can win any such suit.

I want to also caution everyone that parts of this can be very dangerous. If you get it wrong, you can significantly enhance the growth of tumors or cancers in your body and no one can do anything to stop them. Don’t mess with this.

New Power Source Being Tested in Secret

The next time you are driving around the Washington DC beltway, the New York State Thruway, I-80 through Nebraska or I-5 running through California or any of a score of other major highways in the US, you are part of a grand experiment to create an emergency source of electric power.  It is a simple concept but complex in its implementation and revolutionary in its technology.  Let me explain from the beginning…

We cannot generate electricity directly. We have to use either chemical, mechanical, solar or nuclear energy and then convert that energy to electricity – often making more than one conversion, such as nuclear to heat to steam to mechanical to electrical. These conversion processes are inefficient and expensive to do in large quantities. They are also very difficult to build because of environmental groups, inspections, regulations, competition with utilities and investment costs. The typical warfare primer says to target the infrastructure first. Wipe out the utilities and you seriously degrade the ability of the enemy to coordinate a response. The US government has bunkers and stored food and water but has to rely mostly on public utilities or emergency generators for electricity. Since the public utilities are also a prime target, that leaves only the emergency generators, but they require large quantities of fuel that must be stored until needed. A 10-megawatt generator might use 2500 gallons of fuel per day. That mandates a huge storage tank of fuel that is also in demand by cars and aircraft. This is not the kind of tenuous survivability link that the government likes to rely on.

The government has been looking for years for ways to bypass all this reliance on utilities outside its control and this sharing of fuel, with the goal of creating a power source that is exclusively theirs and can be counted upon when all other forms of power have been destroyed. They have been looking for ways to extend their ability to operate during and after an attack for years. For the past ten years or more, they have been building and experimenting with one that relies on you and me to create their electricity. The theory is that you can create electricity with a small source of very powerful energy – such as nuclear – or from a very large source of relatively weak energy – such as water or wind. The difficulty, complexity and cost rise sharply as you go from the weak energy sources to the powerful energy sources. You can build thousands of wind generators for the cost of one nuclear power plant. That makes the weak energy sources the more desirable ones to invest in. The problem is that it takes a huge amount of a weak energy source to create any large volume of electricity. Also, the nature of having a clandestine source of power means that they can't put up a thousand wind generators or build a bunch of dams. The dilemma comes in trying to balance the high power needs with a low cost while keeping it all hidden from everyone. Now they have done all that.

If you have traveled very much on interstate highways, you have probably seen long sections of the highway being worked on in which they cut rectangular holes (about 6 feet long, by 18 inches wide, by nearly four feet deep) in the perfectly good concrete highway and then fill them up again. In some places, they have done this for hundreds of miles – cutting these holes every 20 to 30 feet – tens of thousands of these holes throughout the interstate highway system. Officially, these holes are supposed to fix a design flaw in the highway by adding in missing thermal expansion sections to keep the highway from cracking up during very hot or very cold weather. But that is not the truth.

There are three errors with that logic. (1) The highways already have expansion gaps built into the design. These are the black lines – filled with compressible tar – that create those miles of endless "tickety-tickety-tick" sound as you drive over them. The concrete is laid down in sections with as much as 3 inches between sections that are filled in with tar. These entire sections expand and contract with the weather and squeeze the tar up into those irritating repeating bumps. No other thermal expansion is needed.

(2) The holes they cut (using diamond saws) are dug out to below the gravel base and then refilled with poured concrete. When done, the only sign it happened is that the new concrete is a different color. Since they refilled it with the same concrete that they took out, the filling has the same thermal expansion qualities as the original, so there is no gain. If there were thermal problems before, then they would have had the same problems after the "fix". Makes no sense.

(3) Finally, the use of concrete in our US interstate system was based on the design of the Autobahn in Germany, which the Nazis built prior to WWII. Decades of research were done on the Autobahn, and more on our own highway system, before we built the 46,000 miles of the Eisenhower National System of Interstate and Defense Highways, as it was called back in 1956. The need for thermal expansion was well known and designed into every mile of highway and every section of overpass and bridge ever built. The idea that they forgot that basic aspect of physics and construction is simply silly. Ignoring, for a moment, that this is a highly unlikely design mistake, the most logical fix would have been to simply cut more long, narrow lines into the concrete and fill them with tar. Digging an 18-inch wide by 6-foot long by 40-inch deep hole is entirely unneeded.

OK, so if they are not for thermal expansion, what are they? Back in 1998, I was held up for hours outside of North Platte, Neb. while traffic was funneled into one lane because they were cutting 400 miles of holes in Interstate 80. It got me to thinking and I investigated off and on for the next 7 years. The breakthrough came when I made contact with an old retired buddy of mine who worked in the now defunct NRO – National Reconnaissance Office. He was trying to be cool but told me to take a close look at the hidden parts of the North American Electric Reliability Corporation (NERC). I did. It took several years of digging, and I found out NERC has its fingers in a lot of pots that most people do not know about, but when I compared their annual published budget (they are a nonprofit corporation) with budget numbers by department, I found about $300 million unaccounted for. As I dug further, I found out they get a lot of federal funding from FERC and the Department of Homeland Security (DHS). The missing money soon grew to over $900 million because much of it was "off the books".

In all this digging, I kept seeing references to Alqosh.  When I looked it up, I found it was the name of a town in northwest Iraq where it is believed that Saddam had a secret nuclear power facility.  That intelligence was proved wrong during the inspections that led up to the second Iraq war but the name kept appearing in NERC paperwork.  So I went looking again and found that it is also a derivation of an Arabic name meaning “the God of Power”.  It suddenly fell into context with the references I had been seeing.  Alqosh is not a place but the name of the project or program that had something to do with these holes that were being cut in the highway.  Now I had something to focus on. As I dug deeper, some of it by means I don’t want to admit to, I found detailed descriptions of Alqosh within NERC and its link to DoD and DHS.  Here’s what I found. 

The concrete that was poured into those holes was a special mixture that contained a high concentration of piezoelectric crystals. These are rocks (quartz), ceramics and other materials that produce electricity when they are physically compressed. The mix was enhanced with some custom-designed ceramics that also create electricity. The exact mixture is secret, but I found out that it contains berlinite, quartz, Rochelle salt, lead zirconate titanate, polyvinylidene fluoride, sodium potassium niobate and other ingredients. The mix of quartz, polymers and ceramics is unique and was designed with a very specific intent in mind. Piezoelectric materials will produce electricity when they are compressed or squeezed – this is called the direct piezoelectric effect (like a phonograph needle). But they also have exactly the opposite effect. The lead zirconate titanate crystals and other ceramics in the mix will expand and contract in the presence of electricity – this is called the reverse piezoelectric effect. This is how tiny piezoelectric speakers work.

Part of the concrete mix was designed to create electricity when compressed by a car passing over it. Some of these materials react immediately and some delay their response for up to several seconds. This creates a sort of damped wave of voltage spikes passing back and forth thru the material over a period of time. While some of this mix is creating electricity, other parts of the specially designed ceramics flex in physical size when they sense the electricity from the quartz materials. As with the quartz crystals, some of these ceramics delay their responses for up to several seconds – sort of like time-released capsules. The flexing ceramics, in turn, continue the vibrations that cause the quartz to keep creating electric pulses.

The effect is sort of like pushing a child's swing. The first push or vibration comes from the car passing. That, in turn, creates electricity that makes some of the materials flex and vibrate more. This push creates more electricity, and the cycle repeats in an escalating manner until, like the swing, the material is producing high waveforms of peak power spikes. The end result of this unique mix of chemicals, crystals, ceramics and polymers is what is called a piezoelectric transformer that uses acoustic (vibration) coupling – initiated by a passing car – to step up the generated voltages by over 1,500-to-1 at a resonance frequency of about 1 megahertz. A passing car initiates a series of high-voltage electrical pulses that develop constructive resonance with the subsequent pressures from passing cars, so that the voltage peaks of this resonance can top out at or above 12,700 volts and then taper off in a constant-frequency, decreasing-amplitude damped wave until regenerated by the next car or truck. Multiple-axle vehicles can produce powerful signals that can resonate for several minutes.
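The "child's swing" build-up can be illustrated with a few lines of arithmetic. This is only a toy model of the idea described above – the quality factor, the axle spacing and the kick amplitude are all assumed values, not anything taken from the actual mix:

```python
import math

# Toy model: each passing axle adds an in-phase impulse to a high-Q resonator;
# between axles the ring decays exponentially, so the amplitude after n axles
# is a geometric series. All numbers are illustrative assumptions.
Q = 500                        # assumed quality factor of the patch
CYCLES_BETWEEN_AXLES = 100     # assumed axle spacing, in resonance cycles
KICK_AMPLITUDE = 1.0           # assumed amplitude added per axle (arbitrary units)

decay_per_interval = math.exp(-math.pi * CYCLES_BETWEEN_AXLES / Q)

amplitude = 0.0
for axle in range(1, 21):
    amplitude = amplitude * decay_per_interval + KICK_AMPLITUDE
    if axle in (1, 2, 5, 10, 20):
        print(f"after axle {axle:2d}: amplitude {amplitude:.2f}")

# The series converges to the point where damping losses per interval
# balance the energy each axle adds.
print(f"steady-state limit: {KICK_AMPLITUDE / (1 - decay_per_interval):.2f}")
```

The printed amplitudes grow with successive kicks and then level off, which is the swing analogy in miniature.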

Once all this electricity is created, the re-bar on the bottom of the hole also has a special role to play. It contains a special spiral coil of wire hidden under an outer layer of conducting polymers. By a careful design of the coil and insulating wires, these re-bars create a simple but highly effective "resonant tank circuit". The simplest form of a tank circuit is a coil of wire and a single capacitor. The inductance of the coil and the capacitance of the capacitor determine the resonance frequency of the circuit. Every radio and every transmitter ever made has had a tank circuit in it of one sort or another. The coils of wire on the re-bar create an inductor, and the controlled conducting material in the polymer coatings creates a capacitor that is tuned to the same resonance frequency as the piezoelectric transformer, making for a highly efficient harmonic oscillator that can sustain the "ring" (series resonance voltage magnification over a protracted time domain) for several minutes even without further injection of energy. In other words, a car passing can cause one of these concrete patches to emit a powerful high-frequency signal for as much as 10 to 20 minutes, depending on the size, weight and speed of the vehicle.
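The tank-circuit math is the standard f = 1/(2π√(LC)). The component values below are assumptions I picked to land near the roughly 1 MHz resonance mentioned above; the actual re-bar coil and coating values are obviously not published:

```python
import math

# Resonant frequency of a simple LC tank: f = 1 / (2*pi*sqrt(L*C)).
L = 100e-6   # henries (assumed inductance of the wrapped re-bar coil)
C = 253e-12  # farads  (assumed capacitance of the conductive polymer coating)

f_res = 1.0 / (2 * math.pi * math.sqrt(L * C))
print(f"resonant frequency: {f_res / 1e6:.2f} MHz")   # ~1.00 MHz
```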

The final element of this system is the collection of that emitted RF energy. In some areas, such as the Washington DC beltway, there is a buried cable running parallel to the highway that is tuned to receive this electrical energy and pass it into special substations and junction boxes that integrate the power into the existing grid. These special substations and junction boxes can also divert both this piezoelectric energy and grid power into lines that connect directly to government facilities. In other, more rural areas, the power collection is done by a receiver that is hiding in plain sight. Almost all power lines have one or more heavy cables that run along the uppermost portions of the poles or towers. These topmost cables are not connected to the power lines; they are most often used for lightning protection and are tied to earth ground.

Along those power lines that parallel highways that have been "fixed" with these piezoelectric generators, this line has been replaced with a specially designed cable that acts as a very efficient tuned antenna to gather the EMF and RF energy radiated by the modified highway re-bar transmitters. This special cable is able to pick up the radiated piezoelectric energy from distances as far away as 1 mile. In a few places, this specialized cable has been incorporated into the fence that lines both sides of most interstate highways. Whether by buried cable, power line antenna or fence-mounted collector, the thousands of miles of these piezoelectric generators pump their power into a nationwide grid of electric power without anyone being aware of it. The combined effect of the piezoelectric concrete mix, the re-bar lattice and the tuned resonant pickup antennas is to create a highly efficient RF energy transmitter and receiver with a power output that is directly dependent upon the vehicle traffic on the highway. For instance, the power currently created by rush hour traffic along the Washington DC beltway is unbelievable. It is the most effective and efficient generator in the US, creating as much as 1.6 megawatts from the inner beltway alone.

The total amount of power being created nationwide is a secret, but a report that circulated within DARPA following the 9/11 attacks said that 67 hidden government bunker facilities were brought online and fully powered in preparation to receive evacuated government personnel. The report, which was focused on continuity of services, mentioned that all 67 facilities, with a total demand of an estimated 345 megawatts, "used 9% of the available power of Alqosh". By extrapolation, that means that the Alqosh grid can create about 3,800 megawatts, or about the power of two large nuclear power plants.

So why is it secret? Three reasons. (1) The government doesn't want the bad guys or the American public to know that we can create power from our highways. They don't want the bad guys to know because they don't want it to become a target. They don't want the general public to know because they frankly do not want to share any of this power with the public – even if commercial utility power rates get extraordinarily high and fossil fuel or coal pollution becomes a major problem.

(2) Some of the materials in the concrete mix are not exactly healthy for the environment, not to mention that millions of people have had their travel plans messed up by the highway construction. Rain runoff and mixtures with hydrocarbons are known to create some pretty powerful toxins – in relatively small quantities, but the effects of long-term exposure are unknown.

(3) It's not done yet. The system is still growing but it is far from complete. A recent contract was released by NERC to install "thermal expansion" sections into the runways of the 24 largest airports in the US. There is also a plan to expand into every railroad, metro, commuter train, subway and freight train system in the US. A collaboration between DARPA, NERC and DHS recently produced a report that targets 2025 to complete the Alqosh grid with a total capacity of 26,000 megawatts of generating power.

The task of balancing the high power needs of the government with a low cost while keeping it all hidden from everyone has been accomplished. The cost has been buried in thousands of small highway and power line projects spread out over the past 10 years. The power being created will keep all 140 of the hidden underground bunkers fully powered for weeks or months after a natural disaster or terrorists have destroyed the utilities. The power your government uses to run its lights and toasters during a serious national crisis may just be power that you created by evacuating the city where the crisis began.

Big Brother is Watching

And He knows Everything You have Ever Done! Sometimes our paranoid government wants to do things that technology does not allow or that they do not know about yet. As soon as they find out, or the technology is developed, they do it. A case in point is the paranoia that followed 11 Sept 2001 (9/11), in which Cheney and Bush wanted to be able to track and monitor every person in the US. There were immediate efforts to do this with the so-called Patriot Act, which bypassed a lot of constitutional and existing laws and rights – like FISA. They also instructed NSA to monitor all radio and phone traffic, which was also illegal and against the charter of NSA. Lesser known was the hacking into computer databases and the monitoring of emails by NSA computers. They have computers that can download and read every email on every circuit from every Internet user, as well as every form of voice communication.

Such claims of being able to track everyone, everywhere have been made before, and it seems that lots of people simply don't believe that level of monitoring is possible. Well, I'm here to tell you that it not only is possible, but it is all automated, and you can read all about the tool that started it all online. Look up "starlight" in combination with "PNNL" on Google and you will find references to a software program that was the first generation of the kind of tool I am talking about.

This massive amount of communications data is screened by a program called STARLIGHT, which was created by the CIA and the Army and a team of contractors led by Battelle's Pacific Northwest National Lab (PNNL). It does two things that very few other programs can do. It can process free-form text and it can display complex queries in visual 3-D outputs.

The free-form text processing means that it can read text in its natural form, as it is spoken, written in letters and emails, and printed or published in documents. For a database program to be able to do this as easily and as fast as it would for the formally defined records and fields of a relational database is a remarkable design achievement. Understand, this is not just a word search – although that is part of it. It is not just a text-scanning tool; it can treat the text of a book as if it were an interlinked, indexed and cataloged database in which it can recall every aspect of the book (data). It can associate and find any word or phrase in relation to any parameter you can think of related to the book – page numbers, nearby words, word use per page, chapter or book, etc. By using the most sophisticated voice-to-text processing, it can perform this kind of expansive searching on everything written or spoken, emailed, texted or said on cell phones or landline phones in the US!

The visual presentation of that data is the key to being able to use it without information overload and to having the software prioritize the data for you. It does this by translating the database query parameters into colors and dimensional elements of a 3-D display. To view this data, you have to put on a special set of glasses similar to the ones that put a tiny TV screen in front of each eye. Such eye-mounted viewing is available for watching video and TV – giving the impression you are looking at a 60-inch TV screen from 5 feet away. In the case of STARLIGHT, it gives a completely 3-D effect and more. It can sense which way you are looking, so it shows you a full 3-D environment that can be expanded into any size the viewer wants. And then they add interactive elements.
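Before getting to those interactive elements, the "book as a database" indexing described above is easy to picture with a toy example. This is my own sketch in Python, not anything from STARLIGHT; the sample pages and the punctuation handling are simplifications.

```python
from collections import defaultdict

PUNCT = ".,;:!?\"'"

# Toy free-form text index: every word is cataloged by page and position, so
# page numbers, counts and nearby words can be recalled without rescanning.
def build_index(pages):
    index = defaultdict(list)                  # word -> [(page, position), ...]
    for page_no, page in enumerate(pages, start=1):
        for pos, word in enumerate(page.lower().split()):
            index[word.strip(PUNCT)].append((page_no, pos))
    return index

def pages_for(index, word):
    return sorted({page for page, _ in index[word.lower()]})

def nearby_words(index, pages, word, window=3):
    """Words appearing within `window` positions of `word` anywhere it occurs."""
    hits = set()
    for page, pos in index[word.lower()]:
        tokens = [t.strip(PUNCT) for t in pages[page - 1].lower().split()]
        hits.update(tokens[max(0, pos - window): pos + window + 1])
    return hits - {word.lower()}

pages = ["The plan mentions the bridge and the harbor.",
         "A second message repeats the word bridge twice: bridge."]
idx = build_index(pages)
print(pages_for(idx, "bridge"))        # [1, 2]
print(len(idx["bridge"]))              # 3 occurrences in total
print(nearby_words(idx, pages, "harbor"))
```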
You can put on a special glove that can be seen in the projected image in front of your eyes. As you move this glove in the 3-D space you are in, it moves in the 3-D computer images that you see in your binocular eye-mounted screens. Plus, this glove can interact with the projected data elements.

Let's see how this might work with a simple example: the first civilian application of STARLIGHT was for the FAA, to analyze private aircraft crashes over a 10-year period. Every scrap of information was scanned in from accident reports, FAA investigations and police records – almost all of it in free-form text. This included full specs on the aircraft, passengers, pilot, type of flight plan (IFR, VFR), etc. It also entered geospatial data that listed departure and destination airports, peak flight plan altitude, elevation of impact, distance and heading data. It also entered temporal data for the times of day, week and year that each event happened. This was hundreds of thousands of documents that would have taken years to key into a computer if a conventional database were used. Instead, high-speed scanners were used that read in reports at a rate of 200 double-sided pages per minute. Using a half dozen of these scanners completed the data entry in less than one month.

The operator then assigned colors to a variety of ranges of data. For instance, he first assigned red and blue to male and female pilots and then looked at the data projected on a map. What popped up were hundreds of mostly red (male) dots spread out over the entire US map. Not real helpful. Next he assigned a spread of colors to all the makes of aircraft – Cessna, Beechcraft, etc. Now all the dots changed to a rainbow of colors with no particular concentration of any given color in any given geographic area. Next he assigned colors to hours of the day – doing 12 hours at a time – midnight to noon and then noon to midnight. Now something interesting came up. The colors assigned to 6 AM and 6 PM (green) and the shades of green just before and after 6 AM or 6 PM were dominant on the map. This meant that the majority of the accidents happened around dusk or dawn. Next the operator assigned colors to distances from the departing airport – red being within 5 miles, orange 5 to 10 miles…and so on, with blue being the longest (over 100 miles). Again, a surprise in the image. The map showed mostly red or blue with very few in between. When he refined the query so that red was within 5 miles of either the departing or destination airport, almost the whole map was red.

Using these simple techniques, an operator was able to determine in a matter of a few hours that 87% of all private aircraft accidents happen within 5 miles of the takeoff or landing runway, 73% happen in the twilight hours of dawn or dusk, 77% happen with the landing gear lowered or with the landing lights on, and 61% of the pilots reported being confused by ground lights. This gave the FAA the information it needed to improve approach lighting and navigation aids in the terminal control areas (TCAs) of private aircraft airports. This was a very simple application that used a limited number of visual parameters at a time. But STARLIGHT is capable of so much more. It can assign things like direction and length of a vector, color of the line or tip, curvature, width and taper to various elements of a search. It can give shape to one result and a different shape to another result.
This gives significance to "seeing" a cube versus a sphere or to seeing rounded corners on a flat surface instead of square corners on an egg-shaped surface. Everything visual can have meaning. Having 20+ variables at a time that can be interlaced with geospatial and temporal (historical) parameters can allow the program to search an incredible amount of data. Since the operator is looking for trends, anomalies and outliers, the visual representation of the data is ideal for spotting them without the operator actually scanning the data itself. Since the operator is visually seeing an image that is devoid of the details of numbers or words, he can easily spot some aspect of the image that warrants a closer look.

In each of these trial queries, the operator can use his gloved hand to point to any given dot and call up the original source of the information in the form of a scanned image of the accident report. He can also touch virtual screen elements to bring out other data or query elements. For instance, he can merge two queries to see how many accidents near airports (red dots) had more than two passengers or were single engine aircraft, etc. Someone looking on would see a guy with weird glasses waving his hand in the air but in his eyes, he is pressing buttons, rotating knobs and selecting colors and shapes to alter his 3-D view of the data.

In its use at NSA, they add one other interesting capability: pattern recognition. It can automatically find patterns in the data that would be impossible for any real person to find by looking at the data. For instance, they put in a long list of words that are linked to risk assessments – such as plutonium, bomb, kill, jihad, etc. Then they let it search for patterns. Suppose there are dozens of phone calls being made to coordinate an attack but the callers are from all over the US. Every caller is calling someone different so no one number or caller can be linked to a lot of risk words. STARLIGHT can collate these calls and find the common linkage between them, and then it can track the calls, callers and discussions in all other media forms.

Now imagine the list of risk words and phrases to be tens of thousands of words long. It includes code words and words used in other languages. It can include consideration for the source or destination of the call – from public phones or unregistered cell phones. It can link the call to a geographic location within a few feet and then track the caller in all subsequent calls. It can use voice print technology to match calls made on different devices (radio, CB, cell phone, landline, VOIP, etc.).

This is still just a sample of the possibilities. STARLIGHT was the first generation and was only as good as the data that was fed into it through scanned documents and other databases of information. A later version, code named Quasar, was created that used advanced data mining and ERP (enterprise resource planning) system architecture that integrated the direct feed from information gathering resources. For instance, the old STARLIGHT system had to feed recordings of phone calls into a speech-to-text processor and then the text data that was created was fed into STARLIGHT. In the Quasar system, the voice monitoring equipment (radios, cell phones, landlines) is fed directly into Quasar, as is the direct feed of emails, telegrams, text messages, Internet traffic, etc.

So does the government have the ability to track you? Absolutely! Are they? Absolutely! But wait, there's more!
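As a rough illustration of the collation idea just described, linking otherwise unrelated calls through shared risk words, here is a minimal sketch. The call records and the word list are invented for the example; a real system would obviously work at a vastly larger scale and on transcribed audio rather than tidy strings.

```python
from collections import defaultdict

RISK_WORDS = {"plutonium", "bomb", "detonator", "jihad"}

# Hypothetical call transcripts keyed by caller id.
calls = {
    "+1-202-555-0101": "pick up the detonator parts before friday",
    "+1-702-555-0188": "the bomb casing is ready, waiting on the detonator",
    "+1-503-555-0170": "call me about the fishing trip",
}

# Group callers by each risk word they used.
callers_by_word = defaultdict(set)
for caller, text in calls.items():
    for word in RISK_WORDS & set(text.lower().split()):
        callers_by_word[word].add(caller)

# Any word used by two or more different callers is a common linkage worth a closer look.
for word, callers in callers_by_word.items():
    if len(callers) > 1:
        print(f"linkage on '{word}': {sorted(callers)}")
```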
Above, I said that Quasar was a "later version". It's not the latest version. Thanks to the Patriot Act and Presidential Orders on warrantless searches and the ability to hack into any database, NSA now can do so much more. This newer system is miles ahead of the relatively well known Echelon program of information gathering (which was dead even before it became widely known). It is also beyond another older program called Total Information Awareness (TIA). This new capability is made possible by the bank of NSA Cray computers and memory storage that are said to make Google's entire system look like an abacus, combined with the latest integration (ERP) software and the latest pattern recognition and visual data representation systems.

Added to all of the Internet and phone monitoring and screening are two more additions in a new program called "Kontur". Kontur is the Danish word for Profile. You will see why in a moment. Kontur adds geospatial monitoring of a person's location to their database. Since 2005, every cell phone broadcasts its GPS location at the beginning of every transmission as well as at regular intervals, even when you are not using it to make a call. This was mandated by the Feds supposedly to assist in 911 emergency calls but the real motive was to be able to track people's locations at all times. For those few that are still using the older model cell phones, they employ "tower tracking", which uses the relative signal strength and timing of the cell phone signal reaching each of several cell phone towers to pinpoint a person within a few feet.

A holdover from the Quasar program was the tracking of commercial data, which included every purchase made by credit cards or any purchase where a customer discount card is used – like at grocery stores. This not only gives the Feds an idea of a person's lifestyle and income but by recording what they buy, they can infer other behaviors. When you combine cell phone and purchase tracking with the ability to track other forms of transactions – like banking, doctors, insurance, police and public records – there are relatively few gaps in what they can know about you.

Kontur also mixed in something called geofencing that allows the government to create digital virtual fences around anything they want. Then when anyone crosses this virtual fence, they can be tracked. For instance, there is a virtual fence around every government building in Washington DC. Using predictive automated behavior monitoring and cohesion assessment software combined with location monitoring, geofencing and sophisticated social behavior modeling, pattern mining and inference, they are able to recognize patterns of people's movements and actions as being threatening. Several would-be shooters and bombers have been stopped using this equipment.

To talk about the "Profile" aspect of Kontur, we must first talk about why or how it is possible, because it became possible only when the Feds were able to create very, very large databases of information and still be able to make effective use of that data. It took NSA 35 years of computer use to get to the point of using a terabyte (10^12 bytes) of data. That was back in 1990 using ferrite core memory. It took 10 more years to get to a petabyte (10^15 bytes) of storage – that was in early 2001 using 14-inch videodisks and RAID banks of hard drives. It took four more years to create and make use of an exabyte (10^18 bytes) of storage.
With the advent of quantum memory using gradient echo and EIT (electromagnetically induced transparency), the NSA computers now have the capacity to store and rapidly search a yottabyte (10^24 bytes) of data and expect to be able to raise that to 1,000 yottabytes of data within two years. To search this much data, they use a bank of Cray XT Jaguar computers that do nothing but read and write to and from the QMEM – quantum memory. The look-ahead and read-ahead capabilities are possible because of the massively parallel processing of a bank of other Crays that gives an effective speed of about 270 petaflops. Speeds are increasing at NSA at a rate of about 1 petaflop every two to four weeks. This kind of speed is necessary for things like pattern recognition and making use of the massive profile database of Kontur.

In late 2006, it was decided that NSA and the rest of the intelligence and right wing government agencies would stop this idea of real-time monitoring and begin developing a historical record of what everyone does. Being able to search historical data was seen as essential for back-tracking a person's movements to find out what he has been doing and whom he has been seeing or talking with. This was so that no one would ever again accuse them of not "connecting the dots". But that means what EVERYONE does! As you have seen from the above description, they already can track your movements and all your commercial activities as well as what you say on phones or emails, what you buy and what you watch on TV or listen to on the radio. The difference now is that they save this data in a profile about you. All of that and more.

Using geofencing, they have marked out millions of locations around the world, including obvious things like stores that sell pornography, guns, chemicals or lab equipment. Geofenced locations include churches and organizations like Greenpeace and Amnesty International. They have moving geofences around people they are tracking, like terrorists, but also political opponents, left wing radio and TV personalities and leaders of social movements and churches. If you enter their personal space – close enough to talk – then you are flagged and then you are geofenced and tracked. If your income level is low and you travel to the rich side of town, you are flagged. If you are rich and travel to the poor side of town, you are flagged. If you buy a gun or ammo and cross the wrong geofence, you will be followed. The pattern recognition of Kontur might match something you said in an email with something you bought and somewhere you drove in your car to determine you are a threat. Kontur is watching and recording your entire life.

There is only one limitation to the system right now. The availability of soldiers or "men in black" to follow up on people that have been flagged is limited, so they are prioritizing whom they act upon. You are still flagged and recorded but they are only acting on the ones that are judged to be a serious threat now. It is only a matter of time before they can find a way to reach out to anyone they want and curb or destroy them. It might come in the form of a government mandated electronic tag that is inserted under the skin or implanted at birth. They have been testing these devices on animals under the guise of tracking and identification of lost pets. They have tried twice to introduce these to all the people in the military. They have also tried to justify putting them into kids for "safety".
They are still pushing them for use in medical monitoring. Perhaps this will take the form of a nanobot. If they are successful in getting the population to accept these devices and they then determine you are a risk, they simply deactivate you by remotely popping open a poison capsule using a radio signal. Such a device might be totally passive in a person that is not a threat, but it can be programmed to be lethal, to inhibit the motor-neuron system, or to otherwise disable anyone deemed to be high-risk. Watch out for things like this. It's the next thing they will do. You can count on it.
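One footnote on the geofencing described above: the core of it is nothing exotic, just a point-in-polygon test run against a stream of position reports. A minimal sketch follows, with made-up fence coordinates and a made-up position report; real systems add map projections, buffering and moving fences, but the basic test is this simple.

```python
def inside(point, polygon):
    """Ray-casting point-in-polygon test: cast a ray to the right and count edge crossings."""
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge (x1,y1)-(x2,y2) straddle the horizontal line at y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                crossings += 1
    return crossings % 2 == 1   # odd number of crossings = inside the fence

# Hypothetical geofence (lon, lat) around a building, and one phone position report.
fence = [(-77.037, 38.897), (-77.033, 38.897), (-77.033, 38.900), (-77.037, 38.900)]
report = (-77.035, 38.898)
print("crossed geofence" if inside(report, fence) else "outside")
```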

Plato: Unlimited Energy – Here Already!

Plato: Unlimited Energy

 

If you are a reader of my blog, you know about Plato. It is a software program that I have been working on since the late 1980's that does what I call "concept searches". The complete description of Plato is in another story on this blog but the short of it is that it will do web searches for complex interlinked and related or supporting data that form the basis for a conceptual idea. I developed Plato using a variety of techniques including natural language queries, thesaurus lookups, pattern recognition, morphology, logic and artificial intelligence. It is able to accept complex natural language questions, search for real or possible solutions and present the results in a form that logically justifies and validates the solution. Its real strength is that it can find solutions or possibilities that don't yet exist or have not yet been discovered. I could go on and on about all the wild and weird stuff I have used Plato for, but this story is about a recent search for an alternative energy source…and Plato found one.

As a research scientist, I have done a considerable amount of R&D in various fields of energy production and alternate energy sources. Since my retirement, I have been busy doing other things and have not kept up with the latest, so I decided to let Plato do a search for me to find out the latest state-of-the-art in alternate energy and the status of fusion power. What Plato came back with is a huge list of references in support of a source of energy that is being used by the government but is being withheld from the public. This energy source is technically complex but is far more powerful than anything being used today short of the largest nuclear power plants. I have read over most of what Plato found and am convinced that this source of power exists and is being used, but is being actively suppressed by our government. Here is the truth:

On January 25, 1999 a rogue physicist researcher at the University of Texas named Carl Collins claimed to have achieved stimulated decays of nuclear isomers using a second-hand dental x-ray machine. As early as 1988, Collins was saying that this was possible but it took 11 years to get the funding and lab work to do it. By then, it was confirmed by several labs including Dr. Belic at the Stuttgart Nuclear Physics Group. Collins' results were published in the peer-reviewed journal Physical Review Letters. The science of this is complex but what it amounts to is a kind of cold fusion. Nuclear isomers are atoms with a metastable nucleus. That means that when they are created in certain radioactive materials, the protons and neutrons (nucleons) in the nucleus of the atom are bonded or pooled together in what is called an excited state.

An analogue would be like stacking balls into a pyramid. It took energy to get them into that natural state but what Collins found is that it takes relatively little energy to destabilize this stack and release lots of energy. Hafnium and Tantalum are two naturally occurring metastable elements that can be triggered to release their energy with relatively little external excitation.

Hafnium, for instance, releases a photon with an energy of 75 keV (75,000 electron volts) and one gram produces 1,330 megajoules of energy – the equivalent of about 700 pounds of TNT. A five-pound ball is said to be able to create a two-kiloton blast – the equivalent of 4,000,000 pounds of TNT. A special type of Hafnium called Hf-178-m2 is capable of producing energy in the exawatt range, that is 10,000,000,000,000,000,000 (10^18) watts! This is far more than all the energy created by all the nuclear plants in the US. As a comparison, the largest energy producer in the world today is the Large Hadron Collider (LHC) near Geneva, which cost more than $10 billion and can produce a beam estimated at 10 trillion watts (10^12 watts), but that is power that lasts for only about 30 nanoseconds (billionths of a second).

Imagine being able to create 1 million (10^6) times that energy level and sustain it indefinitely. We actually don't have a power grid capable of handling that, but because we are talking about a generator that might be the size of a small house, this technology could be inexpensively replicated all over the US or the world to deliver as much power as needed.
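As a sanity check on the per-gram figure quoted above, here is a back-of-the-envelope calculation. It assumes the commonly cited stored energy of roughly 2.45 MeV per Hf-178-m2 nucleus, a number that does not appear in the sources listed here, so treat it as an assumption rather than a quoted fact.

```python
AVOGADRO = 6.022e23        # atoms per mole
MOLAR_MASS_HF178 = 178.0   # grams per mole
EV_TO_J = 1.602e-19        # joules per electron volt
ISOMER_ENERGY_EV = 2.45e6  # ~2.45 MeV stored per Hf-178-m2 nucleus (assumed value)
TNT_J_PER_LB = 4.184e9 / 2000.0   # 1 ton of TNT = 4.184 GJ, 2,000 lb per ton

atoms_per_gram = AVOGADRO / MOLAR_MASS_HF178
joules_per_gram = atoms_per_gram * ISOMER_ENERGY_EV * EV_TO_J
print(f"{joules_per_gram / 1e6:.0f} MJ per gram")              # ~1,330 MJ, matching the figure above
print(f"{joules_per_gram / TNT_J_PER_LB:.0f} lb TNT per gram") # roughly 600-700 lb of TNT
```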

These are, of course, calculated estimates based on extrapolation of Collins' initial work and that of the follow-on experiments, but not one scientist has put forth a single peer reviewed paper that disputes these estimates or the viability of the entire experiment. It is also obvious that the mechanism of excitation would have to be larger than a dental x-ray machine in order to get 10^18 watts out of it. In fact, when Brookhaven National Lab conducted its Triggering Isomer Proof (TRIP) test, it used their National Synchrotron Light Source (NSLS), an intense x-ray source, as the excitation.

Obviously this was met with a lot of critical reviews and open hostility from the world of physics. To the critics, this was just another "Cold Fusion" fiasco, and that one was still fresh in everyone's minds. It was in 1989 that Pons and Fleischmann claimed to have created fusion in a lab at temperatures well below what was then thought to be necessary. It took just months to prove them wrong and the whole idea of cold fusion and unlimited energy was placed right next to astrology, perpetual motion and pet rocks.

Now Collins was claiming that he had done it again – a tiny amount of energy in and a lot of energy out. He was not reporting the microscopic "indications of excess energy" that Pons and Fleischmann claimed. Collins was saying he got large amounts of excess energy (more energy out than went in), many orders of magnitude above what Pons and Fleischmann claimed.

Dozens of labs across the world began to try to verify or duplicate his results. The biggest problem was getting hold of the Hafnium needed to do the experiments – it is expensive and hard to come by, so it took mostly government sponsored studies to be able to afford it. Surprisingly, some confirmed it, some had mixed results and some discredited him.

In the US, DARPA was very interested because this had the potential for being a serious weapon that would give us a nuclear bomb type explosion and power but would not violate the worldwide ban on nuclear weapons. The US Navy was very interested in it because it had the potential for being not only a warhead but also a new and better form of power for their nuclear powered fleet of ships and subs.

By 2004, the controversy over whether it was viable or not was still raging so DARPA, which had funded some of the labs that had gotten contradictory results, decided to have a final test. They called it the TRiggering Isomer Proof (TRIP) test and it was funded to be done at Brookhaven National Lab.

This had created so much news interest that everyone wanted to hear the results. NASA, Navy, Dept. of Energy (DOE), Dept of Defense (DoD), NRL, Defense Threat Reduction Agency, State Department, Defense Intelligence Agency (DIA), Argonne Labs, Arms Control and Disarmament Agency (ACDA), Los Alamos, MIT Radiation Lab, MITRE, JASON, and dozens of others were standing in line to hear the outcome of this test being conducted by DARPA.

So what happened in the test? No one knows. The test was conducted and DARPA put the lockdown on every scrap of news about the results. In fact, since that test, they have shut down all other government funded contracts in civilian labs on isomer triggering. The only break in that cover has been a statement from the senior-most DOE scientist involved, Dr. Ehsan Khan, when he made this statement:

“TRIP had been so successful that an independent evaluation board has recommended further research….with only the most seasoned and outstanding individuals allowed to be engaged”.

There has been no peer review of the TRIP report. It has been seen by a select group of scientists but no one else has leaked anything about it. What is even more astounding is that none of those many other government agencies and organizations have raised the issue. In fact, any serious inquiry into the status of isomer triggering research is met with closed doors, misdirection or outright hostility. The government has pushed it almost entirely behind the black curtain of black projects. Everything related to this subject is now either classified Top Secret or is openly and outwardly discredited and denounced as nonsense.

This has not, however, stopped other nations or other civilian labs and companies from looking into it. But even here, they cannot openly pursue isomer triggering or cold fusion. Now research into such subjects is called "low-energy nuclear reactions" (LENR) or "chemically assisted nuclear reactions" (CANR). Success in the experiments of these researchers is measured in the creation of "excess heat", meaning that the experiment has created more (excess) energy than was put into it. Plato has found that some of the people and labs that have achieved this level of success include:

Lab or company – Researcher

University of Osaka, Japan – Arata

ENEA Frascati, Rome, Italy – Vittorio Violante

Hokkaido University, Japan – Mizuno

Energetic Technology, LLC, Omer, Israel – Shaoul Lesin

Portland State University, USA – Dash

Jet Thermal Products, Inc., USA – Swartz

SRI, USA – McKubre

Lattice Energy, Inc., USA – E. Storms

In addition, the British and Russians have both published papers and intelligence reports indicate they may both be working on a TRIP bomb. The British have a group called the Atomic Weapons Establishment (AWE) that has developed a technique called Nuclear Excitation by Electron Transition and are actively seeking production solutions. The Russians may have created an entire isolated research center just for studying TRIP for both weapons and energy sources.

In addition to the obvious use of such a power source to allow us to wean off of fossil fuels, there are lots of other motivations for seeking a high density, low cost power source: global warming, desalination, robotics, mass transportation, long distance air travel, space exploration, etc.

These applications are normal and common sense uses, but what application might motivate our government to suppress the news coverage of further research and to wage a disinformation and discrediting campaign on anyone that works on this subject? One obvious answer is its potential as a weapon but since that also is well known and common sense, there must be some other reason that the government does not want this to be pursued. What that is will not be found by searching for it. If it is a black project, it will not have internet news reports on it, but it might have a combined group of indicators and seemingly disconnected facts that form a pattern when viewed in light of some common motive or cause. Doing that kind of searching is precisely what Plato was designed to do.

What my Plato program discovered is that there are a number of unexplained events and sightings that have a common thread. These events and sightings are all at the fringes of science or are outright science fiction if you consider current common knowledge of science or listen to the government denounce and discredit any of the observers. Things like UFOs that move fast but make no noise, space vehicles that can approach the speed of light, underwater vessels that have been reported to travel faster than the fastest surface ships and beam weapons (light, RF, rail) that can destroy objects as far away as on the moon. What they have in common is that if you consider that there is a high density, compact source of extremely high-powered energy, then these fantastic sightings suddenly become quite plausible.

A power source that can create 10 TeV (tera-electron Volts) is well within the realm of possibility for an isomer-triggered device and is powerful enough to create and/or control gravitons and the Higgs Boson and the Higgs field. See my other blog story on travel faster than light and on dark energy and you will see that if you have enough power, you can manipulate the most fundamental particles and forces of nature to include gravity, mass and even time.

If you can control that much power, you can create particle beam weapons, lasers and rail guns that can penetrate anything – even miles of earth or ocean. If you can create enough energy – about 15 TeV, you can create a negative graviton – essentially negative gravity – which can be used to move an aircraft with no sounds at supersonic speeds. It will also allow you to break all the rules of normal aerodynamics and create aircraft that are very large, in odd shapes (like triangles and arcs) and still be able to travel slowly. Collins estimated that a full-scale isomer triggered generator could generate power in the 1,000 TeV range when combined with the proper magnetic infrastructure of a Collider like the LHC.

Plato found evidence that this is exactly what is happening. The possibility that all of these sightings have this one single thread in common by coincidence is beyond logic or probability. The fact that these sightings and events have occurred by the hundreds in just the past few years – since the DARPA TRIP test – is way beyond coincidence. It is clear that DARPA put the wraps on this technology because of its potential as a weapon and as an unlimited high-density power source.

The fact that this has been kept hushed up is mostly due to the impact it would have on the economies of the world if we were suddenly given unlimited power that was not based on fossil fuels, coal or hydroelectric power. Imagine the instant availability of all of the electricity that you could use at next to nothing in cost. Markets would collapse in the wake of drops in everything related to oil, gas and coal. That is not a desirable outcome when we are in such a bad financial recession already.

Plato comes up with some wild ideas sometimes and I often check them out to see if they really are true. I was given perhaps 75 references, of which I have listed only a few in this article, but enough that you can see that they are all there and true. I encourage you to search for all the key words, people and labs listed here. Prove this to yourself – it's all true.

NASA Astrophysics Data System (ADS): Physical Review Letters, Vol. 99, Issue 17, id. 172502, titled "Isomer Triggering via Nuclear Excitation by Electron Capture" (NEEC), reported confirmed low-energy triggering with high energy yields.

Brookhaven National Lab conducted a Triggering Isomer Proof (TRIP) test using their National Synchrotron Light Source (NSLS) in which they reported: "A successfully independent confirmation of this valuable scientific achievement has been made … and presented in a Sandia Report (SAND2007-2690, January 2008)." This was funded by DARPA, which pulled the funding right after the test.

FCC Warning: Anomalous Content

SECRET

Compartment Coded: Megaphone

 

 

FEDERAL COMMUNICATIONS COMMISSION

Enforcement Bureau

Content Enforcement Division (CED)

 

 

*****************************************************

NOTICE

*****************************************************

FCC Violation Notice for the Executive Office of the President

*****************************************************

 

Continuous Custody Courier Delivery

April 20, 2009

Subject: Commercial Broadcast Radio Stations KARB, KARV, KBBR, KCRB, et al

Commercial Radio License CB8I: Warning Notice, Case #EB-2008-2997-RB

Dear Sir:

On August 1, 2007, the FCC/CED discovered a Part 15 violation regarding inappropriate content within the assigned bands of operation of 173 commercial AM and FM broadcast radio stations located in every State. The nature of the inappropriate content appears to be an extremely sophisticated subliminal message that is undetectable by routine spectrum analysis because it is dynamically created by the beat frequencies of the broadcast. This means that any specific analysis of broadcast content will show no embedded or side-band signals; however, the audio modulation of the received broadcast at the receiver's speaker creates an artificial but highly effective analog influence upon and within any listener.

The signal appears as a result of the broadcast creating binaural beat tones inside the superior olivary nucleus of the brain stem. Preliminary research has shown that these temporal modulations are creating multiple brainwave synchronizations below the conscious perception threshold and that they are having measurable effects (see below) on the listeners in each of the radio broadcast regions. The signal is not a voice, per se, but rather it has a direct and immediate influence on the inferior colliculus neurons internal to the brain. The effect of this influence has been measured as activation of the primary sensorimotor and cingulate areas, bilateral opercular premotor areas, bilateral SII, ventral prefrontal cortex and, subcortically, the anterior insula, putamen and thalamus. These areas of the brain and others affected include control of motor reflexes, hunger, vision, decision-making, body temperature control, temperament, smell and memory.

Collaboration with NSA and NRL has provided us with a complete analysis of the signal, but this has been of only limited help with the cause and effect on the listening public. At the suggestion of Dr. Wayne Sponson at NSA, the FCC/CED contacted the Sensory Exploitation Division (SED) of NIH, at Fort Detrick, Maryland. We were delayed for 4 weeks in order to process clearances for two members of the FCC/CED (myself and Dr. Edward Willingsley).

In late February, we were able to obtain the following information. The NIH/SED has been working on binaural beats to explore the phenomenon called the Frequency Following Response, or entrainment. They have been highly successful in this field of study; however, their efforts have focused on the creation of infrasound induced beat frequencies to entrain brain waves. This has been shown to impact the delta, theta, alpha, beta and gamma brainwaves. By contrast, the contaminated signals from these radio stations are created using sounds well above the infrasound range and well within the range of normal music listening.
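For readers unfamiliar with the mechanism, a binaural beat is simply two steady tones, one per ear, whose small frequency difference is perceived as a slow beat at that difference. A minimal sketch of generating such a stereo signal follows; the carrier and beat frequencies are arbitrary illustrative values, not anything taken from the monitored broadcasts.

```python
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100
DURATION_S = 10
CARRIER_HZ = 440.0   # tone in the left ear
BEAT_HZ = 10.0       # perceived beat = right-ear tone minus left-ear tone (alpha range)

t = np.linspace(0, DURATION_S, SAMPLE_RATE * DURATION_S, endpoint=False)
left = np.sin(2 * np.pi * CARRIER_HZ * t)
right = np.sin(2 * np.pi * (CARRIER_HZ + BEAT_HZ) * t)

# Interleave into a stereo file; the 10 Hz "beat" exists only in the listener's perception,
# not as a separate component in either channel's spectrum.
stereo = np.stack([left, right], axis=1).astype(np.float32)
wavfile.write("binaural_10hz.wav", SAMPLE_RATE, stereo)
```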

Dr. Alan Cupfer from NIH's Neuroscience Research confirmed that entrainment using binaural beat stimulation (or using light) has been shown to be quite effective in affecting dream states, focus, anxiety, addiction, attention, relaxation, learning, mood and performance. He also admitted that by first achieving brain synchronization and then applying entrainment to effect constructive or destructive interference with brain frequencies, it is possible to significantly enhance or suppress these brain functions.

NSA computers discovered these signals during their routine monitoring of the broad frequency spectrum of all transmissions. The computers have been recording these signals as an automatic function of finding an anomalous signal; however, because no specific threatening content was recognized by the computers, it was not flagged to any human operators or analysts at NSA. This is a procedural error that has been corrected.

Once the FCC/CED discovered the nature of these anomalous signals in August 2008 and coordinated with NSA, NSA provided our office with archived recordings that date back to 2001 and show an increasing coverage of broadcast stations from the first one found in California to the present 173. They seem to be increasing at a rate of about two per month. It is estimated that approximately 61 million people are currently within the broadcast coverage areas of these stations.

In our two-month exploration of what, if any, impact or objective these broadcasts are having on the listening audience, we have discovered the following:

  1. The subliminal signals appear to be constantly varying at each station and between stations, even when the same music or other recordings are being played. It appears that the anomalous signals are being injected into the broadcast systems at each station's transmission facility from an exterior source, but the means and mechanism of this signal injection have not been determined yet. Until they are, we can't stop it.
  2. The anomalous signals can be distinguished from non-contaminated signals by means of signal analysis comparisons before and after the use of adaptive filtering. The detection uses recursive least squares (RLS) and least mean squares (LMS) adaptive filters in an automated sweep variable filter that seeks a zero cost function (error signal) when compared to a reference baseline (see the sketch after this list). When this computed correction factor is non-zero, the NSA computers determine that the signal is contaminated and it is recorded. These kinds of finite impulse response (FIR) filter structures have proven to be effective at detecting changes to the baseline reference as small as one cycle at one gigahertz over a period of 24 hours.
  3. Despite being able to detect and isolate the anomalous signal, the combined efforts of NSA, FCC, NIH and NRL have been unable to decode the signal with respect to intelligent content. However, Dr. Tanya Huber and Joel Shiv, two researchers from the National Institute of Standards and Technology (NIST), suggested that by examining the non-conscious behavior of the listeners against a baseline, there might be a correlation between signal content and responses. These two researchers have been studying the psychological manipulation of consumer judgments, behavior and motivation since 2004.
  4. Conducting the first macro-survey of listener behavior in each of the broadcast areas initially yielded no anomalous behavior, but when micro-communities and community activities were individually examined, some conspicuous changes were noted.
  5. In Mesquite, NV, a change in the recorded anomalous signal coincided with a controversial referendum by the voters on the long-term problems with the Oasis Golf Club. This referendum was notable because it unexpectedly and nearly unanimously reversed a voter survey taken the previous day.
  6. La Pine, OR, a small farm community with a low-power publicly owned station, experienced an uncommonly large increase in the sale of over-the-counter non-steroidal anti-inflammatory agents/analgesics (NSAIAs) such as aspirin, naproxen, Tylenol, and ibuprofen. It appears that the sales were initially motivated by a three week period of a large increase in demand for the analgesic qualities of these drugs, but following a week long lull in sales, demand again peaked for three weeks for the antipyretic effects of these drugs. This was validated by a large increase in the sales of thermometers and examination reports of doctor visits. What is unusual is that this appears to have affected nearly every single person in the broadcast area of this small station. The only ones not affected were the deaf.
  7. Over the survey of cities and towns, it was discovered that there was a surge in consumer activity associated with a variety of drugs and foods in more than 70 communities over the period analyzed. In each instance, this surge in sales had no prior precedent and lasted for one or two weeks and then returned to normal without reoccurrence.
  8. By contrast, it was also discovered that there was a corresponding decrease in sales of specific drugs, foods and drinks in 67 communities – some of which were involved in the above-mentioned increase of sales. These decreased sales included a drop to nearly zero sales of all drinks containing any form of alcohol or milk. These decreases were especially significant because doctors and local advertisers actively opposed them without effect.
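Item 2 above describes detection by adaptive filtering against a reference baseline. As a rough sketch of that idea (not the actual implementation, and with made-up signals), the following LMS filter tries to predict the received signal from a clean reference; a persistently non-zero residual error indicates content in the broadcast that the reference cannot explain.

```python
import numpy as np

def lms_filter(reference, received, n_taps=8, mu=0.01):
    """Least-mean-squares adaptive FIR filter.

    Adapts the filter weights so the filtered reference tracks the received signal;
    the residual error is whatever the reference cannot account for.
    """
    weights = np.zeros(n_taps)
    errors = np.zeros(len(received))
    for n in range(n_taps, len(received)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        y = weights @ x                     # filter output (prediction)
        e = received[n] - y                 # error signal (the "cost function")
        weights += 2 * mu * e * x           # LMS weight update
        errors[n] = e
    return errors

# Made-up test: the received signal is the reference plus a weak extra tone.
fs = 8000
t = np.arange(0, 1, 1 / fs)
reference = np.sin(2 * np.pi * 440 * t)
anomaly = 0.05 * np.sin(2 * np.pi * 447 * t)   # weak added content
received = reference + anomaly

residual = lms_filter(reference, received)
print("mean residual power:", np.mean(residual[100:] ** 2))  # non-zero -> contaminated
```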

Dozens of other changes in consumer behavior, voter response, mood swings and entertainment sales were discovered, but no specific patterns of products, locations, responses or demographics emerged.

Summary:

The findings of the FCC/CED indicate that a significant and growing population has been and is being manipulated and controlled by listening to radio broadcasts. The degree of control exerted has been nothing short of extraordinary and without precedent. The technology involved has so far eluded detection. The source and objectives of these anomalous signals have also not yet been determined.

It is the speculation of the FCC/CED and of the NIH/SED that this has all the signs of someone or some organization that is actively testing their capabilities on a live and diverse group of test subjects. These tests appear to be random but are systematically exploring the degree of influence and the parts of the brain that can be exploited by these signals. What cannot be determined is the final intent or objective, or whether it has already been accomplished or is still ongoing.

Recommendations:

  1. It is recommended that the general public NOT be informed of this situation until we are able to define it further.
  2. We recommend that deaf analysts be assigned to monitor on-site listening stations in all of the largest radio coverage areas to maintain an observation of changes to behavior. In other areas, automated monitoring can be used to isolate the signals before sending encrypted files to NSA for analysis.
  3. We recommend the use of FBI and CIA to examine any commonality between these stations.
  4. We recommend that NIST and NIH continue their survey of behavior changes in all of the affected communities.
  5. We recommend that NRL and FCC collaborate on the creation of a selective RF counter-measure to the anomalous signals.
  6. We recommend that a cabinet-level task force be created within Homeland Security to assist and coordinate all of the above activities.

Sincerely,

Dr. W. Riley Hollingswood Ph.D.

FCC Director, Content Enforcement Division

April 21, 2009 Update:

Following the creation and coordination of the above report, it was reported to this office by NSA that the anomalous signals have been detected in both national broadcast and cable television signals.

SECRET

Government Secrets #2 They Control You!!

They Control You!!

After reading Government Secrets #1, you should know that I had access to a lot of intelligence over a long career and had a lot of insights into our government's actions on the international political stage. What I observed first hand and in my historical research is that repeatedly over decades, the US government has gone to great effort to create wars. You will never hear a military person admit this because most of them are not a part of the decision process that commits us to war, but because they believe in the idea that we are always right and that they will go to prison if they disobey, they will execute the directions to go to war with great gusto.

We have a very warped view of our own history. In every war we are the heroes and we fought on the side of right and we did it honorably and with great integrity. Well, that is what the history books would have you believe. Did you ever learn that we issued orders to take no prisoners at the battle of Iwo Jima? Thousands of Japanese were shot with their hands raised in surrender. To be fair, some of them would feign surrender and then pop a grenade, but you won't see this in our history books.

Did you know that our attack strategy in Europe was to destroy the civilian population? The worst example occurred on the evening of February 13, 1945, when Allied bombers and fighters attacked a defenseless German city, one of the greatest cultural centers of northern Europe. Within less than 14 hours not only was it reduced to flaming ruins, but an estimated one-third of its inhabitants, more than half a million, had perished in what was the worst single event massacre of all time. More people died there in the firestorm than died in Hiroshima and Nagasaki combined.

Dresden, known as the Florence of the North, was a hospital city for wounded soldiers. Not one military unit, not one anti-aircraft battery was deployed in the city. Together with the 600,000 refugees from Breslau, Dresden was filled with nearly 1.2 million people. More than 700,000 phosphorus bombs were dropped on 1.2 million people – more than one bomb for every 2 people. The temperature in the center of the city reached 1,600 degrees centigrade (nearly 3,000 degrees Fahrenheit). More than 260,000 bodies and residues of bodies were counted. But those who perished in the center of the city can't be traced because their bodies were vaporized or they were never recovered from the hundreds of underground shelters. Approximately 500,000 children, women, the elderly and wounded soldiers were slaughtered in one night.

Following the bomber attack, U.S. Mustangs appeared low over the city, strafing anything that moved, including a column of rescue vehicles rushing to the city to evacuate survivors. One assault was aimed at the banks of the Elbe River, where refugees had huddled during the night. The low-flying Mustangs machine-gunned those all along the river, as well as thousands who were escaping the city in large columns of old men, women and children streaming out of the city.

Did you ever read that in your history books? Did you know that we deliberately avoided all attacks on Hiroshima and Nagasaki so as to ensure that the civilian population would not flee the city?

This sparked my interest to look into "my war" – Viet Nam – and I began to study it in detail. I read about its start and how the famous Tonkin Gulf Incident was a complete ruse to let Lyndon Johnson boost troops for political gain and out of a personal fear that America might be seen as weak. He had great faith in our might and ability to make a quick and decisive victory, so he trumped up a fake excuse to get the famous Tonkin Gulf Resolution passed to give him more powers to send troops. The whole war had been just a political whim by a misguided politician, bolstered by the military-industrial complex that profited by massive arms sales and that also happened to be the largest contributor to the political campaigns. More than 50,000 US lives and countless Vietnamese lives later, we left Viet Nam having had almost no effect on the political outcome of the initial civil war effort to reunite the North and the South under communism – except that there were a lot fewer people to do it.

Even our basis for most of the cold war was mostly fake. For instance, I found pretty solid evidence that as early as the early 1960’s there was a massive campaign to create a false missile gap mentality in order to funnel massive money into the military.

Look up Operation Paperclip; it had actually given us a huge advantage in missile technology, so the whole basis for the cold war from before the Cuban Missile Crisis to the present is all based on a lie. Despite having the largest nuclear warheads, Russia's missiles are known for being so poorly guided that an ICBM had only a 20% probability of hitting within the effective range of its warhead. That meant it would be expected to hit within a radius of +/- 30 miles of its target. Our missiles, by contrast, are rated at less than 1,000 feet. In every crisis involving Russia in which we refused to back down, the Russians gave in because they knew that they did not have a chance in a nuclear exchange with the US. There was never any real missile gap nor any real threat to our world from communism. It was a scapegoat for all our mistakes and expenditures.

Did you know about the testing of bio-weapons, nuke weapons and designer drugs on our own US military? Do you know the truth about the start of Viet Nam? How about Angola, Nicaragua, the Congo, Grenada, Guatemala, Panama, El Salvador, Iran, Iraq, Israel, Argentina and dozens of others? Do you know the real story of the USS Liberty? The list of what is not fully known or understood by the US public is huge. I can guarantee that what you think happened, what is in the history books and the press, is NOT what really happened.

Here's just one example of how the news is not really the news as it happened but as our government wants us to hear it. Britain and Argentina went to war over the Falkland Islands in 1982. One incident we had a lot of intelligence about was the sinking of several British warships. One of these ships was hit and sunk by an Exocet air-to-surface missile despite the use of extensive electronic countermeasures. Or so that was the way it was reported in the news.

Because of my access to intelligence reports, I found out that the use of electronic countermeasures by the British was nearly flawless in its effectiveness at diverting or confusing these missiles. The skipper of the HMS Sheffield, in the middle of a battle, ordered the electronic countermeasures equipment to be shut off because he could not get a message to and from Britain with it on. As soon as his equipment was off, the Argentine Super Etendard aircraft launched the Exocet.

OK, this was a tragic screw-up by a British officer, but what our military planners and politicians did with it was the REAL tragedy. The bit about shutting off the electronic countermeasures equipment was deleted from all of the news reports and only the effectiveness of the Exocet was allowed to be published by the US press. The Navy and the Air Force both used this event to create the illusion of an anti-missile defense gap in the minds of the public and politicians and to justify the purchase of massive new defensive systems and ships at the cost of billions of dollars. All based on a false report.

In fact, an objective look at how we have been playing an aggressive game of manifest destiny with the world for the past 150 years would make you wonder how we can have any pride in our nation. From the enslavement of millions of blacks to the genocide of the American Indian to the forceful imposition of our form of government on dozens of sovereign nations, we have been playing the role of a worldwide dictator for decades. It has all been a very rude awakening for me.

The military-industrial complex that President Eisenhower warned us about is real but latter day analysts now call it the “military-industrial-congressional complex”. It is congress and some of the Presidents that we have had that are the power side of the triangle that consists of power, money and control.

The money buys the power because we have the best government in the world that is for sale on a daily basis, and that sale is so institutionalized that it is accepted as a way of doing routine business. The bribing agents are called lobbyists, but there is little doubt that when they visit a congressman to influence his vote, they are clearly and openly bribing him with money or with votes. The congressmen, in return, vote to give tax money to the companies that the lobbyists represent. Or perhaps they will vote to allow those companies to retain their status, earnings or advantages even when that is at the cost of damage to the environment, other people or other nations.

The control comes in the form of propaganda to sway and manipulate the masses; the military might to exert control over our enemies and our allies and the control of the workers and people that empower the congressmen – thus making the interlocking triangle complete.

What is not well known is a basic psychological mechanism that the military-industrial-congressional complex employs that few people understand or realize. Historical Sociologists (people that study how societies think over time and history) have discovered that every successful society in the world and over all of history, has had a scapegoat group of people or country or culture on which to blame all their problems.

Scapegoating is a hostile social-psychological discrediting routine by which people move blame and responsibility away from themselves and towards a target person or group. It is also a practice by which angry feelings and feelings of hostility may be projected, via inappropriate accusation, towards others. The target feels wrongly persecuted and receives misplaced vilification, blame and criticism; he is likely to suffer rejection from those whom the perpetrator seeks to influence. Scapegoating has a wide range of focus: from "approved" enemies of very large groups of people down to the scapegoating of individuals by other individuals. Distortion is always a feature.

In scapegoating, feelings of guilt, aggression, blame and suffering are transferred away from a person or group so as to fulfill an unconscious drive to resolve or avoid such bad feelings. This is done by the displacement of responsibility and blame to another that serves as a target for blame both for the scapegoater and his supporters.

Primary examples of this include 1930s Germany, in which Hitler used a variety of scapegoats to offset the German guilt and shame of World War I. He eventually chose the Jews and the entire population of Germany readily accepted them as the evil cause of all their problems. The US did this in the south for more than a century after the civil war by blaming everything on the black population. But this is true today for most of our successful countries: the Japanese hate the Koreans, the Arabs hate the Jews, in the southwest of the US the Mexicans are the targets but in the southeast it is still the blacks, the Turks hate the Kurds…and so it goes for nearly every country in the world and for all of history.

In some cases the scapegoat might be one religious belief blaming another as in the Muslims blaming the Jews or the Catholics blaming the Protestants. These kinds of scapegoats can extend beyond national boundaries but often are confined to regional areas like the Middle East or Central Europe. Finally, there are the political and ideological scapegoats. For many years, the US has pitted conservatives against liberals and Democrats against Republicans. This often has the effect of stopping progress because each side blames the other for a lack of progress and then opposes any positive steps that might favor the other side or give them the credit for the progress. Unfortunately, this scapegoat blame-game ends up being the essence of the struggles for power and control.

What is not well understood or appreciated is that our government is very well versed in this scapegoating and blame-game as a means to avoid accountability and to confuse the objectives. By creating an enemy that we can blame all our insecurities on – like we did with communism in the cold war – we can justify almost any expense, any sacrifice demanded of the public. If you question or oppose the decisions, then you are branded a communist sympathizer and are ostracized by society. Joseph McCarthy is the worst example of this but it exists today when we say someone is not patriotic enough if they dare to question a funding allocation for Iraq or for a new weapon system.

We, the public, are being manipulated by a powerful and highly effective psychological mechanism that is so well refined and developed that both the Democratic and Republican parties have active but highly secretive staffs composed of experts in the social psychological propaganda techniques that include, among others, scapegoating. In the Democratic Party this office is called the Committee for Public Outreach. In the Republican Party, their staff is called Specialized Public Relations. Even the names they choose make use of misdirection and reframing. Right now, the Democratic Party has the better group of experts, partly because they raided the staff of the Republican office of Specialized Public Relations back in 1996 by offering them huge salary increases. By paying them half a million dollars per year plus bonuses that can reach an additional $50 million, they have secured the best propaganda minds in the world.

In both cases, the staffs are relatively unknown and work in obscure private offices located away from the main congressional buildings. Their reports are passed as quietly and with as low a profile as possible, and only to the senior-most party members. The reports begin with clearly defined objectives of diverting public attention, countering fact-based reports or justifying some political action or non-action, but as they work their way through the system of reviewers and writers, the objective remains the same while the method of delivery gets altered so that the intent is not at all obvious. It is here that the experts in psychology and social science tweak the wording or events to manipulate the public, allies or voters.

The bottom line is that the federal government of the US has a long and verifiable history of lying but it is a fact that the lies that have been discovered are perhaps 5% of the lies that have emanated from the government. If you care to look, you will find that a great deal of what you think you know about our nation’s history, our political motivations and accomplishments and our current motives and justifications are not at all what you think they are. But I warn you – don’t begin this exploration unless you are willing to have your view of your country and even yourself seriously shaken up. But, if you don’t want to see the truth, then at least be open minded enough to listen to what will be declared the radical views that oppose the popular political positions of the day.

Ocean Dumping – A Summary of Studies

Ocean Dumping – A Summary of 12 Studies Conducted between 1970 and 2001

By Jerry Botana

The dumping of industrial, nuclear and other waste into oceans was legal until the early 1970’s when it became regulated; however, dumping still occurs illegally everywhere.  Governments world-wide were urged by the 1972 Stockholm Conference to control the dumping of waste in their oceans by implementing new laws. The United Nations met in London after this recommendation to begin the Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter which was implemented in 1975. The International Maritime Organization was given responsibility for this convention and a Protocol was finally adopted in 1996, a major step in the regulation of ocean dumping.

The most toxic waste material dumped into the ocean includes dredged material, industrial waste, sewage sludge, and radioactive waste. Dredging contributes about 80% of all waste dumped into the ocean, adding up to several million tons of material dumped each year. About 10% of all dredged material is polluted with heavy metals such as cadmium, mercury, and chromium, hydrocarbons such as heavy oils, nutrients including phosphorous and nitrogen, and organochlorines from pesticides. Waterways and, therefore, silt and sand accumulate these toxins from land runoff, shipping practices, industrial and community waste, and other sources.  This sludge is then dumped in the littoral zone of each country’s ocean coastline.  In some areas, like the so called “vanishing point” off the coast of New Jersey, in the United States, such toxic waste dumping has been concentrated into a very small geographic area over an extended period of time. 

In the 1970s, 17 million tons of industrial waste was legally dumped into the ocean by just the United States. In the 1980's, even after the Stockholm Conference, 8 million tons were dumped, including acids, alkaline waste, scrap metals, waste from fish processing, flue desulphurization, sludge, and coal ash.

If sludge from the treatment of sewage is not contaminated by oils, organic chemicals and metals, it can be recycled as fertilizer for crops, but it is cheaper for treatment centers to dump this material into the ocean, particularly if it is chemically contaminated. The UN policy is that properly treated sludge from cities does not contain enough contaminants to be a significant cause of eutrophication (an increase in chemical nutrients, typically compounds containing nitrogen or phosphorus, in an ecosystem) or to pose any risk to humans if dumped into the ocean; however, the UN policy was based solely on an examination of the immediate toxic effects on the food chain and did not take into account how the marine biome will assimilate and be affected by this toxicity over time. The peak of sewage dumping was 18 million tons in 1980, a number that was reduced to 12 million tons in the 1990s.

Radioactive Waste

Radioactive waste is also dumped in the oceans and usually comes from the nuclear power process, medical use of radioisotopes, research use of radioisotopes and industrial uses. The difference between industrial waste and nuclear waste is that nuclear waste usually remains radioactive for decades. The protocol for disposing of nuclear waste involves special treatment by keeping it in concrete drums so that it doesn't spread when it hits the ocean floor; however, poor containers and illegal dumping are estimated to account for more than 45% of all radioactive waste.

Surprisingly, nuclear power plants produce by far the largest amount of radioactive waste but contribute almost nothing to the illegal (after the Stockholm Conference) ocean dumping. This is because the nuclear power industry is so closely regulated and accountable for its waste storage. The greatest accumulation of nuclear wastes is off the coast of southern Africa and in the Indian Ocean.

The dumping of radioactive material has reached a total of about 84,000 terabecquerels (TBq); a terabecquerel is a unit of radioactivity equal to 10^12 atomic disintegrations per second, or 27.027 curies. The curie (Ci) is a unit of radioactivity originally defined as the radioactivity of one gram of pure radium. The high points of nuclear waste dumping were in 1954 and 1962, but this nuclear waste only accounts for 1% of the total TBq that has been dumped in the ocean. The concentration of radioactive waste in the concrete drums varies, as does the ability of the drums to hold it. To date, it is estimated that the equivalent of about 2.27 million grams (about 5,000 pounds) of pure radium has been dumped on the ocean floor.
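The radium-equivalent figure follows directly from the conversion factors stated above; a quick check, using only those factors:

```python
TOTAL_TBQ = 84_000          # total reported dumping, in terabecquerels
CI_PER_TBQ = 27.027         # 1 TBq = 27.027 curies
GRAMS_RADIUM_PER_CI = 1.0   # 1 Ci was originally defined as the activity of 1 g of pure radium
LB_PER_GRAM = 1 / 453.6     # grams to pounds

grams_radium_equivalent = TOTAL_TBQ * CI_PER_TBQ * GRAMS_RADIUM_PER_CI
print(f"{grams_radium_equivalent:,.0f} g of radium equivalent")    # ~2.27 million grams
print(f"{grams_radium_equivalent * LB_PER_GRAM:,.0f} lb")           # ~5,000 pounds
```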

Until it was banned, ocean dumping of radioactive waste was considered a safe and inexpensive way to get rid of tons of such material.  It is estimated that the 1960s- and early 1970s-era nuclear power plants in New Jersey (like Oyster Creek, located just 21 miles from the Barnegat Lighthouse) and 12 other nuclear power plants in Pennsylvania, New Jersey, and New York have dumped more than 100,000 pounds of radioactive material into the ocean off the New Jersey coast.

Although some claim the risk to human health is small, the long-term effects of nuclear dumping are not known, and some estimate up to 1,000 deaths over the next 10,000 years as a result of just the evaporated nuclear waste.

By contrast, biologists have estimated that the ocean’s biome has been, and will continue to be, permanently damaged by exposure to radioactive material.  Large-scale and rapid genetic mutations are known to occur as radiation dosage levels increase.  Plants, animals and micro-organisms in the immediate vicinity of leaking radioactive waste will experience the greatest and most radical mutations between successive generations.  However, tests show that even long-term exposure to diluted radioactive waste will create accelerated mutations and adaptations.

The Problems with Ocean Dumping

Although policies on ocean dumping in the recent past took an “out of sight, out of mind” approach, it is now known that accumulation of waste in the ocean is detrimental to marine and human health. Another unwanted effect is eutrophication, a biological process in which dissolved nutrients cause oxygen-depleting bacteria and plants to proliferate, creating a hypoxic, or oxygen-poor, environment that kills marine life. In addition to eutrophication, ocean dumping can destroy entire habitats and ecosystems when excess sediment builds up and toxins are released. Although ocean dumping is now managed to some degree, and dumping in critical habitats and at critical times is regulated, toxins are still spread by ocean currents. Alternatives to ocean dumping include recycling, producing less wasteful products, saving energy and converting dangerous materials into more benign waste.

According to the United Nations Group of Experts on the Scientific Aspects of Marine Pollution, ocean dumping actually contributes less pollution than maritime transportation, atmospheric pollution, and land-based pollution like run-off. However, when waste is dumped it is often close to the coast and very concentrated, as is the case off the coast of New Jersey.

Waste dumped into the ocean is categorized into a black list, a gray list, and a white list. On the black list are organohalogen compounds, mercury compounds and pure mercury, cadmium compounds and pure cadmium, any type of plastic, crude oil and oil products, refined petroleum and residue, highly radioactive waste, and any material made for biological or chemical warfare.

The gray list includes water highly contaminated with arsenic, copper, lead, zinc, organosilicon compounds, any type of cyanide, fluoride, pesticides, pesticide by-products, acids and bases, beryllium, chromium, nickel and nickel compounds, vanadium, scrap metal, containers, bulky wastes, lower-level radioactive material, and any material that will affect the ecosystem due to the amount in which it is dumped.

The white list includes all other materials not mentioned on the other two lists. The white list was developed to ensure that materials on it are safe and are not dumped in vulnerable areas such as coral reefs.
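Purely for illustration, here is a rough sketch of how the three lists might be encoded as a simple lookup; the substance names are abbreviated from the lists above and this is not any official registry:

# Sketch of a black/gray/white categorization of dumped materials,
# abbreviated from the lists above (not an official registry).
BLACK_LIST = {"organohalogens", "mercury", "cadmium", "plastic", "crude oil",
              "refined petroleum", "high-level radioactive waste",
              "chemical or biological warfare agents"}
GRAY_LIST = {"arsenic", "copper", "lead", "zinc", "organosilicons", "cyanide",
             "fluoride", "pesticides", "acids and bases", "beryllium", "chromium",
             "nickel", "vanadium", "scrap metal", "low-level radioactive waste"}

def categorize(material: str) -> str:
    """Return the dumping category for a material; anything unlisted falls on the white list."""
    if material in BLACK_LIST:
        return "black (prohibited)"
    if material in GRAY_LIST:
        return "gray (special care required)"
    return "white (permitted, away from vulnerable areas such as coral reefs)"

print(categorize("cadmium"))   # black (prohibited)
print(categorize("fluoride"))  # gray (special care required)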

In 1995, a Global Waste Survey and the National Waste Management Profiles inventoried waste dumped worldwide to determine what countries were dumping waste and how much was going into the ocean. Countries that exceeded an acceptable level would then be assisted in the development of a workable plan to dispose of their waste.

The impact of a global ban on ocean dumping of industrial waste was determined in the Global Waste Survey Final Report the same year. In addition to giving the impact for every nation, the report also concluded that the unregulated disposal of waste, pollution of water, and buildup of materials in the ocean were serious problems for a multitude of countries. The report also concluded that dumping industrial waste anywhere in the ocean is like dumping it anywhere on land. The dumping of industrial waste had reached unacceptable levels in some regions, particularly in developing countries that lacked the resources to dispose of their waste properly.

The ocean is the basin that catches almost all the water in the world. Eventually, water evaporates from the ocean, leaves the salt behind, and becomes rainfall over land. Water from melted snow ends up in rivers, which flow through estuaries and meet the saltwater.  River deltas and canyons that cut into the continental shelf, like the Hudson Canyon and the Mississippi Cone, create natural channels and funnels that direct concentrated waste into relatively small geographic areas, where it accumulates into highly concentrated deposits of fertilizers, pesticides, oil, human and animal wastes, industrial chemicals and radioactive materials.  For instance, feedlots in the United States produce more waste than the human population, more than 500 million tons of manure each year, about half of which eventually reaches the ocean basin.

Not only does the waste flow into the ocean, but it also encourages algal blooms that clog the waterways, causing meadows of seagrass, kelp beds and entire ecosystems to die. A zone without any life remaining is referred to as a dead zone and can be the size of an entire state, as in the coastal zones of Texas and Louisiana and north-east of Puerto Rico and the Turks and Caicos Islands.  All major bays and estuaries now have dead zones from pollution run-off. Often, pollutants like mercury, PCBs and pesticides are found in seafood meant for the dinner table and cause birth defects, cancer and neurological problems, especially in infants.

One of the most dangerous forms of dumping is of animal and human bodies.  The decomposition of these bodies creates a natural breeding ground for bacteria and micro-organisms that are known to mutate into more aggressive and deadly forms with particular toxicity to the animals or humans they once fed on.  The mid-Atlantic coast of the United States was a common dumping zone for animal carcasses (particularly horses) and human bodies up until the early 1900s.  Today, the most common area for human body dumping is India, where religious beliefs advocate burial in water.  The results of this dumping may be seen in the rise of extremely drug-resistant strains of leprosy, dengue fever and the bacteria that cause necrotizing fasciitis.

One of the largest deep-ocean dead zones is in the area between Bermuda and the Bahamas.  This area was a rich and productive fishing ground in the 1700s and early 1800s, but by the early 20th century it was no longer productive, and by the mid-1900s it was virtually lifeless below 200 feet of depth.  This loss of life seems to have coincided with massive ocean dumping along the New Jersey and Carolina coasts.

Recreation

Water recreation is another aspect of human life compromised by marine pollution from human activities like roads, shopping areas, and development in general.  Swimming is becoming unsafe, as over 12,000 beaches in the United States have been quarantined due to contamination from pollutants. Developed areas like parking lots allow runoff to occur at a much higher volume than a naturally absorbent field. Even routine activities like driving a car or heating a house send an estimated 28 million gallons of oil into lakes, streams and rivers. The hunt for petroleum through offshore gas and oil drilling leaks extremely dangerous toxins into the ocean and, luckily, is one aspect of pollution that has been halted by environmental laws.

Environmental Laws

In addition to the lack of underwater national parks, there is no universal law like the Clean Air Act or the Clean Water Act to protect United States ocean territory. Instead, there are many different laws, like the Magnuson-Stevens Fishery Conservation and Management Act, which apply only to certain aspects of overfishing and are relatively ineffective. That act, developed in the 1970s, is not based on scientific findings and is administered instead by regional fisheries councils. In 2000, the Oceans Act was implemented as a way to create a policy similar to the nationwide laws protecting natural resources on land. However, this act still needs further development and, like many of the conservation laws that exist today, it needs to be enforced.

 The total effects of ocean dumping will not be known for years but most scientists agree that, like global warming, we have passed the tipping point and the worst is yet to come.

Intergalactic Space Travel

Sometimes it is fun to reverse-engineer something based on an observation or description.  This can be quite effective because it not only offers a degree of validation or contradiction of the observation, it also forces us to brainstorm and think outside the box.

As a reasonably intelligent person, I am well aware of the perspective of the real scientific community with regard to UFOs.  I completely discount 99.95% of the wing-nuts and ring-dings that espouse the latest abduction, crop circle or cattle mutilation theories.  On the other hand, I also believe Drake’s formulas about life on other worlds, and I can imagine that what we find impossible, unknown or un-doable may not be for a civilization that got started 2 million years before us – or maybe just 2 thousand years before us.  Such speculation is not the foolish babbling of a space cadet but rather reasoned thinking outside the box – keeping an open mind to all possibilities.

In that vein, and with a touch of tongue in cheek, I looked for a topic on which to test the limits of my reverse-engineering approach.  With all the hype about the 50th anniversary of Roswell and the whole UFO fad in the news, I decided to try it on UFOs and the little green (gray) men that are supposed to inhabit them.

As with most of my research, I used Plato to help me out.  If you don’t know what Plato is, then go read my article on it, titled, Plato – My Information Research Tool.

Here goes:

 

What is the source of their Spacecraft Power? 

Assumptions: 

Again, with the help of Plato, I researched witnesses from all over the world. It is important to get them from different cultures to validate the reports; when the same data comes from across cultural boundaries, the confidence level goes up. Unfortunately, the pool of contactees includes a lot of space cadets and dingalings, which compounds the validation problem.  I had to run some serious research to get at a reliable database of witnesses.  I found that the consistency and reliability of the reports seemed to go up along with the witnesses’ credit ratings, home prices and/or tax returns. When cross-indexed with a scale of validity based on the witnesses’ professions and activities after their reports, my regression analysis came up with a projected 93% reliability factor for a selected group of 94 witnesses.

The descriptions they have in common are these:

The craft makes little or no noise.  It emits a light or lights that sometimes change colors.  There is no large blast of air or ejected rocket exhaust.  Up close, witnesses have reported being burned as if sunburned.  The craft is able to move very slowly or very fast and can turn very fast.  The craft is apparently unaffected by air or the lack of it.

We can also deduce that: the craft crossed space from another solar system; they may not have come from the closest star; their craft probably is not equipped for multi-generational flight; there may be more than one species visiting us.

What conclusions can be drawn from these observations?

If you exclude a force in nature that we have no knowledge of, then the only logical conclusion you can come to is that the craft use gravity for propulsion.  Feinberg, Feynman, Heinz Pagels, Fritzsch, Weinberg, Salam and lately Stephen Hawking have all studied, described or supported the existence of the gauge boson with a spin of two called the graviton.  Even though the Standard Model, supersymmetry and other theories argue over issues of spin, symmetry, color and confinement, most agree that the graviton exists.

That gravity is accepted as a force carried by the exchange of fundamental particles is a matter of record.  The Weinberg-Salam theory of particle exchange at the boson level has passed every unambiguous test to which it has been submitted; in 1979, its authors received the Nobel Prize in Physics for the model.

 Repulsive Gravity:

We know that mass and energy are really the same, that there are four fundamental interactions, and that the interactions take place by particle exchange.  Gravity is one of these four interactions.  IF we can produce a graviton, we can control it and perhaps alter it, in the same way we can produce a POSITRON through the interaction with matter of photons with energy greater than 1.022 MeV.  The positron is antimatter similar to an electron but with a positive charge, and positrons were observed as early as 1932.
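The 1.022 MeV figure is just the combined rest energy of the electron-positron pair; a one-line check (sketch only):

# Pair-production threshold: a photon must carry at least the rest energy
# of the electron-positron pair it creates.
ELECTRON_REST_ENERGY_MEV = 0.511      # m_e * c^2 in MeV
threshold_mev = 2 * ELECTRON_REST_ENERGY_MEV
print(f"Minimum photon energy for pair production: {threshold_mev:.3f} MeV")  # 1.022 MeV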

It seems logical that we could do the same with gravitons.  Gravity is, after all, the only force for which a repulsive counterpart has not been observed, and yet it does not appear to be so very different from the other three fundamental interactions.

Einstein and Hawking have pointed out that gravity can have a repulsive as well as an attractive character.  In his work with black holes, Hawking showed that quantum fluctuations in an empty de Sitter space could create a virtual universe with negative gravitational energy; by means of the quantum tunneling effect, it can cross over into the real universe. Obviously, this is all mathematical theory, but parts of it are supported by observed evidence.  The tunneling effect is explained by quantum mechanics and the Schrödinger wave equations and is applied in current technology involving thin layers of semiconductors.  The de Sitter-Einstein theory is the basis of the big bang theory and current views of space-time.

The bottom line is that if we have enough energy to manipulate gravitons, it appears that we can create both attractive and repulsive gravitons.  Ah, but how much power is needed?

 Recipe to Make Gravity

We actually already know how to make gravitons; several scientists have described how.  It would take a particle accelerator capable of about 10 TeV (10 trillion electron volts) and an acceleration chamber about 100 km long filled with superconducting magnets.
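For a rough feel of what that implies, here is a sketch using the standard bending-magnet relation, under my own assumption (not stated above) that the 100 km chamber is bent into a ring:

# Rough check of the bending field needed for a 10 TeV machine, assuming
# (for illustration only) that the 100 km chamber is a circular ring.
import math

BEAM_ENERGY_GEV = 10_000          # 10 TeV, as quoted above
RING_CIRCUMFERENCE_M = 100_000    # 100 km

radius_m = RING_CIRCUMFERENCE_M / (2 * math.pi)
# Standard bending relation for a singly charged particle: p[GeV/c] ~ 0.3 * B[T] * R[m]
field_tesla = BEAM_ENERGY_GEV / (0.3 * radius_m)
print(f"Bending radius ~{radius_m:,.0f} m, field ~{field_tesla:.1f} T")  # roughly 2 tesla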

The best we can do now is with the CERN and Fermilab synchrotrons; in 1989 the Tevatron at Fermilab reached 1.8 TeV.  The Superconducting Super Collider (SSC), under construction in Ellis County, Texas, would have given us 40 TeV, but our wonderful “education president”, the first Mr. Bush, moved to kill the project in 1992, and Congress formally cancelled it in 1993.  With the SSC, we could have created, manipulated and perhaps altered a graviton.

 We Need A Bigger Oven

The reason we are having such a hard time doing this is that we don’t know how to build such particle accelerators other than through big SSC-type projects.  Actually, that’s not quite true.  What is true is that we don’t know how to do it any other way SAFELY.  A nice nuclear explosion would do it easily, but we might have a hard time hiring lab technicians to observe the reaction.

What do you think we will have in 50 or 100 or 500 years?  Isn’t it reasonable to assume that we will have better, cheaper, faster, more powerful and smaller ways of creating high-energy sources?  Isn’t it reasonable to assume that a civilization that may be 25,000 years ahead of us has already done that?  If they have, then it would be an easy task to create gravitons out of other energy or matter and to concentrate, direct and control the force to move a craft.

 Silent Operation

Now let’s go back to the observations.  The movement is silent.  That fits: gravity is not a propulsive force based on the thrust of a propellant.  I imagine the gravity engine to be more like a gimbaled searchlight, the beam being the attractive or repulsive graviton beam, with a shield or lens to direct it in the direction they want to move.

 Sunburns from the UFOs

How about the skin burns on close witnesses, as if by sunburn? OK, let’s assume the burn was exactly like sunburn, i.e. caused by ultraviolet light (UVL).  UVL is generated by transitions in atoms in which an electron in a high-energy state returns to a less energetic state by emitting an energy burst in the form of UVL.  Now we have to get technical again.  We also have to step into the realm of speculation, since we obviously have not made a gravity engine yet.  But here are some subjects that have a remarkable degree of coincidence with the high-energy control needed for the particle accelerator and the observed sunburn effects.

The BCS theory (Bardeen, Cooper & Schrieffer) states that in superconductivity, the “quantum-mechanical zero-point motion” of the positive ions allows the electrons to lower their energy state.  The released energy is not absorbed as heat, implying it is not in the infrared range.  More recently, the so-called high-temperature ceramic and organic superconducting compounds have also been explained in terms of electron energy states.  Suppose a by-product of using the superconductors in their graviton particle accelerator is the creation of UVL?

Perhaps the gimbaled graviton-beam engine is very much like a light beam.  A MASER is the microwave cousin of the LASER: it emits microwave energy that is coherent, with a single wavelength and phase.  Such coherency may be necessary to direct the graviton beam, much like directing the steering jets on the space shuttle for precision docking maneuvers.

A maser’s energy is made by raising electrons to a high-energy state and then letting them jump back to the ground state.  Sound familiar?  The amount of energy is the only real difference between the microwave process and the UVL process; both are photon emission on the same electromagnetic spectrum, with UVL photons carrying far more energy than microwave photons. Suppose the process is less than perfect, or that it has a fringe-area effect that produces UVL at the outer edges of the energy field used to create the graviton beam.  Since the Grays would consider it exhaust, they would not necessarily shield it or even worry about it.
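To put numbers on that energy gap, here is a small sketch comparing photon energies; the 300 nm and 1 cm wavelengths are my own illustrative choices:

# Photon energy E = h*c/wavelength, comparing a UV photon to a microwave photon
# (wavelengths chosen for illustration: ~300 nm UV, ~1 cm microwave).
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electron-volt

def photon_energy_ev(wavelength_m: float) -> float:
    return H * C / wavelength_m / EV

uv = photon_energy_ev(300e-9)        # about 4.1 eV
microwave = photon_energy_ev(0.01)   # about 1.2e-4 eV
print(f"UV ~{uv:.1f} eV, microwave ~{microwave:.2e} eV, ratio ~{uv/microwave:,.0f}x")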

 But it has got to GO FAST! 

Finally, we must discuss speed.  The nearest star is Proxima Centauri at about 1.3 parsecs (about 4.3 light years).  The nearest globular cluster is Omega Centauri at roughly 17,000 light years, and the nearest major galaxy is Andromeda at about 2.2 million light years.  Even at the speed of light, these distances are out of reach for a commuter crowd of explorers.  But just as the theory of relativity shows us that matter and energy are the same thing, it shows that space and time are one and the same; and if space and time are related, so is speed.   This is another area that can get quite technical, and the best recent reference is Hawking’s A Brief History of Time.  In it he explains that it may be possible to travel from point A to point B by simply curving the space-time continuum so that A and B are closer.  In any case, we must move fast to do this kind of playing with time and space, and the most powerful force in the universe is gravity.  Let’s take a minor corollary:

 Ion Engine

In the mid-1960s, a new engine was invented in which an electrically charged ion stream formed the reaction mass for the thrusters.  The most power it could produce was about 1/10 horsepower, with a projected maximum of 1 HP if work on the design continued.  It was weak, but its Isp (specific impulse, a rating of propellant efficiency) was superior; it could operate for years on a few pounds of fuel.  It was speculated that if a Mars mission were to leave Earth orbit and accelerate using an ion engine for half the trip and then decelerate for the other half, it would arrive 5 months sooner than if it had not used one.  The gain comes from the high-velocity exhaust of the ion engine giving a small but continuous gain in speed.

Suppose such a small engine had 50,000 HP and could operate indefinitely.  Acceleration would be constant and rapid.  It might be possible to get to 0.8 or 0.9 of C (80% or 90% of the speed of light) over time with such an engine.  This is what a graviton engine could do.  At these speeds, relativistic effects would take hold.   We now have all the ingredients.
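For a sense of scale, here is a minimal sketch of the time needed to reach those speeds, assuming a constant proper acceleration of 1 g (my assumption for illustration, not a claim about any particular engine):

# Coordinate time needed to reach a given fraction of c under constant proper
# acceleration (relativistic rocket result). Assumes 1 g for illustration.
import math

C = 2.998e8        # speed of light, m/s
G = 9.81           # m/s^2, assumed constant proper acceleration

def years_to_reach(beta: float, accel: float = G) -> float:
    """Coordinate time (years) to reach speed beta*c from rest: t = (c/a) * beta / sqrt(1 - beta^2)."""
    t_seconds = (C / accel) * beta / math.sqrt(1.0 - beta**2)
    return t_seconds / (365.25 * 24 * 3600)

for beta in (0.8, 0.9):
    print(f"{beta:.1f} c after ~{years_to_reach(beta):.1f} years at 1 g")   # roughly 1.3 and 2.0 years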

Superstring theory and other interesting versions of the space-time continuum and space-time curvature are still in their infancy.  We must explore them in our minds, since we do not have the means to experiment in reality.  We make great gains when we have a mind like Stephen Hawking’s working on the ideas; we lose so much when politicians like Bush (Sr. or Jr.) stop projects like the SSC.  We can envision the concept of travel and the desire and purpose, but we haven’t yet resolved the mechanism.  The fact that what we observe in UFOs is at least consistent with some hard-core leading-edge science is encouraging.

It really surprises me that we haven’t begun serious research into this subject.  A lot of the theoretical work has already been done, and the observed evidence fits the math.

Alien Life Exists

October 13, 1998

I want to thank you for letting me post your article about gravity shielding that appeared in the March ’98 issue of WIRED magazine.  Your comments on my article about lightning sprites and the blue-green flash are also appreciated.  In light of our ongoing exchange of ideas, I thought you might be interested in some articles I wrote for my web forum on “bleeding edge science” that I hosted a while back.  Some of these ideas and articles date back to the mid-’90s, so some of the references are a little dated, and some of the software I use now is a major improvement over what I had then.

What I was involved with then can be characterized by the books and magazines I read: a combination of Skeptical Inquirer, Scientific American, Discover and Nature.  I enjoyed the challenge of debunking some space cadet who had made yet another perpetual motion machine or yet another 250 mile-per-gallon carburetor, both claiming that the government or big business was trying to suppress their inventions.  Several of my articles were printed on the bulletin board that pre-dated the publication of the Skeptical Inquirer.

I particularly liked all the far-out inventions attributed to one of my heroes – Nikola Tesla.  To hear some of those fringe groups, you’d think he had to be an alien implant working on an intergalactic defense system.  I got more than one space cadet upset with me by citing real science to shoot down his gospel of zero-point energy forces and free energy.

Perhaps the most fun is taking some wing ding with a crazy idea and bouncing it against what we know from hard science.  More often than not, they use fancy scientific terms and words that they do not really understand to try to add credibility to their ravings.  I have done this so often, in fact, that I thought I’d take on a challenge and play the other side for once.  I’ll be the wing nut and spin a yarn about some off-the-wall idea, but I’ll do it in such a way that I really try to convince you that it is true.  To do that, I’m going to use everything I know about science.  You be the judge of whether this sounds like a space cadet or not.

===============================

 

Are They Really There?

Life is Easy to Make

Since the Stanley Miller experiment in 1953, we have, or should have, discarded the theory that we are unique in the universe.  The production of organic compounds, including amino acids, the building blocks of life, has been shown to occur in simple mixtures of hydrogen, ammonia, methane and water when exposed to an electrical discharge (lightning).  The existence of most of these components has been frequently verified by spectral analysis of distant stars, though until recently we could not see those stars’ planets.  Based on the most accepted star and planet formation theories, most star systems would have a significant number of planets with these elements and conditions.

 Quantifying the SETI

The radio astronomer Frank Drake developed an equation that was the first serious attempt to quantify the number of technical civilizations in our galaxy.  Unfortunately, its factors are very ambiguous, and various scientists have produced numbers ranging from 1 to 10 billion technical civilizations in just our galaxy.  A formula in this condition is referred to as unstable, or ill-conditioned.  There are mathematical techniques to reduce the instability of such equations, and I attempted to apply them to quantify the probability of the existence of intelligent life.

I approached the process a little differently.  Rather than come up with a single number for the whole galaxy, I decided to relate the probability to distance from Earth.  Later I added directionality.

Using Drake’s basic formula as a starting point, I added a finite stochastic process using conditional probability. This produces a tree of event outcomes for each computed conditional probability.  (The conditions being quantified were those in his basic formula: the rate of star formation; the number of planets in each system with conditions favorable to life; the fraction of those planets on which life develops; the fraction that develop intelligent life; the fraction of those that evolve technical civilizations capable of interstellar communication; and the lifetime of such a civilization.)

I then layered one more parameter onto this by increasing the probability of a particular tree path in proportion to the square of the distance (that is, inversely to one over the square of the distance).  This is a conservative way to capture the increasing probability of intelligent life as the distance from Earth grows and more stars and planets are included in the sample.

 I Love Simulation Models

I used the standard values used by Gamow and Hawking in their computations; however, I ignored Riemannian geometry and assumed a purely Euclidean universe.  Initially, I assumed the standard cosmological principles of homogeneity and isotropy (I changed that later).  Of course, this produced thousands of probable outcomes, but by running a Monte Carlo simulation over the probability distribution and the initial factors of Drake’s formula (within reasonable limits), I was able to derive a graph of the probability of technical civilizations as a function of distance, along the lines sketched below.
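For flavor, here is a minimal sketch of the kind of Monte Carlo I am describing; the parameter ranges and the local stellar density are hypothetical placeholders for illustration, not the values I actually used:

# Minimal sketch of a Monte Carlo over Drake-style factors, giving the probability
# of at least one technical civilization within a given distance of Earth.
# All ranges below are hypothetical placeholders, not the values used in the text.
import math
import random

STAR_DENSITY_PER_LY3 = 0.004   # rough stellar density near the Sun, stars per cubic light year

def sample_fraction_per_star() -> float:
    """One Monte Carlo draw of the chance that a given star hosts a communicating civilization."""
    f_planets   = random.uniform(0.2, 1.0)    # stars with planetary systems
    n_habitable = random.uniform(0.1, 2.0)    # habitable planets per such system
    f_life      = random.uniform(0.01, 1.0)   # habitable planets where life appears
    f_intel     = random.uniform(0.001, 0.5)  # life that becomes intelligent
    f_comm      = random.uniform(0.01, 0.5)   # intelligence that builds radio technology
    lifetime_fr = random.uniform(1e-7, 1e-3)  # fraction of a star's life the civilization lasts
    return f_planets * n_habitable * f_life * f_intel * f_comm * lifetime_fr

def prob_within(distance_ly: float, trials: int = 10_000) -> float:
    """Average over trials of P(at least one civilization within distance_ly)."""
    stars = STAR_DENSITY_PER_LY3 * (4.0 / 3.0) * math.pi * distance_ly**3
    total = 0.0
    for _ in range(trials):
        expected = stars * sample_fraction_per_star()
        total += 1.0 - math.exp(-expected)    # Poisson chance of at least one
    return total / trials

for d in (100, 1_000, 10_000, 100_000):
    print(f"within {d:>7,} ly: P ~ {prob_within(d):.3f}")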

 But I Knew That

As was predictable before I started, the graph is a rising, non-linear curve, converging on 100% if you go out far enough in distance.  Even though the outcome was intuitive, what I gained was a range of distances with a range of corresponding probabilities of technical civilizations.  Obviously, the graph converges to 100% at infinite distance, but what was really surprising is that it is above 99% before you even leave the Milky Way.  We don’t have to go to Andromeda to have a very good chance of there being intelligent life in space.  Of course, that is not so unusual, since our galaxy may have about 200 billion stars and some unknown multiple of that in planets.

 Then I made It Directional

I toyed with one other computation.  The homogeneous and isotropic universe used by Einstein and Hawking is a mathematical convenience that lets them relate the structure of the universe to their theories of space-time. These mathematical fudge-factors are not consistent with observation at smaller orders of magnitude in distance from Earth, out to the limits of what we can observe, about 15 billion light years.  We know that there are inhomogeneities, or lumps, in the stellar density at these relatively close distances.  The closest lump is called the Local Group, with 22 galaxies, but it sits on the edge of a supercluster of 2,500 galaxies.  There is an even larger grouping, called the Great Attractor, that may contain tens of thousands of galaxies.

By altering my formula, I took into account the equatorial-system direction (right ascension and declination) of this inhomogeneous clustering.  Predictably, this just gave me a probability of intelligent life based on a vector rather than a scalar measure.  It did, however, move the distance for any given probability much closer, in the direction of the clusters and superclusters.  So much so that at about 351 million light years, the probability is virtually 100%, and at only about 3 million light years it is already over 99%.  That is well within the Local Group of galaxies.

Consider that there are tens of billions of stars and galaxies within detection range of Earth and some unknown quantity beyond detection.  The total number of stars has been estimated at as many as a 1 followed by 21 zeros, which is more than all the grains of sand in all the oceans, beaches and deserts of the entire world, and those stars are spread across hundreds of billions of galaxies.  Now you can begin to see why the formula to quantify the number of technical civilizations in space comes out at virtually 100% no matter how conservative you make the input values.  It can do no less than prove that life is out there.

Trans-Dimensional Travel

These articles deal with the fringe in that I was addressing the “science” behind so-called UFOs.

I have done some analysis on the possibility of life in our solar system other than on Earth, and the odds against it are very high, at least for life as we know it.  Even Mars probably did not get past the early stages of life before the O2 was consumed.  Any biologist will tell you that in our planet’s evolution there were any number of critical thresholds of the presence or absence of a gas, of heat or water (or a magnetic field or magma flow) that, if crossed, would have returned the planet to a lifeless dust ball.

Frank Drake’s formulas are a testament to that.  The only reason his formulas are used to “prove” that life exists is the enormous number of tries that nature gets across the observable universe and over so much time.

One potential perspective is that what may be visiting us as “UFOs” could be a race, or several races, of beings that are 500 to 25,000 years or more ahead of us.  Given the age of the universe and the fact that our sun is probably a second- or third-generation star, this is not difficult to understand.  Some planet somewhere was able to get life started before Earth did, and its inhabitants are now where we will be in the far distant future.

Stanley Miller showed that life as we know it could form out of organic and natural events during the normal evolution of a class M planet.  But Drake’s numbers show that the chances of that occurring twice in one solar system are very low.  If you work backwards from his formulas, taking the existence of Earth as one input to the equations, you would need something like 100 million planets to get even a slight chance of another planet with high-tech life on it.

Taking this into consideration, and then comparing it to the chance that the “monuments” on Mars are anything other than natural formations, or to any other claim of extraterrestrial life within our solar system, you must conclude that there is virtually no chance of such life here.  Despite this, there are many who point to “evidence” such as the appearance of a face and pyramids in photographs of Mars.  It sounds a lot like an updated version of the “canals” that were first seen in the 19th century.  Now we can “measure” these observations with extreme accuracy – or so they would have you believe.

The so-called perfect measurements and alignments that are supposedly seen on the pyramids and “faces” are very curious, since even the best photos we have of these sites have a resolution that could never support such accuracy.  When you get down to “measuring” the alignment and the sizes of the sides, you can pretty much lay the compass or ruler anywhere you want because of the fuzz and loss of detail caused by the relatively poor resolution.  Don’t let someone tell you they measured angles to a fraction of a degree and lengths to within inches when the photo has a resolution of meters per pixel!

As for the multidimensional universe: I believe Stephen Hawking when he says that there are more than 3 dimensions.  However, for some complex mathematical reasons, a fifth dimension would not necessarily have any relationship to the first four, and objects extended in a fifth dimension would have extents in the first four (length, width, height and time) that are very small, on the order of atomic scales.  This means that, according to our present understanding of the math, the only way we could experience more than 4 dimensions is to be reduced to angstrom sizes and to withstand very high excitation from an external energy source.   Let’s exclude the size issue for a moment, since that is an artifact of the math model chosen for the theory and may not be correct.

We generally accept that time is the 4th dimension after length, width and height, which seem to be related in that they share the same units but point in different directions.  If time is a vector (which we believe it is) and it is so very different from up, down and so on, then what would you imagine the unit of a 5th dimension to be?

Most people think of “moving” into another dimension as if it were just some variation of the first 4, but this is not the case.  The next dimension is not something we are capable of understanding, because we have no frame of reference for it.

Hawking gives a much better explanation of this in one of his books, but suffice it to say that we do not know how to explore this question because we cannot conceive of the context of more than 4 dimensions.  The only way we can explore it is with math; we can’t even graph it, because we haven’t got a 5-axis coordinate system.  I have seen a 10-dimensional formula graphed, but they did it only 3 dimensions at a time.

Whatever relationship a unit called a “second” has with a unit called a “meter” may or may not be the same relationship the meter has with “???????” (whatever the units of the 5th dimension are called).  What could it possibly be?  You describe it for me, but don’t use any reference to the first 4 dimensions.  For instance, I can describe time or length without reference to any of the other known dimensions.  The bottom line is that this is one area where even a computer cannot help, because no one has been able to give a computer an imagination……..yet.  However, it is an area so far beyond our thinking that perhaps we should not speculate about them coming from another dimension.

Let’s look at other possibilities.  To do that, take a look at the other article on this blog, titled “Intergalactic Space Travel”.

Achieving the Speed of Light NOW

Scientists have been telling us for some time that it is impossible to achieve the speed of light.  The formula says that mass increases toward infinity as you approach C, so the power needed to go faster also rises toward infinity.  The theory also says that time is displaced (slows) as we go faster.  We have “proven” this by tiny variations in the orbits of some of our satellites and in the orbit of Mercury.  For an issue that physics treats as such a barrier to further research, shouldn’t we have a more dramatic demonstration of the theory?  I think we should, so I made one up.

Let us suppose we have a weight on the end of a string.  The string is 10 feet long, and we hook it up to a motor that can spin at 20,000 RPM.  The end of the string will travel 62.8 feet per revolution, or 1,256,637 feet per minute.  That is 3.97 miles per second, or an incredible 14,280 miles per hour.  OK, so that is only 0.0021% of C, but for only ten feet of string and a motor that we can easily build, that is not bad.

There are motors that can easily reach 250,000 RPM, and there are turbines that can spin up to 500,000 RPM.  If we explore the limits of this experimental design, we might find something interesting.   Now let’s get serious.

Let’s move this experiment into space.  With no gravity and no air resistance, the apparatus can function very differently.  It could use string or wire or even thin metal tubes.  If we control the speed of the motor so that we do not exceed the limitations imposed by momentum, we should be able to spin something pretty fast.

Imagine a motor that can spin at 50,000 RPM with a string mechanism that can pay out line from the center as the speed slowly increases.  Now let’s, over time, let out 1 mile of string while increasing the speed of rotation to 50,000 RPM.  The end will now be traveling at nearly 19 million miles per hour, or about 2.8% of C.

If we boost the speed up to 100,000 RPM and can get the length out to 5 miles, the end of the string will be doing an incredible 188,495,520 miles per hour.  That is more than 28% of the speed of light.
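The arithmetic above is easy to check with a few lines (a sketch; the radius and RPM pairs are the three examples just given):

# Tip speed of a tether of a given radius spun at a given RPM, as a fraction of c.
import math

C_MPH = 670_616_629        # speed of light in miles per hour

def tip_speed_mph(radius_miles: float, rpm: float) -> float:
    return 2 * math.pi * radius_miles * rpm * 60   # miles per revolution * rev/min * min/hr

for radius, rpm in ((10 / 5280, 20_000), (1, 50_000), (5, 100_000)):
    v = tip_speed_mph(radius, rpm)
    print(f"r = {radius:g} mi, {rpm:,} RPM -> {v:,.0f} mph = {100 * v / C_MPH:.3f}% of c")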

What will that look like?  If we have spun this up correctly, the string (wire, tubes, whatever) will be pulled taut by the centrifugal force of the spinning.  With no air resistance and no gravity, the string should form a nearly perfect straight line outward from the axis of rotation.  The only force that might distort this line is momentum, but if we have spun the setup up slowly, so that the weight at the end is what pulls the string out of the center hub, then it should be straight.

I have not addressed the issue of whether the wire is strong enough to withstand the centrifugal force on the spinning weight.  That is not trivial, but for the purposes of this thought experiment I am assuming the string can handle whatever weight we use.

Let us further suppose that we have placed a camera exactly on the center of the spinning axis, facing outward along the string.  What will it see?  If the theory is correct, then despite the string being pulled straight by the centrifugal force, I believe we will see the string appear to curve backward, and at some point it will disappear from view.  The reason is that as you move out along the string, its speed gets faster and faster, closer and closer to C.  This will cause the relative time at each increasing distance from the center to run slower and appear to lag behind.  When viewed from the center-mounted camera, the string will appear to curve.

If we could use some method to make the string visible for its entire length, its spin would cause it to eventually fade from view when the time at the end of the string is so far behind the present time at the camera that it can no longer be seen.  It is possible that it might appear to spiral around the camera, even making concentric overlapping spiral rings. 

Suppose synchronized clocks were placed at the center and at the end of the string, and cameras at both ends let us view the two images side by side at the hub.  Each camera would show a clock that started out synchronized with the other; the only difference is that one is now traveling at some percentage of C faster than the other.  I believe they would read different times as the spin rate increased.
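To put a number on the difference, here is a sketch using the standard special-relativity time-dilation factor for the tip speed in the 28%-of-C example:

# Time-dilation check for a clock riding the tip of the string at about 28% of c.
import math

def gamma(beta: float) -> float:
    return 1.0 / math.sqrt(1.0 - beta**2)

beta_tip = 0.281                       # tip speed from the 5-mile / 100,000 RPM example
lag_per_hour_s = 3600 * (1.0 - 1.0 / gamma(beta_tip))
print(f"gamma = {gamma(beta_tip):.4f}; tip clock falls ~{lag_per_hour_s:.0f} s behind per hour")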

But now here is a thought puzzle.  Suppose there is an electronic clock at the end of the string as described above, but now, instead of sending its camera image back to the hub, we send its actual reading over wires embedded in the string, back to the hub, where it is read side by side with a clock that never left the hub.  What will it read now?  Will the time distortion alter the speed of the electrons so that the two clocks do NOT show a time difference at the hub?  Or will the speed of the electricity be constant and thus show two different times?  I don’t know.

Longevity

January 10, 1987

As for longevity, there has been some very serious research going on in this area, but it has recently been hidden behind the veil of AIDS research. There is a belief that the immune system and the other recuperative and self-correcting systems in the body wear out and slowly stop working; this is what gives us old-age skin and gray hair.  This area was studied very deeply up until the early 1980s.  Most notable were some studies at the University of Nebraska that began to make good progress in slowing biological aging through careful stimulation and supplementation of naturally produced chemicals.  When the AIDS problem surfaced, a lot of money was shifted into AIDS research.  It was argued that the issues related to biological aging were related to the immune issues of AIDS.  This got the researchers AIDS money and they continued their work; however, they keep a very low profile because they are not REALLY doing AIDS research. That is why you have not heard anything about their work.

Because of my somewhat devious links to some medical resources and a personal interest in the subject, I have kept myself informed, and I have a good idea of where they are; it is very impressive.  Essentially, in the inner circles of gerontology there is general agreement that the symptomology of aging is due to metabolic malfunction and not cell damage.  This means that it is treatable.  It is the treatment that is being pursued now, and, as in other areas of medicine where a large multiplicity of factors affects each individual’s aging process, successes come in finite areas, one area at a time.  For instance, senility is one area that has gotten attention because of its mapping to metabolic malfunction induced by the presence of metals, along with factors related to the emotional environment.  Vision and skin condition are also areas that have seen successful treatments.

When I put my computer research capability to work on this about a year ago, what I determined was that by the year 2024, humans will have an average life span of about 95-103 years.  It will go up by about 5% per decade after that for the next century, then level out due to the increase of other factors.
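Compounding that 5% per decade from the midpoint of the 95-103 range gives a rough picture (a sketch of the arithmetic only):

# Compounding the projected 5%-per-decade gain in average life span for a century,
# starting from the rough midpoint of the 95-103 year projection for 2024.
lifespan = 99.0
for decade in range(1, 11):
    lifespan *= 1.05
    print(f"{2024 + 10 * decade}: ~{lifespan:.0f} years")   # ends around 160 years by 2124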