Author Archives: Tom

About Tom

See "About the Author" for some details. All that is written there is true...or not. I am on the fringe of both reality and existance. The proof that I really exist at all lies in the truth..or lack, thereof, in these stories. You are the judge. I am just a whisper in the wind.

Investing Computer

One of the most interesting applications of computer technology is in the field of investing. It is interesting that, with all the sophisticated systems and all the monetary rewards possible, there has not been a successful program that can guide a broker to make foolproof investment predictions...until now. It is a fact that, out of all the investors and resources on Wall Street, none of them performs much better than slightly above random selection in picking the optimum investment portfolio. Numerous studies on this subject show that the very best investment advisors have perhaps a 10% or 15% improvement over random selection and that even the best analysts cannot sustain their success for very long.

There are lots of people who are able to see very near-term trends (on the order of a few days or a week or two, at most) and invest accordingly, but no one has figured out how to consistently predict stock rises and falls over the long term (more than 3 or 4 weeks out). That was the task I attempted to solve – not because I want to be rich but because it seemed like an interesting challenge. It combines the math of finance and the psychology of sociology with computer logic.

I did a lot of research and determined that there is, in fact, no one who knows how to do it, but there is a lot of math research that says it should be predictable using complex math functions, like chaos theory. That means I would have to create the math, and I am not that good at math. However, I do know how to design analytical software programs, so I decided to take a different approach and create a tool that would create the math for me. That I could do.

Let me explain the difference. In college, I took programming, and one assignment was to write a program that would solve a six-by-six numeric matrix multiplication problem, but we had to do it in 2,000 bytes of computer core memory. This requires machine code and teaches optimum, efficient coding. It is actually very difficult to fit all the operations needed into just 2K of memory, and most of my classmates either did not complete the assignment or worked hundreds of hours on it. I took a different approach. I determined that the answer was going to be whole positive numbers, so I wrote a program that asked if “1” was the answer and checked to see if that solved the problem. When it didn’t, I added “1” to the answer and checked again. I repeated this until I got to the answer. My code was the most accurate and by far the fastest that the instructor had ever seen.
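To give a flavor of that guess-and-check approach, here is a tiny sketch in Python. The function and the example equation are purely illustrative (this is not the original assignment code): instead of solving for the answer algebraically, it starts at 1 and keeps incrementing until a candidate satisfies the check.

```python
def solve_by_increment(check, start=1, limit=1_000_000):
    """Return the first whole number for which check() is True."""
    candidate = start
    while candidate <= limit:
        if check(candidate):
            return candidate
        candidate += 1
    return None  # no whole-number solution within the limit

# Example: find x with 7 * x == 161 without ever dividing.
answer = solve_by_increment(lambda x: 7 * x == 161)
```

It never "solves" anything in the mathematical sense; it just recognizes the right answer when it stumbles onto it, which is exactly the trick described above.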

I got the answer correct and fast but I didn’t really “solve” the problem.  That is how I decided to approach this investment problem.  I created a program that would take an educated guess at an algorithm that would predict future stock values.  If it was wrong, then I altered the algorithm slightly and tried again.  The initial guessed algorithm needed to be workable and the method of making the incremental changes had to be well thought out.

The answer uses something called forward-chaining neural nets with an internal learning or evolving capability. I could get really technical, but the gist of it is this – I first created a placeholder program (No. 1) that allows for hundreds of possible variables but has many of them set to 1 or zero. It then selects inputs from available data and assigns that data to the variable placeholders. It then defines a possible formula that might predict the movements of the stock market. This program has the option to add additional input parameters, constants, variables, input data and computations to the placeholder formula. It seeks out data to insert into the formula. In a sense, it allows the formula to evolve into totally new algorithms that might include content that has never been considered before.
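A minimal sketch of the placeholder idea, with all names and numbers invented for illustration: the "formula" is just a vector of coefficients over placeholder inputs, most of them starting inert at 0 or 1, and a mutate step makes the small incremental change that lets it evolve.

```python
import random

def make_formula(n_slots=8):
    # Most placeholders start neutral (0); a few start active (1).
    return [1 if i < 2 else 0 for i in range(n_slots)]

def evaluate(formula, inputs):
    # The candidate prediction is a weighted sum of the input data.
    return sum(c * x for c, x in zip(formula, inputs))

def mutate(formula, rng):
    # Perturb one placeholder slightly, leaving the rest alone.
    child = list(formula)
    slot = rng.randrange(len(child))
    child[slot] += rng.choice([-0.5, 0.5])
    return child
```

A real system would also add and remove terms, not just reweight them, but the evolving-placeholder shape is the same.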

Then I created a program (No. 2) that executes the formula created by program No. 1, using all the available input data and the selected parameters or constants, and generates specific stock predictions. This program uses a Monte Carlo kind of iteration in which all the parameters are varied over a range in various combinations and then the calculations are repeated. It also can place any given set of available data into various or multiple positions in the formula. This can take hundreds of thousands (up to millions) of repetitions of executing the formulas to examine all the possible combinations of all of the possible variations of all the possible variables in all the possible locations in the formula.

Then I created a program (No. 3) that evaluates the results against known historical data. If the calculations of program No. 2 are not accurate, then this third program notifies the first program, which changes its inputs and/or its formula, and then the process repeats. This third program can keep track of trends that might indicate that the calculations are getting more accurate and makes appropriate edits in the previous programs. This allows the process to begin to focus toward any algorithm that begins to show promise of leading to an accurate prediction capability.
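The whole generate–execute–evaluate feedback loop can be sketched as a toy hill climber. Everything here is a stand-in, not the actual system: the "formula" is a simple linear predictor, program No. 2 executes a candidate, program No. 3 scores it against known history, and only improvements are fed back.

```python
import random

def error(coeffs, history):
    # Program No. 3's job: compare predictions against known outcomes.
    return sum((sum(c * x for c, x in zip(coeffs, xs)) - y) ** 2
               for xs, y in history)

def evolve(history, n_coeffs, steps=2000, seed=0):
    rng = random.Random(seed)
    best = [0.0] * n_coeffs              # program No. 1: initial placeholder
    best_err = error(best, history)
    for _ in range(steps):
        child = list(best)
        child[rng.randrange(n_coeffs)] += rng.uniform(-0.1, 0.1)
        child_err = error(child, history)   # program No. 2 executes it
        if child_err < best_err:            # program No. 3 feeds back
            best, best_err = child, child_err
    return best, best_err
```

Run on history that actually follows some hidden linear rule, the loop steadily drives the error down without anyone ever writing the rule out by hand, which is the point of the three-program design.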

I then created sort of a super command override program that first replicates this entire three-program process and then manages the outputs of dozens of copies of the No. 2 and No. 3 programs, treating them as if they were one big processor. This master executive program can override the other three by injecting changes that have been learned in other sets of the three programs. This allowed me to set up multiple parallel versions of the three-program analysis and speed the overall analysis many times over.
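The "master executive" idea boils down to running many independent copies of the same search and keeping the best result any copy finds. A hypothetical miniature (the toy objective and all names are mine, not the author's):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def random_search(seed, trials=500):
    # One independent copy of the search, with its own random seed.
    rng = random.Random(seed)
    best_x, best_err = 0.0, float("inf")
    for _ in range(trials):
        x = rng.uniform(-10, 10)
        err = (x - 3.0) ** 2        # toy objective: the true answer is 3
        if err < best_err:
            best_x, best_err = x, err
    return best_x, best_err

def parallel_search(n_copies=8):
    # The "executive": run all copies and keep the overall winner.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(random_search, range(n_copies)))
    return min(results, key=lambda r: r[1])
```

The real system reportedly also cross-injected what one copy learned into the others; this sketch only shows the simpler keep-the-best coordination.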

As you might imagine, this is a very computer-intensive program. The initial three programs were relatively small, but as the system developed, they expanded into first hundreds and then thousands of parallel copies. All of these copies read from data sets placed in a bank of DBMSs that held hundreds of gigabytes of historical data. As the size of the calculations and data grew, I began to divide the data and processing among multiple computers.

I began with input financial performance data that was known during the period from 1980 through 2010. These 30 years of data include the full details of millions of data points about tens of thousands of stocks, as well as huge databases of socio-economic data about the general economy, politics, international news, and research papers and surveys of the psychology of consumers, the general population and world leaders. I was surprised to find that a lot of this data had been accumulated for use in dozens of previous studies. In fact, most of the input data I used came from previous research studies, and I was able to use it in its original form.

Program No. 1 used data that was readily available from various sources in these historical research records. Program No. 3 uses slightly more recent historical stock performance data. In this way, I can look at possible predictive calculations and then check them against real-world performance. For instance, I input historical 1980 data and see if it predicts what actually happened in 1981. Then I advance the input and the predictions by a year. Since I have all this data, I can see if the 1980-based calculations accurately predict what happened in 1981. By repeating this for the entire 30 years of available data, I can try out millions of variations of the analysis algorithms. Once I find something that works on this historical data, I can advance it forward to input current data to predict future stock performance. If that works, then I can try using it to guide actual investments.
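That year-by-year check is what quantitative traders call walk-forward validation, and its loop shape is simple to sketch. The predictor below is a trivial stand-in (naive momentum: guess that next year moves the same direction as this year), purely to show the advance-and-score loop:

```python
def walk_forward(series):
    """series: list of (year, value) pairs in chronological order."""
    hits, total = 0, 0
    for i in range(1, len(series) - 1):
        # "Train" on everything through year i, predict year i+1.
        _, last_value = series[i]
        predicted_up = last_value > series[i - 1][1]   # naive momentum
        actual_up = series[i + 1][1] > last_value
        hits += (predicted_up == actual_up)
        total += 1
    return hits / total if total else 0.0
```

The crucial property is that each prediction only ever uses data from before the year being predicted, so the score is an honest estimate of how the method would have done in real time.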

This has actually been done before. Back in 1991, a professor of logic and math at MIT created a neural net to do just what I have described above. It was partially successful, but the software, the input data and the computer hardware back then were far less capable than what I used. In fact, I found that even my very powerful home computer systems were much too slow to process the massive volumes of data needed. To get past this problem, I created a distributed-processing version of my programs that allowed me to split up the calculations among a large number of computers. I then wrote a sort of computer virus that installed these various computational fragments on dozens of college and university computers around the country. Such programs are not uncommon on campus computers, and I was only using 2 or 3% of the total system assets, but collectively, it was like using 500 high-end PCs or about three-quarters of one supercomputer.

Even with all that processing power, it was more than 18 months and more than 9,700 hours of processing time on 67 different computers before I began to see a steady improvement in the predictive powers of the programs that were evolving.  By then, the formula and data inputs had evolved into a very complex algorithm that I would never have imagined but it was closing in on a more and more accurate version.  By early 2011, I was getting up to 85% accurate predictions of both short term and long term fluctuations in the S&P and Fortune 500 index as well as several other mutual fund indexes.

Short-term predictions were upwards of 95% accurate, but that was out only 24 to 96 hours. The long-term accuracy dropped off from 91% for 1 week out to just under 60% for 1 year out...but it was slowly getting better and better.

By June of last year, I decided to put some money into the plan. I invested $5,000 in a day-trader account and then allowed my software to direct my trades. I limited the trades to one every 72 hours, and the commissions ate up a lot of the profits from such a small investment, but over a period of 6 months, I had pushed that $5,000 to just over $29,000. This partially validated the predictive quality of the formulas, but it is just 2.5% of what it should be if my formulas were exactly accurate. I have since done mock investments with much higher sums and longer investment intervals and had some very good success. I have to be careful, because if I show too much profit, I’ll attract a lot of attention and get investigated or hounded by news people – neither of which I want.

The entire system was steadily improving in its accuracy, but more and more of my distributed programs on the college systems were being caught and erased. These were simply duplicate parallel systems, but it began to slow the overall advance of the processing. I was at a point where I was making relatively minor refinements to a formula that had evolved from all of this analysis. Actually, it was not a single formula. To my surprise, what evolved was sort of a process of sequential interactive formulas that used a feedback loop of calculated data that was then used to analyze the next step in the process.

I tried once to reverse-engineer the whole algorithm, but it got very complex and there were steps that were totally baffling. I was able to figure out that it looked at the stock's fundamentals, then it looked at the state of the economy, which was applied to the stock performance. All that seems quite logical, but then it processed dozens of “if-then” statements that related to micro, macro and global economics in a sort of logical scoring process that was then used to modify parameters of the stock performance. This looping and scoring repeated several times and seemed to be the area that was being refined in the final stages of my analysis.

By June of 2012, I was satisfied that I had accomplished my goal. I had a processing capability that was proving to be accurate in the 89 to 95% range for predictions out two to six weeks, but it was still learning and evolving when I took it offline. I had used the system enough to cover all the costs of the hardware and software I invested in this project, plus a little extra for a much needed vacation. I never did this for the money, but it is nice to know that it works and that if I ever need a source of funding for a project, I can get it.

B-17 Miracle

The B-17 Miracle and PVT Sam Sarpolus

A mid-air collision on February 1, 1943 between a B-17 and a German fighter over the Tunis dock area became the subject of one of the most famous photographs of World War II. An enemy fighter attacking a 97th Bomb Group formation went out of control, probably with a wounded or dead pilot. The Me109 crashed into the lead aircraft of the flight, ripped a wing off the Fortress, and caused it to crash. The enemy fighter then continued its crashing descent into the rear of the fuselage of a Fortress named All American, piloted by Lt. Kendrick R. Bragg, of the 414th Bomb Squadron. When it struck, the fighter broke apart, but left some pieces in the B-17. The left horizontal stabilizer of the Fortress and the left elevator were completely torn away. The vertical fin and the rudder had been damaged, the fuselage had been cut almost completely through – connected only at two small parts of the frame – most of the control cables were severed, and the radios, electrical and oxygen systems were damaged. The two right-hand engines were out and one on the left had a serious oil pump leak. There was also a hole in the top that was over 16 feet long and 4 feet wide at its widest, and the split in the fuselage went all the way to the top gunner’s turret. Although the tail actually bounced and swayed in the wind and twisted when the plane turned, one single elevator cable still worked, and the aircraft still flew – miraculously! The turn back toward England had to be very slow to keep the tail from twisting off. They actually covered almost 70 miles to make the turn home.

The tail gunner was trapped because there was no floor connecting the tail to the rest of the plane. The waist and tail gunners used straps and their parachute harnesses in an attempt to keep the tail from ripping off and the two sides of the fuselage from splitting apart more. British fighters intercepted the All American over the Channel and took one of the pictures that later became famous – you can easily find it on the internet. The fighter pilots also radioed the base, describing the empennage (tail section) as “waving like a fish tail”, saying that the plane would not make it and to send out boats to rescue the crew when they bailed out.

Two and a half hours after being hit, the aircraft made an emergency landing, and when the ambulance pulled alongside, it was waved off, for not a single member of the crew had been injured. No one could believe that the aircraft could still fly in such a condition. The Fortress sat placidly until the crew had all safely exited through the door in the fuselage, at which time the entire rear section of the aircraft collapsed onto the ground and the landing gear folded. The rugged old bird had done its job.

This event topped off an impressive streak of good luck that the crew of the All American experienced.  In all of the 414th Bomb Squadron for the entire war, they were the only crew that survived without a single major injury for their entire 25 mission assignment.  This incident was on their 25th mission and as a result, the entire crew were given orders to other non-combat assignments following their return from this flight.


B-17 “All American” (414th Squadron, 97BG)

That is the story that has been told and repeated for the past 70 years, but there is something that has only recently come to light. Lt. Bragg was busy flying the plane, but he was in constant contact with the two waist gunners, SGT Henry Jebbson and PVT Michael “Mike” Zuk, as they kept him informed of the condition of the tail and made their attempts to strap it to the rest of the plane. Henry and Mike also tried several times to reach the tail gunner – PVT Sam Sarpolus – but there was just too much body damage to the aircraft. All of the crew have since died except Mike and Sam, and this new aspect of the story comes from Mike. Sam was the youngest member of the crew at only 19 years old – with red hair and freckles. Mike was the next youngest.

I met Mike at a Silver Eagles meeting in Pensacola in 2004. He was 81 and very frail and talked slowly because of a stroke, but there was nothing wrong with his mind. Few of the other party goers were willing to take the time to talk to Mike, but I did. I took him into another room, where we talked for more than 4 hours. He told me about the flight and his life after that. He became an enlisted pilot (a Silver Eagle) during the war and ferried aircraft over to England from the US. When I asked him if any of his crew was still alive, he said, “Only Sam, and of course he will be for a long time”. I wondered what he meant and asked. He smiled and said there was much more to the story than anyone has ever said. It wasn’t Henry and himself that held the plane together. It wasn’t Lt. Bragg’s careful flying…it was Sam.

Mike went on, “the whole time we were flying that day after the collision, Sam sat backwards in the tail gunners seat with his hands out like he was stopping traffic and his eyes closed.  He never moved from that position….except once.  One of the fighters flew too close to us and his prop wash shook the All American hard.  We heard metal cracking and one of the two beams of the frame that was holding it together snapped.  At that moment, Sam opened his eyes and looked straight at the broken beam and pointed to it with one hand while still holding the other out “stopping traffic”.  Henry and I turned to look at what Sam was pointing to just in time to see a blinding light come from the break.  When our eyes cleared, we could see that the beam had been fused back together again.  We both snapped back to looking at Sam and he had gone back to holding his hands up with his eyes closed but he had a smile on his face.

He sat like that until after we landed. They had to cut open the front of his gunner’s position and pull him out through the window, all the time with him holding his hands out. Everyone thought he was scared or frozen stiff. When he was put down on the ground, he still had his eyes closed. I finally told him that everyone was out of the plane, and he opened one eye and looked at me and said, “Really?”. I assured him everyone was safe and then he put his arms down. When he did, the old B-17 broke right in half – the tail fell off, the #3 engine burst into flames and the landing gear collapsed. Sam looked at Henry and me and smiled and said, “Don’t tell anybody – I’ll explain later”.

It was three weeks later before we met with Sam in a quiet pub and had a long talk with him. Sam said he didn’t know how he did it, but he could move stuff and make things happen just by thinking about it. He said he’d been busy during most of the flights keeping bullets from hitting any of the crew members. We were the only crew that ever flew 25 missions without having a single crewman shot up. We just stared at him, and then both Henry and I said “bullshit” at the same time. Sam said, “No, really, let me show you”. He pulled out his K-bar sheath knife, handed it to Henry and told him to stab his hand. Henry said, “No”, so Sam said, “OK, then just stab this table napkin”. Henry raised the knife and plunged it down onto the table. The table made a loud thud, but the knife stopped about one inch above the napkin. Henry pushed with both hands and then leaned his entire body onto the knife, but it would not go that last inch into the table. Sam said that it was harder to do bullets, but he’d had a lot of practice.

We spent hours talking and testing Sam over the next few days before he went back to the US and we were reassigned to a USO tour to talk up our flight in the All American. It seems that Sam had a rather well developed ability of telekinesis that allowed him to control objects with his mind. Not just move them, but manipulate them even at an atomic scale. That was how he welded the aluminum beams in the B-17 and created a sort of force field around each crewman when we were attacked. We wanted to tell other people and told Sam that he would be famous if he would let us, but he made us promise to keep it a secret. Mike said I was the first person that he had ever told. After telling me, Mike sat there very quietly, as if he regretted telling me. I waited awhile and we sipped our drinks. Mike finally spoke, “I wonder if Sam remembers me?”. I asked if he had seen Sam since the war. Mike said, “The next time we talked was about 1973 or so. We met at a Silver Eagle Reunion in San Diego. I didn’t know Sam had gotten his enlisted pilot’s license also. That was the only reunion that Sam ever attended. When I saw him, I recognized him immediately, and then realized that the reason I recognized him so quickly was because he looked pretty much like he did 30 years earlier. He had grown a mustache and dyed his hair, but he did not look like he had aged much at all. He and I went off into a corner of the bar and talked for hours. It seems he liked helping people, and he got a job as a paramedic on a rescue truck. He was very well qualified and confided in me that he often used his powers to help him in an emergency. Because he seemed to not age very fast, he could only stay for a few years at each job, but his skills were in high demand and he could get a job anywhere he went. He also had had jobs as a policeman and a highway patrol officer”. Mike would stop and stare at the floor every so often as he got lost in memories and thoughts.

One of these moments that Mike stopped to stare turned into several minutes.  I said his name several times but he did not respond.  Finally, I touched his arm and asked if he was OK.  Mike got a grimace on his face and then grabbed his chest and rolled out of his chair onto the floor.  I recognized the signs of a heart attack and I called for help.  In an instant, a large crowd of people had gathered around him and calls for a doctor and 911 were shouted.  Someone put a large coat over Mike to keep him warm and another put a rolled up coat under his head for a pillow.

As I was sitting in my chair, holding his hand, someone with a hat on bent down from the crowd and leaned over Mike. He put one hand on Mike’s forehead and the other under the coat on his chest. I thought it might be a doctor trying to check his vital signs, but the person just froze in that position. I watched intently and then noticed a slight glow of light coming from under the coat. No one else seemed to notice, but I’m sure I did not imagine it. After about 15 seconds, Mike opened his eyes and looked up. He smiled and said, “Hi Sam”. The man in the hat then got up and melted back into the crowd. I asked Mike if he was OK, and he said he felt fine and that he wanted to get up off the floor.

As I helped him up, I saw the man with the hat go out the door of the room we were in. I sat Mike down and rushed out the door, but there was no one anywhere in sight. I rushed back to Mike, who was shooing everyone away and sipping his drink. I sat down with him and said, “Was that Sam?”. Mike said, “Oh yeah, he seems to come whenever I need him – that’s the third time he has done that”. “Done what?” I asked. Mike winked at me and said, “You know, you saw it”. Then he said, “I’m getting tired and I need to go. It has been good talking with you”. I asked if we could talk again, but Mike told me he was traveling back home early the next morning. I asked if he knew where I could find Sam. Mike turned to me, smiled and said, “I have no idea where he lives, but every time I have needed him, he shows up”.

I spent two years searching for Sam with no luck. I carried a picture of him from his days of flying the B-17, but had it cropped and colored so that it did not look like an old picture. I showed it to anyone I thought might have seen him. He did not have a social security number, and there were no public records of his name anywhere in the US. During my travels, I passed through Las Vegas and, just out of habit, I showed Sam’s picture around. The second night I was there, the desk clerk at my hotel said he recognized Sam. He came about twice a year for only two or three days, played roulette and Keno for a few hours in each of several hotels, and then he would leave town. He seemed to have remarkably good luck, and the desk clerk said that he was always generous with the tips and always seemed to be smiling. I smiled and agreed.

I figured I had been looking for Sam the wrong way.  Instead of trying to find someone that had seen him by showing his picture, I took another tack.  I started by looking in newspapers and online for unusual happenings that seemed to be unexplained or that were very much out of the ordinary.  I started with the first few days after he was last seen in Las Vegas and looked in a 500 mile circle around Vegas.  I was surprised at how many such events were reported on the internet and in YouTube videos but by reading each one, I narrowed it down.

One was for a small town in central Utah called Eureka – just south of Salt Lake City. It was reported that someone had tipped a waitress at the local truck stop with $500. It turned out that she needed about that much to be able to pay for a home medical device that her son needed for his severe asthma. I drove to this small town and found the waitress. Her name was Sally. She was reluctant to talk about it because of all the news attention she had gotten, but when I showed her Sam’s picture, she clearly recognized his face; she hesitated for a minute and then said that was not him. I assured her that I was not a reporter and that I did not want to harm him. I showed her my previous stories about the B-17 and his days in the Silver Eagles. She sat down with me in a quiet corner of the diner and we talked. She said he was quick to pick up on her sadness about her son, and he listened intently as she described the problem. She had saved for an aspirator for her son Jimmy, but times were tough, not many people were leaving tips, and business at the truck stop was slow outside of tourist season. When Sam left, he smiled and held her hand and said, “thank you, and say hello to Jimmy for me”. Sally stopped for a moment and then said, “to this day, I don’t know what he was thanking me for – I only gave him coffee and he didn’t even finish that.”

I used the date Sam was seen in Eureka and began the search again. I found another story in Ketchum, Idaho, where someone had paid to have a house rebuilt for a single mother with four kids. The husband had been killed in Iraq in 2009, and she had struggled to make ends meet, but when a fire burned down their house, she was faced with having to send her kids to foster homes. Someone paid a local contractor to build an entire house on their old lot and then put $10,000 into a bank account in her name. She never saw the donor, but at Perry’s restaurant on First Ave., a waitress who received a $100 tip confirmed that it was Sam.

I repeated this searching pattern and tracked down more than a dozen places where Sam had stopped by some remote town or obscure business and helped someone out. Most often he paid for something or gave money to someone. About half the time, no one knew it was him, but what he did seemed to follow a pattern. He would show up just as some situation was about as serious as it could get, and he seemed to know exactly what was needed and exactly who needed it. He never seemed to stay overnight in the towns where he helped someone, and he didn’t seem to do much investigating or asking around. He often spent less than 10 minutes at the place where he did his good deed, and then he was gone, completely out of town. I didn’t meet one person who knew his name.

I followed his trail up through Idaho and western Montana, then east through North Dakota, and then south all the way to southern Texas. He did his good deeds every 300 to 400 miles, about every other day. Sam stopped along the way at casinos on Indian reservations, and he also bought lottery tickets the day before the drawings. He often won. He always paid the IRS taxes immediately, but I found out that he was using different social security numbers so that no one really knew who he was.

In Kansas, I found a state trooper who told me about a 25-car pileup that happened in a major storm on I-235 just outside of McPherson. Lots of people were hurt, but when the paramedics came, they found that no one had any broken bones or life-threatening injuries. Sixteen of the accident victims said that someone had come to their car shortly after the crash and “fixed” them. They described a young-looking man with red hair and freckles who calmed them down and then rubbed their legs or arms where it hurt, and it stopped hurting. The medics said that the blood found in some of the cars indicated that there had been some very serious injuries, but when they examined the people inside, they found no cuts or bleeding on any of them. No one saw Sam come or leave, and most of them just called him an angel.

I don’t know who or what Sam is and maybe he doesn’t either.  He roams around doing good deeds, saving lives and bringing a little peace and happiness to everyone he meets.  He obviously wanted to remain unknown and I finally decided that I needed to honor that so I went home.



2011 – The Year the Government Got Healthy

The discoveries and creations accomplished in 2011 will have far-reaching effects for decades to come. These advances in biology, nano-technology, computer science, materials science and programming are truly 21st Century science. The latest issue of Discovery magazine details the top stories as they have been released to the public but, as you have learned from this blog, there is much that is not released to the public. Every research study or major development lab in the US that is performing anything of interest to any part of the government is watched and monitored very closely. Obviously, every government-funded research project has government “oversight” that keeps tabs on its work, but this monitoring applies to every other civilian R&D lab and facility as well. I have described the effects of this monitoring in several of my reports, but I have not specifically spelled it out. Now I will.

The government has a network of labs and highly technically trained spies that monitor all these civilian R&D projects as they are developed. These guys are a non-publicized branch of the Federal Laboratory Consortium (FLC), which provides the cover for this operation behind the guise of supporting technology transfer from the government to the civilian market – when, in fact, its real goal is just the opposite.

The Labs in FLC are a mix of classified portions of existing federal labs – such as NRL, Ft. Detrick, Sandia, Argonne, Brookhaven, Oak Ridge, PNNL, Los Alamos, SEDI and about a dozen others – and a lot of government run and controlled civilian labs such as Lawrence Livermore, NIH and dozens of classified college and university and corporate labs that give the appearance of being civilian but are actually almost all government manned and controlled.

The spy network within the FLC is perhaps the least known aspect. Not even Congress knows much about it. It is based in Washington but has offices and data centers all over. The base operations come under an organization within the Dept. of Homeland Security (DHS) called the Homeland Security Studies and Analysis Institute (HSSAI). The public operations of HSSAI are run by Analytic Services, Inc., but the technology spy activities are run by the Office of Intelligence and Analysis (OIA) Division of DHS.

Within the OIA, the FLC technology spies come under the Cyber, Infrastructure and Science Director (CIS) and are referred to as the National Technology Guard (NTG); it is run like a quasi-military operation. In fact, most of these NTG spies were trained by the Department of Defense (DoD) and many are simply on loan from various DoD agencies.

This is a strange and convoluted chain of command but it works fairly efficiently, mostly because the lines of information flow, funding and management are very narrowly defined by the extremely classified nature of the work. All these hidden organizations, fake fronts and secret labs allow the funding for these operations to be blended into numerous other budget line items and disguised behind very official, humanitarian and publicly beneficial programs. This is necessary because some of the lab work that they get involved in can become quite expensive – measured in the billions of dollars.

The way this network works is fairly simple. Through the FLC and other public funding and information resources, leading edge projects are identified within HSSAI. They then make the decision to “oversight”, “grab” or “mimic” the details of the R&D project. If they implement “oversight”, that means that OIA and CIS keep records of what the R&D projects are doing and how they are progressing. If they “grab” it, that means that the NTG is called upon to obtain copies of everything created, designed and/or discovered during the project. This is most often done with cyber technology, by hacking the computers of everyone involved in the project. It is the mimic that gets the most attention in the OIA.

If a project is tagged as a mimic or “M” project, the HSSAI mates a government lab within the FLC to be the mimic of the R&D project being watched. The NTG usually embeds spies directly in the civilian R&D project as workers and the OIA/CIS dedicates a team of hackers to grab everything and pass it directly to the mated FLC lab. The NTG spies will also grab samples, photos, duplicates and models of everything that is being accomplished.

What is kind of amazing is that this is all done in real time – that is, there is almost no delay between what is being done in the civilian R&D lab and what is being done to copy that work in the government lab. In fact, the payoff comes when the government lab can see where a project is going and can leap ahead of the civilian R&D lab in the next phase of the project. This is often possible because of the constraints of funding, regulations, laws and policy that the civilian labs must follow but the government labs can ignore. This is especially true in the biological sciences, in which the civilian lab must follow mandated protocols that can sometimes delay major breakthroughs by years. For instance, the civilian lab has to perform mouse experiments and then monkey experiments and then petition for human testing. That process can take years. If a treatment looks promising, the government lab can skip to human testing immediately – and has done so many times.

Let me give you an example that is in recent news. The newest advances in science are being made in the convergence areas between two sciences. Mixing bioscience with any other science is called bioconvergence and it is the most active area of new technologies. This example is the bioconvergence of genetics and computers. The original project was begun by a collaboration between a European technology lab based in Germany and an American lab based in Boston. The gist of the research is that they created a computer program that uses a series of well-known cell-level diagnostic tests to determine if a cell is a normal cell or a cancer cell. The tests combine a type of genetic material called microRNA with a chemical marker that can react to six specific microRNAs. The markers can then be read by a computer sensor that can precisely identify the type of cell it is. This is accomplished by looking at the 1,000+ different microRNA sequences in the cell. The computer knows what combination of too much or too little of the six microRNAs identifies each distinct type of cell.
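To give you a feel for what that program is doing, here is a toy sketch of the threshold logic in Python. The marker names, cutoffs and the single profile are all made up by me for illustration – the real system checks over 1,000 sequences against six-marker signatures.

```python
# Toy sketch of the classifier logic described above: each cell type is
# defined by which of six microRNA markers are over- or under-expressed.
# Marker names, thresholds and the profile are invented for illustration.

HIGH, LOW = "high", "low"

# Hypothetical profile: marker -> required expression state
CANCER_PROFILES = {
    "HeLa-like": {"miR-21": HIGH, "miR-31": HIGH, "miR-141": LOW,
                  "miR-142": LOW, "miR-146a": HIGH, "miR-200c": LOW},
}

def expression_state(level, low_cutoff=0.3, high_cutoff=0.7):
    """Map a normalized microRNA level to an expression state."""
    if level >= high_cutoff:
        return HIGH
    if level <= low_cutoff:
        return LOW
    return "normal"

def classify(levels):
    """Return the first profile whose six-marker pattern matches, else None."""
    for cell_type, profile in CANCER_PROFILES.items():
        if all(expression_state(levels.get(m, 0.5)) == state
               for m, state in profile.items()):
            return cell_type
    return None

sample = {"miR-21": 0.9, "miR-31": 0.8, "miR-141": 0.1,
          "miR-142": 0.2, "miR-146a": 0.95, "miR-200c": 0.05}
print(classify(sample))  # prints: HeLa-like
```

The real program presumably searches the combinations the other way around – deriving a new signature from the data – but the match-against-a-profile step is the same idea.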

Once that is accomplished, they can define, identify and isolate the specific individual cancer cells. If it is a cancer cell, then the same program creates a gene that is custom designed to turn off the reproductive ability of that specific identified cancer cell. This synthetic gene, for a protein called hBxi, promotes cell death by stopping the cell's ability to split, divide and/or reproduce. There are several chemical safeguards built into the process that prevent healthy cells from being targeted. The whole project is being called the “Geniom RT Analyzer for microRNA quantification analysis for biomarkers of disease and treatment” but the lab guys just call it “biologic” for short.

Nearly all of the separate aspects of this project are well known but in the past, it has taken months or years to cross-index the various aspects of the 1000 or more microRNA sequences and then months or years more to devise a response. Using this biologic computer program mated to a biochemical logic “circuit”, the process takes a few hours. The biocomputer analyzes millions of combinations and then defines exactly how to tag and destroy the bad cells.

In keeping with standard protocols, human testing will begin around 2015 and it could take until 2025 before this is a commercially available medical treatment for treating cancer. FLC identified the value of this treatment very early on and created a mimic lab at Ft. Detrick, Maryland at the National Interagency Confederation of Biological Research (NICBR). The NICBR has a long history of managing non-weapons-related biological research. They provide an easy funding path and a nice cover for some of the most advanced and most classified medical research performed by the US.

The NICBR mimic lab was keeping pace with the progress being made by the biologic project until they could project ahead and see the benefits in other areas. NICBR, of course, had the computer analysis program as soon as it was completed and had duplicated the biochip and geniom analyzer hardware just as fast. Once it had proved that the process worked, they began to make much greater progress than the biologic labs because they had more money, fewer limitations and access to immediate human test subjects. As successes began to pile up, they added more staff to help make modifications to the biologic system by creating new biochips and modifying the geniom analyzer and analysis software. Within a few months in mid-2011, they had geared up to a staff of over 100 using four different labs at Ft. Detrick, churning out new capabilities on a weekly and then on a daily basis.

By the time the biologic lab was making their preliminary reports public in SCIENCE magazine, in September 2011, the NICBR lab was just finishing its first human tests which were entirely successful. By the middle of October 2011, they had all but eliminated false positives and began optimizing the circuits to identify new cell types. Using a flood of new and redefined biochips and modifications to the software, they had expanded the microRNA analysis to other complex cell states and by the middle of November, had successful tests on 16 different types of cancer and were adding others at a rate of 3 to 5 per week but parallel efforts were also working on other applications of the same biologic process.

Since the core analysis is actually a computer program, and the microRNA sequences defined by the multiplex primer extension assay (MPEA) for a vast number of different types of cells are well known, this process can be expanded to cover other applications just by altering the computer program, biochip, MPEA and the synthetic protein gene that is applied. They also quickly discovered that the computer processing power was there to perform many of these tests simultaneously by using multiple biochips, MPEAs and CCD cameras for reading the biochips. This allowed analysis of dozens of cancers and other cell types, after which the computer could define and concoct the appropriate response.

The NICBR report at the end of November described their latest extension of the applications of this technology to regenerative medicine to allow the almost immediate repair of bad hearts, destroyed lung tissues and other failed organs. Essentially any cell in the body that can be uniquely defined by its microRNA can be targeted for elimination, replacement or enhancement. The NICBR lab is expanding and adding new applications almost daily and the limits to what is possible won’t be reached for years.

At the end of November, the first report from NICBR had made its way up through OIA/CIS and HSSAI to a very select group of DoD intelligence officers – some military, some civil service and some civilians (by invitation only). This is a group that does not show up on any organization chart or budget line item. They are so deep cover that even their classified security compartment name is classified. Unofficially, they call themselves the Red Team, and the reports they create are called Red Dot Reports or RDRs (they use a red dot to identify projects that they believe have immediate applications and high priority). They advise the JCS and the president on where and how to direct black-ops R&D funds and how to develop and use the resulting technology. They are not the final word but they do act as a buffer between what is really going on in the labs and those that might benefit or take advantage of the technology.

This group imagined the application of the biologic technology in the role of prolonging the life of key individuals in the military and government. Anyone with a life-threatening disease like cancer can now be cured. Anyone with a failing or damaged organ can use this technology to deliver synthetic genes or designer stem cells directly to the damaged cells for near-immediate repair or replacement. Almost immediately, the members began to name off the most senior military officers and political leaders that might benefit from this biologic technology.

Now comes the part you will never hear made public. The Red Team is highly trained and very capable of keeping secrets, but they are also human and they know that technology like this can mean life or death to some people – and for that, those people might do anything. It is still not known who did it first, but someone on the Red Team contacted a senior US Senator that he knew had recently been diagnosed with prostate cancer. (In fact, there are 11 members of Congress and two Cabinet members that currently have cancer of one form or another. This is not something that they want made public if they want to be re-elected, so it is very confidential.) Traditional treatment involves surgery, radiation and chemotherapy, and even then you have only reduced your chances of a recurrence. With the biologic technology, you skip past all that unpleasant treatment and go immediately to being cured without any chance of recurrence. For that, anyone would be most grateful, and it is obvious that whoever it was on the Red Team that leaked this news did so to gain favor with someone that could benefit him a great deal.

Once it was known that the news had leaked out, almost every one of the Red Team members made contact with someone that they thought would benefit from the biologic technology. By the second week in December, dozens of people were clamoring for the treatment and promising almost anything to get it. Word of this technology and its benefits is still spreading to leaders and business tycoons around the world, and the Red Team is trying desperately to manage the flood of bribes and requests for treatment.

As you read this, the NICBR is treating the sixth Congressman for various cancers and there is a line of more than 30 behind these six. The lab has enlisted the aid of two other departments to set up and begin treatments within Ft. Detrick, and plans are in the works to create treatment centers in five other locations – all of them on very secure military installations – plus one will be set up at the Air Force base on Guam to treat foreign nationals. By the end of January, these facilities will be operational and it is expected that there will be a list of over 500 people waiting for treatments for cancer or damaged or failed organs. I have heard that the price charged to corporate tycoons is $2 million, but the treatment is also being traded with political leaders in other countries for various import/export concessions or for political agreements.

This will all be kept very, very secret from the public because there are millions of people that would want treatments, and that would create incredible chaos. The biologic equipment is only about $950,000 for a complete system, not counting the payments for patents to the original researchers. But cost is not what is holding this back from going public. If it got out that the government had this technology, they would have to admit to having stolen it from the Boston group, and that would imply that they are doing and have done this before – which is completely true. They do not want to do that, so they are going to let the original researchers work their way through the system of monkey testing for 3 years, then human trials for 3 or 4 years, and then through the FDA approval process, which will take another 2 to 3 years, and they will get to market about when they estimated – about 2025.

In the meantime, if you hear about some rich and famous guy or some senior Congressman making a miraculous recovery from a serious illness or a failing body part, you can bet it was because they were treated by a biologic device that is unavailable to the general public for the next 15 years or so.


<< You are probably wondering how I know all this detail about some of the best kept secrets in the US government. As I have mentioned in numerous other reports, I worked in government R&D for years and almost all of it was deep cover classified. My last few years of work were in the field of computer modeling and programming of something called sensor fusion. The essence of this type of computer programming is the analysis of massive amounts of inputs or calculations leading to some kind of quantified decision support. This is actually a pretty complex area of math that most scientists have a hard time translating to their real-world metrics and results.

When the CIS staff at HSSAI first got tagged to support the mimic of the biologic lab work, they needed some help in the programming of the biologic analysis using the photo CCD input data and the massive permutations and combinations of microRNA characteristics. I was asked to provide some consulting on how to do that. The task was actually pretty simple because those guys at the Boston biologic lab were pretty smart and had already worked out the proper algorithms. I just reverse-engineered their logic back to the math and then advanced it forward to the modified algorithms needed for other cell detections.

In the process of helping them I was also asked to advise and explain the processing to other parts of the government offices involved – the OIA, the NICBR, FLC, HSSAI and even to the Red Team. I was privy to the whole story. I am writing it here for you to read because I think it is a great disservice to the general public to not let them have access to the very latest medical technology – especially when it can save lives. If I get in legal trouble for this, then it will really go public so I am sure that the government is hoping that I am going to reach a very few people with my little blog and that I will not create any real problems. That is their hope and they are probably right. >>

Bombs Away!

The Air Force is working overtime to redefine its role in warfare in light of UAVs, drones and autonomous weapons. What is at stake is the very nature of what the AF does. For the past 40 years, it has based its power and control on a three-legged table – bombers, fighters and missiles. Its funding and status in DoD is based on keeping all three alive and actively funded by Congress.

The dying Cold War and the end of the threat from Russia have largely diminished the role of ICBMs. The AF is trying to keep it alive by defining new roles for those missiles but it will almost certainly lose the battle for all but a few of the many silos that are still left.

The role as fighter is actively being redefined right now as UAV’s take over attack and recon roles. There is still the queasy feeling that we do not want to go totally robotic and there is a general emotion that we still need a butt in the seat for some fighter missions such as intercept and interdiction of targets of opportunity but even those are being reviewed for automation. There is, however, no denying that the AF will maintain this responsibility – even if non-pilots perform it.

The role of bomber is the one that is really in doubt. If the Army uses the Warthog for close combat support and the Navy uses A-6s and F-18s for attack missions, then the role of strategic bomber is all that is left for the AF, and that is a role that is most easily automated with standoff weapons and autonomous launch-and-forget missiles. The high altitude strategic bomber that blankets a target area is rapidly becoming a thing of the past because of the nature of our enemy and because of the use of surgical strikes with smart bombs. To be sure, there are targets that need blanket attacks and carpet-bombing, but a dropped bomb is notorious for not hitting its target, and the use of hundreds of smart weapons would be too costly as compared to alternatives.

The AF is groping for solutions. One that is currently getting a lot of funding is to lower the cost of smart bombs so that they can, indeed, be used in large numbers – justifying the need for a manned bomber aircraft – and still be cost-effective. To that end, a number of alternatives are being tried. Here is one that I was involved in as a computer modeler for CEP (circular error probable) and percent damage modeling (PDM). CEP and PDM are the two primary factors used to justify the funding of a proposed weapon system, and they are the first values measured in prototype testing.

CEP describes the probable accuracy of a weapon – the radius around the aim point within which the weapon will land half the time. CEPs for cruise missiles are tens of feet. CEPs for dumb bombs are hundreds or even thousands of feet and often larger than the kill radius of the bomb, making it effectively useless against the target while maximizing collateral damage. PDM is the amount of damage done to specific types of targets given the weapon's power and factoring in the CEP. A PDM for a cruise missile may be between 70% and 90% depending on the target type and range (PDM decreases for cruise missiles as range to target increases). The PDM for a dumb (unguided) bomb is usually under 50%, making the use of many bombs necessary to assure target destruction. In WWII, the PDM of our bombers was less than 10% and in Vietnam it was still under 30%. The AF's problem is to improve those odds. Here is how they did it.
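To see why the AF cares so much about CEP, here is a quick back-of-the-envelope model in Python. It uses the textbook circular-normal CEP formula; the real PDM models are far more elaborate, so treat the numbers as illustration only.

```python
import math

def hit_probability(kill_radius, cep):
    """Probability a single weapon lands within its kill radius, assuming a
    circular normal impact distribution (the standard CEP model: the chance
    of landing within radius R is 1 - 2^(-(R/CEP)^2))."""
    return 1.0 - 2.0 ** (-(kill_radius / cep) ** 2)

def bombs_needed(kill_radius, cep, confidence=0.9):
    """Independent drops needed to destroy the target with the given
    confidence: n = log(1 - confidence) / log(1 - p)."""
    p = hit_probability(kill_radius, cep)
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

# A guided weapon: 100 ft kill radius, ~30 ft CEP -> near-certain kill.
print(hit_probability(100, 30))   # ~0.9995
# A dumb bomb with the same warhead but a 500 ft CEP -> poor single-shot odds.
print(hit_probability(100, 500))  # ~0.027
print(bombs_needed(100, 500))     # dozens of drops for 90% confidence
```

That last line is the whole argument in one number: when the CEP dwarfs the kill radius, you need piles of dumb bombs to do what one guided bomb does.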

Whether we call them smart bombs or precision guided munitions (PGM) or guided bomb units (GBU) or some other name, they are bombs that steer to a target by some means. The means ranges from GPS to laser to infrared to RF to several other methods of sending guidance signals. JDAM is one of the latest smart bombs but there are dozens of others. JDAMs run about $35,000 on top of the cost of making the basic bomb. In other words, the devices (fins, detection, guidance) that make it a smart bomb add about $35,000 to the cost of a dumb bomb. The AF's goal was to reduce this to under $5,000, particularly in a multiple drop scenario.

They accomplished this in a program they code named TRACTOR. It starts with a standard JDAM or other PGM that uses the kind of guidance needed for a specific job. The PGM is then modified with a half-dome shaped device that is attached to the center of the tail of the JDAM. This device looks like a short rod about 1 inch in diameter with a half dome at one end and a bracket for attaching it to the JDAM at the other end. It can be attached with glue, clamps or screws. It extends about 6 inches aft of the fins and is very aerodynamic in shape.

Inside the dome is a battery and a small processor along with a matrix of tiny laser-emitting diodes (LsED) that cover the entire inside of the dome. It can be plugged into the JDAM's system or run independently, and can be modified with add-on modules that give it additional capabilities. This is called the LDU – laser direction unit.

The other side of this device is a similar looking half dome that is attached to the nose of a dumb bomb using glue or magnets. There is a plug-in data wire that then connects to a second module that is attached to the rear of the dumb bomb. This second unit is a series of shutters and valves that can be controlled by the unit on the nose. This is called the FDU – following direction unit.

Here is how it works. The LDU is programmed with how the pattern of bombs should hit the ground. It can create a horizontal line of bombs perpendicular or parallel to the flight of the JDAM, or they can be made to form a tight circle or square pattern. By using the JDAM as the base reference unit and keying off its position, all the rest of the bombs can be guided by their FDUs to assume a flight pattern that is best suited for the target. The FDUs essentially assume a flight formation during their descent based on instructions received from the LDU. This flight formation is preprogrammed into the LDU based on the most effective pattern needed to destroy the target or targets.

A long line of evenly spaced bombs might be used to take out a supply convoy while a grid pattern might be used to take out a large force of walking enemy that are dispersed on the ground by several yards each. It is even possible to have all the bombs nail the exact same target by having them all form a line behind the LDU JDAM bomb in order to penetrate into an underground bunker.

It is also possible to create a pattern in which the bombs take out separate but closely spaced targets, such as putting a bomb onto each of 9 houses in a tightly packed neighborhood that might have dozens of houses. Controlling each bomb's distance from the reference LDU, and making sure that the LDU bomb itself is accurate, accurately places all the other bombs on their targets. This effectively creates multiple smart bombs in an attack in which only one bomb is actually a PGM.

The method of accomplishing this pattern alignment is through the use of the lasers in the LDU sending out coded signals to each bomb to assume a specific place in space relative to the LDU as the bombs fall toward the target. The coded signals cause the FDU bombs to align along specific laser tracks being sent out by the LDU and at specific distances from the LDU. The end result is that they can achieve any pattern they want without regard to how the bombs are dropped – as long as there is enough altitude to accomplish the alignment. It is even possible for an LDU dropped from one bomber to control the FDUs on bombs dropped by a second bomber.
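To make the formation idea concrete, here is a toy Python sketch of how slot offsets for a few of the patterns described might be computed. The function names and spacing numbers are mine, not TRACTOR's – each FDU would be commanded to hold one of these offsets relative to the lead LDU bomb as the stick falls.

```python
# Toy sketch of the pattern logic: the lead (LDU) bomb is the origin, and
# each following (FDU) bomb is assigned an offset slot in the falling
# formation. Offsets are (x, y) in feet unless noted; values are invented.

def line_pattern(n, spacing):
    """Offsets for n bombs in an evenly spaced line (the convoy case)."""
    return [(i * spacing, 0.0) for i in range(n)]

def grid_pattern(rows, cols, spacing):
    """Offsets for a rows x cols blanket, lead bomb at one corner
    (the dispersed-troops or one-bomb-per-house case)."""
    return [(c * spacing, r * spacing) for r in range(rows) for c in range(cols)]

def column_pattern(n, spacing):
    """All bombs stacked in trail behind the lead to hammer one aim point
    in sequence (the bunker-penetration case); (x, y, trail distance)."""
    return [(0.0, 0.0, i * spacing) for i in range(n)]

# A 3 x 3 blanket with 30 ft spacing, e.g. one bomb per house:
for slot, offset in enumerate(grid_pattern(3, 3, 30.0)):
    print(f"FDU slot {slot}: hold {offset} ft from the LDU")
```

The point of the design is that accuracy only has to be bought once: if the LDU bomb hits its aim point and every FDU holds its slot, the whole pattern lands where it was programmed to.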

The low cost was achieved by the use of easily added-on parts to existing bomb types and devices and by using innovative control surfaces that do not use delicate vanes and flaps. The FDU uses rather robust but cheap solenoids that move a spoon-shaped surface from being flush with the FDU module to being extended out into the slipstream of air moving over the bomb. By inserting this spoon up into the airflow, it creates drag that steers the bomb in one direction. There are eight of these solenoid-powered spoons strapped onto the FDU that can be used separately or together to steer or slow the bomb to its proper place in the desired descent flight pattern.

Since these LDU and FDU devices are all generic and are stamped out using surface-mount devices (SMDs), the cost of the LDU is under $3,000 and the FDU is under $5,000. Twenty-five dumb bombs can be converted into an attack of 25 smart bombs for a total cost of about $110,000. If all of them had to be JDAMs, the cost would have been $875,000 – a savings of more than 87%.
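You can check the arithmetic yourself. This sketch uses the list prices quoted above; note that the add-on costs are quoted as "under" figures, so actual unit costs run lower than these ceilings, which is how the program gets down to its roughly $110,000 total.

```python
# Back-of-the-envelope check of the cost claim, using the ceiling prices
# quoted in the text (actual unit costs are lower).

JDAM_KIT = 35_000   # smart-bomb add-on cost per bomb
LDU_COST = 3_000    # lead unit, attached to the one real JDAM
FDU_COST = 5_000    # following unit, attached to each dumb bomb

def all_jdam_cost(n):
    """Cost of making every bomb in the stick a JDAM."""
    return n * JDAM_KIT

def tractor_cost(n):
    """One JDAM carries the LDU; the other n - 1 bombs stay dumb + FDU."""
    return JDAM_KIT + LDU_COST + (n - 1) * FDU_COST

n = 25
print(all_jdam_cost(n))   # 875000
print(tractor_cost(n))    # 158000 at these ceiling prices
print(f"savings: {1 - tractor_cost(n) / all_jdam_cost(n):.0%}")
```

Even at the ceiling prices the savings top 80%; at actual unit costs the total falls to about $110,000 and the savings exceed 87%, matching the figures above.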

These have already been tested and are being deployed as fast as they can be made.

Update 2012:

A recent innovation to the Tractor program was initiated in March of 2012 with the advent of miniaturized LDUs and FDUs that can be easily attached to the individual bomblets in a cluster bomb. These new add-ons are small enough, and custom fitted to the bomblets, so that they can be very quickly added to the cluster bombs. In practice, a separate LDU bomb is dropped with the cluster bomb and the cluster bomb is dropped from a much higher altitude than normal. This gives the individual bomblets time to form complex patterns that enhance their effectiveness. For instance, an anti-runway cluster bomb would line up the bomblets in a staggered zig-zag pattern. If the intent is area denial to personnel and tanks, the submunitions would be directed into an evenly spaced blanket covering a wide but defined area. This allows the placement of the mines into a pattern that is much wider than would normally be achievable with a standard cluster bomb drop, which is usually limited to only slightly wider than the flight path of the dropping aircraft. Now a single drop can cover two or three square miles if the bomblets are dropped from above 15,000 feet.

A similar deployment technique is being developed for the dispersion of clandestine sensors, listening devices, remote cameras and other surveillance systems and devices.

Power from Dirt

Part of the year, I live in Vermont, where there is a lot of interest in renewable energy sources. They want to use wind or solar or wood or biofuels, but almost all the tree-huggers skip the part where all those renewable energy sources combined would not meet the demand and we would still need a coal, gas or nuclear power plant to make up the difference. I decided to try to make up something that really could give enough energy for a household, but would also work year round, be independent of weather and temperature, and use a fuel that is cheap and renewable. That is a big set of requirements and it took me several months to work out how to do it. It turns out that it can be done with dirt and some rocks and a little electronics.

As I have said many times, I worked for NRL and then DARPA while I was active duty in the Navy and then for other labs and in my own R&D company when I got out of the military. While I was at DARPA, they worked on an idea of using piezoelectric devices in the shoes of soldiers to provide electricity to low powered electronics. It turned out to be impractical but it showed me the power of piezoelectric generators.

I also worked at NRL when they were looking into thermoelectric generators to be used on subs and aircraft. Both subs and planes travel where the outside is really cold and the insides are really hot, and that temperature differential can be used to create electricity. I had a small involvement in both these projects and learned a lot about harvesting energy from micro-watt power sources. I also learned why they did not work well or could not be used for most situations back then, but that was 22 years ago and a lot has changed since then. I found that I could update some of these old projects and get some usable power out of them.

I’ll tell you about the general setup and then describe the details. The basic energy source starts out with geothermal. I use convective static fluid dynamics to move heat from the earth up to the cold (winter) above ground level – giving me one warm surface (about 50 degrees year round) and a cold surface – whatever the ambient air temperature is, in the winter.

I then used a combination of electro and thermal-mechanical vibrators attached to a bank of piezoelectric crystal cylinders feeding into a capture circuit to charge a bank of batteries and a few super capacitors. This, in turn, powers an inverter that provides power for my house. The end result is a system that works in my area for about 10 months of the year, uses no fuel that I have to buy at all, has virtually no moving parts, and works 24×7 and in all weather, day and night. It gives me about 5,000 watts continuous and about 9,000 watts surge which covers almost all the electrical needs in my house – including the pump on the heater and the compressor on the freezer and refrigerator. I’ll have to admit that I did get rid of my electric stove in order to be able to get “off the grid” entirely. I use propane now but I am working on an alternative for that also. So, if you are interested, here’s how I did it.

The gist of this is that I used geothermal temperature differentials to create a little electricity. That was used to stimulate some vibrators that flexed some piezoelectric material to create a lot more electricity. That power was banked in batteries and capacitors to feed some inverters that in turn powered the house. I also have a small array of photovoltaic (PV) solar panels and a small homemade windmill generator. And I have a very small hydroelectric generator that runs off a stream in my back yard. I use a combination of deep-cycle RV, AGM, lithium and NiMH batteries in various packs and banks to collect and save this generated power. In total, on a good day, I get about 9,500 watts out. On a warm, cloudy, windless and dry day, I might get 4,000 watts, but because I generate power and charge the system 24/7 while using it mostly only a few hours per day, it all meets my needs with power to spare.
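If you want to see why round-the-clock generation beats a few hours of heavy use, run the numbers. The 4,000-watt figure is my worst-day output from above; the household load figures below are just rough guesses I made up for illustration.

```python
# Rough daily energy budget: generation runs 24/7 at a modest wattage while
# household draw is concentrated in a few hours, with the battery bank
# smoothing the difference. Load figures are invented for illustration.

def daily_kwh(watts, hours=24):
    """Energy in kilowatt-hours from a given draw over a given duration."""
    return watts * hours / 1000.0

generated = daily_kwh(4_000)                           # worst day, 24/7
household = daily_kwh(1_500, 6) + daily_kwh(400, 24)   # peak hours + base load

print(generated)             # 96.0 kWh in
print(household)             # 18.6 kWh out
print(generated > household) # True: the battery bank stays topped up
```

Even on the worst day, the bank takes in several times what the house draws, which is why the batteries and capacitors never run dry.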

Today it was 18°F outside. Last night it was 8°F. From now until mid-April, it will be much colder than 50 degrees above ground. Then we have a month or less in which the air temp is between 40 and 60, followed by about 3 months in which the temps are above 70. Then another month of 40-60 before it gets cold again. That gives me from 20 to more than 40 degrees of temperature differential for 10 months of the year.

Using these two differential temperatures, I hooked up a bank of store-bought, off-the-shelf solid-state thermal electric devices (TEDs). These use “Peltier” elements (first discovered in 1834) to convert electricity into heat on one plate and cold on the other. You can also reverse the process and apply heat and cold to the two plates and it will produce electricity. That is called the “Seebeck effect”, named after the guy who discovered it in 1821. It does not produce a lot of electricity, but because I had an unlimited supply of a temperature differential, I could hook up a lot of these TEDs and bank them to get about 160 volts at about 0.5 amps on an average day with a temperature differential of 20 degrees between the plates. That's about 80 watts. With some minor losses, I can convert that to 12 volts at about 6 amps (72 watts) to power lots of 12-volt devices, or I can get 5 volts at about 15 amps (75 watts) to power a host of electronics stuff.
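Here is the simple series-bank and conversion math in Python. The per-element voltage and element count are my guesses for illustration – only the bank totals (about 160 volts at 0.5 amps) are what I actually measured.

```python
# Sketch of the TED bank numbers: many low-power Seebeck elements wired in
# series, then DC-DC converted down to a usable rail with some loss.
# Per-element figures are invented; only the bank totals come from the text.

def bank_output(volts_per_element=0.8, amps=0.5, series_count=200):
    """Series string: element voltages add, current stays the same."""
    return volts_per_element * series_count, amps

def convert(v_in, a_in, v_out, efficiency=0.9):
    """Step the bank output down to a lower rail; power in times converter
    efficiency gives power out, and amps follow from the output voltage."""
    watts_out = v_in * a_in * efficiency
    return watts_out / v_out

v, a = bank_output()
print(v * a)               # about 80 W raw from the bank
print(convert(v, a, 12.0)) # about 6 A at 12 V (72 W), matching the text
print(convert(v, a, 5.0))  # about 14-15 A at 5 V, depending on losses
```

The "minor losses" in the text are the converter efficiency here; at 90% efficiency the 12-volt rail works out to the 72 watts quoted above.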

Then, I dug a hole in the ground – actually, you have to drill a hole in the ground. Mine is 40 feet deep but the deeper the better. It has to be about 10-12 inches in diameter. If you have a lot of money and can customize the parts, then you can use a smaller diameter hole. I salvaged the cooling coils off the back of several commercial grade freezers to get the copper pipes that have those thin metal heat sinks attached to them. I cut and reshaped these into a tightly packed cylinder that was 10″ in diameter and nearly four feet long, containing nearly 40 feet of copper pipe in a wad of spiraled and overlapping tubes – so it would fit in my 40-foot-deep by 12-inch-diameter hole. Down that deep, the hole filled with water, but the water was still about 50 degrees. I wrapped the heat sinks in several layers of wire fence material – aluminum screen with about ¼″ holes. I used two long copper tubes of 1 inch diameter to connect the two ends of the coil to the surface as I sank them to the bottom. All the joints were soldered and then pressure tested to make sure they did not leak.

Just before and after the coil was sunk into the hole, I pushed some marble-sized pea gravel into the hole. This assured that there would be a free flow of water around the heat-sink lines without the hole becoming packed with clay. I bought a 100-foot commercial-grade water hose to slip over the two pipes and cover them from the surface down to the sunken coils. This hose has a thick, hard rubber outside and soft rubber on the inside, with a 1.75-inch inside diameter. It was designed for use with heavy-duty pumps to pump out basements or ponds. It served as a good sleeve to protect the copper tubes and to insulate the pipes. To insulate further, I bought a can of spray expanding foam – the kind you use to fill cracks that hardens into a stiff Styrofoam. I cut the can open and caught the stuff coming out in a bucket. I then diluted it with acetone and poured it down between the hose and the copper pipes. It took about 18 days to dry and harden, but it formed a really good insulating layer so the pipes would not lose much heat or cold while the fluid moved up and down in them. The two copper pipes sticking out were labeled "UP" and "DOWN," and I attached the DOWN pipe to the bottom of a metal tank container.

The next part is another bit of home brew. I needed a large, thin metal sandwich into which to run the "hot" fluid. To have one made would cost a fortune, but I found what I needed at a discount store. It is a very thin cookie sheet for baking cookies in the oven. Its gimmick is that it is actually two thin layers separated by about a quarter inch of air space. This keeps the cookies from getting too hot on the bottom and burning. I bought 16 of these sheets and carefully cut and fused them into one big interconnected sheet that allowed the fluid to enter at one end, circulate between the layers of all the sheets, and then exit at the other end. Because these sheets were aluminum, I had to use a heliarc (also known as TIG or GTAW welding, and I actually used argon, not helium), but I was trained by some of the best Navy welders that work on the airframes of aircraft. The end product was almost 6 x 6 feet with several hose attachment points into and out of the inner layer.

I then made a wood box with extra insulation all around that would accommodate the metal sandwich sheet. The sheet was then hooked up to the UP hose at one end and to the top of the tank/container that was connected to the DOWN hose. Actually, each was connected to splitters and to several inlet and outlet ports to allow the flow to pass thru the inner sandwich along several paths. This made a complete closed loop from the sunken coils at the bottom of the hole, up the UP tube to the 6 x 6 sheet, then thru the tank to the DOWN tube and back to the coils.

Now I placed my bank of Peltier solid-state thermal-electric modules (SSTEMs) across the 6×6 sheet, attaching one side of the SSTEMs to the 6×6 sheet and the other side to a piece of aluminum that made up the lid of the box that the sandwich sheet was in. This gave me one side heated (or cooled) by the sandwich sheet with fluid from the sunken coils, while the other side of the SSTEMs was cooled (or heated) by the ambient air. The top of the flat aluminum lid also had a second sheet of corrugated aluminum welded to it to help it dissipate the heat.

So, if you are following this, starting from the top: there is a sheet of corrugated aluminum that is spot welded to a flat sheet that forms the top of the box lid. Between these two sheets, which are outside the box and exposed to the air, there are air gaps where the sine-wave-shaped sheet of corrugated aluminum meets the flat sheet. This gives a maximum amount of surface area exposed to the air. In winter, the plates are the same temperature as the ambient air. In summer, the plates have the added heat of the air and the sun.

The underside of this flat aluminum sheet (that makes up the box lid) is attached to 324 Peltier SSTEMs wired in a combination of series and parallel to boost both voltage and current. The lower side of these SSTEMs is connected to the upper layer of the thin aluminum of the cookie-sheet sandwich. This cookie sheet has a sealed cavity that will later be filled with a fluid. The lower side of this cookie sheet is pressed against the metal side of a stack of three-inch-thick sheets of Tyvek house insulation. The sides and edges of all of these layers are also surrounded by the Tyvek insulation.
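For anyone curious how 324 modules become roughly 160 volts, here is a little sketch of the series/parallel math. Series strings add voltage and parallel strings add current; the per-module figures in the sketch are assumptions for illustration only – the module count and the bank totals are the real numbers:

```python
# Sketch of the series/parallel math for a bank of 324 Peltier modules.
# Only the module count and the ~160 V / 0.5 A / ~80 W bank totals come
# from the text; the ~0.5 V and ~0.5 A per-module figures at a 20-degree
# differential are assumptions chosen to illustrate the wiring.

def bank_output(n_series, n_parallel, v_mod, i_mod):
    """Series strings add voltage; parallel strings add current."""
    return n_series * v_mod, n_parallel * i_mod

v, i = bank_output(324, 1, 0.5, 0.5)  # one long all-series string
print(v, i, v * i)  # 162.0 0.5 81.0 -- close to the ~160 V / 0.5 A / 80 W above
```

An all-series string keeps the current at a single module's worth, which is consistent with the 0.5 amps the bank actually delivers.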

I then poured 100% pure car antifreeze into the tank on the copper up/down tubes. I had to use a pump to force the antifreeze down to the coils and back up thru the cookie sheet back to the tank. I ran the pump for about 6 hours to make sure that there was no trapped air anywhere in the system. The tank acted like an expansion tank to keep the entire pipe free of any trapped air. The antifreeze was the thick kind – almost like syrup – that would not freeze at any temperature and carried more heat than water would.

It actually began to work very quickly. The top of the large flat hollow sheet filled with fluid and got cold from the ambient air. This cooled the antifreeze, and the cold fluid wants to sink down the DOWN pipe to the sunken coils at the bottom of the hole. The coils, meanwhile, were heating the fluid down there to 54 degrees, and that wanted to rise up the UP pipe. As soon as the heated fluid got to the top, it cooled in the hollow sheet and sank down the DOWN tube again. This is called dynamic convective thermal fluid circulation, or some just call it thermosiphoning.

The transfer of heat up to the surface creates a continuous temperature differential across the plates of the Peltier SSTEMs and then they create about 160 volts of DC electricity at about 0.5 amps or about 80 watts of electricity. I needed to use a solar panel controller to manage the power back to a usable 12 to 14 volts to charge a bank of batteries. But I am not done yet.

I added a second flat aluminum sheet on top of the corrugated aluminum, like a sandwich. This added to the surface area to help with heat dissipation, but it also allowed me to attach 100 piezoelectric vibrators. These small, thin 1.5″-diameter disks give off a strong vibration when as little as 0.5 volts is applied to them, but they can take voltages up to 200 volts. They were 79 cents each from a surplus electronics online store; I bought 100 of them and spaced them in rows on the aluminum lid. Along each row, I placed a small tube of homemade piezoelectric crystals. I'm still experimenting with these crystals, but I found that a combination of Rochelle salt and sucrose works pretty well and, more importantly, I can make these myself. I'd rather use quartz or topaz, but that would cost way too much.

The crystal cylinders have embedded wires running along their length and are aligned along the rows of piezoelectric vibrators. They are held in place and pressured onto the vibrators by a second corrugated aluminum sheet. This gives a multi-layer sandwich that will collectively create electricity.

One batch of the SSTEMs is wired to the 100 piezoelectric vibrators while the rest of the SSTEMs feed the solar controller to charge the batteries. I had to fiddle with how many SSTEMs it took to power the vibrators, since they will work on a very small amount of power but do a better job when powered at a higher level.

The vibrators cause a rapid oscillation in the cylinders of Rochelle salt and sucrose which in turn give off very high frequency, high voltage electricity. Because the bank of cylinders is wired in both series and parallel, I get about 1,500 volts at just over 200 milliamps, or about 300 watts of usable electricity.

It takes an agile-tuned filter circuit to take that down to a charging current for the batteries. I tried to make such a device but found that a military surplus voltage regulator from an old prop-driven aircraft did the job. This surplus device gave me a continuous 13.5 volts DC at about 22 amps of charging power, fed into a bank of deep-cycle AGM batteries.
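Just to show that the regulator numbers hang together, here is the arithmetic on the stated figures – nothing here is measured beyond what I quoted above:

```python
# Consistency check on the piezo stage numbers in the text: ~1,500 V at
# ~200 mA into the surplus regulator, coming out at 13.5 V DC and ~22 A.

hv_watts = 1500 * 0.2          # ~300 W from the piezo cylinder bank
charge_watts = 13.5 * 22       # ~297 W into the batteries

print(hv_watts, charge_watts)  # 300.0 297.0
```

Input and output power nearly match, so the implied conversion loss is only about 1% – optimistic for any real regulator, but that is what the quoted figures work out to.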

I found that the piezo vibrators had a secondary and very unexpected positive benefit. Since the vibration was also felt in the circulating antifreeze and the SSTEMs, it seems to have made them function more efficiently. There is more heat transfer in the dynamic convective thermal fluid circulation than the normal formulas and specs would dictate but I think it is because the vibration of the fluid makes the thermal transfer in the cookie sheet panel more efficient. The SSTEMs are boosted in output by several watts of power. So when everything is running, I am getting about 340 watts of charging power on a continuous basis. Of course this fluctuates as the temperature differential changes but I rarely get less than 250 watts and sometimes as high as 400 watts.

A local recreational vehicle (RVs, trailers, campers, boats) dealer removes the very large deep-cycle AGM batteries from his high-end RVs' solar systems even when they have been used very little. He lets me test and pick the ones I want and sells them for $20. I have 24 of them now that are banked to charge off the thermal and piezoelectric devices and then feed into several inverters that give me power for lights, heaters, fans, freezers, TVs, etc. The inverters I use now give me up to 5,000 watts continuous and up to 12,000 watts of surge (for up to 2 hours), but I have set the surge limit to 9,000 watts so I do not damage the batteries. The 24 deep-cycle batteries could give me 5,000 watts continuously for up to several days without any further charging, but so far, I have found that I am using only about 25% to 35% of the system capacity about 80% of the time and about 80% of the capacity 20% of the time. The high usage comes when the freezer or refrigerator compressors kick on and when the heater boiler pumps kick on. As soon as these pumps start and get up to speed, the load drops back to a much lower value. The rest of the time, I am just using CFL and LED lights and my computer.
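Here is a rough runtime sketch for the battery bank. The 24-battery count and the load levels are as described above; the 100 Ah per-battery capacity and the 80% usable depth-of-discharge are my assumptions for illustration, since the actual batteries vary:

```python
# Rough runtime sketch for the 24-battery deep-cycle AGM bank.
# Battery count is from the text; the 100 Ah / 12 V per-battery
# capacity and 80% usable depth-of-discharge are assumptions.

batteries = 24
amp_hours = 100        # assumed capacity per battery
volts = 12
usable_fraction = 0.8  # assumed usable depth of discharge

usable_wh = batteries * amp_hours * volts * usable_fraction
print(usable_wh)  # 23040.0 watt-hours under these assumptions

for load_watts in (500, 1500, 5000):
    hours = usable_wh / load_watts
    print(load_watts, round(hours, 1))  # hours of runtime at that load
```

Under those assumptions the bank runs a light 500 W load for about two days but a full 5,000 W load for only a few hours, which is why keeping the typical draw at 25% to 35% of capacity matters so much.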

I finished this setup in September of 2011, and it worked better and better as the winter temperatures dropped in November and December. I had to make one adjustment. The piezo vibrators made too much noise, and the aluminum plates made them sound even louder. I have since added some noise dampening, and now I can't hear it unless I am outside and standing near it. The dampening I used was two 4'x8' sheets of the thick, heavy-duty foam used in horse and cattle stalls to keep the animals from standing on freezing cold ground. These were $30 each and have isolated the sheets from the wood and metal frame but still allow the vibrators to do their thing on the piezo tubes and the cookie-sheet SSTEMs.

I have lots of meters and gauges on the system to monitor temperatures and power outputs and levels and so far nothing seems to be fading or slowing. There are slight changes in the charge levels of the batteries due to changes in the ambient air temperature but that has been less than +/- 10% so far. I was concerned that the cold antifreeze would freeze the water around the sunken coils but so far that has not happened. I think it is because there is a fairly rapid turnover of water at that depth and the coils just don’t have a chance to get that cold.

I’m also going to experiment with rewiring the whole thing to give me perhaps 60 volts output into the bank of batteries that are wired in series to make a 60 volt bank. This is the way that electric cars are wired and I have obtained a controller out of a Nissan Leaf that uses a bank of batteries in a 60 volt configuration. It should be more efficient.

The whole system cost me about $950 in batteries, fittings, hoses, and chemicals, plus a lot of salvage of used and discarded parts. I already had the inverters and solar controller. I also had a friend who drilled the hole for me – that would have cost about $400. The rest I got out of salvage yards or stripped off old appliances or cars. It took about 3 weeks to build, working part time and weekends. I estimate that if you had to pay for all of the necessary parts and services to build the system, it would cost about $3,000. By the end of next year, I will have saved about half that much in electricity. As it is, I will have a full payback by about March of 2013.
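The payback math is simple enough to sketch. The monthly savings figure here is an assumption back-solved from the timeline above, not a measured bill difference:

```python
# Simple payback arithmetic on the figures above: ~$950 actual outlay,
# ~$3,000 if everything were bought new. The ~$55/month savings is an
# assumption chosen to match the Sept 2011 -> early 2013 payback stated.

actual_cost = 950
full_cost = 3000
monthly_savings = 55   # assumed average monthly electricity savings

months_actual = actual_cost / monthly_savings
months_full = full_cost / monthly_savings

print(round(months_actual))  # 17 -- roughly Sept 2011 to early 2013
print(round(months_full))    # 55 -- about 4.5 years if bought all new
```

So even at full retail cost, the system pays for itself in under five years; built from salvage, it pays back in under a year and a half.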

I still have a hookup to the city power lines but since installing this system, I have used only about 10 kilowatt-hours. I think that was mostly when I used my arc welder and did not want to suck too much out of the batteries. A side benefit has also been that since September, when I first started using it, there have been 4 power outages – one lasted for two days….for my neighbors. I never lost power.

I have not done it yet but I can also setup an electric meter that allows me to sell electricity back to the power company. When I integrate this whole system with my solar PV array, I might do that but for now, I can store unused power in the batteries for later use and since I won’t run out of fuel, I don’t need to recover any extra ongoing costs.

Since this system has no expendable fuel, no moving parts, no maintenance and no costs, I expect it will be functional for the next 15 to 20 years – maybe longer. Actually, I can’t think of why it will ever stop working.

Sept 2012 Update:

This system has been running for a year now. My power company's monthly bill is averaging about $28/mo., and it goes lower almost every month. I am still running several AC units during the summer and have used my arc welder and electric heaters in the winter.

My estimate of costs and maintenance was a little optimistic, as I discovered my 79-cent piezo vibrators were worth every penny – they lasted about 6 months. I have since replaced them with a bunch of salvaged parts used in klaxon alarms on board Navy ships. These normally run on 28 volts, but I did not need them to be loud, so I found that if I fed them with 4 volts, I got the vibration I needed without the noise, and they are so under-powered that they will likely last for years.

During the Spring and Fall, the system was not too productive because the temperature differential was usually less than 10 degrees but in the hottest part of the summer, I was back up to over 300 watts of total output with differentials of 20+ degrees between the 50 degree ground and the 70 to 85 degree summer heat.

I was not hit by the floods of last Spring but my place in the woods experienced torrential rains and the water table rose to nearly the surface. In all that my system continued to work – in fact, I noticed a slight improvement in performance since the temperature exchange rate improved with the heavy flow of underground water.

I still have not hooked up a reversing electric meter, but I did calculate that I would have made a net $290 over the past year instead of paying out a $28/mo. average. If I added in my solar PV system, my small $400 vertical wind generator, and the 60 to 100 watts I get from a tiny hydroelectric setup I have on a small stream that runs thru my property, I would have had a net gain of over $1,000. Not bad for a bunch of junk and surplus parts and a little help from the dirt under my lawn.

My Bathtub – My Fountain of Youth

I have written many times that I spend quite a bit of time at my place in BC, Canada. It is an isolated place that was carved out of an existing cave in a rock face near a lake. I have added a lot of my power generation gadgets and installed lots of technology to give me all the comforts of home while being miles from the nearest town (by road). Actually, New Denver is only a few miles away, across the lake. I had a Cessna 206H with pontoons until last year, and then I traded up to a Beriev Be-103 Snipe. It's a twin-engine 6-seater that gives me increased range and speed to fly back to the States. I liked the reversible props for water and ice control, and I opted for the radar, extra fuel tanks, and autopilot. I can go over 1,000 miles in one hop at 120 kts. It's a great plane, even though it is Russian.

The living space in BC is a large natural cave that I expanded significantly, adding poured concrete floors to level it. I have 14 different rooms; the largest is about 30 feet long and 40 feet wide with a 25-foot ceiling. Most of the cave is carved out of hard stone, but when I was building in it, I tapped into a fresh water spring that was flowing in a rather well-defined channel through the stone. I maintained the natural channel – even enhanced it – but tapped into the water to fill a massive cistern that is carved into the rock wall, high up in the cave. This gives me a huge water supply (over 2,000 gals) and also creates the gravity-flow water pressure for the sinks, hot tub, showers, and my favorite, the soaking tub.

The water actually tastes a little weird so for drinking and cooking, I run it through a bubbling ionizer that bubbles a constant flow of negative ion air through the water. This has a great filtering effect as well as purifying it of all bacteria and other stuff. There is also a larger version of the ionizer in the cistern.

The hot tub and the soaking tub are carved out of the stone but I bored and drilled into both to give me bubblers and water jet outlets. I’ve been using both for about 18 years now and love it but recently I found out it may be better than I ever thought it could be.

I have always been thought by others to be younger than my real age. I always assumed it was just luck and good genes but about a year ago, a doctor told me that I was way past just being young looking. I had the skin and organ function of a man 30 or more years younger than my real age. I feel fine for someone that was in college when Kennedy got shot so I figured he was just being kind but he wanted to run some tests. He did and came back and said that I was a real medical miracle and that he wanted to do a paper on me. I said I’d cooperate but I did not want to be identified in the study. He agreed.

I won’t bore you with the details of the tests and results but suffice it to say that I was the source of a lot of interest in a relatively narrow medical field that is into longevity and life span studies. After testing me for 6 months and then coming to my two residences and testing everything I eat and touch, I got the report last week. It seems that my two tubs and the spring water at the cave are partially the cause of my good fortune. The water has a very high content of magnesium bicarbonate. Just since 2002, there has been a lot of interest in magnesium bicarbonate following a study done in Australia in which they studied cows and sheep that were living 30% to 45% longer than normal and were able to continue to have normal offspring, even into their advanced years. After a two-year study, it was determined that it was the water that was high in magnesium bicarbonate. Look it up, you’ll see that there is now a commercial company that is selling the water from the ranch that the cows had been drinking.

I have not only been drinking this water for the past 18 years but have been bathing and soaking in it on a regular basis. I seem to have lucked out and as a result may end up living to be a lot older than I ever expected to. I think this is a good thing.

Run Silent, Fast and Undetectable – US Navy Submarines

An old 1958 movie (from a 1955 novel) was called "Run Silent, Run Deep", about the US Navy's submarine service in WWII. Our subs today are quite different and, as you will see, a new movie might be named "Run Silent, Run Fast, Invisibly". Subs today go faster than you would imagine, quieter than anyone thought possible and – thanks to a contribution I made 15 years ago – they are now almost invisible. My small part had to do with the stealth aspects of subs. The exact nature of stealth technology is a secret and I, for one, will not give it away, but I can tell you something about it. But first, I have to explain a little about the technology.

Imagine you have a very large bundle of sewing needles. Tens of thousands of them. Now imagine you can set them all upright, with their pointy ends pointing up, pushed together as close as they can get. If you then looked down on those pointy ends, it would look very black. The reason is that the light on the sides near the points reflects inward and keeps reflecting as it bounces further and further down into the mass of needles. Officially, "the angle of incidence (the incoming light) equals the angle of reflection (the bounced light)." With each reflection, a little of the light energy is absorbed and converted to heat. Because of the shape and angle of the needles, the light never reflects back outward, thus making it appear to be totally black. In physics, this is called a "black body".

This is essentially what stealth technology is like only at a microscopic scale. Aircraft are painted with a special kind of paint that has tiny but densely packed little pointy surfaces that act just like those needles. When radar hits the aircraft, the paint absorbs all of the radar’s energy and lets none reflect back to the enemy receiver. When no radar reflection is seen, it is assumed that there is nothing out there to be seen.

Sonar for subs works pretty much the same as radar, but instead of radio frequency (RF) energy, it uses sound. Sound is emitted and a reflection is heard. This is called active sonar. Because subs are relatively noisy in the water, it is also possible to just listen for their noise and then figure out what direction the noise is coming from. That is called passive sonar. The props, engine noise, and just the water rushing over and around the sub make noise. The faster you go, the louder these noises become.

Despite their best efforts at sub design, even our subs create some sounds and they are, of course, going to reflect an active sonar ping when that is used. However, the US is the world's best at creating very quiet subs. It is mostly because of the secret design of the props, which are able to turn fast without creating cavitation – which makes a lot of noise underwater. Flush-mounted hatches and even screw heads also make our subs quiet. In the 1960's and 70's, going over 15 knots under water was like screaming, "here I am". In the 1980's and early 1990's, we could go up to 25 knots in relative silence. The latest subs – built or being built – can go over 35 knots and still remain mostly quiet.

That means that the enemy has to use active sonar to try to find them and that gives away the enemy’s position. At that point, they become easy targets.

Pushing a 400-foot-long sub underwater at 35 knots is no easy chore, but due to some amazing designs in the hull shape, the power plant, and the props, that is nowhere near the limit of the potential speed possible. Our subs could do as much as 85 knots underwater (that's nearly 100 MPH!) but they would sound like a freight train and would create a wake large enough to be visible from space. Since stealth is the primary tactic of subs, that kind of speed was simply not reasonable….until now.

While I was at NRL, I presented a paper on how to create a totally quiet sub. Even if it were producing a lot of mechanical or hydrodynamic noise, my method would make it totally silent. More importantly, it would also completely hide the sub even from active sonar pings.

The advantages of this are significant. In a combat environment, going slow to keep the sub quiet also makes it linger in dangerous areas longer but going fast makes it easier to locate and track. Being able to launch weapons and then move very fast out of the area in total silence – even to active sonar – would be a game changer for submarine warfare.

Since I was a former pilot and worked in an entirely different department from the sub guys, the first reaction to my suggestion was “Yeah, Right – a flyboy is going to tell us how to make a sub quiet”. That was back in 1998. I recently found out that the latest sub, the Virginia, SSN-774, incorporates my design in an applied active acoustic quietness system that they now call Seawolf. When I contacted some old NRL friends and asked them about it, they were reluctant to talk about it until I started quoting my research paper from 1998. They said, “YOU wrote that paper!” Then they began to tell me the whole story.

It seems that my paper sat in files for six months before it was read by someone that understood and recognized it for what it could do. After a few preliminary computer models and some lab-scale experiments, they were able to get funding for some major research, and within three months, they were proposing to incorporate the idea into the next class of subs. That was in early 2000. It was decided to incorporate the design into the last sub in the Seawolf class – SSN-23, the USS Jimmy Carter. It proved to be effective, but the SSN-23 was mostly a test bed for further development, and a modified design was planned for the next class – the USS Virginia. After seeing how effective it was, the entire rest of the Seawolf class of subs was cancelled so that all the efforts could be put into the Virginia class with this new technology. My design was improved, named after the Seawolf class where its design was finalized, and retrofitted into the Virginia before it was turned over to the Navy in 2004.

Soon after this discussion, I was invited to a party of the sub guys down near Groton. Since I was still at my Vermont residence, I figured, why not? I could go to my Canada residence right after the party. So last September, I flew my plane down to Elizabeth Field at the southwest end of Fishers Island. The sub guys from Groton had a nice retreat on the island at the end of Equestrian Ave. After I arrived, I was shown to a room upstairs in the large house and told to meet in the Great Room at 5 PM. When I went down to the party, I got a huge surprise. I was the guest of honor and the party was being thrown for me.

It seems that they lost the cover page to my original 1998 research paper and never knew who wrote it. Several people at NRL had suggested that it was mine, but they were sure that it had to have come from one of their own sub community guys and could never find the right one. When I sent someone a copy I had kept, they determined that I was the original author and deserved the recognition. It seems that my idea has actually been a game changer for the entire submarine warfare community in tactics and strategy as well as hull design, combat operations, and even weapons design. I was apparently quite a hero and did not even know it.

I enjoyed the party and met a lot of my old NRL buddies that were now admirals or owners of major corporations or renowned research scientists within select circles of mostly classified technologies. I got a lot of details about how they had implemented my idea and about some of the mostly unexpected side benefits that I had suggested might be possible in my paper. It was humbling and almost embarrassing to be honored for an idea that was now 15 years old and was mostly refined and developed by a host of other researchers. I began looking forward to getting on to my Canada retreat.

Two days later, I flew out for BC, Canada with a large handful of new contacts, renewed old contacts and lots of new ideas and details of new technologies that were being developed. I also ended up receiving several offers to do some research and computer modeling for some problems and developing technologies that some of the partygoers needed help on. I’ll probably end up with a sizable income for the next few years as a result of that party.

I suppose you’re interested in what exactly was this fantastic technology I designed way back in 1998 that has proved to be so popular in 2012. It was actually a pretty simple concept. Most techies have heard of noise-canceling headphones. They work by sensing a noise and then recreating an identical sounding noise with a phase shift of 180 degrees. When a sound is blended with the same sound but phase shifted by 180 degrees, what you get is total silence. This works very well in the confined and controlled environment of a headphone but was thought to be impossible to recreate in an open environment of air or water. I simple created a computer model that used a Monte Carlo iterative algorithm that quantified the location, intensity, lag time, and other parameters for an optimum installation on a sub. It took the super computers at NRL several hours to refine a design of placement, power, sensors and other hardware and temporal design aspects but when it was done, I was surprised at the degree of efficiency that it was theoretically possible to achieve. I wrote all this into my 1998 paper, mostly out of the hopes that my computer model would be used and I could get another project funded.

My paper included a reference to where and how to run my model on the NRL computers and eventually, it was used as their primary design optimization tool for what would later be called the Seawolf Acoustic Quieting and Stealth System (SAQ-SS). The actual modeling software I created and left on the NRL computers began being called SAQ, which got shortened to pronouncing it as “SACK”. As it developed and got seen by more people and they saw the side benefits on the whole stealth effect, it was called SAQ-SS, which evolved into “SACKSS”, then into SAQ-SaS and into SACK-SAS and was eventually called the “SUCCESS” system.

Those side benefits I keep referring to are worth mentioning. When an active sonar ping or sound wave front is detected by the SAQ system, it activates a hull-mounted sound modulator that causes the hull itself to act as a giant speaker or transducer to initiate a response waveform that is 180 degrees out of phase with the incoming sound. This effectively nulls out the sound. The same happens for sounds created by the mechanics of the sub that are passed by conduction to the hull. In this case, the hull is modulated so that it completely absorbs any sounds that might otherwise pass through it to the water.

Another side benefit is that the SAQ system created the opportunity, for the first time, for the sub commander to "see" directly behind his own sub. In the past, because of the noise from the prop and engine and the distortion of the water from the prop wash, the rear of the sub was a blind spot. To see back there, the sub had to make wide slow turns to the left and right, or it had to drag a towed array – sort of a remote sonar – on a cable behind the sub. Despite having rear-facing torpedo tubes, the sub could not effectively use active or passive sonar for about 30 degrees astern. This, of course, was the approach of any hunter-killer sub that wanted to get a sure-fire launch at another sub target. Because of the hull nullification and the ambient noise cancellation of the SAQ system, the aft-facing sensors and sonars are now very effective at both detection and fire control for torpedo launch. There is still some loss of resolution as compared with other directions due to the water disturbance in the prop wash, but a good sonar operator can compensate.

The final side benefit of the SAQ system is that, for the first time, it allows a sub to travel as fast as it is capable of going, even in a confined combat environment, without being detected. It was this benefit that led to the immediate cancellation of the remaining balance of the Seawolf class of subs to go directly to the Virginia class. Using a design similar to a jet ski engine, called a propulsor or jet pump, the Virginia is capable of speeds far in excess of any of its predecessors. Despite very high speeds, the Virginia class of subs will be undetectable by sonar – allowing it to move as fast as the engine can push it. Exact speeds and depth limits of all US Navy subs are highly classified, but prototype tests on the USS Jimmy Carter reached 67 knots, or about 77 MPH, and that was before enhancements and design changes were made. My guess is that the SSN-774 and its sister boats will be able to exceed 90 MPH when fully submerged – perhaps over 100 MPH.

The current version of the SAQ system is so effective that when it was tested in war games against surface ships and P-3 ASW aircraft, it created a huge argument that had to be resolved by the CNO of the Navy. The USS Virginia was able to simulate the kill of all 19 ships in the exercise without being detected by any of them or by any ASW aircraft or helo. The squadron commanders of the S-3 and P-3 aircraft and the captains of the ASW destroyers filed formal complaints against the sub commanders for cheating during the exercise. They claimed that there was no sub anywhere in the exercise area and that the simulated kills were all the result of cheating in the exercise computer models. The fighting between the aircraft, surface, and sub communities was so fierce that the CNO had to call a major conference to calm everyone down and explain how the exercise went the way it did.

I am pleased that my idea from 15 years ago was eventually found to be valid and that I have contributed in some manner to our security and ability to meet any threat.

The Aurora Exists but It’s Not What You Think

The Aurora is the new jet that people have been saying is the replacement for the SR-71. It is real, but it isn’t what you’d think it is. First, a little history.

The U-2 spy plane was essentially a jet-powered glider. It had very long wings and a narrow body that could provide lift with relatively little power. It used its jet engine to climb very high into the air and then throttle back to near idle and stay aloft for hours. The large wings were able to get enough lift in the high, thin air of the upper atmosphere partly because it was a very lightweight plane for its size. Back in the early 60’s, flying high was protection enough, while still allowing the relatively low-resolution spy cameras of the day to take good photos of the bad guys.

When Gary Powers’ U-2 was shot down, it was because the Soviets had improved their missile technology in both targeting and range and because we gave the Russians details about the flight – but that is another story. The US stopped the U-2 flights but immediately began working on a replacement. Since sheer altitude was no longer a defense, they opted for speed, and the SR-71 was born. Technically, the SR-71 (Blackbird) was not faster than the missiles, but because of its speed (about Mach 3.5) and its early attempt at stealth design, by the time anyone had spotted the spy plane and coordinated with a missile launch facility, it was out of range of the missiles.

The CIA and the Air Force used the Blackbird until the early 1980’s, when it was retired from spying and used only for research. At the time, the official word for why it was retired was that satellite and photographic technology had advanced to the point of not needing it any more. That is only partially correct. A much more important reason is that the Russians had new missiles that could shoot down the SR-71. By this time, Gorbachev was trying to mend relations with the West and move Russia into a more internationally competitive position, so he openly told Reagan that he had the ability to shoot down the SR-71 before he actually tried to do it. Reagan balked, so Gorbachev conducted a “military exercise” in the Spring of 1981 in which the Russians made sure that the US was monitoring one of their old low-orbit satellites, and then, during a phone call to Reagan, the satellite was “disabled” – explosively.

At the time it was not immediately clear how they had done it, but it wasn’t long before the full details were known. A modified A-60 aircraft code-named “SOKOL-ESHELON,” which translates to “Falcon Echelon,” flying out of Beriev airfield at Taganrog, shot down the satellite with an airborne laser. When Reagan found out the details, he ordered the Blackbird spy missions to stop, but he demanded that Gorbachev give him some assurance that the A-60 would not be developed into an offensive weapon. Gorbachev arranged for an “accident” in which the only operational A-60 was destroyed by a fire, and the prototype and test versions were mothballed and never flew again.

The spy community – both the CIA and DoD – did not want to be without a manned spy capability, so they almost immediately began researching a replacement. In the meantime, the B-1, B-2 and F-117 stealth aircraft were refined and stealth technology was honed to near perfection. The ideal spy aircraft would be able to fly faster than the SR-71, higher than the U-2 and be more invisible than the F-117, but it also had to have a much longer loiter time over its targets or it would not be any better than a satellite.

These three requirements were seen as mutually exclusive for a long time. The introduction and popularity of unmanned autonomous vehicles also slowed progress, but both the CIA and DoD wanted a manned spy plane. The CIA wanted it to be able to loft more sophisticated equipment for the complex monitoring of a dynamic spy situation. DoD wanted it to be able to reliably identify targets and then launch and guide a weapon for precision strikes. For the past 30 years, they have been working on a solution.

They did create the Aurora, which uses the most advanced stealth technology along with the latest in propulsion. This, at least, satisfied two of the ideal spy plane requirements. It started with a very stealthy delta-wing design using an improved version of the SR-71 engines, giving it a top speed of about Mach 4.5 and a ceiling of over 80,000 feet, but that was seen as still too vulnerable. In 2004, following the successful test of NASA’s X-43 scramjet reaching Mach 9.8 (about 7,000 MPH), DoD decided to put a scramjet on the Aurora. Boeing had heard that DoD was looking for a fast spy jet and attempted to break into the program with its X-51A, but DoD wanted to keep the whole development secret, so they dismissed Boeing and pretended there was no interest in that kind of aircraft. Boeing has been an excluded outsider ever since.

In 2007, DARPA was testing a Mach 10 prototype called the HyShot – which actually was the test bed for the engine planned for the Aurora. It turns out that there were a lot of technological problems to overcome, which made it hard to settle on a working design in the post-2008 crashed economy, with competition from the UAVs, while also trying to keep the whole development secret. They needed to get more money and find somewhere to test that was not being watched by a bunch of space cadets with tin foil hats who have nothing better to do than hang around Area 51, Vandenberg and Nellis.

DoD solved some of these issues by bringing in some resources from the British and getting NASA to foot some of the funding. This led to the flight tests of the HiFire in 2009 and 2010 out of the Woomera Test Range in the outback of South Australia. The HiFire achieved just over 9,000 MPH, but it also tested a new fuel control system that was essentially the last barrier to production in the Aurora. They used a pulsed laser to ignite the fuel while maintaining the hypersonic flow of the air-fuel mixture. They also tested the use of high-velocity jets of compressed gas to get the scramjet started. These two innovations allowed the transition from the two conventional jet engines to the single scramjet engine to occur at a lower speed (below Mach 5) while also making the combination more efficient at very high altitudes. By late 2010, the Aurora was testing the new engines in the Woomera Test Range and making flights in the 8,000 to 9,700 MPH range.

During this same period, the stealth technology was refined to the point that the Aurora has an RCS (radar cross-section) of much less than 1 square foot. This means that it has about the radar image of a can of soda, and that is way below the threshold of detection and identification of most radars today. It can fly directly into a radar-saturated airspace and not be detected. Because of its altitude and speed and the nature of the scramjet, it has an undetectable infrared signature as well, and it is too high to be heard. It is, for all intents and purposes, invisible.
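To give a feel for why a small RCS matters so much, here is a quick sketch of the fourth-root scaling in the standard radar range equation. The reference radar (a 10 m² target detected at 200 km) is my own assumed round-number example, not any real system:

```python
# Hedged sketch: detection range in the radar range equation scales
# with RCS^(1/4). The 200 km / 10 m^2 reference point is an assumption
# for illustration only.

def detection_range_km(ref_range_km, ref_rcs_m2, target_rcs_m2):
    """R_max scales with the fourth root of the radar cross-section."""
    return ref_range_km * (target_rcs_m2 / ref_rcs_m2) ** 0.25

r_fighter = detection_range_km(200.0, 10.0, 5.0)    # a typical fighter
r_small   = detection_range_km(200.0, 10.0, 0.09)   # ~1 sq ft, soda-can class

print(f"5 m^2 target detected at ~{r_fighter:.0f} km")
print(f"0.09 m^2 target detected at ~{r_small:.0f} km")
```

With these assumed numbers, cutting RCS from 10 m² to 0.09 m² shrinks detection range from 200 km to roughly 60 km – and a fast, high target spends very little time inside that smaller circle.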

This solved two of the three spy plane criteria but they still had not achieved a long loiter time. Although the scramjet is relatively fuel efficient, it really is only useful for getting to and from the surveillance site. Once over the spy area, the best strategy is to fly as slow as possible. Unfortunately, wings that can fly at Mach 10 to Mach 12 cannot support the aircraft at much slower speeds – especially in the thin air at 80,000 feet.

Here is where the big surprise pops up. Thanks to the guys at NRL and a small contribution I made to a computer model, the extended loiter time problem was something they began working on back in 2007. It started when they retrofitted the HyShot engine into the Aurora; then NRL convinced the DARPA program manager to also retrofit the delta wings of the Aurora with a swing capability, similar to the F-14 Tomcat’s. The result was a wing that expands like a folding Japanese fan. In fast flight mode, the wing is tucked into the fuselage, making the aircraft look like the long tapered blade of a stiletto knife. In slow flight mode, the wings fan out to wider than an equilateral triangle, with a much larger wing surface area.

As with any wing, it is a compromise between flying fast and flying slow. The swing wing gave the Aurora a range increase from reduced drag while using the scramjet. It also allowed the wing loading to be reduced slightly, giving it more lift at slower speeds and in thinner air. However, most of the engineers on the project agreed that these gains were relatively minor and not worth the added cost in construction and maintenance. This was not a trivial decision, as it also added weight and took up valuable space in the fuselage that was needed for the modified scramjet and added fuel storage. Outside of NRL, only two people were told why they needed to do this wing modification and how it could be done. Those two were enough to get the funding, and NRL won approval to do it.

What NRL had figured out was how to increase lift on the extended wing by a factor of 10 or more over a conventional wing. This was such a huge increase that the aircraft could shut off its scramjet, run one or both of its conventional jet engines at low idle speeds, and still stay aloft – even at extreme altitudes. Normally, this would require a major change in wing shape and size to radically alter the airfoil’s coefficient of lift, but then the wing would be nearly useless for flying fast. A wing made to fold from one type (fast) to another (slow) would also be too complex and heavy to use in a long-range recon role. The solution that NRL came up with was ingenious, and it turns out it partly used a technology that I worked on earlier when I was at NRL.

They designed a series of bladders and chambers in the leading edge of the wing that could be selectively expanded by pumping in hydraulic fluid, altering the shape of the wing from a nearly symmetric cambered foil to that of a high-lift foil. More importantly, it also allowed for a change in the angle of attack (AoA) and, therefore, the coefficient of lift. They could achieve an AoA change without altering the orientation of the entire aircraft – this kept drag very low. This worked well and would be enough at lower altitudes, but in the thin air at 80,000+ feet, the partial vacuum created by the wing is weakened. To solve that, they devised a way to create a much more powerful vacuum above the wing.

When they installed the swing wing, they also added some plumbing between the engines and the wing’s suction surface (the upper surface, at the point of greatest thickness). This plumbing consisted of very small and lightweight tubing that mixes methane and other gases from an on-board cylinder with superheated and pressurized jet fuel to create a highly volatile mix that is then fed to special diffusion nozzles strategically placed on the upper wing surface. The nozzles atomize the mixture into a fine mist and spray it under high pressure into the air above the wing. The nozzles and the pumped fuel mixture are timed to stagger in a checkerboard pattern over the surface of the wing. This design causes the gas to spread in an even layer across the length of the wing, but only for about 2 or 3 inches above the surface.

A tiny spark igniter near each nozzle causes the fuel to burn in carefully timed bursts. The gas mixture is specially designed to rapidly consume the air as it burns – creating a very strong vacuum. While the vacuum peaks at one set of nozzles, another set of nozzles is fired. The effect is a little like a pulse jet in that it works in a rapid series of squirt-burn-squirt-burn repeated explosions, but they occur so fast that they blend together, creating an even distribution of enhanced vacuum across the wing.

You would think that traveling at high Mach speeds would simply blow the fuel off the wing before it could have any vacuum effect. Surprisingly, this is not the case. Due to something called the laminar boundary layer effect, the relative speed of the air moving above the wing gets slower and slower as you get closer to the wing. This is due to the friction of the wing-air interface and results in remarkably slow relative air movement within 1 to 3 inches of the wing. This trick of physics was known as far back as WWII, when crew members on B-29’s, flying at 270 knots, would stick their heads out of a hatch and scan for enemy fighters with binoculars. If they kept within about 4 or 5 inches of the outer fuselage surface, the only effect was that they would get their hair blown around. The effect on the Aurora is to keep the high vacuum in close contact with the optimum lifting surface of the wing.
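The velocity gradient near the skin can be sketched with a textbook boundary-layer profile. The cubic profile and the 10 cm (~4 inch) layer thickness below are illustrative assumptions on my part, not measurements from any aircraft:

```python
# Hedged sketch of the boundary-layer effect: relative airspeed falls
# toward zero at the surface. The cubic velocity profile and the assumed
# 10 cm boundary-layer thickness are for illustration only.

def boundary_layer_speed(freestream_mph, y_m, delta_m=0.10):
    """Approximate laminar profile: u/U = 1.5*(y/d) - 0.5*(y/d)^3."""
    if y_m >= delta_m:
        return freestream_mph
    r = y_m / delta_m
    return freestream_mph * (1.5 * r - 0.5 * r ** 3)

# A B-29 at 270 knots is about 310 MPH true airspeed:
for y_cm in (1, 5, 10):
    u = boundary_layer_speed(310.0, y_cm / 100.0)
    print(f"{y_cm:3d} cm above the skin: ~{u:.0f} MPH relative wind")
```

Even in this crude model, the relative wind an inch off the surface is a small fraction of the freestream speed, which is the effect the B-29 story and the wing-surface burn both rely on.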

Normally, the combination of wing shape and angle of attack creates a pressure differential above and below the wing of only 3 to 5 percent. The entire NRL design creates a pressure differential of more than 35% and a coefficient of lift that is controllable between 0.87 and 9.7. This means that with the delta wing fully extended, the wing-shape bladders altering the angle of attack, and the wing-surface burn nozzles changing the lift coefficient, the Aurora can fly at speeds as low as 45 to 75 MPH without stalling – even at very high altitudes.
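To put rough numbers on those lift-coefficient claims, here is the standard stall-speed relation, V = sqrt(2W / (rho * S * C_L)). The aircraft weight, wing area, and high-altitude air density are my own assumed round numbers, not Aurora data:

```python
import math

# Hedged sketch: stall speed from the lift equation. Mass, wing area,
# and the ~80,000 ft air density are illustrative assumptions.

def stall_speed_mph(mass_kg, wing_area_m2, c_lift, rho_kg_m3):
    weight_n = mass_kg * 9.81
    v_ms = math.sqrt(2.0 * weight_n / (rho_kg_m3 * wing_area_m2 * c_lift))
    return v_ms * 2.23694  # m/s -> MPH

M, S = 25_000.0, 150.0            # assumed mass (kg) and extended wing area (m^2)
RHO_SEA, RHO_80K = 1.225, 0.043   # air density: sea level vs ~80,000 ft (kg/m^3)

print(f"sea level, C_L=9.7:  ~{stall_speed_mph(M, S, 9.7, RHO_SEA):.0f} MPH")
print(f"80,000 ft, C_L=0.87: ~{stall_speed_mph(M, S, 0.87, RHO_80K):.0f} MPH")
print(f"80,000 ft, C_L=9.7:  ~{stall_speed_mph(M, S, 9.7, RHO_80K):.0f} MPH")
```

Note that with these assumed numbers, even C_L = 9.7 leaves a stall speed near 200 MPH in the thin air at altitude; a 45 to 75 MPH loiter would need the extra pressure differential from the burn-nozzle vacuum described above, which the plain lift equation does not capture.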

At the same time, it is capable of reducing the angle of attack, reshaping the wing into a thin, nearly symmetric low-camber profile, and then sweeping the delta wing into a small fraction of its extended size so that it can achieve Mach 15 under scramjet power. For landing, takeoff, and subsonic flight, it can adjust the wing for optimum fuel or performance efficiency while using the conventional jet engines.

My cohorts at NRL tell me that the new version of the Aurora is now making flights from the Woomera Test Range in the outback of South Australia to Johnston Atoll (the newest test flight center for black-ops aircraft and ships) – a distance of 5,048 miles – in just over 57 minutes, which includes the relatively slow-speed climb to 65,000 feet. The Aurora then orbited over Johnston Atoll for 5 ½ hours before flying back to Woomera. In another test, the Aurora left Woomera loaded with fuel and a smart bomb. It flew to Johnston Atoll and orbited for 7 hours before a drone target ship was sent out from shore. It was spotted by the Aurora pilot, bombed with the laser-guided bomb, and then the pilot returned to Woomera.
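The distance and time above imply an impressive average speed. A quick arithmetic check (the ~660 MPH figure for the speed of sound in the upper atmosphere is an assumption on my part):

```python
# Sanity check on the ferry-flight claim: 5,048 miles in 57 minutes.
# The ~660 MPH speed of sound at altitude is an assumed round number.
distance_mi, minutes = 5048, 57
avg_mph = distance_mi / (minutes / 60)
print(f"average speed ~{avg_mph:.0f} MPH, roughly Mach {avg_mph / 660:.1f}")
```

That works out to an average over 5,300 MPH, around Mach 8 – and since the average includes the slow climb, the cruise segment would have to be faster still, consistent with the Mach 10+ engine tests described earlier.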

I was also told that at least three of the precision strikes on Al Qaeda hideouts were, in fact, hits by the Aurora that were then credited to a UAV in order to maintain the cover.

The Aurora is the fastest, the slowest, and the highest-altitude spy aircraft ever made, and if the pilots don’t make a mistake, you may never see it.

Whack-a-Mole comes to the Battlefield


  An old idea has been updated and brought back in the latest military weapon system.  Back in Vietnam, the firebases and forward positions were under constant sneak attack from the Vietcong under the cloak of night.  The first response to this was what they called the Panic Minute.  This was a random minute, chosen several times per day and night, in which every soldier would shoot his weapon for one full minute.  They would shoot into the jungle without having any particular target.  We know it worked sometimes because patrols would find bodies just beyond the edge of the clearing.  But it also failed many times, and firebases were being overrun on a regular basis.

  The next response was Agent Orange.  It was originally called a “defoliant” and was designed just to make the trees and bushes drop their leaves.  Of course, the actual effect was to kill all plant life, often leaving the soil infertile for years afterward.  They stopped using it when they began to notice that it was not particularly good for humans either.  It acted as a neurotoxin, causing all kinds of problems in soldiers who were sprayed or who walked through it.

  The third and most successful response to these sneak attacks was a top secret program called Sentry.  Remember when this was – in the mid-to-late 60’s and early 70’s.  Electronics was not like it is now.  The Walkman, which was simply a battery-operated portable cassette player, was not introduced until 1979.  We were still using 8-track cartridge tapes and reel-to-reel recorders.  All TVs used tubes, and the concept of integrated circuits was in its infancy.  Really small spy cameras were about the size of a pack of cigarettes, and really small spy-type voice transmitters were about half that size.  Of course, like now, the government and the military had access to advances that had not yet been introduced to the public.

  One such advance was the creation of the sensors used in the Sentry program.  They started with a highly sensitive vibration detector.  We would call them geophones now, but back then they were just vibration detectors.  Then they attached a high frequency (VHF) transmitter that would send a clicking sound in response to the detectors being activated by vibrations.

The first version of this was called the PSR-1 Seismic Intrusion Detector – and it is fully described on several internet sites.  This was a backpack-sized device connected to geophones the size of “D” cell batteries.  It worked and proved the concept, but it was too bulky and required the sensors to be connected by wires to the receiver.  The next version was much better.


What was remarkable about the next attempt was that they were able to embed the sensor, transmitter and batteries inside a package of hard plastic, coated on the outside with a flat tan or brown irregular surface.  All this was about the size of one penlight battery.  This gave them the outward appearance of being just another rock or dirt clod, and it was surprisingly effective.  These “rocks” were molded into a number of unique shapes depending on the transmitting frequency.


The batteries were also encased in the plastic, and the unit was totally sealed.  It was “on” from the moment of manufacture until the batteries died about 2 months later.  A box of them contained 24 sensors using 24 different frequencies and 24 different click patterns, and they were shipped in crates of 48 boxes.  The receiver was a simple radio with what looked like a compass needle on it.  It was an adaptation of the RFDF (radio frequency direction finder) used on aircraft.  It would point the needle toward an active transmitter and feed the clicking to its speaker.


In the field, a firebase would scatter these rocks in the jungle around the firebase, keeping a record of the direction in which each different frequency of rock was thrown from the base.  All of the No. 1 rocks from 6 to 10 boxes were thrown in one direction.  All of the No. 2 rocks were thrown in the next direction, and so on.  The vibration detectors picked up the slightest movement within a range of 10 to 15 meters (30-50 feet).  The firebase guards would set up the receiver near the middle of the sensor deployment and monitor it 24 hours a day.  When it began clicking and pointing in the direction of the transmitting sensors, the guard would call for a Panic Minute directed that way.  It was amazingly effective.


In today’s Army, they call this Geophysical MASINT (measurement and signature intelligence), and the devices have not actually changed much.  The “rocks” still look like rocks, but now they have sensors in them other than just seismic.  Now they can detect specific sounds, chemicals and light and can transmit more than just clicks to computers.  The received quantitative data is fed into powerful laptop computers and can be displayed as fully analyzed, in-context information with projections of what is happening.  It can even recommend what kind of response to take.


These sensor “rocks” are dispersed at night by UAVs or dropped by recon troops and are indistinguishable from local rocks.  Using multiple sensors and reception from several different rocks, it is possible to locate the source of the sensor readings to within a few feet.  This is much the same as the way the phone companies can track your location using triangulation from multiple cell towers.  Using only these rocks, accuracy can be brought to within ten feet or less, but when all this data is integrated into the Combat Environmental Data (SID) network, targets can be identified, confirmed and located to within 2 or 3 feet.
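The location math behind this is ordinary trilateration: with range estimates from three sensors at known positions, the circle equations can be linearized and solved directly.  Here is a minimal sketch with made-up coordinates in meters (none of this is from any actual SID software):

```python
import math

# Hedged sketch of trilateration: given ranges from three sensor "rocks"
# at known positions, subtract the circle equations pairwise to get a
# 2x2 linear system for the source position. Coordinates are invented.

def trilaterate(p1, r1, p2, r2, p3, r3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting circle 1 from circles 2 and 3 cancels the x^2+y^2 terms.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A footstep at (12, 7) meters, heard by rocks at three known spots:
src = (12.0, 7.0)
rocks = [(0.0, 0.0), (30.0, 0.0), (0.0, 25.0)]
ranges = [math.dist(src, p) for p in rocks]
print(trilaterate(rocks[0], ranges[0], rocks[1], ranges[1],
                  rocks[2], ranges[2]))  # recovers (12.0, 7.0)
```

Real sensors give noisy ranges, so a fielded system would fuse many rocks with least squares, but the geometry is the same as cell-tower positioning.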


What the Army has done with all this data is create a near-automated version of Whack-a-Mole by integrating the use of artillery and the Digital Rifle System (DRS) into the SID and rock sensor network.  The result is the ability to set up a kill zone (KZ) that can be as big as 30 miles in diameter.  This KZ is sprinkled with the sensor rocks and the AIR systems of the DRS and linked by the SID network to strategically placed DRS rifles and digitally controlled artillery.  When these various systems and sensors are all in place, the Army calls it a WAK zone (pronounced “Whack”) – hence the nickname Whack-a-Mole.


The WAK zone computers are programmed with recognition software for specifically targeted people, sounds, chemicals and images that constitute a confirmed kill target.  When the WAK zone computers make that identification, they automatically program the nearest DRS rifle or the appropriate artillery piece to fire on the target.  For now, the actual fire command is still left to a person, but the system is fully capable of a fully automatic mode.  In several tests in Afghanistan, it has not made any identification errors, and the computerized recommendation to shoot has always been confirmed by a manual entry from a live person.


Studies and contractors are already working on integrating UAVs into the sensor grids so that KZs of hundreds of miles in diameter can be defined.  The UAVs would provide not only aerial sensors for visual, IR and RF detection but would also carry the kill weapon.

  Whack-a-Mole comes to the battlefield!


Untethered Planets Are Not What They Seem


Two seemingly unrelated recent discoveries were analyzed by a group at NASA with some surprising and disturbing implications.  These discoveries came from a new trend in astronomy and cosmology of looking at “voids”.

  The trend is to look at areas in the sky that appear to not have anything there.  This is being done for three reasons. 


(1) In 2009, the Hubble was trained on what was thought to be an empty hole in space in which no objects had ever been observed.  The picture used the recently improved Wide Field and Planetary Camera #2 to make a Deep Field image.  The image covered 2.5 arc minutes – about the width of a tennis ball as seen from 100 meters away.  The 140.2-hour exposure resulted in an image containing more than 3,000 distinct galaxies at distances going out to 12.3 billion light years.  All but three of these were unknown before the picture was taken.  This was such an amazing revelation that this one picture has its own Wikipedia page (Hubble Deep Field), and it altered our thinking for years to come.
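The tennis-ball comparison is easy to check with the small-angle approximation (the ~6.7 cm ball diameter is an assumption):

```python
import math

# Checking the angular-size comparison: a tennis ball (~6.7 cm diameter,
# an assumed figure) seen from 100 meters, expressed in arc minutes.
def angular_size_arcmin(diameter_m, distance_m):
    # Small-angle approximation: angle (rad) ~= diameter / distance.
    return math.degrees(diameter_m / distance_m) * 60

print(f"~{angular_size_arcmin(0.067, 100.0):.1f} arc minutes")
```

That comes out to about 2.3 arc minutes, close to the 2.5 arc minute field quoted above, so the comparison holds up.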


(2) The second reason is that this image, and every other image or closer examination of a void, has produced new and profound discoveries.  Observations using radio frequencies, infrared, UV and all the other wavelengths we have cameras, filters and sensors to detect have resulted in new findings every time they have been aimed at these “voids”.


(3) In general, the fields of astronomy and cosmology have been getting crowded, with many more researchers than there are telescopes and labs to support them.  Hundreds of scientists in these fields do nothing but comb through the images and data of past collections to find something worth studying.  Much of that data has been reexamined hundreds of times, and there is very little left to discover in it.  These examinations of voids have created a whole new set of raw data that can be examined from dozens of different perspectives by all these extra scientists looking to make a name for themselves.


To that end, Takahiro Sumi and his team at Osaka University recently examined one of these voids and found 10 Jupiter-sized planets, but the remarkable aspect is that these planets were “untethered” to any star or solar system.  They were not orbiting anything.  In fact, they seem to be moving in random directions at relatively high speeds, and 8 of the 10 are actually accelerating.  Takahiro Sumi speculates that these planets might be the result of a star that exploded or collided, but that is just a guess.


In an unrelated study at the radio telescope array in New Mexico, Albert Swenson and Edward Pillard announced that they had found a number of anomalous RF and infrared emissions coming from several areas of space that fall into the category of voids.  One of the void areas with one of the strongest signals was the same area that Takahiro Sumi had studied.  Their study was unique because they cross-indexed a number of different wavelength measurements of the same area and found very weak, moving points of infrared emission that appeared to be stronger sources of RF emission, with an unidentified energy emission in the 1.5 to 3.8 MHz region.  This study produced a great deal of measurement data but drew very few conclusions about what it meant.


The abundance of raw data was ripe for one of those many extra grad students and scientists to examine and correlate to something.  The first to do so was Eric Vindin, a grad student doing his doctoral thesis on the arctic aurora.  He was examining something called the MF-bursts in the auroral roar – an attempt to find the explicit cause of certain kinds of auroral emissions.  What he kept coming back to was that there was a high frequency component present in the spectrograms of the magnetic field fluctuations that were expressed at significantly lower frequencies.  Here is part of his conclusion:


“There is evidence that such waves are trapped in density enhancements in both direct measurements of upper hybrid waves and in ground-level measurements of the auroral roar for an unknown fine frequency structure which qualitatively matches and precedes the generation of discrete eigenmodes when the Z-mode maser acts in an inhomogeneous plasma characterized by field-aligned density irregularities.  Quantitative comparison of the discrete eigenmodes and the fine frequency structure is still lacking.”


To translate that for real people to understand, Vindin is saying that he found a highly modulated high-frequency (HF) signal – what he called a “fine frequency structure” – embedded in the magnetic field fluctuations that make up and cause the background visual emissions we know as the Auroral Kilometric Radiation (AKR).  He could cross-index these modulations of the HF RF to changes in the magnetic field on a gross scale but was not able to identify the exact nature or source of the higher frequencies.  He did rule out that the HF RF was coming from Earth or its atmosphere.  He found that it was in the range from 1.5 to 3.8 MHz.  Vindin also noted that the HF RF emissions were very low power compared to the AKR and occurred slightly in advance (sooner) of the changes in the AKR.  His study, published in April 2011, won him his doctorate and a job at JPL in July of 2011.


Vindin did not extrapolate his findings into a theory or even a conclusion, but the obvious implication is that these very weak HF RF emissions are causing the very large magnetic field changes in the AKR.  If that is true, then it is a cause-and-effect that has no known correlation in any other theory, experiment or observation.


Now we come back to NASA and two teams of analysts, led by Yui Chiu and Mather Schulz, working as hired consultants to the Deep Space Mission Systems (DSMS) within the Interplanetary Network Directorate (IND) of JPL.  Chiu’s first involvement was to publish a paper critical of Eric Vindin’s work.  He went to great effort to point out that the relatively low frequency range of 1.5 to 3.8 MHz is so low in energy that it is highly unlikely to have extraterrestrial origins, and it is even more unlikely that it would have any effect on the earth’s magnetic field.  This was backed by a lot of math and physics showing that such a low frequency could not travel from outside the earth and still have enough energy to do anything – much less alter a magnetic field.  He showed that there is no known science that would explain how an RF emission could alter a magnetic field.  Chiu pointed out that NASA uses UHF and SHF frequencies with narrow-beam antennas and extremely slow modulations to communicate with satellites and space vehicles because it takes the higher energy in those much higher frequencies to travel the vast distances of space.  It also takes very slow modulations to send any reliable intelligence on those frequencies.  That is why it often takes several days to send a single high-resolution picture from a space probe.  Chiu also argued that the received energies from our planetary vehicles are about as strong as a cell phone transmitting from 475 miles away – a power in the nanowatt range.  Unless the HF RF signal originated from an unknown satellite, it could not have come from some distant source in space.
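The “cell phone from 475 miles” comparison can be roughed out with the standard Friis free-space link budget.  The transmit power, frequency, and the 60 dB receiving-dish gain below are my own illustrative assumptions, not Chiu’s numbers:

```python
import math

# Hedged sketch of a Friis free-space link budget. Transmit power,
# carrier frequency, and the receiving antenna gain are assumptions
# chosen only to illustrate the nanowatt-scale claim.

def received_power_w(pt_w, gain_tx, gain_rx, freq_hz, dist_m):
    lam = 3.0e8 / freq_hz                       # wavelength in meters
    return pt_w * gain_tx * gain_rx * (lam / (4 * math.pi * dist_m)) ** 2

d = 475 * 1609.34                               # 475 miles in meters
pr = received_power_w(0.5, 1.0, 1.0e6, 1.9e9, d)  # 0.5 W phone, 60 dB dish
print(f"received power ~{pr * 1e9:.2f} nW")
```

With these assumptions the received power lands around a tenth of a nanowatt – the same order of magnitude as the comparison in the text, and a reminder of how faint deep-space signals are.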


The motivation for this paper by Chiu appears to have been a professional disagreement with Vindin shortly after Vindin came to work at JPL.  In October of 2011, Vindin published a second paper about his earlier study in which he addressed most of Chiu’s criticisms.  He was able to show that the HF RF signal was received by a polar orbiting satellite before it was detected at an earth-bound antenna array.  The antenna he was using was a modified facility that was once part of the Defense Early Warning (DEW) line of massive (200-foot) movable dish antennas installed in Alaska.  The DEW line signals preceded but appeared to be synchronized with the auroral field changes.  This effectively proved that the signal was extraterrestrial.


Vindin also tried to address the nature of the HF RF signal and its modulations.  What he described was a very unusual kind of signal that the military has been playing with for years.


In order to reduce the possibility of a radio signal being intercepted, the military uses something called “frequency agility”.  This is a complex technique that breaks up the signal being sent into hundreds of pieces per second and then transmits each piece on a different frequency.  The transmitter and receiver are synchronized so that the receiver jumps its tuning to match the transmitter’s changes in transmission frequency.  The jumps appear random but actually follow a coded algorithm.  If someone is listening on any one frequency, they will hear only background noise with very minor and meaningless blips, clicks and pops.  Because a listener has no way of knowing where the next bit of the signal is going to be transmitted, it is impossible to rapidly tune a receiver to intercept these kinds of transmissions.  Frequency-agile systems are actually in common usage.  You can even buy cordless phones that use this technique.
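The core of the trick is that the transmitter and receiver share a secret seed, so both can generate the identical pseudo-random hop sequence.  Here is a minimal sketch; the channel plan and seed are invented for illustration:

```python
import random

# Hedged sketch of frequency hopping: both ends seed the same PRNG,
# so the receiver can follow every jump while an eavesdropper on any
# single channel hears only brief blips. Channel plan is invented.

CHANNELS = [2.400 + 0.001 * n for n in range(80)]  # 80 assumed channels (GHz)

def hop_sequence(shared_seed, n_hops):
    rng = random.Random(shared_seed)               # deterministic per seed
    return [rng.randrange(len(CHANNELS)) for _ in range(n_hops)]

tx_hops = hop_sequence(shared_seed=0xC0FFEE, n_hops=8)
rx_hops = hop_sequence(shared_seed=0xC0FFEE, n_hops=8)

assert tx_hops == rx_hops  # synchronized: the receiver follows every jump
print([f"{CHANNELS[i]:.3f} GHz" for i in tx_hops])
```

Without the seed, an eavesdropper sees each channel occupied for only a fraction of a second with no way to predict the next one, which is exactly the property described above.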


As complex as frequency agility is, there are very advanced, very wide-band receivers and computer processors that can reconstruct an intelligent signal out of the chopped-up emission.  For that reason, the military has been working on the next version of agility.


In a much more recent and much more complicated use of frequency agility, they are attempting to combine it with agile modulation.  This method breaks up both the frequency and the modulation of the transmitted signal intelligence into agile components.  The agile frequency modulation (FM) shifts from the base frequency to each of several sidebands and to first and second tier resonance frequencies, as well as shifting the intermediate frequency (IF) up and down.  The effect of this is to make it completely impossible to locate or detect any signal intelligence at all in an intercepted signal.  It all sounds like random background noise.


Although it is impossible to reconstruct an agile frequency signal that is also modulation agile (called “FMA”), it is possible, with very advanced processors, to detect that an FMA-modified signal is present.  This uses powerful math algorithms running over massive amounts of recorded data, and even with powerful computers the analysis is not resolved until many hours after the end of the transmission.  Even then, it can only confirm to a high probability that an FMA signal is present, without providing any indication of what is being sent.
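The actual algorithms are obviously not public, but a toy illustration can show the underlying idea: detecting that *something* is present without recovering any content.  The noise model and threshold below are entirely my own assumptions – the aggregate received power is compared against a calibrated noise floor, and a weak, structureless spread emission nudges the total power up even though no single frequency or sample shows anything.

```python
import random

def mean_power(samples):
    """Average power of a recording - one of the few statistics that
    survives when frequency and modulation are both scrambled."""
    return sum(s * s for s in samples) / len(samples)

random.seed(1)
N = 20000

# Calibrated background: pure receiver noise.
noise = [random.gauss(0.0, 1.0) for _ in range(N)]

# Background plus a weak, structureless spread emission, modeled here as
# an extra random component (purely an illustrative stand-in).
recording = [random.gauss(0.0, 1.0) + 0.3 * random.gauss(0.0, 1.0)
             for _ in range(N)]

# No single sample reveals anything, but the aggregate power of the
# recording sits measurably above the calibrated noise floor.
excess = mean_power(recording) - mean_power(noise)
```

The detector says only "there is probably a hidden emission here" – it recovers nothing of what was sent, which matches the description above.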


This makes it ideal for use on encrypted messages, but even our best labs have been able to do it only when the transmitter and the receiver are physically wired together to allow them to synchronize their agile reconstruction correctly.  The NRL is experimenting with mixes of FMA and non-FMA and digital and analog emissions all being sent at the same time, but it is years away from being able to deploy a functional FMA system.


I mention all this because, as part of Vindin’s rebuttal, he was able to secure the use of the powerful NASA signal-processing computers to analyze the signals he recorded and was able to confirm that there is a 91% probability that the signal is FMA.  This has, of course, been a huge source of controversy because it appears to indicate that we are detecting a signal that we do not have the technology to create.  The NRL and NSA have been following all this with great interest and have independently confirmed Vindin’s claims.


What all this means is that we may never be able to reconstruct the signal to the point of understanding or even seeing text, images or other intelligence in it, but what it does absolutely confirm is that the signal came from an intelligent being and was created specifically for interstellar communications.  There is not even a remote chance that anything in the natural universe could have created these signals out of natural processes.  It has to be the deliberate creation of intelligent life.


What came next was a study by Mather Schulz that is still classified.  I had access to it because of my connections at NRL and because I have a lot of history in R&D in advanced communications techniques.  Schulz took all these different reports and put them into a very logical and sequential argument that these untethered planets were not only the source of the FMA signals but that they are not planets at all.  They are planet-sized spaceships.


Once he came to this conclusion, he went back to each of the contributing studies to find further confirming evidence.  In the Takahiro Sumi study from Osaka University and in the Swenson and Pillard study, he discovered that they had detected that the infrared emissions were much stronger on the side away from the line of travel and that there was a faint trail of infrared emissions behind each of the untethered planets.


This would be consistent with the heat emissions from some kind of propulsion system pushing the spaceship along.  What form of propulsion would be capable of moving a planet-sized spaceship is unknown, but the fact that we can detect the IR trail at such great distances indicates that it is producing a very large trail of heated or ionized particles extending for a long distance behind the moving planets.  The fact that he found this on 8 of the 10 untethered planets was positive, but then he also noted that the two that do not have these IR emissions are the only ones that are not accelerating.  This would also be consistent with heat emissions from a propulsion system that is turned off while the spaceship is coasting.


The concept of massive spaceships has always been one of the leading solutions to sub-light-speed interplanetary travel.  Such craft have been called “generation ships” – vessels capable of supporting a population large enough, for a long enough period of time, to allow multiple generations of people to survive in space.  This would allow survival for the decades or centuries needed to travel between galaxies or star systems.  Once a planet is free from its gravitational tether to its star, it would be free to move in open space.  Replacing the light and heat from their sun is not a difficult technological problem when you consider the possible use of thermal energy from the planet’s core.  Of course, a technology that has achieved this level of advanced science would probably find numerous other viable solutions.


Schulz used a combination of the Very Large Array of interferometric antennas at Socorro, New Mexico, along with the systems at Pune, India and Arecibo, PR, to collect data and then had the bank of Panther Cray computers at NSA analyze it to determine that the FMA signals were coming from the region of space that exactly matched the void measured and studied by Takahiro Sumi.  NSA was more than happy to let Schulz use their computers to prove that they had not dropped the ball and allowed someone else on earth to develop a radio signal that they would not be able to intercept and decipher.


Schulz admitted that he cannot narrow down the detection to a single untethered planet (or spaceship) but he can isolate it to the immediate vicinity of where they were detected.  He also verified the Swenson and Pillard finding that other voids had similar but usually weaker readings.  He pointed out that there may be many more signal sources from many more untethered planets, but that outside of these voids the weak signals were being deflected or absorbed by intervening objects.  He admitted that finding the signals in other voids did not confirm that they also held untethered planets, but he pointed out that it does not rule out that possibility either.


Finally, Schulz set up detection apparatus to simultaneously measure the FMA signals using the worldwide network of radio telescopes while at the same time taking magnetic, visual and RF readings from the Auroral Kilometric Radiation (AKR).  He got the visual images with synchronized high speed video recordings from the ISIS in cooperation with the Laboratory for Planetary Atmospherics out of the Goddard SFC.


Getting NSA’s help again, he was able to identify a very close correlation between these three streams of data, showing that it was, indeed, the FMA signal originating from these untethered planets that preceded and apparently caused corresponding changes in the lines of magnetic force made visible in the AKR.  The visual confirmation was not in shape or form changes in the AKR but in color changes that occurred at a much higher frequency than the apparent movements of the aurora lights.  What was being measured was the increase and decrease in the flash rate of individual visual spectrum frequencies.  Despite the high speed nature of the images, they were still only able to pick up momentary fragments of the signal – sort of like catching a single frame of a movie every 100 or 200 frames.  Despite this intermittent nature of the visual measurements, what was observed synchronized exactly with the magnetic and RF signals – giving a third source of confirmation.  Schulz offered only shallow speculation that the FMA signal is, in fact, a combined agile frequency and modulation signal that includes both frequencies and modulation methods far beyond our ability to decipher.


This detection actually supports a theory that has been around for years – that a sufficiently high frequency, modulated in harmonic resonance with the atomic-level vibrations of the solar wind – the charged particles streaming out of the sun that create the aurora at the poles – can be used to create harmonics at very large wavelengths, essentially creating slow condensations and rarefactions in the AKR.  This is only a theory based on some math models that seem to make it possible, but the control of the frequencies involved is far beyond any known or even speculated technology, so it is mostly dismissed.  Schulz mentions it only because it is the only known reference to a possible explanation for the observations.  It has some validity because the theory’s math model exactly maps to the observations.


Despite the low energy, low frequency signal, and despite the fact that we have no theory or science that can explain it, the evidence was conclusive and irrefutable.  Those untethered planets appear to be moving under their own power and are emitting some unknown kind of signal that is somehow able to modulate our entire planet’s magnetic field.  The conclusion that these are actually very large spaceships, containing intelligent life capable of creating these strange signals, seems to be unavoidable.


The most recent report from Schulz was published in late December 2011.  The fallout and reactions to all this are still in their infancy.  I am sure they will not make this public for a long time, if ever.  I have already seen and heard about efforts to work on this at several DoD and private classified labs around the world.  I am sure this story is not over.


We do not yet know how to decode the FMA signals and we don’t have a clue how they are affecting the AKR, but our confirmed and verified observations have pointed us to only one possible conclusion – we are not alone in the universe, and whoever is out there has vastly greater technology and intelligence than we do.


IBAL – The latest in Visual Recon


The latest addition to reconnaissance is a new kind of camera that takes a new kind of picture.  The device is called a plenoptic camera or a light-field camera.  Unlike a normal camera that takes a snapshot of a 2D view, the plenoptic camera uses a microlens array to capture a 4D light field.  This is a whole new way of capturing an image that actually dates back to 1992 when Adelson and Wang first proposed the design.  Back then, the image was captured on film with limited success but it did prove the concept.  More recently, a Stanford University team built a 16 megapixel electronic camera with a 90,000-microlens array that proved that the image could be refocused after the picture is taken.   Although this is technology that has already made its way into affordable consumer products, as you might expect, it has also been extensively studied and applied to military applications.


To appreciate the importance and usefulness of this device, you need to understand what it can do.  If you take a normal picture of a scene, the camera captures one set of image parameters that include focus, depth of field, light intensity, perspective and a very specific point of view.  These parameters are fixed and cannot change.  The end result is a 2-dimensional (2D) image.  What the light field camera does is capture all of the physical characteristics of the light of a given scene so that a computer can later recreate the image in such detail that it is as if the original scene is totally recreated in the computer.  In technical terms, it captures the radiance – watts per steradian per square meter – along each ray of light.  This basically means that it captures and can quantify the wavelength, polarization, angle, radiance, and other scalar and vector values of the light.   The result is a five-dimensional function that a computer can use to recreate the image as if you were looking at the original scene at the time the photo was taken.


This means that after the picture is taken, you can refocus on different aspects of the image, and you can zoom in on different parts of it without a significant loss of resolution.  If the light field camera is capturing a moving video of a scene, then the computer can render a perfect 3-dimensional representation of the scene.  For instance, using a state-of-the-art light field camera taking an aerial light field video of a city from a UAV drone at 10,000 feet altitude, the data could be used to zoom in on details within the city such as the text of a newspaper that someone is reading or the face of a pedestrian.  You could recreate the city in a highly dimensionally accurate 3D rendering that you could then traverse from a ground-level perspective in a computer model of the city.  The possibilities are endless.
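The “refocus after the fact” trick is real and simple enough to sketch.  The toy below is my own illustration – a 1-D light field rather than a real 4-D one – using the standard shift-and-add method: each sub-aperture view is shifted in proportion to its position in the aperture and the views are averaged, and the shift factor `alpha` selects the virtual focal plane after the exposure is over.

```python
def refocus(lightfield, alpha):
    """Shift-and-add refocusing over a toy 1-D light field.
    lightfield[u][x] = radiance seen at pixel x through sub-aperture u.
    alpha chooses the virtual focal plane after the shot is taken."""
    n_u = len(lightfield)
    width = len(lightfield[0])
    out = [0.0] * width
    for u, view in enumerate(lightfield):
        # Each sub-aperture view is shifted in proportion to its offset
        # from the aperture center, then all views are averaged.
        shift = round((u - n_u // 2) * alpha)
        for x in range(width):
            out[x] += view[(x + shift) % width]
    return [v / n_u for v in out]

# A point source whose image lands one pixel apart in each sub-aperture
# view -- i.e., it is out of focus at the sensor:
lf = [[1.0 if x == (5 + (u - 2)) % 10 else 0.0 for x in range(10)]
      for u in range(5)]

blurred = refocus(lf, 0.0)  # as-shot focus: energy smeared over 5 pixels
sharp = refocus(lf, 1.0)    # virtual refocus: all energy back at pixel 5
```

A real plenoptic pipeline does this in two spatial dimensions with interpolation instead of whole-pixel shifts, but the principle is the same.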


As usual, it was NRL that started the soonest and has developed the most useful and sophisticated applications for the light field camera.  Because this camera creates its most useful results when it is used as a video camera, the NRL focused on that aspect of it early on.  The end result was the “IBAL” (pronounced “eyeball”) – for Imaging Ballistic Acquisition of Light.


The IBAL is a micro-miniature focused plenoptic camera that uses a masked synthetic aperture in front of an array of 240,000 microlenses that together capture a 24 megapixel video image.  This is accomplished by a massively overclocked processor that takes just 8 seconds of video at a frame rate of 800 frames per second.  This entire device fits into the nose of an 80 mm mortar round or in the M777 155mm howitzer.  It can also be fired from a number of other artillery and shoulder-launched weapons as a sabot round.  The shell is packed with a powerful lithium battery designed to provide up to 85 watts of power for up to two minutes from ballistic firing to impact.  The round has a gyro-stabilized fin control that keeps the camera pointed at the target in one of two modes.  The first mode is to fire the round at a very high angle – 75 to 87 degrees up.  This gives the round a very steep trajectory that allows it to capture its images as it descends from a few thousand feet of altitude.  Since the resolution is very high, it captures its images as soon as it is aligned and pointed at the ground.  The second mode is to fire the IBAL at a low trajectory – 20 to 30 degrees elevation.  In this mode the gyro keeps the camera, pointing through a prism, aimed at the ground as the round traverses the battle zone.  In both cases, it uses the last few seconds of flight to transmit a compressed data burst on a UHF frequency to a nearby receiver.  The massive amount of data is transmitted using the same kind of compression algorithm used by the intelligence community for satellite reconnaissance imagery.   One final aspect of the ballistic round is that it has a small explosive in the back that assures it is completely destroyed upon impact.  It even has a backup phosphorous envelope that will ignite and melt all of the electronics and optics if the C4 does not go off.


Since the object is to recon and not attack, the actual explosive is quite small and, when it goes off, the explosion is almost entirely contained inside the metal casing of the round.  Using the second, low-trajectory mode of firing, the round passes over the battle zone and lands far beyond without attracting much attention.  In a more active combat environment, the high-trajectory mode would attract little attention.  If noticed at all, it would appear to be a dud.


The data is received by a special encrypted digital receiver that decodes it and feeds it into the IBAL processor station, which is a powerful laptop that can be integrated into a number of other visual representation systems including 3D imaging projectors, 3D rendering tables and virtual-reality goggles.  The data can be used to recreate the captured images in a highly detailed 3D model that is so accurate that measurements can be taken from the image to within one-tenth of an inch.


The computer is also able to overlay any necessary fire-control grid onto the image so that precise artillery fire can be vectored to a target.  The grid can be a locally created reference or simply very detailed GPS-derived latitude and longitude.  As might be expected, this imagery information is fully integrated into the CED (combat environmental data) information network and into the DRS (digital rifle system) described in my other reports.   This means that within seconds of firing the IBAL, the 3D image of the combat zone is available on the CED network for all the soldiers in the field to use.  It is also available for snipers to plan out their kill zones and for the artillery to fine-tune their fire control.  Since it sees the entire combat zone from the front, overhead and back, it can be used to identify, locate and evaluate potential targets such as vehicles, mortar positions, communications centers, enemy headquarters and other priority targets.


Using this new imaging system in combination with all the other advances in surveillance and reconnaissance that I have described here and others that I have not yet told you about, there is virtually no opportunity for an enemy to hide from our weapons.

“SID” Told Me! The Newest Combat Expert

Sensor fusion is one of those high tech buzzwords that the military has been floating around for nearly a decade. It is supposed to describe the integration and use of multiple sources of data and intelligence in support of decision management on the battlefield or combat environment. You might think of a true sensor fusion system as a form of baseline education. As with primary school education, the information is not specifically gathered to support a single job or activity but to give the end user the broad awareness and knowledge to be able to adapt and make decisions about a wide variety of situations that might be encountered in the future. As you might imagine, providing support for “a wide variety of situations that might be encountered in the future” takes a lot of information, and the collation, processing and analysis of that much information is one of the greatest challenges of a true sensor fusion system.


One of the earliest forms of sensor fusion was the Navy Tactical Data System or NTDS. In its earliest form, it allowed every ship in the fleet to see on their radar scopes the combined view of every other ship in the fleet. Since the ships might be separated by many miles, this effectively gave a radar umbrella that was hundreds of square miles in every direction – much further than any one ship could attain. It got a big boost when they added the radar of airborne aircraft flying Combat Air Patrol (CAP) at 18,000 feet altitude. Now every ship could see as if it had radar that looked out hundreds of miles and covered thousands of square miles.

In the latest version, now called the Cooperative Engagement Capability (CEC), the Navy has also integrated fire control radar so that any ship, aircraft or sub can fire on a target that can be seen by any other ship, aircraft or sub in the fleet, including ships with different types of radars – such as X-band, MMWL, pulsed Doppler, phased array, aperture synthesis (SAR/ISAR), FM-CW, even sonar. This allows a guided missile cruiser to fire a missile at a target that it physically cannot see but that can be seen by some other platform somewhere else in the combat arena. Even if a ship has no radar of its own at all, it can benefit from the CEC system and “see” what any other ship can see with their radar.  That is sensor fusion.
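The CEC idea reduces to a simple principle: the fleet’s picture is the union of every platform’s local picture.  A bare-bones sketch of that principle (the positions, ranges and contacts are invented illustration values):

```python
from math import hypot

def visible(own_pos, contacts, radar_range):
    """The contacts one platform can hold with its own radar."""
    return {c for c in contacts
            if hypot(c[0] - own_pos[0], c[1] - own_pos[1]) <= radar_range}

def fused_picture(ships, contacts):
    """CEC in miniature: every platform's local picture is merged, so a
    shooter can engage a track held only by someone else's sensor."""
    picture = set()
    for pos, radar_range in ships:
        picture |= visible(pos, contacts, radar_range)
    return picture

ships = [((0.0, 0.0), 10.0), ((50.0, 0.0), 10.0)]   # two platforms
contacts = {(5.0, 0.0), (55.0, 0.0), (100.0, 0.0)}  # actual targets

local = visible(ships[0][0], contacts, ships[0][1])  # first ship alone
fused = fused_picture(ships, contacts)               # the shared picture
```

The first ship alone holds only the nearby contact; on the fused picture it also “sees” the contact held by the distant platform, which is exactly the engagement scenario described above.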


The end result, however, is a system that supports a wide variety of situations, from the obvious combat defensive tactics and weapons fire control to navigation to air-sea rescue. Each use takes from the CEC system the portion of the total information available that it needs for its specific situation.


The Army has been trying to incorporate that kind of sensor integration for many years. So far, they have made strides in two areas: the use of UAVs (unmanned aerial vehicles) and helmet mounted systems.  Both of these gather observed information at some remote command post where it is manually processed, analyzed, prioritized and then selectively distributed to other forces in the combat area. There are dozens of minor efforts that the Army is calling sensor fusion, but each is really just a single set of sensors with a dedicated objective to feed a specific system with very specific data. An example of this is the Guardian Angel program that was designed to detect improvised explosive devices (IEDs) in Iraq and Afghanistan. Although it mixed several different types of detection devices that overlaid various imagery data, each sensor was specifically designed to support the single objective of the overall system. A true sensor fusion system gathers and combines data that will be used for multiple applications and situations.


A pure and fully automated form of this technology is sometimes referred to as multi-sensor data fusion (MSDF) and had never been achieved – until now. MSDF has been the goal of DoD for a long time – so much so that they even have a Department of Defense (DoD) Data Fusion Group within the Joint Directors of Laboratories (JDL). The JDL defined MSDF as the “multilevel, multifaceted process of dealing with the automatic detection, association, correlation, estimation and combination of data and information from multiple sources with the objective to provide situation awareness, decision support and optimum resource utilization by and to everyone in the combat environment”. That means that the MSDF must be useful not just to the command HQ and the generals or planners, but to the soldiers on the ground and the tank drivers and the helo pilots that are actively engaged with the enemy in real time – not filtered or delayed by processing or collating the data at some central information hub.


There are two key elements of MSDF that make it really hard to implement. The first is the ability to make sense of the data being gathered. Tidbits of information from multiple sensors are like tiny pieces of a giant puzzle. Each one, by itself, provides virtually no useful information and becomes useful only when combined with hundreds or even thousands of other data points to form the ultimate big picture. It takes time and processing power to do that kind of collating, and therein lies the problem. If that processing power is centrally located, then the resulting big picture is no longer available in real time and useful to an actively developing situation. Alternatively, if the processing power is given to each person in the field that might need the data, then carrying, maintaining and interpreting the big picture in the combat field environment becomes a burden for every soldier that might need it. As the quantity, diversity and complexity of the data being integrated rises, the processing power required increases at an exponential rate. The knowledge and skills demanded of the end user also rise, to the point that only highly trained experts are able to use such systems.


The second problem is the old paradox of information overload. On the one hand, it is useful to have as much information as possible to fully analyze a situation and to be ready for any kind of decision analysis that might be needed. On the other hand, any single situation might actually need only a small portion of the total data available. For instance, imagine a powerful MSDF network that can provide detailed information about everything happening everywhere in the tactical environment. If every end user had access to all of that data, they would have little use for most of it because they are interested in only the portion that applies to them. But not knowing what they will need now and in the future makes it important that they have the ability to access all of it. If you give them that ability, you complicate the processing and the training needed to use it. If you limit them to what they might need, then you limit their ability to adapt and make decisions.  A lot of data is a good thing but too much is a bad thing, and the line between those two is constantly changing.


I was a consultant to the Naval Research Labs (NRL) in a joint assignment to the JDL to help the Army develop a new concept for MSDF. When we first started, the Army had visions of a vast MSDF system that would provide everything to everyone, but when we began to examine some of the implications and limitations of such a system, it became clear that we would need to redefine their goals. After listening to them for a few weeks, I was asked to make a presentation on my ideas and advice. I thought about it for a long time and then created just three slides. The first one showed a graphic depiction of the GPS system. In front of two dozen generals and members of the Army DoD staff, I put up the first slide and then asked them to just think about it. I waited for a full five minutes. They were a room of smart people and I could see the look on their faces when they realized that what they needed was a system like the GPS system.  It provides basic and relatively simple information in a standardized format that is then used for a variety of purposes, from navigation to weapons control to location services.  The next question came quickly: what would a similar system look like for the Army in a tactical environment? That’s when I put up my next slide. I introduced them to “CED” (pronounced as “SID”).


Actually, I called it the CED (Combat Environmental Data) network. In this case, the “E” for Environment means the physical terrain, atmosphere and human construction in a designated area – the true tactical combat environment. It uses an array of sensors that already existed, which I helped develop at the NRL for the DRS – the Digital Rifle System. As you might recall, I described this system and its associated rifle, the MDR-192, in two other reports that you can read. The DRS uses a specially designed sensor called the “AIR” for autonomous information recon device. It gathers a variety of atmospheric data (wind, pressure, temperature, humidity) as well as a visual image, a laser range-finder scan of its field of view and other data such as vibrations, RF emissions and infrared scans. It also has an RF data transmitter and a modulated laser beam transmission capability. All this is crammed into a device that is 15 inches long and about 2.5 cm in diameter that is scattered, fired, air dropped or hidden throughout the target area. The AIRs are used to support the DRS processing computer in the accurate aiming of the MDR-192 at ranges out to 24,000 feet, or about 4.5 miles.


The AIRs are further enhanced by a second set of sensors called the Video Camera Sights or VCS. The VCS consists of high resolution video cameras combined with laser scanning beams whose outputs are combined in the DRS processing computer to render a true and proportional 3D image of the field of view.  The DRS computer integrates the AIR and VCS data so that an entire objective area can be recreated in fine 3D detail in computer images.  Since the area is surrounded with VCS systems and AIR sensors are scattered throughout it, the target area can be recreated so accurately that the DRS user can see almost everything in the area as if he were able to stand at almost any location within it.  The DRS user is able to accurately see and measure and ultimately target the entire area – even if he is on the other side of the mountain from it.  The power of the DRS system is the sensor fusion of this environment for the purpose of aiming the MDR-192 at any target anywhere in the target area.


My second slide showed the generals that using the AIR and VCS sensor devices, combined with one new sensor of my design, an entire tactical zone can be fully rendered in a computer. The total amount of data available is massive, but the end user would treat it like the GPS or the DRS system, pulling down only the data that is needed at that moment for a specific purpose.  That data and purpose can be in support of a wide variety of situations that may be encountered in the present or future by a wide variety of end users.


My third slide was simply a list of what the CED network would provide to the Army generals as well as to each and every fielded decision maker in the tactical area. I left this list on the screen for another five minutes and began hearing comments like, “Oh my god”, “Fantastic!” and “THAT’S what we need!”


Direct and Immediate Benefits and Applications of the CED Network

  ·        Autonomous and manned weapons aiming and fire control

  ·        Navigation, route and tactical planning, attack coordination

  ·        Threat assessment, situation analysis, target acquisition

  ·        Reconnaissance, intelligence gathering, target identity

  ·        Defense/offence analysis, enemy disposition, camouflage penetration


My system was immediately accepted and I spent the next three days going over it again and again with different levels within the Army and DoD. The only additional bit of information I added in those three days was the nature of the third device that I added to the AIR and VCS sensors.  I called it the “LOG” for Local Optical Guide. 


The LOG mostly gets its name from its appearance. It looks like a small log or a cut branch of a tree that has dried up.  In fact, great effort has gone into making it look like a natural log so that it will blend in.  There are actually seven different LOGs in appearance, but the insides are all the same.  It contains four sensor modules: (1) a data transceiver that can connect to the CED network and respond to input signals.  The transceiver sends a constant flow of images and other data but it will also collect and relay data received from other nearby sensors.  In order to handle the mixing of data, all the transmitters are FM and frequency agile – meaning that they transmit a tiny fraction of data on a VHF frequency and then hop to another frequency for the next few bits of data.  The embedded encryption keeps all the systems synchronized, but the effect is that it is nearly impossible to intercept, jam or even detect the presence of these signals; (2) six high resolution cameras that have night vision capabilities.  These cameras are located so that no matter how the LOG is placed on the ground, at least two cameras will be useful for gathering information.  The lenses of the cameras can be commanded to zoom from a panoramic wide angle to telephoto with a 6X zoom, but will default to wide angle; (3) an atmospheric module that measures wind, temperature, humidity and pressure; and (4) an acoustic and vibration sensing module with six microphones, one on each surface, that is accurate enough to give precise intensity and a crude directionality to sensed sounds.  It also has a fifth, self-destruct module that is powerful enough to completely destroy the LOG and do damage to anyone trying to dismantle it.


The LOG works in conjunction with the AIR for sound sensing of gunfire. Using the same technology that is applied in the Boomerang gunfire locator that was developed by DARPA and BBN Technologies, the CED system can locate the direction and distance to gunfire within one second of the shot.  Because the target area is covered with numerous LOG and AIR sensors, the CED gunfire locator is significantly more accurate than DARPA’s Boomerang system.
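Acoustic gunfire location of the Boomerang type comes down to time-difference-of-arrival (TDOA) math: the shot reaches each microphone at a slightly different time, and the origin is the point whose predicted time differences best match the measured ones.  The brute-force sketch below is my own simplification (fielded systems solve this in closed form, in milliseconds, and the sensor positions here are invented):

```python
from math import hypot

SPEED_OF_SOUND = 343.0  # m/s, a sea-level assumption

def arrival_times(source, sensors):
    """When the muzzle blast from `source` reaches each sensor."""
    return [hypot(source[0] - sx, source[1] - sy) / SPEED_OF_SOUND
            for sx, sy in sensors]

def locate(sensors, times, grid=200, span=100.0):
    """Brute-force TDOA fit: test candidate origins on a grid and keep
    the one whose predicted arrival-time *differences* (relative to the
    first sensor) best match the measured ones."""
    measured = [t - times[0] for t in times]
    best, best_err = None, float("inf")
    for i in range(grid + 1):
        for j in range(grid + 1):
            p = (-span / 2 + span * i / grid, -span / 2 + span * j / grid)
            pred = arrival_times(p, sensors)
            err = sum((pred[k] - pred[0] - measured[k]) ** 2
                      for k in range(len(sensors)))
            if err < best_err:
                best, best_err = p, err
    return best

# Four scattered sensors hear a shot fired from (10, 20):
sensors = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0), (30.0, 30.0)]
shot = locate(sensors, arrival_times((10.0, 20.0), sensors))
```

More sensors over-determine the fit and average out timing noise, which is why blanketing the area with LOGs and AIRs would improve on a single vehicle-mounted array.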


The total CED system consists of these three modules – LOG, AIR and VCS – and a receiving/processing module that can take the form of a laptop, a handheld or a backpack system. Although the computer processor (laptop) used in the DRS was a very sophisticated analyzer of that system’s sensor inputs, the computer processors for the CED system are substantially more advanced in many ways.  The most important difference is that the CED system is a true network that places all of the sensory data on the air in an RF-transmitted cloud of information that saturates the target area and nearby areas.  It can be tapped into by any CED processor anywhere within range of the network.  Each CED or DRS processor pulls out of the network just the information it needs for the task at hand.  To see how this works, here are some examples of the various uses of the CED system:



  Either a DRS or a CED processor can be used to support the sniper. The more traditional snipers using standard rifles will tap into the CED network to obtain highly accurate wind, temperature, pressure and humidity data as well as precise distance measurements.  Using the XM25-style HEAB munitions that are programmed by the shooter, nearly every target within the CED combat area can be hit and destroyed.  The CED computers can directly input data into the XM25/HEAB system so that the sniper does not have to use his laser range-finder to sight in the target.  He can also be directed to aim using the new Halo Sight System (HSS).  This is a modified XM25 fire control sight that uses a high resolution LCD thin-film filter that places a small blinking dot at the aim-point of the weapon.  This is possible because the CED processor can precisely place the target and the shooter and can calculate the trajectory based on sensor inputs from the LOG, AIR and VCS sensor grid of the network.  It uses lasers from the AIRs to locate the shooter and images from the VCS and LOG sensors to place the target.  The rest is just mathematical calculation of the aim point to put an HEAB or anti-personnel 25mm round onto the target.  It is also accurate enough to support standard sniper rifles, the M107/M82 .50 cal. rifle or the MDR-192.  Any of these can be fitted with the HSS sight for automated aim point calculations.
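The kind of aim-point math the CED processor would perform can be sketched roughly. The following is a minimal flat-fire, no-drag approximation of the drop and windage corrections, not the actual XM25 or HSS fire-control code; the muzzle velocity, range and wind figures are made-up inputs for illustration.

```python
import math

G = 9.81  # gravity, m/s^2

def aim_corrections(range_m, muzzle_velocity, crosswind_ms):
    """Return (drop_m, windage_m) using a flat-fire, no-drag approximation."""
    time_of_flight = range_m / muzzle_velocity
    drop = 0.5 * G * time_of_flight ** 2          # gravity drop over the flight
    windage = crosswind_ms * time_of_flight       # crude crosswind drift
    return drop, windage

# Illustrative inputs only: an 800 m shot at 900 m/s in a 4 m/s crosswind
drop, windage = aim_corrections(range_m=800, muzzle_velocity=900, crosswind_ms=4)
print(f"drop: {drop:.2f} m, windage: {windage:.2f} m")
```

A real fire-control solution would add drag, air density from the atmospheric modules, and the elevation difference between shooter and target, but the structure is the same: sensor inputs in, aim offset out.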


In the case of the MDR-192, the rifle is mounted on a digitally controlled tripod that is linked directly to the DRS or CED computer. The effect is to create an autonomous small caliber artillery weapon.  That means that an operator of a CED (or DRS) computer that has tapped into the CED network can identify a target somewhere in the covered combat arena and send that data to any one of several MDR-192 rifles that have been placed around the combat area.  Each autonomous MDR-192 has an adjustment range of 30 degrees, left and right of centerline and 15 degrees up and down.  Since the range of the typical MDR-192 is up to 24,000 feet, four rifles could very effectively cover a target area of up to four square miles.  The computer data will instruct the selected MDR-192 to aim the rifle to the required aim point – accounting for all of the ballistic and environmental conditions – and fire.  As described in the report of the MDR-192 and DRS, the system can be accessed by an operator that is remotely located from the rifles and the target area – as much as 5 miles away. 
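A quick back-of-the-envelope check of the coverage claim, using only the figures given above (a 30-degree traverse either side of centerline, i.e. a 60-degree sector, out to 24,000 feet); the simple pie-slice model is my own simplification.

```python
import math

range_miles = 24000 / 5280            # ~4.55 miles
sector_fraction = 60 / 360            # 60-degree wedge of a full circle
one_rifle = sector_fraction * math.pi * range_miles ** 2
print(f"one rifle covers about {one_rifle:.1f} sq mi")
# Even a single rifle's wedge exceeds the four-square-mile target area,
# so four rifles placed around it cover the area with heavy overlap.
```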


Recent tests of the CED system and the MDR-192 have proven their effectiveness. The only defense that the enemy has is to stay in an underground bunker.



  The CED network is the ultimate forward observer for artillery placement of smart weapons. Using the visual sensors of the LOG and VCS and the gunfire locator sensors of the LOG and AIR, any target within the entire combat arena can be very precisely located.  It can then be identified with GPS coordinates for the dropping of autonomous weapons such as a cruise missile, or it can be illuminated with a laser from a nearby AIR or MDR-192 for a smart-weapon fire-control aim point.


Even standard artillery has been linked into the CED system. A modified M777 Howitzer (155mm) can be tied in using a set of sensors strapped to the barrel that can sense its aim point to within 0.0003 degrees in three dimensions.  The CED network data is sent to a relay transmitter and then sent up to 18 miles away to the M777 crew.  The M777 is moved in accordance with some simple arrows and lights until a red light comes on, indicating that the aim point has been achieved for the designated target – then they fire.  Tests have been able to place as many as 25 rounds within a 10 foot (3 meter) radius from 15 miles away using this system.
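It is worth checking whether the 0.0003-degree barrel sensor is consistent with the 10-foot grouping claim. The geometry below is mine; the numbers are the ones quoted above.

```python
import math

range_ft = 15 * 5280                     # 15 miles, in feet
angle_err = math.radians(0.0003)         # the quoted sensor resolution
miss_ft = range_ft * math.tan(angle_err)
print(f"angular error alone: {miss_ft:.2f} ft at 15 miles")
# ~0.4 ft, so the barrel sensor is not the limiting factor in the
# 10-foot grouping; ballistic and atmospheric dispersion dominate.
```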


Intelligence and Reconnaissance

  The CED system is also ideally suited to completely define the enemy distribution and activity and covertly pre-identify targets for a later assault or barrage. The AIR and LOG systems can pick up sounds that can be matched to the LOG and VCS images and video to place and identify points of activity, vehicles and radios.  The VCS and AIR imaging capability can map movements and identify specific types of equipment, weapons and vehicles in the area.  During the battle, snipers and other gunfire can be located with the acoustic gunfire locator using the AIR and LOG sensors.  The LOG and VCS systems also have gun flash identifiers that can distinguish muzzle flash in images – even in complete darkness or the brightest daylight.
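The acoustic gunfire location described here is, at its core, time-difference-of-arrival (TDOA) multilateration: each sensor hears the shot at a slightly different time, and those differences pin down the source. Below is a toy sketch of the idea; the sensor layout and brute-force grid-search solver are illustrative stand-ins, not the real CED or Boomerang algorithm.

```python
import math

C = 343.0  # speed of sound, m/s

sensors = [(0, 0), (100, 0), (0, 100), (100, 100)]  # hypothetical LOG/AIR positions (m)
true_source = (63, 41)

def arrival_times(src):
    return [math.dist(src, s) / C for s in sensors]

measured = arrival_times(true_source)

def locate(times, step=1):
    """Brute-force grid search for the point whose TDOAs best match."""
    best, best_err = None, float("inf")
    for x in range(0, 101, step):
        for y in range(0, 101, step):
            cand = arrival_times((x, y))
            # compare differences relative to sensor 0 (absolute clock unknown)
            err = sum(((cand[i] - cand[0]) - (times[i] - times[0])) ** 2
                      for i in range(1, len(sensors)))
            if err < best_err:
                best, best_err = (x, y), err
    return best

print(locate(measured))  # → (63, 41)
```

A fielded system would solve the hyperbolic equations directly rather than search a grid, and would work in three dimensions, but the input data is the same: synchronized arrival times across the sensor net.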


One of the remarkable additions to the CED processors is the ability to recreate an accurate 3D animation of the target area. This is a 3D rendering of the area that is accurate enough that measurements can be taken from the 3D image and will be accurate to within fractions of an inch to the real world layout.  This is useful to pass the 3D rendering back to an HQ or forward planning area for use in the planning, training and management of an assault.


The CED network has just finished field testing in several isolated combat areas in Afghanistan but it has proven to be most effective. Work has already begun on improving the AIR, LOG and VCS sensors in an effort to consolidate, miniaturize and conceal them to a greater degree.  They are also working on an interface to an autonomous UAV that will add aerial views using laser, IR and visual sensors.


The troops that have used this system consider it the smartest and most advanced combat information system ever devised, and the comment that “CED told me” is becoming recognized as the best possible source of combat information.

The Problems with Cosmology

Why the Universe does NOT add up!

In 2008, lead researcher Alexander Kashlinsky of NASA’s Goddard Space Flight Center in Greenbelt, and his team, completed a study of three years of data from a NASA satellite, the Wilkinson Microwave Anisotropy Probe (WMAP), using the kinematic Sunyaev-Zel’dovich effect. They found evidence of a common motion of distant clusters of galaxies of at least 600 km/s (about 1.3 million miles per hour) toward a 20-degree patch of sky between the constellations of Centaurus and Vela.


Kashlinsky and colleagues suggest whatever is pulling on the mysteriously moving galaxy clusters might lie outside the visible universe.  Telescopes cannot see events earlier than about 380,000 years after the Big Bang, when the Cosmic Microwave Background (CMB) formed; this corresponds to a distance of about 46 billion (4.6×10¹⁰) light years. Since the matter causing the net motion in Kashlinsky’s proposal is outside this range, it would appear to be outside our visible universe.

Kashlinsky teamed up with others to identify some 700 clusters that could be used to detect the effect. The astronomers detected bulk cluster motions of nearly two million miles per hour, toward a 20-degree patch of sky between the constellations of Centaurus and Vela. Their motion was found to be constant out to at least about one-tenth of the way to the edge of the visible universe.


Kashlinsky calls this collective motion a “dark flow,” in analogy with more familiar cosmological mysteries: dark energy and dark matter. “The distribution of matter in the observed universe cannot account for this motion,” he said.

According to standard cosmological models, the motion of galaxy clusters with respect to the cosmic microwave background should be randomly distributed in all directions.  The finding contradicts conventional theories, which describe such motions as decreasing at ever greater distances: large-scale motions should show no particular direction relative to the background.  If the Big Bang theory is correct, then this should not happen, so we must conclude that either (1) the measurements are wrong or (2) the Big Bang theory is wrong. The motion they measured is hardly small – 2 million MPH – and is shared by 700 galaxy clusters all moving in the same direction, so it seems unlikely that their observations are wrong. That leaves us to conclude that perhaps the whole Big Bang theory is wrong.


In fact, there are numerous indicators that our present generally accepted theory of the universe is wrong and has been wrong all along.   Certainly our best minds are trying to make sense of the universe but when we can’t do so, we make up stuff to account for those aspects we cannot explain.


For instance, current theory suggests that the universe is between 13.5 and 14 billion years old.  This was developed from the Lambda-CDM Concordance model of the expansion evolution of the universe and is strongly supported by high-precision astronomical observations such as the Wilkinson Microwave Anisotropy Probe (WMAP).  However, Kashlinsky’s team calculates that the source of the dark flow appears to be at least 46.5 billion light years away.  That would make it three times older than the known universe!  Whatever it is would have to be more than 30 billion years older than the Big Bang event.


Or perhaps we got it all wrong.  Consider the evidence and the assumptions we have drawn from them.


The Big Bang is based on Big Guesses and Fudge Factors

ΛCDM or Lambda-CDM is an abbreviation for Lambda-Cold Dark Matter. It is frequently referred to as the concordance model of big bang cosmology, since it attempts to explain cosmic microwave background observations, as well as large scale structure observations and supernovae observations of the accelerating expansion of the universe. It is the simplest known model that is in general agreement with observed phenomena.


·         Λ (Lambda) stands for the cosmological constant, a dark energy term that allows for the current accelerating expansion of the universe.  Its current value is about 0.74, implying that 74% of the energy density of the present universe is in this form.  That is an amazing statement – that 74% of all the energy in the universe is accounted for by this dark energy concept.  It is a pure guess based on what has to be present to account for the expansion of the universe.  Since we have not discovered a single hard fact about dark energy – we don’t know what it is or what causes it or what form it takes – Lambda is a made-up number that allows the math formulas to match the observations in a crude manner.  We do not know if dark energy is a single force or the effect of multiple forces since we have no units of measure to quantify it.  It is supposed to be an expansion force countering the effects of gravity, but it does not appear to be anti-gravity nor does it appear to emanate from any one location or area of space.  We can observe the universe out to about 46 billion light years and yet we have not found a single piece of observable evidence for dark energy other than its mathematical implications.


·         Dark matter is also a purely hypothetical factor that expresses the content of the universe that the model says must be present in order to account for why galaxies do not fly apart.  Studies show that there is not enough mass in most large galaxies to keep them together and to account for their rotational speeds, gravitational lensing and other large-structure observations.  The amount of mass needed to account for the observations is not just a little bit off.  Back in 1933, Fritz Zwicky calculated that it would take 400 times more mass than is observed in galaxies and clusters to account for observed behavior.  This is not a small number.  Dark matter accounts for 22% of all the matter and energy in the universe.  Since Zwicky trusted his math and observations to be flawless, he concluded that there is, in fact, all the needed mass in each galaxy but we just can’t see it.  Thus was born the concept of dark matter.  Although we can see 2.71 × 10²³ miles into space, we have not yet observed a single piece of dark matter.  To account for this seemingly show-stopping fact, advocates say, “well, duh, it’s DARK matter – you can’t SEE it!”  However, it appears that it is not just dark but also completely transparent, because areas of dense dark matter do not stop stars from being visible behind them.  So, 22% of all the mass in the universe cannot be seen, is in fact transparent, has never been observed, and does not appear to have had any direct interaction with any known mass other than the effects of gravity.


·         The remaining 4% of the universe consists of 3.6% intergalactic gas, and just 0.4% is the ordinary matter (and energy) making up all the atoms (and photons) of all the visible planets and stars in the universe.
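Tallying the budget described in the bullets above makes the point starkly; the percentages are the ones quoted in the text.

```python
# The Lambda-CDM energy budget, as quoted above.
budget = {
    "dark energy (Lambda)": 0.74,
    "dark matter": 0.22,
    "intergalactic gas": 0.036,
    "visible matter": 0.004,
}

total = sum(budget.values())
unexplained = budget["dark energy (Lambda)"] + budget["dark matter"]

print(f"total: {total:.3f}")              # the fractions sum to 1.0
print(f"unexplained share: {unexplained:.2f}")  # 96% is stuff we have never observed
```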


ΛCDM is a model.  It says nothing about the fundamental physical origin of dark matter, dark energy or the nearly scale-invariant spectrum of primordial curvature perturbations: in that sense, it is merely a useful parameterization of ignorance.


One last problem with modern cosmology.  There is a very poor agreement between quantum mechanics and cosmology.  On numerous levels and subjects, quantum mechanics does not scale up to account for cosmological observations and cosmology does not scale down to agree with quantum mechanics.  Sir Roger Penrose, perhaps one of the pre-eminent mathematicians in the world, has published numerous studies documenting the failure of our math to accurately reflect our observed universe and vice versa.  He can show hundreds of failures of math to account for observations while showing hundreds of observations that contradict the math we believe in.


The truth is that we have done the best we can, but we should not fool ourselves that we have discovered the truth.  Much as we once believed in the ether, astrology, a flat earth and the four humours, we must be willing to accept that notions like dark matter are ingenious and inventive explanations that account for observations but probably do not correspond to factual, realistic natural phenomena.


There is, however, a logical and quite simple explanation of all of the anomalies and observations that perplex cosmology today.  That simple explanation is described in the next report called “Future Cosmology”.

Fast Boat – No Power


I grew up around boats and have had several of my own – power and sail.  I also did the surfing scene in my youth, but that was back when the boards were 12 feet long and weighed 65 pounds or more.  When I had a sailing sloop, I was fascinated by being able to travel without an engine.  I began experimenting with what other kinds of thrust or moving force I could use to move me over water.  I eventually came up with something that is pretty neat.


My first attempt was to put an electric trolling motor and a small lawn mower battery on my 12-foot fiberglass surfboard.  Later, I added a solar panel to charge the battery.  A newer panel that I tried about two years ago was much larger and made enough power that I could run the motor at low speed for several hours.  I put a contoured lounge chair and two tiny outriggers on it and traveled from Mobile, AL to Pensacola, FL, non-stop in one day.  I liked it, but it was not fast enough.


Surfing always surprised me with how fast you can go.  Even normal ocean and Gulf waves move faster than most boats – averaging about 25 MPH.  I wanted to make a boat that could use that power.  A boat that was featured in an article in Popular Science especially motivated me.  The Suntory Mermaid II, an aluminum catamaran built by Yutaka Terao in 2007, has been tested to sustain a speed of 5 knots using an articulated fin (foil) activated by the up and down motion of the boat in the waves.  This obviously works, but it is slow and depends on bobbing up and down.  I wanted a smoother ride and to go faster.  Much faster.  It took a few years but I did.


At first I took the purely scientific approach and tried to computer-model the Boussinesq equations along with the hull formula and other math calculations to help design a method for keeping the boat at the optimum point on the wave.  I even got Plato to help, and this gave me some background, but the leap from model to finished design proved too difficult on paper – still, I was confident I could figure it out by experiment.


What I learned is that ocean waves vary by wavelength, and that wavelength determines their speed.  The USS Ramapo measured waves moving at 23 meters per second, with an energy of 17,000 kilowatts in a one-meter length of wavefront.  That is 51 miles per hour and enough energy to move a super freighter.  That is about twice as fast as the average wave.  Waves with a wavelength of about 64 meters in deep water will have a speed of about 10 m/s or about 22 miles per hour – a very respectable speed for a boat.  The energy in a wave is proportional to the square of its height – so a 3m wave is 9 times more powerful than a 1m wave, but even a 1 meter wave has more than enough energy to move a boat hull through the water.
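These speeds follow from the standard deep-water dispersion relation, c = sqrt(gλ/2π). A quick sketch, with the sample wavelengths being my own picks to bracket the speeds discussed:

```python
import math

G = 9.81  # gravity, m/s^2

def wave_speed(wavelength_m):
    """Phase speed of a deep-water wave: c = sqrt(g * wavelength / 2*pi)."""
    return math.sqrt(G * wavelength_m / (2 * math.pi))

for lam in (8, 64, 330):
    c = wave_speed(lam)
    print(f"{lam:4d} m wavelength -> {c:5.1f} m/s ({c * 2.237:5.1f} mph)")
```

A 64 m wavelength comes out near 10 m/s (about 22 mph), and something around 330 m reproduces the ~23 m/s (51 mph) waves the Ramapo reported, which is why long ocean swell is the prize for a wave-riding boat.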


I started with a small 21-foot motorsailer with a squared off stern and a deep draft keel.    I selected this because it had a narrow hull and had a deep draft for a boat this size.  It also had an unusual keel design – instead of a deep narrow keel, it extended from just aft of the bow, down to a draft of nearly 5 feet all the way back to the stern and then rose vertically straight up to the transom – giving an area of almost 85 square feet of keel to reduce the lateral forces of wave and wind action.


I installed electric motor thrusters below the waterline on the port and starboard of the stern with an intake facing down on the stern.  These were water jet thrusters I salvaged from some old outboards with bad engines.  I put in electric starter motors from cars to run the jet thrusters.  This gave me near instant yaw control so I could keep the stern of the boat facing the wave. 


After I got the yaw thrusters working and tested, I replaced the inefficient starter motors with brushless DC motors.  My new water jet thrusters, mounted fore and aft, look like a shrunk-down version of the Azimuth Stern Drives (ASD) or “Z” drives used in ASD tugs.  The gimbaled thruster housing extends outside the hull while the BLDC motors are safely inside.


I then experimented with the transom/stern design and found that having a weather deck (one that could take on and empty a wave of water without sinking the boat) was essential, though it could also simply be a sealed deck that keeps water off entirely.  I started with the former and ended with the latter.  The obvious intent is to optimize the design so as to minimize the problem of broaching – when a wave overtakes a boat, pushes it sideways and capsizes it.


I also wanted to make sure that the pressure from the wave on the stern was strong and focused on creating thrust for the boat.  I called this addition the pushtram.  To do this I tested several shapes for a concave design of a fold-out transom (pushtram) that extended down to the bottom of the keel.  This ended up taking the shape of a tall clam-shell that could fold together to form a rudder but, when opened, presented a 4 foot wide by 5 foot deep parabolic pushing surface to the wave.


The innovation on this pushtram design came when I realized that facing the concave portion of the design toward the bow instead of aft, gave it a natural stability to keep the boat pointed in the direction of the wave travel.  As the boat points further away from being perpendicular to the wave, the pushtram exerts more and more rotational torque to direct the boat back to pointing perpendicular to the wave.  This design essentially all but eliminates the danger of broaching.


The lifting of the stern and plowing of the bow is also a problem so I also installed a set of louvers that closed with upward travel and opened with downward travel of the stern in the water.  This controls the pitch fore and aft of the boat as it moves in and out of the waves.  This “pitch suppressor” stuck out aft from the lower most point of the hull for about 4 feet and was reinforced with braces to the top of the transom. 


After some experimenting, I also added a horizontal fin (foil) under the bow that was motorized to increase its lift when the rear louvers closed tightly, as controlled by the computer.  This bow-foil lift was created by a design I had developed for the US Navy that uses oil pumped into heavy rubber bladders to selectively reform the lifting (airfoil) effect of the blade.  The all-electric control could change the upper and lower cambers of the foil in less than a second.  Combined with a small change in the angle of attack (to prevent cavitation), I could go from a lift coefficient of zero to more than 10.5 (per the Kutta-Joukowski theorem).  I also used my computer modeling to optimize laminar flow and satisfy the Kutta condition, keeping the drag coefficient below 0.15.
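For scale, the standard lift equation L = ½ρv²A·CL gives a feel for what such a foil produces. The foil area and speeds below are my own guesses for a boat this size; only the lift-coefficient figures come from the discussion above.

```python
RHO_SEAWATER = 1025.0  # kg/m^3

def lift_newtons(speed_ms, area_m2, cl):
    """Standard lift equation: L = 0.5 * rho * v^2 * A * CL."""
    return 0.5 * RHO_SEAWATER * speed_ms ** 2 * area_m2 * cl

# Assumed: a 0.5 m^2 bow foil at 10 m/s (about 20 knots)
for cl in (0.5, 2.0, 10.5):
    force_kn = lift_newtons(10, 0.5, cl) / 1000
    print(f"CL = {cl:5.1f}: {force_kn:7.1f} kN of lift")
```

Even a modest CL of 0.5 yields over a tonne of force at speed, which shows why a small, fast-reshaping foil can hold the bow up; the higher coefficients would be reserved for slow-speed recovery.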


The effect of this weird underwater configuration was to allow me to control the stern to keep it perpendicular to the wave front with the yaw jets and long keel.  I then used the louvers and front foil to keep the stern down and the bow up as waves pushed the boat.  The computer controller for all this was the real innovation.


I used eight simple ultrasonic range finders that I took from parking sensors for cars and placed them on the four sides of the boat.  Four were pointing horizontally and four were pointing down.  The horizontal ones gave me the distance to the wave, if it was visible to that sensor, and the ones pointing down gave me the freeboard, or height of the deck above the water line.  I also installed a wind vane and anemometer for wind speed and relative direction.  I fed all this into a computer that then used servos and relays to control the yaw jets, foil and rudder.
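The yaw-control logic can be sketched as a simple differential-thrust loop: if the wave front is closer on one quarter than the other, thrust to square the stern back up. This is a hypothetical reconstruction; the real controller, sensor interfaces, gain and deadband are not spelled out above.

```python
def yaw_command(port_range_m, stbd_range_m, gain=0.5, deadband=0.2):
    """Differential thrust command from port/starboard wave distances.

    Returns a value in -1.0..1.0 for the yaw jets: 0 means hold,
    positive means thrust to rotate the stern toward port.
    """
    error = port_range_m - stbd_range_m
    if abs(error) < deadband:          # ignore sensor noise near zero
        return 0.0
    cmd = gain * error
    return max(-1.0, min(1.0, cmd))    # clamp to thruster limits

print(yaw_command(4.1, 4.2))   # within the deadband -> 0.0
print(yaw_command(6.0, 3.0))   # wave much closer to starboard -> 1.0 (full thrust)
```

A fielded loop would run this at a fixed rate, blend in the wind-vane reading, and feed similar error terms to the bow-foil and pitch-suppressor channels.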


I had modeled the management software in a simulated ocean wave environment using a Monte Carlo analysis of the variable parameters and it took four days of running but the modeling found the optimum settings and response times for various combinations of input values. I also developed settings to allow for angles other than 90 degrees to the following waves so I could put the boat on a reach to the winds.  This placed a heavy and constant load on the yaw thrusters but I found that my boat was lightweight enough to go as much as 35 to 40 degrees left and right of perpendicular to the wave front.
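That Monte Carlo tuning run can be illustrated in miniature: sample random controller settings, score each against a simulator, keep the best. The toy objective function below stands in for the full wave-environment model; its peak location and the parameter ranges are assumptions of mine, not values from the text.

```python
import random

random.seed(42)  # reproducible run

def simulate(gain, response_s):
    """Stand-in for the wave simulation: returns average speed in knots.
    The peak near gain=0.5, response=0.3 s is an assumed optimum."""
    return 30 - 40 * (gain - 0.5) ** 2 - 25 * (response_s - 0.3) ** 2

# Monte Carlo search: 10,000 random (gain, response) pairs, keep the best
best = max(
    ((random.uniform(0.0, 1.0), random.uniform(0.1, 1.0)) for _ in range(10_000)),
    key=lambda p: simulate(*p),
)
print(f"best gain = {best[0]:.3f}, response = {best[1]:.3f} s")
```

The real run took four days because each sample was a full wave simulation rather than a one-line formula, but the search structure is the same.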


At first, I kept the sail mast and kept the inboard motor of the motorsailer but after getting more confidence in the boat’s handling, I took both off.  I do keep a drop-down outboard motor for getting in and out of the harbor. 


In operation, I use the drop-down outboard to get out of the harbor and into the Gulf, facing in the direction of the wave travel.  While the outboard is still running, I open up the pushtram, lower the bow-foil and aft pitch-suppressor and bring the computer online.  The software is preprogrammed to run a quick test of the thrusters and bow-foil and gives the boat a little wiggle to let me know it is all working.  I then run the outboard up to whatever is needed to get me on a wave crest and then shut it down.  Within a few waves, the boat settles into the perfect location on the wave to receive the optimum benefit of the gravity, wave motion and system settings.  The end result is a boat that travels +/- 40 degrees to the direction the wind is blowing at sustained speeds of 35 knots or more, all day long, without using any gas.


Waves being as inconsistent as they are, the thrusters and bow-foil and pitch-suppressors kick in every few minutes to try to correct for a change in wave or wind direction or when I drop a wave and have to pick up another.  Between the pitch-suppressor and the pushtram, it usually only takes about 2 or 3 waves to get back up to speed again.  This happens slightly more often as I deviate from the pure perpendicular direction using the thrusters but it still keeps me moving at almost the speed of the waves for about 80 to 90% of the time.


I recently tested an improvement that will get me to +/- 60 degrees to the wind’s direction so I can use the boat under a wider range of wind and wave conditions.  I found that using some redesigned shapes on the pushtram, I can achieve a stable heading that is nearly 60 degrees off the wind.  The innovation came when I mixed the use of the hydraulic reshapeable bow-foil idea on the pushtram.  By using the computer to dynamically reshape the pushtram using pumped up oil bladders controlled by the computer, I can create an asymmetric parabolic shape that also creates a stable righting force at a specific angle away from the wind.


I also recently incorporated a program that will take GPS waypoint headings and find a compromise heading between optimum wave riding and the direction I want to go.  This was not as hard as it seems since I need only get within 60 degrees either side of the wind direction.  Using the computer, I calculate an optimum tack relative to the present wind that will achieve a specific destination.  Because it is constantly taking in new data, it is also constantly updating the tack to accommodate changes in wind and wave direction.  It gives me almost complete auto-pilot control of the boat.  I even set it up with automatic geofencing so that if the system gets too far off track or the winds are not cooperating, it sounds an alarm so I can use other power sources.
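The compromise-heading logic can be sketched as follows: steer straight at the waypoint if that bearing lies within the boat's usable arc around the downwind direction, otherwise tack along the nearest edge of the arc. The ±60 degree limit comes from the text; the angle conventions and tie-breaking are my own choices.

```python
def angle_diff(a, b):
    """Signed smallest difference a - b, normalized to -180..180 degrees."""
    return (a - b + 180) % 360 - 180

def choose_heading(bearing_to_wp, downwind, max_off=60):
    """Pick a heading: direct to waypoint if within the usable arc,
    otherwise the nearest edge of the arc (a tack). Degrees, 0-360."""
    off = angle_diff(bearing_to_wp, downwind)
    if abs(off) <= max_off:
        return bearing_to_wp % 360
    return (downwind + max_off * (1 if off > 0 else -1)) % 360

print(choose_heading(bearing_to_wp=100, downwind=90))   # → 100 (direct)
print(choose_heading(bearing_to_wp=300, downwind=90))   # → 30 (tack at arc edge)
```

Run on every GPS update, this naturally re-tacks as the wind and wave direction drift, which is all the "auto-pilot" behavior described really requires.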


I began using a 120-watt solar panel that charges the batteries with a small generator for backup.  I keep a few hours of fuel in the on-board tank for the outboard in case the waves and wind die or I need to cruise the inland waterways or intercoastal.


Once I’m in the sweet spot of the wave and traveling at a constant speed, the ride is smooth and steady. 


I have found that the power of the wave is sufficient that I could have considerable additional drag and still not change my speed or stability.  I jury-rigged a paddle-wheel generator and easily produced about 300 watts of power with no changes in my computer controller or control surface settings.  This plus the solar panels now can keep up with the usage rates for the electric thrusters on most days without depleting any of the battery reserve.


I am now working on a drop-down APU – auxiliary power unit – which will produce all the power I need on board with enough left over to charge some large batteries.  My plan is to then use the battery bank to eliminate the need for the outboard motor and gas.   I figure I can get about 800 watts out of the APU and can feed it into a bank of 12 deep cycle batteries.  When the winds are not right, I just turn the yaw thrusters to act as main propulsion and take off.


I recently took my boat on a trip from Jacksonville Fla. (Mayport), up the coast to Nags Head and then on to Cape May, NJ.   There was an Atlantic high pressure off South Carolina that was slowly moving north so I got out in it and caught the northerly winds and waves.  The total distance was about 1,100 miles.  Being retired from the US Navy, I used the launching facilities at the Mayport Naval Station to put to sea about 8AM on a Monday morning.  I pulled into the Cape May Inlet about 7:30PM on Tuesday.  That was just under 36 hours of wave powered travel at an average speed of about 27 knots.  Not bad for an amateur.  The best part is that I used just over two gallons of gas and most of the trip I just let the boat steer itself.


All the modeling in the world does not hold a candle to an hour in the real world.  I observed firsthand how rarely the waves are parallel to the last one and how often they don’t all go in the same direction.  I also observed groups of waves – sets traveling together in a long-wavelength envelope.  The effect of all that is that the boat did not ride just one wave but lost and gained waves constantly, at irregular intervals.  Sometimes I would ride a wave for as much as 20 minutes and sometimes it was 3 or 4 minutes.  A few times, I got caught in a mix-master of waves that had no focus and had to power out with the outboard.  This prompted me to speed up my plans for installing the APU and the bank of aux batteries so I can make more use of the electric thrusters for main propulsion, and so I could add that into the computer controller to help maintain a steady speed.


I powered around to a friend’s place off Sunset Lake in Wildwood Crest.  He had a boat barn with a lift that allowed me to pull my boat out of the water and work on the inboard propeller shaft.  I had taken the inboard engine out and the prop off last year but left the shaft.  This gave me tons of room because I also took out the oversize fuel tank.


I salvaged one of the electric motor/generators from a crashed Prius and connected it to the existing inboard propeller shaft.  I then mounted a 21″ Solas Alcup high thrust, elephant ear propeller.  This prop is not meant for speed but it is highly efficient at medium and slow speeds.  The primary advantage of this prop is that it produces a large amount of thrust when driven at relatively slow speeds by the motor.  It also can be easily driven by water flowing past it to drive the generator.


I used a hybrid transmission that allows me to connect a high torque 14.7 HP motor-generator and converter to the propeller shaft, and a bank of 12 deep cycle batteries in a series-parallel arrangement to give a high-current 72 volt source.  This combination gives me powerful thrust but also produces as much as 50 amps of charging current at RPMs that can readily be achieved while under wave power.


Now I have a powerful electric motor on the shaft and a bank of deep cycle batteries in the keel.   The motor-generator plus the solar panels and the APU easily create enough charging current to keep the batteries topped off while still giving me about 5 hours of continuous maximum-speed electric power with no other energy inputs.  However, in the daytime, with the solar panels and APU working, I can extend running time to about 9 hours.  If I have wave powered travel for more than 6 hours out of every 24, I can run nearly non-stop.
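The running-time claims can be sanity-checked against the wattages given. The per-battery capacity below (100 Ah at 12 V) is my assumption; the solar and APU figures come from the text.

```python
BATTERY_WH = 12 * 12 * 100        # 12 deep-cycle 12 V batteries, ~100 Ah each (assumed)
SOLAR_W, APU_W = 120, 800         # daytime charging inputs quoted in the text

# What average motor draw do the claimed running times imply?
draw_5h = BATTERY_WH / 5                     # 5 h on batteries alone
draw_9h = BATTERY_WH / 9 + SOLAR_W + APU_W   # 9 h with solar and APU helping

print(f"5 h on batteries implies ~{draw_5h:.0f} W average draw")
print(f"9 h with solar+APU implies ~{draw_9h:.0f} W average draw")
# Both figures land in the same ~2.5-2.9 kW range - a consistent story,
# and well below the ~11 kW (14.7 HP) motor rating, i.e. cruising power.
```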


 I am now working on a refined controller for all these changes.  The plan is to have the motor kick on if the speed drops below a preset limit.  The computer will also compute things like how fast and how far I can travel under electric power using only the batteries, solar panels, APU and motor-generator in various combinations.  I’ll also be adding a vertical axis wind turbine that I just bought.  It produces nearly 1 kW and is only 9 feet tall and 30″ in diameter.  For under $5,000, it will be mounted where the sail mast used to be, on a dampened gimbal that will keep it upright and vertical while the boat goes up and down the waves.  By my calculations, on a sunny day with a 10 knot wind, I should be able to power the electric drive all day long without tapping the batteries at all.


These changes will be made by mid-July 2010 and then I am reasonably confident that I can travel most any direction, day or night, for a virtually unlimited distance.


My next trip was planned for hugging the coastline from Cape May south to Key West – then around the Gulf down to the Panama Canal – thru to the Pacific and up the coast to San Francisco.  An investor there has challenged me that if I can make that trip, he will buy my boat for $1.5M and will build me a much larger version – a Moorings 4600 using a catamaran GRP hull.  Using a catamaran hull should boost the efficiency of the wave drive to almost perfection.


This trip was all set and then BP had to go and screw it up.  I figure I’ll make the trip in 2011.

The Fuel you have never heard of….


I have always been fascinated by the stories of people that have invented some fantastic fuel only to have the major oil companies suppress the invention by buying the patent or even killing the inventor.  The fascination comes from the fact that I have heard these stories all my life but have never seen any product that might have been invented by such a person.  That proves that the oil companies have been successful at suppressing the inventors….or it proves that such stories are simply lies.  Using Plato – my research software tool, I thought I would give it a try.  The results were far beyond anything I could have imagined.  I think you will agree.


I set Plato to the task of finding what might be changed in the fuel of internal combustion engines to produce higher miles per gallon (MPG).  It really didn’t take long to return a conclusion that if the fuel released more energy as it burned, it would give better MPG for the same quantity of fuel.  It further discovered that if the explosion of the fuel releases its energy in a shorter period of time, it works better, but it warned that the engine timing becomes very critical.


OK so, what I need is a fuel or a fuel additive that will make the spark plug ignite a more powerful but faster explosion within the engine.  I let Plato work on that problem for a weekend and it came up with Nitroglycerin (Nitro).  It turns out that Nitro actually works precisely because its explosion is so fast.  It also is a good chemical additive because it is made of nitrogen, oxygen, carbon and hydrogen, so it burns without smoke and releases only those elements or compounds into the air.


Before I had a chance to worry about the sensitive nature of Nitro, Plato provided me with the answer to that also.  It seems that ethanol or acetone will desensitize Nitro to workable safety levels.  I used Plato to find the formulas and safe production methods of Nitro and decided to give it a try.


Making Nitro is not hard but it is scary.  I decided to play it safe and made my mixing lab inside of a large walk-in freezer.  I only needed to keep it below 50F and above 40F so the freezer was actually off most of the time and it stayed cool from the ice blocks in the room.  The cold makes the Nitro much less sensitive but only if you don’t allow it to freeze.  If you do that, it can go off just as a result of thawing out.  My plan was to make a lot of small batches to keep it safe until I realized that even in very small amounts, it was enough to blow me up if it ever went off.  So I just made up much larger batches and ended up with about two gallons.


I got three gas engines – a lawn mower, a motorcycle and an old VW Bug.  I got some gas of 87 octane but with 10% ethanol in it.  I also bought some pure ethanol additive and put that in the mix.  I then added the Nitro.  The obvious first problem was to determine how much to add.  I decided to err on the side of caution and began with very dilute mixtures – one part Nitro into 300 parts gas.   I made up just 100 ml of the mixture and tried it on the lawn mower.  It promptly blew up.  Not an actual explosion, but the mixture was so hot and powerful that it burned a hole in the top of the cylinder, broke the crankshaft and burned off the valves.  That took less than a minute of running.
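For reference, the dilution arithmetic for these test batches (a sketch that reads “one part in 300” as 1 volume of Nitro per 300 volumes of gas):

```python
# How much nitro is actually in each small test batch.

def nitro_ml(total_ml: float, ratio: float) -> float:
    """Millilitres of nitro in a batch mixed at ratio:1 gas-to-nitro."""
    return total_ml / (ratio + 1)

print(round(nitro_ml(100, 300), 3))   # 0.332 ml in the 100 ml batch
print(round(nitro_ml(100, 600), 3))   # 0.166 ml at the later 600:1 ratio
```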


I then tried a 600:1 ratio in the motorcycle engine and it ran for 9 minutes on the 100 ml.  It didn’t burn up but I could tell very little else about the effects of the Nitro.  I tried it again with 200 ml and determined that it was running very hot and probably would have blown a ring or head gasket if I had run it any longer.  I had removed the motorcycle engine from an old motorcycle to make this experiment but now I regretted that move.  I had no means to check torque or power.  The VW engine was still in the Bug so I could actually drive it.  This opened up all kinds of possibilities.


I gassed it up and drove it with normal gas first.  I tried going up and down hills, accelerations, high speed runs and pulling a chain attached to a tree.  At only 1,400 cc, it was rated at only 40 HP when it was in new condition but now it had much less than that using normal gas.


I had a Holley carb on the engine and tweaked it to a very lean mixture and lowered the Nitro ratio to 1,200 to 1.   I had gauges for oil temp and pressure and had vacuum and fuel flow sensors to help monitor real-time MPG.  It ran great and outperformed all of the gas-only driving tests.  At this point I knew I was onto something but my equipment was just too crude to do any serious testing.  I used my network of contacts in the R&D community and managed to find some guys at the Army vehicle test center at the Aberdeen Test Center (ATC).  A friend of a friend put me in contact with the Land Vehicle Test Facility (LVTF) within the Automotive Directorate where they had access to all kinds of fancy test equipment and tons of reference data.  I presented my ideas and results so far and they decided to help me using “Special Projects” funds.  I left them with my data and they said come back in a week.


A week later, I showed up at the LVTF.  They said welcome to my new test vehicle – a 1998 Toyota Corona.  It is one of the few direct injection engines with a very versatile air-fuel control system.  They had already rebuilt the engine using ceramic-alloy tops to the cylinder heads that gave them much greater temperature tolerance and increased the compression ratio to 20:1.  This is really high but they said that my data supported it.  Their ceramic-alloy cylinder tops actually form the combustion chamber and create a powerful vortex swirl for the injected ultra-lean mixture gases.


We started out with the 1,200:1 Nitro ratio I had used and they ran the Corona engine on a dynamometer to test and measure torque (ft-lbs) and power (HP).  The test pushed the performance almost off the charts.  We repeated the tests with dozens of mixtures, ratios, air-fuel mixes and additives.  The end results were amazing.


After a week of testing, we found that I could maintain a higher than normal performance using a 127:1 air-fuel ratio and a 2,500:1 Nitro to gas ratio if the ethanol blend is boosted to 20%.  The mixture was impossible to detonate without the compression and spark of the engine so the Nitro formula was completely safe.  The exhaust gases were almost totally gone – even the NOx emissions were so low that a catalytic converter was not needed.  Hydrocarbon exhaust was down in the range of a Hybrid.  The usual problem of slow burn in ultra-lean mixtures was gone so the engine produced improved power well up into high RPMs and the whole engine ran at lower temperatures for the same RPM across all speeds.  The real thrill came when we repeatedly measured MPG values in the 120 to 140 range.
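The final mix can be broken down by volume for a given tank (the ratios come from the testing described above; the 10 gallon tank size is just an assumption for illustration):

```python
# Volume breakdown of the final mix for a hypothetical 10-gallon tank.
TANK_GAL = 10.0
NITRO_RATIO = 2500.0        # gas-to-nitro by volume (from the tests)
ETHANOL_FRACTION = 0.20     # 20% ethanol blend (from the tests)

nitro_gal = TANK_GAL / (NITRO_RATIO + 1)
ethanol_gal = TANK_GAL * ETHANOL_FRACTION
gasoline_gal = TANK_GAL - ethanol_gal - nitro_gal

print(round(nitro_gal * 3785.4, 1))   # ml of nitro in the whole tank: 15.1
```

At 2,500:1, an entire tank holds only about a shot glass of Nitro, which helps explain why the mixture was so hard to detonate outside the engine.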


The rapid release and fast burn of the Nitro allowed the engine to run an ultra-lean mixture that gave it great mileage while not having any of the usual limitations of lean mixtures.  At richer mixtures, the power and performance were well in excess of what you’d expect of this engine.  It would take a major redesign to make an engine strong enough to withstand the torque and speeds possible with this fuel in a normal 14:1 air-fuel mixture.  Using my mix ratio of 120+:1 gave me slightly improved performance but at better than 140 MPG.  It worked.  Now I am waiting for the buyout or threats from the gas companies.


July 2010 Update:


The guys at ATC/LVTF contacted my old buddies at DARPA and some other tests were performed.  The guys at DARPA have a test engine that allows them to inject high energy microwaves into the combustion chamber just before ignition and just barely past TDC.  When the Nitro ratio was lowered to 90:1, the result was a 27-fold increase in released energy.  We were subsequently able to reduce the quantity of fuel used to a level that created the equivalent of 394 miles per gallon in a 2,600 cc 4-cyl engine.  The test engine ran for 4 days at a speed and torque load equal to 50 miles per hour – and did that on 10 gallons of gas – a test equivalent of just less than 4,000 miles!  A new H-2 Hummer was rigged with one of these engines and the crew took it for a spin – from California to Maine – on just over 14 gallons of gas.  They are on their way back now by way of northern Canada and are trying to get 6,000 miles on less than 16 gallons.
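A quick arithmetic check of these figures (all numbers taken straight from the update, nothing assumed):

```python
# Sanity-check the July update mileage claims.
mpg = 394.0
gallons = 10.0
miles = mpg * gallons
print(miles)                        # 3940.0 -- "just less than 4,000 miles"

hummer_miles = 6000.0
hummer_gal = 16.0
print(hummer_miles / hummer_gal)    # 375.0 MPG target for the return leg
```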


The government R&D folks have pretty much taken over my project and testing but I have been assured that I will be both compensated and protected.  I hope Obama is listening.

A REAL Fountain of Youth?

Last April, I was given my annual physical by my family doctor.  It was the usual turn-your-head-and-cough kind of checkup that included a series of blood tests.  Partly due to my work history and partly because I am aware of the benefits of a number of rather obscure tests, I pay for several extra tests that are not normally included in the average annual physical.  I get the usual cholesterol, thyroid, iron, prostate, albumen, etc., but I also get some others; these extra tests include:  CBC, RBC, Hematocrit, WBC, DIFF, MCV, Hemoglobin, BPC, ABO and about two dozen others.

The ABO test determines your blood type and the level of antigens and antibodies present on the surface of the red blood cells.  This is not something that usually changes but it can point to early signs of any hemolytic or autoimmune diseases or the presence of toxins such as radiation exposure.  I have been Type “O” with Anti-A and Anti-B antibodies with no antigens for as long as I have been tested.  That is the very definition of the Type “O” blood group.  That is, until this last time I was tested.

I have always been O-negative but this last test showed I was now O-positive.  Somehow, I had acquired the “D” antigen of the Rhesus factor.  This is not impossible but it usually results from a blood transfusion or bone marrow transplant and occurs over a long period of time.   It was also discovered that I had both A and B antigens, which are only present in blood type AB.  This kind of change has not been observed before, and after a delay of several weeks, I was called back for more testing.  It seems that I am somewhat of a medical mystery.

In the course of the testing, my entire medical history was examined and they found that despite my advanced age, I have a number of anomalous readings that are uncommon for my age.  My nerve reaction time is that of a 40 year old.  My heart, skin, muscle contraction and brain wave activity are all that of a 40 year old man or even younger.  Then they dumped me into a ton of other tests and drew blood and took tissue samples for a week.  These extra tests showed that the epithelial transport of nutrients and ions, T-cell activation, pancreatic beta-cell insulin release, and synaptic conduction were all abnormally high for a man of my age.  I had never particularly noticed but it was discovered that in the past 15 years or so, I have not had a cold or flu or allergy response or any other typical disease or negative environmental response. 

All this has baffled my doctors and although some tests are still going on and two research clinics are still interested, most have simply marked it off as a medical anomaly and moved on.  I, however, was very curious and wanted to know more so I broke down the problem into parts and fed it into Plato – my automated research tool – for an in-depth analysis.  The results were amazing and have some far reaching implications.  I want to tell you about what Plato found but I have to start with a lot of background and history so that you will understand how it all fits together and how Plato figured it out.

I have always been fascinated by high voltage electricity.  In science fairs, I built Tesla coils and Van de Graaff generators and played with Jacob’s Ladders and Wimshurst generators.  In college, I participated in research studies of lightning and worked on high energy physics as well as other fringe science related to high power electromagnetic energy.  When I got into computer simulation, I was asked to create and run simulations on the first MHD generator, the first rail gun and the first ion engine.  I also worked on computer models for a cyclotron and a massive hydroelectric system that works on waves and ocean currents.

As a hobbyist, I liked the idea of incorporating some of this stuff into my everyday life.  Way back in the 1960’s, I created a scaled down model of an ion engine at about the time that NASA was planning to use one on interplanetary vehicles.  (It appeared in Science Illustrated and I made a DIY model like the one in the magazine.)  It was, essentially, a negative ion generator with an extra set of acceleration plates.  Because it made no noise and used a tiny amount of electricity, I have had that first model plugged in and “running” in my home since 1967.  It actually creates moving air with no moving parts which looks really neat.

When some biologists discovered that negative ions have a beneficial effect on breathing and air quality, I made one and tried it out for a few months.  I liked it and decided if one is good then a dozen must be even better.  I made a total of 29 of them – incorporating them into fans, lamp shades, ceiling fixtures, heating and AC vents and other hidden and novel locations all around the home.  Most of these have been running since the mid 70’s in my home, office and workshops.

In the early 1990’s, it was discovered that negative ions that are bubbled up thru water have some fascinating effects on the water.  The ions destroy virtually 100% of all the germs, viruses and bacteria in the water – making it the cleanest water you can drink.  These negative ion bubbles also purify the water like no filter could ever do.  They cause metals and chemicals to dissolve and pass out of the water as gas, or to solidify and fall out of the water as a precipitant that settles to the bottom of the container.  These clumps of solidified metals and toxins can easily be filtered out with the cheapest paper water filter.  If the water is decanted, it leaves this sludge behind.  The end result is the cleanest, purest water on earth.  But wait, there is more.  The high ion quality of the cleaned water can also be used to clean other things.  If you wash fresh fruits and vegetables in this ion water, it cleans them of all bacteria and toxic chemicals within a matter of minutes.

It turns out that drinking this water is also good for you.  At first I did not know why, but if you think about it, your body runs on electricity in nerve and brain activity, and adding a few extra electrons to that operation has got to help.

After reading all about this, I built a device that taps into my kitchen faucet water and diverts some of the water to a 10 gallon holding tank that is hidden under the cabinet.  When it is full, it gets 6 hours of treatment from a series of aerators that bubble up negative ion air thru the water.  After 6 hours, the water is pumped into a second sealed stainless steel tank that is mounted on top of the upper most kitchen cabinets.  From there, it gravity feeds thru a line to a small spigot near my sink that allows me to use the water to wash, drink or clean with.  I built one of these back in 1995 and liked it so much that in 2001, I built four more for use in the bathrooms, office and workshop.  I have been using them ever since.

The net result of these electronic hobby projects and my fascination with electricity and ions is that for the past 35 years, I have been breathing, drinking and living in an ion-rich environment.  And specifically a negative ion rich environment – one in which there is an over-abundance of electrons, making the ions have a negative charge. 

Plato found that this was the central factor to my changed blood chemistry and other bio-system anomalies.  When I asked Plato to trace the basis of its premise, I got back pages and pages of links to leading edge biological and chemical research that took me days to read and collate into a hypothesis.  Here is the gist of it.

The presence of a negative ion-saturated environment has, over the past three decades, slightly altered my body chemistry and specifically those chemical reactions that are enhanced, caused by or result from electro-chemical reactions.  Apparently, one of the first responses was the near elimination of free radicals from my system.  Although radicals can be positive, negative or zero charge, it appears that the nature of the unpaired electrons that create radicals is affected by the presence of an excessive amount of extra electrons.  My assumption is that the negative ions in my environment supplied the missing electrons to the unpaired electrons of the free radicals and thus neutralized them or kept them from pairing with the wrong chemicals.

Some of the findings that Plato referred me to discussed electron spin resonance and described how the transient chemical properties of the radicals are counter-balanced by electron ionization because their de Broglie wavelength matches the length of the typical bonds in organic molecules and atoms, so the energy transfer to the organic analyte molecules is maximized.  I’m not a chemist and that is pretty deep stuff but the net result is that the radical ion is negated.  Since the absence or reduction of free radicals has been proven to be highly beneficial in the reduction or prevention of degenerative diseases and cancers, the accidental result that I have achieved with my rich ion environment has been a major contribution to my good health.

The other major finding that Plato provided was concerning a vast but little understood area of bio-chemistry called ion channels.  Ion channels are essentially electrochemical paths on the plasma membrane of biological cells that allow the cells to control their interaction with other cells, chemicals and proteins.  In effect, these ion channels are the mechanism by which biological cells interact with the cells, chemicals and molecules around them.  You could imagine these channels as electrically powered communications devices.  If they are in good working order and properly charged, then the cell does what it is supposed to do.  If the channels become weak, then the cell has a greater susceptibility to damage or to being compromised by the wrong connection with other substances, cells or viruses.

Part of how this ion channel works is by creating a gated voltage gradient across the cell membrane.  This voltage gradient underlies the voltage activated channels and plays a critical role in a wide variety of biological processes related to nerve, synaptic, muscle and cell interactions.  If the ion channel is strong then the voltage gradient is strong and the cell functions in an optimal manner.  If the ion channel is weak, then the voltage gradient is weak and the cell is subject to a variety of interference and inefficiencies in its functioning.

Plato found numerous other supporting aspects of this interaction in the form of ionotropic receptors of ligand molecules and ion flux across the plasma membranes that also benefit from a strong ionic environment.  There are also the 13 chloride channels and other transmembrane helices that function in this ion induced voltage gradient environment. 

What Plato postulated is that the ion-rich environment I have been living in for the past 35 years has created an excessively powerful voltage gradient on these ion channels – making the channels function not only in an optimum manner but making them extremely difficult to block.  I am not willing to test it, but Plato has speculated that I may be immune to a long list of toxins, chemicals, genetic disorders and diseases that disrupt the normal functioning of ion channels.  As a result, I am, apparently, immune to the puffer fish toxin, the saxitoxin from “red tide”, the bite of black mamba snakes, and diseases like cystic fibrosis, Brugada syndrome, epilepsy, hyperkalaemic paralysis, and dozens, perhaps hundreds of others.

I find this all fascinating and very appealing because it means that I may be around for a lot longer than I had thought I would.  I seem to be in excellent health and have not experienced any decrease in mental activity – in fact, I often think I can do things now that I could not do when I was 40 but I have always just attributed that to experience, age and a lifetime of accumulated wisdom.  Now it appears that it may have been because I started messing around with negative ion generators back when I was in my 30’s and have, quite by accident, created a sort of fountain of youth in the air and water of my house.



The Hang Glider Incident

Two years ago today, it happened.  It changed my life.  At the time, I was carefree and enjoying my 200th hang-gliding flight.  I had saved up to do something special.  My plan was to pay a hot air balloon pilot to take me up to 17,000 feet over the northern Green Mountains of Vermont.  I would then try to glide as far south as I could, using the air currents (thermals, ridge lift and mountain waves) of the east-coast mountains to help keep me aloft.  I didn’t realize how far I would travel.

I asked an old friend of mine, Eddy, if I could borrow a 2-person hang glider because it is designed larger for more lift.  He agreed and said to come by sometime and he would show it to me.  He said it was his Mars glider.  I knew a two person model from a maker called Moyes was called the Mars so I figured he had one of those old 1984 designs.

I carefully prepared a special backpack of all the goodies I might need.  I had two radios – a 5 watt CB handheld and a VHF transceiver going to my boom mike.  I had water, food, a bunch of survival and camping equipment.  I had rigged a small 5 watt solar panel to the top of the sail and wired in my iPhone, MP3 player and GPS plus a digital altimeter (variometer) that is combined with a small calculator sized flight computer.  I tried to consider everything from the worst case scenario to the most ideal comfort.

When I arrived, Eddy took me out to his hangar and showed me his “Mars” hang glider.  It was huge!  It was, in fact, not really a “hang” glider, Eddy told me.  It was actually a tailless, foot-launched rigid wing sail plane, similar to the Swift – a Stanford design that dates back to the mid 1980’s.  I was familiar with the Swift design and had even flown one.  It is not really in the same class as hang gliders at all.  The one I flew had a 41 foot wingspan with vertical winglets and an aircraft style joystick controlling fully functioning control surfaces that went by names like elevons, flaperons and spoilerons but coming from years of flying aircraft in the Navy, I just called them elevators, flaps, spoilers and rudders.

Eddy’s Mars version was a new prototype that used carbon fiber struts and a metallic mylar/Kevlar laminate for the wing skin, and because of the rigid frame, it has fantastic performance and strength.  It was originally developed by and for NASA as a possible Mars exploration vehicle but when funding for Mars missions was cut, the glider project was cancelled and this particular model was given to the engineer that did most of the design and construction of the glider.  It just happened that he (Eddy) and I served in the Navy together at NRL and he retired only about 30 miles from where I live.

The glider is an extension of the Swift design but it has been enhanced using computer modeling and more exotic materials.  It has a 54 foot wing and a full pilot fairing shaped like a bomb slung under the center of the wing.  It was designed for the thinner air of Mars so it had outstanding glide slope performance in the thicker air of earth – I was told it might get as much as 60:1.  As I was looking at it, I told Eddy, “This looks like the best unpowered glider ever made”.  He smiled and said, “Well, that’s partly right”.

Eddy showed me the lexan fairing, seats, wing construction and how it was going to be lifted by the hot air balloon.  He then showed me one feature on it that was so advanced that Eddy said it was just not worth the time and effort to train me on it.  This glider actually had an autopilot.  It sounds weird but at the same time it is a logical extension of putting more real aircraft control surfaces on a glider.  In this case, they created a set of unique double-layer panels on the wing skin that are ribbed with shape-memory alloy wires.  These wires respond to electrical signals from a tiny flight computer that uses a small polymer battery that is charged by a large lightweight flexible solar panel on the upper wing surface.  When these wires are heated up by a flow of electricity, they change shape into something that has been programmed into their molecules when the wire was forged.  In this case, the wires go from straight to being curved by varying amounts depending on the voltage applied.  I had read about memory wire but had never seen a practical application of it until now.

The flight computer, which is actually called the ASFS for auto-stability flight system, takes readings from an internal GPS, and a set of pitot static tubes, rate gyros and tension and torsion sensors located throughout the glider’s frame.  When fully deployed, it also uses a small sensor module that hangs on a thin wire from the tail of the fairing and drops down several hundred feet where it measures temperature, pressure and winds.  It also uses lasers that point in almost every direction and measure thermal density, humidity and air movement with as much accuracy as a Doppler radar, or more.  This gives the flight computer all the data it needs to keep the glider stable and to compensate for the shortened fuselage which can make rigid hang gliders more susceptible to spin and wing torque.

When activated, the ASFS (auto-stability flight system), as Eddy calls the flight computer, uses all these inputs to compute the optimum flight configuration to maintain a specified heading and attitude.  It can be set to seek out and maintain a steady altitude or a steady climb or descent on a given heading.  It accomplishes this by adjusting critical panels in the airfoil control surfaces by flexing these thin memory wires with computer controlled electrical pulses.  After measuring the entire envelope of air around the glider, it can optimize the wing for the best possible performance.  The end result is that in a head wind of more than 7 knots, it can climb steadily and when trimmed properly, it can achieve better than 90 MPH.
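I can only guess at what the ASFS actually computes, but an altitude-hold loop of this kind might look something like this toy proportional-derivative sketch (all gains, names and units are invented for illustration, not taken from Eddy’s system):

```python
# Toy altitude-hold loop in the spirit of the ASFS: read sensors, compute
# an error, and command a wing-panel deflection via the memory wires.
# Gains, names and units are all invented placeholders.

def panel_command(target_alt_ft: float, alt_ft: float,
                  climb_fpm: float, kp: float = 0.01, kd: float = 0.005):
    """Proportional-derivative correction, clipped to the range -1.0..1.0
    (full trailing-edge-down to full trailing-edge-up)."""
    error = target_alt_ft - alt_ft
    cmd = kp * error - kd * climb_fpm   # chase the target, damp the rate
    return max(-1.0, min(1.0, cmd))

print(panel_command(16000, 15800, 0))     # 1.0  (200 ft low: climb hard)
print(panel_command(16000, 16000, 100))   # -0.5 (on altitude: damp climb)
```

A real system would blend heading, airspeed and the laser wind data the same way, but the clip-and-damp structure is the essential idea.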

The system was designed because the hope was that this glider would be so efficient that it could be used to travel long distances or remain aloft for long periods of time while traversing the thinner air of the Martian surface.  Although it was created to be nearly fully automatic, it was also a one-of-a-kind prototype that cost nearly $30 million to develop.  I told Eddy I thought it was kind of neat to have a glider with an auto-pilot.  He frowned at me and said, “It’s an auto-stability flight system, not an auto-pilot”.  I said, “Yeah, whatever, it’s still an auto-pilot”.   Eddy said it was impossible to remove the ASFS so I should just not mess with it.   I agreed and pretty much forgot about it but I also marveled at the thought that this glider might actually be able to fly farther than I expected.

The flight position wasn’t like a hang glider but more like a real airplane.  The fairing covers two narrow seats of fabric drawn tight between the frame spars.  Once it is airborne, you close two small doors that look like bomb bay doors under the seat.  This creates a fully enclosed cabin of Mylar and lexan.  The ASFS control panel and other switches and lights are on a drop down panel above the pilot seat.

The right armrest has a tiny joy stick that controls the elevons and flaps.  It has the usual joystick movements but it also rotates and goes up and down.  As I moved it, I was shocked that nothing was moving – then Eddy reached in and flipped a switch on the overhead panel and everything shook and started working.  The damn thing was fly-by-wire!!  That means that there are no rods and cables going to all the control surfaces – just tiny wires that activate solenoids or motorized screw actuators to move the flaps and elevons and all the other surfaces this thing can control.  Eddy told me that NASA had figured out how to make the fly-by-wire more reliable and lighter weight than cables.

Eddy had rigged the seat so I could sit in the center and place my bag of goodies on the left and right of me in the same seat.  Since it was designed to carry two, I could take all my stuff and still be way under its normal max weight – making its performance even better.  I even found out that it was rigged to use a solar powered electric propeller that would extend out of a tube in the tail of the fairing but Eddy said it was not setup.  I’d be using just the glider aspects on this flight.

Since I was hopeful of being able to fly perhaps as much as a few hours after dark, I was pleased when Eddy showed me that it was rigged with a host of blinking LED lights and two small quartz-iodine beam lights for landing.   All were wired to the lithium-ion battery that was being charged by the wing top solar panel.  He also showed me that NASA had rigged a small generator to a small propeller to give me power after dark using the forward motion of the glider to spin the prop.  He warned me that it was designed to power the ASFS and a few LEDs so I should not be thinking of heating up a coffee cup with it.   As you might imagine, I was getting pretty optimistic about my flight and using all these goodies but since I had nobody tracking me – no ground crew – I had to plan on being on my own, no matter what happened.

Actually, having no ground crew was part of the plan.  I didn’t want to be reporting to anyone or be trying to meet some schedule or destination.  I specifically wanted to simply soar for as long as I wanted to and go wherever the winds took me.  No commitments, no obligations, no limitations.  I accepted that at some point, I’d have to figure out how to get back home but I knew I could rent a car or truck and manage somehow.  The idea of just going without a care as to where or when was a fantastic feeling.  When I tried to express this to Eddy, he frowned and told me not to break his glider.  Then he chuckled a little and said that I probably couldn’t break it if I tried.  I wasn’t sure what he meant but was glad he was letting me use it.

The flight in the hot air balloon was a fantastic experience all by itself.  We launched out of North Troy, Vermont, 2 hours before the sun came up so we watched the sunrise from about 10,000 feet up.  We had caught some very light winds out of the east and were moving toward Jay Peak, about 15 miles away.  Except for the occasional blast from the burners, it was totally silent.  Even from 5,000 feet up we could see and hear traffic and dogs barking.  When we finally got up to about 16,000, I started getting ready and as we hit 17,000 feet exactly, at 6:18AM, I pulled a cord and had a short free fall of about 20 feet before the wing caught the air and leveled off.

I was comfortable despite the 21-degree air temperature, warm in my baggie around my legs, snug behind the fairing under the huge wing above me and wearing a special helmet with a full head shield of clear plastic.  I had on gloves and enough layers of clothes to stay warm.  The wing was nearly three times the wingspan of my own hang glider but was completely silent – the Mylar and Kevlar skin was pulled so tight on the frame that there was no flapping or ruffling sound.  The only wind noise came from the two small round air vents on the left and right panels of the Lexan windows.  I pulled them closed and it was almost silent.  I thought I was in paradise.

I was very impressed with the performance of the glider – the rigid wing gave it good forward speed while the huge size and lift gave it a great glide slope.  It was hard to gauge the real glide slope while over the mountains since I was remaining in nearly flat flight or even gaining slightly in altitude as I traveled southwest along the ridge lines.  The vertical updraft winds from the slopes were pretty weak up this high but apparently strong enough that I was measuring only about 1 or 2 meters of altitude loss (sink) per mile of forward travel.  That was amazing performance.

About 50 miles south, as I was passing over Ricker Mt., I was still at 16,500 feet.  Another 100 miles south, as I passed over Mt. Wilson, I was still above 16,000 feet.  It was just before 9AM and I computed I was averaging almost 50 MPH.  This was a shock because I had not really been paying attention to my gauges or tracking my progress.  I was too caught up in the sights and the whole experience of it.  The helmet and fairing let me have a full field of view without feeling the wind and gave me no sense of my speed.

Around noon, I had descended to just under 15,000 feet but was able to use the dual ridgelines near Mt. Greylock, and some cooperative mountain waves and winds, to climb back up.  It took only two 360-degree spirals to get back up to just over 16,000 feet and then hold that down to Mt. Holy to make the crossing over the Hudson River valley just south of Albany, New York.  I had been fortunate so far to have caught a lot of good thermals and updrafts from the mountains.  The large high pressure center over central New York had given me very favorable low level winds out of the northeast, producing great lifting winds for my soaring.  Now I had to cross a huge valley of about 25 miles before I could get back to some updrafts from the Blackhead Range near Palenville, NY.

As I passed between Doll and Shaker mountains, just west of Pittsfield, I turned southwest toward Queechy Lake, which I could see reflecting the early afternoon sunlight.  The crossing was mostly uneventful; I caught a few updrafts but mostly relied on the lift of the wing to sustain as much altitude as possible.  I took the time to eat for the first time and drink some water.  A single engine private airplane circled around me a few times and I gave hand signals to the pilot to tune in my frequency on his VHF.  We spoke briefly, exchanging pleasantries and small talk.  He was surprised I had come so far and was still so high.  He told me that a low pressure system was developing over Maryland and moving northeast.  I laughed and told him I wasn’t likely to make it out of New York, let alone get as far south as Maryland.

The clear sunny skies were giving me some interesting thermals that were hard to read as I passed over the river near Catskill but within a few minutes, I was beginning to feel the updrafts coming off the sharp cliff face of North Mountain.  I had descended to about 9,800 feet, which worked out to be just over a 30:1 glide slope.  That was about what I expected but a lot less than what Eddy had led me to believe this glider was capable of.  I caught the ridge lift and circled in it for 30 minutes as I climbed back up to 12,000 feet.  It was about 1PM when I headed west again along the southern side of the Blackhead Range.  This route took me slightly north again up toward Prattsville but I was getting really good lifting air.  By the time I got to the southern tip of Schoharie Lake, I was back up to nearly 15,000 feet.

I turned southwest again, heading toward Roxbury and following highway 30, which runs along the ridgeline of a shallow range.  It did not give me the lift I wanted but it kept me level except for passing over the Pepacton Reservoir.  From there, the land flattened out so I made a bee-line for Elk Hill north of Scranton, where I circled for about 45 minutes to gain height and map out the rest of my flight.  It was now about 4PM and I was topping out at about 14,000 feet.  This was way better than I had planned so I started thinking maybe I would make it to Maryland after all.

I had been at this altitude several times before and now noticed it was considerably cooler up here, indicating that cooler upper-level air was moving up from the south.  This was the weather I had heard about earlier that was supposed to be over Maryland, but because I was so high, I was encountering the changes sooner.  I used my 3G-connected iPhone to get into an internet weather radar and flight information web site to see the latest patterns and winds.  I wasn’t surprised but was very pleased that the low pressure was creating lower-level winds out of the southwest that tapered off up to about 10,000 feet; above that, the upper-level high pressure winds were out of the northeast.  Although this mix of opposite flowing winds created a layer of turbulence, it was tolerable and I was confident my glider was strong enough to take the buffeting.

After gaining as much altitude as I could off Elk Hill, I headed south along the Scranton valley and then followed, as much as I could, the ridge lines that fan out over eastern Pennsylvania.  I found that I could catch a slight tail wind above 10,000 feet until I descended into the turbulence, and then I’d catch an updraft and gain a little.  I repeated this dive-and-climb maneuver all the way down the south-easterly ridge lines.

As I moved further south, I found the boundary layer between the lower Low and the upper High was moving up in altitude, indicating that I was moving more toward the center of the Low.  This created more and stronger headwinds that gave me good lifting air but slowed my forward progress to a snail’s pace.  By 7PM, I was still above 10,000 feet but I had only made it to just west of Hagerstown, Maryland.  The low pressure center was passing from my right (west) to my left (east) and the winds were shifting rapidly from head winds to tail winds.

By 9PM, I was getting really cold but I was moving with a ground speed of almost 80 MPH, encountering a lot of turbulence and descending faster than I had on any other part of the trip.  My GPS told me I was near Covington, Virginia but I was down to about 5,000 feet.  I was getting some reasonably good updrafts from the ridgelines but it was not enough to take me much higher.  My plan was to make it to Potts Mountain and circle it to get some altitude but the tail winds and turbulence were getting worse and I could see just ahead some rain with flashes of lightning.  I had been lucky in avoiding rain so far but now it looked like a wall I could not go around or over.  The lightning was giving me brief silhouettes of the skyline, trees and storm clouds.

It suddenly dawned on me that I might be going down in these rugged hills of Appalachia at night, in a storm, with no one knowing where I was or being able to help.  The terrain below me was all trees and mountains with no apparent clearing or opening big enough to plan a landing.  As I descended below 3,000 feet, I raised my helmet visor and took off my gloves so I could see better and not fumble with the tiny joystick.  I was able to see brief flashes of isolated houses and cars in the forest below.  As the tailwinds grew stronger, I was tempted to turn back north and just ride the winds to a better landing but I was determined to continue south.  It was a bad decision.

I was now looking up to the ridge lines and mountain peaks above my altitude as I skimmed the treetops along the valley walls.  The winds were jerking me up and down and my wing lights were now lighting up the trees below me.  To make matters worse, it started to rain.  I was now only a few feet above the trees and was desperately looking for any opening to land without wrecking the hang glider.  I had been following State Road 18, hoping to find a wide place in the road, when a large flash of lightning lit up the whole area around me and I was surprised to see a large farmer’s field ahead.  I aimed for the field, coming in fast and wet over the trees at one end, about to descend onto the field and try a pylon turn to land into the wind.

I had turned into the wind with less than 100 feet of altitude and was getting ready to flare for the landing – that’s when it happened.  The blinding light and loud sound of the lightning numbed me all over.  I felt the heat from the flash as if someone had suddenly put me naked under a dozen heat lamps.  Even before the flash and loud explosion began to subside, my vision closed down like I was looking through a tunnel and then all went black and I was out.  I didn’t have time to think about landing or falling or anything.  I just winked out.

The cold on my face was my first sensation.  Then I felt my cold hands.  I could see only black.  I opened my eyes and blinked but I still could not tell if I had my eyes open or closed.  All was black.  Then I turned my head a little and could see the wingtip lights on my glider.  There were LEDs that were pointing away from me but I could see that they were lit.  As I was trying to gather my senses and remember what had happened, I was again aware of the cold on my face and hands.  Just as I remembered the lightning, I jerked my head around to see what had been destroyed by the strike.  I figured the wing material would be shredded and all the electronics would be fried.  I whipped around in my seat as I surveyed the whole glider in the dim light of the wingtip LEDs.  I groped for a switch that would turn on some other lights that would let me examine the frame and my instrument panel.  When I flipped on the overhead flood and landing lights, I was surprised that everything looked normal.

My hands and face were now getting so cold from the cold foggy-wet wind that I was totally distracted from everything else, trying to get my face and hands warmer.  As I was fumbling with the visor and pulling my hands into the sleeves of my jacket, I realized I should be able to just sit up on the ground and get out my gloves and face mask from my duffle bag.  I bent my knees and pushed them through the bomb bay doors under me and reached for the ground …and it wasn’t there.  I moved and swung around in my seat harness to extend my legs but instead of touching the ground, the whole glider lurched down and to the right and I felt a rush of wind in my face and my legs were getting really cold.  It suddenly dawned on me that …I was still flying!

I quickly pulled my legs up, closed the hatch doors, straightened out and tried to stabilize the glider but it was already leveling out, so my movements of the joystick induced even more violent dips and rocking followed by more leveling out.  I tried to grab the joystick like it was my only lifeline to survival but as my panic subsided, I realized I was in straight and level flight.  I glanced at the instrument panel, thinking it had been fried by the lightning.  It said I was at 12,557 feet and climbing, on a heading of 119 degrees (slightly south of due east).  There was no way I could be that high.  I searched for other instruments to crosscheck the altitude and heading.  The variometer, the GPS and the backup barometric altimeter all agreed.  I really was that high.  I thought I must be insane.  This can’t be right.

As I panicked over whether I was crazy or dead, I saw a persistent flashing LED on my overhead dash panel.  It was labeled ASFS.  I suddenly realized the damn auto-pilot had flown me up this high.  It must have kicked on when the lightning flashed and taken advantage of the strong head winds at the landing site to gain altitude.  I had no idea what it was set to or where it was heading.  I didn’t even know how to turn it off or change its settings.  The flashing LED was just above a small hole which was probably an input jack to connect a user interface of some kind – something that Eddy did not give me.

Before I did anything, I needed to figure out where I was.  I searched around for what instruments were still working and reliable.  The panel lights were out so it was hard to see the screens of the GPS and the variometer since they were simply separate devices Velcroed to the dash panel.  The backup systems consisted of a magnetic compass, a backup barometric altimeter and a simple gyro-based artificial horizon combined with a turn-and-bank indicator, also called an AHTB.

My watch showed 3:14 AM.  I had been unconscious for nearly six hours!  The GPS put me an incredible 290 miles east of Norfolk, VA – out over the Atlantic!  I was nearly half way to Bermuda!  I crosschecked and there was no indication that this was wrong.  I was flying just above a cloud layer – skimming into it every few minutes.  That was the wet fog wind I was feeling on my face and hands.  There was a higher cloud layer above me that hid the stars.  Based on the last weather map I had downloaded into my iPhone, I was in the southernmost portion of the low pressure system that was hitting Virginia and Maryland.  This part of the cyclone would have winds that generally blew east to west, giving me headwinds while I was flying east.

Just as I was trying to figure this out, the glider pitched forward into a slight descent back into the lower cloud layer.  I pulled on the joystick but only succeeded in creating a lot of turbulence and rocking action.  The auto-pilot was obviously taking control and had decided to descend for some reason.

The cold wet fog of the cloud was making me hurt with the pain of the cold.  I figured the auto-pilot had gotten me this far, so I might as well let it steer a little while longer while I got out my gloves and face mask.  After 5 minutes of tussle with my bag in the dark, I was all snug in my heavy gloves and full face mask.  I noticed that we were moving at 97 miles per hour air speed but about 66 miles per hour ground speed – a 31 mph head wind.  The auto-pilot continued descending until the glider was vibrating all over and we had hit 123 mph – and then it slowly pitched up into a gentle ascent and we climbed back up to just over 13,000 feet.  Then we leveled out for a few minutes before starting another slow and shallow descent.

I figured out that the auto-pilot was using the probe on the long wire hung under the glider to figure the optimum glide path for using the winds to the best advantage.  We would descend into stronger headwinds to build speed and then slingshot up to gain altitude where the winds were lower.  This could be repeated over and over to maintain a relatively high altitude while not having to fly inside the clouds or in strong head winds.  This auto-pilot was smart.

It was time to try to turn around and get out of the danger of flying out over open water.  If I went down out here, I’d simply die without a raft or food or water.  The problem is that gliders don’t fly very well going downwind in high wind speeds.  A tail wind might give you a higher ground speed but it also gives you zip for lift, so you can’t maintain any altitude.  That is certainly not what I wanted to do when I was 300 miles from the nearest land.  I had to think very carefully about what to do.

The low pressure zone I was in had to be moving east or northeast fairly fast and I was sure that I was already in the far outer fringe of its southern edge – meaning it was just a matter of time before I flew out of it and into a zone with light and variable or even tail winds.  If I tried to stay with the weather front, I’d have to fly northeast but that would take me into an area of the ocean where there are no islands for hundreds of miles.  If I continued on, I might be able to make Bermuda but that would be another 400 miles and perhaps 6 or 7 hours of flight under ideal conditions.  I almost certainly would fly out of those ideal conditions within the next hour or two.  There was no option at all to turn south or west.  Turning north would keep me in the air but there was nothing in that direction to land on.  Staying on an easterly heading might get me to Bermuda but that was not very likely.

The option to look for a ship or contact a plane struck me as the best possible option.  If I could get through on my VHF transceiver or iPhone or CB, I could simply land near them and get picked up.  It was coming up on 4AM so I was not sure who might be listening, but I started getting out the equipment to make the attempt.

I figured while I was this high, the VHF might be the best bet because it gives me a line-of-sight (LOS) connection to any other VHF in the area – a radius of about 25 to 35 miles.  Then I got to thinking that 25 miles is not much in this big ocean.  The VHF was in my vest pocket and was already hooked up to my boom mike and head phones so all I had to do was turn it on.  I began broadcasting on various frequencies, asking for anyone to respond.  After talking for 3 or 4 minutes, I would listen for 5 or 10 minutes and then repeat.  I did this for more than an hour and then I heard a short beep beep and the LED on the box in my vest went out.  I had killed the battery.  I plugged it into the solar panel but I knew it would take hours to recharge and only after the sun came up.

By this time, the sun was beginning to lighten up the clouds in front of me and I was noticing that the auto-pilot was dipping further and further down in altitude and coming up less and less.  I was now just over 11,000 feet at the peak of the dive and climb cycle.  This was not good.  The sun was also becoming more clear and distinct on the horizon meaning that the cloud cover was getting thinner.  When the clouds are gone, the head wind would probably go also.

Now I was beginning to panic.  I spent several minutes thinking about my position and then trying to figure out exactly where I was.  At a few minutes before 6AM, I figured I was about 127 miles west-north-west of  Bermuda, moving at about 41 mph ground speed on a heading that would take me just north of the island.  At this speed, I needed to keep this up for about 3 more hours.

I don’t know why or how but the auto-pilot seemed to be making corrections to take me directly to Bermuda.  My guess is that it had a built-in GPS with maps that were used for its testing phase.  It was designed for flying autonomously on Mars so it probably had a logic circuit that seeks out the best landing sites.  That was all just a guess but it seemed to be working that way for now.  I figured I’d let it continue while I tried the CB and iPhone.

The face mask and big gloves I was wearing made the effort ten times more difficult.  I had to pull the duffle out, unzip it, dig around to find the radios and then operate them with those big clumsy gloves on.  Then I remembered that cruise liners set up cell phone systems on their ships so that passengers can call on their own cell phones.  These LOS systems could reach out 50 or 75 miles from my altitude.  With luck I might even be able to pick up Bermuda.  This made me excited as I pulled at the bag to see where the iPhone was.  I was bending over the bag, pulling against my harness straps, trying to see past the helmet face mask that was riding up against my chest and blocking my vision.  I spotted the iPhone out of the corner of one eye and reached for it.  I should have slowed down a little.

My glove straps caught on the zipper of the bag.  I jerked my hand to get it free and watched as the iPhone flew in a slow arc up over my legs and down onto the landing hatch doors under me.  I jerked forward as fast as I could to grab it before it slipped between the doors and fell.  As I jerked forward and reached between my legs for the phone, the seatbelt harness tightened and jerked just as hard to snap me back into my seat.  It also shook the glider enough that the landing doors opened just a little and the phone gently bounced out of the opening and quickly disappeared below and behind me.  It was gone.

I stared at the patch of clouds where it had fallen as if to see if it would come back.  What stupidity.  I was angry.  It was those damn gloves that made me lose it.   I shook and waved my hands wildly trying to shake the gloves off.  At nearly the same time, they both flew off into the air, down between my legs and out the landing doors.  And that damn visor on my helmet that kept me from seeing clearly.  I ripped it off and flung it away.  I was furious and felt somewhat relieved that I had punished the culprits that had caused me to lose my only chance at a radio contact.

My relief did not last long.   Reality set in as I suddenly became aware of the intense cold on my hands and face.  The temperature was 19 degrees F on the gauge and the wind chill from the open landing doors probably brought it down to well below zero.  I carefully dug out my wrap-around sun glasses and the CB radio.  I zipped up the duffle bag and tied off the CB to my vest and plugged in the boom mike and turned it on.  I then pulled my leg baggie up and pulled my coat up over my neck and lower part of the helmet so only the edges around my eyes and the sun glasses were exposed to the wind.  I stuffed my hands into my coat pockets and started transmitting on the CB.

The idea of the CB made sense when I was thinking of flying over highways where truckers still use these radios.  It made a lot less sense out over the Atlantic Ocean, a hundred miles from the nearest land.  I transmitted, listened, changed channels and then repeated for an hour with not even a hint of a response.  I gave up.

I had been so busy with being angry about dropping the iPhone and trying so hard to make the CB work that I had not noticed that over the past hour, I had lost almost half my altitude.  I was now down to 5,200 feet and descending more.  The head winds and clouds had gone and the warmer lower altitude air and the bright sun felt good on my face but now I could clearly see the ocean below me.  There were whitecaps on huge waves leaving long tails of foam on the surface.  I was still a mile above the waves but they looked big with deep valleys between the white crests – so deep that the morning sun was casting deep shadows that darkened the wave troughs even more – making them seem even deeper.

I looked ahead, hoping to see land but it was still just over 80 miles away.  The ASFS auto-pilot appeared to be still working because I could see panels in the control surfaces changing position as the memory wires flexed.  The sunlight was giving a boost to the batteries and it seemed that more and more of the panels were being manipulated as it got brighter.  I guessed that the system had a built-in power saver for night flight.  I watched the GPS and the altimeter closely for a few minutes and figured out I was descending at about 40 feet per minute.  That worked out to be 2,400 feet per hour.  Being off in my estimate by a few feet per minute could make the difference between landing miles sooner and having plenty of time.  At my current speed and with no changes, I figured I would land about 10 miles short of Bermuda – close, but too far to swim in the cold open ocean.  I also remembered that Bermuda is known for having a high concentration of Great White sharks.
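For readers who want to check the dead reckoning behind that "10 miles short" estimate, here is a minimal sketch.  The altitude, sink rate and distance to land come from the story; the ground speed is not stated at this point in the flight, so the figure used here is an assumption chosen only to illustrate the arithmetic:

```python
# Dead-reckoning check for the numbers in the story (illustrative only).
# Altitude, sink rate and distance are from the text; the ground speed
# is an ASSUMED value, since none is given at this point in the flight.
altitude_ft = 5200          # roughly a mile above the waves
sink_fpm = 40               # observed descent rate, feet per minute
distance_to_land_mi = 80    # distance remaining to Bermuda
ground_speed_mph = 32.5     # assumed ground speed

minutes_aloft = altitude_ft / sink_fpm                  # time until splashdown
range_mi = ground_speed_mph * minutes_aloft / 60        # distance covered in that time
shortfall_mi = distance_to_land_mi - range_mi           # how far short of land

print(f"{minutes_aloft:.0f} min aloft, landing about {shortfall_mi:.0f} miles short")
# prints: 130 min aloft, landing about 10 miles short
```

The same arithmetic also shows why a few feet per minute matters: at this speed, each extra 1 ft/min of sink costs roughly three minutes aloft, or about a mile and a half of range.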

I used the GPS to plug in Bermuda as a waypoint and it calculated and then pointed about 5 degrees to my left – meaning I was not heading directly at Bermuda.  It was hard to determine whether this was the auto-pilot’s correction for a crosswind, the GPS pointing to another part of the island, or simply a mistake that would make me miss the island altogether.  I had to think of what to do but this was never a situation that I had expected or trained for when I left on this adventure.

Do I try to take this off auto-pilot and steer it myself, or let it navigate and trust that it will take me where I need to go?  Not having any way to communicate with or control the ASFS was making me very nervous since I didn’t know if it would take me to Bermuda or not.  I searched for how it was wired so I could disable it if I needed to.  To see the back of the control console, not thinking, I pulled myself forward in the seat and the whole glider took a dive, followed by the ASFS trying to correct with the flaps, stabilator and elevons to improve lift and reduce the pitch forward and down.  I let go of the joystick and let the glider re-stabilize but I had lost nearly a hundred feet of altitude – that would mean I’d hit the water sooner and have to swim several hundred yards more.  I’d have to be more careful if I wanted to live to see tomorrow.

I did find the cable that connected the control console to the lithium battery pack and solar panels above my head.  Just above my left shoulder was a connector that would disconnect the power.  I was sure that would kill the auto-pilot but I was still not sure if that was the right thing to do.  What could I do that would give me greater range?  I had to believe the ASFS was giving me the optimum flight profile because if it wasn’t, I was not sure I could do any better.  I imagined all those old movies of airplanes that got shot up in the war over Germany – oh damn! Of course, they dumped weight.  I could do that, but what could I throw out that would make a difference?  A few pounds would have no effect, so throwing out my MP3 player or even the CB would do nothing.  I started to grab for the duffle bag but remembered the last time I moved fast, so I carefully moved it from my side to in between my legs.

The shift in weight distribution as I brought the bag forward changed my flight profile – the angle of attack increased and I pitched up just a little – but the ASFS compensated and I kept stable with the same shallow descent.  I rummaged through the bag and took out my food bag.  I figured I could toss most of that – the water, sandwiches, candy bars and other stuff – then I figured why not get rid of it by eating it.  I would get it out of the bag and also feed my hungry stomach.  I began eating the sandwich in big bites and guzzling the water.  I could feel the water bottle getting lighter and tossed the sandwich bags as I finished them.  I imagined how much weight I was saving and how much less I’d have to swim….oh what an idiot!  I wasn’t dumping any weight; I had just moved it from the bag to my stomach.  I dropped the rest of the food and cussed at myself for several minutes for being so stupid.

I then reached in and grabbed my camping bag.  I had thought I might have to land and camp somewhere in the mountains for a night or two before I got picked up.  I had a Mylar sleeping bag and a large plastic tarp, a tiny pellet cook stove with a metal cup and a Leatherman multi-tool.  I figured all of it weighed less than 3 pounds.  I grabbed the multi-tool first.  It was one of those really expensive, well-made tools that have a gazillion blades and attachments – pliers, hammer, knife, file, axe, saw…all kinds of stuff.  It was a gift from my sister and probably cost $150 or more.  It could do so many things that I figured I might need it, so I tossed it back into the duffle bag.  The pellet stove was just a simple metal “X” that held a fuel pellet and the metal cup.  I tossed it even though it only weighed a few ounces.  I kept the package of fuel pellets and the cup – I figured they might be useful if I ditched in the ocean.  The plastic tarp was a bright yellow and would make a great flag to wave for help, and the Mylar sleeping bag was rolled into the size of a tennis ball and looked like polished silver – a great reflector of sunlight for a rescue flag.  I tossed all of them back into the bag.

I grabbed the CB and immediately tossed it out the hatch.  I felt good because it had been of no use to me, and I imagined the whole glider rose several feet as I watched it fall toward the waves below.  Oh damn!  The waves below looked so much bigger now.  I snapped my head to the altimeter.  I was down to 3,870 feet and the GPS still pointed just left of straight ahead and said there were 61 miles to go.  Some quick mental calculations showed I was still on about the same glide slope I had been on and still destined to hit the water about 10 miles short.  Of course I was rounding off and doing all this in my head so I wasn’t really sure how accurate it was.  Those waves looked a lot bigger now and I had a chill thinking of swimming in those cold white-capped waves.

I needed to get serious about this weight toss.  I figured maybe even a little bit of weight loss might help, so I stuffed the remaining items from the duffle bag into my pockets and coat vest and unclipped the bag and let it fly away.  As I searched the tiny plastic cabin for something else to tear off and throw away, it dawned on me that this glider was designed with no excess of anything.  Everything was a functioning essential element for flight on….on….Mars!  What did I not need to fly on earth?  Or better, what did I not need for the next 50 minutes of flying over the water toward Bermuda?  My heavy winter boots – I had needed them at 15,000 feet but I wouldn’t see that again.  I carefully bent and pulled up my knees and slipped off the boots and let them fall through the bomb bay doors.  My helmet – I was inside a Lexan plastic cockpit and crossing below 3,000 feet to land in water – out went the helmet.  My heavy insulated leather coat – better keep that.  I threw out the rest of the candy, the food, the thermos of coffee and the expensive thermos water bottle.  I scanned again and just could not see anything that wasn’t part of what made me fly.  I kept hoping that I would suddenly zoom up into the sky but those waves just kept getting bigger and bigger.

I began checking the descent and GPS again.  It was 8:13AM and I was now 39 miles from the northwestern tip of Somerset Island, off the western tip of Daniel’s Island.  The GPS was pointing to the closest point of land but the glider was still pointing to Ireland Island North, which was about two miles further than Daniel’s Island.  When I checked the wind as best I could and watched the waves below me carefully, I concluded that the ASFS was correcting for a slight cross wind out of the north and might be heading for Daniel’s Island.  I laid in a waypoint in my GPS to create a direct track from my current position to Daniel’s Island so I could check the path I was taking.

I also called up some stored maps on the GPS and noticed that the waters on the northwest side of Bermuda were where all the diving reefs were and that the area was fairly shallow – from 30 to 90 feet deep.  Not shallow enough to wade ashore but perhaps shallow enough to knock down the large ocean waves and keep out the deep water white sharks.

I calculated that I had picked up a little speed, to 47 MPH, and flattened my glide slope somewhat, descending at about 39 feet per minute.  That put me a lot closer but I would still need to do some swimming.  I had not touched the joystick for hours for fear of messing with the autopilot system but I decided to see if I could gain any altitude – perhaps with a zoom-and-climb maneuver.  I grabbed the joystick and the glider immediately started resisting my movements.  It shuddered as it tried to respond to two sets of control inputs at once – mine and the ASFS’s.  It wasn’t working so I stopped.  The only choice was to jerk out the wires and fly fully manual or let it continue.  I decided to let it fly for a while longer.

There wasn’t much to do except sit there and watch the water get closer.  The sky had cleared and the sun was blinding, plus I was getting the mirror reflection off the water, making it hard to see in the direction of where land might be.  As I came within 20 miles, I began to see the shallower water and a bump on the horizon that was probably Bermuda.  I was down to 1,500 feet and I could see that the ASFS was working much harder now.  It was making flight surface corrections every few seconds, making the Mylar skin of the wing vibrate and flex as it tried to maintain level flight in the rough air close to the water.

I thought about the long wire sensor extending down from the glider.  It would act as an anchor and drag me down fast, so I got out my multi-tool and prepared to cut it when it was near the water.  I also began thinking of what I might do after I hit the water.  If a boat was nearby, I needed only to stay afloat until it arrived, but if not, I had to get to dry land somehow.  Swimming was an option, but what could I use for a float?  That duffle bag would make an ideal float by simply holding the zippered end closed….yeah, that duffle bag that I threw out a few miles back.  Oh!  Then I remembered the Mylar sleeping bag was just a huge bag that I could fill with air and float on easily.  I felt for it in my jacket pocket and felt secure that it was there.

I was now less than 1,000 feet up and the ASFS was working so hard that the fly-by-wire actuators were making a constant clacking and humming sound as they fought the increasing turbulence from the ocean waves and surface winds.  The glider wing tips were weaving and dipping left and right and pitching up and down.  I braced myself on the frame tubes around me and held on.  The sun was bright and shining right in my face, but I could see areas of shallower water and reefs that might be anywhere from 10 to 90 feet down – the water was so clear it was hard to estimate depth.

I was watching the small bulb that was the sensor at the end of the long wire extending down from the glider.  It was approaching the water and I reached down with the cutters – getting ready to cut the wire when it touched.  I wanted to wait as long as possible because I did not know what the ASFS would do once it lost that sensor input.  I grabbed the joystick and leaned under the seat to cut the wire.  It was hard to see around the seat and thru the bomb bay doors and past the frame rails.  I could only see it with one eye at a time, so it was hard to estimate how high it was above the water.  During one violent dip in the rough air, I saw the sensor hit the water and make a small wake of white water.  The reaction by the glider and the ASFS was immediate and dramatic.

I had not even cut the wire yet, but the glider jerked several times and I could hear several new actuators moving new areas of the wing.  I spun around in my seat trying to see what was going on, but it was hard to see the upper parts of the wing that seemed to be making the noise.  As I moved, the glider was shifting and changing its angle of attack – the pitch of the front of the wing versus the back of the wing.  I was also shocked that it was now descending at a much more rapid rate.  I was still at least 10 miles from dry land – that is a long way to swim.  I was now thinking that every second I stayed in the air was a few yards I would not have to swim.

I noticed some new LEDs had lit up on the control panel – one was flashing red and one was flashing yellow.  I had no idea what that meant but I was sure it was not good.  The glider was now in a sharp descent that was increasing my speed and moving me very fast toward the water.  I cut the long wire sensor extending down from the glider but it had no effect – except the yellow flashing LED on the dash stopped flashing and was now on steady.  The descent continued.

I figured I was headed for a major crash into the water and I regretted tossing my helmet and gloves.  I figured I was about 50 feet from the water when the glider suddenly pulled up from the dive and slipped into a fast cruise just above the waves.  I was doing 57 MPH and some of the water was hitting my windshield, almost like rain.  I was now passing over exposed reefs and very shallow sand bars and decided that it would not be so bad to ditch out here as I could probably make it to land.

As I looked up, I could now clearly see the island, buildings, telephone poles and cars.  I was about two miles out but I was flying almost level with the buildings.  The glider was going up and down like a roller coaster now and I noticed it was in sync with the waves.  The ASFS was using something called “ground effect” to keep me up.  As the glider came down toward each wave, the air between the glider and the water got slightly compressed and pushed back up against the glider – giving it a boost in lift.  I had only about a minute to go to make land and I was now, for the first time, convinced I would make it.  My only concern was that I was still moving at about 40 MPH and that would make for a mighty hard landing on land or water.

I grabbed the joystick and tried to move it, but I could feel the ASFS fighting to control the glider.  I figured even if I mucked it up, I had still made it to land.  I pulled back hard on the joystick and the glider shot up to about 200 feet and then nearly stalled and dove back to the water – leveling out just 10 feet above the wave crests.  I was now passing over the surf of the beach – which was not very high because of the long shallow reef that extended out from the northeast side of the island.  The glider passed over Daniel’s Island and was coming in to a narrow beach with a long row of small identical cabins.  The glider banked southwest to parallel the beach and then softly and lightly settled onto it.  We landed so softly that the glider rolled about 20 feet on the single rear wheel that hung down behind my seat.

It was 9:05AM and the beach was smooth and deserted.  I had approached so low that I was not picked up by any radar, and I must have hit a part of a beach resort that was closed.  I grabbed the frame bars and pushed my feet down thru the bomb bay doors onto the sand.  Boy, did that feel good.  I lifted the glider up and forward, stepped back out from under it and let it back down to the ground.  I was standing on the beach and it felt wonderful.  The glider had saved my life – what a story I have to tell Eddy – if he doesn’t have me arrested for messing up his $30 million glider.

I had landed almost exactly 1,000 miles from where I started, but the path I took to get here was closer to 1,650 miles.  I sat down on the beach next to the glider and just enjoyed the feeling of being alive.  As I sat there, all I could hear was the small beach waves and a few birds down the beach.  Then I heard a weak voice say, “Hey Gabe!  Are you still alive?”  I looked around but there was no one in sight.  “Hey Gabe, talk to me.”  The sound was coming from the glider.  I jumped up and stuck my head into the bomb bay doors and faced the dashboard just as it said, “Hey Gabe, how you feeling?”  The sound was coming from the dash panel, so I just faced it and said, “Who is this?”  “Gabe, it’s me, Eddy.  We have been tracking you since you left.  Someone will be there in about 30 minutes to pick you and the glider up and bring you back to Vermont.”  I was shocked.  “Eddy, how…what…..why….DAMN….you son of a bitch…..why didn’t you tell me?”  Eddy replied, “Gabe, NASA picked you for this test two years ago.  You fit the profile for a typical trained astronaut and we needed this glider tested in the real world.  We have a 727 waiting for you at the airport at the north end of the island.  We’ll fly you back, and when you get home, you’ll get a new Swift S-1 motorized glider and a payment of $20,000.  Is that OK with you, Gabe?”  All I could say was “yes”.

Our Destiny has been Modeled in a Computer

In addition to the space program, NASA funds numerous R&D efforts to examine all aspects of space travel, life in space, the use of technology and the existence of other life out there.  In the realm of SETI, much of the R&D has to do with one of two areas.  One is how, and under what conditions, we might actually communicate with other beings; the second is how we humans will react to the news that there is life out there.  Somewhere between these two ideas are questions like “Why have we not heard from any other life yet?” and “What level of technology development is necessary to make communications possible?”  Such questions often have to cross over into the realms of sociology, psychology, evolution and logic.  Surprisingly, such analysis lends itself rather easily to computer modeling and to quantifiable analysis.  NASA has been using computer analysis of these kinds of subjects for years and has developed some very good models that allow for the simulation of the past, present and future actions of society, technology and the psychology of the evolving brain.

These models are validated by putting in data about what we knew in 1500 and then letting them predict what would happen to society and morals by 1800.  When a model got it wrong, it was tweaked and run again.  This process was repeated thousands of times until the model predicted what actually happened in 1800.  Then the process began again for different dates.  After thousands of trials like this, they have created social models that very accurately predict the interplay of sociology, psychology, evolution and technology.

What is not as well known is that once validated for long spans of time, the model is further refined for shorter and shorter periods until it can predict social responses on the order of a few decades or less.  Unlike weather modeling, which gets more accurate as the period gets shorter, social modeling becomes more complex because there is no averaging of responses over time.  Short-term, knee-jerk reactions to immediate news reports can vary the responses wildly, so the processing power needed grows enormously as the period shrinks.  In recent years, the power of computers has allowed this model development to reduce the prediction period down to less than 10 years with very high accuracy, and to under 5 years with accuracies as high as 70%.

It took an incident for NASA to realize how dangerous this model had become.  It was just too tempting to keep from using it to predict the stock market, and at least one scientist made a fortune when he used the model to accurately predict the market drop at the end of the third quarter of 2008.  A lot of work went into covering that up, and then NASA pulled the black curtain over the whole project.  It has been in deep cover ever since.  I became aware of it because I was the author of a statistical analysis model that could accurately validate the algorithms of other statistical models.  I created my model when I was working for NRL (and later refined it while working for DARPA) and used it for validating the modeling of new weapons systems in a simulated operational environment.  My model was created to be adaptable enough to stress-test other models, and NASA knew that if their model passed my analysis, they had a good algorithm.  As a result of my involvement, I had full access to their model and tons of reports and prior studies it was used on.  The following is one of the more shocking discoveries I made.

First, let me say that after running literally millions of Monte Carlo runs on the NASA model, I validated it to be accurate in its computations.  I found that since 2008, its accuracy has increased 800-fold, due mostly to an increase in the processing power of the computers it is running on – a Cray XT5 (Jaguar) now.  It uses a self-correction subroutine that validates its analysis every few seconds – after each 200,000 quadrillion calculations.

I did not, and do not, know exactly what algorithms it used, but for my analysis of their model, I did not have to know that.  If you ask a black box what 2 times 2 is and it gives you an answer of 4, it makes no difference if there are 5,000 computers or 200 monkeys in the box.  If you ask it 600 million such questions and it gets them all right, you can validate its ability to calculate accurately.

I found that the very existence of this model is a huge secret – even from Congress.  The operators and users are screened and watched every day by the Secret Service so they do not abuse the model.  In one document, I discovered that they had named the model “Agora” which in Greek means “a place of assembly and reason” and it was where the famous Greek thinkers (Socrates, Plato and Aristotle) met and thought about things.

I read some of the actual R&D that was performed with Agora since 2008 and found it all fascinating, but then I was allowed into one vault that had numerous bright orange folders marked “NFPR” and “TOP SECRET” and “EXEMPT FIA”.  I had to ask and was told that NFPR meant NOT FOR PUBLIC RELEASE, and that it meant “forever”.  The FIA referred to the Freedom of Information Act; these reports were all exempt from ever being obtained using the FIA.  This got me very curious, so of course I had to read these reports under the excuse that I needed the details to validate my model analysis.

These NFPR reports were all about the same R&D project, which was code named “ANT KA”, shortened to ANTKA – the Hindi word for TERMINAL.  The meaning of that name was not apparent until I had read most of the report, and then it was ominous.

ANTKA began with a simple question: “Why have we not been able to detect any signals from any other planets?”  It spent many pages showing that with our current detection capability, we should be able to pick up some portions of some elements of the electromagnetic spectrum from other intelligent life.  I was astonished that it said we could do this from as far away as 2.5 million light years.  That is a really long distance – it reaches out to just over half of all the galaxies in what is called the Local Group, which is estimated to include about 1.25 trillion stars.  No one knows how many planets are in that space, but if we use Frank Drake’s formula with very conservative values for the unknowns, we come up with about 280 million planets with life and about 3 million with intelligent life capable of sending us a message using some aspect of the electromagnetic spectrum that we are capable of receiving.  This sounds like a lot, but Agora validated the estimate with millions of runs of Monte Carlo simulations of all the different kinds of stars and systems in the Local Group.
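The arithmetic behind that estimate is simple to sketch.  The parameter values below are my own illustrative guesses, chosen only to land near the report’s figures – the actual inputs used by the ANTKA analysts were never disclosed:

```python
# Rough Drake-style estimate over the Local Group.
# All fractional parameters below are illustrative assumptions of my own,
# picked to reproduce the report's rough totals -- not the report's inputs.

stars         = 1.25e12   # stars in the searchable half of the Local Group
f_planets     = 0.5       # fraction of stars with planetary systems
n_habitable   = 0.002     # habitable planets per planetary system
f_life        = 0.224     # fraction of habitable planets where life arises
f_intelligent = 0.0107    # fraction of those that develop intelligence

planets_with_life = stars * f_planets * n_habitable * f_life
intelligent       = planets_with_life * f_intelligent

print(f"planets with life:     {planets_with_life:.3g}")   # ~2.8e8
print(f"with intelligent life: {intelligent:.3g}")          # ~3e6
```

Nudging any one fraction up or down swings the final count by orders of magnitude, which is presumably why Agora backed the point estimate with millions of Monte Carlo runs.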

Having established that there should be signals out there but we are not receiving any, the ANTKA study began trying to determine what was wrong with their reasoning.  Several volumes are filled with various ideas that were tried, analyzed and then discarded as not accounting for what is being observed.  Finally, they began looking at the Drake model itself.  If it was wrong, then perhaps the number of viable planets with life is much smaller.  It was at this point that the Agora model was turned onto the future actions of society, technology and the psychology of the evolving brain.  They built dozens of model variants to examine all aspects of society and technology and slowly narrowed their analysis onto the issue of how fast a society and its technology mature toward the threshold that would allow interplanetary communication to take place.  It was here that the analysis got really scary.

The analysts created models that simulated the growth of society and its technology at a pace that has been verified by countless studies as being an accurate representation of what humans on this planet have exhibited since life first began.  The model included the simulation of such aspects as the diversity of cultures, religions and languages as well as the maturity of social norms and morals.  It accurately modeled this development from our earliest forms of social civilization up to modern times and then it projected it beyond the present into our future.

After doing that, they created a parallel model that mapped out the development of technology over time as our brains and our society developed.  This model included technology in all its forms as it would affect building shelter, food development, transportation, weapons and leisure activities.  It accurately modeled technology development from our earliest stone tools up to the modern machines and digital systems of the present, and then projected it beyond the present into our future.  When the two models were joined and the outcomes combined into a common destiny, the result was shocking.

What the combined model predicted was that our ability to create very advanced weapons far exceeds our moral and social ability to safely manage those weapons.  The model predicted that the society would self-destruct at a point that is just about where we are today.  In other words, it said that we are incapable of making safe decisions about the use of the powerful weapons we are capable of creating, and we are now at the exact point at which these models predict that we will self-destruct.

The analysts ran Monte Carlo simulations allowing multiple variables to be flexed by a wide margin, and the results always ended the same – with the destruction of the modeled society.  The Agora model was set up to run hundreds of millions of these Monte Carlo simulations, and the usual bell curve was created, but with such sharp and steep sides that it virtually proved that, except for impossible values of some of the major variables (population growth, education levels, financial markets, etc.), we are destined to self-destruct because we don’t know how to deal with our own technology.
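To make the flexing idea concrete, here is a toy Monte Carlo harness in the same spirit.  The “society model” inside it is a deliberately crude stand-in of my own invention – two racing growth rates and a collapse threshold – not anything from Agora; only the simulation structure (sample the variables widely, run the model, tally the outcomes) reflects what the report describes:

```python
import random

# Toy Monte Carlo sketch of flexing model variables.  The society model is
# a crude placeholder: technological capability and moral capacity each grow
# at a noisy rate, and collapse occurs if capability outruns restraint.

def society_survives(tech_rate, moral_rate, years=200):
    """Return False if destructive capability ever far exceeds moral capacity."""
    tech, morals = 0.0, 0.0
    for _ in range(years):
        tech   += tech_rate  * random.uniform(0.5, 1.5)
        morals += moral_rate * random.uniform(0.5, 1.5)
        if tech - morals > 50:        # capability far beyond restraint
            return False
    return True

random.seed(42)
runs = 10_000
collapses = sum(
    not society_survives(tech_rate=random.uniform(0.5, 2.0),
                         moral_rate=random.uniform(0.1, 1.0))
    for _ in range(runs)
)
print(f"collapse rate: {collapses / runs:.1%}")
```

Even in this toy, the outcome distribution is lopsided: unless the moral growth rate is sampled close to (or above) the technology rate, the run ends in collapse – a miniature of the steep bell curve the report describes.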

Among the many scenarios, the actual source or cause of our demise changed from bombs to disease to starvation and others but it always happened.  Speeding up or slowing down one part of the model or the other only delayed or accelerated the end result.  They also tried to imagine what might change in an alien society but they soon discovered that if you create any life form that is capable of any given technology, that same life form is incapable of safely managing it.  When the technology reaches the point that it is capable of destroying large portions of the society, then the society dies and it makes no difference what the technology is or what the form of life is that created the technology.  Essentially the model proved something that social psychologists have known for years – the portion of the brain that creates new ideas develops well in advance of the portion of the brain that makes moral judgments and tempers the aggressive responses of other parts of the brain.  It seems that this is simply a fact of life in all forms – it’s just that we are the first species that has gotten to the point of being able to destroy ourselves.

The ANTKA analysts concluded that the reason we have not received any messages from other planets is that no other life on any other planet has survived long enough to create those messages for any appreciable portion of its existence.

The analysts went on to point out that if this report were made public, it would create mass panic and social unrest and could even precipitate the exact destruction that their models predict.  They backed up their conclusions with the results of thousands of simulations that, they said, not only validate their conclusion but make it virtually inevitable and imminent.

There was one dissenting vote by one of the analysts.  He wrote that humans were more resilient than the model predicted and that if they knew the results of this research and modeling, they would respond by changing their behavior and avoiding the predicted self-destruction.  He noted that this very scenario had been modeled and still resulted in the end of society, but he was not convinced, even though he also concluded that the model was validated and accurate.

As the author of this exposé, I agree with that one dissenting analyst.  I think he was right.  I think this, and have acted on this belief, for one simple reason that I think justifies breaking all the security and secrecy barriers involved.  That reason is that….we have no other alternative.  If I am wrong, we all die.  If I am right, we all live.  What would you do?

The Government knows Everything You have Ever Done!

Sometimes our paranoid government wants to do things that technology does not allow or that they do not know about yet. As soon as they find out, or the technology is developed, they want it and use it. Case in point is the paranoia that followed 11 Sept 2001 (9/11), in which Cheney and Bush wanted to be able to track and monitor every person in the US. There were immediate efforts to do this with the so-called Patriot Act, which bypassed a lot of constitutional protections and existing laws and rights – like FISA. They also instructed NSA to monitor all domestic radio and phone traffic, which was also illegal and against NSA’s charter. Lesser known was the hacking into computer databases and the monitoring of emails, voice mails and text messages by NSA computers. They have computers that can download and read every email or text message on every circuit from every Internet or phone user, as well as every form of voice communication.

Such claims of being able to track everyone, everywhere have been made before and it seems that lots of people simply don’t believe that level of monitoring is possible. Well, I’m here to tell you that it not only is possible, but it is all automated and you can read all about the tool that started it all online. Look up “starlight” in combination with “PNNL” on Google and you will find references to a software program that was the first generation of the kind of tool I am talking about.

This massive amount of communications data is screened by a program called STARLIGHT, which was created by the CIA and the Army and a team of contractors led by Battelle’s Pacific Northwest National Lab (PNNL) at a cost of over $10 million. It does two things that very few other programs can do. It can process free-form text and images of text (scanned documents), and it can display complex queries in visual 3-D graphic outputs.

The free-form text processing means that it can read text in its natural form, as it is spoken, written in letters and emails, and printed or published in documents. For a database program to do this as easily and as fast as it would for the defined records and fields of a relational database is a remarkable design achievement. Understand, this is not just a word search – although that is part of it. It is not just a text-scanning tool; it can treat the text of a book as if it were an interlinked, indexed and cataloged database in which it can recall every aspect of the book (data). It can associate, cross-link and find any word or phrase in relation to any parameter you can think of related to the book – page numbers, nearby words or phrases, word use per page, chapter or book, etc. By using the most sophisticated voice-to-text conversion, it can perform this kind of expansive searching on everything written or spoken, emailed, texted or said on cell phones or landline phones in the US!
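STARLIGHT’s internals have never been published, but the core idea of treating free-form text as an indexed, queryable database can be sketched in miniature with an inverted index – every word mapped to the page and position where it occurs.  The sample pages below are invented:

```python
import re
from collections import defaultdict

# Miniature illustration of indexing free-form text: an inverted index maps
# every word to (page, position) pairs, so word use, location and proximity
# become ordinary database queries.  This shows only the idea, not STARLIGHT.

def build_index(pages):
    index = defaultdict(list)
    for page_no, text in enumerate(pages, start=1):
        for pos, word in enumerate(re.findall(r"[a-z']+", text.lower())):
            index[word].append((page_no, pos))
    return index

pages = [
    "The pilot reported ground lights near the runway.",
    "Ground fog obscured the approach lights at dusk.",
]
idx = build_index(pages)

print(idx["lights"])        # every page/position where "lights" appears
print(len(idx["ground"]))   # word-use count across the document
```

From an index like this, “find every phrase within five words of X on pages 10–20” reduces to filtering tuples – which is why free text can be queried as if it were a relational table.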

The visual presentation of that data is the key to being able to use it without information overload and to have the software prioritize the data for you. It does this by translating the database query parameters into colors and dimensional elements of a 3-D display. To view this data, you put on a special set of glasses similar to the ones that place a tiny TV screen in front of each eye. Such eye-mounted viewing is available for watching video and TV – giving the impression you are looking at a 60-inch TV screen from 5 feet away. In the case of STARLIGHT, it gives a complete 3-D effect and more. It can sense which way you are looking, so it shows you a full 3-D environment that can be expanded into any size the viewer wants. And then it adds interactive elements. You can put on a special glove that can be seen in the projected image in front of your eyes. As you move this glove in the 3-D space you are in, the glove moves in the 3-D computer images that you see in your binocular eye-mounted screens. Plus, this glove can interact with the projected data elements. Let’s see how this might work in a simple example:

The first civilian (unclassified) application of STARLIGHT was for the FAA, to analyze private aircraft crashes over a 10-year period. Every scrap of information was scanned in from accident reports, FAA investigations and police records – almost all of it in free-form text. This included full specs on the aircraft, passengers, pilots, type of flight plan (IFR, VFR), etc. It also entered geospatial data that listed departure and destination airports, peak flight plan altitude, elevation of impact, and distance and heading data. It also entered temporal data for the times of day, week and year that each event happened. This was hundreds of thousands of documents that would have taken years to key into a computer if a conventional database were used. Instead, high-speed scanners were used that read in reports at a rate of 200 double-sided pages per minute. A half dozen of these scanners completed the data entry in less than two months.

The operator then assigns colors to a variety of ranges of data. For instance, he first assigned red and blue to male and female pilots and then looked at the data projected on a map. What popped up were hundreds of mostly red (male) dots spread out over the entire US map. Not real helpful. Next he assigned a spread of colors to all the makes of aircraft – Cessna, Beechcraft, etc. Now all the dots changed to a rainbow of colors with no particular concentration of any given color in any given geographic area. Next he assigned colors to hours of the day – doing 12 hours at a time – midnight to noon and then noon to midnight. Now something interesting came up. The colors assigned to 6AM and 6PM (green), and the shades of green just before and after 6AM or 6PM, were dominant on the map. This meant that the majority of the accidents happened around dusk or dawn. Next the operator assigned colors to distances from the departing airport – red being within 5 miles, orange 5 to 10 miles…and so on, with blue being the longest (over 100 miles). Again, a surprise in the image. The map showed mostly red or blue with very few in between. When he refined the query so that red meant within 5 miles of either the departure or destination airport, almost the whole map was red.
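The color-assignment step the operator performed can be sketched as a simple binning function – each record’s distance falls into a range, and each range maps to a display color.  The distances and bin edges below are made up for illustration; they are not the FAA data:

```python
# The color-assignment step in miniature: bucket one field of each record
# into a range and map the range to a display color.  Bin edges and the
# sample distances are invented for illustration.

BINS = [(5, "red"), (10, "orange"), (50, "yellow"), (100, "green")]

def color_for(distance_miles):
    """Map distance from the departure airport to a display color."""
    for limit, color in BINS:
        if distance_miles <= limit:
            return color
    return "blue"            # beyond 100 miles

accidents = [3.2, 7.5, 120.0, 4.1, 0.9]   # distances in miles, one per record
print([color_for(d) for d in accidents])
# ['red', 'orange', 'blue', 'red', 'red']
```

Plotted as dots on a map, a mostly-red result is visible at a glance – the operator reads the pattern from color density without ever looking at the underlying numbers.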

Using these simple techniques, an operator was able to determine in a matter of a few hours that 87% of all private aircraft accidents happen within 5 miles of the takeoff or landing runway. 73% happen in the twilight hours of dawn or dusk. 77% happen with the landing gear lowered or with the landing lights on and 61% of the pilots reported being confused by ground lights. This gave the FAA information they needed to improve approach lighting and navigation aids in the terminal control areas (TCAs) of private aircraft airports.

This highly complex data analysis was accomplished by a programmer – not a pilot or an FAA investigator – and incorporated hundreds of thousands of reports that were collated into useful data in a matter of hours. This had never been done before.

As new and innovative as this was, it was a very simple application that used a limited number of visual parameters at a time. But STARLIGHT is capable of so much more. It can assign things like the direction and length of a vector, the color of a line or its tip, and curvature, width and taper to various elements of a search. It can give one shape to one result and a different shape to another. This gives significance to “seeing” a cube versus a sphere, or to seeing rounded corners on a flat surface instead of square corners on an egg-shaped surface. Everything visual can have meaning, but what is important is to spot anomalies – things that are different – and nothing is faster at doing that than a visual image.

Having 80+ variables at a time that can be interlaced with geospatial and temporal (historical) parameters allows the program to search an incredible amount of data. Since the operator is looking for trends, anomalies and outliers, the visual representation of the data is ideal for spotting them without the operator actually scanning the data itself. Because the operator is seeing an image that is devoid of the details of numbers or words, he can easily spot some aspect of the image that warrants a closer look.

In each of these trial queries, the operator can, using his gloved hand to point to any given dot, line or object, call up the original source of the information in the form of a scanned image of the accident report or reference source data. He can also touch virtual screen elements to bring out other data or query elements. For instance, he can merge two queries to see how many accidents near airports (red dots) had more than two passengers or were single engine aircraft, etc. Someone looking on would see a guy with weird glasses waving his hand in the air but in the eyes of the operator, he is pressing buttons, rotating knobs and selecting colors and shapes to alter his room-filling graphic 3-D view of the data.

In its use at NSA, they add one other interesting capability: pattern recognition. It can automatically find patterns in the data that would be impossible for any real person to find by looking at the tons of data. For instance, they put in a long list of words that are linked to risk assessments – such as plutonium, bomb, kill, jihad, etc. Then they let it search for patterns. Suppose there are dozens of phone calls being made to coordinate an attack, but the callers are from all over the US. Every caller is calling someone different, so no one number or caller can be linked to a lot of risk words. STARLIGHT can collate these calls and find the common linkage between them, and then it can track the calls, callers and discussions in all other media forms. If the callers are using code words, it can find those words and track them. It can even find words that are not used in a normal context, such as referring to an “orange blossom” in an unusual manner – a phrase that was once used to describe a nuclear bomb.
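A toy version of that linkage idea: no two callers share a number, but their calls can still be grouped by the risk words their transcripts have in common.  The watch list and transcripts below are invented, and a real system would use far more than simple substring matching:

```python
from collections import defaultdict

# Toy sketch of linkage by shared vocabulary: no caller pair shares a phone
# number, yet calls can be grouped by the watch-list words they contain.
# The watch list and transcripts are invented for illustration.

RISK_WORDS = {"package", "delivery", "orange blossom"}

calls = {
    "555-0101": "the package arrives tuesday",
    "555-0202": "confirm the delivery window",
    "555-0303": "weather looks fine for golf",
    "555-0404": "the orange blossom is ready",
}

# Group caller numbers under each risk word their transcript contains.
by_word = defaultdict(set)
for number, transcript in calls.items():
    for word in RISK_WORDS:
        if word in transcript:
            by_word[word].add(number)

flagged = set().union(*by_word.values())
print(sorted(flagged))      # callers linked through the watch list
```

Three unrelated numbers end up linked through the shared word list while the innocuous call drops out – the same collation step, at scale, is what lets disconnected callers be treated as one group.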

Now imagine the list of risk words and phrases to be hundreds of thousands of words long. It includes phrases and code words and words used in other languages. It can include consideration for the source or destination of the call – from public phones or unregistered cell phones. It can link the call to a geographic location within a few feet and then track the caller in all subsequent calls. It can use voice print technology to match calls made on different devices (radio, CB, cell phone, landline, VOIP, etc.) by the same people. This is still just a sample of the possibilities.

STARLIGHT was the first generation and was only as good as the data that was fed into it through scanned documents and other databases of information. A later version, code named Quasar, was created that used advanced data mining and ERP (enterprise resource planning) system architecture that integrated the direct feed from legacy system information gathering resources as well as newer technologies.

(ERP is a special mix of hardware and software that allows a free flow of data between different kinds of machines and different kinds of software and data formats. For instance, the massive COBOL databases at the IRS, loaded on older-model IBM mainframe computers, can now exchange data easily with NSA Cray computers using the latest and most advanced languages and database designs. ERP has also resolved the problem that each agency has a different encryption and data security format and process. ERP does not change any of the existing systems, but it makes them all work smoothly and efficiently together.)

For instance, the old STARLIGHT system had to feed recordings of phone calls into a speech-to-text processor and then the text data that was created was fed into STARLIGHT. In the Quasar system, the voice monitoring equipment (radios, cell phones, landlines) is fed directly into Quasar as is the direct feed of emails, telegrams, text messages, Internet traffic, etc.  Quasar was also linked using ERP to existing legacy systems in multiple agencies – FBI, CIA, DIA, IRS, and dozens of other federal and state agencies.

So does the government have the ability to track you? Absolutely! Are they doing so? Absolutely! But wait, there’s more!

Above, I said that Quasar was a “later version”. It’s not the latest version. Thanks to the Patriot Act and Presidential Orders on warrantless searches and the ability to hack into any database, NSA now can do so much more. This newer system is miles ahead of the relatively well known Echelon program of information gathering (which was dead even before it became widely known). It is also beyond another older program called Total Information Awareness (TIA). TIA was compromised by numerous leaks and died because the technology was advancing so fast.

The newest capability is made possible by the new bank of NSA Cray computers and memory storage that are said to make Google’s entire system look like an abacus.  NSA combined that with the latest integration (ERP) software and the latest pattern recognition and visual data representation systems.  Added to all of the Internet and phone monitoring and screening are two more additions into a new program called “Kontur”. Kontur is the Danish word for Profile. You will see why in a moment.

Kontur adds geospatial monitoring of every person’s location to their database. Since 2005, every cell phone broadcasts its GPS location at the beginning of every transmission as well as at regular intervals, even when you are not using it to make a call. This was mandated by the Feds supposedly to assist in 911 emergency calls, but the real motive was to be able to track people’s locations at all times. For the few people still using older model cell phones, they employ “tower tracking,” which uses the relative signal strength and timing of the cell phone signal reaching each of several cell phone towers to pinpoint a person within a few feet.  Of course, landlines are easy to locate, as are all internet connections.
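Tower tracking boils down to trilateration: turn signal timing into distances from known tower positions, then intersect the circles. Here is a minimal sketch; the tower coordinates and the phone’s position are invented, and a real system would estimate the distances from signal timing rather than computing them from a known point:

```python
import math

# Three towers on a flat plane (positions invented, in meters).
towers = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]
true_pos = (300.0, 400.0)
# Stand-in for timing-derived ranges: distance from each tower.
dists = [math.dist(t, true_pos) for t in towers]

def trilaterate(towers, dists):
    """Solve for (x, y) by linearizing the three circle equations."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = dists
    # Subtract the first circle equation from the other two to get
    # two linear equations a*x + b*y = c.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

x, y = trilaterate(towers, dists)
print(round(x), round(y))  # 300 400
```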

A holdover from the Quasar program was the tracking of commercial data which included every purchase made by credit cards or any purchase where a customer discount card is used – like at grocery stores. This not only gives the Feds an idea of a person’s lifestyle and income but by recording what they buy, they can infer other behaviors. When you combine cell phone and purchase tracking with the ability to track other forms of transactions – like banking, doctors, insurance, police and public records, there are relatively few gaps in what they know about you.

Kontur also mixes in something called geofencing that allows the government to create digital virtual fences around anything they want. Then when anyone crosses this virtual fence, they can be tracked. For instance, there is a virtual fence around every government building in Washington DC. Using predictive automated behavior monitoring and cohesion assessment software combined with location monitoring, geofencing and sophisticated social behavior modeling, pattern mining and inference, they are able to recognize patterns of people’s movements and actions as threatening. Several would-be shooters and bombers have been stopped using this equipment.  You don’t hear about them because they do not want to explain what alerted them to the bad guys’ presence.
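At its simplest, a geofence is just a distance test against a stored boundary. A minimal sketch, assuming a circular fence and invented coordinates:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_fence(pos, fence_center, radius_m):
    """True if the tracked position is within the virtual fence."""
    return haversine_m(*pos, *fence_center) <= radius_m

# Hypothetical 200 m fence around a government building (coordinates invented).
fence = (38.8977, -77.0365)
print(inside_fence((38.8978, -77.0366), fence, 200))  # True  -> flag and track
print(inside_fence((38.9100, -77.0365), fence, 200))  # False -> outside fence
```

Real deployments would use polygons rather than circles and stream positions continuously, but the crossing test is the heart of it.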

To talk about the “Profile” aspect of Kontur, we must first talk about why and how it is possible, because it became possible only when the Feds were able to create very, very large databases of information and still make effective use of that data. It took NSA 35 years of computer use to get to the point of using a terabyte of data. That was back in 1990 using ferrite core memory. It took 10 more years to get to a petabyte of storage – that was in early 2001 using 14-inch videodisks and RAID banks of hard drives. It took four more years to create and make use of an exabyte of storage. With the advent of quantum memory using gradient echo and EIT (electromagnetically induced transparency), the NSA computers now have the capacity to store and rapidly search a yottabyte of data and expect to be able to raise that to 1,000 yottabytes of data within two years.  A yottabyte is 10^24 bytes – 1,000,000,000,000,000 gigabytes – roughly 2 to the 80th power bytes.
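The unit arithmetic is easy to check:

```python
# Checking the storage figures: a decimal yottabyte versus 2**80 bytes.
GB = 10**9
yottabyte = 10**24                 # decimal yottabyte, in bytes

print(yottabyte // GB)             # 1,000,000,000,000,000 gigabytes
print(2**80)                       # the binary analogue (a yobibyte), in bytes
print(round(2**80 / yottabyte, 2)) # 1.21 -- 2**80 bytes is ~21% larger
```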

This is enough storage to store every book that has ever been written in all of history…..a thousand times over.  It is enough storage to record every word of every conversation by every person on earth for a period of 10 years.  It can record, discover, compute and analyze a person’s life from birth to death in less than 12 seconds and repeat that for 200,000 people at the same time.

To search this much data, they use a bank of 16 Cray XT Jaguar computers that do nothing but read and write to and from the QMEM – quantum memory. The look-ahead and read-ahead capabilities are possible because of the massively parallel processing of a bank of 24 other Crays that gives an effective speed of about 270 petaflops. Speeds are increasing at NSA at a rate of about 1 petaflop every two to four weeks. This kind of speed is necessary for things like pattern recognition and making use of the massive profile database of Kontur.

In late 2006, it was decided that NSA and the rest of the intelligence and right wing government agencies would stop this idea of real-time monitoring and begin developing a historical record of what everyone does. Being able to search historical data was seen as essential for back-tracking a person’s movements to find out what he has been doing and whom he has been seeing or talking with. This was so that no one would ever again accuse the government or the intelligence community of not “connecting the dots”.

But that means what EVERYONE does! As you have seen from the above description, they already can track your movements and all your commercial activities as well as what you say on phones or emails, what you buy and what you watch on TV or listen to on the radio. The difference now is that they save this data in a profile about you. All of that and more.

Using geofencing, they have marked out millions of locations around the world, including obvious things like stores that sell pornography, guns, chemicals or lab equipment. Geofenced locations also include churches and organizations like Greenpeace and Amnesty International. They have moving geofences around people they are tracking – terrorists, but also political opponents, left wing radio and TV personalities and leaders of social movements and churches. If you enter their personal space – close enough to talk – then you are flagged, and then you are geofenced and tracked yourself.

If your income level is low and you travel to the rich side of town, you are flagged. If you are rich and travel to the poor side of town, you are flagged. If you buy a gun or ammo and cross the wrong geofence, you will be followed. The pattern recognition of Kontur might match something you said in an email with something you bought and somewhere you drove in your car to determine you are a threat.

Kontur is watching and recording your entire life. There is only one limitation to the system right now. The availability of soldiers or “men in black” to follow up on people that have been flagged is limited, so they are prioritizing whom they act upon. You are still flagged and recorded, but they are only acting on the ones that are judged to be a serious threat now.  It is only a matter of time before they can find a way to reach out to anyone they want and curb or destroy them. It might come in the form of a government mandated electronic tag that is inserted under the skin or implanted at birth. They have been testing these devices on animals under the guise of tracking and identification of lost pets. They have tried twice to introduce them to everyone in the military or in prisons. They have also tried to justify putting them into kids for “safety”. They are still pushing them for use in medical monitoring. Perhaps this will take the form of a nanobot – so small that you won’t even know you have been “tagged”.

These tags need not be complex electronic devices.  Every merchant knows that RFID tags are so cheap that they are now installed at the manufacturing plant for less than 1 cent per item.  They consist of a special coil of wire or foil cut to a very specific length and folded into a special shape.  It can be activated and deactivated remotely.  This RFID tag is then scanned by an RF signal.  If it is active and you have taken it out of the store, it sounds an alarm.  Slightly more sophisticated RFID tags can be scanned to reveal a variety of environmental, location, time and condition data.  All of this information is gathered by a device that has no power source other than the scanning beam from the tag reader.  A 1 cubic millimeter tag – 1/10th the size of a TicTac – can collect and relay a huge amount of data, will have a nearly indefinite operating life and can be made to lodge in the body so you would never know it.

If they are successful in getting the population to accept these devices and then they determine you are a risk, they simply deactivate you by remotely popping open a poison capsule using a radio signal. Such a device might remain totally passive in a person who is not a threat, but it could be made lethal, or programmed to inhibit the motor-neuron system or otherwise disable a person deemed to be high-risk.

Certainly this sounds like paranoia and you probably say to yourself, that can never happen in a free society.  If you think that, you have just not been paying attention.  Almost everything in this article can be easily researched online.  The code names of Quasar and Kontur are not public knowledge yet but if you look up the design parameters I have described, you will see that they are in common usage by NSA and others.  There is nothing in this article that cannot be verified by independent sources.

As I said in the beginning of this article, if the technology exists and is being used by the government or corporate America and it is public knowledge, then you can bet your last dollar that there is some other technology that is much more effective that is NOT public knowledge that is being used.

Also, you can bet that the public posture of “protecting privacy” and “civil rights” places absolutely no limitations or restrictions on the government if they want to do something. The Bush/Cheney assault on our rights is a recent example but is by no means rare or unusual.  If they want the information, laws against them gathering it have no effect.  They will claim National Security or classified necessity, or simply do it illegally, and if they get caught, they will deny it.

Here are just a few web links that might convince you that this is worth taking seriously.

Another incredible Rifle and “Bullet”

The folks at the Advanced Weapons Research Center (AWRC) at the Aberdeen Test Center have done it again.  I was recently asked to help design a scope for a new rifle and bullet combination that is unlike anything ever seen before.

Imagine this:

A normal size long barrel sniper rifle shooting a common cartridge size bullet that leaves the barrel at an incredible 15,437 feet per second (over 10,000 MPH) but has no more recoil than an M-14.  The bullet has the flattest trajectory of any weapon ever made and is lethal out to a range of more than 12 miles.  The bullet is also almost completely unaffected by crosswinds. Despite these incredible speeds and ranges, the bullet is rock solid stable over its entire flight, including while passing through transonic speeds.
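The quoted muzzle velocity converts as follows (figures from the text; the Mach estimate assumes sea-level conditions):

```python
# Unit check on the quoted muzzle velocity.
fps = 15_437
mph = fps * 3600 / 5280   # feet/second -> miles/hour
mps = fps * 0.3048        # feet/second -> meters/second

print(round(mph))  # 10525 -- consistent with "over 10,000 MPH"
print(round(mps))  # 4705 m/s, roughly Mach 14 at sea level
```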

As you might imagine, using some kind of enhanced aiming device is essential to remove the human errors from the equation, but much of the credit for a stable trajectory comes from the gun and bullet design.  To begin with, it is a smooth bore rifle – which virtually eliminates errors like spindrift, the Magnus effect and the Poisson effect.

The barrel is also oddly shaped.  It uses a specially designed De Laval nozzle about halfway down the barrel.  This constricts the barrel and then expands it.  The result is that the gases from the gunpowder create an intense high pressure point that accelerates the bullet by more than 40 times.

The bullet is a sabot round but unlike any you have seen before.  It has two stages.  The actual penetrator is a needle about as long and thick as a pencil lead and about half the weight of a dime (about 1.5 grams).  The rear end of it is slightly expanded into a grooved bulb that acts as the receiver for the center of pressure, and the grooves form subtle stabilizing fins that also impart a stabilizing spin to the bullet.  Toward the pointed end is the center of gravity, like an arrow, so that it remains stable even at very high velocities. The bullet is essentially a dart – a specially tooled hardened steel spear with integrated fins.

The base sabot pad receives the casing powder blast and applies the center of pressure to the rear of the base-pad of the first stage sabot.  The bullet accelerates down the barrel until it reaches the De Laval nozzle.  There, the constricting barrel disintegrates the first sabot stage and passes the much smaller second stage spindle type sabot through the De Laval Nozzle where it is accelerated before leaving the barrel.   The dart-bullet is moving at about 15,400 fps at the muzzle.  The grooved bulb end gives it a spin that is just enough to maintain stability without creating the usual spin errors.  It also keeps the center of pressure and force directly behind the center of gravity – keeping it stable.  This design results in a Ballistic Coefficient of about 39.7.

You might recognize this design by its similarity to other sabot rounds used for tank armor penetrators or sabot flechettes but this is different.  It optimizes all aspects of the design to achieve the highest possible velocity.   The velocity boost provided by the De Laval nozzle in the barrel plus the specially selected gun powder pushing a 10 gram bullet down a 37” barrel gives the dart a super accurate flat trajectory and the longest range for any small arms weapon.

Of course, such a small projectile at such range and speed introduces obvious problems.  The first is the lethality of such a small penetrator.  That is solved by the unique design of the dart.  The tip of the dart is shaped like a long tapered needle; however, it hides a hollow cavity (filled with sodium and phosphorus).  The thin metal walls of the tip are aerodynamically shaped to withstand the high velocity wind forces but are very fragile to forces from other directions – similar to an egg, which shows extraordinary strength when squeezed on the ends but easily crumbles when squeezed from the sides.  The hollow cavity dart is specifically designed to collapse and peel outward when it penetrates even the slightest resistance.  When that happens, it exposes the small tube of sodium and phosphorus that reacts with air and/or any liquid to rapidly cause the disintegration of the dart.

The dart design is even more complex than just being a hollow-cavity spear.  Beginning about two inches back from the hardened steel tip, the shaft and tail of the dart are made in 5 thin layers from the outside to the middle and in four longitudinal sections.  These layers are made of high-tension spring steel that is bound by a softer and more pliable metallic-bonding agent.   At the end of the dart, the slight bulge that makes up the flight stabilizing tail is hardened to bind and hold all these layers.  Just in front of this bulge is the hollow cavity containing the sodium and phosphorus.  The net effect of this design is that upon encountering any resistance, the dart will peel from the front to the back like a banana and the peeled sections will immediately curl and flare outward from the dart.  This takes the streamlined dart from a fast moving shaft to a 5 or 6 inch diameter ball of razor sharp hardened steel coils that can expand and expend all of its energy in .0005 seconds and over a distance of less than 9 inches.

During testing, it was found that despite hitting a bullet resistant vest, the dart took nearly the same distance and time to create its deformed ball meaning that it penetrated the vest before the deformation started.  The effect on animal test targets was incredible.  A cow was shot from 4,000 yards and the point of entry was 2.7 mm in diameter and the dart did not exit the animal but upon autopsy, the cow was found to have a cavity of nearly 16 inches in diameter that was effectively mush.  When fired at a human sized model made of ballistic jell, the exit wound was 9 inches in diameter.

The next problem was the speed of the bullet.  This speed creates a projectile that remains supersonic out to a range of more than 12,000 meters (7+ miles).  It can travel that distance in less than 4 seconds.  Such speeds resolve a number of problems but introduce others.  Air friction heat was one that had to be dealt with.  The dart can heat up to 160 degrees C during flight, but this was used to advantage.  The bonding agent used in the sectional makeup of the dart was designed to be hard enough to withstand the heat and blast of firing the weapon but fragile enough to expand when it hits a target.  The air friction heat partially softens this bonding agent and the job is completed by the initial penetration of the target – explaining why it can penetrate bullet-resistant vests but still expand in soft tissue.  The sodium and phosphorus are added just to ensure that the maximum energy is dissipated in the target.
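The quoted range and time figures imply the following averages (this is only arithmetic on the numbers above, not a ballistic model):

```python
# Sanity arithmetic on the quoted flight figures.
muzzle_mps = 15_437 * 0.3048      # ~4705 m/s at the muzzle
range_m = 12_000
flight_s = 4.0

avg_mps = range_m / flight_s
print(round(avg_mps))             # 3000 m/s average over the flight
print(round(avg_mps / 343, 1))    # ~8.7 -- far above Mach 1 even on average
```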

The final problem is aiming a weapon that can accurately shoot farther than the shooter can see.  This was solved by integrating this weapon into the Digital Rifle System (DRS) described in an earlier report on this blog.  The DRS-192B, which is now deployed, uses the MDR-192B rifle as its basic weapon component.  In the case of this sabot-firing rifle, the same basic rifle is used but has been modified to handle this sabot round and uses a modified barrel.  These relatively minor modifications can be made in the field making the MDR192S out of the MDR192B.    In both cases, it uses the basic DRS192 system to coordinate aim point using advanced video camera sights (VCS), AIR (autonomous information recon) devices and, of course, the central processing and imaging computer.

The sabot firing MDR192S, integrated into the DRS192 creates a weapon system that can actually shoot over the horizon of the shooter.  The computers can aim the rifle so accurately that during the testing at Aberdeen and in Colorado, we were able to deliver kill shots at targets at ranges of 11.2 and 12.7 miles.  Accuracy improved markedly at ranges of less than 9 miles to where a kill shot was made in 19 of 25 shots.  Refinements in the AIR’s and VCS’s should improve accuracy in the next model.

A few curious aspects of this weapon.  At the target end of the trajectory, the impact is almost completely silent.  It sounds like someone rapped their knuckles on the table.  The sound from the muzzle of the rifle arrives as much as 1 minute later and is often so weak that it is not associated with the bullet strike.
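The delayed muzzle report is consistent with the quoted maximum range:

```python
# Time for the muzzle sound to cover the longest quoted range.
range_m = 12.7 * 1609.34          # miles -> meters
speed_of_sound = 343.0            # m/s at sea level, roughly

print(round(range_m / speed_of_sound))  # ~60 seconds -- "as much as 1 minute"
```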

The enhanced DRS system allows the shooter to be within visual sight of the target while the weapon is located up to 12 miles away.  The use of high terrain for weapon placement while using a visual spotter makes for a combination that is nearly impossible to locate or defend against.

DARPA is already working on an explosive tipped dart that can be used against people, vehicles, aircraft and communications equipment.

Getting shot at by the US military is getting downright dangerous.

Invisible Eyes – The Army can see EVERYTHING

As an advisor to the Dept. of Defense (DoD) on issues of advanced technology, I have been called in to observe, test or evaluate a number of advanced weapons systems and other combat related new technology equipment. Let me tell you about the latest I investigated in Iraq and Afghanistan.

I was asked to evaluate the combat durability of a new multi-use sensor and communication system that can be deployed from an aircraft. I was flown to Baghlan and after a day’s rest, I was invited on a flight in a C-130. We flew north east over the mountains near Aliabad and approached an outpost base near Khanabad. Just before we landed, we were vectored to a large flat area just north west of the base. The ramp on the C-130 was lowered and we all put on harnesses. A man in combat fatigues carried a large canvas bag to the rear of the ramp and pulled out one of several devices from the bag. It looked like a small over-inflated inner-tube with two silver colored cylinders on top. It had several visible wires and smaller bumps and boxes in the hub and around the cylinders. It looked like it was perhaps 16 to 18 inches in diameter and perhaps 6 inches thick. The man pulled a tab which extended what looked like a collapsible antenna and tossed the device out the ramp. He then took others out and did the same as we flew in a large circle – perhaps 20 miles in diameter – over this flat plain near the camp – tossing out 12 of these devices and then a final one that looked different. We then landed at the base.

I was taken to a room where they gave me a slide show about this device. It was called Solar Eye or SE for short. The problem they were addressing is the collection of intelligence on troop movements over a protracted period of time, over a large geographic area. The time periods involved might be weeks or months and the areas involved might be 10 to 25 square miles. It is not cost effective to keep flying aircraft over these areas and even if we did, that covers only the instant that the plane is overhead. Enemy troops can easily hide until the plane or drone is gone and then come out and move again. Even using very small drones gives only a day or two at most of coverage. The vast areas of Afghanistan demanded some other solution.

Stationary transmitters might work but the high mountains and deep valleys make reception very difficult unless a SATCOM dish is used and that is so large that it is easily spotted and destroyed. What was needed was a surveillance system that could monitor movements using visual, RF, infrared and vibration sensors. It had to be able to cover a large area which often meant that it had to be able to look down behind ridge lines and into gullies. It had to be able to operate for weeks or months but not cost much and not provide the enemy any useful parts when and if they found it. This was a tall order but those guys at NRL figured it out. Part of why I was called in is because I worked at NRL and a few of the guys there knew me.

After lunch, we got back to the lecture and I was finally told what this device is. When the device is tossed out, a tiny drogue chute keeps it stable and reduces its speed enough so it can survive the fall. The extended antenna helps to make it land on its bottom or on its side. If it lands on its side, it has a righting mechanism that is amazing. The teacher demonstrated. He dropped an SE on the floor and then stepped back. What I thought was a single vertical antenna was actually made up of several rods that began to bend and expand outward from a single rod left in the center. These other rods began to look like the ribs on an umbrella as they slowly peeled back and bent outward. The effect of these rods was to push the SE upright so that the one center rod was pointing straight up.

When I asked how it did that, I was told it uses memory wire – a special kind of wire that bends to a predetermined shape when it is heated, in this case by an internal battery. After the SE was upright, the wires returned to being straight and aligned around the center vertical rod.

“OK, so the device can right itself – now what?” I said. The instructor referred me back to the slide show on the computer screen. I was shown an animation of what looked like a funny looking balloon expanding from the center of the SE and inflating with a gas that made it rise into the air. He was pointing to the two cylinders and the inflatable inner tube I had seen earlier. The balloon rises into the air and the animation made it appear that it rose very high into the air – thousands of feet high.

The funny looking balloon was shaped like a cartoon airplane with wings and a tail, with some odd panels on the top of the wings and tail. I finally said I was tired of being spoon-fed these dog and pony shows and wanted to get to the beef of the device. They all smiled and said, OK, here is how it works.

The SE lands and rights itself, and then those rods which were used to right it are rotated and sent downward through the center of the SE into the ground. They have a small amount of threaded pitch on them and, when rotated, they screw into the soil. As they screw into the hard ground, they are also bent again by an electrical current so that they curve outward in the soil as they penetrate. The end result looks like someone opened an umbrella underground beneath the SE. Since these rods are nearly 3 feet long, they anchor the SE to the ground very firmly.

The cylinders then inflate a special balloon made of a very special material – a Mylar coated with a material that makes it act as a solar panel, creating electricity. The special shape of the balloon not only holds it facing into the wind but also keeps it from blowing too far downwind. Sort of like the way a sailboat can sail into the wind, this balloon can resist the upper level winds by keeping the tether as vertical as possible. The balloon rises to between 5,000 and 15,000 feet – depending on the terrain and the kind of surveillance they want to do. It is held by a very special tether.

I was handed a tangled wad of what looked like the thin fiberglass threads that make up the cloth used for fiberglass boats. It was so lightweight that I could barely feel it. I had a wad about the size of a softball in my hand and the instructor told me I had nearly 2,000 feet in my hand. This tether is made from a combination of carbon fibers and specially made ceramics and it is shaped like an over-inflated triangle. What is really amazing is that it is less than one centimeter wide and made with an unusual color that made it shimmer at times and at other times it seemed to just disappear. The material was actually very complex as I was to learn.
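The claim of 2,000 feet in a softball-sized wad implies a ribbon only tens of microns thick. This is a rough bound, assuming a regulation softball diameter and perfect packing:

```python
import math

# Bounding the ribbon thickness implied by "2,000 feet in a softball-size wad".
length_cm = 2000 * 30.48                             # 2,000 ft in cm
width_cm = 1.0                                       # "less than one centimeter wide"
softball_vol = (4 / 3) * math.pi * (9.7 / 2) ** 3    # ~9.7 cm diameter ball, cm^3

max_thickness_um = softball_vol / (length_cm * width_cm) * 1e4
print(round(max_thickness_um))  # ~78 microns at most -- thinner than a human hair
```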

The unique shape and material of the tether uses the qualities of the carbon fiber coating and metallic ceramic core to provide some unusual electromagnetic qualities. The impedance of the tether as seen by the RF signal in it is a function of the time-phased signal modulation. In other words, the modulation of the signal can cause the tether to change its antenna tuning aspects to enhance or attenuate the RF signal being sent or received. Using the central network controller, all of the SEs can be configured to act as alternating transmitters to other SEs and receivers from other SEs. This antenna tuning also comes in handy because every SE base unit also can function as a signal intelligence (SIGINT) receiver – collecting any kind of radiated signal from VLF to SHF. Because the antenna can be tuned to exact signal wavelengths and can simulate any size antenna at any point along its entire length, it can detect even very weak signals. The networking analysis system monitor and processor (SMP) records these signals and sends them via satellite for analysis when instructed to do so by the home central command.

The system combines the unique properties of this tether line with three other technologies. The first is an ultra wide-band (UWB) high frequency, low power and exceptionally long range transceiver that uses the UWB in a well controlled time-phase pulsed system that makes the multiple tethered lines act as a fixed linear array despite their movement and vertical nature. This is sometimes called WiMax using a standard called 802.16 but in this case, the tether functions as a distributed antenna system (DAS) maximizing the passive re-radiation capability of WiMax and making maximum use of the dynamic burst algorithm modulation. This means that when the network controlling system monitor determines that it is an optimum time for a specific SE to transmit, it uses a robust burst mode that enhances the power per bit transmitted while maintaining an optimum signal strength to noise ratio. By using this burst mode method in a smart network deployment topology, the SE overcomes the limitations of WiMax by providing both high average bit rates and long distance transmissions – allowing the SEs to be spaced as much as 100 miles apart. The SE tethers function as both a horizontal and vertical adaptive array antenna in which MIMO is used in combination with a method called Time Delayed Matrix-Pencil method (TDMP) to distinguish direct from reflected signals and to quantify phase shifts between different SE tethers connected to the system monitor. This creates a powerful and highly accurate Direction of Arrival (DOA) capability in very high resolution from nano-scale signal reflections.
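The direction-of-arrival idea can be illustrated with the simplest possible case: two antenna elements and the phase difference a plane wave produces between them. This sketch uses an invented frequency and angle and ignores noise, reflections and the multi-tether matrix methods described above:

```python
import math

# Two-element phase-difference DOA: a toy version of the array processing
# described above (frequency, spacing and arrival angle are invented).
freq_hz = 2.4e9
wavelength = 3e8 / freq_hz            # ~0.125 m
d = wavelength / 2                    # half-wavelength element spacing
true_angle = math.radians(30)

# Phase difference the incoming plane wave produces between the two elements.
# In a real array this would be the measured quantity.
dphi = 2 * math.pi * d * math.sin(true_angle) / wavelength

# Invert the relation to estimate the direction of arrival.
est_deg = math.degrees(math.asin(dphi * wavelength / (2 * math.pi * d)))
print(round(est_deg, 1))  # 30.0
```

Methods like MIMO plus TDMP generalize this to many elements and to separating direct from reflected paths, but the phase-to-angle relation is the underlying physics.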

Combining the precision DOA capability with an equally precise range capability is accomplished using the time-phased pulse which creates powerful signals that are progressively sent up the tether and then systematically cancelled out at certain distances along the tether using destructive echo resonance pulses. The effect is to move the emitted signal from the bottom of the tether along the tether as if it were a much shorter antenna but was traveling up and down the height of the tether. Since effective range is directly proportional to the height of the transmission, this has the effect of coordinating the emitted signal to distance. Using the range data along with the DOA, every detail of the surrounding topography can be recreated in the computer’s imaging monitor and the processor can accurately detect any movement or unusual objects in the field of coverage.

The second adapted technology is loosely based on a design sometimes referred to as the Leaky Coax or ported coax detector. The unique metallic Mylar and conductive ceramics in the tether give the electrical effect of being a large diameter conductor – making insertion losses almost zero – while allowing for an optimum pattern of non-uniformly spaced slots arranged in a periodic pattern that maximizes and enhances the radiating mode of the simulated leaky coax. The idea is that the emitted signal from one SE is coupled to the receiver in adjacent SEs in a manner that can be nulled out unless changes are made in the area in which the emitted signal is projected. The advantage of using the ported coax coupling method is that the signal needed for this detection process is very low power partly because the system makes use of the re-radiation of the signal in sort of an iterative damper wave that maximizes the detection of any changes in the received direct and reflected signals. In simple terms, the system can detect movement over a very large area by detecting changes in a moving temporal reference signal if anything moves in the covered area. In combination with the ultra wide band, spread spectrum transceiver, this detection method can reach out significant distances with a high degree of accuracy and resolution.
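The “null out the baseline” detection scheme reduces to comparing received samples against a recorded quiet-scene reference. A toy sketch with invented signal values:

```python
# Record a reference received signal for a quiet scene, then flag any later
# frame whose deviation from the reference exceeds a threshold.
baseline = [0.50, 0.52, 0.49, 0.51, 0.50]

def motion_detected(frame, baseline, threshold=0.05):
    """True if any sample deviates from the quiet-scene reference."""
    residual = max(abs(a - b) for a, b in zip(frame, baseline))
    return residual > threshold

quiet  = [0.51, 0.52, 0.48, 0.50, 0.51]   # static scene: nulls out
moving = [0.51, 0.60, 0.48, 0.50, 0.51]   # a reflector moved in the field
print(motion_detected(quiet, baseline))    # False
print(motion_detected(moving, baseline))   # True
```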

The third adapted technology is loosely based on magnetic resonance imaging (MRI). MRI’s are used to detect non-metallic and soft tissue in the body by using a method that blankets the subject in a time-phased magnetic field and then looks for minute timed changes to reflections of that magnetic field. In the case of the SE, the magnetic field is the WiMax, ultra wideband time-phased signal emitted by the tethers. It can blanket a large area with an electromagnetic field that senses changes in the signal reflection, strength and phase so that it can detect both metal and non-metal objects, including humans.

Variations on these three technologies are combined with a networking analysis system monitor and processor (SMP) that can receive signals and control the emissions from multiple SEs and process them into intelligence data. The system uses a combination of wires and lasers to speed communications to and from the SMP and the SMP can use any one or all of the SEs for selective analysis of specific geographic or electromagnetic signals.

Finally there is the balloon. It rises up above the clouds and sits in the bright sun. Its surface performs several functions. The outer layer acts sort of like the reverse of automatic dimming sunglasses. That is, it turns a pale blue under bright direct sunlight but gets darker and darker as the light dims, so that by the time the sun is down completely, the balloon is almost black. Although moonlight does cause it to brighten slightly in color, the moonlight is so directional that it only affects the top, and most of the bottom half remains black. During the day, the balloon is one to three miles up and is almost impossible to see without binoculars and knowing exactly where to look. During the night, the only way to know it is there is to see the stars that it blocks, but at such long distances it only blocks a very few stars at a time, so again it is nearly impossible to spot. Since the tether is also nearly invisible, you have to be standing right next to the SE to be able to see any of it.

Just under this outer coating is a layer of flexible solar sensitive material that acts as a giant solar panel. It produces about 25 watts of power at peak performance but the SE system uses only about half that so the rest charges a Lithium-Cobalt Ion battery in the SE base unit. This is more than enough to power the system at night with enough left over to cover several cloudy days.
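As a rough sanity check on the power budget just described, here is a minimal sketch, assuming a continuous draw of about half the 25-watt peak and a hypothetical 1 kWh battery pack (the actual capacity of the SE's Lithium-Cobalt pack isn't stated):

```python
# Back-of-the-envelope battery autonomy for the SE, using the figures
# quoted above. The battery capacity is an assumed, illustrative value.

SYSTEM_LOAD_W = 12.5    # continuous draw: "about half" of the 25 W peak solar
BATTERY_WH = 1000.0     # hypothetical 1 kWh Lithium-Cobalt pack

autonomy_h = BATTERY_WH / SYSTEM_LOAD_W   # hours the battery alone can run the SE
autonomy_days = autonomy_h / 24

print(f"Autonomy on battery alone: {autonomy_h:.0f} h (~{autonomy_days:.1f} days)")
# → Autonomy on battery alone: 80 h (~3.3 days)
```

Under those assumptions the battery alone covers roughly three days, which at least is consistent with a reserve of "several cloudy days."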

The bottom half of the balloon is coated with a reflective Mylar facing the inside of the balloon, while the upper half does not have this coating. This creates a reflective collection surface for RF signals being sent to and from satellites and high-flying planes. Inside the balloon, antenna elements sit at the focus of this semi-parabolic reflector, which is several feet wide – making it easy to send and receive signals at very low energy levels. The SHF signals being sent are brought to the balloon’s internal antenna by superimposing them on top of the UWB signals on the carbon fiber Mylar surface of the tether. This is done with remarkable efficiency and hardly any signal loss.

Now that I had gotten the entire presentation, I was taken back into the C-130, where there was a small desk with a computer monitor and other equipment. The screen showed a map with the 12 SEs marked with red blinking dots. An internal GPS provided exact positions for the SE base units, the central networking SMP and the balloons. Beside each red dot was a blue dot off to one side showing the relative position of the balloon. Around each red dot was a light-blue circle that represented the coverage area – each light-blue circle overlapped two or more other coverage circles. Finally, there was a larger light-yellow circle around all of the SEs showing the coverage area of the central networking SMP that had been dropped near the center of the SEs. Altogether, these circles covered an area of about 100 square miles but were capable of covering over three times that area.

The operator then flipped a few switches and the screen changed over to what looked like an aerial monochrome view of a 3-D topographical map – showing the covered terrain in very good detail using shading and perspective to convey the 3-D effects. Then the circles on the screen began to pulsate and small lights appeared on the screen. These lights were different colors – red for metal objects, blue for animals or people and green for anything else that was moving or was inconsistent with the topography. It was programmed to flag anything that MIGHT be unusual, such as objects that had sharp corners, smooth rounded edges or a symmetrical geographic pattern. When the operator moved a circular cursor (trackball) over any of these objects, the data lines on the bottom of the screen would fill with all kinds of information: speed, direction, height above ground, past and projected paths, etc. Once an object was “hooked” by the trackball, it was given a bogie number and tracked continuously. The trackball also allowed for zooming in on the bogie to get increased detail. We spotted one blue dot, hooked it and then zoomed in on it. It was about 4 miles outside the SE perimeter, but we were able to zoom in until it looked like a grainy picture from a poor signal on an old TV set. Despite that, it was clear that the object was a goat – actually a ram, because we could see his horns. Considering that it was about 1 AM, and this was a goat 69 miles from where we were and 4 miles from the nearest SE, that resolution was incredible.

We zoomed out again and began a systematic screening of all of the red, blue and green dots on the screen. For objects the size of cars, we could reach more than 40 miles out from the ring of SEs. For people, we could reach out about 15 miles outside the ring, but inside it we could see down to rabbit-sized animals and could pick out individual electrical power poles and road signs.

I was shown a map of where the other Solar Eye arrays were located and their coverage areas. This is the primary reason for and basis of the upcoming Marjah campaign into Helmand Province – a huge flat plateau that is ideal for the invisible Solar Eyes.

I May Live Forever !

This story is unlike any other on this blog. For one thing, I am not a medical researcher and have had very little exposure or interest in the medical sciences, so I approached this subject from an engineer’s perspective and as an investigator who thinks outside the box. I did not and could not follow some of the intricate details of many of the hardcore medical research reports I read. I mostly jumped to the conclusions and stitched together the thoughts and ideas that made sense to me. In retelling it, I have quoted parts of the medical studies for those of you who understand them and then provided a translation based on my own interpretation. This story is also different because it may well change my life significantly: when all my research ended, I began experimenting on myself and, as a result, I may live forever. Here’s the whole story from the beginning….

Some time ago, I became interested in life extension and began reading about it in all its forms. I’m old and getting older, so this was something that directly applied to my life. My research began with the known and leading edge of the science of experimental and biomedical gerontology. I read about the actual biology of senescence – the process of aging and what it is that actually ages. I learned the role of telomeres in the cell cycle and how some cells are immortal (germ and keratinocyte stem cells). I also learned that the telomerase enzyme, present in every cell, could turn a mortal cell into an immortal cell by stopping the telomere clock (the basis of the Hayflick Limit) that puts a limit on the length of the cells’ telomeres. I learned that stem cells exist in many forms and types and have a wide range of capabilities and effects.

The above is a one-paragraph summary of a huge amount of study and research over a period of a year or more, and it included tons more detail about all aspects of the science and the current R&D taking place in labs all over the world. The aging populations of most of the world’s wealthier nations have increased the interest and the funding for such studies. One estimate is that more than 25 experimental biomedical gerontology research studies are concluded and published somewhere every month.

After a year of reading and study, my research into this subject reminded me of someone who spends two weeks climbing a mountain, then looks up and realizes he has only covered about 10% of its height. I could see that there was an enormous amount of material to study and that I would never really be able to learn it all….but I wanted to reach beyond what was being done and see what else I could find, so I changed direction in my studies.

I decided to think outside the box and try to jump directly into the areas of controversial medical research. To do this, I began by looking at history. I have been a student of history all my life. I love the subject. I also love to find that there is almost always some truth to ancient legends and myths. Everything from Noah’s Great flood to Atlantis to the Yeti have some basis in fact or in history that has been embellished over the years by countless retellings. If you look hard enough, you can find the tidbit of truth that started it all.

So I began to look for some connection in ancient myths and legends about immortality and life extension. To do this, I used my own concept search engine, called Plato, to help me gather, collate and sift thru all these old stories. As you might guess, there are thousands of such references. Stories of the Fountain of Youth, the source of life and the miracle of birth get all mixed up in thousands of references to various aspects of immortality and recovery of youth.

Plato is simply a search engine like Google or Bing but it uses a unique searching technique that I invented that combines a thesaurus search, advanced data mining techniques and pattern recognition with a powerful neural network that provides predictive modeling and computational EDA methods. These modules pass the search syntax back and forth in an iterative Monte Carlo statistical manner to quantify the relationship of the data it finds into applicable concepts without relying on simple key-word searches.

It doesn’t just run my key-word search syntax; it can search for a concept. A simple example is searching for “Houses in the Arctic”. It will use a thesaurus lookup to find all substitutes for House and Arctic. It will then extend its search into the cultural context in which House may take a different form in the Arctic, so that House will relate to igloo, tent, ice cave or snow burrow, and Arctic might include Antarctic, polar north, polar south or “above the Arctic Circle”. It will then collate the findings into a list of the most logical and best documented responses to my original query.
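The thesaurus-driven concept expansion described above can be sketched in a few lines. The synonym table here is an invented toy stand-in for Plato's real thesaurus module; a real system would pull substitutes from WordNet or a domain thesaurus:

```python
# Toy sketch of concept expansion: each query term is swapped for its
# concept-level substitutes, and every combination becomes a candidate
# search phrase. The THESAURUS entries are illustrative only.
from itertools import product

THESAURUS = {
    "house": ["house", "dwelling", "igloo", "tent", "ice cave", "snow burrow"],
    "arctic": ["arctic", "antarctic", "polar north", "polar south",
               "above the arctic circle"],
}

def expand_query(terms):
    """Expand each term to its substitutes and return every combination
    as a candidate search phrase."""
    substitutions = [THESAURUS.get(t.lower(), [t]) for t in terms]
    return [" ".join(combo) for combo in product(*substitutions)]

queries = expand_query(["house", "arctic"])
print(len(queries))    # 6 substitutes x 5 substitutes = 30 phrases
print(queries[:3])
```

A real concept engine would then rank the hits from these phrases rather than treating them equally, but the fan-out step is the core idea.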

Plato has been my research tool for more than two decades and I have been enhancing its capabilities almost continuously for most of that time as new methods, software and algorithms become available. I often use commercially available software matched to ERP-style data exchanges or simple macros to interlink and connect the applications with my own coded algorithms. I recently added a module that does a new kind of pattern searching called NORA – non-obvious relationship analysis – which finds links between facts and data that would otherwise be missed. NORA can find links to references using nicknames, alternate spellings, foreign word substitutes and seemingly unrelated data by using nonintuitive inference and disambiguation algorithms. NORA is actually just the next logical, incremental advance from my original simple Bayesian classifier, thru my newer neural-net pattern recognition and k-nearest neighbor (KNN) algorithms, to a more sophisticated combination of all of those methods that makes up NORA.
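To give a flavor of the KNN stage in that progression, here is a minimal sketch of labeling a candidate link by the vote of its nearest neighbors. The two features (name similarity, co-occurrence count) and the training points are invented for illustration:

```python
# Minimal k-nearest-neighbor vote: classify a candidate relationship
# by the majority label of its k closest labeled examples.
from collections import Counter
import math

def knn_vote(query, labeled_points, k=3):
    """Label a query point by majority vote of its k nearest
    (Euclidean-distance) neighbors."""
    nearest = sorted(labeled_points, key=lambda p: math.dist(query, p[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Each example: ((name_similarity, co_occurrence_count), label) -- toy data.
training = [
    ((0.9, 5), "related"), ((0.8, 3), "related"), ((0.7, 4), "related"),
    ((0.2, 0), "unrelated"), ((0.1, 1), "unrelated"), ((0.3, 0), "unrelated"),
]

print(knn_vote((0.85, 4), training))   # falls in the "related" cluster
print(knn_vote((0.15, 0), training))   # falls in the "unrelated" cluster
```

A NORA-style system would layer scores like this with Bayesian and neural-net stages; the KNN vote is just the simplest building block of that combination.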

Using NORA, Plato often finds interrelationships that would never have occurred to me and then documents and prioritizes them and presents to me why each is important. Such searches are often done by mainframes and supercomputers, but I don’t have all that, so I rely on my own version of distributed processing in which I use my own bank of PCs plus some others that I “borrow” by farming work out across an N-tier architecture of commercial, university and government mainframes and other PCs. This is particularly useful when searches can be performed independently and the results can then be collated and evaluated by my own computers.
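That scatter-gather pattern – farm independent sub-searches out, collate the results locally – can be sketched as follows. A local thread pool stands in for the borrowed remote machines, and the three-node corpus is invented:

```python
# Scatter-gather sketch: independent sub-searches run in parallel
# (here on threads, standing in for remote machines) and the partial
# results are merged on the local machine.
from concurrent.futures import ThreadPoolExecutor

CORPUS = {  # toy per-node document stores
    "node-a": ["fountain of youth", "golden apples of Idun"],
    "node-b": ["peaches of immortality", "apple of discord"],
    "node-c": ["ambrosia", "apples of the Hesperides"],
}

def search_node(node, keyword):
    """One independent sub-search; in a real system this would execute
    on a remote machine and return only its local hits."""
    return [doc for doc in CORPUS[node] if keyword in doc]

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(search_node, CORPUS, ["apple"] * len(CORPUS)))

# Collate the independent partial results locally.
hits = sorted(h for part in partials for h in part)
print(hits)
```

Because each sub-search touches only its own data, the nodes never need to coordinate; only the cheap merge step runs centrally, which is what makes borrowing idle machines practical.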

As you might expect, when I turned Plato onto this study, it did its usual job of searching, often for more than 5 or 6 days and nights (using my six interlinked computers and 18 terabytes of HDD space plus all the other systems that I could make use of). Each search gave me new insights and allowed me to make the next search more specific and productive. When it was done, it found something very interesting……..apples.

It found that when you condense thousands of ancient myths, legends and folklore, apples come up an extraordinary number of times in relation to stories of immortality and anti-aging. Oh, and not just any apples. It seems that only Golden Delicious and Braeburn apples have the most consistent connection to life-giving effects. Obviously, I had to follow this new idea and read many of the stories and links that Plato had documented. Norse, Greek, Chinese, American Indian and Australian Aboriginal mythology all have detailed references to stories that relate apples to immortality. Links like these simply cannot be a total coincidence. There has to be more to this than just a common fruit.

This was enough to go on, so I went back to the hard sciences of experimental and biomedical gerontology to see if there was any link to apples. I really got frustrated because, for months, I found virtually no connection to apples and I was beginning to think I might have gone too far in the wrong direction. It took more than a year and hundreds of searches – months of dedicated processing time with Plato’s help – but I finally found it. It turns out the reason it took so long is that one of the critical research papers that made the connection was only published in November of 2009. That paper was essentially the keystone of the whole story and provided the final piece of the puzzle that made everything else work and make sense. Here is the connection, but first I have to give you some of the other findings so you can see the series of links that leads to apples.

First the hard science: Fibrocyte is a term used to identify inactive mesenchymal multipotent stem cells (MSC), that is, cells that can differentiate into a variety of cell types. The term “Fibrocyte” contrasts with the term “fibroblast.” Fibroblasts are connective tissue cells characterized by the synthesis of proteins of the fibrous matrix, particularly the collagens. When tissue is injured – which includes damaged, worn out, aged or destroyed – the predominant mesenchymal cells, the fibroblasts, are able to repair or create new replacement tissues, cells or parts of cells. These fibroblast MSCs are derived from Fibrocytes and from muscle cells and glands.

Recently, the term “Fibrocyte” has also been applied to a blood-borne cell able to leave the blood, enter tissue and become a fibroblast. As part of the more general topic of stem cell biology, a number of studies have shown that the blood contains marrow-derived cells that can differentiate into fibroblasts. These cells have been reported to express the hematopoietic cell surface markers, as well as collagen. These cells can migrate to wound sites, exhibiting a role in wound healing. There are several studies showing that Fibrocytes mediate wound healing and fibrotic tissue repair.

Time to translate: the above says that one form of stem cell is called a Fibrocyte, which can express (a genetics term meaning create or manifest) as a fibroblast, which is a powerful cell capable of healing or even creating other body cells or cell parts. Fibroblasts can be created from Fibrocytes and from muscle cells. A special form of fibroblast has recently been found in blood and is called the myofibroblast (the myo- prefix refers to muscle). Myofibroblasts appear to also be created by bone marrow and have been found to be critical to wound healing and tissue repair. Myofibroblasts are a blood-borne stem cell that can give rise to all the other blood cell types but, as you will see, they can do more.

OK now let’s jump to another researcher that found that Myofibroblasts in the wound tissue are implicated in wound strengthening by extracellular collagen fiber deposition and then wound contraction by intracellular contraction and concomitant alignment of the collagen fibers by integrin mediated pulling on to the collagen bundles. It can contract by using muscle type actin-myosin complex, rich in a form of actin called alpha-smooth muscle actin. These cells are then capable of speeding wound repair by contracting the edges of the wound. More recently it has been shown that the production of fibroblasts can be enhanced with photobiomodulation.

The translation of the above is that Myofibroblasts exist almost everywhere in the body but not in large quantities. Under certain conditions, muscle tissues, bone marrow and other surfaces within the body can create Myofibroblasts. Since the Myofibroblasts moves within the blood, it can reach everywhere in the body but Fibrocyte and fibroblasts are confined to specific sites within the body.

Myofibroblasts are also a sort of universal repair kit for cells and organs that can strengthen the organs and cells down to the cell wall using collagen, actin and intracellular contraction along with constructive rebuilding using special fibers that re-enforce and rebuild cells and parts of cells.

Perhaps the most important finding is that photobiomodulation can cause the level of fibroblasts in the body to increase. Fibroblasts have a self-renewal capacity to maintain their own population at an approximately constant level within the body but under special conditions created by photobiomodulation, that population can be made to grow larger. Under certain light conditions, fibroblasts increase in the blood for many hours or days before returning to their preset but relatively low constant level.

Low-level laser therapy (LLLT, also known as photobiomodulation, cold laser therapy and laser biostimulation) has long been known as a medical and veterinary treatment, which uses low-level lasers or light-emitting diodes to stimulate or inhibit cellular function. This is a really hot topic in the medical community because of its implications to non-pharmacology and non-invasive healing. Clinical and laboratory research investigating optimal wavelengths, power densities, treatment duration and treatment intervals are being performed in dozens of labs all over the world and these labs are publishing numerous papers on the subject. Among these papers, I (and Plato) have found several studies that show that the density of fibroblast cells and phytochemicals increase significantly under the LLLT.

As a universal repair kit, it would be more desirable to have Myofibroblasts than fibroblasts because Myofibroblasts can move throughout the body and repair many more kinds of cells. However, since fibroblasts are more abundant than Myofibroblasts and are continually being created by Fibrocytes, the stimulation of fibroblast production using a special form of light therapy is a major discovery. The challenge now is to get the fibroblasts to create more Myofibroblasts.

It is a well-established fact that apples exhibit strong antioxidant and antiproliferative activities and that the major part of their total antioxidant activity comes from a combination of phytochemicals. Phytochemicals, including phenolics and flavonoids, are the bioactive compounds in apples.

A remarkable finding was made in November 2009. While experimenting with the variables in LLLT treatments and measuring the production of stem cells, it was discovered by accident that apples significantly increased the conversion of fibroblasts cells into Myofibroblasts cells. Further research narrowed the effect to just two types of apples, showing that Golden Delicious and Braeburn apples had the best impact on the health and growth of new Myofibroblasts cells.

Further research has shown that this amazing apple effect on cell morphology – the cells became larger and stronger in the presence of the selected apples – is nearly identical to the effect of Human Growth Hormone (HGH), which is used to stimulate the growth of cells. This means that apples could be the missing piece of the puzzle for growing stronger and longer-lasting cells and could possibly be substituted for HGH therapy.

The net effect of the LLLT on patients that also have a daily diet of at least one Golden Delicious or Braeburn apple is that there is a significant improvement in cell morphology (structure) and in the quantity of fibroblast cells and that those cells are converted into Myofibroblasts cells in significant quantities.

There is one more piece to the puzzle. Even though Myofibroblasts have these great healing and regenerative powers and can travel anywhere in the body in the blood, we need to direct that effect at the telomeres so that the repair of that one aspect of the cell can allow the normal cell reproduction and renewal process to continue and not die out with age. To do that, we have to change to another line of scientific inquiry.

Regenerative Medicine is a field of study on how to combine cells, engineered cells and suitable bio-chemical stimulation to improve or replace biological functions, tissues and physio-chemical functions. This is a new field of study that most often makes use of stem cells as the major construction and repair tool. By using stem cells or progenitor cells, researchers have developed methods to induce regeneration using biologically active molecules. The focus of this work has been on the most concentrated and powerful stem cells, from embryonic and umbilical cord blood, primarily because they want the fastest and most effective response possible on large repairs – like rebuilding the spine or liver. Although Myofibroblasts are less versatile than embryonic stem cells, they are also multipotent stem cells – meaning that they can repair or rebuild other cells.

Regenerative Medicine applies the stem cells directly to damaged areas and hopes that they will go to the damaged area and fix it. But this method will not work if the problem to be fixed is every cell in the body. Site-specific injections won’t work, so we have to rely on the body’s natural systems to deliver the stem cells where we want them. The only way to reach them all is through the blood, and the only stem cells that travel effectively in the blood are the Myofibroblasts.

But even moving thru the blood will not automatically make the Myofibroblasts find and repair the telomeres. I had to find a way to specifically make the Myofibroblasts address the repair of the telomeres. To do so, they must somehow be told where to look and what to fix. I found this is done with a method called telomere-targeting agents (TTA). TTAs were developed to tag the telomeres of cancer cells and then use a small molecule called BIBR1532, a telomerase inhibitor, to shorten the cancer cells’ telomeres, thus destroying the cancer. TTA has only rarely been used to identify a point of repair rather than a point to inhibit or destroy, but the difference between the two methods is relatively minor.

So, up to this point, we know that Myofibroblasts are stem cells that possess the ability to rebuild, recreate and reproduce a variety of body cells. This capability is that of multipotent progenitor cells (MPC). However, the most recent research has shown that certain specific light frequencies, pulse durations and repeat treatments, when using LLLT in the presence of the essential elements of apples, create not just multipotent stem cells but pluripotent stem cells (PSC). These are cells that can essentially become or repair any cell in the body. It would appear that we have not yet perfected the transformation of all of the fibroblasts into pluripotent stem cells, but many are converted, and many more are converted into MPCs. Both PSCs and MPCs are then applied to rebuild, recreate and produce a variety of body cells through a process known as transdifferentiation. This is not just repair but wholesale recreation or replacement of damaged cells.

These MPC’s and PSC’s can give rise to other cell types that have already been terminally differentiated. In other words, these stem cells can rebuild, recreate and reproduce any other cells. By using the special tagging process called TTA, we can direct these stem cells to seek out and repair the telomeres of cells everywhere in the body.

The next big advance in my research was finding a research project funded by the National Institute on Aging (NIA), which is a part of the National Institutes of Health (NIH). What was odd about this study is that it was taking place at Fort Detrick in Frederick, Maryland. This is somewhat unusual because Ft. Detrick is where the Dept of Defense does a lot of its classified and dangerous medical research. You would not normally think of aging as being in that group. It got more confusing when I discovered that the labs being used were a part of the National Interagency Confederation for Biological Research (NICBR). This implied that the program was funded across all of the government medical community and that it had enormous resources to pull from. It also spoke of how important this program was. As I looked into this group, more out of curiosity than for content, I discovered that a senior director at the NICBR was an old buddy of mine from my days at DARPA and NRL. He is now a senior ranking officer who manages the funding and scientific direction of multiple programs. I am being somewhat secretive because I don’t want his identity to be known. Suffice it to say that I called my old buddy and we met several times and I eventually got a full rundown on the project that I had uncovered.

In brief, what the NICBR is working on is how to enhance the health and recovery of our soldiers and sailors by natural process means. The program builds upon civilian studies and well-known biological facts such as that our bodies have a powerful defense against the growth of cancers, tumors and other defects like leukemia and lymphoma. Among these defenses are tumor suppressors and they work as described above by inhibiting the telomerase of the cancer cells. In the process, they also have the same but slower effect on normal cells – thus contributing to our aging. That means that if these Myofibroblasts stem cells are enhanced and are TTA tagged to repair and lengthen the telomeres of cells, they will do the same for cancer cells making people highly susceptible to tumors and cancers. If, on the other hand, they enhance the telomerase inhibitor in the cancer cells, they will also accelerate the aging process by reducing the length of telomeres in normal cells. We don’t want either one of these.

This joint NICBR team of researchers found that the accelerated lengthening of telomeres (ALT) is a process that can be enhanced using a tumor suppressor called ataxia-telangiectasia mutated kinase, or ATM. Using ATM with Myofibroblast stem cells and TTA tagging gave a marginal benefit in reducing cancers while producing a slightly smaller reduction in cell aging. There was, however, another one of those amazing accidental discoveries. During the testing, they had to use various cultured and collected Myofibroblast batches in their attempt to differentiate the effects of the TTA on normal cells from that on cancer cells.

Quite by accident, it was discovered that one batch of the Myofibroblast cells had an immediate and profound differentiation of normal and cancer cells. This one batch of stem cells had the simultaneous effect of tagging the telomeres to cause the repair and lengthening of the telomeres of normal cells while inhibiting the telomerase of the cancer cells. Upon closer examination, it was found that the only variable was that the Myofibroblasts in that batch were collected from the researcher who had been able to enhance Myofibroblast production using photobiomodulation in the presence of enhanced phytochemicals, including phenolics and flavonoids – the bioactive compounds in ….apples.

The NICBR team immediately zeroed in on the active mechanisms and processes and discovered that when ATM is lacking, aging accelerates, but they found that by manipulation of the p53/p16ink4a expression in the presence of the photobiomodulated Myofibroblast cells, they can differentiate the effects of the TTA on normal cells from that on cancer cells. The method involves using the catalytic subunit telomerase reverse transcriptase (TERT) of the telomerase enzyme to direct the Myofibroblasts to repair the telomere ends by tagging and using the TTAGGG repeat with manipulated p53/p16ink4a-Rb-mediated checkpoints and a very complicated process that involves bmi-1 and p10arf. I am not really sure what that means, but I am assured that this differentiated TTA coding process works and can be used to tag very specific repair sites.

In other words, using differentiated TTA with the photobiomodulated Myofibroblasts, the specially created stem cells can be essentially programmed to rebuild the telomeres back to what they were in childhood without any (known) serious side effects. It was, however, the presence of apples in the process that made the difference between success and failure….again.

Once the differentiated TTA coding is combined with the photobiomodulated Myofibroblasts using TERT, these special stem cells will seek out and perform an endless repair of the normal cell telomeres while suppressing the telomeres on cancer and tumor cells. That will have the effect of stopping or greatly slowing the aging process.

There is just one problem. There are not enough Myofibroblasts in the blood to affect aging sufficiently over a long period of time. But, as described above, fibroblasts are not only created within the bone marrow – the quantity of fibroblasts can also be stimulated and expanded in the presence of photobiomodulation (LLLT). AND we know from the above studies that the conversion of fibroblasts into Myofibroblasts is greatly enhanced by the unique biochemicals in apples in the presence of LLLT. Soooo….

In Summary:

I found that I could use a combination of certain apples and low-level laser therapy (LLLT), also known as photobiomodulation, to stimulate both the production of fibroblasts and the conversion of fibroblasts into Myofibroblasts. These light-stimulated Myofibroblasts cells, when used in connection with a special cell tagging process called TTA can be made to enhance the telomeres on normal healthy cells while suppressing the growth of telomeres of cancer and tumor cells. The end result is that the cells of the body approach immortality.

This has been a long research effort on my part and has led me down many dead-end paths. Plato helped me with a lot of this research and led me into areas that I would not have otherwise pursued. My former Dept. of Defense contacts gave me access to research databases and research findings that are not all available to the public. Many of the medical studies I read were published in obscure journals or newsletters that are only available from a few sources. Many of the links that I followed were links that I found between two different studies whose authors were not aware of each other. In other words, I didn’t do any of the actual research described here, but I (and Plato) did make a lot of logical connections between the various research papers I found. To my knowledge, no one has taken this as far as I have, except the few medical researcher friends of mine who helped me get access to the LLLT and performed the TTA coding for me.

But this is not just another story; I can tell you today that it works. At least, I am pretty sure it works as I have done this on myself.

The use of the LLLT was easy. I have been using 3.75 watts/cm² (a relatively powerful setting that I gradually worked up to over a year of therapy). I have been experimenting with 640 to 720 nanometers for the scan wavelength (just below the infrared range) in bursts of 370 nanoseconds. All of these settings have evolved over time and will continue to be refined. Of course, I also have been playing around with various ways of using the apples – eating them, cooking them, juice, pulp, skins, seeds, etc. The problem is that any change in the treatment does not have an immediate result. I have to wait weeks to see if there are any changes, and then they are often very small changes that can easily go unnoticed. Despite this, the results have been subtle but have accumulated into significant changes.
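For what it’s worth, the energy those settings actually deliver is easy to estimate – a single burst is just power density times burst length. A minimal sketch using only the numbers quoted above (the calculation is ordinary arithmetic, not anything from a medical source):

```python
# Energy delivered per cm^2 in one LLLT burst, using the settings above.
# fluence (J/cm^2) = power density (W/cm^2) * burst duration (s)

POWER_DENSITY_W_CM2 = 3.75   # the setting quoted in the text
BURST_LENGTH_S = 370e-9      # 370 nanoseconds, as quoted

def fluence_per_burst(power_w_cm2: float, burst_s: float) -> float:
    """Energy per square centimeter delivered in a single burst, in joules."""
    return power_w_cm2 * burst_s

print(fluence_per_burst(POWER_DENSITY_W_CM2, BURST_LENGTH_S))  # ~1.39e-6 J/cm^2
```

So each individual burst carries only microjoules per square centimeter; any meaningful dose has to come from a very large number of bursts over a session.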

I have always been in good health, but until 6 months ago I had bad arthritis in my hands and knees. That is gone. My skin has lost most of that pixilated look that old people get, and many of my “age spots” have disappeared. I used to have morning pains and had trouble stretching out or touching the floor. Now I can keep my knees straight and put my hands flat on the floor. I have not been sick despite several exposures to the flu – including H1N1. I have more energy and have stopped wearing my glasses.

As with all of my articles on this blog, I make no representations about this story except to challenge you to research it for yourself. Most of the basics are readily available on the web and a lot of the details can be found in some good medical databases like Medline, EMBASE, PubMed and WISDOM. I also used ARL, NRL and NIH resources.

This all has taken place over the past 38 months, but the self-treatments have been going on for only the past 7 months, so I have a long way to go to show I will live longer, let alone live forever, but right now I am feeling pretty good.

Just so you’ll know, I have submitted several patent applications for various processes and methods that I have described above. In several cases, a patent already exists, but through a process called drug repositioning, I can apply for a patent for an alternative use of an existing patented drug. This is only necessary for the TTA tagging chemical, and the patent on that chemical expired in 2007. I have patents in on the LLLT treatment settings that I have found to be most successful (not listed in this article) and on the optimum “application” of the apples to the processes – I didn’t exactly tell the whole story above. There are a few details that make it significantly more effective than what I described above, and those are the parts that are being patented. I say this just so everyone knows that anyone who attempts to duplicate my processes will be sued by me for patent infringement. I have a lawyer who will not charge me anything unless he wins, and he is convinced that he can win any such suit.

I want to also caution everyone that parts of this can be very dangerous. If you get it wrong, you can significantly enhance the growth of tumors or cancers in your body and no one can do anything to stop them. Don’t mess with this.

New Power Source Being Tested in Secret

The next time you are driving around the Washington DC beltway, the New York State Thruway, I-80 through Nebraska or I-5 running through California or any of a score of other major highways in the US, you are part of a grand experiment to create an emergency source of electric power.  It is a simple concept but complex in its implementation and revolutionary in its technology.  Let me explain from the beginning…

We cannot generate electricity directly.  We have to use either chemical, mechanical, solar or nuclear energy and then convert that energy to electricity – often making more than one conversion, such as nuclear to heat to steam to mechanical to electrical.  These conversion processes are inefficient and expensive to do in large quantities.  They are also very difficult to build because of environmental groups, inspections, regulations, competition with utilities and investment costs.   The typical warfare primer says to target the infrastructure first.  Wipe out the utilities and you seriously degrade the ability of the enemy to coordinate a response.  The US government has bunkers and stored food and water but has to rely mostly on public utilities or emergency generators for electricity.   Since the public utilities are also a prime target, that leaves only the emergency generators, but they require large quantities of fuel that must be stored until needed.  A 10-megawatt generator might use 2,500 gallons of fuel per day.  That mandates a huge storage tank of fuel that is also in demand by cars and aircraft.  This is exactly the kind of tenuous survivability link that the government does not like to rely on.
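To put that fuel problem in numbers, here is a quick sketch using the figure quoted above; the 30-day outage horizon is my own assumption for illustration:

```python
# Fuel storage needed to keep an emergency generator running through an
# extended outage, using the 10 MW / 2,500 gallons-per-day figure above.

GENERATOR_FUEL_PER_DAY = 2500   # gallons/day for one 10-megawatt generator

def storage_needed(gallons_per_day: float, days: float) -> float:
    """Total fuel (gallons) to run one generator for the given number of days."""
    return gallons_per_day * days

# A 30-day outage (assumed) for a single generator:
print(storage_needed(GENERATOR_FUEL_PER_DAY, 30))  # 75000 gallons
```

Seventy-five thousand gallons per generator per month is the kind of tank farm that is hard to hide and hard to defend, which is the whole point of the paragraph above.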

The government has been looking for years for ways to bypass all this reliance on utilities out of their control and sharing of fuel, with the goal of creating a power source that is exclusively theirs and can be counted upon when all other forms of power have been destroyed.  They have been looking for ways to extend their ability to operate during and after an attack for years.  For the past ten years or more they have been building and experimenting with one that relies on you and me to create their electricity. The theory is that you can create electricity with a small source of very powerful energy – such as nuclear – or from a very large source of relatively weak energy – such as water or wind.   The difficulty, complexity and cost rise sharply as you go from the weak energy sources to the powerful energy sources.  You can build thousands of wind generators for the cost of one nuclear power plant.  That makes the weak energy sources more desirable to invest in.  The problem is that it takes a huge amount of this weak energy source to create any large volume of electricity.  Also, the nature of having a clandestine source of power means that they can’t put up a thousand wind generators or build a bunch of dams.  The dilemma comes in trying to balance the high power needs with a low cost while keeping it all hidden from everyone.  Now they have done all that.

If you have traveled very much on interstate highways, you have probably seen long sections of the highway being worked on in which they cut rectangular holes (about 6 feet long, by 18 inches wide by nearly four feet deep) in the perfectly good concrete highway and then fill them up again.  In some places, they have done this for hundreds of miles – cutting these holes every 20 to 30 feet – tens of thousands of these holes throughout the interstate highway system.  Officially, these holes are supposed to be to fix a design flaw in the highway by adding in missing thermal expansion sections to keep the highway from cracking up during very hot or very cold weather.  But that is not the truth. 

There are three errors with that logic.   (1) The highways already have expansion gaps built into the design.  These are the black lines – filled with compressible tar – that create those miles of endless “tickety-tickety-tick” sound as you drive over them.  The concrete is laid down in sections with as much as 3 inches between sections, which is filled in with tar.  These entire sections expand and contract with the weather and squeeze the tar up into those irritating repeating bumps.  No other thermal expansion is needed.

(2) The holes they cut (using diamond saws) are dug out to below the gravel base and then refilled with poured concrete.  When done, the only sign it happened is that the new concrete is a different color.  Since they refilled the holes with the same concrete that they took out, the filling has the same thermal expansion qualities as the original, so there is no gain.  If there were thermal problems before, then they would have had the same problems after the “fix”.  Makes no sense.   (3) Finally, the use of concrete in our US interstate system was based on the design of the Autobahn in Germany, which the Nazis built prior to WWII.  Decades of research were done on the Autobahn and more on our highway system before we built the 46,000 miles of the Eisenhower National System of Interstate and Defense Highways, as it was called back in 1956.  The need for thermal expansion was well known and designed into every mile of highway and every section of overpass and bridge ever built.  The idea that they forgot that basic aspect of physics and construction is simply silly.  Ignoring, for a moment, that this is a highly unlikely design mistake, the most logical fix would have been to simply cut more long, narrow lines into the concrete and fill them with tar.  Digging an 18-inch wide by 6-foot long by 40-inch deep hole is entirely unnecessary.

Ok, so if they are not for thermal expansion, what are they?   Back in 1998, I was held up for hours outside of North Platte, Nebraska while traffic was funneled into one lane because they were cutting 400 miles of holes in Interstate 80.  It got me to thinking, and I investigated off and on for the next 7 years.  The breakthrough came when I made contact with an old retired buddy of mine who worked in the now defunct NRO – National Reconnaissance Office.  He was trying to be cool but told me to take a close look at the hidden parts of the North American Electric Reliability Corporation (NERC).  I did. It took several years of digging, and I found out NERC has its fingers in a lot of pots that most people do not know about, but when I compared their annual published budget (they are a nonprofit corporation) with budget numbers by department, I found about $300 million unaccounted for.  As I dug further, I found out they get a lot of federal funding from FERC and the Department of Homeland Security (DHS).  The missing money soon grew to over $900 million because much of it was “off the books”.

In all this digging, I kept seeing references to Alqosh.  When I looked it up, I found it was the name of a town in northwest Iraq where it is believed that Saddam had a secret nuclear power facility.  That intelligence was proved wrong during the inspections that led up to the second Iraq war but the name kept appearing in NERC paperwork.  So I went looking again and found that it is also a derivation of an Arabic name meaning “the God of Power”.  It suddenly fell into context with the references I had been seeing.  Alqosh is not a place but the name of the project or program that had something to do with these holes that were being cut in the highway.  Now I had something to focus on. As I dug deeper, some of it by means I don’t want to admit to, I found detailed descriptions of Alqosh within NERC and its link to DoD and DHS.  Here’s what I found. 

The concrete that was poured into those holes was a special mixture that contained a high concentration of piezoelectric crystals.  These are rocks (quartz), ceramics and other materials that produce electricity when they are physically compressed.  The mix was enhanced with some custom-designed ceramics that also create electricity.  The exact mixture is secret, but I found out that it contains berlinite, quartz, Rochelle salt, lead zirconate titanate, polyvinylidene fluoride, sodium potassium niobate and other ingredients. The mix of quartz, polymers and ceramics is unique and has a very specific intent behind it.  Piezoelectric materials will produce electricity when they are compressed – this is called the direct piezoelectric effect (like a phonograph needle).  But they also have exactly the opposite effect.  The lead zirconate titanate crystals and other ceramics in the mix will expand and contract in the presence of electricity – this is called the reverse piezoelectric effect.  This is how tiny piezoelectric speakers work.

Part of the concrete mix was designed to create electricity when compressed by a car passing over it.  Some of these materials react immediately and some delay their response for up to several seconds.  This creates a sort of damped wave of voltage spikes passing back and forth through the material over a period of time. While some of this mix is creating electricity, other parts of the specially designed ceramics were intended to flex in physical size when they sensed the electricity from the quartz materials.  As with the quartz crystals, some of these ceramics delay their responses for up to several seconds – sort of like time-released capsules.  The flexing ceramics, in turn, continue the vibrations that cause the quartz to continue creating electric pulses.

The effect is sort of like pushing a child’s swing.  The first push or vibration comes from the car passing.  That, in turn, creates electricity that makes some of the materials flex and vibrate more.  This push creates more electricity, and the cycle repeats in an escalating manner until, like the swing, it is producing high waveforms of peak power spikes. The end result of this unique mix of chemicals, crystals, ceramics and polymers is what is called a piezoelectric transformer that uses acoustic (vibration) coupling (initiated by a car passing) to step up the generated voltages by over 1,500-to-1 at a resonance frequency of about 1 megahertz.  A passing car initiates the series of high-voltage electrical pulses that develop constructive resonance with subsequent pressures from passing cars, so that the voltage peaks of this resonance can top out at or above 12,700 volts and then taper off in a constant-frequency, decreasing-amplitude damped wave until regenerated by the next car or truck.  Multiple-axle vehicles can produce powerful signals that can resonate for several minutes.

Once all this electricity is created, the re-bar at the bottom of the hole also has a special role to play.  It contains a special spiral coil of wire hidden under the outer layer of conducting polymers.  By a careful design of the coil and insulating wires, these re-bars create a simple but highly effective “resonance tank circuit”.   The simplest form of a tank circuit is a coil of wire and a single capacitor.  The inductance of the coil and the capacitance of the capacitor determine the resonance frequency of the circuit.  Every radio and every transmitter ever made has had a tank circuit in it of one sort or another. The coils of wire on the re-bar create an inductor, and the controlled conducting material in the polymer coatings creates a capacitor that is tuned to the same resonance frequency as the piezoelectric transformer, making for a highly efficient harmonic oscillator that can sustain the “ring” (series resonance voltage magnification over a protracted time domain) for several minutes even without further injection of energy.   In other words, a car passing can cause one of these concrete patches to emit a powerful high-frequency signal for as much as 10 to 20 minutes, depending on the size, weight and speed of the vehicle.
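The tank-circuit part, at least, is textbook electronics: a coil of inductance L and a capacitor of capacitance C resonate at f = 1 / (2π√(LC)). A minimal sketch of what capacitance the polymer coating would need to hit the 1 megahertz figure quoted above – the 10 microhenry coil inductance is purely my assumption for illustration:

```python
import math

# LC tank circuit resonance: f = 1 / (2 * pi * sqrt(L * C))

def resonant_frequency(L: float, C: float) -> float:
    """Resonant frequency in Hz for inductance L (henries), capacitance C (farads)."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def capacitance_for(f: float, L: float) -> float:
    """Capacitance (farads) needed to resonate at f Hz with inductance L."""
    return 1.0 / ((2.0 * math.pi * f) ** 2 * L)

L_COIL = 10e-6                                 # assumed: 10 uH of re-bar coil
C_NEEDED = capacitance_for(1e6, L_COIL)
print(C_NEEDED)                                # ~2.5e-9 F, a few nanofarads
print(resonant_frequency(L_COIL, C_NEEDED))    # ~1e6 Hz, back to the 1 MHz target
```

A few nanofarads is plausible for a distributed capacitance in a coated bar, which is presumably why the story picks 1 MHz rather than, say, 60 Hz.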

The final element of this system is the collection of that emitted RF energy.  In some areas, such as the Washington DC beltway, there is a buried cable running parallel to the highway that is tuned to receive this electrical energy and pass it into special substations and junction boxes that integrate the power into the existing grid.  These special substations and junction boxes can also divert both this piezoelectric energy and grid power into lines that connect directly to government facilities. In other, more rural areas, the power collection is done by a receiver that is hiding in plain sight.  Almost all power lines have one or more heavy cables that run along the uppermost portions of the poles or towers.  These topmost cables are not connected to the power lines.  This line is most often used as lightning protection and is connected to earth ground.

Along those power lines that parallel highways that have been “fixed” with these piezoelectric generators, this line has been replaced with a specially designed cable that acts as a very efficient tuned antenna to gather the EMF and RF energy radiated by the modified highway re-bar transmitters.  This special cable is able to pick up the radiated piezoelectric energy from distances as far away as 1 mile.  In a few places, this specialized cable has been incorporated into the fence that lines both sides of most interstate highways.  Whether by buried cable, power line antenna or fence-mounted collector, the thousands of miles of these piezoelectric generators pump their power into a nationwide grid of electric power without anyone being aware of it. The combined effect of the piezoelectric concrete mix, the re-bar lattice and the tuned resonant pickup antennas is to create a highly efficient RF energy transmitter and receiver with a power output that is directly dependent upon the vehicle traffic on the highway.  For instance, the power currently created by rush hour traffic along the Washington DC beltway is unbelievable.  It is the most effective and efficient generator in the US and creates as much as 1.6 megawatts from the inner beltway alone.

The total amount of power being created nationwide is a secret, but a report that circulated within DARPA following the 9/11 attacks said that 67 hidden government bunker facilities were brought online and fully powered in preparation to receive evacuated government personnel.  The report, which was focused on the continuity of services, mentioned that all 67 facilities, with a total demand of an estimated 345 megawatts, “used 9% of the available power of Alqosh”.  By extrapolation, that means that the Alqosh grid can create about 3,800 megawatts, or about the power of two large nuclear power plants. So why is it secret?  Three reasons.  (1) The government doesn’t want the bad guys or the American public to know that we can create power from our highways.   They don’t want the bad guys to know because they don’t want it to become a target.  They don’t want the general public to know because they frankly do not want to share any of this power with the public – even if commercial utility power rates get extraordinarily high and fossil fuel or coal pollution becomes a major problem.
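The extrapolation from that report is simple proportion: if 345 megawatts is 9% of what is available, total capacity is 345 divided by 0.09. A one-liner to check the figure quoted above:

```python
# Back-of-envelope extrapolation from the DARPA report figures quoted above.

BUNKER_DEMAND_MW = 345     # total demand of the 67 facilities
FRACTION_USED = 0.09       # "used 9% of the available power of Alqosh"

def total_capacity(demand_mw: float, fraction: float) -> float:
    """Total generating capacity implied by a partial-load figure."""
    return demand_mw / fraction

print(total_capacity(BUNKER_DEMAND_MW, FRACTION_USED))  # ~3833 MW, i.e. "about 3,800"
```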

(2) Some of the materials in the concrete mix are not exactly healthy for the environment, not to mention that millions of people have had their travel plans messed up by the highway construction.  Rain runoff and mixtures with hydrocarbons are known to create some pretty powerful toxins – in relatively small quantities, but the effects of long-term exposure are unknown. (3) It’s not done yet.  The system is still growing, but it is far from complete.  A recent contract was released by NERC to install “thermal expansion” sections into the runways of the 24 largest airports in the US.  There is also a plan to expand into every railroad, metro, commuter train, subway and freight train system in the US.  A collaboration between DARPA, NERC and DHS recently produced a report that targets 2025 to complete the Alqosh grid with a total capacity of 26,000 megawatts of generating power.

The task of balancing the high power needs of the government with a low cost while keeping it all hidden from everyone has been accomplished.  The cost has been buried in thousands of small highway and power line projects spread out over the past 10 years.  The power being created will keep all 140 of the hidden underground bunkers fully powered for weeks or months after natural disaster or terrorists have destroyed the utilities.  The power your government uses to run its lights and toasters during a serious national crisis may just be power that you created by evacuating the city where the crisis began. 

Update: The TRUTH about Lucid Dreaming

Some time ago, I wrote about my experiments with lucid dreaming that expanded into sort of a hyper-sensory capability that allows me to visualize in my mind’s eye my surroundings, including things that I cannot see.  The essay on that got a little carried away, and in the tradition of this blog, I enhanced the story to the point that it was pretty crazy.  I can really do lucid dreaming and I can control my dreams and I do have problems with controlling the thoughts of my subconscious mind.  All that is true.  The part that talked about the hyper-sensory capabilities, the X-ray vision and the reading of other people’s dreams…well, that was all part of a lucid dream that I created for myself.  It felt very real when I was dreaming it, and I could recall almost every detail of the dream, so it was fresh in my thoughts as I wrote that essay.  I enjoyed the idea so much that I selected that dream several times and kept adding to it each time.  Anyway, I just wanted you to know that I wrote that one for fun, but I do keep using the lucid dreaming techniques and still enjoy creating my own dreams.

One part of that other essay that I dreamed up but have actually begun to take more seriously is the mix of my own memories and my lucid dreaming.  I have always had a good memory for details.  In school, I could read fairly fast and then recall most of what I read even months later.  It really helped on tests.  Since I have five degrees and over 450 college credits, you can see that I used that technique often and with good effect. Now I am experimenting with recalling old memories and reconstructing the surroundings at the time of the memory.  In my earlier essay, I mentioned my camping trip with my Dad when I was 12.  Using my lucid dreaming, I can proactively recreate that trip and the environment.  It sometimes gets a little confusing as to what I am creating out of imagination and what I actually remember from the event but I see that as sort of extrapolation between known data points.   

I usually begin with a completely static scene – like a painting on a canvas.  Everything is still, even the water in the stream.  I “walk” around the scene and fill in the unknowns, like what color the tent and the sleeping bags were and what kind of trees were nearby.  After I get it just so, I press “play” and let the scene unfold as I remember it.  When I get to a part I don’t remember, I stop and fill it in.  For instance, I remember setting up camp and then I remember being in a canoe fishing.  I don’t remember where we got the canoe or how we got out on the lake – so I create that part.  The creation process is partly trying to dig deep into my memories and partly just imagination to make up what probably happened.  The whole process is like creating a movie by combining a bunch of scenes together to make a whole story. On one trip, I swung off a 30-foot high cliff on a big thick rope.  As I was out over the water, I lost my grip and fell into the lake.  It was a long drop and I belly flopped.  It knocked the wind out of me and I could not breathe and nearly passed out.  My Dad pulled me out and helped me recover.  I have replayed that scene several times.  I found that I could play it in slow motion.  Since this is a dream based on a memory, I can do just about anything I want.  I then would move around the scene and see how it happened, in great detail.  This was informative, and I tried it with a number of other memories – like my motorcycle-bus accident, my time in Viet Nam and while I was a cop.  It has given me a whole new appreciation for those events.

Since I discovered that I could slow down what I have seen, I decided to conduct a few experiments.  I scanned a book by turning the pages as fast as I could while still looking at every page.  Then in my dream, I recalled that memory and slowed it down.  I found that I could, in fact, re-read the book because my eyes had indeed captured the text and images of the book even if my mind had not absorbed the content of the text.  Now in my dream, I could look over my shoulder and read the page.  When I was done with the page, I would advance the scene until the next page was visible.  In this way, I read the book in my dreams.  It really did work. I decided to try to learn a skill this way.  I found a book on how to sculpt clay into statues and busts of people.  I had never done that before, so I quick-scanned the pages and then dreamed that I was reading it slowly and learning it.  In my dream, I then sculpted a beautiful clay statue of Zeus.  I was amazed that in my dream, I was able to apply all the skills of the book without having to actually read the book.  When I woke up, I could not wait to try to recreate the Zeus statue.  I set up all the clay and tools, went to work, and created a statue that might have been mistaken for a mix between a duck and a camel.  It was an utter disaster.  There was no skill there at all.  It was all in my dream and in my imagination.  I had not learned anything from the quick-scan of the book because I had not actually learned or read anything other than looking at the pictures.  This experiment showed me that I have to be careful when I mix dreaming with memories, to avoid creating false memories and wishful thinking.

Warfare will never be the same….

When I was working for NRL, I had an opportunity to work on a biomarine project that had to do with seeding the ocean with time released pills that would attract sharks and other man-killing creatures (sea snakes, squid, rays, etc.) with the intent of protecting naval vessels.  The object was to find a chemical mix that would be a powerful attractant that could be put into the water and it would work for hours or days.  The attractant would bring in killer sharks or other sea creatures that would reduce or eliminate the underwater threat.  We called the program BAIT – bio-attractant interdiction and targeting. 

We quickly found that we had to encapsulate the chemicals in a manner similar to time-released medicines in order to make them last more than a few minutes.  I was surprised to find out that the science of micro-encapsulating chemicals was extremely well developed and that precise timing could be achieved with the right coating.

Even with long time delays of many days, we did not achieve the levels of protection we wanted.  It was decided to try to release these attracting chemicals after they had made actual contact with an enemy diver or mini-sub.  Again I was surprised at how much had already been developed in terms of micro-encapsulated chemicals that would release their contents upon contact with specific materials – in this case the foam rubber of wet suits.  We got it to work, but like a lot of bleeding-edge weaponry, it was shelved as being too much trouble for too little gain.  We also took a lot of flak from the Navy SEALs, who regarded the underwater arena as their battlefield and did not want it contaminated with other creatures.  The BAIT program was shelved.

What it impressed upon me was the whole science of micro-encapsulation and its possibilities.  Later on, when I went to work for DARPA, I recalled this knowledge to solve one of their most ambitious projects.   This is one of those topics that they don’t want me to talk about, but technically and officially, the cat is out of the bag and the restrictions are off, so here it is….


The art and science of camouflage is very well developed and getting better all the time.  We are not too far away from near invisibility using light bending materials or projections but those are for what they call Dynamic Camouflage (DC) used for moving objects like aircraft, ships, soldiers and tanks.  There is a much less well developed science for Static Camouflage (SC) used to hide fixed installations, field units, artillery, command posts, and even entrenched soldiers.  SC is actually not much more developed than it was in WWII – using colors and patterns on tarps and netting to hide under.  To be sure, the colors and patterns are getting better at duplicating the environment but they are still pretty crude. 

One advance that has made these cover-ups more effective is that they have been made to reflect or block radar and IR sensors so as to match the surrounding environment.  This is a big gain because it makes everything under the tarps and nets invisible to aircraft or recon autonomous vehicles.  

In fact, the latest covers used in SC are so good that it has proven to be a serious problem to find and disrupt troop movements and supply lines.  Trucks can simply cover up until the aircraft are out of the area. Or they can even travel with the tarps covering most of the vehicles.  With virtually no radar image, no visible contrast with the surroundings and no IR signature – the only give-away to their presence is the small dust or exhaust trail. 

 DARPA has wanted an effective anti-camouflage capability for years.  I gave it to them and called it METs – micro-encapsulated tags.  It is actually a fairly simple idea that uses the same technology that I used at NRL on the BAIT program.  The signature of the materials used to make most of the equipment that the enemy uses can be uniquely defined in terms of precise chemical formulas for the dyes, paints, fuels, metals and plastics used in their manufacture.  As long as we could find one distinctive chemical that separates their vehicle paint or their clothing dye from ours, we could make a tag for that unique item and all like it.

The METs were simply small (much less than 1 mm) colored glass balls with an opaque gelatin coating outside.  The glass beads are very round and have a unique coating on them.  The outside of the coating is like the side of a one-way mirror that you can see through.  The coating facing the interior of the glass bead is like the mirror side of a one-way mirror.  This is not some new technology.  This design has been used on road signs and reflective markers since the early 1960’s.  It is very effective because it reflects light like a corner reflector – back to the light source – no matter what angle the light comes from.

The gel-coatings were made to react with those unique chemical compounds found in specific enemy equipment.  Until they make that contact, they are almost totally passive, but once they make contact with their designated target material, they will immediately get sticky to that material and glue themselves to it.  Green glass balls are in METs that react to the paint on their vehicles.  When the green glass METs come in contact with an enemy vehicle, the reaction simply consists of the coating on the glass liquefying and flowing off the glass – exposing the glass.  The coatings do not react to any other chemicals and cannot be washed off.  After it melts off of the top of the glass bead, the coating then hardens slightly, holding the glass bead in place for a short time, and then it also dissolves and the glass bead will fall off – clean of any gel-coating at all.  That’s all it does.

Blue glass beads are inside METs that react to a unique quality in their rubber vehicle wheels.  Red glass beads are inside METs that react to the soles of their boots. Yellow beads react with fuels and oils….and so on.  We have over 300 METs now using various shades of colors plus more than 900 others that reflect different colors for the same surfaces or targets.  This helps in long term surveillance. 

 These METs are so small that you would have to get very close to one – inches – to see it.  Since it looks so much like all the rest of the dirt and dust of the combat zone, it is nearly impossible to see, find or remove.  Millions of these METs are discharged from a high flying aircraft to cover a combat area.  As they fall, the winds spread them out over vast areas.  Sometimes, they are released in even larger quantities during storms so as to blend in with the dust or rain.  Since they are unaffected by rain, snow, heat, or cold, they can remain “active” for months after being deployed. 

 Once the METs have been put into an area, a drone recon plane with some special gear on board is dispatched to scan the area.  The special equipment is a rapidly scanning and modulated laser beam that scans out 45 degrees either side of the flight path using a very narrow beam that is linked to an array of sensors and a GPS.  When the laser beam strikes one of the exposed MET glass beads, the laser light is reflected back to the drone.  The reflected beam is verified as being what was sent out by matching the modulation of the light and then it is timed and recorded so as to determine the exact GPS coordinates of the reflected beam.  The light color is analyzed and verified with repeated scans so that it can be determined what color MET was found.  Once found, the drone will scour the area for other METs. 
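The ranging step described above – timing the reflected beam to place a MET on the map – is ordinary laser time-of-flight: the pulse travels out and back, so distance = c × Δt / 2. A minimal sketch of that arithmetic; the 1.5 km example range is my own illustration:

```python
# Laser time-of-flight ranging as described above: the round-trip time of
# the reflected, modulation-verified pulse gives the distance to the MET.

C = 299_792_458.0  # speed of light in m/s

def round_trip_time(distance_m: float) -> float:
    """Seconds for a laser pulse to reach a reflector and return."""
    return 2.0 * distance_m / C

def distance_from_echo(dt_s: float) -> float:
    """Distance in meters to the reflector, given round-trip time dt_s."""
    return C * dt_s / 2.0

dt = round_trip_time(1500.0)      # a MET 1.5 km from the drone (example range)
print(dt)                         # ~1.0e-5 s, about 10 microseconds
print(distance_from_echo(dt))     # recovers 1500 m, to floating-point precision
```

Combined with the drone’s own GPS position and the scan angle at the moment of the echo, that single timing measurement is enough to fix the MET’s coordinates.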

  Since the beam is modulated and constantly moving and is not visible to the human eye, it is nearly impossible to detect.  Since the drones cannot be heard on the ground and they travel at night and have a very tiny radar cross-section, the drone itself cannot be detected.  This means that both the dropping off and the detecting of these METs are undetectable and almost totally passive.  No emissions to be jammed.  Nothing to shoot down or avoid.  No way to avoid being detected. 

 The temporal aspects of using METs give them even more value.  Dispersing a layer of METs on day one lets you see whether anything moves in that area for days afterward.  Putting down a second layer of slightly different colored METs later gives a record of when travel occurred and in what volume.  Laying down a coating over a large area and then scanning each day for signature reflections can monitor any traffic in the area.  This works great for locating tracks and trails of enemy traffic at night or when we are not present, but it has its greatest benefit against SCs.

Static camouflage (SC) used to hide fixed installations is often very good, but METs penetrate it easily.  In fact, because the METs can be made to react to the actual materials used to create the camouflage, these locations now light up like Christmas trees to the scanning drones.  SC is no longer a problem for DARPA or our military.  METs can see into the past by showing us where the enemy has been.  They can make the best camouflage in the world obsolete while being impossible to stop during deployment, undetectable by the enemy once in place, and immune to being blocked, jammed or fooled.

  Even telling everyone this now gives the enemy no advantage, since they cannot avoid MET detection.  Our ability to adapt to new materials being used and to rapidly produce unlimited quantities of METs will keep us ahead of any attempt to alter or disguise their equipment, and therefore we will always be able to find them, no matter where or when they hide.

 The last I heard, a contract had been released to create smart bombs and cruise missiles that will use METs as a final fire-control aim point.  They will be able to target by MET color and concentration level so as to pick and choose targets on a cluttered and massive battlefield or combat zone.  This extends the application to Dynamic Camouflage (DC) targets as well as SCs.

 You will see in my other report on the new MDR192 (Military Digital Rifle) that its aiming “system” is also adaptable to using METs.   The MDR192 is a semi-autonomous sniper system that can be operated entirely by remote-control.

 I am not working on it, but I have heard that DARPA is also working on a MET that works at RF frequencies so that air-to-air missiles can use previously deployed METs that paint enemy aircraft.   These new RF METs are essentially nano-sized corner reflectors similar to those used in survival situations.   It was discovered that nearly perfect reflectors could be made with bubble technology at nano-scale diameters while creating an RCS (radar cross section) that appears as much as 400 times larger than the actual target.  This almost totally defeats the use of stealth technology, non-metallic construction (carbon fiber) or very small, very fast missiles.

 Earlier studies have shown that the size of the MET can be so small that it can be deployed as an aerosol that hangs in the air or is absorbed by clouds.  These METs are on the order of 1/100th of a millimeter or less in diameter and have been renamed Nano-Encapsulated Tags, or NETs.   NETs are so small that they hang in the air like smoke and can form aerosol clouds.

 NETs will allow autonomous defensive weapons, called close-in weapons systems (CIWS), like the Mk 15 Phalanx, to have an additional mechanism to ID an intruder that has simply flown through a cloud of nano-sized NETs.  Using NETs in combination with the new millimeter-wave radar, forward-looking infrared (FLIR) and visual high-resolution multi-spectral data acquisition systems will make the ship's defenses nearly impenetrable.  Even the best stealth anti-ship missiles traveling at Mach 5 or higher will be unable to reach their targets.


Finally, DARPA has adapted the NET technology to work above and below the ocean's surface.  Floating METs and NETs activated by passing ships create trails so visible that they can be tracked by satellite.  Using the same NET technology as in the CIWS aerosols and cloud seeding, the Navy can lay down a barrier of liquid tags released at multiple levels from air-dropped buoys.  These tags respond to the rapid and large-scale changes in pressure and movement when something as large and as fast as a submarine moves through the tagged water.  Using visual blue-green lasers scanning from multiple levels of a cable dropped from a buoy, the activated tags can be spotted and tracked using RF signals transmitted from the above-water buoy.  This allows precise location and targeting without the target sub ever being aware it has been discovered.

With the advent of METs or NETs on land, in the air and at sea, the idea of hiding or making a surprise attack is a thing of the past.  Warfare will never be the same again.

We now have a gun you would not believe….

I was recently a part of a beta test group for the MDR192 – Military Digital Rifle.  This new weapon is a cross between a video game and a cannon.  In its prototype form, it begins as a modified Barrett M82, 50 cal. sniper rifle in a bullpup configuration.  This SASR (Special Applications Scoped Rifle) uses an improved version of the moving recoil barrel and muzzle mounted recoil deflector to reduce recoil while improving the ability to reacquire the sight picture.


A further modification consists of a small box attached to the monopod socket of the rear shoulder rest and another small box attached to the underside of the fore stock where the detachable bipod would normally be attached.  Inside these two boxes is an intricate mix of servos, gyros and electronics.  There is a quick-mount connection between these boxes and two motorized and articulated tripods that fully support the rifle at any predetermined height and angle.  These boxes are extensions of the Barrett BORS ballistic computer that integrate optical ranging with digital and computer interpolated capabilities. 


The sight has been replaced with a very sophisticated video camera with superior optics.   The sight's camera feed and the two control boxes are then connected to another small box that sits beside the rifle with a digital radio transceiver that uses frequency hopping to avoid jamming and detection.


The system is not done yet.  There are at least two additional video camera sights (VCS) placed at some distance from the rifle on their own motorized and articulated tripods.  Up to 6 scopes can be used with this system, and they can be placed to completely surround the target area at distances up to 4,000 yards.  This gives a target circle up to 8,000 yards in diameter, or about 4.5 miles.  The rifle-mounted sight and the multiple VCSs all have night vision capabilities and can switch to infrared imaging.


The MDR192 shoots a modified M82 50 cal round that uses depleted uranium for weight and an oversized action and barrel to withstand the more powerful gunpowder used to push the 12.7x99mm bullet to 3,977 fps out of the 62-inch barrel.  The rated effective range is 8,290 feet with a maximum range of 29,750 feet; however, this cartridge is lethal out to 24,000 feet.


The perimeter video camera sights (VCS) and the one on the MDR192 are all fed into a laptop computer that communicates with all of them by a wireless network.  The shooter can be located as far away as 500 feet from the rifle.  The computer is in his backpack.  He wears a pair of video goggles that give him a 3-D image of the target area, and using the depth of field, interpolation and imagery of the multiple VCSs, he can move his point of view to any position in the target zone that can be seen or interpolated by the VCSs and computer.  This includes the real-time position of moving human targets.


Using an arm mounted control panel, which includes a button joystick, he can move a tiny red dot around on the screen of his goggles.  This red dot represents the impact point of the MDR192’s bullet.  The computer will fade the red dot to a yellow one if the bullet must penetrate something before hitting the designated target and it fades to blue when it is unlikely that the bullet can penetrate to the target.
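The red/yellow/blue dot logic described above reduces to a threshold check. A minimal sketch; the thickness-based thresholds are my assumption, since the text gives no numbers:

```python
def dot_color(obstruction_mm: float, max_penetration_mm: float) -> str:
    """Color of the aim dot in the shooter's goggles:
    red    = clear line of sight to the designated target,
    yellow = the bullet must penetrate something first,
    blue   = penetration to the target is unlikely.
    Thresholds are illustrative assumptions."""
    if obstruction_mm <= 0:
        return "red"
    if obstruction_mm <= max_penetration_mm:
        return "yellow"
    return "blue"
```

So a clear shot shows red, a target behind a thin wall fades to yellow, and one behind armor thicker than the round can defeat fades to blue.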


The 20-round magazine is loaded with Raufoss Mk 211 Mod 5 rounds, a multipurpose projectile with a depleted uranium core for armor piercing plus explosive and incendiary components that earn it the HEIAP designation.  These modified rounds also have an adaptive trajectory using one or more of five small jets on the boat-tail of the bullet.  These tiny jets do not propel the bullet but rather steer it by injecting air pressure into the slipstream of laminar airflow around the moving bullet.  The gain is the ability to steer the bullet into as much as a 22-degree curve in two dimensions.  Given the high-explosive aspects of the bullet, hitting within 6 feet of a human would be lethal.


The shooter’s target dot placement controls a laser pointer on each of the VCS’s and the rifle in order to place the hit point on anything that can be hit or killed.  The actual laser dot that the shooter sees in his goggles is not actually projected from the VCS’s but rather is created artificially inside the digital camera as if the shooter was placing it.  This gives the advantage of placing a designated hit spot onto a target that is not actually visible but within the capabilities of the rifle to hit using its penetration, explosive or bullet bending capabilities.


There is, however, a laser and ultrasonic acoustic emission from each of the VCS’s that allow for the precise determination of the air movements in the target zone.  This includes measures of air density, humidity, movement, elevation, etc.  This data is automatically fed into the computer to correct the rifle aim point to compensate for these parameters.


Once the VCS’s are set up and the rifle is mounted on its computerized tripods, the shooter can move away from the rifle’s location and activate the wireless connection to all the scopes and tripods.  The shooter has the ability to move the tripods up and down and left and right.  The rifle’s tripods can actually relocate the rifle by walking the weapon across the ground to reposition it, recover from recoil or to hide it.


The computer is preprogrammed with the full capabilities of the rifle and its ammo so that it will give an accurate and very precise aiming of the weapon based on the dot target and the gun's capabilities.  This means that it has been programmed with the exact bullet trajectory so that it can accurately aim and hit targets at the extreme range of the bullets, out to 24,000 feet (about 4.5 miles).  The computer uses this data plus the corrections for air movements and the capabilities of the weapon with respect to kill radius, bullet bending and penetration to accurately aim the rifle to hit the point that the shooter has designated.
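A vacuum, flat-fire holdover calculation gives the flavor of the trajectory math such a computer would start from; a real fire-control solution would layer drag, wind, humidity and the steering jets on top. Purely illustrative:

```python
import math

G_FTPS2 = 32.174  # gravitational acceleration, ft/s^2

def holdover_angle_deg(range_ft: float, muzzle_fps: float) -> float:
    """Elevation the computer must add so gravity drop is cancelled at
    the given range. Vacuum, flat-fire approximation: time of flight is
    range / muzzle velocity, and drop is 0.5 * g * t^2."""
    t = range_ft / muzzle_fps          # time of flight, s
    drop_ft = 0.5 * G_FTPS2 * t * t    # gravity drop over that time
    return math.degrees(math.atan2(drop_ft, range_ft))
```

For example, at 3,977 feet (one second of flight at the quoted muzzle velocity) the bullet falls about 16 feet, so the computer would hold over by roughly a quarter of a degree.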


The MDR192 passed its beta testing.  My part in the testing was to work on just the trajectory aspects of the computer programming since I had a hand in the original M82 testing to create the adjustable trajectory optical sight that is used on that weapon.  Since I was working with the weapon’s accuracy, I was privy to all of the tests and results.  The official word has not come back yet but from what I observed, it passed its tests with flying colors.  At just over $15,000 each with three VCS’s, this will be a weapon that will be deployed to Afghanistan within the next year.


Modifications that are already being alpha tested include digitally timed projectiles similar to the XM25 "smart bullets".  This will allow for increased reach into protected locations.  They are also developing an add-on to the VCSs that will sense RF emissions and portray them on the shooter's 3-D goggles as shades of colors.  This will allow the pinpointing of cell phones, radios, transmitters, etc.  A third modification is the use of advanced shotgun microphones to pinpoint acoustic emissions.  This will be integrated into existing inputs to refine and improve target locations.


As the inventor of the microencapsulated tags (METs), I was asked to create an interface between the MDR192 and METs.  Once this is done, camouflage of any kind will be completely obsolete, and it opens the door for all kinds of possibilities.  For instance, a completely automatic sniper rifle that can autonomously fire at targets that have been precisely verified as enemy combatants.  It can prioritize targets by their threat level.  METs also allow the use of EXACTO (Extreme Accuracy Tasked Ordnance) rounds currently being developed by Teledyne.  Laser-guided bullets are currently the focus of the guided bullet program, but using METs, the bullet could be guided by the target, no matter how the target moves.  My computer modeling is almost done and I will be turning over my findings to DARPA by the end of September.  I suspect they will move on it quickly, as they have earmarked $10 million to develop a guided bullet.


Big Brother is Watching

And He Knows Everything You Have Ever Done! Sometimes our paranoid government wants to do things that technology does not allow or that it does not know about yet. As soon as they find out, or the technology is developed, they do it. Case in point is the paranoia that followed 11 Sept 2001 (9/11), in which Cheney and Bush wanted to be able to track and monitor every person in the US. There were immediate efforts to do this with the so-called Patriot Act, which bypassed a lot of constitutional and existing laws and rights, like FISA. They also instructed NSA to monitor all radio and phone traffic, which was also illegal and against the charter of NSA. Lesser known monitoring was the hacking into computer databases and monitoring of emails by NSA computers. They have computers that can download and read every email on every circuit from every Internet user as well as every form of voice communication.

Such claims of being able to track everyone, everywhere have been made before, and it seems that lots of people simply don't believe that level of monitoring is possible. Well, I'm here to tell you that it not only is possible, it is all automated, and you can read all about the tool that started it all online. Look up "starlight" in combination with "PNNL" on Google and you will find references to a software program that was the first generation of the kind of tool I am talking about. This massive amount of communications data is screened by a program called STARLIGHT, which was created by the CIA and the Army and a team of contractors led by Battelle's Pacific Northwest National Lab (PNNL). It does two things that very few other programs can do. It can process free-form text and it can display complex queries in visual 3-D outputs. The free-form text processing means that it can read text in its natural form, as it is spoken, written in letters and emails, and printed or published in documents.
For a database program to be able to do this as easily and as fast as it would for the formally defined records and fields of a relational database is a remarkable design achievement. Understand this is not just a word search, although that is part of it. It is not just a text-scanning tool; it can treat the text of a book as if it were an interlinked, indexed and cataloged database in which it can recall every aspect of the book (data). It can associate and find any word or phrase in relation to any parameter you can think of related to the book: page numbers, nearby words, word use per page, chapter or book, etc. By using the most sophisticated voice-to-text conversion, it can perform this kind of expansive searching on everything written or spoken, emailed, texted or said on cell phones or landline phones in the US! The visual presentation of that data is the key to being able to use it without information overload and to have the software prioritize the data for you. It does this by translating the database query parameters into colors and dimensional elements of a 3-D display. To view this data, you put on a special set of glasses similar to the ones that put a tiny TV screen in front of each eye. Such eye-mounted viewing is available for watching video and TV, giving the impression you are looking at a 60-inch TV screen from 5 feet away. In the case of STARLIGHT, it gives a completely 3-D effect and more. It can sense which way you are looking, so it shows you a full 3-D environment that can be expanded to any size the viewer wants. And then they add interactive elements. You can put on a special glove that can be seen in the projected image in front of your eyes. As you move this glove in the 3-D space you are in, it moves in the 3-D computer images that you see in your binocular eye-mounted screens. Plus, this glove can interact with the projected data elements.
Let's see how this might work with a simple example. The first civilian application of STARLIGHT was for the FAA, to analyze private aircraft crashes over a 10-year period. Every scrap of information was scanned in from accident reports, FAA investigations and police records; almost all of this was in free-form text. This included full specs on the aircraft, passengers, pilot, type of flight plan (IFR, VFR), etc. It also entered geospatial data that listed departure and destination airports, peak flight plan altitude, elevation of impact, and distance and heading data. It also entered temporal data for the times of day, week and year that each event happened. This was hundreds of thousands of documents that would have taken years to key into a computer if a conventional database were used. Instead, high-speed scanners were used that read in reports at a rate of 200 double-sided pages per minute. Using half a dozen of these scanners completed the data entry in less than one month. The operator then assigned colors to a variety of ranges of data. For instance, he first assigned red and blue to male and female pilots and then looked at the data projected on a map. What popped up were hundreds of mostly red (male) dots spread out over the entire US map. Not real helpful. Next he assigned a spread of colors to all the makes of aircraft: Cessna, Beechcraft, etc. Now all the dots changed to a rainbow of colors with no particular concentration of any given color in any given geographic area. Next he assigned colors to hours of the day, doing 12 hours at a time: midnight to noon and then noon to midnight. Now something interesting came up. The colors assigned to 6 AM and 6 PM (green) and shades of green (before and after 6 AM or 6 PM) were dominant on the map. This meant that the majority of the accidents happened around dusk or dawn.
Next the operator assigned colors to distances from the departing airport: red being within 5 miles, orange 5 to 10 miles, and so on, with blue being the longest (over 100 miles). Again, a surprise in the image. The map showed mostly red or blue with very few in between. When he refined the query so that red was within 5 miles of either the departing or destination airport, almost the whole map was red. Using these simple techniques, an operator was able to determine in a matter of a few hours that 87% of all private aircraft accidents happen within 5 miles of the takeoff or landing runway. 73% happen in the twilight hours of dawn or dusk. 77% happen with the landing gear lowered or with the landing lights on, and 61% of the pilots reported being confused by ground lights. This gave the FAA the information it needed to improve approach lighting and navigation aids in the terminal control areas (TCAs) of private aircraft airports. This was a very simple application that used a limited number of visual parameters at a time, but STARLIGHT is capable of so much more. It can assign things like direction and length of a vector, color of the line or tip, curvature, width and taper to various elements of a search. It can give one shape to one result and a different shape to another. This gives significance to "seeing" a cube versus a sphere, or to seeing rounded corners on a flat surface instead of square corners on an egg-shaped surface. Everything visual can have meaning.  Having 20+ variables at a time that can be interlaced with geospatial and temporal (historical) parameters allows the program to search an incredible amount of data. Since the operator is looking for trends, anomalies and outliers, the visual representation of the data is ideal for spotting them without the operator actually scanning the data itself.
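The operator's color-binning trick, assigning a color to each distance range and then eyeballing the map, can be sketched like this. The bin edges follow the text; the sample distances are made up:

```python
def color_for_distance(miles: float) -> str:
    """Bin an accident's distance from the airport into display colors
    (red within 5 mi, orange 5-10 mi, blue over 100 mi, green between)."""
    if miles <= 5:
        return "red"
    if miles <= 10:
        return "orange"
    if miles > 100:
        return "blue"
    return "green"

def red_fraction(distances) -> float:
    """Fraction of accidents that would render as red dots on the map."""
    colors = [color_for_distance(d) for d in distances]
    return colors.count("red") / len(colors)
```

Run over a real dataset, a `red_fraction` near 0.87 would reproduce the "87% within 5 miles" finding the operator spotted visually.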
Since the operator is seeing an image that is devoid of the details of numbers or words, he can easily spot some aspect of the image that warrants a closer look. In each of these trial queries, the operator can use his gloved hand to point to any given dot and call up the original source of the information in the form of a scanned image of the accident report. He can also touch virtual screen elements to bring out other data or query elements. For instance, he can merge two queries to see how many accidents near airports (red dots) had more than two passengers, or were single-engine aircraft, etc. Someone looking on would see a guy with weird glasses waving his hand in the air, but in his eyes, he is pressing buttons, rotating knobs and selecting colors and shapes to alter his 3-D view of the data. In its use at NSA, they add one other interesting capability: pattern recognition. It can automatically find patterns in the data that would be impossible for any real person to find by looking at the data. For instance, they put in a long list of words that are linked to risk assessments, such as plutonium, bomb, kill, jihad, etc. Then they let it search for patterns. Suppose there are dozens of phone calls being made to coordinate an attack, but the callers are from all over the US. Every caller is calling someone different, so no one number or caller can be linked to a lot of risk words. STARLIGHT can collate these calls and find the common linkage between them, and then it can track the calls, callers and discussions in all other media forms. Now imagine the list of risk words and phrases to be tens of thousands of words long. It includes code words and words used in other languages. It can include consideration for the source or destination of the call, from public phones or unregistered cell phones. It can link the call to a geographic location within a few feet and then track the caller in all subsequent calls.
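The call-collation idea, finding a common risk word that links otherwise unrelated callers, is essentially a grouping operation. A toy sketch with an assumed three-word risk list (the real list, per the text, runs to tens of thousands of entries):

```python
from collections import defaultdict

RISK_WORDS = {"plutonium", "bomb", "jihad"}

def link_calls(calls):
    """Group call records by the risk words they mention.
    `calls` is a list of (caller_id, transcript) pairs; returns
    {risk_word: {callers}} so an analyst can see otherwise-unrelated
    callers joined by a common term."""
    links = defaultdict(set)
    for caller, transcript in calls:
        for word in transcript.lower().split():
            if word in RISK_WORDS:
                links[word].add(caller)
    return dict(links)
```

Two callers who never dial each other but both say "bomb" end up in the same bucket, which is the linkage the text describes.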
It can use voice print technology to match calls made on different devices (radio, CB, cell phone, landline, VOIP, etc.). This is still just a sample of the possibilities. STARLIGHT was the first generation and was only as good as the data that was fed into it through scanned documents and other databases of information. A later version, code named Quasar, was created that used advanced data mining and ERP (enterprise resource planning) system architecture that integrated the direct feed from information gathering resources. For instance, the old STARLIGHT system had to feed recordings of phone calls into a speech-to-text processor and then the text data that was created was fed into STARLIGHT. In the Quasar system, the voice monitoring equipment (radios, cell phones, landlines) is fed directly into Quasar as is the direct feed of emails, telegrams, text messages, Internet traffic, etc. So does the government have the ability to track you? Absolutely! Are they? Absolutely! But wait, there’s more! Above, I said that Quasar was a “later version”. It’s not the latest version. Thanks to the Patriot Act and Presidential Orders on warrantless searches and the ability to hack into any database, NSA now can do so much more. This newer system is miles ahead of the relatively well known Echelon program of information gathering (which was dead even before it became widely known). It is also beyond another older program called Total Information Awareness (TIA). This new capability is made possible by the bank of NSA Cray computers and memory storage that are said to make Google’s entire system look like an abacus combined with the latest integration (ERP) software and the latest pattern recognition and visual data representation systems. Added to all of the Internet and phone monitoring and screening are two more additions into a new program called “Kontur”. Kontur is the Danish word for Profile. You will see why in a moment. 
Kontur adds geospatial monitoring of a person’s location to their database. Since 2005, every cell phone now broadcasts its GPS location at the beginning of every transmission as well as at regular intervals even when you are not using it to make a call. This was mandated by the Feds supposedly to assist in 911 emergency calls but the real motive was to be able to track people’s locations at all times. For those few that are still using the older model cell phones, they employ “tower tracking” which uses the relative signal strength and timing of the cell phone signal reaching each of several cell phone towers to pinpoint a person within a few feet. A holdover from the Quasar program was the tracking of commercial data which included every purchase made by credit cards or any purchase where a customer discount card is used – like at grocery stores. This not only gives the Feds an idea of a person’s lifestyle and income but by recording what they buy, they can infer other behaviors. When you combine cell phone and purchase tracking with the ability to track other forms of transactions – like banking, doctors, insurance, police and public records, there are relatively few gaps in what they can know about you. Kontur also mixed in something called geofencing that allows the government to create digital virtual fences around anything they want. Then when anyone crosses this virtual fence, they can be tracked. For instance, there is a virtual fence around every government building in Washington DC. Using predictive automated behavior monitoring and cohesion assessment software combined with location monitoring, geofencing and sophisticated social behavior modeling, pattern mining and inference, they are able to recognize patterns of people’s movements and actions as being threatening. Several would-be shooters and bombers have been stopped using this equipment. 
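A circular geofence test is just a distance check against a reported GPS fix. A minimal sketch using an equirectangular approximation (fine at city scale; real systems would support arbitrary polygon fences, and all names here are illustrative):

```python
import math

def crossed_geofence(lat, lon, fence_lat, fence_lon, radius_m) -> bool:
    """True if a reported GPS fix falls inside a circular geofence.
    Converts degree offsets to meters (1 deg latitude ~ 111,320 m),
    then compares straight-line distance to the fence radius."""
    d_lat = (lat - fence_lat) * 111_320.0
    d_lon = (lon - fence_lon) * 111_320.0 * math.cos(math.radians(fence_lat))
    return math.hypot(d_lat, d_lon) <= radius_m
```

Each cell phone position broadcast would be run through checks like this against every fence of interest, flagging the owner the moment a boundary is crossed.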
To talk about the "Profile" aspect of Kontur, we must first talk about why or how it is possible. It became possible only when the Feds were able to create very, very large databases of information and still make effective use of that data. It took NSA 35 years of computer use to get to the point of using a terabyte (10^12 bytes) of data. That was back in 1990, using ferrite core memory. It took 10 more years to get to a petabyte (10^15) of storage; that was in early 2001, using 14-inch videodisks and RAID banks of hard drives. It took four more years to create and make use of an exabyte (10^18) of storage. With the advent of quantum memory using gradient echo and EIT (electromagnetically induced transparency), the NSA computers now have the capacity to store and rapidly search a yottabyte (10^24) of data and expect to be able to raise that to 1,000 yottabytes within two years. To search this much data, they use a bank of Cray XT Jaguar computers that do nothing but read and write to and from the QMEM (quantum memory). The look-ahead and read-ahead capabilities are possible because of the massively parallel processing of a bank of other Crays that gives an effective speed of about 270 petaflops. Speeds are increasing at NSA at a rate of about 1 petaflop every two to four weeks. This kind of speed is necessary for things like pattern recognition and making use of the massive profile database of Kontur. In late 2006, it was decided that NSA and the rest of the intelligence and right-wing government agencies would stop this idea of real-time monitoring and begin developing a historical record of what everyone does. Being able to search historical data was seen as essential for back-tracking a person's movements to find out what he has been doing and whom he has been seeing or talking with. This was so that no one would ever again accuse them of not "connecting the dots". But that means what EVERYONE does!
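The storage units cited above each scale by a factor of a thousand; a tiny lookup makes the jump from terabytes to yottabytes concrete:

```python
# Decimal exponents for the storage units named in the text.
PREFIX_EXP = {"terabyte": 12, "petabyte": 15, "exabyte": 18, "yottabyte": 24}

def bytes_in(unit: str) -> int:
    """Decimal size of a storage unit in bytes."""
    return 10 ** PREFIX_EXP[unit]
```

So a yottabyte is `bytes_in("yottabyte") // bytes_in("terabyte")` = 10^12 (a trillion) of the terabytes that took NSA its first 35 years to reach.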
As you have seen from the above description, they already can track your movements and all your commercial activities as well as what you say on phones or in emails, what you buy and what you watch on TV or listen to on the radio. The difference now is that they save this data in a profile about you. All of that and more. Using geofencing, they have marked out millions of locations around the world, including obvious things like stores that sell pornography, guns, chemicals or lab equipment. Geofenced locations include churches and organizations like Greenpeace and Amnesty International. They have moving geofences around people they are tracking, like terrorists, but also political opponents, left-wing radio and TV personalities and leaders of social movements and churches. If you enter their personal space, close enough to talk, then you are flagged, and then you are geofenced and tracked. If your income level is low and you travel to the rich side of town, you are flagged. If you are rich and travel to the poor side of town, you are flagged. If you buy a gun or ammo and cross the wrong geofence, you will be followed. The pattern recognition of Kontur might match something you said in an email with something you bought and somewhere you drove in your car to determine you are a threat. Kontur is watching and recording your entire life. There is only one limitation to the system right now: the availability of soldiers or "men in black" to follow up on people that have been flagged is limited, so they are prioritizing whom they act upon. You are still flagged and recorded, but they are only acting on the ones that are judged to be a serious threat now. It is only a matter of time before they can find a way to reach out to anyone they want and curb or destroy them. It might come in the form of a government-mandated electronic tag that is inserted under the skin or implanted at birth.
They have been testing these devices on animals under the guise of tracking and identifying lost pets. They have tried twice to introduce them to everyone in the military. They have also tried to justify putting them into kids for "safety". They are still pushing them for use in medical monitoring. Perhaps this will take the form of a nanobot. If they are successful in getting the population to accept these devices and they then determine you are a risk, they simply deactivate you by remotely popping open a poison capsule using a radio signal. Such a device might be totally passive in a person who is not a threat, but it can be programmed to be lethal, to inhibit the motor-neuron system, or to otherwise disable a person deemed high risk. Watch out for things like this. It's the next thing they will do. You can count on it.

Plato: Unlimited Energy – Here Already!

If you are a reader of my blog, you know about Plato. It is a software program I have been working on since the late 1980's that does what I call "concept searches". The complete description of Plato is in another story on this blog, but the short of it is that it will do web searches for complex, interlinked, related or supporting data that form the basis for a conceptual idea. I developed Plato using a variety of techniques including natural language queries, thesaurus lookups, pattern recognition, morphology, logic and artificial intelligence. It is able to accept complex natural language questions, search for real or possible solutions and present the results in a form that logically justifies and validates the solution. Its real strength is that it can find solutions or possibilities that don't yet exist or have not yet been discovered. I could go on and on about all the wild and weird stuff I have used Plato for, but this story is about a recent search for an alternative energy source... and Plato found one.

As a research scientist, I have done a considerable amount of R&D in various fields of energy production and alternate energy sources. Since my retirement, I have been busy doing other things and have not kept up with the latest, so I decided to let Plato do a search for me to find out the latest state-of-the-art in alternate energy and the status of fusion power. What Plato came back with is a huge list of references in support of a source of energy that is being used by the government but is being withheld from the public. This energy source is technically complex but is far more powerful than anything in use today short of the largest nuclear power plants. I have read over most of what Plato found and am convinced that this source of power exists and is being used, but is being actively suppressed by our government. Here is the truth:

On January 25, 1999, a rogue physicist researcher at the University of Texas named Carl Collins claimed to have achieved stimulated decay of nuclear isomers using a second-hand dental x-ray machine. As early as 1988, Collins was saying that this was possible, but it took 11 years to get the funding and lab work to do it. It was later confirmed by several labs, including that of Dr. Belic at the Stuttgart Nuclear Physics Group. Collins’ results were published in the peer-reviewed journal Physical Review Letters. The science of this is complex, but what it amounts to is a kind of cold fusion. Nuclear isomers are atoms with a metastable nucleus. That means that when they are created in certain radioactive materials, the protons and neutrons (nucleons) in the nucleus of the atom are bonded or pooled together in what is called an excited state.

An analogy would be stacking balls into a pyramid. It took energy to get them into that stacked state, but what Collins found is that it takes relatively little energy to destabilize the stack and release lots of energy. Hafnium and Tantalum are two naturally occurring elements with metastable isomers that can be triggered to release their energy with relatively little external excitation.

Hafnium, for instance, releases a photon with an energy of 75 keV (75,000 electron volts), and one gram produces 1,330 megajoules of energy, the equivalent of about 700 pounds of TNT. A five-pound ball is said to be able to create a two-kiloton blast, the equivalent of 4,000,000 pounds of TNT. A special type of Hafnium called Hf-178-m2 is capable of producing energy in the exawatt range, that is, 10,000,000,000,000,000,000 (10^18) watts! This is far more than all the energy created by all the nuclear plants in the US. As a comparison, the most powerful machine in the world today is the Large Hadron Collider (LHC) near Geneva, which cost more than $10 billion and can produce a beam with a power estimated at 10 trillion watts (10^12), but only in pulses lasting about 30 nanoseconds (billionths of a second).
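
The TNT equivalences quoted above can be sanity-checked against the standard convention that one ton of TNT equals 4.184 gigajoules. A quick back-of-the-envelope, taking the quoted 1,330 MJ/gram figure at face value, shows the per-gram claim roughly holds while the five-pound/two-kiloton claim comes out a few times lower:

```python
# Rough check of the TNT-equivalence figures quoted above.
MJ_PER_GRAM = 1330.0          # claimed hafnium isomer yield, MJ per gram
J_PER_TON_TNT = 4.184e9       # standard convention: 1 ton TNT = 4.184 GJ
J_PER_LB_TNT = J_PER_TON_TNT / 2000.0

one_gram_lbs_tnt = MJ_PER_GRAM * 1e6 / J_PER_LB_TNT
print(round(one_gram_lbs_tnt))     # ~636 lb, close to the "about 700 pounds" quoted

five_lb_grams = 5 * 453.6          # grams in a five-pound ball
five_lb_kilotons = five_lb_grams * MJ_PER_GRAM * 1e6 / (J_PER_TON_TNT * 1000)
print(round(five_lb_kilotons, 2))  # ~0.72 kt, so the 2-kt figure looks optimistic
```

The 4,000,000 pounds of TNT per two kilotons is exact by definition (2,000 tons of 2,000 pounds each); it is the jump from the per-gram yield to two kilotons that does not quite follow.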

Imagine being able to create a million (10^6) times that power level and sustain it indefinitely. We don’t actually have a power grid capable of carrying that, but because we are talking about a generator that might be the size of a small house, this technology could be inexpensively replicated all over the US or the world to deliver as much power as needed.

These are, of course, calculated estimates based on extrapolation of Collins’ initial work and that of the follow-on experiments, but not one scientist has put forth a single peer-reviewed paper that disputes these estimates or the viability of the experiment itself. It is also obvious that the mechanism of excitation would have to be larger than a dental x-ray machine in order to get 10^18 watts out. In fact, when Brookhaven National Lab conducted its Triggering Isomer Proof (TRIP) test, it used its National Synchrotron Light Source (NSLS), a powerful x-ray source, as the excitation.

Obviously, this was met with a lot of critical reviews and open hostility from the world of physics. The “Cold Fusion” fiasco was still fresh in everyone’s minds. It was in 1989 that Pons and Fleischmann claimed to have created fusion in a lab at temperatures well below what was then thought to be necessary. It took just months to prove them wrong, and the whole idea of cold fusion and unlimited energy was placed right next to astrology, perpetual motion and pet rocks.

Now Collins was claiming that he had done it again: a tiny amount of energy in and a lot of energy out. He was not reporting the microscopic “indications of excess energy” that Pons and Fleischmann claimed. Collins was saying he got large amounts of excess energy (more energy out than went in), many orders of magnitude above what Pons and Fleischmann claimed.

Dozens of labs across the world began to try to verify or duplicate his results. The biggest problem was getting hold of the Hafnium needed for the experiments: it is expensive and hard to come by, so mostly only government-sponsored studies could afford it. Some labs confirmed his results, some had mixed results, and some discredited him.

In the US, DARPA was very interested because this had the potential to be a serious weapon, one that would give us a nuclear-bomb-scale explosion but would not violate the worldwide ban on nuclear weapons. The US Navy was very interested because it had the potential to be not only a warhead but also a new and better power source for its nuclear-powered fleet of ships and subs.

By 2004, the controversy over whether it was viable was still raging, so DARPA, which had funded some of the labs that had gotten contradictory results, decided to hold a final test. They called it the TRiggering Isomer Proof (TRIP) test, and it was funded to be done at Brookhaven National Lab.

This had created such news interest that everyone wanted to hear the outcome. NASA, the Navy, the Dept. of Energy (DOE), the Dept. of Defense (DoD), NRL, the Defense Threat Reduction Agency, the State Department, the Defense Intelligence Agency (DIA), Argonne Labs, the Arms Control and Disarmament Agency (ACDA), Los Alamos, MIT Radiation Lab, MITRE, JASON, and dozens of others were standing in line to hear the results of this test being conducted by DARPA.

So what happened in the test? No one knows. The test was conducted, and DARPA put a lockdown on every scrap of news about the results. In fact, since that test, they have shut down all other government-funded contracts in civilian labs on isomer triggering. The only break in that cover has been a statement from the senior-most DOE scientist involved, Dr. Ehsan Khan:

“TRIP had been so successful that an independent evaluation board has recommended further research….with only the most seasoned and outstanding individuals allowed to be engaged”.

There has been no peer review of the TRIP report. It has been seen by a select group of scientists but no one else has leaked anything about it. What is even more astounding is that none of those many other government agencies and organizations have raised the issue. In fact, any serious inquiry into the status of isomer triggering research is met with closed doors, misdirection or outright hostility. The government has pushed it almost entirely behind the black curtain of black projects. Everything related to this subject is now either classified Top Secret or is openly and outwardly discredited and denounced as nonsense.

This has not, however, stopped other nations or other civilian labs and companies from looking into it. But even here, they cannot openly pursue isomer triggering or cold fusion. Now research into such subjects is called “low-energy nuclear reactions” (LENR) or “chemically assisted nuclear reactions” (CANR). Success in these experiments is measured by the creation of “excess heat”, meaning more (excess) energy out than was put in. Plato found that some of the people and labs that have achieved this level of success include:

Lab or company: Researcher

University of Osaka, Japan: Arata

ENEA, Rome Frascati, Italy: Vittorio Violante

Hokkaido University, Japan: Mizuno

Energetic Technology, LLC, Omer, Israel: Shaoul Lesin

Portland State University, USA: Dash

Jet Thermal Products, Inc., USA: Swartz

SRI, USA: McKubre

Lattice Energy, Inc., USA: E. Storms

In addition, the British and Russians have both published papers and intelligence reports indicate they may both be working on a TRIP bomb. The British have a group called the Atomic Weapons Establishment (AWE) that has developed a technique called Nuclear Excitation by Electron Transition and are actively seeking production solutions. The Russians may have created an entire isolated research center just for studying TRIP for both weapons and energy sources.

In addition to the obvious use of such a power source to allow us to wean off of fossil fuels, there are lots of other motivations for seeking a high density, low cost power source: global warming, desalination, robotics, mass transportation, long distance air travel, space exploration, etc.

These applications are normal, common-sense uses, but what application might motivate our government to suppress news coverage of further research and to wage a disinformation and discrediting campaign against anyone who works on this subject? One obvious answer is its potential as a weapon, but since that is also well known and common sense, there must be some other reason the government does not want this pursued. What that is will not be found by searching for it directly. If it is a black project, there will be no internet news reports on it, but there might be a combined group of indicators and seemingly disconnected facts that form a pattern when viewed in light of some common motive or cause. Doing that kind of searching is precisely what Plato was designed to do.

What my Plato program discovered is that there are a number of unexplained events and sightings that have a common thread. These events and sightings are all at the fringes of science, or are outright science fiction if you go by current common knowledge or listen to the government denounce and discredit the observers. Things like UFOs that move fast but make no noise, space vehicles that can approach the speed of light, underwater vessels reported to travel faster than the fastest surface ships, and beam weapons (light, RF, rail) that can destroy objects as far away as the moon. What they have in common is that if you posit a compact source of extremely high-density, high-powered energy, these fantastic sightings suddenly become quite plausible.

A power source that can create 10 TeV (tera-electron Volts) is well within the realm of possibility for an isomer-triggered device and is powerful enough to create and/or control gravitons and the Higgs Boson and the Higgs field. See my other blog story on travel faster than light and on dark energy and you will see that if you have enough power, you can manipulate the most fundamental particles and forces of nature to include gravity, mass and even time.

If you can control that much power, you can create particle beam weapons, lasers and rail guns that can penetrate anything – even miles of earth or ocean. If you can create enough energy – about 15 TeV, you can create a negative graviton – essentially negative gravity – which can be used to move an aircraft with no sounds at supersonic speeds. It will also allow you to break all the rules of normal aerodynamics and create aircraft that are very large, in odd shapes (like triangles and arcs) and still be able to travel slowly. Collins estimated that a full-scale isomer triggered generator could generate power in the 1,000 TeV range when combined with the proper magnetic infrastructure of a Collider like the LHC.

Plato found evidence that this is exactly what is happening. The possibility that it is coincidence that all of these sightings have this one single thread in common is beyond logic or probability. The coincidence that these sightings and events have occurred by the hundreds in just the past few years, since the DARPA TRIP test, is way beyond chance. It is clear that DARPA put the wraps on this technology because of its potential as a weapon and as an unlimited high-density power source.

The fact that this has been kept hushed up is mostly due to the impact it would have on the economies of the world if we were suddenly given unlimited power not based on fossil fuels, coal or hydroelectric power. Imagine the instant availability of all the electricity you could use at next to nothing in cost. Markets would collapse in the wake of drops in everything related to oil, gas and coal. That is not a desirable outcome when we are already in such a bad financial recession.

Plato comes up with some wild ideas sometimes, and I often check them out to see if they really are true. I was given perhaps 75 references, of which I have listed only a few in this article, but enough that you can see that they are all there and true. I encourage you to search for all the key words, people and labs listed here. Prove this to yourself: it’s all true.

NASA Astrophysics Data System (ADS), Physical Review Letters, Vol. 99, Issue 17, id. 172502, titled “Isomer Triggering via Nuclear Excitation by Electron Capture (NEEC)”, reported confirmed low-energy triggering with high-energy yields.

Brookhaven National Lab conducted a Triggering Isomer Proof (TRIP) test using its National Synchrotron Light Source (NSLS), in which it reported: “A successfully independent confirmation of this valuable scientific achievement has been made … and presented in a Sandia Report (SAND2007-2690, January 2008).” DARPA funded this work but pulled the funding right after the test.

FCC Warning: Anomalous Content


Compartment Coded: Megaphone




Enforcement Bureau

Content Enforcement Division (CED)






FCC Violation Notice for the Executive Office of the President



Continuous Custody Courier Delivery

April 20, 2009

Subject: Commercial Broadcast Radio Stations KARB, KARV, KBBR, KCRB, et al

Commercial Radio License CB8I: Warning Notice, Case #EB-2008-2997-RB

Dear Sir:

On August 1, 2007, the FCC/CED discovered a Part 15 violation regarding inappropriate content within the assigned bands of operation of 173 commercial AM and FM broadcast radio stations located in every state. The inappropriate content appears to be an extremely sophisticated subliminal message that is undetectable by routine spectrum analysis because it is dynamically created by the beat frequencies of the broadcast. This means that any spectral analysis of the broadcast content will show no embedded or side-band signals; however, the audio modulation of the received broadcast at the receiver’s speaker creates an artificial but highly effective analog influence upon any listener.

This signal appears as a result of binaural beat tones created inside the superior olivary nucleus of the brain stem. Preliminary research has shown that these temporal modulations are creating multiple brainwave synchronizations below the conscious perception threshold, and they are having measurable effects (see below) on listeners in each of the radio broadcast regions. The signal is not a voice, per se; rather, it has a direct and immediate influence on the inferior colliculus neurons of the brain. The effect of this influence has been measured as activation of the primary sensorimotor and cingulate areas, the bilateral opercular premotor areas, bilateral SII, the ventral prefrontal cortex and, subcortically, the anterior insula, putamen and thalamus. These and other affected areas of the brain control motor reflexes, hunger, vision, decision-making, body temperature, temperament, smell and memory.

Collaboration with NSA and NRL has provided us with a complete analysis of the signal, but this has been of only limited help in determining its cause and its effect on the listening public. At the suggestion of Dr. Wayne Sponson at NSA, the FCC/CED contacted the Sensory Exploitation Division (SED) of NIH at Fort Detrick, Maryland. We were delayed four weeks in order to process clearances for two members of the FCC/CED (myself and Dr. Edward Willingsley).

In late February, we were able to obtain the following information. The NIH/SED has been working with binaural beats to explore the phenomenon called the Frequency Following Response, or entrainment. They have been highly successful in this field of study; however, their efforts have focused on the creation of infrasound-induced beat frequencies to entrain brain waves. This has been shown to affect the delta, theta, alpha, beta and gamma brainwaves. By contrast, the contaminated signals from these radio stations are created using sounds well above the infrasound range and well within the range of normal music listening.
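
The beat mechanism invoked here rests on a simple trigonometric identity: the sum of two nearby tones is a carrier at their average frequency whose amplitude envelope rises and falls at their difference frequency. A minimal numeric check (the two tone frequencies are arbitrary examples):

```python
import math

f1, f2 = 440.0, 446.0   # two arbitrary nearby tones; perceived beat is 6 Hz
for i in range(1000):
    t = i / 44100.0     # sample times at a typical audio rate
    direct = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    # identity: sin a + sin b = 2 * sin((a + b) / 2) * cos((a - b) / 2)
    carrier = math.sin(2 * math.pi * (f1 + f2) / 2 * t)
    envelope = 2 * math.cos(2 * math.pi * (f1 - f2) / 2 * t)
    assert abs(direct - envelope * carrier) < 1e-9
print("beat frequency:", abs(f2 - f1), "Hz")
```

The envelope term completes a cycle at half the difference frequency, but because loudness peaks twice per cycle, the audible beat rate is the full difference, |f2 - f1|.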

Dr. Alan Cupfer from NIH’s Neuroscience Research confirmed that entrainment using binaural beat stimulation (or using light) has been shown to be quite effective to affect dream states, focus, anxiety, addiction, attention, relaxation, learning, mood and performance. He also admitted that by first achieving brain synchronization and then applying entrainment to effect constructive or destructive interference with brain frequencies, it is possible to significantly enhance or suppress these brain functions.

NSA computers discovered these signals during their routine monitoring of the broad frequency spectrum of all transmissions. The computers have been recording these signals as an automatic function of finding an anomalous signal, however, because no specific threatening content was recognized by the computers, it was not flagged to any human operators or analysts at NSA. This is a procedural error that has been corrected.

Once the FCC/CED discovered the nature of these anomalous signals in August 2008 and coordinated with NSA, NSA provided our office with archived recordings that date back to 2001 and show an increasing number of affected broadcast stations, from the first one found in California to the present 173. They appear to be increasing at a rate of about two per month. It is estimated that approximately 61 million people are currently within the broadcast coverage areas of these stations.

In our two-month exploration of what, if any, impact or objective these broadcasts are having on the listening audience, we have discovered the following:

  1. The subliminal signals appear to be constantly varying at each station and between stations, even when the same music or other recordings are being played. It appears that the anomalous signals are being injected into the broadcast systems at each station’s transmission facility from an exterior source, but the means and mechanism of this injection have not yet been determined. Until they are, we cannot stop it.
  2. The anomalous signals can be distinguished from non-contaminated signals by comparing the signal before and after adaptive filtering. An automated sweep variable filter, using recursive least squares (RLS) and least mean squares (LMS) algorithms, seeks a zero cost function (error signal) against a reference baseline. When the computed correction factor is non-zero, the NSA computers judge the signal to be contaminated and record it. These finite impulse response (FIR) filter structures have proven effective at detecting changes to the baseline reference as small as one cycle at one gigahertz over a period of 24 hours.
  3. Despite being able to detect and isolate the anomalous signal, the combined efforts of NSA, FCC, NIH and NRL have been unable to decode the signal with respect to intelligent content. However, Dr. Tanya Huber and Joel Shiv, two researchers from the National Institute of Standards and Technology (NIST), suggested that by examining the non-conscious behavior of the listeners against a baseline, a correlation between signal content and responses might be found. These two researchers have been studying the psychological manipulation of consumer judgments, behavior and motivation since 2004.
  4. The first macro-survey of listener behavior in each of the broadcast areas yielded no anomalous behavior, but when micro-communities and community activities were individually examined, some conspicuous changes were noted.
  5. In Mesquite, NV, a change in the recorded anomalous signal coincided with a controversial voter referendum on the long-term problems with the Oasis Golf Club. This referendum was notable because it unexpectedly and nearly unanimously reversed a voter survey taken the previous day.
  6. La Pine, OR, a small farm community with a low-power publicly owned station, experienced an uncommonly large increase in the sale of over-the-counter non-steroidal anti-inflammatory agents/analgesics (NSAIAs) such as aspirin, naproxen, Tylenol and ibuprofen. The sales appear to have been initially driven by a three-week surge in demand for the analgesic qualities of these drugs; following a week-long lull, demand peaked again for three weeks for their antipyretic effects. This was validated by a large increase in the sales of thermometers and by examination reports from doctor visits. What is unusual is that this appears to have affected nearly every person in the broadcast area of this small station. The only ones not affected were the deaf.
  7. Across the cities and towns surveyed, a surge in consumer activity associated with a variety of drugs and foods was discovered in more than 70 communities over the period analyzed. In each instance, the surge in sales had no prior precedent, lasted for one or two weeks and then returned to normal without recurrence.
  8. By contrast, a corresponding decrease in sales of specific drugs, foods and drinks was discovered in 67 communities, some of which were also involved in the above-mentioned sales increases. These decreases included a drop to nearly zero in sales of all drinks containing any form of alcohol or milk. The decreases were especially significant because doctors and local advertisers actively opposed them without effect.

Dozens of other changes in consumer behavior, voter response, mood swings and entertainment sales were discovered but no specific patterns of products, locations, response or demographics were discovered.


The findings of the FCC/CED indicate that a significant and growing population has been and is being manipulated and controlled through radio broadcasts. The degree of control exerted is nothing short of extraordinary and without precedent. The technology involved has so far eluded detection, and the source and objectives of these anomalous signals have not yet been determined.

It is the speculation of the FCC/CED and of the NIH/SED that this has all the signs of a person or organization actively testing its capabilities on a live and diverse group of test subjects. The tests appear random but are systematically exploring the degree of influence possible and the parts of the brain that can be exploited by these signals. What cannot be determined is the final intent or objective, or whether it has already been accomplished or is still ongoing.

Recommendations:

It is recommended that the general public NOT be informed of this situation until we are able to define it further.

We recommend that deaf analysts be assigned to monitor on-site listening stations in all of the largest radio coverage areas to maintain observation of changes to behavior. In other areas, automated monitoring can be used to isolate the signals before sending encrypted files to NSA for analysis.

We recommend the use of FBI and CIA to examine any commonality between these stations.

We recommend that NIST and NIH continue their survey of behavior changes in all of the affected communities.

We recommend that NRL and FCC collaborate on the creation of a selective RF counter-measure to the anomalous signals.

We recommend that a cabinet-level task force be created within Homeland Security to assist and coordinate all of the above activities.


Dr. W. Riley Hollingswood Ph.D.

FCC Director, Content Enforcement Division

April 21, 2009 Update:

Following the creation and coordination of the above report, it was reported to this office by NSA that the anomalous signals have been detected in both national broadcast and cable television signals.


Government Secrets #2 They Control You!!

They Control You!!

After reading Government Secrets #1, you know that I had access to a lot of intelligence over a long career and had a lot of insight into our government’s actions on the international political stage. What I observed first-hand, and in my historical research, is that repeatedly over decades the US government has gone to great effort to create wars. You will never hear a military person admit this; most of them are not part of the decision process that commits us to war, and because they believe we are always right, and because they will go to prison if they disobey, they execute the directions to go to war with great gusto.

We have a very warped view of our own history. In every war we are the heroes: we fought on the side of right and we did it honorably and with great integrity. Well, that is what the history books would have you believe. Did you ever learn that we issued orders to take no prisoners at the battle of Iwo Jima? Thousands of Japanese were shot with their hands raised in surrender. To be fair, some of them would feign surrender and then pop a grenade, but you won’t see this in our history books.

Did you know that our attack strategy in Europe was to destroy the civilian population? The worst example occurred on the evening of February 13, 1945, when Allied bombers and fighters attacked a defenseless German city, one of the greatest cultural centers of northern Europe. Within less than 14 hours, not only was it reduced to flaming ruins, but an estimated one-third of its inhabitants, more than half a million people, had perished in what was the worst single-event massacre of all time. More people died there in the firestorm than died in Hiroshima and Nagasaki combined.

Dresden, known as the Florence of the North, was a hospital city for wounded soldiers. Not one military unit, not one anti-aircraft battery, was deployed in the city. Together with the 600,000 refugees from Breslau, Dresden held nearly 1.2 million people. More than 700,000 phosphorus bombs were dropped on those 1.2 million people: more than one bomb for every two people. The temperature in the center of the city reached 1,600 degrees centigrade (nearly 3,000 degrees Fahrenheit). More than 260,000 bodies and residues of bodies were counted, but those who perished in the center of the city can’t be counted because their bodies were vaporized or were never recovered from the hundreds of underground shelters. Approximately 500,000 children, women, elderly people and wounded soldiers were slaughtered in one night.

Following the bomber attack, U.S. Mustangs appeared low over the city, strafing anything that moved, including a column of rescue vehicles rushing to the city to evacuate survivors. One assault was aimed at the banks of the Elbe River, where refugees had huddled during the night. The low-flying Mustangs machine-gunned those all along the river, as well as thousands who were escaping the city in large columns of old men, women and children streaming out of the city.

Did you ever read that in your history books? Did you know that we deliberately avoided all earlier attacks on Hiroshima and Nagasaki so as to ensure that the civilian population would not flee those cities?

This sparked my interest in “my war”, Viet Nam, and I began to study it in detail. I read about its start and how the famous Tonkin Gulf Incident was a complete ruse to let Lyndon Johnson boost troop levels for political gain and out of a personal fear that America might be seen as weak. He had great faith in our might and our ability to achieve a quick and decisive victory, so he trumped up a fake excuse to get the famous Tonkin Gulf Resolution passed, giving him more power to send troops. The whole war had been just a political whim by a misguided politician, bolstered by the military-industrial complex that profited from massive arms sales and also happened to be the largest contributor to political campaigns. More than 50,000 US lives and countless Vietnamese lives later, we left Viet Nam having had almost no effect on the political outcome of the initial civil war to reunite the North and the South under communism, except that there were a lot fewer people to do it.

Even our basis for most of the cold war was largely fake. For instance, I found solid evidence that as early as the 1960’s there was a massive campaign to create a false missile-gap mentality in order to funnel massive money into the military.

Look up Operation Paperclip: it actually gave us a huge advantage in missile technology, so the whole basis for the cold war, from before the Cuban Missile Crisis to the present, rests on a lie. Despite having the largest nuclear warheads, Russia’s missiles were known to be so poorly guided that an ICBM had only a 20% probability of hitting within the effective range of its warhead; that meant it would be expected to land within a radius of +/- 30 miles of its target. Our missiles, by contrast, were rated at less than 1,000 feet. In every crisis involving Russia in which we refused to back down, the Russians gave in because they knew they did not have a chance in a nuclear exchange with the US. There was never any real missile gap nor any real threat to our world from communism. It was a scapegoat for all our mistakes and expenditures.

Did you know about the testing of bio-weapons, nuke weapons and designer drugs on our own US military? Do you know the truth about the start of Viet Nam? How about Angola, Nicaragua, the Congo, Grenada, Guatemala, Panama, El Salvador, Iran, Iraq, Israel, Argentina and dozens of others? Do you know the real story of the USS Liberty? The list is huge of what is not fully known or understood by the US public. I can guarantee that what you think happened, what is in the history books and the press is NOT what really happened.

Here’s just one example of how the news is not the news as it happened, but as our government wants us to hear it. Britain and Argentina went to war over the Falkland Islands in 1982. One incident we had a lot of intelligence about was the sinking of several British warships. One of these ships was hit and sunk by an Exocet air-to-surface missile despite the use of extensive electronic countermeasures. Or so it was reported in the news.

Because of my access to intelligence reports, I found out that the British use of electronic countermeasures was nearly flawless in diverting or confusing these missiles. The skipper of the HMS Sheffield, in the middle of a battle, ordered the electronic countermeasures equipment to be shut off because he could not get a message to and from Britain with it on. As soon as his equipment was off, Argentine aircraft (Super Etendards) launched the Exocet.

OK, this was a tragic screw-up by a British officer, but what our military planners and politicians did with it was the real tragedy. The bit about shutting off the electronic countermeasures equipment was deleted from all of the news reports, and only the effectiveness of the Exocet was allowed to be published by the US press. The Navy and the Air Force both used this event to create the illusion of an anti-missile defense gap in the minds of the public and politicians, and to justify the purchase of massive new defensive systems and ships at a cost of billions of dollars. All based on a false report.

In fact, an objective look at how we have been playing an aggressive game of manifest destiny with the world for the past 150 years would make you wonder how we can have any pride in our nation. From the enslavement of millions of blacks to the genocide of the American Indian to the forceful imposition of our form of government on dozens of sovereign nations, we have been playing the role of a worldwide dictator for decades. It has all been a very rude awakening for me.

The military-industrial complex that President Eisenhower warned us about is real, but latter-day analysts now call it the “military-industrial-congressional complex”. It is Congress and some of the Presidents we have had that form the power side of the triangle that consists of power, money and control.

The money buys the power because we have the best government in the world that is for sale on a daily basis, and that sale is so institutionalized that it is accepted as a way of doing routine business. The bribing agents are called lobbyists, but there is little doubt that when they visit a congressman to influence his vote, they are clearly and openly bribing him with money or with votes. The congressmen, in return, vote to give tax money to the companies that the lobbyists represent. Or perhaps they will vote to allow those companies to retain their status, earnings or advantages even when that comes at the cost of damage to the environment, to other people or to other nations.

The control comes in the form of propaganda to sway and manipulate the masses; the military might to exert control over our enemies and our allies and the control of the workers and people that empower the congressmen – thus making the interlocking triangle complete.

What is not well known is a basic psychological mechanism that the military-industrial-congressional complex employs that few people understand or realize. Historical Sociologists (people that study how societies think over time and history) have discovered that every successful society in the world and over all of history, has had a scapegoat group of people or country or culture on which to blame all their problems.

Scapegoating is a hostile social-psychological discrediting routine by which people move blame and responsibility away from themselves and toward a target person or group. It is also a practice by which angry feelings and feelings of hostility may be projected, via inappropriate accusation, toward others. The target feels wrongly persecuted and receives misplaced vilification, blame and criticism; he is likely to suffer rejection from those whom the perpetrator seeks to influence. Scapegoating has a wide range of focus: from “approved” enemies of very large groups of people down to the scapegoating of individuals by other individuals. Distortion is always a feature.

In scapegoating, feelings of guilt, aggression, blame and suffering are transferred away from a person or group so as to fulfill an unconscious drive to resolve or avoid such bad feelings. This is done by the displacement of responsibility and blame to another that serves as a target for blame both for the scapegoater and his supporters.

Primary examples of this include 1930s Germany, in which Hitler used a variety of scapegoats to offset the German guilt and shame of World War I. He eventually chose the Jews, and the entire population of Germany readily accepted them as the evil cause of all their problems. The US did this in the South for more than a century after the Civil War by blaming everything on the black population. And this is still true today for most successful countries: the Japanese hate the Koreans, the Arabs hate the Jews, in the southwest of the US the Mexicans are the targets while in the southeast it is still the blacks, the Turks hate the Kurds…and so it goes for nearly every country in the world and for all of history.

In some cases the scapegoat might be one religious belief blaming another as in the Muslims blaming the Jews or the Catholics blaming the Protestants. These kinds of scapegoats can extend beyond national boundaries but often are confined to regional areas like the Middle East or Central Europe. Finally, there are the political and ideological scapegoats. For many years, the US has pitted conservatives against liberals and Democrats against Republicans. This often has the effect of stopping progress because each side blames the other for a lack of progress and then opposes any positive steps that might favor the other side or give them the credit for the progress. Unfortunately, this scapegoat blame-game ends up being the essence of the struggles for power and control.

What is not well understood or appreciated is that our government is very well versed in this scapegoating and blame-game as a means to avoid accountability and to confuse the objectives. By creating an enemy that we can blame all our insecurities on – like we did with communism in the cold war – we can justify almost any expense, any sacrifice demanded of the public. If you question or oppose the decisions, then you are branded a communist sympathizer and are ostracized by society. Joseph McCarthy is the worst example of this but it exists today when we say someone is not patriotic enough if they dare to question a funding allocation for Iraq or for a new weapon system.

We, the public, are being manipulated by a powerful and highly effective psychological mechanism that is so well refined and developed that both the Democratic and Republican parties have an active but highly secretive staff composed of experts in social-psychological propaganda techniques that include, among others, scapegoating. In the Democratic Party this office is called the Committee for Public Outreach. In the Republican Party, the staff is called Specialized Public Relations. Even the names they choose make use of misdirection and reframing. Right now, the Democratic Party has the better group of experts, partly because it raided the staff of the Republican office of Specialized Public Relations back in 1996 by offering them huge salary increases. By paying them half a million dollars per year plus bonuses that can reach an additional $50 million, they have secured the best propaganda minds in the world.

In both cases, the staffs are relatively unknown and work in obscure private offices located away from the main congressional buildings. Their reports are circulated as quietly and with as low a profile as possible, and only to the most senior party members. The reports begin with clearly defined objectives of diverting public attention, countering fact-based reports or justifying some political action or inaction, but as they work their way through the system of reviewers and writers, the objective remains the same while the method of delivery gets altered so that the intent is not at all obvious. It is here that the experts in psychology and social science tweak the wording or events to manipulate the public, allies or voters.

The bottom line is that the federal government of the US has a long and verifiable history of lying, and the lies that have been discovered are perhaps 5% of the lies that have emanated from the government. If you care to look, you will find that a great deal of what you think you know about our nation’s history, our political motivations and accomplishments, and our current motives and justifications is not at all what you think it is. But I warn you – don’t begin this exploration unless you are willing to have your view of your country, and even yourself, seriously shaken up. And if you don’t want to see the truth, then at least be open-minded enough to listen to what will be declared the radical views that oppose the popular political positions of the day.

Nanobots Contamination of over-the-counter (OTC) Drugs


Topics on this Page

Update on FDA’s Investigation

FDA’s Executive Office Warnings/Advisories

Introduction

September 12, 2008: In light of recent evidence from the National Security Agency (NSA), concerning over-the-counter (OTC) Drugs contaminated with nanobots, the FDA has issued a Health Information Advisory to proactively reassure the Office of the President that there is no known health threat from contaminated OTC Drugs manufactured by companies that have met the requirements to sell such products in the United States. Nanobot contamination, if present, poses no apparent risk to health, even to children; however, there may be a risk to privacy.

The nanobots were discovered by NSA because they appear to be activated by an external radio frequency (RF) signal and, in response, emanate a coded signal. They were found to be less than 1 centimeter long and apparently contain a passive RFID device in addition to a rudimentary mechanism for sensing and memory retention. So far, neither NSA nor FDA has been able to decipher the coded signal. Although this is considerably smaller than the Verichip developed by Kevin Warwick, it is well within current technology.

These nanites have been found embedded in the center of OTC drugs that come in 325 mg and larger solid pill form. Contaminated pills range from a low of 1% to a high of 3% of all pills sampled. This is an unusually high level, but the method by which these contaminated pills were inserted into the manufacturing processes of multiple producers has not yet been determined.

Analysis of their exact nature has been complicated by the fact that they seem to be encased with a protective coating that is also highly reactive to light. If a contaminated OTC pill is broken open and the nanite is exposed to light, it immediately disintegrates. Further studies are underway.

The FDA had no knowledge of the presence of these nanobots prior to the notification by NSA in August 2008 and has been hampered in its analysis by a near-total lack of cooperation from the NSA. With what help NSA did provide, however, we have been able to determine that in most urban centers approximately one adult in four is contaminated, with slightly greater percentages found in the larger urban centers of New York, Boston, Miami and Dallas.

For some people that take OTC drugs on a regular basis (more than 2 a week), it is possible that they might accumulate more than one nanobot in their system. This does not appear to increase or decrease the health risk to the person but does appear to alter the RF signals emanating from the RFID circuits of the nanites.

The FDA has broadened its domestic and import sampling and testing of OTC drugs from suspected sources but has been unable to define the exact source or sources. FDA has recommended that consumers not consume certain products because of possible contamination with Nanobots. A list of those products is below.

Update on FDA’s Investigation

February 19, 2009: FDA’s ongoing investigation continues to show that the domestic supply of over-the-counter (OTC) Drugs is safe and that consumers can continue using U.S.-manufactured OTC Drugs. FDA has concluded that levels of Nanobots alone are at or below 1 pill per thousand (ppt) among all OTC Drugs. This level does not raise public health concerns. FDA has updated its interim risk assessment, issued in early October, with this information:

The FDA has been collecting and analyzing samples of domestically manufactured OTC Drugs for the presence of Nanobots and Nanobot-related RF signal responses. To date, FDA tests have found extremely low levels of Nanobots in one OTC Drug sample and moderate levels of RF signal responses from concentrations of OTC drugs, such as in a commercial drug store. The benign nature of the nanobots found so far indicates that they were designed for tagging, tracking and collection of health information; they do not interact with the body or its systems and therefore pose no health risk to the public.

To date, statistical data on those individuals who have been contaminated with the nanobots has been limited, but several trends have begun to emerge. The number of people contaminated seems to be equally divided between men and women and proportionally distributed among ethnic and racial groups. The passive RFID tag is responsive to various frequencies in the high UHF and SHF range (922 MHz to 2.202 GHz) and appears to make use of the backscatter coupling method; however, a few known contaminations could not be activated with any signal source.

Studies have shown that these passive RFID tags can be activated by signals from satellites but have to be read by a receiver located within ten feet. During the testing of nanobots that were actually ingested by people, it was discovered by NSA that the cell phones of the people being tested emanated an unusual signal pattern in response to a band sweep of SHF RF signals. The cell phone activation is being further investigated.
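
The “read by a receiver located within ten feet” figure lines up with how passive backscatter tags behave: the tag harvests all of its operating power from the reader’s field, so range is set by the link budget. Here is a minimal sketch using the standard free-space (Friis) model; every number in it (reader power, antenna gains, tag sensitivity, 915 MHz carrier) is an illustrative assumption of mine, not a figure from the report.

```python
import math

C = 3e8  # speed of light, m/s

def friis_rx_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, freq_hz, dist_m):
    """Received power (dBm) over free space, per the Friis equation."""
    lam = C / freq_hz
    path_loss_db = 20 * math.log10(4 * math.pi * dist_m / lam)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - path_loss_db

def max_powerup_range_m(p_tx_dbm, g_tx_dbi, g_rx_dbi, freq_hz, tag_sens_dbm):
    """Largest distance at which the tag still harvests enough power to wake up."""
    lam = C / freq_hz
    margin_db = p_tx_dbm + g_tx_dbi + g_rx_dbi - tag_sens_dbm
    return lam / (4 * math.pi) * 10 ** (margin_db / 20)

# Illustrative numbers, not taken from the report: a 1 W (30 dBm) reader,
# 6 dBi reader antenna, 2 dBi tag antenna, 915 MHz carrier, and a tag
# that needs roughly -10 dBm to power up.
r = max_powerup_range_m(30, 6, 2, 915e6, -10)
print(f"forward-link power-up range ~ {r:.1f} m")
```

The one-way power-up range comes out to a few meters; the backscattered reply then has to make the return trip, falling off roughly as 1/d^4, which is why practical read distances for passive tags are shorter still.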

For unknown reasons, some people eliminate or pass their nanobot out of their systems relatively quickly, while other people retain the nanobots for extended periods or permanently (until surgically removed). Further studies are trying to determine what, if any, health condition is common among those who retain their nanites. In our sampling of US cities using roaming teams with sweep generators and receivers, it was discovered that in one urban center the signal emanating from the RFID tags lasted about 21.7 milliseconds longer than in any other urban center.

As of this FDA Warning, there appears to be no immediate health risk and no reason to unduly alarm the general public with a general public announcement. NSA has indicated they will separately report to the Executive Office of the President on their findings.

Transcript for FDA’s Executive Office Briefing: FDA’s Updated Interim Safety and Risk Assessment of Nanobots and its Analogues in OTC drugs for Humans

November 28, 2008

FDA’s Warnings/Advisories

Recalls Home Page

FDA Home Page | Search FDA Site | FDA A-Z Index | Contact FDA | Privacy |

FDA Website Management Staff

A few of you doubt me?!!


I have gotten a number of comments about the science of my stories. Since I spent most of my life in hard-core R&D, science is my life and the way I talk. To read my stories, you have to be willing to either accept that the science behind them is fact or go look it up yourself. You will quickly find that there is damn little, if any, fiction in my stories. I take exception to people who say the science is wrong, so I’m going to self-analyze one of the stories that I have gotten the most questions about.


In the story about the accidental weapon discovery, I described a C-130 with a multi-bladed prop – see US Patent 4171183. As I said in the story, the long, telescoping blade is still classified, so there are no public pictures of it.


The ATL (airborne tactical laser) program is being run out of the ACTD program by the DUSD(AS&C), an office within OSD. The ACTD program is where the original project was started, in cooperation with the Naval Research Lab (NRL). The original objective was to improve the speed and range of long-distance transport by aircraft. It followed research showing that if the variable pitch of the prop were extended further outward from the hub, efficiency would improve.


Since a prop is a lifting wing that lifts horizontally, it must maintain a constant angle of attack (AoA) over the entire length of the blade. AoA is the angle between the chord line of the wing and the axis of the flow of air over the blade. Since the relative speed of the prop through the air increases with distance from the hub, the blade must twist, with the pitch angle decreasing as you move out toward the tip. This was the essential secret that the Wright Brothers discovered in 1902 and is the basic difference between a screw propeller and a wing propeller.
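
The constant-AoA argument can be put in numbers: at each radius the local inflow angle is atan(V / ωr), and the pitch needed there is that inflow angle plus the desired AoA. A rough sketch of the resulting twist distribution; the forward speed, RPM and AoA below are illustrative values of mine, not C-130 data:

```python
import math

def pitch_angle_deg(v_forward_mps, rpm, radius_m, aoa_deg=4.0):
    """Blade pitch needed at a given radius to hold a constant angle of attack:
    local pitch = inflow angle atan(V / (omega * r)) + desired AoA."""
    omega = rpm * 2 * math.pi / 60  # shaft speed, rad/s
    inflow_deg = math.degrees(math.atan2(v_forward_mps, omega * radius_m))
    return inflow_deg + aoa_deg

# Hold 4 degrees of AoA at 150 m/s forward speed and 1000 RPM:
for r in (0.5, 1.0, 2.0, 4.0):
    print(f"r = {r:>3} m   pitch = {pitch_angle_deg(150, 1000, r):5.1f} deg")
```

The pitch drops steeply moving outboard from the hub, which is exactly the twist you can see on any real propeller blade.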

What was discovered in the development of vertical wind turbines is that blades as long as 50 feet but as thin as 5 inches could be made more efficient and with higher torque than conventional blades. In wind power, the added torque allows you to turn a larger generator, the energy coming from the wind passing over the blade and making it spin. In an aircraft, the engines would be spinning the blade to make it take a bigger (more efficient) bite out of the air; this would mean being able to create more thrust, or being able to operate at a higher altitude (in thinner air). Do a Google search for “Vertical Wind Turbine”. You’ll see designs like the WindSpire: 30 feet tall with blades less than 8 inches wide, yet efficient enough to produce about 2,000 kilowatt-hours per year, start generating in 8 MPH winds and survive 100 MPH gusts.
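
The swept-area power law puts rough numbers on those turbine claims: P = ½ρAv³Cp, where Cp is the fraction of the wind’s power actually captured (capped by the Betz limit at about 59%). All the dimensions below are my own rough estimates for a WindSpire-class rotor, not manufacturer data:

```python
RHO = 1.225  # air density at sea level, kg/m^3

def wind_power_watts(area_m2, wind_mps, cp=0.30):
    """Power captured from the wind: P = 0.5 * rho * A * v^3 * Cp."""
    return 0.5 * RHO * area_m2 * wind_mps ** 3 * cp

# Illustrative vertical-axis rotor roughly the WindSpire's size
# (about 9 m tall x 1.2 m diameter -> ~11 m^2 swept area; my estimate),
# in a steady 5.4 m/s (about 12 MPH) wind:
p = wind_power_watts(11.0, 5.4, cp=0.25)
print(f"~{p:.0f} W average -> ~{p * 8760 / 1000:.0f} kWh per year")
```

At a steady 12 MPH this lands in the low hundreds of watts, i.e. on the order of 2,000 kilowatt-hours per year; a figure of “2000 kilowatts” for a rotor this size only makes sense as annual energy, not instantaneous output.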


The guys at NRL took that and reversed it into an efficient propeller design for the C-130 in the hopes that it would give a similar improved performance. The carbon-fiber telescoping blade was just a natural extension of that thinking.


As to the laser beam creating a wide range of frequencies, that is also easy to explain. The Doppler Effect says that an increase in wavelength is observed when a source of electromagnetic radiation is moving away from the observer and a decrease in wavelength is observed when the source is moving toward the observer. This is the basis for the redshift used by astronomers to examine the movement of stars. It is the reason a train’s whistle has a higher pitch as it comes toward you and a lower pitch as it passes and moves away from you. This is basic high school physics.


As the laser beam was rotated, any observer in a lateral position to the aircraft would see one part of the rotating beam rotating toward them (for example, the part above the prop hub) and another part rotating away from them (in this example, the part below the prop hub). The bottom part would have a redshift to its visible light because it is moving away from the observer. The part of the beam nearest the hub moves the slowest and would have the least redshift, but further out along the prop the speed increases and the redshift grows until the Doppler shift is so great that the light shifts to frequencies below the visible spectrum. This moves the light energy into the infrared, but as the beam sweeps faster and faster, it shifts lower and lower. Since the laser beam extended for miles and points along it were traveling at speeds from a few hundred MPH to thousands of miles per second, the redshift along the beam path constantly moved down the electromagnetic spectrum past radar, TV and short-wave radio and down into the ELF range.
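
The shift itself is easy to compute with the relativistic longitudinal Doppler formula, f′ = f·√((1−β)/(1+β)) with β = v/c. Here is a sketch applying it the way the story does, treating each point on the beam as a source receding (or approaching) at its local speed; that application is the story’s assumption, the formula itself is standard physics:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def doppler_shifted_hz(f_source_hz, v_mps):
    """Relativistic longitudinal Doppler shift.
    Positive v = receding (redshift); negative v = approaching (blueshift)."""
    beta = v_mps / C
    return f_source_hz * math.sqrt((1 - beta) / (1 + beta))

green = 5.6e14  # green laser light (~535 nm), Hz
for v in (0.0, 0.1 * C, 0.5 * C, -0.5 * C):
    print(f"v = {v / C:+.1f}c  ->  {doppler_shifted_hz(green, v):.3e} Hz")
```

Note that even at half the speed of light the shift is less than a factor of two; driving visible light all the way down to the ELF range requires β vanishingly close to 1, which this formula lets you check for yourself.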


That portion of the prop above the hub was doing the same thing, but it was moving toward the observer in the lateral position and so produced a blueshift – toward higher frequencies. As the light compressed into the blue and ultraviolet range, it became invisible to the naked eye, but it was still emitting energy at higher and higher frequencies, moving into X-rays and gamma rays toward the end of the beam.


The end result of this red and blue shift of the light from the laser beam is that there was a cone of electromagnetic radiation emanating from the hub of each of the two engines (on the C-130) or the one engine on the retrofitted 707. This cone radiated out from the hub with a continuously changing frequency to the electromagnetic emissions as the cone widens out behind the aircraft. The intensity of the emissions is directly proportional to the power of the laser and the speed of the props so the highest and lowest frequencies were the most intense. These also happened to be the most destructive.


This is just one story that is firmly based in real and actual science. You have to be the judge of whether it is true or not, but I defy you to find any real flaw in the logic or science. As with all of my stories, I don’t talk about space cadet and tin-foil-hat stuff. I have 40 years of hard-core R&D experience along with four degrees in math, computer modeling, physics and engineering, so I’m not your usual science writer, but whether it is science fiction or not is up to you to decide. Just don’t make that decision because you don’t believe or understand the science – that is the part that should not be questioned. If you doubt any of it, I encourage you to look it up. It will educate you and allow me to get these very important ideas across to people.

Government Secrets #1 – Be Afraid…Be Very Afraid

I was involved in a long career of classified work for the military and then did classified work for the government after I got out of the military. Doing classified work is often misunderstood by the public. If a person has a Top Secret clearance, that does not mean they have access to all classified information. In fact, it is not uncommon for two people to both have Top Secret (TS) clearances and still not be allowed to talk to each other. It has to do with what the government calls “compartments”. You are allowed your clearance only within certain compartments or subject areas. For instance, a guy who has a TS for Navy weapons systems may not know, or be allowed to know, anything about Army weapon systems. If a compartment is very closely held – meaning that it is separately controlled even among TS-cleared people – then it is given a special but often obscure name and additional controls. For instance, for years (back in the days of Corona, but not any more) the compartments for satellite reconnaissance were called “talent-keyhole” and “byeman” and were usually restricted to people within the NRO – National Reconnaissance Office.

These code words were abbreviated with two letters, so talent-keyhole became TK and byeman became BY. As a further safeguard, it is forbidden to tell anyone the code word for your compartment; you are only allowed to tell him or her the two-letter abbreviation. And you cannot ask someone if they are cleared for any particular compartment; you have to check with a third-party security force. So if you work in a place like CIA or NRL or NSA and you want to have a meeting with someone from another department, you meet them at the security station outside your department’s office area (every department has one). When they arrive, you ask the guard if they are cleared for “TK” and “BY”. The guard looks at the visitors’ badges and checks them against a picture logbook he keeps. The pictures and codes on the badges and the logbook have to match; if they do, he then gets out another book that lists just the visitor’s numeric coded badge number and looks up his clearances. If he has TK and BY after his badge number, then you are told that he can be admitted to your area for discussions on just the TK and BY programs and subjects. In some departments, visitors are given brightly colored badges identifying them as cleared only for specific subject areas. This warns others in the department to cover their work or stop talking about other clearance areas when these visitors are nearby.

There are hundreds of these coded compartments covering all aspects of military and civilian classified topics and programs. If you are high enough or if your work involves a lot of cross-discipline work, you might have a long string of these code words after your name….as I did.

If a program has any involvement with intelligence gathering (HUMINT – human intelligence, SIGINT – signals intelligence, or IMINT – imagery intelligence), then it may get additional controls that go well beyond the usual TS background checks. For instance, you might be subjected to frequent polygraph tests or be placed in the PRP – Personnel Reliability Program. PRP is a program that constantly monitors people’s lives to see if they ever get even close to being vulnerable or easy targets for spies. In the PRP, your phone might be tapped, your checking accounts are monitored, your debt and income are watched, and your computer is hacked. This is all with the intent of making sure you never get into debt or become psychologically unstable. PRP administers a series of psychological tests that can take up to 3 days to complete every year. These tests can peer into your mind so well that the examiners can feel reasonably confident that you are mentally stable if the tests say so.

Because of my work, I had a TS clearance for more than 40 years and had a string of two-letter codes after my name that went on for three or four lines on a typewritten page. I was in the “Poly” program and in the PRP and some others that I still can’t talk about. The reason I had so many was that I was involved in doing decision support using computer modeling – Operations Research, Math Modeling and Simulations. This meant I had to have access to a huge range of information from a wide variety of intelligence sources as well as other kinds of R&D work. I then had to be able to analyze this information, model it and present it to the senior decision-makers in an easy-to-understand form. This meant I was often briefing congressmen, senators, people from the CIA and FBI, and high-ranking officers from all of the services, JCS and OSD, as well as the working-level analysts who were giving me their classified data for analysis.

Now I can begin telling you some of what I learned by being exposed to all of that intelligence over all those years but I still have to be careful because although most of my limitations have expired, some are still in effect and I can’t violate them or I will join the ranks of the “disappeared”.

First, let me make it clear that the entire military is run by the topmost 1% of the people in the services combined with the top 5% within the federal government. Imagine a pyramid in which only the guys at the top point decide where all the rest will go. I’ll call them the “Power Elite”.

There are just a handful of officers in the Pentagon and in the JCS who make all of the decisions about what the services will do and what direction they will take – perhaps 50 officers total. These guys are so high in rank and so close to retirement that they have, for all intents and purposes, ceased being military people and are simply politicians who wear a uniform. They are essentially the liaison officers for the highest-ranking congressmen and the office of the President. They cater to these politicians in order to gain additional power through the control of more money, or to feather their nests for future involvement in the political arena.

There are, of course, a few – a very few – notable exceptions. Colin Powell and Dwight Eisenhower are two that come to mind. Officers like General Norman Schwarzkopf are not in this group because they chose not to seek political office or extend their power or control beyond doing their military jobs.

It is easy to see why all of the military is controlled by 1% of the officers. This is an organization based on the “chain-of-command” structure, and everyone is taught to follow orders. In fact, once you are in the military, you can go to jail if you do not follow orders, and in time of war you can be executed for not following them. The bulk of the military is so biased by the indoctrination and propaganda created and put out by the government that they willingly follow orders without questioning them.

What this 1% of high-ranking military and 5% of the federal government have in common is that they measure their success in money and power. The source of that money and power comes from commercial, industrial and business interests. By making decisions that favor these businesses, those businesses, in turn, empower and enrich those involved. What is truly tragic is that this is not a recent occurrence; rather, there has been a Power Elite in our government for many decades – going back to the mid-1800s.

The 5% of the federal government refers to the most powerful members of the Executive branch – President, VP, Sec. of Defense, Sec. of State, etc. – and the most powerful congressmen and senators. The reason that the newer, younger and less powerful legislators do not fall into this group is because of the way the political parties are set up behind the scenes. The most senior congressmen and senators are put into positions of power and influence over the committees and programs that have the most influence on contracts, budget money and funding controls. When one congressman can control or seriously impact the budget for the entire military or any major commerce area, then he has control over all of the people in those areas. To see who these people are, list all of the congressmen and senators by length of service and take the top 5%, and you will have 99% of the list. Not surprisingly, this top 5% also includes some of the most corrupt members of congress – Murtha, Stevens, Rangel, Renzi, Mollohan, Don Young and others.

At the highest levels of security clearances, many people gain insights into how this Power Elite manipulates and twists the system to its gain. When Dick Cheney orchestrated the fake intelligence to support his war on Iraq, don’t think for a minute that the CIA, NSA and Pentagon did not know exactly what he was doing, but being good little soldiers who are, by law, not allowed to have a political opinion, they kept quiet. If they had not kept quiet, their personal careers would have been destroyed and their departments or agencies would have been punished with underfunded budgets for years to come.

The money and power come from lobbyists and donations of funds and promises of votes, so that the Power Elite can remain in power and extend their control and riches. A study by Transparency International found that of all the professions and jobs in the world, the one most likely to make you a millionaire the soonest is being a US congressman or senator. In a job that pays less than $200K per year, the net income and wealth of most congressmen and senators rises by 30-40% per year while they are active members of the legislature. That’s a fact!

So where’s the SciFi in all this? It’s just this: these members of the Power Elite have so much control that they can operate a virtual parallel government that functions out of sight of the public and often in complete opposition to their publicly expressed policies. Of course, statements like this cannot be made without positive and verifiable evidence, and I can provide facts you can check and a long history of this occurring going back decades. Read about these incidents in the rest of this series of stories – Government Secrets #2, #3 and #4.

Ocean Dumping – A Summary of Studies

Ocean Dumping – A Summary of 12 Studies Conducted between 1970 and 2001

By Jerry Botana

The dumping of industrial, nuclear and other waste into oceans was legal until the early 1970’s when it became regulated; however, dumping still occurs illegally everywhere.  Governments world-wide were urged by the 1972 Stockholm Conference to control the dumping of waste in their oceans by implementing new laws. The United Nations met in London after this recommendation to begin the Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter which was implemented in 1975. The International Maritime Organization was given responsibility for this convention and a Protocol was finally adopted in 1996, a major step in the regulation of ocean dumping.

The most toxic waste material dumped into the ocean includes dredged material, industrial waste, sewage sludge, and radioactive waste. Dredging contributes about 80% of all waste dumped into the ocean, adding up to several million tons of material dumped each year. About 10% of all dredged material is polluted with heavy metals such as cadmium, mercury, and chromium; hydrocarbons such as heavy oils; nutrients including phosphorous and nitrogen; and organochlorines from pesticides. Waterways and, therefore, silt and sand accumulate these toxins from land runoff, shipping practices, industrial and community waste, and other sources. This sludge is then dumped in the littoral zone of each country’s ocean coastline. In some areas, like the so-called “vanishing point” off the coast of New Jersey in the United States, such toxic waste dumping has been concentrated into a very small geographic area over an extended period of time.

In the 1970s, 17 million tons of industrial waste was legally dumped into the ocean by the United States alone.  In the 1980s, even after the Stockholm Conference, 8 million tons were dumped, including acids, alkaline waste, scrap metals, waste from fish processing, flue-gas desulphurization sludge, and coal ash.

If sludge from the treatment of sewage is not contaminated by oils, organic chemicals and metals, it can be recycled as fertilizer for crops, but it is cheaper for treatment centers to dump this material into the ocean, particularly if it is chemically contaminated. The UN policy holds that properly treated sludge from cities does not contain enough contaminants to be a significant cause of eutrophication (an increase in chemical nutrients—typically compounds containing nitrogen or phosphorus—in an ecosystem) or to pose any risk to humans if dumped into the ocean. However, that policy was based solely on an examination of the immediate toxic effects on the food chain and did not take into account how the marine biome would assimilate and be affected by this toxicity over time.  The peak of sewage dumping was 18 million tons in 1980, a number that was reduced to 12 million tons in the 1990s.

Radioactive Waste

Radioactive waste is also dumped in the oceans; it usually comes from the nuclear power process, medical and research uses of radioisotopes, and industrial uses. The difference between industrial waste and nuclear waste is that nuclear waste usually remains radioactive for decades. The protocol for disposing of nuclear waste involves sealing it in concrete drums so that it doesn’t spread when it hits the ocean floor; however, leaking containers and illegal dumping are estimated to account for more than 45% of all radioactive waste.

Surprisingly, nuclear power plants produce by far the largest amount of radioactive waste but contribute almost nothing to the illegal (post-Stockholm Conference) ocean dumping.  This is because the nuclear power industry is so closely regulated and accountable for its waste storage.  The greatest accumulation of nuclear waste lies off the coast of southern Africa and in the Indian Ocean.

The dumping of radioactive material has reached a total of about 84,000 terabecquerels (TBq). A terabecquerel is a unit of radioactivity equal to 10^12 atomic disintegrations per second, or 27.027 curies; the curie (Ci) was originally defined as the radioactivity of one gram of pure radium.  The high points of nuclear waste dumping came in 1954 and 1962, but this nuclear waste accounts for only 1% of the total TBq that has been dumped in the ocean. The concentration of radioactive waste in the concrete drums varies, as does the ability of the drums to contain it.  To date, it is estimated that the equivalent of about 2.27 million grams (about 5,000 pounds) of pure radium has been dumped on the ocean floor.
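The unit conversions in the paragraph above can be reproduced with a few lines of arithmetic; the only inputs are the standard definitions of the becquerel and curie and the article's own 84,000 TBq total:

```python
# Check the radioactivity unit conversions quoted above.
# Definitions: 1 Ci = 3.7e10 disintegrations/s; 1 TBq = 1e12 disintegrations/s.
CI_IN_BQ = 3.7e10
TBQ_IN_BQ = 1e12

ci_per_tbq = TBQ_IN_BQ / CI_IN_BQ      # ~27.027 Ci per TBq
total_tbq = 84_000                     # total dumped, per this summary
total_ci = total_tbq * ci_per_tbq      # ~2.27 million curies

# One curie was originally the activity of one gram of pure radium,
# so total curies map roughly onto grams of radium equivalent.
radium_grams = total_ci
radium_pounds = radium_grams / 453.592 # ~5,000 pounds

print(f"{ci_per_tbq:.3f} Ci/TBq, {total_ci:,.0f} Ci, {radium_pounds:,.0f} lb")
```

Note that 84,000 TBq works out to roughly 2.27 million curies, i.e. about 2.27 million grams (about 5,000 pounds) of radium equivalent.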

Until it was banned, ocean dumping of radioactive waste was considered a safe and inexpensive way to get rid of tons of such materials.  It is estimated that the 1960’s and early 1970’s era nuclear power plants in New Jersey (like Oyster Creek – which is located just 21 miles from the Barnegat Lighthouse) and 12 other nuclear power plants located in Pennsylvania, New Jersey, and New York have dumped more than 100,000 pounds of radioactive material into the ocean off the New Jersey coast.

Although some claim the risk to human health is small, the long-term effects of nuclear dumping are not known, and some estimate up to 1,000 deaths in the next 10,000 years as a result of just the evaporated nuclear waste.

By contrast, biologists have estimated that the ocean’s biome has been and will continue to be permanently damaged by exposure to radioactive material.  Large-scale and rapid genetic mutations are known to occur as dosage levels of radiation increase.  Plants, animals and micro-organisms in the immediate vicinity of leaking radioactive waste will experience the greatest and most radical mutations between successive generations.  However, tests show that even long-term exposure to diluted radioactive wastes will create accelerated mutations and adaptations.

The Problems with Ocean Dumping

Although policies on ocean dumping in the recent past took an “out of sight, out of mind” approach, it is now known that accumulation of waste in the ocean is detrimental to marine and human health. Another unwanted effect is eutrophication, a biological process in which dissolved nutrients cause oxygen-depleting bacteria and plants to proliferate, creating a hypoxic, or oxygen-poor, environment that kills marine life. In addition to eutrophication, ocean dumping can destroy entire habitats and ecosystems when excess sediment builds up and toxins are released. Although ocean dumping is now managed to some degree, and dumping in critical habitats and at critical times is regulated, toxins are still spread by ocean currents. Alternatives to ocean dumping include recycling, producing less wasteful products, saving energy and converting dangerous materials into more benign waste.

According to the United Nations Group of Experts on the Scientific Aspects of Marine Pollution, ocean dumping actually contributes less pollution than maritime transportation, atmospheric pollution, and land-based pollution like run-off. However, when waste is dumped it is often close to the coast and very concentrated, as is the case off the coast of New Jersey.

Waste dumped into the ocean is categorized into the black list, the gray list, and the white list. On the black list are organohalogen compounds, mercury compounds and pure mercury, cadmium compounds and pure cadmium, any type of plastic, crude oil and oil products, refined petroleum and residue, highly radioactive waste, and any material made for biological or chemical warfare.

The gray list includes water highly contaminated with arsenic, copper, lead, zinc, organosilicon compounds, any type of cyanide, fluoride, pesticides, pesticide by-products, acids and bases, beryllium, chromium, nickel and nickel compounds, vanadium, scrap metal, containers, bulky wastes, lower-level radioactive material and any material that will affect the ecosystem due to the amount in which it is dumped.

The white list includes all other materials not mentioned on the other two lists. The white list was developed to ensure that materials on this list are safe and will not be dumped on vulnerable areas such as coral reefs.

In 1995, a Global Waste Survey and the National Waste Management Profiles inventoried waste dumped worldwide to determine what countries were dumping waste and how much was going into the ocean. Countries that exceeded an acceptable level would then be assisted in the development of a workable plan to dispose of their waste.

The impact of a global ban on ocean dumping of industrial waste was determined in the Global Waste Survey Final Report the same year. In addition to giving the impact for every nation, the report also concluded that the unregulated disposal of waste, pollution of water, and buildup of materials in the ocean were serious problems for a multitude of countries. The report also concluded that dumping industrial waste anywhere in the ocean is like dumping it anywhere on land. The dumping of industrial waste had reached unacceptable levels in some regions, particularly in developing countries that lacked the resources to dispose of their waste properly.

The ocean is the basin that catches almost all the water in the world. Eventually, water evaporates from the ocean, leaves the salt behind, and becomes rainfall over land. Water from melted snow ends up in rivers, which flow through estuaries and meet the saltwater.  River deltas and canyons that cut into the continental shelf, like the Hudson Canyon and the Mississippi Cone, create natural channels and funnels that direct waste into relatively small geographic areas, where it accumulates into highly concentrated deposits of fertilizers, pesticides, oil, human and animal wastes, industrial chemicals and radioactive materials.  For instance, feedlots in the United States produce more than 500 million tons of manure each year, exceeding the amount of human waste; about half of it eventually reaches the ocean basin.

Not only does the waste flow into the ocean, but it also encourages algal blooms that clog the waterways, causing meadows of seagrass, kelp beds and entire ecosystems to die. A zone without any life remaining is referred to as a dead zone; dead zones can be the size of entire states, as in the coastal zones of Texas and Louisiana and north-east of Puerto Rico and the Turks and Caicos Islands.  All major bays and estuaries now have dead zones from pollution run-off. Often, pollutants like mercury, PCBs and pesticides are found in seafood meant for the dinner table and cause birth defects, cancer and neurological problems—especially in infants.

One of the most dangerous forms of dumping is of animal and human bodies.  The decomposition of these bodies creates a natural breeding ground for bacteria and micro-organisms that are known to mutate into more aggressive and deadly forms with particular toxicity to the animals or humans that they fed on.  The mid-Atlantic coast of the United States was a common dumping zone for animals (particularly horses) and human bodies up until the early 1900s.  Today, human body dumping is most common in India, where religious beliefs advocate burial in water.  The results of this dumping may be seen in the rise of extremely drug-resistant strains of leprosy, dengue fever and Necrotizing Fasciitis bacteria.

One of the largest deep ocean dead zones is in the area between Bermuda and the Bahamas.  This area was a rich and productive fishing ground in the 1700’s and early 1800’s but by the early 20th Century, it was no longer productive and by the mid-1900’s, it was virtually lifeless below 200 feet of depth.  This loss of all life seems to have coincided with massive ocean dumping along the New Jersey and Carolina coasts.


Water recreation is another aspect of human life compromised by marine pollution from human activities like roads, shopping areas, and development in general.  Swimming is becoming unsafe: over 12,000 beaches in the United States have been quarantined due to contamination from pollutants. Developed areas like parking lots enable runoff to occur at a much higher volume than a naturally absorbent field. Even everyday activities like driving a car or heating a house contribute to an estimated 28 million gallons of oil leaking into lakes, streams and rivers. The hunt for petroleum through offshore gas and oil drilling leaks extremely dangerous toxins into the ocean and, luckily, is one aspect of pollution that has been halted by environmental laws.

Environmental Laws

In addition to the lack of underwater national parks, there is no universal law like the Clean Air Act or the Clean Water Act to protect the United States ocean territory. Instead, there are many different laws, like the Magnuson-Stevens Fishery Conservation and Management Act, which apply only to certain aspects of overfishing and are relatively ineffective. That act, developed in the 1970s, is not based on scientific findings and is administered instead by regional fishery councils. In 2000, the Oceans Act was implemented as a way to create a policy similar to the nationwide laws protecting natural resources on land. However, this act still needs further development and, like many of the conservation laws that exist at this time, it needs to be enforced.

 The total effects of ocean dumping will not be known for years but most scientists agree that, like global warming, we have passed the tipping point and the worst is yet to come.

Perpetual Motion = Unlimited Power….Sort of…

The serious pursuit of perpetual motion has always intrigued me. Of course I know the basic science of conservation of energy and the complexities of friction, resistance, drag and less-than-100% mechanical advantage that dooms any pursuit of perpetual motion to failure…but still, I am fascinated at how close some attempts have come. One college professor built a four-foot-tall Ferris wheel and enclosed its drive mechanism in a box around the hub. He said it was not perpetual motion but that it had no inputs from any external energy source. It did, however, make a slight sound out of that box. The students were to try to figure out how the wheel was turning without any apparent outside power source. It turned without stop for more than two years and none of his students could figure out how. At the end of his third year, he revealed his mechanism. He was using a rolling-marble design that was common for perpetual motion machines but that also had been proven not to work. What he added was a tiny IC-powered microcircuit feeding a motor that came out of a watch. A watch! The entire four-foot-high Ferris wheel needed only the additional torque of a watch motor to keep it running for nearly four years!

This got me to thinking that if I could find a way to make up that tiny little additional energy input, I could indeed make perpetual motion. Unlike most of my other ideas, this was not something that could easily be simulated in a computer model first. Most of what does not work in perpetual motion is totally unknown until you build it. I also knew that the exchange of energy to and from mechanical motion was too inefficient to ever work so I concentrated on other forms of energy exchange. Then I realized I had already solved this – back in 1963!

Back in 1963, I was a senior in high school. Since 1958, I had been active in science fairs and wanted my last one to be the best. To make a long story short, I won the national science fair that year – sponsored by Bell Telephone. My project was “How far will sound travel” and my project showed that the accepted theory that sound diminishes by one over the square of the distance (the inverse square law) is, in fact, wrong. Although that may occur in an absolutely perfect environment of a point source of emission in a perfectly spherical and perfectly homogeneous atmosphere, it never ever occurs in the real world.

I used a binary counting flashing-light circuit to time sound travel and a “shotgun” microphone with a VOX to trigger a measure of the speed and power of the sound under hundreds of conditions. This gave me the ability to measure to 1/1000th of a second and down to levels that were able to distinguish between the compressions and rarefactions of individual sound waves. Bell was impressed and I got a free trip to the World’s Fair in 1964 and to Bell Labs in Murray Hill, NJ.

As a side project of my experiments, I attempted to design a sound laser – a narrow beam of sound that would travel great distances. I did. It was a closed ten-foot-long Teflon-lined tube that contained a compressed gas – I used Freon. A transducer (a flat speaker) at one end would inject a single wavelength of a high-frequency sound into the tube. It would travel to the other end and back. At exactly 0.017621145 seconds, the transducer would pulse one more cycle, timed to coincide exactly with the moment the first pulse reflected and returned, so that the two were additive, nearly doubling the first pulse in amplitude. Since the inside of the tube was smooth and kept at a constant temperature, the losses in one pass through the tube were almost zero. In less than 5 minutes, these reinforcing waves would build the moving pulse to the point of concentrating nearly all of the gas in the tube into the single wave front of one pulse. This creates all kinds of problems so I estimated that it would only be about 75% efficient but that was still a lot.

Using a specially shaped and designed series of chambers at the end opposite the transducer, I could rapidly open that end and emit the pulse in one powerful burst, so strong that the wave front of the sound pulse would be visible and would remain cohesive for hundreds of feet. It was dense enough that I computed it would have just over 5 million pascals (Pa) of pressure, or about 725 PSI. The beam would widen to a square foot at about 97 meters from the tube. This is a force sufficient to knock down a brick wall.
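As a sanity check on the two paragraphs above, a short script can back out the sound speed implied by the quoted pulse interval and convert the quoted pressure to PSI. The tube length and interval are the story's own figures; the conversion constants are standard. (The implied speed, about 346 m/s, is close to the speed of sound in air; published sound speeds in Freon vapors are considerably lower, so treat the quoted interval as illustrative.)

```python
# Implied sound speed in the gas, from the 10-foot tube and the
# quoted round-trip pulse interval of 0.017621145 s.
FT_TO_M = 0.3048
tube_length_ft = 10.0
pulse_interval_s = 0.017621145

round_trip_m = 2 * tube_length_ft * FT_TO_M
implied_speed_m_s = round_trip_m / pulse_interval_s  # ~346 m/s

# Pressure conversion for the emitted wave front.
PA_PER_PSI = 6894.757
wavefront_pa = 5_000_000
wavefront_psi = wavefront_pa / PA_PER_PSI            # ~725 PSI

print(f"implied speed ~{implied_speed_m_s:.0f} m/s, ~{wavefront_psi:.0f} PSI")
```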

One way to make the kind of transducer that I needed for this sound laser was to use a carefully cut crystal or ceramic disc. Using the property of reverse piezoelectric effect, the disc will uniformly expand when an electric field is applied. A lead zirconate titanate crystal would give me the right expansion while also being able to respond to the high frequency. The exit chambers were modeled after some parabolic chambers that were used in specially made microphones used for catching bird sounds. The whole thing was perfectly logical and I modeled it in a number of math equations that I worked out on my “slip stick” (slide rule).

When I got to Bell Labs, I was able to get one scientist to look at my design and he was very intrigued with it. He said he had not seen anything like it but found no reason it would not work. I was asked back the next day to see two other guys that wanted to hear more about it. It was sort of fun and a huge ego boost for me to be talking to these guys about my ideas. In the end, they encouraged me to continue thinking and that they would welcome me to work there when I was old enough.

I did keep thinking about it and eventually figured out that if I could improve the speed of response of the sensors and transducer, I could shorten the tube to inches. I also wanted more power out of it, so I researched which gas had the greatest density. Even this was not enough power or speed, so I imagined using a liquid – water – but it turns out that water molecules are like foam rubber and, after a certain point, they absorb the pulses and energy too much. The next logical phase of matter was a solid, but that meant that there was nothing that could be emitted. I was stumped…for a while.

In the late 1970s I figured, what if I extended the piezoelectric transducer crystal to the entire length of the tube – no air – just crystal. Then place a second transducer at one end to pulse the crystal tube with a sound wave. As the wave travels the length of the crystal tube, the compressions and rarefactions of the sound wave pulse create stress and strain on the piezoelectric crystal, making it give off electricity by the direct piezoelectric effect. This is how a phonograph needle works as it bounces on the grooves of the record.

Since the sound pulse will reflect off the end of the tube and bounce back, it will create this direct piezoelectric effect hundreds of times – perhaps thousands of times – before it is reduced by the transfer into heat. As with my sound laser, I designed it to pulse every single bounce to magnify the amplitude of the initial wave front but now the speed was above 15,000 feet per second so the pulses had to come every 0.0001333 seconds. That is fast and I did not know if current technology was up to the task. I also did not know what it would do to the crystal. I was involved in other work and mostly forgot about it for a long time.
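The quoted pulse interval is consistent with a round trip in a roughly one-foot crystal at the stated wave speed; a couple of lines confirm it. (The one-foot length is inferred from the numbers, not stated in the text.)

```python
# Round-trip pulse interval for a sound wave in the crystal rod.
sound_speed_ft_s = 15_000   # stated wave speed in the crystal
crystal_length_ft = 1.0     # inferred length: gives the quoted interval
interval_s = 2 * crystal_length_ft / sound_speed_ft_s
print(f"pulse every {interval_s:.7f} s")  # ~0.0001333 s, matching the text
```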

In the late 1980s, I was now working for DARPA and had access to some great lab equipment and computers. I dug out my old notes and began working on it again. This time I had the chance to actually model and create experiments in the lab. My first surprise was that these direct piezoelectric effects created voltages in the hundreds or even thousands of volts. I was able to get more than 10,000 volts from a relatively small crystal (8 inches long and 2 inches in diameter) using a hammer tap. I never thought it would create this much of a charge. If you doubt this, just take a look at the Mechanism section of the Wikipedia article on Piezoelectricity.

When I created a simple prototype version of my sound laser using a tube of direct piezoelectric crystal, I could draw off a rapid series of pulses of more than 900 volts using a 1/16th watt amplifier feeding the transducer. Using rectifiers and large capacitors, I was able to save this energy and charge some ni-cads, power a small transmitter and even light a bulb.

This was of great interest to my bosses and they immediately wanted to apply it to war fighting. A friend of mine and I cooked up the idea of putting these crystals into the heels of army boots so that the pressures of walking created electricity to power some low power devices on the soldier. This worked great but the wires, converter boxes, batteries, etc.,  ended up being too much to carry for the amount of power gained so it was dropped. I got into other projects and I dropped it also.

Now flash forward to about 18 months ago and my renewed interest in perpetual motion. I dug out my old notes, computer models and prototype from my DARPA days. I updated the circuitry with some newer, faster IC circuits and improved the sensor and power take-off tabs. When I turned it on, I got sparks immediately. I then rebuilt the power control circuit and lowered the amplitude of the input sound into the transducer. I was now down to using only a 9-volt battery and about 30 mA of current drain to feed the amplifier.  I estimate it is about a 1/40th-watt amplifier.  The recovered power was used to charge a NiMH battery pack of 45 penlight cells of 1.2 volts each.

Then came my epiphany – why not feed the amplifier with the charging battery! DUH!

I did and it worked. I then boosted the amplifier’s amplitude, redesigned the power take-off circuit and fed it into a battery that was banked to give me a higher power density. It worked great. I then fed the battery back into an inverter to give me AC. The whole thing is about the size of a large briefcase and weighs about 30 pounds – mostly from the batteries and transformers. I am getting about 75 watts out of the system now but I’m using a relatively small crystal. I don’t have the milling tools to make a larger properly cut crystal but my modeling says that I can get about 500 watts out of a crystal of about 3 inches in diameter by about 12 inches long.
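Taking the story's own figures at face value, the claimed energy balance is easy to tabulate. This is just arithmetic on the quoted numbers, not an endorsement of the physics; as the conservation-of-energy caveat at the top of this story implies, no real device could sustain this ratio:

```python
# Claimed input: a 9 V supply at about 30 mA feeding the amplifier.
input_w = 9.0 * 0.030        # ~0.27 W in
# Claimed output from the crystal system.
output_w = 75.0              # watts out, as stated
gain = output_w / input_w    # ~278x claimed gain
print(f"input {input_w:.2f} W, output {output_w:.0f} W, gain ~{gain:.0f}x")
```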

I call my device “rock power” and when I am not using it for power in my shop or on camping trips, I leave it hooked up to a 60 watt bulb. That bulb has been burning now for almost 7 months with no signs of it diminishing. It works! Try it!!!

The Down Side to Lucid Dreams

Some of you may have read my other stories about my experiences with Lucid Dreaming. See LUCID DREAMS and THE POWER OF THE MIND. Now I am going to tell you there is a down side to doing that.

It started when I noticed that I was constantly playing music in my head. Everybody does that but this was different. It was like the background music in a movie. I could “think” this music in my head even while I was actively thinking and even talking about something totally unrelated to the music. Like the music in the movies, I was not always aware that this background music was there but if I had a lull in other thoughts, I would immediately become aware of it.

It was my subconscious mind playing this music, and my conscious mind was hearing it even while busy with other thoughts. What was worse, I could not stop it easily. I would think to myself – NO MORE MUSIC – over and over again and, after several minutes, it would stop…only to start again 10, 20 or 60 minutes later.

This sounds silly but my subconscious mind seems to have a mind of its own. Yeah, I know that is crazy but why else would I not be able to control it? In my lucid dreams, I have complete control and have even instructed my subconscious mind not to do that any more, but it doesn’t help much. Outside of those moments when I am expressly trying to control my subconscious mind, it seems to be thinking almost independently of my conscious mind. I say “almost” because it has begun a new “background activity”.

I am very well aware of both the jokes and the reality of hearing “voices in your head”. These are just a few I found on a bumper sticker site: “You’re just jealous because the voices are talking to me”; “The voices in my head are stealing my sanity”; “I can’t go to work today – the voices in my head said stay home and clean the guns”. But this is no joke. I really do have voices in my head that I don’t seem to have full control over.

In my story THE POWER OF THE MIND, I described over a decade of work with my lucid dreaming and my interactions with my subconscious mind. I have been able to take that to some very remarkable levels to include being able to invade other people’s thoughts and dreams and to extend my remote viewing to some amazing levels. Well now it seems I have a back seat driver to these events. My subconscious mind seems to be working at trying to make these contacts and invasions even during the day when I am otherwise engaged in other activities. It is really annoying.

The other day, I visited a friend; I’ll call her Jane. She had company and I was introduced to “Terry”. As I was introduced, I heard this weak voice in my mind saying she was a smoker and a bad driver and she drinks too much. I was shocked by these comments and could not imagine where they came from since she looked and talked perfectly normal, well dressed and certainly appeared sober. There was nothing to indicate these awful things about this woman that I had just met for the first time.

In my mind, I was literally having an argument in my head between my subconscious mind telling me awful things about Terry while my conscious mind was shouting that all that was nonsense. Meanwhile, I am also having a conversation with Jane and Terry and sitting down for some coffee.

I can’t tell you how distracting these mind games were while I was trying to smile and act cordial. I had to work at not saying some of my responses to my subconscious mind out loud. Just the fact that this was happening at all was annoying and very disconcerting, but it was also re-framing the entire visit from a pleasant exchange with a friend into a mental brawl and shouting match. Jane had to ask me several questions twice before I responded because I was so distracted.

I finally had to excuse myself but as I did, so did Terry. As Terry stood up, Jane rushed to help her. I thought she might be disabled or injured, the way Jane was trying to hold her up, but Terry was in her mid-50s and seemed quite capable. While Terry was looking for her purse, I wrinkled my brow and shrugged my shoulders at Jane as if to say, what is going on? Without Terry seeing her, Jane curved her hand as if holding a glass and raised it to her face while rolling her head back – the obvious sign that Terry had been drinking. It was only then that I noticed a large empty wineglass next to where Terry had been sitting.

Jane and I helped Terry out to her car and she was definitely not able to drive safely. Jane repeatedly said she would drive Terry home – it was just a few blocks. After some effort, we got Terry to agree and I followed them to Terry’s house and then picked up Jane and drove her back home. Just as I was backing out of Terry’s driveway, I noticed deep tire marks on the lawn going right up to the front steps. The first few steps were broken or missing. I made the comment to Jane that somebody missed the driveway. Jane said that happened when Terry was driving home drunk one night and dropped a cigarette into her lap.

I am still annoyed by this running commentary on my conscious world and by the continuous background music, but I am learning to live with it. It has not yet told me to go home and clean the guns, and I am not hearing messages from God. What I am hearing is a sort of news flash or intelligence report from my subconscious mind about matters that I am not immediately aware of and that have, so far, all proved to be correct. I can live with that.

The Power of the Mind!


An idea that we often hear is that we really only use 10% of our brains and if we used all of it we could do some pretty amazing stuff.  It has been speculated that we might be able to do things like remote viewing, telekinesis or mental telepathy or see the future.  This, of course, sounds like crazy talk from some wing-nut with a tinfoil hat but the reality is that a great deal of very serious research has gone into this very subject.

 In 1972, the CIA began a serious 24-year look into remote viewing and clairvoyance.  In 1981, the Defense Intelligence Agency (DIA) began serious studies in the same areas.  These programs had code names like Star Gate, Grill Flame, Center Lane, Sun Streak and others.  DoD kept looking at these subjects up through June 1995.  Stanford Research Institute (SRI) of Menlo Park, CA, SAIC, the Institute for Advanced Studies in Austin and the American Institutes for Research (AIR) all have been or are still working on research in these areas.

The Cognitive Sciences Lab at Palo Alto, Calif. did extensive studies that were critical of the government’s studies of this subject (DIA and CIA).  Their conclusion, published in March of 1996, found “that a statistically significant effect had been demonstrated”, but they also pointed out that the CIA and DoD had ignored compelling evidence and had set the outcome of the studies before they began by relying on questionable National Research Council reviews.  “As a result, they have come to the wrong conclusion with regard to the use of anomalous cognition in intelligence operations and significantly underestimated the robustness of the basic phenomenon.”  The reasoning behind the government’s perspective on these studies has been shown to have nothing to do with the science or the efficacy of the research but rather reflected the petty squabbling of top-heavy bureaucrats and mismanagement of political and financial support.

In other words, studies that were conducted by numerous contractors, scientists and government labs over a period of three decades found important evidence that showed this was a viable field of study but for unrelated reasons, they botched the studies and the results so that the net result was that the whole subject has been taboo for serious studies or funding ever since. 

All this is to say that there is much more to this subject than my personal interests.  Lots of very serious scientists, government agencies and academic research facilities have looked and are still looking into these various psychological properties of the mind, collectively grouped under headings like parapsychology, “anomalous cognition”, and psi abilities.

If you read all these reports as carefully as I have, you will find that almost all of them did, in fact, find some statistically significant effect to a greater or lesser degree.  In fact, some of these studies found capabilities that defied both logic and conventional science, so much so that the scientists involved were ridiculed and derided to the point of nearly destroying their careers when they tried to get some recognition of their results.  For that reason, many of these kinds of studies are no longer as popular or abundant as they once were and are now done, if at all, in secret facilities, and those involved are very cautious to keep a low profile. Since I have no interest in research money and have mostly retired from my R&D career, I have no qualms about telling of my adventures and successes – especially since they have led to such startling discoveries.

Most of this is completely verifiable from numerous Internet sources – including many reports from the government and R&D reports from and about programs described above.  For the most part, I have not so much blazed a new trail of research as much as I have combined various proven methods, techniques and processes in a variety of ways that probably were not tried before.  I have used special aids and tools to assist me that have proven to be effective by themselves but have a synergistic effect when combined with other aids and techniques.  In some cases, I have stumbled upon methods or techniques that have been well proven to work but I did not know about them beforehand.  If you doubt any of this, then do your own research on what I am trying and you will find it is all based on sound and proven science.  

What I am about to tell you will be hard to believe because we have all been told that this whole subject area is foolish nonsense and that only tricksters and con-men and deluded space cadets really believe in any of this.  If you are to understand the significance and why it is true, I have to give you some background and tell you the whole story of how I discovered this.  Let me start from the beginning…  

The truth is that humans dream but we don’t know why.  There are lots of theories.  The latest and most accepted is that it is the brain’s way of establishing and organizing our memories.  This sounds plausible until you consider the continuity, complexity and detail of some dreams that bear no relationship to any real-life experience.  It is also thought that dreams might be subconscious manifestations of our emotions but that does not explain the majority of dreams that appear to be about random events and places.  Some people believe that dreams are much more powerful and can tell the future or reveal a person’s innermost feelings.   

One generally accepted biological concept about dreams is that the conscious mind becomes inactive and the subconscious mind takes over.  The subconscious mind is that portion of the brain that is not directly controlled by willful and deliberate thoughts of a person.  It is the part of the brain that runs everything without being told to do so.  It keeps the heart beating, the blood flowing and controls the body’s reaction to temperatures, fear, surprise and other automatic reflexes.   

Some parts of the body seem to be controlled by both the conscious and the subconscious mind, like breathing and eye movement.  We can control these parts when we want to but it seems that they shift into automatic for most of the time.  During a dream, the real physical environment around the dreaming person can often be incorporated into the dream.  If you get cold in your bed, your mind might conjure up a dream that involves you getting cold.  If you hear sounds like dogs barking or bells, your dream might also have these sounds.  This implies that the subconscious mind is receptive to the body’s real senses and can incorporate the real world into the dream and yet it can also modify those real world sensations so that they appear in the dream in a totally different form.  Of course, this is simply anecdotal observation and is not a scientific analysis of what is really happening. 

The truth is that our best scientists and researchers don’t know much about dreams beyond what we can observe.  But because we do this every night and there are so many different aspects of it, there is a lot of interest by the hard-core scientists as well as a lot of average people.  I was one of those that was intently curious and wanted to find out more. 

About nine years ago, I began reading and working with lucid dreaming.  This is a technique of training your conscious mind to remain aware and active during a dream so that it can direct and control the subconscious mind and your dreams.   

I had read about brain waves called delta, theta, alpha, beta and gamma that are related to various thought patterns in the brain.  Way back when I was in the Navy, I bought a surplus recording electroencephalograph (EEG) and all of the hookups.  I had played with it as an interface to my computer and eventually had gotten it to recognize binary responses to my thoughts.  I could, for instance, answer yes-no questions using the output of this EEG fed into an A-D converter and then into the game-port on my old computer.  Now I got out that old EEG and began recording my night’s dreaming to map my REM and NREM sleep and to record my brain’s electrical activity.  I wired the EEG into some lights and into my computer so I could trigger various events with my brain waves.  It was fun to experiment and helped me define and refine my lucid dreaming. 
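The yes-no trick is simple in principle: digitize the EEG output, average the rectified amplitude over a short window, and call a strong burst of activity “yes” and a quiet window “no”.  Here is a minimal sketch of that kind of thresholding – the sample values and the threshold are invented for illustration, not calibrated readings from my old rig.

```python
# Sketch of a yes/no classifier for digitized EEG samples.
# The threshold and the sample values are illustrative only.

def classify_yes_no(samples, threshold=50.0):
    """Average the rectified signal over a window; a strong burst
    of activity counts as 'yes', a quiet window as 'no'."""
    mean_amplitude = sum(abs(s) for s in samples) / len(samples)
    return "yes" if mean_amplitude > threshold else "no"

# A deliberate burst of activity versus a relaxed baseline:
print(classify_yes_no([80.0, -95.0, 70.0, -88.0]))  # strong burst -> "yes"
print(classify_yes_no([5.0, -8.0, 4.0, -6.0]))      # quiet baseline -> "no"
```

The real signal chain had the A-D converter doing the digitizing; the computer only had to do this kind of averaging and comparing.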

It took me about a year of practice before I had my first lucid dream and it was amazing. 

I tried keeping a log of my dreams and ordering my brain to remember to dream but I found the best technique was to first relax all over and then to imagine that I was walking up a long stairway to a special sleep temple.  I enhanced this image by slightly rubbing my feet together as if I was taking the steps up those stairs.  When I reached the temple, I was asleep and dreaming and was aware I was doing it.  Over time, I could shorten this climb up those stairs and even got to the point that I could do it during the day while waiting or riding a bus or train and while riding in a car.  After a while, I could enter my dreams as if I was a spectator at a movie but I gradually began trying to exert control over what I was dreaming.   

I tried to enhance the lucid dreaming state with various drugs and herbs and teas and other foods.  I tried melatonin, kava kava, passionflower, St. John’s wort, and various herb teas with and without caffeine.  They all had some effect but I did not like the idea of having to take a drug to make this work so I stopped all of them except a multi-vitamin that gave me a bunch of “B” vitamins, fish oil, choline, and other stuff for an old guy like me. 

Eventually, I gained almost complete control of my dreams so that I could conjure up any event, environment or people I wanted to and then will them to do something.  The nature of this control is a little weird.  The subconscious mind is still creating the dream and can take it in an independent direction if I don’t exert some willpower but I can’t just command it to do my bidding.  I have to think it and want it, to make it happen and even then, it will do it but in a way I might not have chosen to do if my conscious mind had full control.  For instance, if I want to go fishing, it will create the entire fishing environment before I can define the boat or where or what I want to fish for.   

When my subconscious mind leaped ahead like that with something I did not want, it took me a long time and a lot of effort to learn to back up and redesign the dream.  At first, I’d have to dock the boat and walk to another boat in order to change.  Now I have learned how to “reset” the dream and do an instant redesign to something more like I want.  My reset signal is a little weird and I found it by accident.  I dropped an LED flashlight in my bed covers one night.  While in a deep sleep, I suddenly lost the dream I was dreaming as if it had been erased.  I willed myself awake and found that the flashlight was on and was under the covers near my legs.  It was odd that it would have any effect on my legs but I wired up a light to a pressure switch on my finger and then tried to reset a dream.  Eventually, I found that if I put the light under my legs, it would give me just the right amount of mind control over resetting my dreams.  Weird, but it works, so I used it.  Using this reset signal, I can exert a lot of control but I have to work at controlling the switch. 

At first, I felt thrilled by this newfound capability and would sometimes remain excited for hours after I woke up.  After a while, I realized this was not just being thrilled but I was feeling anxious and nervous.  I sometimes felt guilty for what I was seeing or felt anxiety over simply being in the dream.  This got to be a problem until I started making overt efforts while I was in my dream state to tell myself to be relaxed and calm when I woke.  I practiced meditation and yoga-like poses in my dreams to facilitate this effort with very good results.  Eventually, I was able to calm down and enjoy my dreams and actually feel relaxed and calm after it was over – even if the dream itself was exciting. 

I wanted to expand on my capabilities so I contacted a psychology professor friend of mine that I met back in my government R&D days.  He is a good friend and knows how to keep a secret.  I won’t give you any information that will let you identify him because I don’t think he wants to be seen as being involved in any of this but the truth is, he has been dabbling in it for years.  He pointed me toward lots of studies on the various brain waves and processes like REM and NREM sleep and slow-wave sleep and the stages of sleep and circadian rhythms, etc.  He told me about sleep inducements like raising body temperature (hot baths), a high carbohydrate diet and exercise.  I told him I wanted to stay clear of any drugs but he told me that lots of foods and over-the-counter drugs could affect sleep.  I must have spent months reading all about sleep, how it works and what causes it.  As with dreams, I found that there was a lot about what can be observed about sleep but not much about why we sleep. 

One of the most amazing aspects of lucid dreams is that you can immerse your entire being in the dream so that it seems as real as if you were really there.  You get the sights, smells, feelings and taste of your dream world.  It reminds me very much of the holodeck that they showed on the Star Trek TV series in which the computer could create artificial environments, people and nature. 

This realism is great most of the time but it can also be very disturbing.  I foolishly dreamed I was in a shooting war against some gang members and I got shot.  The link between my conscious and subconscious mind was so powerful and complete that I felt the shock and pain in my dream as if I had really been shot and it forced me to wake up and when I did, my heart was racing, my body felt flushed and I was breathing very hard.  I don’t think I would really die if I died in my dream but I have not wanted to test that theory. 

At first it was fun to experiment with stuff.  I even used it to conjure up famous scientists living and dead to discuss some science problem I was having at work.  Amazingly I often would solve relatively complex problems in these dreams and then take them to work and find out that they worked.  I also found that I could look at a book or magazine as fast as I could turn the pages while awake and then recall that book in my dream and see every page clearly and even do a word search of the contents.  It was sort of a weird kind of photographic memory that I could tap into only in my dreams. 

I also explored other sensations and wild experiences.  I made myself able to fly, swam faster than fish, became super strong, and gave myself other super powers.  I went through a phase of experimenting with sex and drugs – or what I imagined drugs would do to you.  I played out all the great movies I had ever watched.  Some of these experiences were so much fun that I really looked forward to getting a good night’s sleep and often wanted to remain asleep in the morning. 

One interesting event happened about this time.  I go for a physical every year but the one I had about this time showed me to be in much better shape than ever in the past.  My blood chemistry was that of a person half my age and I had no visible or detectable problems of any kind.  Since I am an old guy with the typical old-guy problems – arthritis, high cholesterol, high blood pressure, age spots, etc., the doctor was both amazed and confused that I showed no signs of any of these maladies.  He wanted to do the tests over but first asked me a lot of questions.  I mentioned that I had gotten a lot of sleep over the past two years but I was careful to not mention my lucid dreaming.  He did run the tests again and told me that he added a few extras.   

When he got back the results, we had another meeting.  He told me that I had elevated levels of something called dimethyltryptamine and decreased levels of cortisol.  These were not just slight changes but at levels that he thought were way out of the ordinary.  He also noted that I had unusually high levels of acetylcholine, serotonin, dopamine and norepinephrine and said, “no wonder you feel good, you are high on natural uppers”.  I tried to downplay the results and told him it was probably diet. 

After about two years of this, I got tired of all the weird stuff and decided to try to find something really useful I could do.  I wanted to try to do things like remote viewing, telekinesis or mental telepathy, mind reading or seeing the future.  I quickly discovered that this was not like trying to influence my dreams – I had to exert much more control over my subconscious mind in order to make it focus on what I was trying to do.   This required me to not just direct the subconscious mind with my conscious mind but to actually superimpose the two over each other so that I would have all the benefits of both.   

As with all of this, I did my own research and found that the two sides of the brain are connected by the corpus callosum – a sort of communications superhighway between the two hemispheres of the brain.  It turns out that this inter-hemispheric communication is vitally important to my efforts to superimpose the conscious mind onto my subconscious mind.  I discovered that being left handed, a musician and having worked in both artistic and math-related careers were all contributors to my corpus callosum being very efficient in letting me overlap my subconscious mind with my conscious mind. 

I can’t tell you how many hours it took but about a year later, in 2007, I was able to begin to control the essence of my subconscious.   I began to have some success when I began using a sleep sound machine that gave me 60 choices of various sounds to sleep by.  I began with white noise but soon found that something that had a background beat to it worked better.  To explore more sounds faster, I got two sound machines and put them on the left and right side of my bed for stereo sounds.  After a lot of fiddling with them, I found a range of frequencies that worked best and surprisingly, they worked best when they were not both set to the same tone.  My psychology professor friend told me this was called binaural beats and entrainment and that it is a well-developed technique to create infrasound inside the brain.  I read up on this subject and soon was tweaking my sound machines to give me exactly what I needed. 
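The binaural-beat idea is easy to reproduce in software: play one pure tone in each ear and the brain perceives a beat at the difference between the two frequencies.  Here is a minimal sketch using only Python’s standard library – the 200/207 Hz pair is just an example choice to land the beat in the theta range, not the exact settings I ended up using on my sound machines.

```python
import math
import struct
import wave

RATE = 44100          # samples per second

def binaural_beat(f_left, f_right, seconds, path):
    """Write a stereo WAV with one pure tone per ear.
    The listener perceives a beat at |f_left - f_right| Hz."""
    n = int(RATE * seconds)
    with wave.open(path, "w") as w:
        w.setnchannels(2)        # stereo: left ear, right ear
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(RATE)
        frames = bytearray()
        for i in range(n):
            t = i / RATE
            left = int(32000 * math.sin(2 * math.pi * f_left * t))
            right = int(32000 * math.sin(2 * math.pi * f_right * t))
            frames += struct.pack("<hh", left, right)
        w.writeframes(frames)

# 200 Hz in one ear, 207 Hz in the other: a 7 Hz (theta-range) beat.
binaural_beat(200.0, 207.0, 1.0, "beat.wav")
```

Played through headphones, neither ear actually hears a 7 Hz tone – the beat exists only inside the brain, which is the whole point of the technique.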

I also saw an advertisement for a relaxation aid that put flashing LED lights on the inside of some sunglasses so that you could see them with your eyelids closed.  I bought a pair and experimented with them.  I found that different combinations of sounds and flashing lights gave me different capabilities while in my dream state.  With some, I was very pensive and analytical and used them for problem solving and designing.  With other combinations, I was more aggressive and physical and could imagine building things and making decisions better.  I also found that when I used the right combination of lights, I could induce sleep almost immediately and by putting the lights on a timer, I could end my dreams on a set schedule.  My psychology professor friend told me this is called hypnagogia. 

After more than a year of experimentation, I began to have some success but frankly, at first it was a big letdown.  I was able to feel my senses while asleep so that I could feel and hear the environment around my sleeping body.  At the time, I thought “no big deal”.  Almost by accident, I decided to see if I could feel my heart and suddenly I was actually inside my heart – looking at it beating.  I could touch it and feel it and even change its speed of beating, at will.  I continued this exploration all over my body – lungs, brain, ears, eyes, etc.  I tested it by going to my lower left leg where I had been shot.  Some tiny bullet fragments are still in my leg and I could actually see them.  I later matched up where I “saw” them to an X-Ray of my leg and they agreed.  I had actually seen the inside of my real leg…and heart and lungs, etc. 

Then I began trying to push the capabilities of my subconscious to do some of those wild paranormal tricks.  Using all my tricks and aids I have learned over the past 6 years, I was able to eventually totally dominate the subconscious mind with my conscious thought.  The results were amazing.  Every time I tried it I discovered something new I could do or experience.  Here are just a few… 

Hyper-Senses – I was immediately aware of my surroundings at a level far beyond anything I could have imagined.  I could feel the variations in the thread covering of my bed sheets.  I could hear air moving in the room.  I could hear sounds from outside the house like I was using a massive hearing aid.  I could smell individual objects in the room like the wood dresser, the wool rug and the wall paint.  I could feel the vibrations of the furnace and after a while, I could feel the vibrations of cars driving on the road a block away from my home.   I learned to open my eyes without waking out of my dream state and found I could see colors and detail I never thought possible.  And what was even more amazing was that I could do this selectively so that it was not all flooding my senses at once.  It was like listening to someone talking in a rock concert – only easier – I could tune in or tune out whatever I wanted to concentrate on.  

As I explored these hyper senses, I realized that when I sensed something by smell or touch or hearing, I almost immediately imagined an image of that thing.  When a truck drove down the road, I heard it first and then smelled it and then felt it and as I added each new sensation, I enhanced my image of it until I was convinced it was the garbage truck.  I then looked out the window and – for the first time – visually saw that it was indeed the garbage truck.  I did this with hundreds of things until I could “see” well beyond my visual range. 

X-Ray vision.  Well not exactly x-ray vision, more like having a selective virtual reality vision.  Because I know what it looks like on the other side of a wall, I could look at the wall and then look thru it.  Because of my heightened senses, I could hear, feel, smell and sense things in the next room even if they had been moved since I was in the room last.  It was as if the 3-dimensional qualities of my vision were expanded to include hearing, smell and feelings.  I could “see” – in my mind’s eye – my dog as he was walking across the next room until he walked into my door and was visible. 

Remote viewing (RV) – or at least something like RV.  My hyper senses and this x-ray vision combined to give me the ability to look outside my house and then into other houses and down the street.  The limit seemed to be about ¼ mile but it was amazing.  I spent hours nosing around inside my neighbors’ houses; listening to their conversations and watching them do stuff. 

These were all just variations on the hyper senses that my mind was giving me but it was rapidly going beyond that.  I added these improved senses to the near-photographic memory of what I could now do and see and added in a nearly perfect recall of my own memory to create some really bizarre capabilities.  For instance, I discovered I could revisit a moment in my past and see it in detail even beyond what I experienced the first time the event happened.  I remembered going camping with my Dad when I was 12.  I could smell the pine trees and hear the nearby river and feel the heat of the sun on my face.  I could imagine the scene to the point of seeing in 3-D and being able to walk around the scene and see myself back then.  I could play it like a videotape and slow or stop the actions to study and see things that my senses recorded but that I had not remembered in all these years.  It was utterly amazing. 

I was in an accident in which I was hit by a school bus that ran a stop sign.  I cannot remember anything that happened that day from before I got up in the morning until I woke up in the hospital room.  I used these newfound senses of the overlapped mind to revisit that day and follow myself up to the accident.  Just as the accident happened, my visual memory of it went blank but I still was mentally recording the sounds and smells and feelings of the events around me.  I was able to recreate the scene as if I was watching from above looking down on the scene and saw what happened to me.  I heard the bus driver crying when she thought she had killed me.  I felt the ambulance guys working on me and the ride in the gurney to the hospital and the loud sound of the siren blasting.  It was incredible to relive those long forgotten moments.  These discoveries kept me busy for weeks but I wanted to push the limits even more.   

Remembering my earlier studies of brain waves called delta, theta, alpha, beta and gamma and that they are related to various thought patterns in the brain, I wanted to see if I could sense these waves.  I invited a friend over for a late night dinner.  She is a sound sleeper and I have had her over many times before so I knew I could count on her being a good test subject.  She slept in the guest room and after several glasses of wine, I knew she would sleep thru almost any noise. 

I waited until she was well asleep and then entered my dream state and quickly moved into my conscious control state.  While remaining asleep, I walked into her room and sat in a chair by her bed.  I then concentrated on visualizing her brain waves.  To my surprise, I began to see a totally new dream scene.  It took me a minute or two to realize I was seeing her dream.  I was looking at her dream as if it was a 3-D movie being projected on a screen in front of me.  She was dreaming of swimming on a beach.  Within a minute or two, her dream changed to a completely different dream.  Now she was back in college and was studying for a test with some friends.  In another minute or two, it changed again.  I sat there for two hours watching her dream snippets come and go. 

She was a good friend and so I did not try to enter her dreams or influence them in any way.  I somehow thought that was not a good thing to do to a friend.  However, in the weeks that followed, I tried this same effort on several other people – mostly neighbors that I knew very little.  Soon, I was able to do it without leaving my bed.  I could use my remote viewing and hyper senses to “see” the person’s brain waves and then see their dreams.  I even was able to do this with my neighbor but the range of this was very limited – to about 100 feet.  Once, I even went to a motel and roamed all of the nearby rooms and explored their dreams. 

What I found, over time, is that most people dream in very short dreams that often seem disjointed and illogical.  It sometimes seemed like I was seeing a 2-minute excerpt of a longer movie and the film kept skipping and jumping thru scenes.  Only about once in 25 or 30 tries did I find someone that would dream a coherent story line that I could follow and was interesting.  This disappointment prompted me to move into other new directions. 

I figured if I could use my hyper-senses on someone asleep in the next room or next house; why not try it while they are awake.  I began tweaking my sound machines and lights and searching for the right conditions.  I found that noise from them talking or watching TV or music overwhelmed the sensation of vibrations and made it impossible to read anything.  It was like trying to feel your heart beat while riding on a motorcycle.  I began looking for someone that liked to sit in a quiet room and remain awake.  I began buying books for all my neighbors in the hopes they would sit and read.   

Unfortunately, this too was a disappointment.  I found that most people that are reading are paying attention to what they are reading – duh!  The effect was that the image in their mind was simply a dreamlike version of what they were reading and not of any thoughts that they might have about their lives or living.  Once in a while, I would get someone that would project themselves into the story but even then, the story moved slowly – at the speed they were reading – and had very little personal insights.  It was time to move on to other efforts. 

We are now up to about a year ago.  I have been doing this for more than ten years and have found it has almost dominated my life.  I started it after I retired and I retired with enough money to never have to work again but I have actually done little else.  I have not interacted with my neighbors and I have done relatively little community service or helping of others beyond sending money to charities.  I decided I wanted to get active again outside of myself but I was not sure how or what I could do and I wanted to try to do something with my newfound skills.  Everything I could think of seemed like it could not bridge the gap between my mental world and the real world.  I could imagine and dream and sense all kinds of things and situations but there was no way to translate that into the real world.  So that became my goal – find some way to manifest my mental capabilities in the real world. 

I got out my old EEG again and began to experiment with the brain-to-computer interface again.  This time, I was in full control of my dream state, my subconscious and my body.  I quickly found I could create code sequence patterns in my delta waves.  I was able to consciously send patterns of two and three cycles of delta waves into the A-D converter and have the computer interpret them.  In my dream, I imagined that I created a typewriter that had a cord that ran first to my head and then to an antenna so that as I typed my brain waves were being “transmitted” out in the proper pattern for each letter.  I then programmed my computer to recognize about 40 different code patterns based on the old 5-bit Baudot codes that they used to use in TTY devices back in the 1950’s.  This gave me a way to type in my dreams but have the actual printing occur in the real world on a real computer. 
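To give a feel for the decoding side, here is a toy version of the lookup: each “character” is a pattern of delta-wave bursts, identified by how many cycles are in each burst.  The table below is a made-up fragment for illustration only – the real table had about 40 patterns, one for each letter and control code.

```python
# Illustrative decoder: patterns of delta-wave bursts (cycles per
# burst) mapped to characters.  This tiny table is invented for the
# example; the real mapping covered about 40 patterns.

CODE_TABLE = {
    (2,): "E",
    (3,): "T",
    (2, 2): "A",
    (2, 3): "N",
    (3, 2): "I",
    (3, 3): "O",
}

def decode(bursts):
    """Translate a list of burst patterns into text, silently
    skipping any pattern the table does not recognize."""
    return "".join(CODE_TABLE.get(tuple(p), "") for p in bursts)

print(decode([(3,), (2, 2)]))  # -> "TA"
```

The computer’s job was exactly this simple: count cycles per burst, look the pattern up, print the character.  All the hard work happened on the transmitting end – inside the dream.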

This is a simplified version of what took me a year to do but I eventually got it working.  I found that I could type in my dream state a whole lot faster than I can type in real life.  In fact, what you are reading right now has taken me about 5 minutes to type.  Several other stories on this blog were also typed this way.  I am going to start a novel next week and hope to have it done by the first of March (it is now Feb 15th). 

I hope you have enjoyed this description of my exploits over the past decade in lucid dreaming.  I can only say that if I can do it, anybody can.  Give it a try.    

Nuclear Attack Communications

While I was active duty Navy, I was involved in a lot of strategic communication studies that involved researching and computer modeling to find ways to guarantee reliable communications under all circumstances – including things like the worst possible weather, earthquakes, terrorists and nuclear attack.  We called it Communications Continuity.  The objective is to have assured communications in the trans- and post-attack phases of a nuclear war or other crisis.  That means that you cannot rely on any fixed installation since it will be bombed in the opening salvo.  Likewise, all of the satellites and fixed communications centers, including phone and computer lines and all of the major nodes on the internet, will be destroyed.

Despite all this, there are entire networks that are designed to survive and work even after a major attack.  The most reliable is low frequency radio in the ELF range – Extremely Low Frequency, way below the AM broadcast band.  These frequencies have two very good characteristics: they will punch thru the static and noise created by atom bomb blasts and they will penetrate into the water to reach submerged submarines.  Unfortunately, they also have two bad characteristics.  To make use of ELF effectively, you need a BIG antenna to receive and transmit and you need a whole bunch of power – like in the multiple megawatt range.

To receive ELF, subs use a trailing wire antenna that can drag behind the submerged sub by more than a mile, if needed.  Aircraft (like SAC bombers) have drop-down wires that can reach out 18,000 feet to snag an ELF signal.  Since these guys mostly receive only, they do not need the power of a megawatt transmitter to respond to these signals.  But someone has to have that power and a really big antenna.  It’s there, right under your nose and you have probably seen it and did not know it.

One of the backups to the backups that the military uses to send ELF messages is the power lines that normally deliver power to your homes and businesses.  By cutting these wires at two ends and making some other minor changes, they can turn a stretch of highway telephone-pole power wires into a very long ELF antenna.  This lets them avoid the tuning systems needed to pump all that power into a ¼ wavelength antenna or a shorter, less efficient antenna.

The power comes from two 18-wheeler trucks.  One has fuel and a small command post and the other is one huge generator – capable of creating about 20 megawatts of electrical power.  A third vehicle is usually an RV with the crew quarters and other support.  These three vehicles travel in teams around the US – constantly in motion – driving along routes that have been surveyed to make ideal ELF antennas.  They are all disguised as normal 18-wheelers and have all the fake papers to let them move among all the other truckers on the road.

At last count, there are 24 of these teams covering an area of 350,000 square miles from Alaska to Florida and all of Canada.  They never stop.  There are dozens of crews that are rotated out every 45 days at special bases where they can get equipment spares and run tests.

Next time you have a totally unexplained power outage, look for two 18-wheelers and an RV traveling together or near each other.  You might have just witnessed a test of the emergency communications network.

Think this is far-fetched?  Consider this: each military and intelligence service has an office dedicated to this subject, as well as several entire organizations (DIA, DISA, DSS, NRO, NSA, CSC, etc.), but the overall office within DoD is the Assistant Secretary of Defense for Command, Control, Communications and Intelligence (ASD-C3I).  There is also a newer office called the Assistant Secretary of Defense for Networks and Information Integration (ASD-NII).
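If you want to see why ELF needs antennas measured in miles of power line, the math is trivial: wavelength is the speed of light divided by frequency.  The 76 Hz figure below is an assumption for the example – it is the frequency publicly reported for the Navy’s Project ELF transmitters – and at that frequency even a quarter-wave antenna would be nearly a thousand kilometers long.

```python
# Why ELF antennas are enormous: wavelength = c / f.
C = 299_792_458.0          # speed of light, m/s

def wavelength_km(freq_hz):
    """Free-space wavelength in kilometers for a given frequency."""
    return C / freq_hz / 1000.0

f = 76.0                   # Hz - reported Project ELF frequency (assumed here)
lam = wavelength_km(f)
print(f"wavelength:   {lam:,.0f} km")       # roughly 3,900 km
print(f"quarter-wave: {lam / 4:,.0f} km")   # roughly 1,000 km
```

No practical structure comes close to a quarter wavelength at 76 Hz, which is why ELF systems settle for electrically tiny, grossly inefficient antennas and make up the difference with megawatts of transmitter power.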

About the Author!

My name is not important but it is an unfortunate fact of human nature to associate the credibility of ideas with the person that has them.  This usually serves us well in helping us screen good and bad information and to add doubt where it may be needed.  I cannot fault that logic; however, it has also led to a high level of resistance to new or innovative ideas over the course of history.  That resistance has resulted in delays of years or decades between the initial discovery by some obscure scientist or thinker and the general recognition of the value of their idea.  It is unfortunate that nearly every great idea in science dates back to just such a humble beginning.     

My career began when I got drafted into the Navy after spending three years at the Georgia Institute of Technology.  I had done well but I screwed up during a summer session and lost my “S” deferment.  After three years of nearly non-stop technical schools in the Navy, I qualified for the Navy Enlisted Scientific Education Program (NESEP) and they sent me to the University of New Mexico.  Between my GA. Tech, Navy and correspondence courses, I was given more than 280 credit hours toward various degrees but had to take additional courses to get any of them.  By carefully playing a game of class shuffle, I managed to stay at UNM for three years and left with four degrees – Math, Computer Science, Physics and Engineering.  Of course the Navy knew only about the last one.     

I did a lot of special sponsored courses in computer modeling of various energy projects and made a few findings that my professors took credit for.  I designed the currently used aircraft anti-collision system, I modeled a method of terrain mapping for navigation, I computed the ideal energy-conservation designs for numerous building materials – double and triple paned windows, wall insulation designs, injection foam, etc. and about 40 other similar projects.   

When I left, I was promoted to Lt. USN and told I could go to sea or go to the Navy’s Post Graduate School at Monterey, Calif.  Of course I went to PG school.  Thirty-eight months later, I left with a Master’s in Operations Research and a PhD in simulation and modeling (dynamic systems and game theory).  I would have been out earlier, but my thesis, which was classified (and still is), was one that they wanted me to complete.  It must have been well received, because it got me a job at NRL and I wound up moving about 10 miles away.  I got a place right at the end of the airport runway (ugh – what a lot of noise) but it was also right near the new golf course.

I worked there for a few years on some modeling of weather but found it very limiting, so I accepted an offer from a friend to move over to DARPA.  Here I was allowed to work in a number of areas, on my own or on targeted projects.  I worked on C3I automation, simulation of various communications issues, autonomous weapons systems, robotics and artificial intelligence.  I was eventually given free rein to work on any pet project I had.  That was fun, and I worked there for many years, or so it seemed.

So you see, I spent a lifetime in science and research – most of it in the military or as a government worker or contractor.  I often had the advantage of a virtually unlimited budget and very wide latitude on the range of my studies.  But my contribution was always taken for granted.  I was a paid employee and was named only as part of the “project team” or grouped among the “contributors”.  This was true even when I worked on the project alone or was the principal investigator or design lead.

Not getting credit for some design or device or mathematical analysis, especially when it was classified, was never a problem for me until I realized that being unknown meant that I was never peer reviewed for anything and therefore had little or no standing or credibility in the real-world scientific community.  In some very closed and classified circles, I am known to have solved problems and refined designs, but most of that will never be known by anyone outside of those closed groups and, for the most part, they don’t care.

I had always looked forward to the day that I could have stimulating dialog with fellow scientists about some of the great unknowns, but I have found that happens only when the people you are talking to think of you as a “fellow scientist.”  If they don’t, then they dismiss your views as not having been vetted or having any credibility.  Attempts to break that wall have proved useless, and I am never taken more seriously than the neighborhood high school science teacher.

My work was well received by DoD and I advanced in rank fairly fast, with two deep selections, one to Cdr.  That means I was promoted earlier than others of my year group.  In 1995, I was deep selected for Captain but told I had to stay in 3 more years to retire at that rank.  I spent 45 days of leave mulling over that decision but finally decided to stay and get the extra retirement pay.  In those last three years, I made a lot of industry contacts since I was the PM for several hush-hush acquisition programs.  When I retired in 1998, I got a bunch of medals and commendations that, with a dollar, would get me a cup of coffee.

I did not work for others at first but instead began working at home on an investment analysis and prediction modeling program.  It is based on analysis of past patterns, but it looks for a preselected level of confidence in the prediction of future events.  I decided to use it to examine the whole Y2K event and even put up a web site called Profit2000.  Among its other predictions, which were no great feat, my program predicted the rise and fall of the price of gold.  I took my own advice, took out short options on gold before Dec 1999 and put options after, and made a bundle.  I sold the predictive software to a Wall Street consulting firm that is still using it, but they made me promise not to tell anyone.  They prefer to let everyone think they are brainy experts rather than telling everyone that their investment advice is simply a computer program.

I got tired of that and started my own business modeling consulting agency.

I created some unique computer modeling tools for virtual product development (VPD) and business risk assessments.  One of them is a neural-net BPR/ABC model that uses Monte Carlo analysis of critical variables to do a stochastic analysis of process flow and work flow.  This model automatically creates an optimum business process flow for a business.  I sold the company and several large government contracts I had won and finally really retired in 2005.     
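For readers unfamiliar with the technique, here is a generic sketch of the Monte Carlo process-flow idea mentioned above.  It is NOT my model – the steps and the triangular (min, most likely, max) duration estimates below are invented purely for illustration:

```python
import random
import statistics

# Each process step's duration is drawn from a triangular distribution,
# and the whole flow is simulated many times to build a distribution of
# total cycle time.  Steps and numbers are illustrative only.
STEPS = [          # (min_hours, most_likely, max_hours)
    (1.0, 2.0, 4.0),   # intake
    (2.0, 3.0, 8.0),   # processing
    (0.5, 1.0, 2.0),   # review
]

def simulate_once(rng):
    """One pass through the flow: sum one random draw per step."""
    return sum(rng.triangular(lo, hi, mode) for lo, mode, hi in STEPS)

def simulate(n_trials=10_000, seed=42):
    """Return mean cycle time and an approximate 95th percentile."""
    rng = random.Random(seed)
    totals = [simulate_once(rng) for _ in range(n_trials)]
    return statistics.mean(totals), statistics.quantiles(totals, n=20)[18]

mean_t, p95 = simulate()
print(f"mean cycle time ~ {mean_t:.1f} h, 95th percentile ~ {p95:.1f} h")
```

The point of the stochastic approach is that you get a whole distribution of outcomes, not a single deterministic estimate, so bottlenecks and worst cases fall out of the analysis automatically.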

I then moved to the mountains near New Denver, BC, Canada.  Here I have built my dream house and am still adding on to it.  Most of the house is actually a cave that I expanded and fixed up, but I have several berm- and Earthship-style outbuildings that blend in with the surroundings, with living plants on most of the roofs.  With the abundant lake water to add to my wells and rain/snowmelt water, I can afford to use a lot of water to feed plants inside and out.  I wanted to make use of some of my ideas on energy and building, but I also did not want copycats or tourists coming to see what I have done.  From more than 100 feet away, it is hard to see most of my buildings, and I have about 30,000 square feet of enclosed space.  I like that.

I use geothermal heat pumps of my own design to do most of the heating and cooling – including keeping the snow off my walkways and driveways.  I use solar PVs to power most everything with some assist from a micro-hydro generator and a small 3 KW wind generator.  I also have my own designed solar thermal heater that gives me heat on any day with sunshine.  I cook up my own vehicle fuel and fly my own aircraft off the lake and dabble in some unique ultra-light designs.  One is an inflatable pontoon two-seater and the other is an ultra-light helo.  I have a huge barn in which I usually have dozens of science experiments going and a bank of top end PCs in a wireless network that blankets my entire property.  I use a yagi for satcomm and a parabolic to send a millimeter wave to cross the lake to link into the landline phones and a backup data link.    

All this gives me nearly total off-grid capability while also letting me keep in contact with anyone, anywhere.  My property is not easily accessible except by the lake, and I have that well monitored and protected.  I like the isolation, but I also have a place in Vermont and like traveling back and forth between these two retreats at different times of the year.  This may sound like I am a hermit who has given up on people, and there are times that I feel like that, but I actually have visitors who come up and stay in one of my cabins for up to several weeks at a time.  I allow them to come and go as they please, mostly because the nearest cabin is about 200 feet from my house and cave.  I have a few friends within the scientific community that I keep in touch with and with whom I enjoy exchanging ideas.  Most are former military co-workers, but a few are from academia, active duty and government agencies.  I just don’t have time to play political or social games with people who can’t be honest and forthright.

This web site is mostly my effort to shout at the moon and let off some of my frustration.  Some of the articles are just for fun, some are serious and some date back two decades to when I was trying to break out of my government cubicle by joining Mensa and getting some articles published.  I thought I was being careful, but I ended up getting pounced on by my bosses because I was still doing classified work and was not to draw attention to myself or my work.

As far as how you take these articles – I don’t care.  They are either true or not.  They are either of interest to you or not.  It should be obvious that some of them were written in fun, but I challenge you to figure out which ones, as I have backed up almost everything with sound scientific evidence or at least a scientific basis.  After 50 years of hard-core research, scientific analysis, experience and effort, they are presented here for MY pleasure and you can take it or leave it.

What I will tell you is that MOST of these articles and MOST of what is in each and every article is absolute fact and many are completely true!    Really!       Take it or leave it!! 

Clinton’s Dead Path

I did not write this or create it, but it shows how far our government has descended into the depths.

The following is a list of dead people connected with Bill and Hillary Clinton:

James McDougal – Clinton’s convicted Whitewater partner, died of an apparent heart attack while in solitary confinement.  He was a key witness in Ken Starr’s investigation.  A frail, sick old man was put into solitary lockup without his heart attack medicine, despite having a record of several previous attacks and a prescription for the medicine from the prison doctor.

Mary Mahoney – A former White House intern, murdered July 1997 at a Starbucks coffee shop in Georgetown.  The murder happened just after she was to go public with her story of sexual harassment in the White House.

Vince Foster – Former White House counselor and colleague of Hillary Clinton at Little Rock’s Rose law firm.  Died of a gunshot wound to the head, ruled a suicide.

Ron Brown – Secretary of Commerce and former DNC Chairman.  Reported to have died by impact in a plane crash.  A pathologist close to the investigation reported that there was a hole in the top of Brown’s skull resembling a gunshot wound.  At the time of his death, Brown was being investigated and spoke publicly of his willingness to cut a deal with prosecutors.

C. Victor Raiser II & Montgomery Raiser – Major players in the Clinton fund-raising organization, died in a private plane crash in July 1992.

Paul Tulley – Democratic National Committee Political Director, found dead in a hotel room in Little Rock, September 1992.  Described by Clinton as a “dear friend and trusted advisor.”

Ed Willey – Clinton fund-raiser, found dead November 1993 deep in the woods in Virginia of a gunshot wound to the head.  Ruled a suicide.  Ed Willey died on the same day his wife, Kathleen Willey, claimed Bill Clinton groped her in the Oval Office in the White House.  Ed Willey was involved in several Clinton fund-raising events.

Jerry Parks – Head of Clinton’s gubernatorial security team in Little Rock.  Gunned down in his car at a deserted intersection outside Little Rock.  Parks’s son said his father was building a dossier on Clinton and allegedly threatened to reveal this information.  After he died, the files were mysteriously removed from his house.

James Bunch – Died from a gunshot suicide.  It was reported that he had a “black book” containing names of influential people who visited prostitutes in Texas and Arkansas.

James Wilson – Found dead in May 1993 from an apparent hanging suicide.  He was reported to have ties to Whitewater.

Kathy Ferguson – Ex-wife of Arkansas Trooper Danny Ferguson, found dead in May 1994 in her living room with a gunshot to her head.  It was ruled a suicide even though there were several packed suitcases, as if she were going somewhere.  Danny Ferguson was a co-defendant along with Bill Clinton in the Paula Jones lawsuit.  Kathy Ferguson was a possible corroborating witness for Paula Jones.

Bill Shelton – Arkansas State Trooper and fiancé of Kathy Ferguson.  Critical of the suicide ruling of his fiancée, he was found dead in June 1994 of a gunshot wound, also ruled a suicide, at his fiancée’s gravesite.

Gandy Baugh – Attorney for Clinton friend Dan Lassater, died by jumping out a window of a tall building, January 1994.  His client was a convicted drug distributor.

Florence Martin – Accountant subcontractor for the CIA, related to the Barry Seal Mena Airport drug-smuggling case.  Died of three gunshot wounds.

Suzanne Coleman – Reportedly had an affair with Clinton when he was Arkansas Attorney General.  Died of a gunshot wound to the back of the head, ruled a suicide.  Was pregnant at the time of her death.

Paula Grober – Clinton’s speech interpreter for the deaf from 1978 until her death December 9, 1992.  She died in a one-car accident.

Danny Casolaro – Investigative reporter, investigating Mena Airport and the Arkansas Development Finance Authority.  He slit his wrists, an apparent suicide, in the middle of his investigation.

Paul Wilcher – Attorney investigating corruption at Mena Airport with Casolaro and the 1980 “October Surprise.”  Found dead on a toilet June 22, 1993 in his Washington, DC apartment.  Had delivered a report to Janet Reno three weeks before his death.

Jon Parnell Walker – Whitewater investigator for Resolution Trust Corp.  Jumped to his death from his Arlington, Virginia apartment balcony August 15, 1993.  Was investigating the Morgan Guarantee scandal.

Barbara Wise – Commerce Department staffer who worked closely with Ron Brown and John Huang.  Cause of death unknown.  Died November 29, 1996.  Her bruised, nude body was found locked in her office at the Department of Commerce.

Charles Meissner – Assistant Secretary of Commerce who gave John Huang special security clearance; died shortly thereafter in a small plane crash.

Dr. Stanley Heard – Chairman of the National Chiropractic Health Care Advisory Committee, died with his attorney Steve Dickson in a small plane crash.  Dr. Heard, in addition to serving on Clinton’s advisory council, personally treated Clinton’s mother, stepfather and brother.

Barry Seal – Drug-running pilot out of Mena, Arkansas.  Death was no accident.

Johnny Lawhorn Jr. – Mechanic who found a check made out to Clinton in the trunk of a car left in his repair shop.  Died when his car hit a utility pole.

Stanley Huggins – Suicide.  Investigated Madison Guarantee.  His report was never released.

Hershell Friday – Attorney and Clinton fund-raiser, died March 1, 1994 when his plane exploded.

Kevin Ives & Don Henry – Known as the “boys on the track” case.  Reports say the boys may have stumbled upon the Mena, Arkansas airport drug operation.  A controversial case in which the initial report attributed the deaths to falling asleep on a railroad track.  Later reports claim the two boys had been slain before being placed on the tracks.  Many linked to the case died before their testimony could come before a grand jury.

THE FOLLOWING SIX PERSONS HAD INFORMATION ON THE IVES / HENRY CASE:

Keith Coney – Died when his motorcycle slammed into the back of a truck, July 1988.
Keith McMaskle – Died stabbed 113 times, November 1988.
Gregory Collins – Died from a gunshot wound, January 1989.
Jeff Rhodes – Shot, mutilated and found burned in a trash dump in April 1989.
James Milan – Found decapitated.  Coroner ruled death due to natural causes.
Jordan Kettleson – Found shot to death in the front seat of his pickup truck in June 1990.
Richard Winters – A suspect in the Ives / Henry deaths.  Killed in a set-up robbery, July 1989.

THE FOLLOWING CLINTON BODYGUARDS ARE DEAD.  It has to be assumed that these bodyguards were not all frail old men.  They most often are fit, active types in good health.

Major William S. Barkley Jr.
Captain Scott J. Reynolds
Sgt. Brian Hanley
Sgt. Tim Sabel
Major General William Robertson
Col. William Densberger
Col. Robert Kelly
Spec. Gary Rhodes
Steve Willis
Robert Williams
Conway LeBleu
Todd McKeehan

If you consider that the 12 guards all died of natural causes, the totals are:

Murdered – 15
Accidents – 10
Suicides – 11
Natural – 13

There are only 49 people here, but let us assume that this is from a large list of people that the Clintons knew of or had some contact with.  In order for these numbers to be statistically average for the US, the number of people the list would have to represent is:

Murdered – For 15 to be average, the total sample group would have to be 690,000
Accidents – For 10 to be average, the total sample group would have to be 2,500,000
Suicides – For 11 to be average, the total sample group would have to be 69,179 (see below)
Natural – For 13 to be average, the total sample group would have to be 3,172 (see below)

(source is the FBI Uniform Crime Reports)

The “Black Arts Guide” describes the three methods most often used to hide a murder as being:

1.  private airplane crashes (7 of the 10 accidents listed above);
2.  single-car crashes (3 of the 10 accidents listed above); and
3.  gunshot suicides (7 of the 11 suicides listed above).

Of the 11 suicides listed, 9 were under “unusual” circumstances or had unsolved aspects.  Most were in the age group of 45-54.  For this group, the National Center for Health Statistics lists a national suicide rate of 1 in 6,289.  For 11 to be average, the total sample group would have to be 69,179.

Of the 13 deaths by natural causes, 7 were under 50 years old and 2 were under 40.  Overall, there is a 1 in 119 chance of death in any given year from natural causes; however, this figure changes dramatically with age and location.  An age-corrected risk for these 13 would be closer to 1 in 244, so for 13 to be average, the total sample group would have to be 3,172.

If Clinton would lie and then order a missile attack, killing hundreds, to divert attention away from his personal issues, would he let the above listed people get in his way?  Apparently not…
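The back-of-the-envelope arithmetic above is simple: if a cause of death strikes 1 person in N per year, then for a count of deaths to be statistically average, the sample group must be the count times N.  A quick sketch using the rates quoted in the list’s own figures:

```python
# Given an annual per-person rate of 1 death in N, how large a sample
# group would make the observed count statistically average?
def implied_sample_size(observed_count, one_in_n):
    """Population for which observed_count deaths is the expected value."""
    return observed_count * one_in_n

# Rates as quoted above: suicide 1 in 6,289 (ages 45-54, NCHS),
# natural causes 1 in 244 (age-corrected).
print(implied_sample_size(11, 6289))  # suicides -> 69,179
print(implied_sample_size(13, 244))   # natural  -> 3,172
```

The same multiplication recovers the murder and accident figures from their implied rates.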

Dark Matter’s Dark Secret

We all know how Edwin Hubble made his measurements of the movement of distant objects and concluded that the universe was expanding.  This caused researchers to wonder whether the universe would expand forever (open), re-collapse (closed) or reach some future steady state (flat).  It also implied that the universe must have been smaller in the past, which supported the big bang theory.  What did happen and what will happen depend a lot on the average density of the universe and the exact rate of expansion.

We are in the middle of the Sloan Digital Sky Survey, which will quantify these values in greater detail, but we already know enough to see that the visible matter in the universe is not enough to account for what we observe.  In fact, the “missing mass problem” has been around since 1933 and follows from the application of the virial theorem to galactic movements.
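The 1933 missing-mass result came from exactly this kind of virial estimate: balancing kinetic and gravitational energy (2⟨T⟩ + ⟨U⟩ = 0) gives a cluster mass of roughly M ≈ σ²R/G, where σ is the velocity dispersion and R the cluster radius.  A minimal sketch with illustrative round numbers (not values from this article):

```python
# Order-of-magnitude virial mass estimate: M ~ sigma^2 * R / G.
# Input figures are illustrative round numbers for a rich cluster.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg

def virial_mass(sigma_m_s, radius_m):
    """Virial theorem mass from velocity dispersion and radius."""
    return sigma_m_s**2 * radius_m / G

sigma = 1.0e6        # ~1,000 km/s line-of-sight velocity dispersion
radius = 3.086e22    # ~1 megaparsec in metres

mass = virial_mass(sigma, radius)
print(f"virial mass ~ {mass / M_SUN:.1e} solar masses")
```

When this dynamical mass comes out tens or hundreds of times larger than the mass implied by the visible light, you have the missing-mass problem.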

As science and math have done many times before, we speculate on a solution and then go looking for proof that the solution exists.  So we created the “dark matter component” and its counterpart, dark energy.  Since this is entirely an imaginary creation, we have given it properties that fit current observations – chiefly, that it is entirely invisible even though it makes up 96% of the universe.  It has no emissions or reflections of any electromagnetic radiation, so we have no idea what it looks like.

Despite supporting this imaginary construct, cosmologists and astronomers will admit that they cannot suggest what extrapolation of any known physics could account for something that is responsible for so much mass in the universe and yet cannot be detected by any normal observation.

Our only inference that it is there is from observed gravitational effects on visible matter.    In other words, we have a hole in a theory that we have filled with something that cannot be seen or detected by any means.  We also have a detected gravitational anomaly in a group of formulas that predict various galactic motion.  We have neatly solved both problems by linking them to unknown and imaginary attributes of dark matter.   

Ah but math and observations, in this case, are not consistent because we do not see the same level of correlation between galactic rotation curve anomalies and the gravitational implications from galaxies that have a large visible light component.  We also do not see a uniform distribution of dark matter throughout space or even within galaxies.  The ratio of the detected gravitational anomalies attributed to dark matter does not seem to be consistent based on the quantity of stars in a galaxy.  

In fact, in globular clusters and in galaxies such as NGC 3379, there seems to be little or no dark matter at all, and other objects have been discovered (VIRGOHI21) that are almost entirely dark matter.  Another recent study showed that there are 10 to 100 times fewer small galaxies than predicted by the dark matter theory of galaxy formation.  We can’t even agree on whether there is any dark matter in our own Milky Way galaxy.

So this imaginary solution has become a unifying concept among most astrophysicists but only if you keep allowing for a long list of inconsistencies and logical anomalies that get dismissed by saying that we don’t know what dark matter is. 

Fortunately, the flip side of cosmology is quantum physics, and scientists in that field have not been satisfied with expressions of human ignorance; they have tried to seek out a plausible answer.  Unfortunately, they have not had a lot of success when solution candidates are put under intense analysis.  Direct detection experiments such as DAMA/NaI and EGRET have mostly been discounted because they cannot be replicated (shades of Cold Fusion).  The neutrino was a candidate for a while but has mostly been discounted because it moves too fast.  In fact, most relativistic (fast-moving) particles cannot be used because they do not account for the clumps of dark matter observed.  Studies and logic have ruled out baryonic particles, electrons, protons, neutrons, neutrinos, WIMPs and many others.

Up to this point, all this is historical fact and can be easily confirmed.  What we have is a typical scientific anomaly in which a lot of people really fear thinking outside the box: the box of traditional and institutional thinking.  All of the particle solutions sought so far simply look at the heaviest or most massive particles known and ask whether they could be dark matter.

Since the dark matter itself is imaginary, why not expand the possibilities to some truly wild ideas?  What if there are black holes the size of atoms but with the gravitational pull of a pound of lead?  Would the solar wind of bright galaxies blow such small objects away from the galaxy center?  That would account for the reduced dark matter detected in high light-to-mass galaxies.  The math to include or dismiss this idea could be applied very quickly, and perhaps that has been done.  But there is an even better candidate.

Dark Matter and even Dark Energy can be accounted for by the presence of the Higgs Field and the Higgs Boson.   This takes the dark matter search out of the realm of finding an object or particle that exhibits unseen mass and puts it into the realm of being caused by the force of gravity itself. 

The Higgs field has a non-zero vacuum expectation value that permeates every place in the universe at all times and plays the role of giving mass to every elementary particle, including the Higgs boson itself.  If the detected gravitational anomalies are caused by changes in the source of mass itself, then a number of the problems and inconsistencies of dark matter are resolved.

The Higgs field can impart mass to other elementary particles and thus, by extension, to the macro-matter that eventually creates the observed massive gravitational fields around certain galaxies.  The variation in the effects of dark matter might simply reflect a non-homogeneous distribution of the Higgs field itself or of the particles that it acts upon.

This is somewhat supported by what we know the Higgs field does to elementary particles.  For instance, the top quark is an elementary particle that is about the same size as the electron, yet it has over 300,000 times the mass of the electron as a result of the effects of the Higgs field.  We do not know why this occurs, but it is firmly established that the Higgs field does NOT impart mass based on size, atomic weight, volume, spin or any other known characteristic of the fundamental particles we currently know about.  It isn’t a big stretch of the imagination to think that there might be other kinds of interactions between the Higgs field and other matter that are not linear or homogeneous.
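The top-quark/electron mass ratio mentioned above is easy to check against published mass values; the figures below are approximate current values, not taken from this article:

```python
# Approximate published masses (GeV/c^2): top quark ~172.7 GeV,
# electron 0.511 MeV.  These are external reference values.
TOP_QUARK_MASS_GEV = 172.7
ELECTRON_MASS_GEV = 0.000511

ratio = TOP_QUARK_MASS_GEV / ELECTRON_MASS_GEV
print(f"top/electron mass ratio ~ {ratio:,.0f}")
```

The ratio comes out in the neighborhood of 340,000, despite both particles being point-like in the Standard Model.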

Some of the components of the Higgs field, specifically the Goldstone Bosons, are infraparticles which interact with soft photons which might account for the reduced dark matter detected in high light-to-mass galaxies.  I still like the idea that the high light-to-mass galaxies have a low dark matter component because of the solar wind blowing away the particles that the Higgs field acts on or in the thinning of the Higgs field itself.  What specific component of the “solar wind” that is responsible for this outward pressure or push is unknown but such an action does fit observations.

Since we have confirmed the existence of the Higgs boson and the Higgs field, it is perhaps possible to predict what kind of repulsive force it might impart but an extension of the scalar field theory for dark energy might imply applicable consequences for electroweak symmetry breaking in the early universe or some variation of the quintessence field theory.  What we call vacuum energy, the quintessence field, dark energy and the Higgs Field might actually be all variations of the same theme.  One interesting coincidence is that all of these have been speculated to have been created at about the same time period after the big bang event, i.e. very early on in the expansion phase.

The bottom line is that we have far too many reasonable and logical opportunities to explore alternative concepts to explain the gravitational anomalies of the virial theorem to galactic movements without resorting to the distraction of creating a terra incognita label for our lack of imagination and knowledge.

Accidental Weapon Discovery!

A new C-130 prop leads to an incredible weapon of awesome power!  I could get put in jail or worse for revealing this, but someone has to stop them before they kill a lot of people.  I have to start from the beginning to make you understand how some very complex physics is now being developed into the worst weapon man has ever conceived.  It all started with a recent discovery by the NRL that has the military scientists scrambling to explain what they have seen.  Here is the whole story:

The venerable C-130 is a time-tested four-engine turboprop aircraft design that we have simply not been able to improve upon.  It is rugged, can land on almost anything, carries tons of weight and has very long range.  Each new model upgrade has been given a new letter, and we are now up to C-130Vs.  There is, however, a new prototype being tested at NRL.  The soon-to-be C-130W was to have only two high-efficiency, high-torque, high-bypass turboprop engines driving a new synchronized two-bladed 21-foot prop through a new fluid version of a variable ratio transmission (FVRT).  This two-engine, two-bladed prop would seem to be a throwback in design, since one of the previous NRL C-130 prototypes used a ten-bladed prop, but this new blade is very special and, in combination with the FVRT, was expected to be a better design.

The prop blade is very thin and lightweight – only about 4 inches at its widest point – and spins at an incredible 45,000 RPM because the high-torque engines are able to achieve incredible gear ratios with the FVRT.  The blade telescopes outward from the hub after takeoff to reach its full 21 feet without banging into the ground.  Inside the blade, a thin carbon-fiber cable, controlled within the hub, runs down the center of the blade to hold it to the hub and to allow it to be extended while still flexing as the speed increases.
At the tips and along the blade of the prop are tiny electric circuits that send back data about speed, air temperature, humidity, air density and other conditions, letting the computers tweak the blade shape and engine speeds.  The reason all this is important will become clear shortly.

The light weight, shape and design make for a blade that can withstand the very high speeds and still function.  In fact, it is the blade that actually sets the peak speed of the prop, not the engine.  The mix of torque, FVRT and air density causes the blade to spin up to a maximum speed and then hold a constant speed for a given set of conditions.  As the air thins at high altitude, the prop spins faster, but eventually it is supposed to reach a maximum speed – or so they thought.  Let me explain.

As with most props, it does not move the aircraft by pushing air out the back like a jet but by pulling it forward using the forward horizontal “lift” of the prop blade.  At slow speeds, the shorter blade twists to give a greater angle of attack to bite into more air, but at its fully extended length and highest speeds, something different happens.

At full speed, the fully extended tips of the blades are moving at 19 miles per second – that is more than 67,000 miles per hour, or about 1.1% the speed of light.  It has long since passed Mach 1; in fact, it is moving at Mach 61!  In actuality, Mach is meaningless at these speeds because, if the plane is not moving very fast, the blade spins in a near vacuum.  The air does not have enough time to close in on the space where the blade was before the other blade spins into the same space.  In fact, the maximum blade speed was initially thought to occur while the aircraft was on the ground, the blade spinning faster in the very low air density of this near vacuum.  Then, as the aircraft began to move faster after takeoff, the rotation actually slowed and then sped up again.

Once it is moving, the blade actually works better and better as the speed of the aircraft increases, and so far they have not found a limit on how fast it will go.  They have taken a test C-130 up to just under Mach 1 but were afraid that its wings and large tail could not structurally withstand the turbulence of transonic flight.  Newer carbon-fiber swept wings are being developed, along with a new nose and tail.

The new engines, blade and transmission are really interesting, but that is not the cause of all the buzz.  What is causing a stir is the unusual speed the prop has attained in full-speed flight.  The blade started going much faster than anyone had predicted.  In fact, in a test in 2002, as its speed passed 67,000 RPM, it was shut down manually by the pilot for fear of it flying apart.
They have spent nearly two years trying to figure out why it is doing this. 
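Taking the story’s own figures at face value – a 21-foot blade measured hub to tip, turning at 45,000 RPM – the quoted tip speed can be checked with a few lines of arithmetic:

```python
import math

# Blade-tip speed for a prop of given radius (hub to tip) and RPM.
# Figures below are the ones quoted in the story, not measurements.
def tip_speed_mph(radius_ft, rpm):
    """Tip speed in miles per hour: circumference x revs, converted."""
    ft_per_min = 2 * math.pi * radius_ft * rpm
    return ft_per_min * 60 / 5280

v = tip_speed_mph(21.0, 45_000)
print(f"tip speed ~ {v:,.0f} mph")  # close to the 67,000 mph quoted
```

The circumference-times-revolutions calculation does land near the 67,000 mph the story cites, so at least that part of the arithmetic is internally consistent.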

High-speed rotation was the intent of the design all along, so the bearings, shaft, hub and mounting are all very robust and were found to be safe up to 100,000 RPM in test-bed tests.  Simulated testing based on materials strength indicates that it should be safe up to 150,000 RPM, but the entire airframe has not been tested to that level of vibration or speed.

Recently, a special test-bed flight platform was fixed to the nose of an old KC-135 (Boeing 707 jet airframe).  A specially designed blade was built with extra strength and mounted on a new engine with an FVRT.  The blade was made so it telescoped down in size until it was extended to its full length in flight.  This allowed the KC-135 to take off.  It also allowed for the testing of props longer than 21 feet.

Once at an altitude of 55,000 feet – normally too high for most prop planes – the new prop and engine were started and gradually run up to maximum.  It passed 50,000 RPM within a few minutes and continued to climb in speed for more than an hour.  The KC-135 increased in speed with the assist of the prop until the pilot shut off his four jet engines and let the aircraft be driven only by the one prop on the nose.  The speed dropped initially but eventually returned to its former level and kept accelerating, passing 500 knots within another hour.  High-speed cameras were aimed at the prop from several angles on the KC-135’s wings, and blade-tip data was recorded.  After flying more than 4 hours at over 500 knots with a steady increase in prop speed, the pilot brought all the jet engines back online, spun down the prop and retracted it for landing.

Upon examination of the camera footage and blade-tip data, they think they have discovered the reason for the over-speed prop.  The end of the prop was slowly changing shape in a totally unexpected way.  More specifically, it appears to be bending and flowing backward, as if it were trailing a ribbon behind the blade tip.
As the speed increases, the blade appears to get shorter and shorter while the part that trails behind gets longer and longer – as if it were bending backward. 

This visual evidence is counter to everything known about metal in the presence of this much centrifugal force.  The spinning blade should have such a huge outward pull from centrifugal force that nothing should be bending – especially at right angles to the prop and parallel to the line of flight.

Finally an explanation was found: the tip of the prop is traveling at over 1% of the speed of light, and that gives it a different temporal (time) relationship to the rest of the aircraft.  Time slows down as you approach the speed of light, so the tip of the prop is actually in an earlier time than the rest of the aircraft.  Even at 1% of the speed of light, there is a measurable and visible difference.  I have not done the math, but the Lorentz-Einstein math says that relativistic time stops at the speed of light, so if we assume that we get 1% time-space distortion at 1% of the speed of light, we can see and calculate the prop distortions.

At 500 MPH, our analysis shows that the tips of the prop are moving at around 100,460 MPH in a circle while also moving forward at 733 feet per second.  If the very tip is actually not in the same time as the rest of the prop, and time is distorted by 1%, then it will appear to be about 7.3 feet (88 inches) behind (slower than) the rest of the prop.  In other words, what the high-speed cameras showed was a prop that curved backward so that the very tip was stretched and bent back by 88 inches.  As the speed of the prop and the speed of the aircraft increased, the length of the curvature and the amount of the prop blade involved increased.

What the cameras were seeing is the same prop, but as it appeared slightly in the past and therefore slightly behind where the prop is now.  As more and more of the prop reached higher and higher speeds, it appeared to be further and further behind, giving the false impression that it was bending.  It isn't actually bending; we are just seeing it where it was in a recent past time.
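The quoted tip-speed numbers can be sanity-checked with plain rim kinematics (v = 2πr × revolutions per second).  The blade radius is not stated in the account; a telescoped radius of about 28 feet is my own assumption, chosen because it reproduces the quoted figure and is consistent with a prop "longer than 21 feet."  A short sketch:

```python
import math

def tip_speed_mph(rpm: float, radius_ft: float) -> float:
    """Rim speed of a blade tip: v = 2*pi*r * (revolutions per second)."""
    ft_per_s = 2 * math.pi * radius_ft * (rpm / 60.0)
    return ft_per_s * 3600 / 5280  # convert ft/s to MPH

# Hypothetical telescoped radius of ~28 ft (not stated in the account)
print(f"tip speed: {tip_speed_mph(50_000, 28.1):,.0f} MPH")  # near the quoted 100,460 MPH

# Forward motion: 500 MPH expressed in ft/s (the quoted 733 ft/s)
print(f"forward:   {500 * 5280 / 3600:.0f} ft/s")
```

Both quoted figures come out of straight unit conversions once a radius is assumed.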

But that is not all.  The extensive instrumentation of the test prop showed that it actually got easier to spin as it went faster.  The outward centrifugal forces should make the prop seem heavier, but the instruments showed it actually getting lighter as it spun faster.  That is contrary to what was expected, and an extensive study revealed why.  Relative to measurements taken in the present time – relative to the aircraft – there is a portion of the prop blade that is effectively missing because it is spinning in a different time!  The tip is effectively not there NOW because it has moved into an earlier time.  Since that part is missing as far as present-time measurements of torque, air resistance, friction, momentum, inertia and centrifugal force are concerned, the engine can spin the remaining part of the blade more easily.  As the engine sees less load, it can spin faster for the same amount of fuel, and as it goes faster, more of the blade moves into the past.  Essentially, as the prop speeds up, more and more of it moves into an earlier and earlier time – so the prop continues to go faster and faster.

But now it gets really weird.  Since the Lorentz-Einstein math says that relativistic space-time effects are constant and apply to everything, the guys at NRL hooked up a laser to the hub so that it pointed directly at a very tiny reflector attached at the very end of the prop.  The idea was to use this laser to measure the length distortions of the prop caused by the centrifugal forces and, to a lesser extent, the flex distortions.  Because of the dual carbon fiber cables inside the blade and the carbon fiber blade shell, it was not expected to show much distortion – and it didn't…or rather, not in the way they thought it would.
Despite the visual distortion of the blade's shape shown by the high-speed cameras and explained above, the laser showed that the blade was still straight, with no bend or distortion.  This confirmed the idea that the blade was not actually changing shape but was changing time.  In a few camera frames, higher humidity made the beam visible, and it showed that the laser beam bent in exactly the same way the metal prop did.  The beam curved backward and remained exactly parallel to the apparently bent prop blade.

Quite by accident, one of the reflectors broke off and the laser beam extended past the end of the prop – out into the air – but the cameras showed that it continued to bend until it disappeared completely.  At the time, this was seen as a curiosity, so a much more powerful laser was installed, the cameras were re-pointed, and the experiment was repeated.  This time the clearly visible beam curved back until it was parallel to the flight path, and then, at about 4,200 feet out, it just disappeared.  At that point, it was so far in the past that it effectively was not of this time and could not be photographed.

Observable evidence was limited at this point, so some of the findings were a function of calculations.  The laser power was increased to try to create a more visible beam.  Theoretically, this beam extended miles beyond the prop – when stopped, it was measured to be still strong and visible as much as 30 miles from the aircraft.  At the rotation speed of 50,000 RPM, the laser light at 30 miles was moving at 84.45% of the speed of light, meaning the beam was experiencing an 84% distortion of the space-time continuum – changing a number of its propagation properties.

The accepted theory of science is that light is made up of both waves (like the frequencies of the colors of the spectrum) and particles (photons, which have no mass – or so we thought).  The laser light was a single-frequency beam from a tunable distributed-feedback fiber laser with both thermal and piezoelectric control elements, giving a single frequency, wavelength and intensity.  Using such a beam of uniform intensity, high spatial purity and high conversion efficiency, we were able to use the light as a benchmark for precise spectral analysis.  What we expected were minor nano-level changes, but what happened was beyond anything we had imagined.
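The 84.45% figure is a straightforward consequence of rim kinematics: a point swept at 50,000 RPM on a 30-mile radius covers an enormous circumference every revolution.  A quick check of the arithmetic:

```python
import math

C_FT_PER_S = 983_571_056  # speed of light: 299,792,458 m/s expressed in ft/s

def swept_speed_fraction_of_c(rpm: float, radius_miles: float) -> float:
    """Fraction of light speed for a point swept in a circle of the given radius."""
    radius_ft = radius_miles * 5280
    v_ft_s = 2 * math.pi * radius_ft * (rpm / 60.0)
    return v_ft_s / C_FT_PER_S

print(f"{swept_speed_fraction_of_c(50_000, 30):.1%} of c")  # ~84%, matching the quoted 84.45%
```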
The time distortion created a cone-shaped vortex that extended back from the plane in both space and time – effectively blanketing the entire countryside with a virtually continuous flood of coverage from the beam.  The Doppler effect on the frequency of the laser light from the rotating beam altered the signal over the full range of rotation speeds, from the hub to the outermost limits of the beam – dispersing a beam of mixed frequencies that ranged from the broadcast light frequency up to cosmic-particle frequencies.  The trailing edge of the blade was also emitting the beam, but there the Doppler shift went down in frequency – from the broadcast light frequency, down through all the radio frequencies, down to ripples of induced direct current.  Essentially this cone-shaped beam produced a sonic-boom kind of coverage, but instead of sound, the landscape was bathed in electromagnetic (EM) radiation of virtually every frequency from DC to light and beyond.
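The up- and down-shifts described above would scale with the standard relativistic longitudinal Doppler factor, f_obs/f_src = sqrt((1±β)/(1∓β)).  A sketch of how large the shift gets at the fraction of c quoted for the beam:

```python
import math

def doppler_factor(beta: float, approaching: bool) -> float:
    """Relativistic longitudinal Doppler factor f_obs / f_src
    for a source moving at beta = v/c toward or away from the observer."""
    if approaching:
        return math.sqrt((1 + beta) / (1 - beta))
    return math.sqrt((1 - beta) / (1 + beta))

beta = 0.8445  # the fraction of c quoted for the beam 30 miles out
up = doppler_factor(beta, approaching=True)     # leading-edge shift upward
down = doppler_factor(beta, approaching=False)  # trailing-edge shift downward
print(f"up x{up:.2f}, down x{down:.3f}")
```

The two factors are reciprocal, so the up-shift and down-shift are symmetric about the broadcast frequency.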

It was quite by accident that we discovered that some of the emissions were in the X-ray and gamma-ray range (measured using Compton scattering) and that the ionizing frequencies were having an effect on almost everything.  Upon exploring this further, we could not measure the shortest and longest wavelengths with the equipment we had.  After some calculations, we estimated that we were creating frequencies in the vicinity of the Planck length.  In other words, we were artificially creating the radiation frequencies that normally exist within and between atomic particles.  We could measure down to around 10 picometers, but it was obvious that there was something else there.  The energy of these intense particles would normally be in the range of 100 keV, yet we were seeing them created by this cone of EM radiation without the benefit of a massive accelerator.

The effects of the pass-over of all these frequencies were startling.  Since there are harmonic frequencies for virtually everything in existence, and this plane was putting out every known frequency from DC to gamma rays and beyond, the destructive harmonic frequency of thousands, perhaps millions, of objects was reached.  In addition, the super-high frequencies of the leading-edge (compressed) wave front were bombarding everything with intense, high-energy ionizing radiation of a kind only rarely seen, in events like electron-positron annihilation and radioactive decay – on the order of 10 to 20 Sv (sieverts)!  After the fly-over (mostly in the Nevada desert north of Las Vegas), the ground under the flight path was found to contain no hard rocks or crystals – only sandstone and sedimentary rocks.  Anything hard or crystalline was shattered into pieces finer than sand – more like talcum powder.  Compounds were broken into their component atoms, and atomic bonds were destroyed within molecules.
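The ~100 keV figure is consistent with the 10-picometer measurement floor via the ordinary photon-energy relation E = hc/λ:

```python
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
J_PER_EV = 1.602176634e-19  # joules per electron-volt

def photon_energy_kev(wavelength_m: float) -> float:
    """Photon energy E = h*c / lambda, expressed in keV."""
    return H * C / wavelength_m / J_PER_EV / 1000.0

print(f"{photon_energy_kev(10e-12):.0f} keV")  # ~124 keV: hard X-ray / gamma territory
```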
Anything that could flex, bend or absorb the intense vibrations was mostly unaffected, but even most of the plants were wilted and limp.  Items that were hard broke apart.  The weak signal, large-area dispersal and very short exposure duration are the only things that kept everything from sand to mountains from crumbling.  Several military ground vehicles in the area were totally immobilized.  The steel in the vehicles was instantly weakened to the point of falling apart.  "It had the consistency and strength of a Ritz cracker," said one of the workers.  Even one man's diamond ring turned to fine, shiny dust.  The men were seriously injured by what appeared to be massive bleeding, but they are keeping all that very secret.  We in the electronics room suspect they were reacting to a massive dose of radiation in the X-ray and gamma-ray region.  I don't even want to think about what happened to their teeth and bones.

Now NRL is discussing how to control the beam and its effects, but they are struggling with the relativistic effects of the time-space distortions and the control of the laser beam.  I hope you will take this seriously.  I could get in a lot of trouble for posting this.  If you doubt any of this, check it out.  Do the math.  Read the Lorentz-Einstein math, or Doppler, or the aerodynamics of prop blades.  The FVRT is not commercially available yet but will be soon.  The two-bladed prop is still hush-hush but can be found in dozens of aerodynamics books.  What is dangerous is that this plane, using this beam and prop as a weapon, could be made to increase the beam power and destroy everything under it and to either side for miles – rocks, glass, buildings, people – turning everything into a fine powdery dust or an oozing mass of jelly.  We have enough weapons, and this is one that kills and destroys everything.  I can only hope that by letting people know what is happening, we can stop more deaths.

How to Travel at the Speed of Light…or Faster.

I wrote this for a dissertation for a Theoretical Physics seminar last year (Boston). It was peer reviewed but not printed or accepted for the seminar because my credentials were not sufficient to meet the standards of the seminar. I did, however, get a positive commentary from Julius Wess who was given an expanded copy of this article because of his interest in supersymmetry and his work at DESY.

Nov 16, 2004

Super String Theory Update

In articles that have appeared in various publications within the past few months (Jan to June 2004), the Super String Theory has been studied in much greater detail. The Super String Theory is an extension of the Standard Model and is also known as the Supersymmetry Standard Model (SSM). The SSM is generally accepted as the most accurate model to date; however, it fails to fully explain mass. It is also the one theory that could join quantum mechanics and Newtonian physics into a Unified Field Theory – a single model of all matter and energy in the universe.


One aspect of the SSM is that it predicts that there is a pervasive cloud of particles everywhere in space. A hundred years ago, this might have been called the “ether” but we now refer to this as the Higgs Field and the particles in this field are called Higgs Bosons.


This cloud of very small particles (Higgs Bosons) creates a field (the Higgs Field) that interacts with matter to create mass and gravity. The existence of this field is predicted by the Lagrangian function of the Standard Model and provides a description of the Higgs field as a quantum field. The Higgs field permeates all reality and the interaction between this field and other matter (electrons, other bosons, etc.) is what creates the effect we call mass. A dense solid interacts with this field more than a less dense solid creating all of the physical characteristics we attribute to mass – weight, momentum, inertia, etc.


The existence of the Higgs Field and the Higgs Boson was nearly proven in 2000, but the CERN synchrotron wasn't quite strong enough. Newer designs being built now should prove the concept within the next 3 to 5 years. To date, every testable prediction of the SSM and of the implications of the Higgs field has been shown to be true. Let us speculate for a moment on the possibilities.


The Higgs field's interaction with matter is what gives us the physical characteristics we attribute to mass – weight, momentum, inertia, etc. Imagine a jet aircraft flying in the air. As we move it faster, the air resists, so it takes a lot of energy to move a large (or heavy) object. If we go too fast, friction will heat up the surface of the wings, as it does with re-entry vehicles from space. With jet aircraft there is also a sound barrier: air builds up in front of the aircraft and resists further increases in speed. The energy needed to go faster increases significantly as you approach the sound barrier, and then, once you exceed it, the energy needed to fly faster drops back down.


When you remove the air – such as in space – you need very little energy from the engine to move very large objects or to go very fast. The resistance of the air and gravity are gone, and the smallest push or thrust will make even a very large object move or go faster.

This is all fact. Now let's speculate for a moment….


What if Michelson and Morley's 1887 experiment to find the "ether" medium that light traveled on was right, but on such a different scale that they failed to detect what they were looking for? After all, we still are not certain exactly what the Higgs field is – what if that is their ether? It would certainly explain the dual personality of light, acting like both waves and particles. It might also help explain dark matter and dark energy in the universe – but I digress. Let's speculate for a moment and imagine that the Higgs field does exist (not that big a stretch of the imagination) and that it is the medium that keeps light from going any faster than….the speed of light.


Now suppose you didn't have a Higgs field, or could turn it off. If the Higgs field is not there at all, there is no mass, no momentum, no inertia and no weight. If an object has no mass, or very little, then even a small amount of thrust will push it very fast. Using an ion engine, with its low but very fast thrust, you should be able to push a massless object rapidly to the speed of light and perhaps beyond.


Think about it. Other than the E=mc² formula and the math derived from observations made in a Higgs-field universe, why is there an upper limit on speed? Why can't we go faster than light IF the mass is low and the thrust is fast enough? Suppose, as with so many things in physics, the limits we have put on the possibilities come from the limits we have put on our thinking. In other words, if relativity is flawed or misunderstood with respect to its framing of the conditions of the math, then perhaps in a different frame of reference the math is wrong, and it does not take infinite energy to push an object to the speed of light.


Now back to facts. A careful read of special relativity will reveal that Einstein said the speed of light is constant when measured in any inertial frame. If, as has been speculated, the Higgs field is responsible for the physical characteristics we attribute to mass – weight, momentum, inertia, etc. – and it were possible to somehow remove the Higgs field, and therefore remove the inertial frame, then even the special theory of relativity says that light speed is no longer a constant.


Does it make any sense to even consider this perspective in light of all the experiments and math that have proven General and Special Relativity over and over again? The answer is yes, if you consider one thing. If the Higgs field permeates all reality, and the interaction between this field and other matter (electrons, other bosons, etc.) is what creates the effect we call mass, then how could we imagine any other frame of reference? In Einstein's time, the Higgs field was unknown, so the absence of the Higgs field could not even be contemplated. Now it can be. Or we can imagine frames of reference that might allow objects to alter, interact with, or somehow bypass the effects of the Higgs field. For instance…..


We have speculated that there are particles called tachyons that have a LOWER speed limit of the speed of light, based on the assumption that they have no mass. If a space ship could be made to have no mass, what would its speed limit be?


If the Higgs field is now acting like air and creating a barrier that appears to us to be the limiting factor in the speed of light, then perhaps faster than light travel is possible in the absence of a Higgs field.


How do we get the Higgs field to go away? I don't know, but in 25 or 50 or 75 years, we might. One hint of a possibility is a startling new find called two-dimensional light. The quanta are called plasmons, and they can be triggered when light strikes a patterned metallic surface. In March 2006, the American Physical Society gave demonstrations of plasmons and plasmonic science. They demonstrated, for instance, a plasmon microscope capable of imaging at scales smaller than the wavelength of the light used to view the object. This is like seeing a marble by firing beach balls at it.


Using a combination of metamaterials, nano-optics, microwaves and plasmonics, David Schurig and David R. Smith at Duke University and their British colleagues created something (in October 2006) that can cause microwaves to move along and around a surface. The effect is exactly like a Klingon cloaking device from Star Trek, or like Harry Potter's Cloak of Invisibility. This is not speculation; they have done it. Similar work at Imperial College London and SensorMetrix of San Diego is producing metamaterials capable of rerouting visible light, acoustic waves and other electromagnetic waves.


This is technology today. What will we be able to do in 50 years? Might we be able to pry open a hole in the Higgs field by bending or rerouting the field around an object? You might call this a warped Higgs field, or simply a warp field.


If we can warp the Higgs field in a controlled manner, then the temporal implications are another matter but travel at or faster than the speed of light might be possible.


OK, so how do you warp the Higgs field?


(Of course, we are way out in the realm of speculation, but isn't this the way that crazy things like black holes and supernovas were first imagined? If our minds can fathom the remotest possibility now, then perhaps when containment technology and power densities (energies) above the Fermi scale catch up with our imaginations, we can see if it works.)


One aspect of the Supersymmetry Standard Model (SSM) is that the strings all vibrate. In fact, every particle and field has a vibration frequency; it is one characteristic attributed to the "spin" of a particle. With sufficient energy, it may be possible to create harmonic vibrations in these particles. One aspect of the cloaking device mentioned above is that it uses destructive interference to null out the electromagnetic fields of one path and replace them with emissions from another path. This allows an object to be hidden while substituting sensor data that simulates the object not being there at all. The essence is that by controlling vibrations or frequency on the nanometer scale, they can manipulate light. Is it possible to extend this thinking to the Higgs field? If so, we might be able to manipulate the Higgs field on, in and around a surface.

It is hard to imagine that something like Bernoulli's principle of fluid flow would work at the scale of the Higgs field's interaction with a surface moving at high speed, but it serves as a possible analogy for an area of exploration. Actually, this is not all that unreasonable.


I have flown in some big military planes. The C-130 has an overhead escape hatch near the flight deck. When we flew in the South Pacific on a hot day, we would open this hatch and stick our heads out. There is something called laminar air-flow around the aircraft. As the plane moves through the air at 250 MPH, the air going past it is moving at about that speed (assuming no wind); however, in the last 6 to 8 inches as you move closer to the surface of the plane, the wind slows down (relative to the aircraft) due to friction with the surface. This speed drops rapidly in the last 3 or 4 inches, so that the wind passing over the fuselage within the last 2 inches is moving relatively slowly – about 30 to 60 MPH. You can stick your head up enough to get your eyes above the edge of the hatch, and it won't even blow your sunglasses off. I've done it.
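The slow-down near the skin can be sketched with the textbook 1/7th-power turbulent boundary-layer profile, u/U = (y/δ)^(1/7). This is a crude model of my own choosing, not a C-130 measurement: it captures the shape of the slow-down but not the very steep drop in the viscous sublayer right at the surface, so it will not reproduce the 30–60 MPH figure exactly.

```python
def boundary_layer_speed(freestream_mph: float, height_in: float,
                         thickness_in: float = 8.0, n: float = 7.0) -> float:
    """Textbook 1/n-power-law turbulent boundary-layer profile:
    u/U = (y/delta)**(1/n).  Crude: it ignores the viscous sublayer,
    where the speed falls much more sharply toward the surface."""
    y = min(max(height_in, 0.0), thickness_in)
    return freestream_mph * (y / thickness_in) ** (1.0 / n)

# Speed at a few heights above the skin, 250 MPH freestream, 8 in layer
for h in (0.5, 2.0, 4.0, 8.0):
    print(f"{h:>4} in above the skin: {boundary_layer_speed(250, h):5.0f} MPH")
```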


What if the Higgs field could be warped by sub-nano-level wave manipulations, or made to behave like the laminar air-flow around an aircraft – but on a space ship? And what if we helped it a little by moving that field out away from the ship's surface just a little? Here's how.


As with recent studies in the use of standing waves to isolate and manipulate objects, it may be possible to find a harmonic frequency that creates compressions and rarefactions in these particle vibrations or fields. If the surface of a vehicle were the emitter, and it was properly synchronized, the rarefactions of the standing wave of the harmonic vibrations would create a layer of empty space around the vehicle totally devoid of Higgs bosons and therefore having no Higgs field – i.e., a warp field.
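The standing-wave mechanism invoked here is ordinary superposition: two equal counter-propagating waves sum to 2A·sin(kx)·cos(ωt), with nodes that stay quiet at all times. A minimal sketch:

```python
import math

def standing_wave(a: float, k: float, w: float, x: float, t: float) -> float:
    """Sum of two equal counter-propagating waves:
    a*sin(kx - wt) + a*sin(kx + wt) = 2a*sin(kx)*cos(wt)."""
    return a * (math.sin(k * x - w * t) + math.sin(k * x + w * t))

k, w = 2 * math.pi, 5.0  # wavelength 1.0, arbitrary angular frequency
for t in (0.0, 0.3, 0.7):
    node = standing_wave(1.0, k, w, 0.5, t)   # k*x = pi: zero at every time
    anti = standing_wave(1.0, k, w, 0.25, t)  # k*x = pi/2: oscillates as 2*cos(wt)
    print(f"t={t}: node {node:+.3f}, antinode {anti:+.3f}")
```

The fixed nodes are what the passage calls "rarefactions": positions where the superposed field is permanently null.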


To understand the impact of reducing or eliminating the Higgs field, let’s look at an example.


Since light is made up of photons, and photons in motion carry momentum (they have no known rest mass), and since photons travel at the speed of light, turning on a flashlight in the absence of a surrounding Higgs field would instantly move the flashlight to the speed of light. The photons coming out of the beam carry momentum and are moving at the speed of light, all in one direction. The equal and opposite reaction is for the flashlight to move in the direction opposite to the way the beam is pointing. Normally, the very tiny momentum of the photons has very little effect on the relatively heavy flashlight, but if the flashlight had no mass at all, it would be like putting a rocket engine on a feather. The photons would have the effect of a powerful blasting rocket engine, accelerating the massless flashlight to speeds nearly equal to those of the photons moving in the opposite direction.
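The recoil half of this argument is standard physics: a beam of power P pushes back on its source with force F = P/c, which is why the effect is negligible for any ordinary flashlight but would dominate if the mass really were near zero. A sketch:

```python
C = 2.99792458e8  # speed of light, m/s

def photon_thrust(beam_power_watts: float) -> float:
    """Recoil force of an emitted light beam: F = P / c (newtons)."""
    return beam_power_watts / C

F = photon_thrust(1.0)  # a ~1 W flashlight-class beam
print(f"thrust: {F:.2e} N")              # a few nanonewtons
print(f"on 100 g: {F / 0.1:.2e} m/s^2")  # utterly negligible acceleration
```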


Since our imaginary vehicle with the vibrating surface also has no mass at all in the absence of a surrounding Higgs field, it could be any size and the same flashlight could also move it to the speed of light.

But what about the people?


Now you ask how you could possibly withstand the acceleration from zero to the speed of light within a second or less. That is easy if you also have no mass. Momentum, inertia and even gravity depend on an object having mass. If you have no mass, you cannot have inertia or momentum.


Imagine for a moment throwing a heavy ball. When you let go of the ball it continues in the direction it is thrown. Now imagine throwing a feather. Actually it is quite hard to throw a feather because the moment you let go of it, it will stop moving forward and drift slowly downward. It has so little mass that any inertia or momentum it has would be quickly overcome by air resistance – regardless of speed.


If you were in a giant space craft but had a device that could create the absence of a surrounding Higgs field, you would have no mass. No momentum and no inertia and no reaction to gravity. A 90 degree turn at 1,000 mph (or any speed) would not be a problem because you cannot experience the “g” forces that an object with mass would experience. Hence, it is possible to make these radical turns and fantastic accelerations without killing everyone.


If, as we have speculated, it is the Higgs field particles (bosons), like air particles, that artificially create what we see as the barrier to going faster than the speed of light, then when we shine that flashlight, the photons will travel faster than the speed of light until they enter the Higgs field, at which point they will slow back down to the speed of light. Since we are using the light (photon) thrust in the absence of a surrounding Higgs field, the flashlight might also accelerate the imaginary massless vehicle with the vibrating surface to speeds faster than the speed of light.


Alternatively, imagine a warp field creating a massless vehicle that is powered by the graviton-beam engine described earlier in this report. If you can control this warp field, you can create any degree of mass you like. So you tune it to have the mass of a feather and then tune the graviton-beam to have the attractive or repulsive force of a planet-size object or perhaps the force of a black hole. Now you have as much power as can be obtained and controlled trying to move an object at speeds greater than the speed of light.

The September 2002 Jupiter event allowed Ed Fomalont of the National Radio Astronomy Observatory in Charlottesville, Virginia to prove that gravity’s propagation speed is no greater than lightspeed. This is because gravity, so one theory says, interacts with the Higgs field as a direct result of the Equivalence Principle in the context of Lorentz symmetry, and so it can be said that the nature of the gravity field can be attributed to the Higgs-Goldstone field. This has been postulated from several math and experimental directions and is generally accepted as fact.

The idea that the Higgs-Goldstone boson may account for gravity and mass is what makes some kind of warp field a possible solution for faster-than-light travel. Note that this approach does not rely on the deformation of space-time, wormholes, multi-dimensional space, or even violations of the equations of general relativity. Remember that Einstein's math was based on an inertial frame, and this proposition removes that frame of reference.

General relativity (GR) explains these features by suggesting that gravitational force (unlike electromagnetic force) is a geometric effect of curved space-time, in which the effects of the space-time distortion propagate at light speed. Problems with the causality principle also exist for gravitational radiation in this connection, such as explaining how the external fields between binary black holes manage to continually update without benefit of communication with the masses hidden behind event horizons. These causality problems would be solved, without any change to the mathematical formalism of GR but only to its interpretation, if gravity were once again taken to be a propagating force of nature in flat space-time, with the propagation speed indicated by observational evidence and experiments. Such a change of perspective requires no change in the assumed character of gravitational radiation or its light-speed propagation.

Although faster-than-light force propagation speeds do violate Einstein's special relativity (SR), they are in accord with Lorentzian relativity, which has never been experimentally distinguished from SR – at least, not in favor of SR. Indeed, far from upsetting much of current physics, the main changes induced by this perspective are beneficial to areas where physics has been struggling, such as explaining experimental evidence for non-locality in quantum physics, the dark-matter issue in cosmology, and the possible unification of forces. Recognition of a light-speed Higgs-field propagation of gravity, as indicated by recent experimental evidence, may be the key to taking conventional physics to the next plateau.

Although certainly in the realm of wild speculation, it is still not beyond imagination, nor in conflict with proven science, that the graviton beam engine described in another article, in combination with a massless vehicle wrapped in a warped Higgs field, could achieve speeds well in excess of light.


As crazy as this sounds, it is completely consistent with our present knowledge of physics. No, it is not proven, but it is not disproven, and even in its speculative form it can be seen as compliant with existing math and theories.


The missing element is a sufficient energy source to manipulate Higgs bosons and a control mechanism to create the harmonic vibrating surfaces. It is easy to imagine that in 50 or 100 years we will have the means to do this.


It is also easy to imagine that a civilization on a distant planet that began its life a few million years before we did, could easily have resolved these problems and created devices that can be used in interplanetary travel.

Intergalactic Space Travel

Sometimes it is fun to reverse-engineer something based on an observation or description.  This can be quite effective at times because it not only offers a degree of validation or contradiction of the observation, it also forces us to brainstorm and think outside the box.

As a reasonably intelligent person, I am well aware of the perspective of the real scientific community with regard to UFOs.  I completely discount 99.95% of the wing-nuts and ring-dings that espouse the latest abduction, crop circle or cattle mutilation theories.  On the other hand, I also believe Drake's equation about life on other worlds, and I can imagine that what we find impossible, unknown or un-doable may not be so for a civilization that got started 2 million years before us – or maybe just 2 thousand years before us.  Such speculation is not the foolish babbling of a space cadet but rather reasoned thinking outside the box – keeping an open mind to all possibilities.

In that vein, as well as with a touch of tongue in cheek, I looked for a topic on which to try my theory of reverse-engineering that would test its limits.  With all the hype about the 50th anniversary of Roswell and the whole UFO fad in the news, I decided to try this reverse-engineering approach on UFOs and the little green (gray) men that are supposed to inhabit them.

As with most of my research, I used Plato to help me out.  If you don’t know what Plato is, then go read my article on it, titled, Plato – My Information Research Tool.

Here goes:


What is the source of their Spacecraft Power? 


Again, with the help of Plato, I researched witness reports from all over the world. It is important to get them from different cultures to validate the reports.  When the same data comes from across cultural boundaries, the confidence level goes up. Unfortunately, the pool of contactees includes a lot of space cadets and dingalings, which compounds the validation problem.  I had to run some serious research to get at a reliable database of witnesses.  I found that the consistency and reliability of the reports seemed to increase as the witnesses’ credit ratings, home prices and/or tax returns went up. When cross‑indexed with a scale of validity based on the witnesses’ professions and activities after their reports, my regression analysis came up with a projected 93% reliability factor for a selected group of 94 witnesses.

The descriptions they have in common are these:

The craft makes little or no noise.  It emits a light or lights that sometimes change colors.  There is no large blast of air or ejected rocket fuel.  Up close, witnesses have reported being burned, as if sunburned.  The craft is able to move very slowly or very fast and can turn very quickly.  The craft is apparently unaffected by air or the lack of it.

We can also deduce that: the craft crossed space from another solar system; they may not have come from the closest star; their craft probably is not equipped for multi‑generational flight; there may be more than one species visiting us.

What conclusions can be drawn from these observations?

If you exclude a force in nature that we have no knowledge of, then the only logical conclusion you can come to is that the craft use gravity for propulsion.  Feinberg, Feynman, Heinz Pagels, Fritzsch, Weinberg, Salam and lately Stephen Hawking have all studied, described or supported the existence of the gauge boson with a spin of two called the graviton.  Even though the Standard Model, supersymmetry and other theories argue over issues of spin, symmetry, color and confinement, most agree that the graviton exists.

That gravity is accepted as a force made up of the exchange of fundamental particles is a matter of record.  The Weinberg‑Salam theory of particle exchange at the boson level has passed every unambiguous test to which it has been submitted.  In 1979, Weinberg and Salam received the Nobel Prize in physics for their model.

 Repulsive Gravity:

We know that mass and energy are really the same, that there are four fundamental interactions, and that the interactions take place by particle exchange.  Gravity is one of these four interactions.  If we can produce a graviton, we can control it and perhaps alter it, in the same way we can produce a positron through the interaction of photons of energy greater than 1.022 MeV with matter.  The positron is antimatter, similar to an electron but with a positive charge.  Positrons were observed as early as 1932.
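That 1.022 MeV threshold is no accident: it is exactly twice the electron's rest energy, since the photon must supply the full mass of the electron-positron pair it creates.  A quick back-of-the-envelope check, using the standard 0.511 MeV electron rest energy:

```python
# Pair production: a photon can only create an electron-positron pair
# if it carries at least the combined rest energy of both particles.
ELECTRON_REST_ENERGY_MEV = 0.511  # standard value for m_e * c^2

threshold_mev = 2 * ELECTRON_REST_ENERGY_MEV
print(f"pair-production threshold: {threshold_mev:.3f} MeV")  # 1.022 MeV
```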

It seems logical that we could do the same with gravitons.  Gravity is, after all, the only force for which a repulsive counterpart has never been observed, and yet it doesn’t appear to be so very different from the other three fundamental interactions.

Einstein and Hawking have pointed out that gravity can have a repulsive force as well as an attractive force.  In his work with black holes, Hawking showed that quantum fluctuations in an empty de Sitter space could create a virtual universe with negative gravitational energy.  By means of the quantum tunneling effect, it can cross over into the real universe. Obviously, this is all mathematical theory, but parts of it are supported by observed evidence.  The tunneling effect is explained by quantum mechanics and the Schrödinger wave equations and is applied in current technology involving thin layers of semiconductors.  The de Sitter‑Einstein theory is the basis of the big bang theory and current views of space‑time.

The bottom line is that if we have enough energy to manipulate gravitons, it appears that we can create both attractive and repulsive gravitons.  Ah, but how much power is needed?

 Recipe to Make Gravity

We actually already know how to make gravitons.  Several scientists have described it.  It would take a particle accelerator capable of about 10 TeV (10 trillion electron volts) and an acceleration chamber about 100 km long filled with superconducting magnets.

The best we can do now is with the CERN and Fermilab synchrotrons.  In 1989, the Tevatron at Fermilab reached 1.8 TeV.  The Superconducting Super Collider (SSC) that was under construction in Ellis County, Texas would have given us 40 TeV, but Congress killed the project in October 1993.  With the SSC, we could have created, manipulated and perhaps altered a graviton.

 We Need A Bigger Oven

The reason we are having such a hard time doing this is that we don’t know how to create such particle accelerators except with big SSC-style projects.  Actually, that’s not true.  What is true is that we don’t know how to create them except with big SSC-style projects SAFELY.  A nice nuclear explosion would do it easily, but we might have a hard time hiring lab technicians to observe the reaction.

What do you think we will have in 50 or 100 or 500 years? Isn’t it reasonable to assume that we will have better, cheaper, faster, more powerful and smaller ways of creating high-energy sources? Isn’t it reasonable to assume that a civilization that may be 25,000 years ahead of us has already done that?  If they have, then it would be an easy task to create gravitons out of other energy or matter and to concentrate, direct and control the force to move a craft.

 Silent Operation

Now let’s go back to the observations.  The movement is silent.  That fits: gravity-based propulsion involves no thrust from a propellant.  I imagine the gravity engine to be more like a gimbaled searchlight, the beam being the attractive or repulsive graviton beam, with a shield or lens to direct it in the direction they want to move.

 Sunburns from the UFOs

How about the skin burns on close witnesses, as if by sunburn? OK, let’s assume the burn was exactly like sunburn, i.e., caused by ultraviolet light (UVL).  UVL is generated by transitions in atoms in which an electron in a high‑energy state returns to a less energetic state by emitting an energy burst in the form of UVL.  Now we have to get technical again.  We also have to step into the realm of speculation, since we obviously have not made a gravity engine yet.  But here are some subjects that show a remarkable degree of coincidence between the high-energy control needed for the particle accelerator and the observed sunburn effects.

The BCS theory (Bardeen, Cooper & Schrieffer) states that in superconductivity, the “quantum‑mechanical zero‑point motion” of the positive ions allows the electrons to lower their energy state.  The released energy is not absorbed as heat, implying it is not in the infrared range.  More recently, the so‑called high-temperature ceramic and organic superconducting compounds have also been based on electron energy-state flow.  Suppose a by‑product of using superconductors in their graviton particle accelerator is the creation of UVL?

Perhaps the gimbaled graviton-beam engine is very much like a light beam.  A maser is the microwave cousin of the laser: it emits microwave energy that is coherent, with a single wavelength and phase.  Such coherency may be necessary to direct the graviton beam, much like directing the steering jets on the space shuttle for precision docking maneuvers.

A maser’s energy is made by raising electrons to a high-energy state and then letting them jump back to the ground state.  Sound familiar?  Photon energy is the only difference between the microwave process and the UVL process; both sit on the same electromagnetic spectrum, with UVL at much higher energy. Suppose the process is less than perfect, or that it has a fringe-area effect that produces UVL at the outer edges of the energy field used to create the graviton beam.  Since the Greys would consider it exhaust, they would not necessarily shield it or even worry about it.
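For scale, the photon energies involved can be computed from E = hc/λ.  This is just a sketch; the 1 cm microwave and 300 nm UVL wavelengths are assumed typical values, not figures from any report:

```python
# Photon energy E = h*c / wavelength, expressed in electron volts.
H = 6.626e-34   # Planck's constant, J*s
C = 3.0e8       # speed of light, m/s
EV = 1.602e-19  # joules per electron volt

def photon_energy_ev(wavelength_m):
    return H * C / wavelength_m / EV

microwave = photon_energy_ev(1e-2)    # a typical 1 cm microwave
uv = photon_energy_ev(300e-9)         # near-UV at 300 nm
print(f"microwave photon: {microwave:.2e} eV")  # ~1e-4 eV
print(f"UV photon:        {uv:.2f} eV")         # ~4 eV
```

The two differ by four or five orders of magnitude, so any UVL "exhaust" would have to come from a much more energetic part of the process than the microwave-like beam itself.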

 But it has got to GO FAST! 

Finally, we must discuss speed.  The nearest star is Proxima Centauri at about 1.3 parsecs (about 4.3 light years).  The great globular cluster Omega Centauri is about 16,000 light years away, and the nearest large galaxy is Andromeda at about 2.5 million light years.  Even at the speed of light, these distances are out of reach for a commuter crowd of explorers.  But just as the theory of relativity shows us that matter and energy are the same thing, it shows that space and time are one and the same.  If space and time are related, so is speed.  This is another area that can get really technical, and the best recent reference is Hawking’s A Brief History of Time.  In it he explains that it may be possible to travel from point A to point B by simply curving the space‑time continuum so that A and B are closer.  In any case, we must move fast to do this kind of playing with time and space, and the dominant force at these scales in the universe is gravity.  Let’s take a minor corollary:

 Ion Engine

In the mid-’60s, a new engine was invented in which an electrically charged ion stream formed the reaction mass for the thrusters.  The most thrust it could produce was about 1/10th HP, with a projected maximum of 1 HP if improvements to the design continued.  It was weak, but its Isp (specific impulse, a rating of efficiency) was superior.  It could operate for years on a few pounds of fuel.  It was speculated that if a Mars mission were to leave Earth orbit and accelerate using an ion engine for half the distance to Mars, then decelerate for the second half, it would arrive 5 months sooner than if it had not used the engine.  The gain came from the high-velocity exhaust of the ion engine giving a small but continuous gain in speed.

Suppose such a small engine had 50,000 HP and could operate indefinitely.  Acceleration would be constant and rapid.  It might be possible to get to .8 or .9 of c (80% or 90% of the speed of light) over time with such an engine.  This is what a graviton engine could do.  At these speeds, relativistic effects would kick in.  We now have all the ingredients.
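The relativity here is standard textbook material.  As a sketch, assume a constant-thrust drive holding a steady 1 g of proper acceleration (my assumed figure, not anything derived from the witness reports) and apply the relativistic velocity formula v = at / sqrt(1 + (at/c)²):

```python
import math

C = 3.0e8   # speed of light, m/s
G = 9.81    # one Earth gravity, m/s^2 (assumed thrust level)

def velocity(t_seconds, a=G):
    """Coordinate velocity after constant proper acceleration a for time t."""
    at = a * t_seconds
    return at / math.sqrt(1 + (at / C) ** 2)

YEAR = 365.25 * 24 * 3600  # seconds in a year
for years in (0.5, 1, 2, 5):
    v = velocity(years * YEAR)
    print(f"{years:>4} yr at 1 g -> {v / C:.3f} c")
```

At a constant 1 g, the craft passes 0.7 c within a year and is near 0.9 c after roughly two years, exactly the ".8 or .9 of C" regime described above, and the formula never lets v exceed c.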

Superstring theory and other interesting versions of the space‑time continuum and space‑time curvature are still in their infancy.  We must explore them in our minds, since we do not have the means to experiment in reality.  We make great gains when we have a mind like Stephen Hawking’s working on the ideas.  We lose so much when politicians like the Bushes (Sr. or Jr.) stop projects like the SSC.  We can envision the concept of travel and the desire and purpose, but we haven’t yet resolved the mechanism.  The fact that what we observe in UFOs is at least consistent with some hard-core, leading-edge science is encouraging.

This is one subject that it really surprises me we haven’t begun serious research into.  A lot of theoretical work has already been done, and the observed evidence is at least consistent with the math.

Alien Life Exists

October 13, 1998

I want to thank you for letting me post your article about gravity shielding that appeared in the March ‘98 WIRED magazine.  Your comments on my article about lightning sprites and the blue-green flash are also appreciated.  In light of our on-going exchange of ideas, I thought you might be interested in some articles I wrote for my web forum on “bleeding edge science” that I hosted a while back.  Some of these ideas and articles date back to the mid-’90s, so some of the references are a little dated, and the software I use now is a major improvement over what I had then.

What I was involved with then can be characterized by the books and magazines I read: a combination of Skeptical Inquirer, Scientific American, Discover and Nature.  I enjoyed the challenge of debunking some space cadet who had made yet another perpetual-motion machine or yet another 250-mile-per-gallon carburetor, both claiming that the government or big business was trying to suppress their inventions.  Several of my articles were printed on the bulletin board that pre-dated the publication of the Skeptical Inquirer.

I particularly liked all the far-out inventions attributed to one of my heroes – Nikola Tesla.  To hear some of those fringe groups, you’d think he had to be an alien implant working on an intergalactic defense system.  I got more than one space cadet upset with me by citing real science to shoot down his gospel of zero-point energy forces and free energy.

Perhaps the most fun is taking some wing ding with a crazy idea and bouncing it against what we know in hard science.  More often than not, they use fancy science terms and words that they do not really understand to try to add credibility to their ravings.  I have done this so often, in fact, that I thought I’d take on a challenge and play the other side for once.  I’ll be the wing nut and spin a yarn about some off-the-wall idea, but I’ll do it in such a way that I really try to convince you it is true.  To do that, I’m going to use everything I know about science.  You be the judge of whether this sounds like a space cadet or not.



Are They Really There?

Life is Easy to Make:

Since the Stanley Miller experiment in 1953, we have, or should have, discarded the theory that we are unique in the universe.  The production of organic building blocks of life, including amino acids, has been shown to occur in simple mixtures of hydrogen, ammonia, methane and water when exposed to an electrical discharge (lightning).  The existence of most of these components has been frequently verified by spectral analysis of distant stars but, of course, until recently we couldn’t see the stars’ planets.  Based on the most accepted theories of star and planet formation, most star systems would have a significant number of planets with these elements and conditions.

 Quantifying the SETI

A radio astronomer, Frank Drake, developed the equations that were the first serious attempt to quantify the number of technical civilizations in our galaxy.  Unfortunately, his factors were very ambiguous, and various scientists have produced numbers ranging from 1 to 10 billion technical civilizations in just our galaxy.  Such a formula is referred to as unstable, or ill‑conditioned.  There are mathematical techniques to reduce the instability of such equations, and I attempted to use them to quantify the probability of the existence of intelligent life.

I approached the process a little differently.  Rather than come up with a single number for the whole galaxy, I decided to relate the probability to distance from Earth.  Later, I added directionality.

Using Drake’s basic formulas as a start, I added a finite stochastic process using conditional probability. This produces a tree of event outcomes for each computed conditional probability.  (The conditions being quantified were those in his basic formula: the rate of star formation; the number of planets in each system with conditions favorable to life; the fraction of planets on which life develops; the fraction of planets that develop intelligent life; the fraction of those that evolve technical civilizations capable of interstellar communications; and the lifetime of such a civilization.)

I then layered one more parameter onto this by scaling the probability of each tree path with distance, using an inverse-square relation.  This added a conservative estimate of the increasing probability of intelligent life as the distance from Earth increases and more stars and planets are included in the sample.

 I Love Simulation Models

I used the standard values used by Gamow and Hawking in their computations; however, I ignored Riemannian geometry and assumed a purely Euclidean universe.  Initially, I assumed the standard cosmological principles of homogeneous and isotropic distributions.  (I changed that later.)  Of course, this produced thousands of probable outcomes, but by using a Monte Carlo simulation of the probability distribution and the initial computation factors of Drake’s formula (within reasonable limits), I was able to derive a graph of the probability of technical civilizations as a function of distance.
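A rough sketch of this kind of Monte Carlo pass over Drake's factors is shown below.  The parameter ranges are illustrative guesses for the example only, not the values from my actual model, and the conditional-probability tree and distance weighting are omitted:

```python
import random

# Illustrative uniform ranges for Drake-style factors (guesses, not data):
# star formation rate, fraction of stars with planets, habitable planets
# per system, fraction developing life, fraction developing intelligence,
# fraction becoming communicative, and civilization lifetime in years.
RANGES = {
    "R_star": (1.0, 10.0),
    "f_p":    (0.1, 1.0),
    "n_e":    (0.1, 5.0),
    "f_l":    (0.01, 1.0),
    "f_i":    (0.001, 1.0),
    "f_c":    (0.01, 1.0),
    "L":      (100.0, 1_000_000.0),
}

def sample_N(rng):
    """Draw one Drake-equation outcome N by multiplying sampled factors."""
    n = 1.0
    for lo, hi in RANGES.values():
        n *= rng.uniform(lo, hi)
    return n

def monte_carlo(trials=100_000, seed=42):
    rng = random.Random(seed)
    draws = sorted(sample_N(rng) for _ in range(trials))
    median = draws[trials // 2]
    frac_ge_one = sum(n >= 1 for n in draws) / trials
    return median, frac_ge_one

median, frac = monte_carlo()
print(f"median N: {median:.1f}, fraction of runs with N >= 1: {frac:.2f}")
```

Each trial draws one value for every factor and multiplies them out; the point is the spread of outcomes across thousands of trials, not any single number.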

 But I Knew That

As was predictable before I started, the graph is a rising, non‑linear curve, converging on 100% if you go out far enough in distance.  Even though the outcome was intuitive, what I gained was a range of distances with corresponding probabilities of technical civilizations.  Obviously, the graph converges to 100% at infinite distance, but what was really surprising is that it is above 99% before you even leave the Milky Way galaxy.  We don’t have to go to Andromeda to have a very good chance of there being intelligent life in space.  Of course, that is not so unusual, since our galaxy may have about 200 billion stars and some unknown multiple of planets.

Then I Made It Directional

I toyed with one other computation.  The homogeneous and isotropic universe used by Einstein and Hawking is a mathematical convenience that allows them to relate the structure of the universe to their theories of space‑time. These mathematical fudge‑factors are not consistent with observation at relatively small distances from Earth, out to the limits of what we can observe, about 15 billion light years.  We know that there are inhomogeneities, or lumps, in the stellar density at these relatively close distances.  The closest lump is called the Local Group, with its 22 galaxies, and it sits on the edge of a supercluster of 2,500 galaxies.  There is an even larger concentration, called the Great Attractor, that may involve tens of thousands of galaxies.

By altering my formula, I took into account the equatorial-system direction (right ascension and declination) of the inhomogeneous clustering.  Predictably, this just gave me a probability of intelligent life based on a vector rather than a scalar measure.  It did, however, move the distance for any given probability much closer, in the direction of clusters and superclusters.  So much so that at about 351 million light years the probability is virtually 100%, and at only about 3 million light years the probability is over 99%. That is well within the Local Group of galaxies.

When you consider that there are tens of billions of stars and galaxies within detection range of Earth, and some unknown quantity beyond detection, the numbers get staggering: the total number of stars has been estimated at a 1 followed by 21 zeros, more than all the grains of sand in all the oceans, beaches and deserts in the entire world.  And in each of the billions of galaxies, there are billions of stars!  Now you can begin to see that the formula to quantify the number of technical civilizations in space results in virtually 100% no matter how conservative you make the input values.  It can do no less than prove that life is out there.

Alien Life

I presented the following to a Mensa conference on the paranormal (at Malvern) as a sort of icebreaker, tongue-in-cheek fun discussion.  It turned into the most popular (unofficial) discussion at the conference and created more than two years of follow-on discussions.


January 11, 1998

Sometimes it is fun to reverse-engineer something based on an observation or description.  This can be quite effective because it not only offers a degree of validation or contradiction of the observation, it can also force us to brainstorm and think outside the box.

As a reasonably intelligent person, I am well aware of the perspective of the real scientific community with regard to UFO’s.  I completely discount 99.5% of the wing-nuts and ring-dings that espouse the latest abduction, crop circle or cattle mutilation theories.  On the other hand, I also believe Drake’s formulas about life on other worlds and I can imagine that what we find impossible, unknown or un-doable may not be for a civilization that got started 2 million years before us – or maybe just 2 thousand years before us.  Such speculation is not the foolish babbling of a space cadet but rather the reasoned thinking outside the box – keeping an open mind to all possibilities.

In that vein, as well as with a touch of tongue in cheek, I looked for some topic on which to try my theory of reverse-engineering, one that would test its limits and also test the limits of Plato.  (Plato is the name of my automated research tool.)  With all the hype about the 50th anniversary of Roswell and the whole UFO fad in the news, I decided to try this reverse-engineering approach on UFOs and the little green (gray) men that are supposed to inhabit them.

What I found was quite surprising.

Who are the Aliens and Where do they come from? 


1.      I began by first verifying that the most common description of aliens (GREYS) has a high probability of being accurate.  I collected data from all over the world using keyword searches of newspaper stories going back several years and then ran some cross-checks on those who did the reporting.  I discarded any eyewitnesses who had any previous recorded sightings or were connected to any organization that supported or studied UFOs.  Of the 961 left, I ran a Monte Carlo analysis on the statistical chances that they had contact or communicated with other UFO people or with each other.  I then did a regression analysis on their descriptions and the circumstances of their sightings.  All this filtering left me with a small sample of only 41 descriptions, but I was much more confident that I had as credible a group of “witnesses” as I could find.

2.      The surprising result was a 93% correlation (coefficient of correlation) indicating that what they described was the same or very similar and that they were reporting the truth as they knew it.  The truth, in this case, is measured against a baseline or reference description of the classical or typical alien.  When I did this, I found that the group was so consistent as to have a collective 91% reliability factor compared to the baseline description.  That is very high; just ask any lawyer.  The assumption here is that the reference description and the eyewitnesses are telling the truth.  If we consider that these 41 descriptions came from countries all over the world, in some cases from areas without mass-media news services, it is more implausible to imagine that they all conspired or collaborated than that they told the truth.

3.      I also believe in evolution and that its basic concepts are common throughout the universe.
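The consistency scoring in step 2 can be sketched like this: encode each witness description as a checklist of baseline GREY features and score the agreement.  The feature names and the two sample reports below are entirely hypothetical, shown only to illustrate the mechanics:

```python
# Hypothetical sketch: score each description against a baseline
# "classic GREY" feature checklist and compute mean agreement.
BASELINE = {
    "grey_skin": 1, "large_eyes": 1, "small_nose": 1,
    "small_mouth": 1, "short_stature": 1, "pear_head": 1,
    "bipedal": 1,
}

def agreement(description, baseline=BASELINE):
    """Fraction of baseline features a witness description matches."""
    hits = sum(description.get(k, 0) == v for k, v in baseline.items())
    return hits / len(baseline)

# Two invented witness reports, for illustration only:
w1 = {"grey_skin": 1, "large_eyes": 1, "small_nose": 1, "small_mouth": 1,
      "short_stature": 1, "pear_head": 1, "bipedal": 1}
w2 = {"grey_skin": 1, "large_eyes": 1, "small_nose": 0, "small_mouth": 1,
      "short_stature": 1, "pear_head": 1, "bipedal": 1}

scores = [agreement(w) for w in (w1, w2)]
print(f"mean agreement with baseline: {sum(scores)/len(scores):.2f}")  # 0.93
```

The real analysis would use all 41 reports and a proper correlation coefficient rather than a simple match fraction, but the shape of the computation is the same.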


Now back to the most common description of aliens (GREYS): whitish-gray skin; large eyes; small nose, ears and mouth; small stature (3‑4 ft); large pear‑shaped head; small, thin and fragile body and hands; bi‑pedal (two legs).  Less reliable (74%) is the report that they make noises that don’t sound like speech or words, and sometimes don’t talk at all.

OK, this may or may not be true.  It could somehow be a description that was dreamed up years ago and has become so universally known that all 41 of my witnesses heard and repeated the exact same thing.  Unlikely, but possible.  But let us proceed anyway, as if this were a valid description of real aliens from reliable witnesses.

 From only this data, I deduced that:

Their planet is smaller than Earth, has a heavy atmosphere, and is farther from its sun, or circles a dimmer sun than ours. They evolved from life on a planet at least 5 million years older than ours.   And I think I know why they are here.


OK Sherlock, WHY?

 Eyes: The eyes are big because the light where they evolved is weak, i.e., dim or far away from their sun.  They need big eyes to see in the dim light.  That’s a normal evolutionary response.  This might also account for the pale skin color.

Nose: The nose is small because the atmosphere is heavy.  A small intake of their air is enough to get what they need to breathe.  This also accounts for the small chest.  How big would your lungs need to be if we had 60% oxygen in our air instead of 21%?  This can also account for how a large brain can survive in a small body.  The head is about 10% of the body weight and volume but uses about 20% of the blood oxygen.  A very small creature cannot have a very large head unless the blood carries a very high content of oxygen.

I say oxygen is what they breathe because witnesses seem to agree that they have been seen without helmets or breathing apparatus.  This would also imply that they are carbon-based creatures like us.

 Mouth: The mouth can be small for three reasons.  The body is small and they may not have to eat much.  The air is thick and they can make noises with little effort so they don’t need a big voice passage.  If they have evolved direct mental telepathy, the mouth is not needed to communicate.

 Head: The large head obviously relates to a large brain.  The large brain in that small a body equates to a long evolution.  It might take a long evolution and large brain to figure out how to travel long distances in space.  The triangle or pear‑shaped head is simply a match of large brain to a small mouth and body.

Morals: If they have evolved to the point of a large brain and extended space travel, they probably have a very different social order than we do.  We tend to assume they would act the way we would if we were them, and that just doesn’t work.  They are not going to view us the way we would if we were in their place.  The stupid idea that all they want to do is conquer us and dominate the Earth is our projection of our own ideas and fears onto them.  If you had the technology to travel the universe, what possible gain would there be in dominating a primitive society?  Why? What for?

Use of our planet and its resources?  Not when there are hundreds of billions of planets out there.  If you had the technology to travel the universe, wouldn’t you also have the technology to terra-form any planet you found?  We can already sketch out how to do this, so it is easy to imagine that futuristic beings would know how.

Slave labor?  Not likely.  We already have robots that can do fantastic things.  In 1,000 years we will have robots to do almost anything we want.  Why use reluctant and technically inferior slaves when you can whip up a robot to do the work?

There is virtually no technical or social problem that we can imagine that a society that is 1000 or more years advanced from us could not easily resolve.

 These aliens are also very non‑aggressive.  Psychologists have long since discovered that learning plays a role in the development of aggressive behavior.   This is observed in all races of mankind as well as in lower animals.

As IQ goes up, all 13 identified kinds of aggressive behavior go down.  If they have hurt people in their explorations, it was inadvertent or unintentional, in the same way that we don’t set out to harm the primitive tribes we study in social and medical research.

Eating: They may have very different physical requirements as well.  If our health-food fad were to really take hold, we might reach the point of being able to separate the pleasure of eating from the need to eat.  If the pleasure of eating were satisfied in some other way, such as a pill or some sort of external stimulus, then only the nutritional need would be left as a reason to eat.  Even today we can substitute pills and artificial supplements for real food.  It might even be possible for food and people to evolve so that you take in food that is entirely metabolized, in just the right quantity, so that there is no waste.  The end result would be that we would eat very little and produce no waste at all. The digestive system would change, and the elimination organs (bladder, intestines and kidneys) would shrink.  The effect would be to reduce the size of the pelvis and lower body, much as we see in the typical description of a GREY.

Behavior: They have probably also evolved different requirements for mental existence and thought.  For instance, if you extend Maslow’s hierarchy of prepotency above “self‑actualization,” what’s next? Altruism? Spontaneous and total empathy? Adaptive radiation? If you have satisfied the motives for power and security and can do anything with technology, what’s next? Perhaps it is to study another planet, the same way we are fascinated by a primitive culture in the Brazilian jungle.  Perhaps they study us the way we study ants in a colony, or bees.  We might be that relatively primitive to them.

We have recently gained insight into how much damage we do when we inject modern society’s thinking and technology into primitive cultures.  If we evolve for another 500 years and can explore space, and we come across a primitive culture that is still warlike and cannot yet leave its planet, wouldn’t we just observe?  If we are trying to do that now, then in 500 years we would not only be committed to the concept, our technology would be good enough to let us observe without being obtrusive.  Imagine what we would think and be able to do in 25,000 years.

Now imagine what “they” are thinking as they visit us.

Trans-Dimensional Travel

These articles deal with the fringe in that I was addressing the “science” behind so-called UFOs.

I have done some analysis on life in our solar system other than on Earth, and the odds against it are very high.  At least, life as we know it.  Even Mars probably did not get past the early stages of life before its O2 was consumed.  Any biologist will tell you that in our planet’s evolution there were any number of critical thresholds of the presence or absence of a gas, heat, water, a magnetic field or magma flow that, if crossed, would have returned the planet to a lifeless dust ball.

Frank Drake’s formulas are a testament to that.  The only reason his formulas are used to “prove” life exists is the enormous number of tries nature gets in the observable universe and over so much time.

One potential perspective is that what may be visiting us, as “UFO’s” could be a race or several races of beings that are 500 to 25,000 years or more advanced than us.  Given the age of the universe and the fact that our sun is probably second or third generation, this is not difficult to understand.  Some planet somewhere was able to get life started before Earth and they are now where we will be in the far distant future.

Stanley Miller showed that the chemistry of life as we know it could form out of organic and natural events during the normal evolution of a class M planet.  But Drake showed that the chances of that occurring twice in one solar system are very slim.  If you work backwards from his formulas, taking Earth as an input to one solution of the equations, you would need something like 100 million planets to get even a slight chance of another planet with high‑tech life on it.

Taking this into consideration, and then comparing it to the chances that the monuments on Mars are anything but natural formations, or to any other claim of extraterrestrial life within our solar system, you must conclude that there is virtually no chance of other life in our solar system.  Despite this, many point to “evidence” such as the appearance of a face and pyramids in Mars photographs.  It sounds a lot like an updated version of the “canals” that were first seen in the 19th century.  Now we can “measure” these observations with extreme accuracy, or so they would have you believe.

The so‑called perfect measurements and alignments supposedly seen on the pyramids and “faces” are very curious, since even the best photos we have of these sites have a resolution that could never support such accuracy.  When you get down to “measuring” the alignment and sizes of the sides, you can pretty much lay the compass or ruler anywhere you want because of the fuzz and loss of detail caused by the relatively poor resolution.  Don’t let someone tell you that they measured down to decimal fractions of a degree and to within inches when the photo has a resolution of meters per pixel!
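The resolution argument can be put into numbers.  Assuming imagery at roughly 50 meters per pixel (an illustrative figure for Viking-era photos) and a 2.5 km feature, an edge spans only about 50 pixels, and a one-pixel uncertainty at each end swamps any fine alignment claim:

```python
import math

# How precisely can you measure the orientation of an edge that is
# N pixels long, when each endpoint is uncertain by +/- 1 pixel?
def angle_uncertainty_deg(edge_length_pixels, endpoint_error_pixels=1.0):
    # Worst case: the two endpoints are off by one pixel in opposite
    # directions, tilting the measured edge by atan(2 * error / length).
    return math.degrees(math.atan2(2 * endpoint_error_pixels,
                                   edge_length_pixels))

METERS_PER_PIXEL = 50.0   # assumed Viking-era resolution
feature_size_m = 2500.0   # a ~2.5 km "pyramid" side (illustrative)
edge_pixels = feature_size_m / METERS_PER_PIXEL

print(f"edge is {edge_pixels:.0f} px long")
print(f"alignment uncertainty: +/- {angle_uncertainty_deg(edge_pixels):.1f} deg")
```

With roughly plus-or-minus 2.3 degrees of slop on a 50-pixel edge, an alignment "measured" to decimal fractions of a degree is simply not supported by the data.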

   As for the multidimensional universe: I believe Stephen Hawking when he says that there are more than 3 dimensions; however, for some complex mathematical reasons, a fifth dimension would not necessarily have any relationship to the first four, and objects that have a fifth dimension would have units of the first four (l, w, h & time) that are very small, on the order of atomic units of scale.  This means that, according to our present understanding of the math, the only way we could experience more than 4 dimensions is to be reduced to angstrom sizes and to withstand very high excitation from an external energy source.  Let's exclude the size issue for a moment, since that is an artifact of the math model that we have chosen in the theory and may not be correct.

  We generally accept that time is the 4th dimension after l, w, and h, which seem related in that they share the same units but point in different directions.  If time is a vector (which we believe it is) and is so very different from up, down, etc., then what would you imagine a 5th-dimension unit to be?

  Most people think of “moving” into another dimension as being just some variation of the first 4, but this is not the case.  The next dimension is not capable of being understood by us because we have no frame of reference.

Hawking gives a much better explanation of this in one of his books, but suffice it to say that we do not know how to explore this question because we cannot conceive of the context of more than 4 dimensions.  The only way we can explore it is with math; we can't even graph it because we haven't got a 5-axis coordinate system.  I have seen a 10-dimensional formula graphed, but they did only 3 dimensions at a time.

Whatever relationship a unit called a “second” has with a unit called a “meter” may or may not be the same relationship the meter has with “???????” (whatever the units of the 5th dimension are called).  What could it possibly be?  Try to describe it for me without using any reference to the first 4 dimensions.  For instance, I can describe time or length without reference to any of the other known dimensions.  The bottom line is that this is one area where even a computer cannot help, because no one has been able to give a computer an imagination……..yet.  However, it is an area so far beyond our thinking that perhaps we should not speculate about them coming from another dimension.

Let’s look at other possibilities.    To do that, take a look at the other article on this blog titled, “Intergalactic Space Travel”.

Achieving the Speed of Light NOW

Scientists have been telling us for some time that it is impossible to achieve the speed of light.  The formula says that mass goes to infinity as you approach C, so the amount of power needed to go faster also rises to infinity.  The theory also says that time is displaced (slows) as we go faster.  We have “proven” this by tiny fractions of variations in the orbits of some of our satellites and in the orbit of Mercury.  For an issue within physics that is seen as such a barrier to further research, shouldn't we have a more dramatic demonstration of this theory?  I think we should, so I made one up.

Let us suppose we have a weight on the end of a string.  The string is 10 feet long and we hook it up to a motor that can spin at 20,000 RPM.  The end of the string will travel 62.8 feet per revolution, or 1,256,637 feet per minute.  That is 3.97 miles per second, or an incredible 14,280 miles per hour.  OK, so that is only .0021% of C, but for only ten feet of string and a motor that we can easily build, that is not bad.

There are motors that can easily get to 250,000 RPM and there are some turbines that can spin up to 500,000 RPM.  If we can explore the limits of this experimental design, we might find something interesting.   Now let’s get serious. 

Let’s move this experiment into space.  With no gravity and no air resistance, the apparatus can function very differently.  It could use string or wire or even thin metal tubes.  If we control the speed of the motor so that we do not exceed the limitations imposed by momentum, we should be able to spin something pretty fast.

Imagine a motor that can spin at 50,000 RPM with a string mechanism that can let out the string from the center as the speed slowly increases.  Now let's, over time, let out 1 mile of string while increasing the speed of rotation to 50,000 RPM.  The end will now be traveling at nearly 19 million miles per hour, or about 2.8% of C.

If we boost the speed up to 100,000 RPM and can get the length out to 5 miles, the end of the string will be doing an incredible 188.5 million miles per hour.  That is more than 28% of the speed of light.
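These tip-speed figures are easy to check.  A small sketch that reproduces the three scenarios above (the mph value for the speed of light is the commonly quoted 670,616,629):

```python
import math

MPH_PER_MILE_PER_MIN = 60.0
C_MPH = 670_616_629.0  # speed of light in miles per hour
FEET_PER_MILE = 5280.0

def tip_speed_mph(radius_miles, rpm):
    """Speed of the end of a string of given radius spun at a given RPM."""
    circumference = 2.0 * math.pi * radius_miles  # miles per revolution
    return circumference * rpm * MPH_PER_MILE_PER_MIN

# 10-foot string at 20,000 RPM
v1 = tip_speed_mph(10.0 / FEET_PER_MILE, 20_000)  # ~14,280 mph
# 1-mile string at 50,000 RPM
v2 = tip_speed_mph(1.0, 50_000)                   # ~18.85 million mph, ~2.8% of c
# 5-mile string at 100,000 RPM
v3 = tip_speed_mph(5.0, 100_000)                  # ~188.5 million mph, ~28% of c

print(round(v1), round(v2 / C_MPH * 100, 2), round(v3 / C_MPH * 100, 1))
```

Each case is just circumference times revolutions per minute times sixty; the only subtlety is keeping the units straight.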

What will that look like?  If we have spun this up correctly, the string (wire, tubes, ?) will be pulled taut by the centrifugal force of the spinning.  With no air for resistance and no gravity, the string should be a nearly perfect vector outward from the axis of rotation.  The only force that might distort this perfect line is momentum, but if we have spun this setup up slowly so that the weight at the end of the string is pulling the string out of the center hub, then it should be straight.

I have not addressed the issue of the strength of the wire to withstand the centrifugal force of the spinning weight.  It is not trivial, but for the purposes of this thought experiment, I am assuming that the string can handle whatever weight we use.

Let us further suppose that we have placed a camera exactly on the center of the spinning axis, facing outward along the string.  What will it see?  If the theory is correct, then despite the string being pulled straight by the centrifugal force, I believe we will see the string curve backward, and at some point it will disappear from view.  The reason is that as you move out along the string, its speed gets faster and faster, closer and closer to C.  This will cause the relative time at each increasing distance from the center to be slower and appear to lag behind.  When viewed from the center-mounted camera, the string will curve.

If we could use some method to make the string visible for its entire length, its spin would cause it to eventually fade from view when the time at the end of the string is so far behind the present time at the camera that it can no longer be seen.  It is possible that it might appear to spiral around the camera, even making concentric overlapping spiral rings. 

Suppose synchronized clocks were placed at the center and at the end of the string, with a camera at both ends so the two images could be viewed side-by-side at the hub.  Each camera would view a clock that started out synchronized; the only difference would be that one is now traveling at some percentage of C faster than the other.  I believe they would read different times as the spin rate increased.
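If we naively apply the special-relativistic time dilation formula to the tip clock (a simplification on my part; a rotating frame is not an inertial one, so this is only a rough estimate), we can put a number on how far the tip clock would lag:

```python
import math

C_MPH = 670_616_629.0  # speed of light, miles per hour

def lorentz_gamma(v_mph):
    """Special-relativistic time dilation factor."""
    beta = v_mph / C_MPH
    return 1.0 / math.sqrt(1.0 - beta * beta)

# Tip of the 5-mile string at 100,000 RPM: about 28% of c
tip_speed = 2.0 * math.pi * 5.0 * 100_000 * 60.0  # mph
gamma = lorentz_gamma(tip_speed)

# After one hour at the hub, how far behind is the tip clock?
hub_seconds = 3600.0
tip_seconds = hub_seconds / gamma
lag = hub_seconds - tip_seconds

print(round(gamma, 4))  # ~1.042
print(round(lag, 1))    # tip clock trails by roughly 145 s per hub hour
```

A lag of a couple of minutes per hour would be unmistakable on side-by-side clocks, which is what makes this a dramatic demonstration compared with satellite-orbit corrections.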

But now here is a thought puzzle.  Suppose there is an electronic clock at the end of the string as described by the above paragraph but now instead of sending its camera image back to the hub, we send its actual reading by wires embedded in the string back to the hub where it is read side-by-side with a clock that has been left at the hub.  What will it read now?  Will the time distortion alter the speed of the electrons so that they do NOT show a time distortion at the hub?  Or will the speed of the electricity be constant and thus show two different times?  I don’t know.

That isn’t an itch you feel!

I am writing this because I am hoping that by publishing this story, people will begin to understand what is going on.  It won't make any sense unless I start at the beginning.


I got struck by lightning.  Well, actually I wasn't struck, but I was on wet ground near where it struck, and the power of the strike passed through the ground into four others and me while we were playing soccer.  I was out for perhaps 5 hours and have felt weird ever since.  I told the doctors that it felt like I was being poked with pins, but they just said that it was the random firing of overstimulated nerve endings.  It made me jump and jerk every time it happened.  That was two years ago.


The pinpricks have continued at random intervals but about six months ago, I began to feel something else.  It felt like someone was touching my skin and then rubbing it.  Sometimes it felt like someone was tapping my leg or arm and when I jerked my head to look, it stopped.  This went on for weeks at all times of the day and night and began to drive me nuts.  I tried creams and different kinds of clothes but nothing helped.  One evening when I just couldn’t get to sleep, I shouted out, “Leave me alone!” as loud as I could.  I was astonished that the feelings stopped immediately and did not return for two whole days.


I was confused and thought about all kinds of silly answers from aliens to fantastic mind control powers but finally decided it was sort of a mental placebo effect of mind over matter.  When the rubbings and tappings started again, they were different.  Softer and gentler.  I also began to realize that the feelings were making patterns on my skin.  It was hard to tell what they were because I usually moved when it started. 

One night, while lying in bed, and out of idle curiosity, I pulled up my shirt and said out loud, “Do it here”, offering my chest and stomach.  Almost at once, the rubbing sensation started on my bare stomach and chest in the form of much bigger patterns.  Slowly, I realized they were letters written upside down.  I said “A” and I felt the sensation of a letter “A” being rubbed on my stomach.  I looked around and tried to see if this was a joke or prank, but I was really alone.  Other than the annoyance of the rubbing, I had never been hurt by “it”, so I wasn't afraid, but I was curious.  I tried out several more letters and they all resulted in the same thing.  Then I asked a question, “Who are you?”  I got no answer.  I asked dozens of questions and got no answers.  Then I said, tap once for yes and twice for no – “Are you ghosts?”  I felt a single tap in the middle of my chest!


I can’t remember or type all of the hundreds of questions I asked over the next few hours but I began to get a picture of who or what I was dealing with.  It was dead people – real ghosts – or rather their spirits.  Here is what I learned.


It seems that String Theory is correct; there are several dimensional universes in our space-time continuum.   The spirit or essence of the life force is virtually immortal.  While we live, that force is expressed as emotional energy.  That is why people feel weak after an emotionally trying event.  Large emotions can even make us faint, but women faint more often because they give up their spirit energy faster than men do.  It has something to do with passing or sharing all that spirit energy at childbirth.

I also found out that these spirits can move from one dimension to another through a complex process that involves just the right amount of emotional energy.   This usually happens at the moment of death or near-death experience.  Those that have described near death experiences, in which they seemed to leave their bodies, really did leave their bodies, as spirits.  If the spirit energy is not just right, then they stay within this dimension but can move outside of their bodies.  When the spirit energy is just right, they can move freely between dimensions. 

Through a lot of yes and no questions, I found out that these spirits are so much outside of our dimensional universe and our space-time continuum that they cannot interact with our material universe except for a few isolated exceptions.  They can interact with each other freely and they can hear and see us but cannot be seen by us or make sounds. 

They could not move or influence anything until about one “lifetime” ago.  It took me a long time to realize that one lifetime was in their terms and equated to thousands of years in our universe.  They began to be able to focus their spirit essence on a single point, but could do no better than to change the electrical properties at that one tiny little point.  The amount of change is so small that it is not enough to affect anything electrically powered, but they eventually discovered that if they did this on the nerve endings of a person, the person would feel something.  For thousands of years they have been doing this in an attempt to communicate, but most humans or animals do not recognize these small sensations as anything resembling communication.


It is the rare situation that someone, like me, gets hypersensitive to these pinpoint electrical changes and can receive these messages more clearly and easily.  My lightning experience made me a better “receiver”.


I found out that this spirit movement between dimensions is not without risks and effects.  When these spirits come into what we know as our universe, they lose all memory of past history and are sucked into a sort of vortex that exists around every newborn infant.  At the moment of birth or close to it, any spirit that is nearby is sucked into the life force of the infant with a completely clean slate for a mind.  Most often it happens because that is exactly what the spirit wants to do and they came here to do that  – I called these the good spirits. 


I found out that sometimes there is some vague awareness of these past lives and experiences, and this comes out as faint feelings and thoughts in adults.  This agrees with several religious beliefs about reincarnation and angels, and it may account for sudden changes of direction in cultures and religious thinking over the ages.  The feeling of déjà vu comes from this carry-over awareness of past experiences.


But once in a while, this being sucked into the life force of an infant happens to spirits that have just left a dead body and are trying to get back to one of the other dimensions.  They get snatched back into another newborn and are unable to escape until the body gives up the spirit.  I called these the angry spirits.


When the body does die and the spirit is released, almost all of the learning of that person’s life is added to all of the wisdom and learning that occurred before entering the body and it all comes back to the spirit as fresh and recalled memories.  At that moment, they realize who they really are and what has happened to them.  Those that did this willingly, have enjoyed the experience but those that got caught, exit into the spirit realm angry and want to escape back to one of the other dimensions as quickly as possible.


Unfortunately, therein lies the problem.  I didn’t get the exact mix of emotional energies that must be present to move between dimensions but it seems that the exact mix is happening less and less often and the spirits that are released upon death are piling up and cannot get back to one of the other dimensions.  As some of these spirits that want out are pulled back into bodies and released after a human lifetime, they get more and more angry.  These bad spirits are now piling up, waiting to go back to one of the other dimensions and trying to avoid the newborn vortexes. 

It was these angry spirits that were poking and prodding me so much when I was sensitized by the lightning.  Only after I made myself heard did some of the good spirits come to my rescue, chase away the bad spirits, and try to communicate with me in a nicer manner.


Now that I have told you how all this got started, let me tell you the problem that is developing.  The bad spirits have learned to expand their ability to interact with our bodies.  Their electrical prod rods, as I called them, are getting better at poking into people to give them headaches, pains and bad thoughts.  They can create blind spots in people’s vision and create the sensation of sounds.  They are doing this more and more out of anger and disgust with being stuck in this dimension. 

It may seem like a small response to all of that pent-up anger, and it certainly is not what Hollywood would have you believe is the typical ghost haunting, but it is, after all, all they can do right now, and they are learning more each day.  It is only a matter of time before they can fiddle with DNA to turn on or off genes that do us harm.  It is becoming a war between the good and bad spirits, but the tide of battle is turning against the good guys.


The effect on our universe is that more people are getting weird pains, feeling depression, and hearing sounds and seeing things that are not there or not seeing things that are there.  How many car accidents and deaths are being caused by these angry spirits?  How often have you heard a noise and nothing was there?  How often do you get a sudden sharp pain or an itch that seems to have no cause?  It's them.  But what is worse is that after you die, it might be you.


January 10, 1987

As for longevity, there has been some very serious research going on in this area, but it has recently been hidden behind the veil of AIDS research.  There is a belief that the immune system and other recuperative and self-correcting systems in the body wear out and slowly stop working.  This is what gives us old-age skin and gray hair.  This was an area that was studied very deeply up until the early 1980's.  Most notable were some studies at the U. of Nebraska that began to make good progress in slowing biological aging through careful stimulation and supplementation of naturally produced chemicals.  When the AIDS problem surfaced, a lot of money was shifted into AIDS research.  It was argued that the issues related to biological aging were related to the immune issues of AIDS.  This got the researchers AIDS money and they continued their research; however, they want to keep a very low profile because they are not REALLY doing AIDS research.  That is why you have not heard anything about their work.

Because of my somewhat devious links to some medical resources and a personal interest in the subject, I have kept myself informed and have a good idea of where they are, and it is very impressive.  Essentially, in the inner circles of gerontology, there is general agreement that the symptomology of aging is due to metabolic malfunction and not cell damage.  This means that it is treatable.  It is the treatment that is being pursued now, and as in other areas of medicine in which there is such a large multiplicity of factors affecting each individual's aging process, successes are made in finite areas, one area at a time.  For instance, senility is one area that has gotten attention because of the mapping to metabolic malfunction induced by the presence of metals, along with factors related to the emotional environment.  Vision and skin condition are also areas that have had successes in treatments.

  When I put my computer research capability to work on this about a year ago, what I determined was that by the year 2024, humans will have an average life span of about 95-103 years.  It will go up by about 5% per decade after that for the next century, then it will level out due to other limiting factors.
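The projection above is simple compound growth.  Starting from the midpoint of that 95-103 range and applying 5% per decade:

```python
# Compound the text's projection: ~99 years average lifespan in 2024
# (midpoint of 95-103), growing 5% per decade for the next century.
lifespan = 99.0
projections = {}
for decade in range(2024, 2125, 10):
    projections[decade] = round(lifespan, 1)
    lifespan *= 1.05

print(projections[2024])  # 99.0
print(projections[2124])  # ~161 years a century later
```

So the claim works out to an average lifespan of roughly 160 years by the 2120s before the leveling-off kicks in.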

Lucid Dreams

May 14, 1999

One of the factors I have researched using Plato is my dreams.  About 3 years ago, I mastered the technique of lucid dreaming.  I can control not only what I dream about, but I can also control and direct the dream while I am in it.  I have conjured up people or computers, posed questions to them, and had dialogs about complex issues.  When I wake up, I try to remember and save the same logic.  Most often I talk to HAL, which is my mental combination of the computer in the movie 2001: A Space Odyssey and the one on Star Trek.  It was actually pretty spooky when I saw episodes of the new Star Trek that were remarkably similar to my controlled dreams.  Way before Geordi or Data did their things on the Holodeck, I had my own holodeck of sorts, in my mind.  What I believe I am doing is tapping into more of my brain than I can use while I am awake.  I almost always arrive at an answer, but I am not always able to remember everything I did or how to relate it once I am awake.

I have also discovered that I have an incredible ability to see into my own body.  I have “traveled” inside my body and seen a cut in the skin from the inside.  I “knew” that I had a cholesterol problem before I had a measurement made because I had “seen” it.  Actually pretty gross, yuck.

What I am working on now is a modification of the BEAM (Brain Electromagnetic wave Analysis Monitor) to get a direct link between body and computer.  It sounds wild, but I have designed it several times in my dreams.  The current technology can easily get wave patterns from a variety of machines (electroencephalogram, electromyogram, electrooculogram, and others).  I have focused on the stage 3 or delta sleep phase because delta waves are slow and strong, so they are easy to work with.

At issue is the assignment of intelligence to these waves.  We know, for instance, that a sound wave pattern represents a sound that can be heard.  Television crews will tell you that a similar looking pattern on an oscilloscope is a particular TV pattern or image.  It is just that we do not know how to assign the relationship to the delta wave pattern.  I think I know how.  I have already gotten it to respond to binary queries such as yes/no, 1/2, true/false and on/off.  Now I am working on the shades between these two.  Last week I was able to make a brainwave choice among five menu items!  I think I will be able to relate the alphabet and numbers if I keep at it.

The mechanism is a simple EEG that I got at a surplus auction and a very sensitive A-to-D (analog-to-digital) interface to the computer.  The program lets me interpret the 3 to 14 hertz waves that are being detected.  I put in a programmed band-pass filter to cut off above 5 Hz to limit the response to delta waves for my at-sleep experiments, and to cut off below 8 Hz to get alpha waves for my awake experiments.  These are the two easiest to detect.

I started out by coding the program to respond with a specific answer when it detected a wave of a specific frequency and amplitude.  Since this was a simple discrimination of above or below a certain wave pattern, it was easy to detect.  I simply changed my thoughts until I achieved the pattern that made the right signal.  Awake and relaxed was a yes; think hard was a no.  Then I started making sweeps while mentally recording what I was dreaming.  This allowed me to discover that a concentrated thought of “yes” was a specific pattern different from a concentrated thought of “no”.  I set these two patterns into the program and voila!  Brainwave responses.

The pattern recognition algorithm is adapted from the acoustic pattern recognition software that the Navy uses for identifying a submarine's sonar “signature”.  This is a very well developed algorithm and is used in a number of other areas of science.  I obtained that algorithm as a math model and adapted it to my A-to-D interface.
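The filter-and-threshold scheme described above can be sketched in a few lines.  This is my reconstruction, not the original program: it uses SciPy's Butterworth filter in place of the "programmed band-pass filter," synthetic sine-plus-noise data in place of the EEG hardware, and an arbitrary RMS amplitude threshold for the yes/no discrimination.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # sampling rate in Hz (an assumed value)

def band_pass(signal, low_hz, high_hz, fs=FS):
    """Keep only the band of interest, e.g. 0.5-5 Hz for delta, 8-13 Hz for alpha."""
    b, a = butter(2, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def classify_yes_no(signal, threshold=0.5):
    """Crude yes/no discrimination: strong filtered amplitude means 'yes'."""
    amplitude = np.sqrt(np.mean(signal ** 2))  # RMS amplitude
    return "yes" if amplitude > threshold else "no"

# Synthetic stand-in for the EEG: a 3 Hz "delta" wave plus noise.
np.random.seed(0)
t = np.arange(0, 4, 1 / FS)
strong_delta = 2.0 * np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.randn(len(t))
weak_delta = 0.1 * np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.randn(len(t))

filtered_strong = band_pass(strong_delta, 0.5, 5.0)
filtered_weak = band_pass(weak_delta, 0.5, 5.0)

print(classify_yes_no(filtered_strong))  # yes
print(classify_yes_no(filtered_weak))    # no
```

The real trick, as the text says, is not the filtering but assigning meaning to the patterns; this sketch only shows the mechanical part.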

   The discovery of what a pattern looks like as compared with what I am thinking continues.  I believe that just like in language development, I will be able to soon join these individual images and brain patterns into “sentences” of computer recognizable images.  At first the images will be discrete letters and numbers, then words. Once I get to words and can link them into real word sentences, it is an easy matter to relate the word phrases to images.  Using a neural or expert system and an image generator, I can now imagine how animated figures and scenes can be “thought up”.  

 This lucid dreaming stuff is really a kick, you ought to try it!  

Snipers Save Lives Also

October 15, 2002

As you may already know, I have been a SWAT sniper for about 3 years. My specialty is counter-sniper work for executive, dignitary, or high-profile protection. What this basically means is, I am posted at certain locations looking for a sniper/assassin. Once a threat is detected, I am charged with countering his/her ability to attack by early detection and neutralizing (killing) him/her. Now on to business… This sniper killing people in the DC Metro area is a skilled sharpshooter and very calculated. Unfortunately he appears very disturbed and has just left a calling card stating he was God. What that probably means is he may escalate the matter by increasing the rate of killing in each attack because he acknowledges the police's hot pursuit. This sniper knows it's a matter of time before he is discovered. He thinks he is superior to everyone, but knows eventually he will be caught. He is playing a very sick game that he feels he is winning. He will probably want to sensationalize his confrontations with the police and eventually stop running and have a standoff. The police may be his future targets.

It is very important that you adjust your regular routine because this is a very deadly individual. I am going to give you my Personal/professional advice on the matter. The Sniper’s MO (his methods) are the following. 

a. The 1st wave of attacks was concentrated in an area the suspect was very familiar with.
b. It appears these initial attacks were closer, probably less than 100 yards away. Witnesses were hearing loud cracks.
c. He definitely is showing off. He is trying to maintain 100%, one shot for one kill (the sniper's creed).
d. He is probably not shooting the first person that appears. He is looking for the highest probable kill. This encompasses distance, position, and movement of the individual and excludes physical barriers (vehicles, trees, columns, etc.).

e. He is shooting from areas adjacent to major roadways, thoroughfares, highways, etc. (quick egress). The first group was within a couple of miles of the beltway. The child shot in Bowie was a block away from US 50 on the 197.
f. He went up to northern Virginia (70 miles away) to throw police off his trail (diversion, used by snipers for stalking targets and eluding the enemy).

g. He is making this a giant “stalk” around the Metro area. It is a game now. He wants them to come after him (like he is in his own war against the enemy).
h. He is probably using any foliage (tree line, woods, bushes) that is around the malls, shopping centers, gas stations, and parking lots.

i. He is able to shoot accurately out to 500 yards (5 football fields) with a scope (depends on the individual's abilities).
j. The farther out he is, the more difficult it is, of course, to detect or pinpoint his location. This aids in egress as well. (He knows this.)

My advice is to consider the following.

1. Avoid unnecessary errands.
2. Bring someone along.
3. Do not stand outside your vehicle grabbing things out of the car.
4. If you go to the store, put items in the back seat of the car (nothing in the trunk) so that you can grab items and exit quickly.
5. When slowing down or at a stop, keep windows closed (glass deflects bullets; he knows this and he is not shooting through glass anymore).
6. Never walk straight to a door more than 20 feet away. Zigzag and walk at angles. The shooter is setting up on doorways/entrances and waiting for victims to line up on the entrance. The hardest shot for a sniper is a target traversing laterally (perpendicular) to his position.
7. Walk between cars, use them for protection, and NEVER walk in a straight line to a doorway. Park as close as possible.
8. Be mindful of wooded areas and any bushes around the establishment and the surrounding areas. Look for reflections (glare off a scope or weapon).
9. Use the drive-thru at fast food restaurants.
10. Look around and make it obvious (looking over your shoulder into the distance); he may hesitate if he thinks someone notices him. Point to what you are looking at as well. You want to telegraph yourself to others and get them involved.
11. Keep clear of abandoned vehicles, but concern yourself with them. He is probably parking along roads and walking to his shooting position.
12. You are probably safer inside the DC area, only because congestion will prevent an easy egress for the sniper. So if there is a toss-up for a store, pick the inner city (not the outer boundaries).

The main thing is being careful. Everyone is at risk, even the police. He is able to pick the time and place, so he has the overall advantage over the police; however, his greatest advantage is that he has no particular target.  He can take what are called “targets of opportunity”.  This means that if he thinks a shot isn't going to be a clean kill shot, he will wait for one that will be.  That may be the next person or 50 people later.

Some additional considerations:

· You are safer on a windy day than on a calm day.  It is harder to shoot accurately in the wind.

· Contrast and a clear edge make for a good target.  At night, try not to let yourself be backlit.  A solid dark backlit shape is a high-contrast target and is easy to sight in on.

· At night, wear dark colors.  If you are not backlit, then you will have very low contrast with the background and be hard to sight in on.

· In the day, wear light colors but not loud or bright colors.  You do not want to attract attention or provide a high-contrast target.

· Wear scarves, long coats, hats, loose clothes, carry bags, etc.  Anything that will make it hard to detect the exact aim point on the torso or head.

The one-shot, one-kill credo often causes the shooter to go for a headshot, as has been the case several times so far.  If a zigzag walking path is not possible and dodging and weaving would make you feel like a weirdo, then try just moving your head.  Rubbing your neck while you move your head looks natural and still makes for an almost impossible headshot from any distance. 


Plato Rises!

This is one of two email exchanges I had with some other writer/scientists in which we explored the outer edge of science.  You have to remember that way back then, there were some ideas that have since died off – like the monuments and face on Mars and the popularity of UFOs. 

Some of this is part of an on-going dialog so it may seem like there is a part missing because this is a response to a previously received message.  I think you get the gist of what is going on.  Despite the fact that this is mostly from 9 years ago, the science described and projected then is still valid or has not yet been proven wrong. You might find these interesting. 


October 13, 1998

I want to thank you for letting me post your article about gravity shielding that appeared in the March ‘98 WIRED magazine.  Your comments on my article about lightning sprites and the blue-green flash are also appreciated.  In light of our on-going exchange of ideas, I thought you might be interested in some articles I wrote for my BBS and WEB forums on “bleeding edge science” that I hosted awhile back.  Some of these ideas and articles date back to the mid-90’s, so some of the references are a little dated and some of the software that I use now is generally available as a major improvement over what I had then.

What I was involved with then can be characterized by the books and magazines I read, a combination of Skeptical Inquirer, Scientific American, Discovery and Nature.  I enjoyed the challenge of debunking some space cadet that had made yet another perpetual motion machine or yet another 250 mile-per-gallon carburetor – both claiming that the government or big business was trying to suppress their inventions.  Several of my articles were printed on the bulletin board that pre-dated the publication of the Skeptical Inquirer.

I particularly liked all the far-out inventions attributed to one of my heroes – Nikola Tesla.  To hear some of those fringe groups, you’d think he had to be an alien implant working on an intergalactic defense system.  I got more than one space cadet upset with me by citing real science to shoot down his gospel of zero-point energy forces and free energy.


These articles deal with the fringe in that I was addressing the “science” behind UFO’s.

  I have done some analysis on life in our solar system other than Earth and the odds against it are very high.  At least, life as we know it.  Even Mars probably did not get past early life stages before the O2 was consumed.  Any biologist will tell you that in our planet's evolution, there were any number of critical thresholds of the presence or absence of a gas or heat or water that, if crossed, would have returned the planet to a lifeless dust ball.  Frank Drake's formulas are a testament to that.  The only reason that his formulas are used to “prove” life exists is because of the enormous number of tries that nature has to get it right in the observable universe and over so much time.

  One potential perspective is that what may be visiting us as “UFO’s” could be a race or several races of beings that are 500 to 25,000 years more advanced than us.  Given the age of the universe and the fact that our sun is probably second or third generation, this is not difficult to understand.  Some planet somewhere was able to get life started before Earth, and they are now where we will be in the far distant future.

  Stanley Miller proved that life, as we know it, could form out of organic and natural events during the normal evolution of a class M planet.  But Drake showed that the odds against that occurring twice in one solar system are very high.  If you work backwards from their formulas, using the event of Earth as an input to some solution of the equations, you would need something like 100 million planets to get even a slight chance of another planet with high‑tech life on it.

  Taking this into consideration and then comparing it to the chances that the monuments on Mars are natural formations, or to some other claim of extraterrestrial life within our solar system, you must conclude that there is virtually no chance of other life in our solar system.  Despite this, there are many who point to “evidence” such as the appearance of a face and pyramids in Mars photographs.  It sounds a lot like an updated version of the “canals” that were first seen in the 19th century.  Now we can “measure” these observations with extreme accuracy – or so they would have you believe.

The so‑called perfect measurements and alignments that are supposedly seen on the pyramids and “faces” are very curious, since even the best photos we have of these sites have a resolution that could never support such accuracy of measurement.  When you get down to “measuring” the alignment and sizes of the sides, you can pretty much lay the compass or ruler anywhere you want because of the fuzz and loss of detail caused by the relatively poor resolution.  Don’t let someone tell you that they measured down to decimal fractions of a degree and to within inches when the photo has a resolution of meters per pixel!
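To put rough numbers on that argument (the pixel size here is an assumed, illustrative figure, not taken from any particular mission): even at a few meters per pixel, the smallest credible measurement is orders of magnitude coarser than inches.

```python
# Back-of-the-envelope check, assuming an illustrative image
# resolution of 4 meters per pixel.
meters_per_pixel = 4.0
inches_per_meter = 39.37

# The best you can resolve is about one pixel, so any measurement
# carries at least the pixel footprint as uncertainty.
uncertainty_inches = meters_per_pixel * inches_per_meter

# ~157 inches of uncertainty -- nowhere near "within inches".
```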

   As for the multidimensional universe: I believe Stephen Hawking when he said that there are more than 3 dimensions; however, for some complex mathematical reasons, a fifth dimension would not necessarily have any relationship to the first four, and objects that have a fifth dimension would have units of the first four (l, w, h & time) that are very small ‑ on the order of atomic units of scale.  This means that, according to our present understanding of the math, the only way we could experience more than 4 dimensions is to be reduced to angstrom sizes and to withstand very high excitation from an external energy source.  Let’s exclude the size issue for a moment since that is a result of the math model that we have chosen in the theory and may not be correct.

  We generally accept that time is the 4th dimension after l, w, and h, which seem to be related in that they are in the same units but in different directions.  If time is a vector (which we believe it is) and it is so very different from up, down, etc., then what would you imagine a 5th dimension’s unit to be?

  Most people think of “moving” into another dimension as being just some variation of the first 4, but this is not the case.  The next dimension is not capable of being understood by us because we have no frame of reference.

Hawking gives a much better explanation of this in one of his books, but suffice it to say that we do not know how to explore this question because we cannot conceive of the context of more than 4 dimensions.  The only way we can explore it is with math ‑ we can’t even graph it because we haven’t got a 5-axis coordinate system.  I have seen a 10-dimensional formula graphed, but they did only 3 dimensions at a time.  Whatever relationship a unit called a “second” has with a unit called a “meter” may or may not be the same relationship that the meter has with “???????” (whatever the units of the 5th dimension are called).  What could it possibly be?  You describe it for me, but don’t use any reference to the first 4 dimensions.  For instance, I can describe time or length without reference to any of the other known dimensions.  The bottom line is that this is one area where even a computer cannot help, because no one has been able to give a computer an imagination ……..yet.

  As for longevity, there has been some very serious research going on in this area but it has recently been hidden behind the veil of AIDS research.  There is a belief that the immune system and other recuperative and self‑correcting systems in the body wear out and slowly stop working.  This is what gives us old‑age skin and gray hair.  This was an area that was studied very deeply up until the early 1980’s.  Most notable were some studies at the U. of Nebraska that began to make some good progress in slowing biological aging by careful stimulation and supplementation of naturally produced chemicals.  When the AIDS problem surfaced, a lot of money was shifted into AIDS research.  It was argued that the issues related to biological aging were related to the immune issues of AIDS.  This got the researchers AIDS money and they continued their research; however, they want to keep a very low profile because they are not REALLY doing AIDS research.  That is why you have not heard anything about their work.

Because of my somewhat devious links to some medical resources and a personal interest in the subject, I have kept myself informed and have a good idea of where they are, and it is very impressive.  Essentially, in the inner circles of gerontology, there is general agreement that the symptomatology of aging is due to metabolic malfunction and not cell damage.  This means that it is treatable.  It is the treatment that is being pursued now and, as in other areas of medicine in which there is such a large multiplicity of factors affecting each individual’s aging process, successes are made in finite areas, one area at a time.  For instance, senility is one area that has gotten attention because of its mapping to metabolic malfunction induced by the presence of metals, along with factors related to emotional environment.  Vision and skin condition are also areas that have had successes in treatments.

  When I put my computer research capability to work on this about a year ago, what I determined was that by the year 2024, humans will have an average life span of about 95‑103 years.  It will go up by about 5% per decade after that for the next century, then it will level out due to the increase of other factors.
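That projection compounds simply; a sketch of the arithmetic, starting from the midpoint of the stated range:

```python
# Start at ~99 years (midpoint of 95-103) in 2024 and compound
# 5% per decade for the following century (10 decades).
span = 99.0
for _ in range(10):
    span *= 1.05

# After that century, the average life span comes out around 160 years.
```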

_____________ Are They Really There? _____________

Life is Easy to Make:

 Since 1953, with the Stanley Miller experiment, we have, or should have, discarded the theory that we are unique in the universe.  Production of organic compounds and even the building blocks of DNA and RNA has been shown to occur in simple mixtures of hydrogen, ammonia, methane and water when exposed to an electrical discharge (lightning).  The existence of most of these components has been frequently verified by spectral analysis of distant stars but, of course, until recently, we couldn’t see the stars’ planets.  Based on the most accepted star and planet formation theories, most star systems would have a significant number of planets with these elements and conditions.

 Quantifying SETI

 The radio astronomer Frank Drake developed some equations that were the first serious attempt to quantify the number of technical civilizations in our galaxy.  Unfortunately, his factors were very ambiguous, and various scientists have produced numbers ranging from 1 to 10 billion technical civilizations in just our galaxy.  A formula in this condition is referred to as unstable or ill‑conditioned.  There are mathematical techniques to reduce the instability of such equations.  I attempted to do so to quantify the probability of the existence of intelligent life.
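The instability is easy to demonstrate: plugging in plausible low-end versus high-end factor values (illustrative numbers of my own choosing, not Drake's) swings the answer by many orders of magnitude.

```python
def drake_n(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Drake's equation: the expected number of communicating
    civilizations in the galaxy is the product of all seven factors."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Pessimistic vs. optimistic inputs, both within ranges scientists
# have actually argued for:
low = drake_n(1, 0.2, 0.5, 0.01, 0.001, 0.01, 100)         # ~1e-6
high = drake_n(10, 1.0, 3.0, 1.0, 0.5, 0.5, 1_000_000)     # 7.5e6

# Over twelve orders of magnitude apart -- the hallmark of an
# ill-conditioned formula.
```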

 I approached the process a little differently.  Rather than come up with a single number for the whole galaxy, I decided to relate the probability to distance from Earth.  Later, I added directionality.

 Using Drake’s basic formulas as a start, I added a finite stochastic process using conditional probability.  This produces a tree of event outcomes for each computed conditional probability.  (The conditions being quantified were those in his basic formula: the rate of star formation; the number of planets in each system with conditions favorable to life; the fraction of planets on which life develops; the fraction of planets that develop intelligent life; the fraction of those that evolve technical civilizations capable of interstellar communications; and the lifetime of such a civilization.)

 I then layered one more parameter onto this by scaling the probability of a particular tree path with the square of the distance.  This added a conservative estimate for the increasing probability of intelligent life as the distance from Earth increases and more stars and planets are included in the sample size.

 I Love Simulation Models

 I used the standard values used by Gamow and Hawking in their computations; however, I ignored Riemannian geometry and assumed a purely Euclidean universe.  Initially, I assumed the standard cosmological principles of homogeneous and isotropic distributions.  (I changed that later.)  Of course, this produced thousands of probable outcomes, but by using a Monte Carlo simulation over the probability distribution and the initial computation factors of Drake’s formula (within reasonable limits), I was able to derive a graph of the probability of technical civilizations as a function of distance.
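A minimal sketch of that kind of simulation, under my own simplifications (factors drawn uniformly from assumed ranges, civilizations spread evenly over a flat galactic disk so the expected count within a distance scales with the enclosed area):

```python
import math
import random

# Assumed ranges for the Drake-style factors -- illustrative only,
# not the values used in the original computation.
FACTOR_RANGES = [
    (1.0, 10.0),    # star formation rate (stars/year)
    (0.2, 1.0),     # fraction of stars with planets
    (0.5, 3.0),     # habitable planets per system
    (0.01, 1.0),    # fraction on which life develops
    (0.001, 0.5),   # fraction developing intelligence
    (0.01, 0.5),    # fraction becoming technical civilizations
    (1e2, 1e6),     # lifetime of such a civilization (years)
]

def prob_within(distance_ly, trials=20_000, galaxy_radius_ly=50_000, seed=1):
    """Monte Carlo estimate of the probability of at least one
    technical civilization within `distance_ly` of Earth."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # One draw of the whole-galaxy expected count.
        n_galaxy = 1.0
        for lo, hi in FACTOR_RANGES:
            n_galaxy *= rng.uniform(lo, hi)
        # Fraction of the disk enclosed grows with distance squared.
        n_within = n_galaxy * min(1.0, (distance_ly / galaxy_radius_ly) ** 2)
        # Poisson chance of at least one civilization in that region.
        if rng.random() < 1.0 - math.exp(-n_within):
            hits += 1
    return hits / trials
```

With a fixed seed, the estimate rises monotonically with distance, giving the same kind of rising probability-versus-distance curve.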

 But I Knew That

 As was predictable before I started, the graph is a rising, non‑linear curve, converging on 100%.  Even though the outcome was intuitive, what I gained was a range of distances with a range of corresponding probabilities of technical civilizations.  Obviously, the graph converges to 100% at infinite distances but surprisingly, it is above 99% before leaving the Milky Way Galaxy.  We don’t even have to go to Andromeda to have a very good chance of there being intelligent life in space.  Of course, that is not so unusual since our galaxy may have about 200 billion stars and some unknown multiple of planets.

 Then I made It Directional

 I toyed with one other computation.  The homogeneous and isotropic universe used by Einstein and Hawking is a mathematical convenience to allow them to relate the structure of the universe to their theories of space‑time.  These mathematical fudge‑factors are not consistent with observation at small orders of magnitude in distance from Earth ‑ out to the limits of what we can observe ‑ about 15 billion light years.  We know that there are inhomogeneities, or lumps, in the stellar density at these relatively close distances.  The closest lump is called the Local Group, with 22 galaxies, but it is on the edge of a supercluster of 2500 galaxies.  There is an even larger group called the Great Attractor that may contain tens of thousands of galaxies.

I altered my formula to take into account the equatorial-system direction (ascension & declination) of the inhomogeneous clustering.  Predictably, this just gave me a probability of intelligent life based on a vector rather than a scalar measure.  It did, however, move the distance for any given probability much closer ‑ in the direction of clusters and superclusters.  So much so that at about 351 million light years, the probability is virtually 100%.  At only about 3 million light years, the probability is over 99%.  That is well within the Local Group of galaxies.

 When you consider that there are tens of billions of stars and galaxies within detection range of Earth, and some unknown quantity beyond detection – by some estimates as many as a 1 followed by 21 zeros – that is more than all the grains of sand in all the oceans, beaches and deserts in the entire world.  And in each of those galaxies, there are billions of stars!  Now you can begin to see why the formula to quantify the number of technical civilizations in space results in virtually 100% no matter how conservative you make the input values.  It can do no less than prove that life is out there.

My Inventions!

Independently developed designs, concepts, applications and inventions 

The following are my inventions.  These are real devices or ideas that have been created, designed, modeled and/or researched enough to know that they will function or perform as stated.  Over the past 25 years, I have tried numerous times to get someone interested in these ideas without success.  I have even written to several offices within the Department of Defense and Homeland Security offering some of these ideas for free – no strings attached.  Still no bites.  Anybody interested?

1.  A rifle scope that can be used with long-range sniper rifles such as the .50 cal M82A1/2/3, AS50 and the M107.  The scope can be sighted in at 100 yards and remains sighted in out to 6,000 yards.

2.  A rapidly deployed access/entry screening system that not only detects explosives, drugs and firearms without impeding the flow of traffic, but also automatically captures any suspect for which there is an alert from the detectors.

3.  *A rapidly deployed, foolproof identification system that allows issuance without impeding the flow of traffic but cannot be copied, duplicated or spoofed.

4.  *A battlefield activity detection system that will allow for completely passive detection and identification of equipment and people regardless of the type or method of camouflage used.  It can be deployed from an aircraft completely undetected and provides its information without any detectable emissions.

5.  **A novel extension of current RFID technology to cost-effectively allow identification of the entire contents of ships, trucks, containers, pallets, boxes and crates with currently used and deployed technology.

6.  A novel use of an in situ nationwide data network to deploy NBC detectors nationwide without the expense of building a new network.  The system would require only minor changes to the existing system and the addition of NBC detectors.

7.  *An automatic and computerized method to detect stress in airframes, buildings and ships before it becomes visible or causes structural weakness.  The system is easy and cheap to deploy and would work as a warning or diagnostic tool.

8.  **A relatively simple addition to the management of the Ready Reserve and National Guard forces that will provide an advanced national technology capability in response to emergencies or threats to our technological infrastructure.  This method will increase Reserve and Guard recruitment, improve our national response capability, significantly reduce our costs for high-technology skills and improve employment on a national level.

9.  *An improvement to sailboat hull design that significantly reduces weight, improves performance, increases speed and improves stability.

10.  An improvement in propeller design that allows for rapid adaptation from powerful, high-speed thrust to quiet, no-cavitation thrust, and from forward to reverse, while maintaining shaft speed in one direction.

11.  **A design and plan for the cost-effective deployment of secure wireless computer networking on military bases in support of: (1) base-wide networking (to and within buildings that would otherwise be too expensive to wire into a network); (2) field training using computer-based training (CBT) and computer-aided instruction (CAI); (3) base-wide facility management (inspections, work order requests, reports, supply, vehicle tracking, etc.); (4) inexpensive and automated monitoring and physical security with motion-activated or event-activated video/audio recording; and (5) delivery of internet access to base housing and barracks.

12.  *A design and plan for the cost-effective deployment of a method to reduce the cost of electricity use on military bases.  The method applies an “energy profile” against power rates to optimize loads and energy use.

13.  *Business and technical plan for the improvement of science and math teaching effectiveness in grade school and high school while offering the potential for improved recruiting, lower educational budgets and increased employment of retiring military members.

14.  **A design and plan for the implementation of a web-based mentoring system of retired and active military to provide advice, insights, information and moral support for junior active duty members and potential recruits.  The method is based on proven effectiveness in similar applications.

15.  **A design and plan for the improved analysis of the “Personnel Pipeline” from recruiting management, through training, to force readiness.  The method uses a dynamic, real-time, 3-D graphical representation of data in an intuitive visual presentation that greatly improves demand and trend analysis.  The plan uses highly sophisticated software designed for 3-D graphical representation of data that is already created, in use and owned by the US government.

16.  **A design and plan for the rapid sorting, categorization and improved analysis of the freeform text of intelligence reports from all sources.  The design allows for the immediate cross-connection of multiple agency computer systems without regard for the processor, language, encryption or network protocols, while allowing for greatly improved identification of trends, developing issues, pattern recognition of events, and emphasis tracking and analysis of targets.  The method uses a combination of techniques derived from document management systems and visual data representation for automatic pattern recognition.  It uses a dynamic, real-time, 3-D graphical representation of freeform data in an intuitive visual presentation that greatly improves demand and trend analysis.  The plan uses highly sophisticated software designed for 3-D graphical representation of data that is already created, in use and owned by the US government.

17.  A unique solar panel that is made from discarded parts of old appliances but will create enough heat to boil water with only about 2 square feet of panel surface.  The entire solar heater system design will heat a garage without any externally added energy.

18.  Simple device to allow any cordless phone to be used as a remote computer modem, creating an inexpensive ($20) home network without using NICs, hubs or routers.

19.  Design for a simple and inexpensive geothermal heating system that is totally passive (no externally added energy, no maintenance, no controls, no need for any attention, no moving parts) but will remove snow and ice from driveways, sidewalks and roads.  It also can be used to reduce (but not replace) home heating costs.

20.  New design for a clock that uses colors to relate time value in an intuitive analog manner that can be artistically matched to a décor’s color scheme while providing a novel modern version of a “grandfather clock”.

21.  Design for a device that will automatically “scan” and create detailed architectural drawings of old buildings, caves, tunnels, etc.  It will create a 3-D wire-frame and a fully surface-textured, exact-scale rendering from which accurate measurements can be taken (of the graphic model) that will be within .001” of the real surface.

* = Report, proposal or design            ** = PowerPoint presentation or software developed

Plato – The Birth of an Automated Research Tool

In the early 80’s, I was in Mensa and was trying to find some stimulating discussions of the outer limits of science.  I was an R&D manager for the Navy and was working for NRL in some very interesting but highly classified research.  I was careful to avoid any talk about my work but I really wanted to explore areas that I could talk about.  This was one of several attempts to do that.  I sent the message below to a young professor at Lawrence Livermore National Labs, who was running a Mensa discussion forum on ARPANET, in the hopes of getting something started.  He was working with artificial intelligence in math and robotic systems at the time.   

Remember, this was written in 1984.  The Apple Mac was one year old.  TCP/IP had just been introduced on ARPANET.  Windows 1.0 was introduced in 1985, but I did not begin using it until version 3.1 came out.  The fastest processor was an Intel 286.  Almost all software ran in DOS.  This message was originally sent via UUCP, but I saved it as ASCII text onto tapes and then later translated it to disks with the idea of someday writing a book, but I never did.  Enjoy….. 


This is my first contact with one of the Mensa discussion forums.    I found a few guys that were willing to talk to me but it seems I ticked off a lot of others by my lack of due respect for puzzles and my references to the “wing nuts and space cadets” that inhabit and comment on most of the Mensa forums.   🙂   I eventually formed my own forum, web site and discussion groups and a bunch of us proceeded to talk our way into a lot of business together. 

=====================================================================  September 9, 1984 

Hi.  I’m new to this board but I have an interest in the subjects you discuss.  I’d like to open a dialog with some of you about your ideas and what you are interested in and have analyzed or studied that may be interesting.  I’m no Mensa guru but I do like a mental challenge and the application of science but more importantly, I think there is a difference between knowledge and wisdom.  I seek the latter.   

Who am I: I guess what I need to do first is try to tell you who I am and perhaps try to establish a little credibility so that you won’t think I really am a National Enquirer writer or some wing nut with wild ideas.  Then I’ll present some simple but somewhat radical ideas to start with and see how it goes.  If there is any interest in going further, I’d love to get into some really heavy stuff about life, existence and the future.  I am particularly interested in discussing cosmology and the human animal, but that is for later.

I’ve been developing a methodology for predicting events and narrowly defined aspects of certain subjects based on social and technical inputs from a vast information correlation program I use……But that should wait until I find out if anyone is even interested in this stuff. 

I have been working near the Washington DC area for a number of years.   I am a researcher that deals in analysis and logic.  I enjoy a mental challenge similar to what I perceive that many Mensa types like but I don’t enjoy the meaningless math puzzles or secret decoder ring stuff.  I prefer to ask or pursue the real mysteries of life and nature.   

I have a few technical degrees and have traveled and been schooled all over the world.  That was mostly a product of my parents being in the military and my early jobs.  I became interested in computers as soon as they came out.  I helped build an ALTAIR at the University of New Mexico’s engineering fair in 1971-72.  That was where the first “microcomputer” was created; the Altair came a few months later.  It introduced me to computers, but I immediately switched over to the software aspects of computers rather than become a hardware hacker.  I got an EE degree first, so I understand the hardware; I just think it’s secondary to getting the job done.  Then I got a CS degree and began to see the possibilities.  I did 40 credit hours of studies in computer simulations and loved it.  I was using math so much in my CS degree that I discovered that for one more semester, I could also get a BS in Applied Math – which I did.  Then I discovered that with just one more semester, I could get a degree in Physics, so I did that too.  By then my parents were out of money and I had to get a job.  Ever since then, I have been messing with computers.

I was particularly fascinated by the speed of computers.  I won an award one time for being the only student that solved a particular math problem using an algorithm that would fit into 2K of RAM.  I did it simply by adding one to a variable and checking to see if that solved the equation ‑ if it didn’t, I added one more.  It worked.  While working on one of the first OCR studies, I was captivated by the fact that the computer could find any text, no matter how much it had to search, in seconds that might take a person years to find.  That has been a driving force ever since.
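That brute-force trick is about as small as an algorithm gets; a sketch of it (the equation shown is just an example, not the original contest problem):

```python
def brute_force_solve(is_solution, limit=1_000_000):
    """The whole 2K algorithm: add one to a candidate and test it.
    `is_solution` is a predicate returning True for a solution."""
    x = 0
    while x <= limit:
        if is_solution(x):
            return x
        x += 1
    return None  # no solution found below the limit

# Example: the smallest non-negative integer with x*x == 1089 is 33.
print(brute_force_solve(lambda x: x * x == 1089))  # prints 33
```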

 What is my Resource Tool? I liked software, but I wanted to get to the things that I could see that a computer could do ‑ not spend my time writing code.  I became good at modifying and interfacing existing software to do what I wanted.  I found that this was much easier than writing my own code.  I got the original WordStar to talk to VisiCalc and dBASE on an old CP/M Kaypro so that I could get automatic documents that updated themselves.  That was fun, but I wanted to apply the efforts more to real-world applications.

The programming was slow because I tend to think in pictures, and I wanted the programming to think in pictures also.  I found a program that would reverse engineer a source code listing into a flow chart of the program.  It was crude, but it worked.  I figured it would be even better if you could go the other way ‑ input a flowchart and get a compiler to write the code.  I bought a flow chart program and a Fortran compiler and made them talk to each other so that I could use the graphics of the flow chart program to create a chart of my program flow and then feed it into the compiler to get object code.  I have improved on it over the last several years so that I can input natural-language variables and verbs and it interprets for me.  If it doesn’t understand some variable relationship and can’t figure it out by seeing it in context, it stops and asks me.  I now can spend most of my time using a program instead of writing it.

 CLICK! Necessity is the Mother of Innovation

The first real application of this program was when I became a player in the stock market and discovered it was easy to improve my investment decisions if I could get my hands on the right information.  The information was available, there was just no way to find it, link it and give it structure and purpose using the speed of the computer.  That was the start of my effort to create a better information search and retrieval system.   

 The Hardware + Software

In short, I created some special searching software that helps me find anything about anything and then automatically identifies links, relationships and implications for me.  I know that sounds like a bunch of pie in the sky, but it really isn’t all that hard to do.  There are, in fact, several programs on the market now that do the same thing – only I did it first on a Radio Shack TRS‑80 in 1979, then again on an Apple II+ in 1983, again in 1987 on a Mac, and most recently on an MS‑DOS machine (from PC to XT to 286 and now a 386).

My method has evolved over the years and now uses some fuzzy logic and thesaurus lookup techniques along with a smart indexing and interfacing to my CD‑ROM and hard disk databases.  I built it over several years in modular form as I added new hardware or new processing capabilities.  The flowchart compiler helped me move the code from one machine to another since the source code (the flow chart itself) remained essentially the same, only the compiler code changed.   I now have a mini‑LAN of four computers and it will pass tasks to other computers, in the form of macros, so I can get parallel searches going on several different information resources at the same time.    That also lets me proceed with the analysis while some slow peripheral, like the tape deck, is searching.   
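The fan-out idea is the same one that modern concurrency libraries make trivial; a sketch of the pattern in today's terms (the resource names and the stand-in search function are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def search(resource, query):
    """Stand-in for a slow search against one peripheral or database."""
    return f"{resource}: results for {query!r}"

def parallel_search(resources, query):
    """Fan the same query out to every resource at once and collect
    the answers as they finish -- analysis can proceed meanwhile."""
    with ThreadPoolExecutor(max_workers=len(resources)) as pool:
        futures = {r: pool.submit(search, r, query) for r in resources}
        return {r: f.result() for r, f in futures.items()}
```

The slowest resource (the tape deck, say) no longer blocks the others; each search runs in its own worker and results are gathered as they complete.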

 De Facto Credibility

This search software will also interface with script files for on‑line searches like IQUEST, DIALOG and about 270 others, including several military and government databases and gateways (NTIS, DTIC, FEDLINK, etc.) that I have access to as a function of my job.  For the CompuServe Information System (CIS), the command structure that programs like TAPCIS use makes it easy to initiate an on‑line search.  The slowest part of it is waiting for the on‑line responses from the dial‑up service that I am using, but at work I can use some really fast lines on ARPANET.

I also have access to a few foreign databases that are the equivalent of our UPI, AP and CIS’s IQUEST.  The European (based in Germany) databases have lots of technical data and the Japanese databases have collated worldwide news reports from about 30 nations.  I use some lines from Cable & Wireless that I am able to bill to my job.  The translation services allow me to issue a search in English and read the response in English but the data searched is in one of several languages.   I can get into a lot of this stuff for free but there is also a lot that costs money.  That’s one of the reasons I got permission and started using all these resources at work.    

 Plato is Born

Still, the on‑line search costs are why I tried to build up my own research capabilities.  I use a page‑feeder scanner and OCR software to read in books and other texts to add to the info databases that I can search.  There is a used bookstore near me that sells books dirt cheap or will take trades for other stuff (non‑books).  This makes it possible for me to buy a book, rip it apart and feed it into the page‑feed scanner.  Then I can throw the book away.  Since I never, ever let anyone else use the database and never quote directly out of the texts, it’s not a copyright violation.

400 CD‑ROMs, 90 (14 inch) laser disks, 250 or so tapes and perhaps 5000 disks of compressed (zipped) text files gives me immediate access to my own database of about 500 gigabytes of text or about 500 million pages.  Some of this has line pictures but most of it is just pure text because the OCR software does not translate the images – just the text.  That is a loss but if I think the image is important, I scan it and save it on a disk.  Add to this on‑line access to about 3500 databases, including some I can get to at work, containing perhaps 50,000 times as much as I have, and you get some idea of how powerful my search capability can be.  I call my search program, “Plato”.   

 Concept Searches: With Plato, I am able to input a search “concept” instead of a search “syntax”.  It will automatically cross‑reference and expand the search into related subjects based on parameters I set.  It took a long time to learn how to phrase my search syntax, but I usually get back just the data I want.  Plato saves the search paths, resources and references in a bibliography format in case I need to refer to the source.
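A toy sketch of the concept-expansion idea (the thesaurus entries and documents here are invented; the real system mixed GOPHER, FOLIO VIEWS and far larger word lists):

```python
# Invented mini-thesaurus mapping a query term to related words.
THESAURUS = {
    "gravity": {"gravitation", "gravitational"},
    "shield": {"shielding", "screen", "block"},
}

def expand(terms):
    """Grow a search 'concept' from the literal query terms."""
    concept = set(terms)
    for term in terms:
        concept |= THESAURUS.get(term, set())
    return concept

def concept_search(query_terms, documents):
    """Score each document by its overlap with the expanded concept."""
    concept = expand(query_terms)
    scored = []
    for doc in documents:
        score = len(concept & set(doc.lower().split()))
        if score:
            scored.append((score, doc))
    return sorted(scored, reverse=True)
```

A query for the concept "gravity shield" then matches a document containing only the words "gravitational shielding", which a literal syntax search would miss.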

When you think about it, these are all pretty simple and commonly used techniques found in lots of commercially available software.  Searching compressed (zipped) text data is handled very well by Lotus Magellan.  Lots of search software is available, but I settled on a mix of GOPHER and FOLIO VIEWS with some added fuzzy logic and thesaurus lookup techniques, which I enhanced after seeing some spell checkers that looked up words phonetically and with missing letters.  The interfacing was simply a matter of finding hooks in other programs or putting front‑ends on them to get them to talk to each other.  If all else fails, I just use a script file and/or keyboard macro in a BAT or BIN file to simulate the manual typing in and reading out of text.  That always works.

 Linking Information Resources:

There are lots of programs that can search one database or a selected set of data sources.  All I did was add a few extra features (script and macro files) to make it move from one reference to another and to quantify the validity of the data, and write some interfacing software to make other programs, which already do parts of this, work together.  Using some of the research techniques and capabilities that Plato allows, I have been able to identify some very interesting linkages and cross‑references to concepts that may be of interest to people in this forum.  I have also been able to fairly easily dismiss some of the quackery and screwballs that sometimes frequent these idea exchanges.   
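
The keyboard-macro trick has an exact modern analogue: when a program offers no hooks, drive it the way a script file would, by feeding text to its standard input and reading its standard output back.  This is only a sketch of that idea; the "tool" here is a stand-in one-liner, not any of the actual programs named above.

```python
# Sketch of gluing programs together with no API: "type" scripted input
# at an external tool's stdin and capture whatever it prints on stdout.
import subprocess
import sys

def run_tool(command, input_text):
    """Run an external tool, feeding it input_text and capturing its output."""
    result = subprocess.run(
        command, input=input_text, capture_output=True, text=True, check=True
    )
    return result.stdout

# Stand-in "other program": a tiny one-liner that uppercases its input.
tool = [sys.executable, "-c",
        "import sys; sys.stdout.write(sys.stdin.read().upper())"]
reply = run_tool(tool, "folio views\n")
print(reply.strip())  # -> FOLIO VIEWS
```

Chaining several such calls, with each tool's output fed to the next one's input, is the batch-file pattern the text describes, just expressed in one language.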

 And Then What?

I am a serious and scientific researcher, and I am not interested in some of the nuts and liars that grab scientific or technical words at random and make up their own versions of reality.  On the other hand, I consider the majority of science to be somewhat boring.  I may not KNOW everything, but I don’t need to if I can find out what I need to know in only a few minutes on the computer.  Besides, even if the answer to any question is right there on the screen, I still have to read it, and after a while that mounts up to a lot of reading.   

It’s like having a dictionary.  Anytime you wanted to know what a word meant, you’d look it up, but most people wouldn’t sit around all day looking up words just for fun.  Now imagine the same thing with a very good set of encyclopedias.  There would be a lot more information, but after a while, just knowing that you can find it would be enough.  Now imagine a set of encyclopedias that contains 87 billion, 500 million pages of text!  That’s how big my dictionary is.  OK, so it’s not really that big, but we are talking about the size of hundreds of libraries.

The one advantage that I think I have over many people is that I believe the answers to most of our questions are out there somewhere.  Many people don’t even think to ask if they believe the answer is not available.  Let me give you an example.  I worked as a part-time consultant to government contractors for a while, and I often dealt with clients that were preparing a proposal for a contract.  When I told them that I could get detailed information about what their competitors were doing, most thought I couldn’t, or that it would have to be done by illegal means.  I can, and it’s legal.  In many cases I can get not only what the competitors are going to bid but their cost structures and their past performance.  I can even get the salaries of the people doing the bidding.  After a while, my clients started asking me for information that it would never have occurred to them to request before I came on the scene.   

 Monotony: Getting back to that incredibly large dictionary, it might be fun to look up stuff for a while, but pretty soon you would stop looking up random subjects and try to find some real challenges.  I got to that point about 4 years ago, shortly after I finished the prototype for my first PC-based search software.  I have expanded its capabilities as new databases became available.  The addition of the scanner to read in hardcopy text was a big improvement.  I was able to select books in topic areas I wanted or to fill in gaps in coverage.  The scanners have been going, on average, about 2‑4 hours a day for the last several years.   

 The Hawking Incident

As I added new data, it was fun for a few days to search for some incredibly minuscule detail, or to try out a fuzzy search and chase down some concept.  I particularly liked writing to Stephen Hawking and telling him I thought I had determined the size of the universe.  He was very polite when he said, “I know!”   

That incident was one of many where I began following a trail of information that made me believe I had “uncovered” some new idea or concept I had not heard before, only to find out upon deeper research that it had already been discovered.  With all this information, it is a very humbling thought to realize that someone out there knows at least some part of all of it.  I guess there is something to be said for being able to consolidate and cross‑reference all of this information and focus it down for a single person.  It has the net effect of allowing me to ask questions that lead me into areas I would never have known to follow.   

It is very useful to integrate across scientific study areas.  For instance, medical people seem to know very little about electronics or physics, and vice versa.  The result is that scientists in each field limit their view of the world by only seeing it from their own field of study.  Only in the last few years has there begun to be a cross mixing – things like a tiny pill made of SMDs (surface‑mount devices) that a patient swallows.  The pill has a sensor array and a transmitter that sends data to a receiver outside the body.  The term non‑invasive gets redefined.  It seemed like ages before they began to introduce virtual reality to medical systems and robotics, and yet it seemed to me a perfectly natural mix.  I felt that as soon as a movie like TRON was made, it would be only a matter of time before robotics, animation and computer graphics were combined into a 3‑D viewer, but it seems that it is just now catching on.   

But What Has This All Got to Do with You?

Now it is at this point that I must choose a topic to discuss with the people of this forum.  I enjoy almost any intellectual discussion, from religion to cosmology to the human potential, but I prefer a topic that is perhaps a little further out than most of these and that mixes a lot of hard-core science and math with some logic and speculation.   

I am very curious about the fringes of science – the areas where conventional science is afraid or unwilling to conduct real research but that have an unusual following of “believers”.

_____________________________________________________________

So, Dennis, what do you think? 

___________________________________________________

2007 Update: 

In the late 1990s, I updated Plato with a modern Windows GUI and object-oriented OCX files and modules.  I expanded into a dBase DBMS engine and SQL interfaces.  I was able to multiplex multiple modems using some ISP software so I could use multiple lines of input.  Later, I extended this to multiple computers on a TCP/IP network using broadband.  It still relied on macros and keyboard simulators to interface with other commercial and proprietary software, but its parallel operations equated to massive processing power.  I have continued to make use of a lot of web sites and online services that I can access as a result of my government work, and that gives me a huge advantage over simple web searches.  I have also improved the bi-directional translation capability so I can tap into databases created in other countries. 
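
The multiplexed-lines idea is essentially a fan-out search: issue the same query against several sources at once instead of one line at a time.  A rough sketch, with made-up source names and a fake index standing in for real modem or network I/O:

```python
# Sketch of fanning one query out to several sources in parallel and
# merging the results.  Source names and hit counts are illustrative.
from concurrent.futures import ThreadPoolExecutor

SOURCES = ["upi", "ap", "iquest", "eurodata"]  # made-up source names

def query_source(source, term):
    """Pretend to query one remote source; returns (source, hit count)."""
    fake_index = {"upi": 3, "ap": 1, "iquest": 7, "eurodata": 0}
    return source, fake_index.get(source, 0)

def parallel_search(term):
    """Query every source concurrently, then rank sources by hit count."""
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        results = list(pool.map(lambda s: query_source(s, term), SOURCES))
    return sorted(results, key=lambda pair: pair[1], reverse=True)

print(parallel_search("chaos theory"))
# -> [('iquest', 7), ('upi', 3), ('ap', 1), ('eurodata', 0)]
```

With real network calls in `query_source`, the thread pool gives the same effect as the multiple modem lines: the slowest source, not the sum of all of them, sets the total search time.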

I have also since expanded its ability to search for themes, concepts and related ideas while improving its ability to quantify the relevance of those findings.  It still takes hours to resolve most of my searches, but I let it work overnight and sometimes over the weekend.  The end result is a very useful tool that I find helpful but, as noted above, it is not perfect and still falls far short of the human mind. 
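
One common way to quantify relevance is a TF-IDF-style score.  I can't say that is what Plato used, but it illustrates the idea: a term that appears everywhere counts for less than one concentrated in a few documents.  The corpus here is purely illustrative.

```python
# Sketch of quantifying relevance with a simple TF-IDF score:
# term frequency in the document, discounted by how many documents
# in the corpus contain the term at all.
import math

def tf_idf(term, doc_words, corpus):
    """Score how relevant `term` is to one document within a corpus."""
    tf = doc_words.count(term) / max(len(doc_words), 1)
    containing = sum(1 for doc in corpus if term in doc)
    idf = math.log(len(corpus) / (1 + containing))  # smoothed IDF
    return tf * idf

corpus = [
    "chaos theory and market prediction".split(),
    "market news market prices".split(),
    "gardening news".split(),
]
scores = [round(tf_idf("chaos", doc, corpus), 3) for doc in corpus]
print(scores)  # -> [0.081, 0.0, 0.0]
```

Only the first document scores above zero, because "chaos" appears there and nowhere else; a search front-end would sort results by this score before showing them.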

Welcome to 21st Century Science…Fiction?

Welcome to the world of near fiction, or perhaps it is near science.  The best stories have always been those that are true but maybe not totally true.  We readily accept this idea in writings like historical novels and movies like docu-dramas, so why not in science?  The following stories are true, or not – only you can decide, but don’t be too quick to judge until you check it out.  By that I mean, use the web to see if you can find “anything” in these stories that is not based on real or possible science.  You might be surprised.