Category Archives: This is Probably True!

Some of these stories don’t need to be fiction to sound incredible. I have traveled the world and lived a long life. Here is what I have learned.

B-17 Miracle

The B-17 Miracle and PVT Sam Sarpolus

A mid-air collision on February 1, 1943, between a B-17 and a German fighter over the Tunis dock area became the subject of one of the most famous photographs of World War II. An enemy fighter attacking a 97th Bomb Group formation went out of control, probably with a wounded or dead pilot. The Me 109 crashed into the lead aircraft of the flight, ripped a wing off that Fortress, and caused it to crash. The enemy fighter then continued its crashing descent into the rear of the fuselage of a Fortress named All American, piloted by Lt. Kendrick R. Bragg of the 414th Bomb Squadron. When it struck, the fighter broke apart but left some pieces in the B-17. The left horizontal stabilizer and left elevator of the Fortress were completely torn away. The vertical fin and the rudder had been damaged, the fuselage had been cut almost completely through – connected only at two small parts of the frame – most of the control cables were severed, and the radios, electrical and oxygen systems were damaged. The two right-hand engines were out and one on the left had a serious oil pump leak. There was also a hole in the top that was over 16 feet long and 4 feet wide at its widest, and the split in the fuselage went all the way to the top gunner's turret. Although the tail actually bounced and swayed in the wind and twisted when the plane turned, one single elevator cable still worked, and the aircraft still flew – miraculously! The turn back toward England had to be very slow to keep the tail from twisting off. They actually covered almost 70 miles to make the turn home.

The tail gunner was trapped because there was no floor connecting the tail to the rest of the plane. The waist and tail gunners used straps and their parachute harnesses in an attempt to keep the tail from ripping off and the two sides of the fuselage from splitting apart further. British fighters intercepted the All American over the Channel and took one of the pictures that later became famous – you can easily find it on the internet. The fighter pilots also radioed the base, describing how the empennage (tail section) was "waving like a fish tail," saying the plane would not make it, and asking them to send out boats to rescue the crew when they bailed out.

Two and a half hours after being hit, the aircraft made an emergency landing, and when the ambulance pulled alongside, it was waved off, for not a single member of the crew had been injured. No one could believe that the aircraft could still fly in such a condition. The Fortress sat placidly until the crew had all safely exited through the door in the fuselage, at which time the entire rear section of the aircraft collapsed onto the ground and the landing gear folded. The rugged old bird had done its job.

This event topped off an impressive streak of good luck for the crew of the All American. In the entire 414th Bomb Squadron, for the entire war, they were the only crew that survived without a single major injury through their full 25-mission assignment. This incident happened on their 25th mission, and as a result the entire crew was given orders to other, non-combat assignments following their return from this flight.

See  http://www.reddog1944.com/414th_Squadron_Planes_and_Crews.htm

http://garfieldsteamhouse.org/History/WWII/WWII-B17-Survival-Story.php

B-17 “All American” (414th Squadron, 97BG)


That is the story that has been told and repeated for the past 70 years, but there is something that has only recently come to light. Lt. Bragg was busy flying the plane, but he was in constant contact with the two waist gunners, Sgt. Henry Jebbson and Pvt. Michael "Mike" Zuk, who kept Bragg informed of the condition of the tail as they made their attempts to strap it to the rest of the plane. Henry and Mike also tried several times to reach the tail gunner – Pvt. Sam Sarpolus – but there was just too much structural damage to the aircraft. All of the crew have since died except Mike and Sam, and this new aspect of the story comes from Mike. Sam was the youngest member of the crew at only 19 years old – with red hair and freckles. Mike was the next youngest.

I met Mike at a Silver Eagles meeting in Pensacola in 2004. He was 81 and very frail and talked slowly because of a stroke, but there was nothing wrong with his mind. Few of the other partygoers were willing to take the time to talk to Mike, but I did. I took him into another room where we talked for more than 4 hours. He told me about the flight and his life after that. He became an enlisted pilot (a Silver Eagle) during the war and ferried aircraft over to England from the US. When I asked him if any of his crew was still alive, he said, "Only Sam, and of course he will be for a long time." I wondered what he meant and asked. He smiled and said there was much more to the story than anyone has ever told. It wasn't Henry and himself that held the plane together. It wasn't Lt. Bragg's careful flying... it was Sam.

Mike went on: "The whole time we were flying that day after the collision, Sam sat backwards in the tail gunner's seat with his hands out like he was stopping traffic and his eyes closed. He never moved from that position... except once. One of the fighters flew too close to us and his prop wash shook the All American hard. We heard metal cracking and one of the two beams of the frame that was holding it together snapped. At that moment, Sam opened his eyes, looked straight at the broken beam and pointed to it with one hand while still holding the other out 'stopping traffic.' Henry and I turned to look at what Sam was pointing to just in time to see a blinding light come from the break. When our eyes cleared, we could see that the beam had been fused back together again. We both snapped back to looking at Sam and he had gone back to holding his hands up with his eyes closed, but he had a smile on his face.

He sat like that until after we landed. They had to cut open the front of his gunner's position and pull him out through the window, the whole time with him holding his hands out. Everyone thought he was scared or frozen stiff. When he was put down on the ground, he still had his eyes closed. I finally told him that everyone was out of the plane and he opened one eye, looked at me and said, "Really?" I assured him everyone was safe and then he put his arms down. When he did, the old B-17 broke right in half – the tail fell off, the #3 engine burst into flames and the landing gear collapsed. Sam looked at Henry and me and smiled and said, "Don't tell anybody – I'll explain later."

It was three weeks before we met with Sam in a quiet pub and had a long talk with him. Sam said he didn't know how he did it, but he could move stuff and make things happen just by thinking about it. He said he had been busy during most of the flights keeping bullets from hitting any of the crew members. We were the only crew that ever flew 25 missions without having a single crewman shot up. We just stared at him, and then Henry and I both said "bullshit" at the same time. Sam said, "No, really, let me show you." He pulled out his Ka-Bar sheath knife, handed it to Henry and told him to stab his hand. Henry said, "No," so Sam said, "OK, then just stab this table napkin." Henry raised the knife and plunged it down toward the table. There was a loud thud, but the knife stopped about one inch above the napkin. Henry pushed with both hands and then leaned his entire body onto the knife, but it would not go that last inch into the table. Sam said that bullets were harder to do, but he had had a lot of practice.

We spent hours talking and testing Sam over the next few days before he went back to the US and we were reassigned to a USO tour to talk up our flight in the All American. It seems that Sam has a rather well-developed ability of telekinesis that allows him to control objects with his mind – not just move them, but manipulate them even at an atomic scale. That was how he welded the aluminum beams in the B-17 and created a sort of force field around each crewman when we were attacked. We wanted to tell other people and told Sam that he would be famous if he would let us, but he made us promise to keep it a secret.

Mike said I was the first person he had ever told. After telling me, Mike sat there very quietly, as if he regretted telling me. I waited a while and we sipped our drinks. Mike finally spoke: "I wonder if Sam remembers me?" I asked if he had seen Sam since the war. Mike said, "The next time we talked was about 1973 or so. We met at a Silver Eagle reunion in San Diego. I didn't know Sam had gotten his enlisted pilot's license also. That was the only reunion that Sam ever attended. When I saw him, I recognized him immediately – and then realized that the reason I recognized him so quickly was because he looked pretty much like he did 30 years earlier. He had grown a mustache and dyed his hair, but he did not look like he had aged much at all. He and I went off into a corner of the bar and talked for hours. It seems he liked helping people, and he got a job as a paramedic on a rescue truck. He was very well qualified and confided in me that he often used his powers to help him in an emergency. Because he seemed not to age very fast, he could only stay a few years at each job, but his skills were in high demand and he could get a job anywhere he went. He also had jobs as a policeman and a highway patrol officer." Mike would stop and stare at the floor every so often as he got lost in memories and thoughts.

One of those moments when Mike stopped to stare turned into several minutes. I said his name several times but he did not respond. Finally, I touched his arm and asked if he was OK. Mike grimaced, then grabbed his chest and rolled out of his chair onto the floor. I recognized the signs of a heart attack and called for help. In an instant, a large crowd of people had gathered around him, and calls for a doctor and 911 were shouted. Someone put a large coat over Mike to keep him warm and another put a rolled-up coat under his head for a pillow.

As I was sitting in my chair, holding his hand, someone with a hat on bent down from the crowd and leaned over Mike. He put one hand on Mike's forehead and the other under the coat on his chest. I thought it might be a doctor trying to check his vital signs, but the person just froze in that position. I watched intently and then noticed a slight glow of light coming from under the coat. No one else seemed to notice, but I'm sure I did not imagine it. After about 15 seconds, Mike opened his eyes and looked up. He smiled and said, "Hi Sam." The man in the hat then got up and melted back into the crowd. I asked Mike if he was OK and he said he felt fine and that he wanted to get up off the floor.

As I helped him up, I saw the man with the hat go out the door of the room we were in. I sat Mike down and rushed out the door, but there was no one anywhere in sight. I rushed back to Mike, who was shooing everyone away and sipping his drink. I sat down with him and said, "Was that Sam?" Mike said, "Oh yeah, he seems to come whenever I need him – that's the third time he has done that." "Done what?" I asked. Mike winked at me and said, "You know, you saw it." Then he said, "I'm getting tired and I need to go. It has been good talking with you." I asked if we could talk again, but Mike told me he was traveling back home early the next morning. I asked if he knew where I could find Sam. Mike turned to me, smiled and said, "I have no idea where he lives, but every time I have needed him, he shows up."

I spent two years searching for Sam with no luck. I carried a picture of him from his days of flying the B-17, but had it cropped and colored so that it did not look like an old picture. I showed it to anyone I thought might have seen him. He did not have a social security number and there were no public records of his name anywhere in the US. During my travels, I passed through Las Vegas and, just out of habit, I showed Sam's picture around. The second night I was there, the desk clerk at my hotel said he recognized Sam. Sam came about twice a year for only two or three days, played roulette and keno for a few hours in each of several hotels, and then left town. He seemed to have remarkably good luck, and the desk clerk said that he was always generous with the tips and always seemed to be smiling. I smiled and agreed.

I figured I had been looking for Sam the wrong way. Instead of trying to find someone who had seen him by showing his picture, I took another tack. I started looking in newspapers and online for happenings that seemed unexplained or very much out of the ordinary. I started with the first few days after he was last seen in Las Vegas and looked in a 500-mile circle around Vegas. I was surprised at how many such events were reported on the internet and in YouTube videos, but by reading each one, I narrowed it down.

One was from a small town in central Utah called Eureka – just south of Salt Lake City. It was reported that someone had tipped a waitress at the local truck stop $500. It turned out that she needed about that much to pay for a home medical device that her son needed for his severe asthma. I drove to this small town and found the waitress. Her name was Sally. She was reluctant to talk about it because of all the news attention she had gotten, but when I showed her Sam's picture, she clearly recognized his face – then hesitated for a minute and said that was not him. I assured her that I was not a reporter and that I did not want to harm him. I showed her my previous stories about the B-17 and his days in the Silver Eagles. She sat down with me in a quiet corner of the diner and we talked. She said he was quick to pick up on her sadness about her son and listened intently as she described the problem. She had been saving for an aspirator for her son Jimmy, but times were tough, not many people were leaving tips, and business at the truck stop was slow outside of tourist season. When Sam left, she said, he smiled, held her hand and said, "Thank you, and say hello to Jimmy for me." Sally stopped for a moment and then said, "To this day, I don't know what he was thanking me for – I only gave him coffee and he didn't even finish that."

I used the date Sam was in Eureka and began the search again. I found another story in Ketchum, Idaho, where someone paid to have a house rebuilt for a single mother with four kids. Her husband had been killed in Iraq in 2009 and she had struggled to make ends meet, but when a fire burned down their house, she was faced with having to send her kids to foster homes. Someone paid a local contractor to build an entire house on their old lot and then put $10,000 into a bank account in her name. She never saw the donor, but at Perry's restaurant on First Ave., a waitress who received a $100 tip confirmed that it was Sam.

I repeated this search pattern and tracked down more than a dozen places where Sam had stopped by some remote town or obscure business and helped someone out. Most often he paid for something or gave money to someone. About half the time, no one knew it was him, but what he did seemed to follow a pattern. He would show up just as some situation was about as serious as it could get, and he seemed to know exactly what was needed and exactly who needed it. He never seemed to stay overnight in the towns where he helped someone, and he didn't seem to do much investigating or asking around. He often spent less than 10 minutes at the place where he did his good deed, and then he was gone, completely out of town. I didn't meet one person who knew his name.

I followed his trail up through Idaho and western Montana, then east through North Dakota and then south all the way to southern Texas. He did his good deeds every 300 to 400 miles, about every other day. Sam stopped along the way at casinos on Indian reservations, and he also bought lottery tickets the day before the drawings. He often won. He always paid the IRS its taxes immediately, but I found out that he was using different social security numbers so that no one really knew who he was.

In Kansas, I found a state trooper who told me about a 25-car pileup that happened in a major storm on I-135 just outside of McPherson. Lots of people were hurt, but when the paramedics came, they found that no one had any broken bones or life-threatening injuries. Sixteen of the accident victims said that someone had come to their car shortly after the crash and "fixed" them. They described a young-looking man with red hair and freckles who calmed them down and then rubbed their legs or arms where it hurt, and it stopped hurting. The medics said that the blood found in some of the cars indicated that there had been some very serious injuries, but when they examined the people inside, they found no cuts or bleeding on any of them. No one saw Sam come or leave, and most of them just called him an angel.

I don’t know who or what Sam is and maybe he doesn’t either.  He roams around doing good deeds, saving lives and bringing a little peace and happiness to everyone he meets.  He obviously wanted to remain unknown and I finally decided that I needed to honor that so I went home.



My Bathtub – My Fountain of Youth

I have written many times that I spend quite a bit of time at my place in BC, Canada. It is an isolated place that was carved out of an existing cave in a rock face near a lake. I have added a lot of my power generation gadgets and installed lots of technology to give me all the comforts of home while being miles from the nearest town (by road). Actually, New Denver is only a few miles away, across the lake. I had a Cessna 206H with pontoons until last year and then I traded up to a Beriev Be-103 Snipe. It's a twin-engine six-seater that gives me increased range and speed to fly back to the states. I liked the reversible props for water and ice control, and I opted for the radar, extra fuel tanks and autopilot. I can go over 1,000 miles in one hop at 120 kts. It's a great plane, even though it is Russian.

The living space in BC is a large natural cave that I expanded significantly, adding poured concrete floors to level it. I have 14 different rooms; the largest is about 30 feet long and 40 feet wide with a 25-foot ceiling. Most of the cave is carved out of hard stone, but when I was building in it, I tapped into a fresh water spring that was flowing in a rather well-defined channel through the stone. I maintained the natural channel – even enhanced it – but tapped into the water to fill a massive cistern that is carved into the rock wall, high up in the cave. This gives me a huge water supply (over 2,000 gallons) and also creates the gravity-flow water pressure for the sinks, hot tub, showers and, my favorite, the soaking tub.
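If you are wondering how much pressure a gravity-fed cistern like that can deliver, it is simple static head: every foot of elevation between the water surface and the tap adds about 0.433 psi. Here is a minimal Python sketch of the calculation; the head heights are assumed figures for illustration only, since I never gave the cistern's exact height:

```python
# Gravity-fed water pressure: static head pressure is P = rho * g * h,
# which for fresh water works out to about 0.433 psi per foot of elevation
# (62.4 lb/ft^3 spread over the 144 in^2 in a square foot).
def head_pressure_psi(head_ft: float) -> float:
    return 0.433 * head_ft

# Assumed cistern heights, just to show the scale of the effect:
for h in (20, 40, 60):
    print(f"{h} ft of head -> {head_pressure_psi(h):4.1f} psi at the tap")
```

For comparison, typical municipal water systems run around 40 to 80 psi, so even a modest elevation difference inside a cave yields a perfectly usable shower.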

The water actually tastes a little weird, so for drinking and cooking I run it through a bubbling ionizer that passes a constant flow of negative-ion air through the water. This has a great filtering effect as well as purifying it of bacteria and other contaminants. There is also a larger version of the ionizer in the cistern.

The hot tub and the soaking tub are carved out of the stone, but I bored and drilled into both to give me bubblers and water jet outlets. I've been using both for about 18 years now and love them, but recently I found out they may be better than I ever thought they could be.

Others have always thought I was younger than my real age. I always assumed it was just luck and good genes, but about a year ago a doctor told me that I was way past just being young looking. I had the skin and organ function of a man 30 or more years younger than my real age. I feel fine for someone who was in college when Kennedy got shot, so I figured he was just being kind, but he wanted to run some tests. He did, and came back and said that I was a real medical miracle and that he wanted to do a paper on me. I said I'd cooperate, but I did not want to be identified in the study. He agreed.

I won't bore you with the details of the tests and results, but suffice it to say that I was the source of a lot of interest in a relatively narrow medical field that focuses on longevity and life-span studies. After testing me for 6 months and then coming to my two residences and testing everything I eat and touch, I got the report last week. It seems that my two tubs and the spring water at the cave are partially the cause of my good fortune. The water has a very high content of magnesium bicarbonate. Just since 2002, there has been a lot of interest in magnesium bicarbonate, following a study done in Australia of cows and sheep that were living 30% to 45% longer than normal and were able to continue to have normal offspring, even into their advanced years. After a two-year study, it was determined that the cause was their water, which was high in magnesium bicarbonate. Look it up; you'll see that there is now a commercial company selling the water from the ranch where those cows had been drinking.

I have not only been drinking this water for the past 18 years but have been bathing and soaking in it on a regular basis. I seem to have lucked out and as a result may end up living to be a lot older than I ever expected to. I think this is a good thing.

Run Silent, Fast and Undetectable – US Navy Submarines

The 1958 movie "Run Silent, Run Deep" (from the 1955 novel) was about the US Navy's submarine service in WWII. Our subs today are quite different, and as you will see, a new movie might be named "Run Silent, Run Fast, Invisibly". Subs today go faster than you would imagine, quieter than anyone thought possible and – thanks to a contribution I made 15 years ago – they are now almost invisible. My small part had to do with the stealth aspects of subs. The exact nature of stealth technology is a secret and I, for one, will not give it away, but I can tell you something about it. But first, I have to explain a little about the technology.

Imagine you have a very large bundle of sewing needles – tens of thousands of them. Now imagine you can set them all upright, with their pointy ends up and pushed together as close as they can get. If you then looked down on those pointy ends, they would look very black. The reason is that the light striking the sides near the points reflects inward and keeps reflecting as it bounces further and further down into the mass of needles. Officially, "the angle of incidence (the incoming light) equals the angle of reflection (the bounced light)." With each reflection, a little of the light energy is absorbed and converted to heat. Because of the shape and angle of the needles, the light never reflects back outward, making the bundle appear totally black. In physics, this is called a "black body".
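A few lines of Python show how quickly repeated bounces eat up the light. This is a minimal sketch; the 4% absorption per bounce is an assumed illustrative number, not a measured property of any material:

```python
# Each bounce keeps (1 - a) of the light, so after n bounces the
# remaining fraction is (1 - a)**n. The needle geometry forces many
# bounces before any ray can escape back out of the bundle.
def remaining_energy(absorb_per_bounce: float, bounces: int) -> float:
    return (1.0 - absorb_per_bounce) ** bounces

for n in (1, 10, 50, 100):
    print(f"{n:3d} bounces: {remaining_energy(0.04, n):6.1%} of the light left")
```

After a hundred bounces, less than 2% of the light is left to find its way out, which is why the bundle reads as black.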

This is essentially what stealth technology is like, only at a microscopic scale. Aircraft are painted with a special kind of paint that has tiny but densely packed little pointy surfaces that act just like those needles. When radar hits the aircraft, the paint absorbs nearly all of the radar's energy and lets almost none reflect back to the enemy receiver. When no radar reflection is seen, it is assumed that there is nothing out there to be seen.

Sonar for subs works pretty much the same as radar, but instead of radio frequency (RF) energy, it uses sound. Sound is emitted and a reflection is heard. This is called active sonar. Because subs are relatively noisy in the water, it is also possible to just listen for their noise and then figure out what direction it is coming from. That is called passive sonar. The props, the engine and just the water rushing over and around the sub all make noise. The faster the sub goes, the louder these noises become.
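The ranging arithmetic behind active sonar is simple: sound travels at roughly 1,500 m/s in seawater, so the target's range is half the round-trip echo time multiplied by the speed of sound. A minimal sketch, with a made-up echo delay:

```python
# Active sonar ranging: emit a ping, time the echo, halve the round trip.
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s, a typical approximate value

def range_from_echo(round_trip_s: float) -> float:
    return SPEED_OF_SOUND_SEAWATER * round_trip_s / 2.0

print(f"{range_from_echo(4.0):.0f} m")  # a 4-second echo puts the target ~3,000 m out
```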

Despite their best efforts at sub design, even our subs create some sounds, and they are, of course, going to reflect an active sonar ping when one is used. However, the US is the world's best at creating very quiet subs. It is mostly because of the secret design of the props, which are able to turn fast without creating cavitation – which makes a lot of noise underwater. Flush-mounted hatches and even screw heads also make our subs quiet. In the 1960s and '70s, going over 15 knots under water was like screaming, "Here I am!" In the 1980s and early 1990s, we could go up to 25 knots in relative silence. The latest subs – built or being built – can go over 35 knots and still remain mostly quiet.

That means that the enemy has to use active sonar to try to find them and that gives away the enemy’s position. At that point, they become easy targets.

Pushing a 400-foot-long sub underwater at 35 knots is no easy chore, but thanks to some amazing designs in the hull shape, the power plant and the props, that is nowhere near the limit of the potential speed. Our subs could do as much as 85 knots underwater (that's nearly 100 MPH!), but they would sound like a freight train and would create a wake large enough to be visible from space. Since stealth is the primary tactic of subs, that kind of speed was simply not reasonable... until now.

While I was at NRL, I presented a paper on how to create a totally quiet sub. Even if it were producing a lot of mechanical or hydrodynamic noise, my method would make it totally silent. More importantly, it would also completely hide the sub, even from active sonar pings.

The advantages of this are significant. In a combat environment, going slow to keep the sub quiet makes it linger in dangerous areas longer, but going fast makes it easier to locate and track. Being able to launch weapons and then move very fast out of the area in total silence – even to active sonar – would be a game changer for submarine warfare.

Since I was a former pilot and worked in an entirely different department from the sub guys, the first reaction to my suggestion was, "Yeah, right – a flyboy is going to tell us how to make a sub quiet." That was back in 1998. I recently found out that the latest sub, the Virginia, SSN-774, incorporates my design in an applied active acoustic quieting system that they now call Seawolf. When I contacted some old NRL friends and asked them about it, they were reluctant to talk until I started quoting my research paper from 1998. They said, "YOU wrote that paper!" Then they began to tell me the whole story.

It seems that my paper sat in files for six months before it was read by someone who understood it and recognized what it could do. After a few preliminary computer models and some lab-scale experiments, they were able to get funding for some major research, and within three months they were proposing to incorporate the idea into the next class of subs. That was in early 2000. It was decided to incorporate the design into the last sub in the Seawolf class – SSN-23, the USS Jimmy Carter. It proved to be effective, but the SSN-23 was mostly a test bed for further development, and a modified design was planned for the next class – the Virginia. After seeing how effective it was, the rest of the Seawolf class was cancelled so that all the efforts could be put into the Virginia class with this new technology. My design was improved, named after the Seawolf class where its design was finalized, and retrofitted into the Virginia before the boat was turned over to the Navy in 2004.

Soon after this discussion, I was invited to a party with the sub guys down near Groton. Since I was still at my Vermont residence, I figured, why not – I could go to my Canada residence right after the party. So last September, I flew my plane down to Elizabeth Field at the southwest end of Fishers Island. The sub guys from Groton had a nice retreat on the island at the end of Equestrian Ave. After I arrived, I was shown to a room upstairs in the large house and told to meet in the Great Room at 5 PM. When I went down to the party, I got a huge surprise: I was the guest of honor and the party was being thrown for me.

It seems that they had lost the cover page of my original 1998 research paper and never knew who wrote it. Several people at NRL had suggested that it was mine, but the sub community was sure it had to have come from one of their own, though they could never find the right one. When I sent someone a copy I had kept, they determined that I was the original author and deserved the recognition. It seems that my idea has actually been a game changer for the entire submarine warfare community in tactics and strategy as well as hull design, combat operations, even weapons design. I was apparently quite a hero and did not even know it.

I enjoyed the party and met a lot of my old NRL buddies who were now admirals, owners of major corporations or renowned research scientists within select circles of mostly classified technologies. I got a lot of details about how they had implemented my idea and about some of the mostly unexpected side benefits that I had suggested might be possible in my paper. It was humbling and almost embarrassing to be honored for an idea that was now 15 years old and was mostly refined and developed by a host of other researchers. I began looking forward to getting on to my Canada retreat.

Two days later, I flew out for BC, Canada with a large handful of new contacts, renewed old contacts and lots of new ideas and details of new technologies that were being developed. I also ended up receiving several offers to do some research and computer modeling for some problems and developing technologies that some of the partygoers needed help on. I’ll probably end up with a sizable income for the next few years as a result of that party.

I suppose you're interested in exactly what this fantastic technology was that I designed way back in 1998 and that has proved so popular in 2012. It was actually a pretty simple concept. Most techies have heard of noise-canceling headphones. They work by sensing a noise and then recreating an identical sounding noise with a phase shift of 180 degrees. When a sound is blended with the same sound phase-shifted by 180 degrees, what you get is total silence. This works very well in the confined and controlled environment of a headphone, but it was thought to be impossible to recreate in an open environment of air or water. I simply created a computer model that used a Monte Carlo iterative algorithm to quantify the location, intensity, lag time and other parameters for an optimum installation on a sub. It took the supercomputers at NRL several hours to refine a design of placement, power, sensors and other hardware and temporal design aspects, but when it was done, I was surprised at the degree of efficiency that was theoretically possible to achieve. I wrote all this into my 1998 paper, mostly in the hope that my computer model would be used and I could get another project funded.
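The underlying principle is easy to demonstrate. Below is a minimal Python sketch of 180-degree phase cancellation – the headphone trick, not the classified SAQ implementation – and it also shows why the lag-time parameters in the model mattered so much: even a 20-microsecond error in the canceling wave leaves a residue.

```python
# A wave summed with its exact inverted copy nulls out completely;
# a slightly mistimed copy leaves a residual signal.
import numpy as np

fs = 48_000                     # sample rate, Hz
t = np.arange(fs) / fs          # one second of samples
noise = np.sin(2 * np.pi * 440 * t)            # the offending tone

anti = -noise                                  # perfect 180-degree inversion
late = -np.sin(2 * np.pi * 440 * (t - 20e-6))  # inversion arriving 20 us late

print(np.max(np.abs(noise + anti)))   # 0.0   -- total silence
print(np.max(np.abs(noise + late)))   # ~0.06 -- residual from the timing error
```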

My paper included a reference to where and how to run my model on the NRL computers, and eventually it was used as the primary design optimization tool for what would later be called the Seawolf Acoustic Quieting and Stealth System (SAQ-SS). The actual modeling software I created and left on the NRL computers began being called SAQ, which got pronounced "SACK". As it developed and more people saw the side benefits on the whole stealth effect, it was called SAQ-SS, which evolved into "SACKSS", then into SAQ-SaS and "SACK-SAS", and was eventually just called the "SUCCESS" system.

Those side benefits I keep referring to are worth mentioning. When an active sonar ping or sound wave front is detected by the SAQ system, it activates a hull-mounted sound modulator that causes the hull itself to act as a giant speaker or transducer, initiating a response waveform that is 180 degrees out of phase with the incoming sound. This effectively nulls out the sound. The same happens for sounds created by the mechanics of the sub that pass by conduction to the hull. In this case, the hull is modulated so that it completely absorbs any sounds that might otherwise pass through it to the water.

Another side benefit is that the SAQ system created the opportunity, for the first time, for the sub commander to "see" directly behind his own sub. In the past, because of the noise from the prop, the engine, and the distortion of the water in the prop wash, the rear of the sub was a blind spot. To see back there, the sub had to make wide, slow turns to the left and right, or it had to drag a towed array – sort of a remote sonar – on a cable behind the sub. Despite having rear-facing torpedo tubes, the sub could not effectively use active or passive sonar for about 30 degrees astern. This, of course, was the approach taken by any hunter-killer sub that wanted a sure-fire launch at another sub. Because of the hull nullification and the ambient noise cancellation of the SAQ system, the aft-facing sensors and sonars are now very effective at both detection and fire control for torpedo launch. There is still some loss of resolution compared with other directions, due to the water disturbance in the prop wash, but a good sonar operator can compensate.

The final side benefit of the SAQ system is that, for the first time, it allows a sub to travel as fast as it is capable of going, even in a confined combat environment, without being detected. It was this benefit that led to the immediate cancellation of the remaining Seawolf class subs in order to go directly to the Virginia class. Using a design similar to a jet ski engine, called a propulsor or jet pump, the Virginia is capable of speeds far in excess of any of its predecessors. Despite very high speeds, the Virginia class will be undetectable by sonar – allowing it to move as fast as the engine can push it. Exact speeds and depth limits of all US Navy subs are highly classified, but prototype tests on the USS Jimmy Carter reached 67 knots, or about 77 MPH, and that was before enhancements and design changes were made. My guess is that the SSN-774 and its sister boats will be able to exceed 90 MPH when fully submerged – perhaps over 100 MPH.

The current version of the SAQ system is so effective that when it was tested in war games against surface ships and P-3 ASW aircraft, it created a huge argument that had to be resolved by the Chief of Naval Operations (CNO). The USS Virginia was able to simulate the kill of all 19 ships in the exercise without being detected by any of them or by any ASW aircraft or helo. The squadron commanders of the S-3 and P-3 aircraft and the captains of the ASW destroyers filed formal complaints against the sub commanders for cheating during the exercise. They claimed that there was no sub anywhere in the exercise area and that the simulated kills were all the result of cheating in the exercise computer models. The fighting between the aircraft, surface and sub communities was so fierce that the CNO had to call a major conference to calm everyone down and explain how the exercise went the way it did.

I am pleased that my idea from 15 years ago was eventually found to be valid and that I have contributed in some manner to our security and ability to meet any threat.

The Aurora Exists but It's Not What You Think

The Aurora is the new jet that people have been saying is the replacement for the SR-71. It is real, but it isn't what you'd think it is. First, a little history.

The U-2 spy plane was essentially a jet-powered glider. It had very long wings and a narrow body that could provide lift with relatively little power. It used the jet engine to take it very high into the air and then would throttle back to near idle and stay aloft for hours. The large wings were able to get enough lift in the high, thin air of the upper atmosphere partly because it was a very lightweight plane for its size. Back in the early '60s, being high was enough protection but still allowed the relatively low-resolution spy cameras to take good photos of the bad guys.

When Gary Powers' U-2 got shot down, it was because the Soviets had improved their missile technology in both targeting and range – and because we gave the Russians details about the flight, but that is another story. The US stopped the U-2 flights but immediately began working on a replacement. Since sheer altitude was no longer a defense, they opted for speed, and the SR-71 was born. Technically, the SR-71 (Blackbird) was not faster than the missiles, but because of its speed (about Mach 3.5) and its early attempt at stealth design, by the time the enemy had spotted the spy plane and coordinated with a missile launch facility, it was out of range of the missiles.

The CIA and the Air Force used the Blackbird until the early 1980s, when it was retired from spying and used only for research. At the time, the official word for why it was retired was that satellite and photographic technology had advanced to the point of not needing it any more. That is only partially correct. A much more important reason is that the Russians had new missiles that could shoot down the SR-71. By this time, Gorbachev was trying to mend relations with the West and move Russia into a more internationally competitive position, so he openly told Reagan that he had the ability to shoot down the SR-71 before he actually tried to do it. Reagan balked, so Gorbachev conducted a "military exercise" in the spring of 1981 in which the Russians made sure that the US was monitoring one of their old low-orbit satellites, and then, during a phone call to Reagan, the satellite was "disabled" – explosively.

At the time it was not immediately clear how they had done it, but it wasn't long before the full details were known. A modified A-60 aircraft, code named "SOKOL-ESHELON" (which translates to "Falcon Echelon"), flying out of Beriev airfield at Taganrog, had shot down the satellite with an airborne laser. When Reagan found out the details, he ordered the Blackbird spy missions to stop, but he demanded that Gorbachev give him some assurance that the A-60 would not be developed into an offensive weapon. Gorbachev arranged for an "accident" in which the only operational A-60 was destroyed by a fire; the prototype and test versions were mothballed and never flew again.

The spy community – both the CIA and DoD – did not want to be without a manned spy capability, so they almost immediately began researching a replacement. In the meantime, the B-1, B-2 and F-117 stealth aircraft were refined and stealth technology was honed to near perfection. The ideal spy aircraft would be able to fly faster than the SR-71, higher than the U-2 and be more invisible than the F-117, but it also had to have a much longer loiter time over the targets or it would not be any better than a satellite.

These three requirements were seen as mutually exclusive for a long time. The introduction and popularity of unmanned autonomous vehicles also slowed progress, but both the CIA and DoD wanted a manned spy plane. The CIA wanted it to be able to carry more sophisticated equipment for the complex monitoring of a dynamic spy situation. DoD wanted it to be able to reliably identify targets and then launch and guide a weapon for precision strikes. For the past 30 years, they have been working on a solution.

They did create the Aurora, which uses the most advanced stealth technology along with the latest in propulsion. This, at least, satisfied two of the ideal spy plane requirements. It started with a very stealthy delta-wing design using an improved version of the SR-71 engines, giving it a top speed of about Mach 4.5 and a ceiling of over 80,000 feet, but that was seen as still too vulnerable. In 2004, following the successful test of NASA's X-43 scramjet reaching Mach 9.6 (about 6,800 MPH), DoD decided to put a scramjet on the Aurora. Boeing had heard that DoD was looking for a fast spy jet and attempted to bust into the program with its X-51A, but DoD wanted to keep the whole development secret, so they dismissed Boeing and pretended there was no such interest in that kind of aircraft. Boeing has been an excluded outsider ever since.

In 2007, DARPA was testing a Mach 10 prototype called the HyShot, which actually was the test bed for the engine planned for the Aurora. It turns out that there were a lot of technological problems to overcome, which made it hard to settle on a working design in the post-2008 crashed economy, with competition from the UAVs, while also trying to keep the whole development secret. They needed to get more money and find somewhere to test that was not being watched by a bunch of space cadets with tin-foil hats that have nothing better to do than hang around Area 51, Vandenberg and Nellis.

DoD solved some of these issues by bringing in some resources from the Australians and getting NASA to foot some of the funding. This led to the flight tests of the HIFiRE in 2009 and 2010 out of the Woomera Test Range in the outback of South Australia. The HIFiRE achieved just over 9,000 MPH, but it also tested a new fuel control system that was essentially the last barrier to production in the Aurora. They used a pulsed laser to ignite the fuel while maintaining the hypersonic flow of the air-fuel mixture. They also tested the use of high-velocity jets of compressed gas into the scramjet to get it started. These two innovations allowed the transition from the two conventional jet engines to the single scramjet engine to occur at a lower speed (below Mach 5) while also making the combination more efficient at very high altitudes. By late 2010, the Aurora was testing the new engines at the Woomera Test Range and making flights in the 8,000 to 9,700 MPH range.

During this same period, the stealth technology was refined to the point that the Aurora has an RCS (radar cross-section) of much less than 1 square foot. This means it has about the radar image of a can of soda, which is way below the threshold of detection and identification of most radars today. It can fly directly into a radar-saturated airspace and not be detected. Because of its altitude and speed and the nature of the scramjet, it has an undetectable infrared signature, and it is too high to hear. It is, for all intents and purposes, invisible.

This solved two of the three spy plane criteria, but they still had not achieved a long loiter time. Although the scramjet is relatively fuel efficient, it really is only useful for getting to and from the surveillance site. Once over the spy area, the best strategy is to fly as slowly as possible. Unfortunately, wings that can fly at Mach 10 to Mach 12 cannot support the aircraft at much slower speeds – especially in the thin air at 80,000 feet.

Here is where the big surprise pops up. Thanks to the guys at NRL and a small contribution I made to a computer model, the extended loiter time problem is something they began working on back in 2007. It started when they retrofitted the HyShot engine into the Aurora; NRL then convinced the DARPA program manager to also retrofit the delta wings of the Aurora with a swing capability, similar to the F-14 Tomcat's. The result would be a wing that expands like a folding Japanese fan. In fast-flight mode, the wing would be tucked into the fuselage, making the aircraft look like the long tapered blade of a stiletto knife. In slow-flight mode, the wings would fan out to wider than an equilateral triangle, with a much larger wing surface area.

As with any wing, it is a compromise between flying fast and flying slow. The swing wing gave the Aurora a range increase from reduced drag while using the scramjet. It also allowed the wing loading to be spread slightly, giving it more lift at slower speeds and in thinner air. However, most of the engineers on the project agreed that these gains were relatively minor and not worth the added cost in building and maintenance. This was not a trivial decision, as it also added weight and took up valuable space in the fuselage that was needed for the modified scramjet and added fuel storage. Outside of NRL, only two people were told why they needed to do this wing modification and how it could be done. Those two were enough to get the funding, and NRL won approval to do it.

What NRL had figured out was how to increase lift on the extended wing by a factor of 10 or more over a conventional wing. This was such a huge increase that the aircraft could shut off its scramjet and run one or both of its conventional jet engines at low idle speeds and still stay aloft – even at extreme altitudes. Normally, this would require a major change in wing shape and size to radically alter the airfoil's coefficient of lift, but then the wing would be nearly useless for flying fast. A wing made to fold from one type (fast) to another (slow) would also be too complex and heavy to use in a long-range recon role. The solution NRL came up with was ingenious, and it turns out it partly used a technology that I had worked on earlier when I was at NRL.

They designed a series of bladders and chambers in the leading edge of the wing that could be selectively expanded by pumping in hydraulic fluid, altering the shape of the wing from a nearly symmetric foil to a high-lift cambered foil. More importantly, it also allowed for a change in the angle of attack (AoA) and, therefore, the coefficient of lift. They could achieve an AoA change without altering the orientation of the entire aircraft – this kept drag very low. This worked well and would have been enough at lower altitudes, but at 80,000+ feet, the partial vacuum created by the wing is weakened by the thin air. To solve that, they devised a way to create a much more powerful vacuum above the wing.
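Classical thin-airfoil theory gives a feel for why camber-changing bladders are so attractive: the section lift coefficient is roughly CL = 2π(α − α0), where α is the angle of attack and α0 is the zero-lift angle that camber controls. The numbers in this minimal sketch are illustrative assumptions, not the Aurora's actual figures:

```python
# Thin-airfoil approximation: CL = 2*pi*(alpha - alpha_L0), angles in radians.
import math

def thin_airfoil_cl(alpha_deg: float, alpha_zero_lift_deg: float) -> float:
    return 2.0 * math.pi * math.radians(alpha_deg - alpha_zero_lift_deg)

# Nearly symmetric foil: zero-lift angle is about 0 degrees.
print(thin_airfoil_cl(3.0, 0.0))   # ~0.33
# Same aircraft attitude, but bladders add camber (alpha_L0 = -6 deg, assumed):
print(thin_airfoil_cl(3.0, -6.0))  # ~0.99 -- triple the lift with no change
                                   # in the aircraft's orientation
```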

When they installed the swing wing, they also added some plumbing between the engines and the wing's suction surface (the upper surface, at the point of greatest thickness). This plumbing consisted of very small and lightweight tubing that mixes methane and other gases from an on-board cylinder with superheated, pressurized jet fuel to create a highly volatile mix, which is then fed to special diffusion nozzles strategically placed on the upper wing surface. The nozzles atomize the mixture into a fine mist and spray it under high pressure into the air above the wing. The nozzles and the pumped fuel mixture are timed to stagger in a checkerboard pattern over the surface of the wing. This design causes the gas to spread in an even layer across the length of the wing, but only for about 2 or 3 inches above the surface.

A tiny spark igniter near each nozzle causes the fuel to burn in carefully timed bursts. The gas mixture is specially designed to rapidly consume the air as it burns – creating a very strong vacuum. While the vacuum peaks at one set of nozzles, another set is fired. The effect is a little like a pulse jet in that it works in a rapid series of squirt-burn-squirt-burn explosions, but they occur so fast that they blend together, creating an even distribution of enhanced vacuum across the wing.

You would think that traveling at high Mach speeds would simply blow the fuel off the wing before it could have any vacuum effect. Surprisingly, this is not the case. Due to something called the laminar air flow effect, the relative speed of the air moving above the wing gets slower and slower as you get closer to the wing. This is due to the friction of the wing-air interface and results in remarkably slow relative air movement within 1 to 3 inches of the wing. This trick of physics was known as far back as WWII, when crew members on B-29s, flying at 270 knots, would stick their heads out of a hatch and scan for enemy fighters with binoculars. If they kept within about 4 or 5 inches of the outer fuselage surface, the only effect was that they would get their hair blown around. The effect on the Aurora was to keep the high vacuum in close contact with the optimum lifting surface of the wing.
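This is the boundary layer, and a textbook parabolic approximation of a laminar profile, u/U = 2(y/δ) − (y/δ)², shows how steep the speed gradient is. In this minimal sketch, the freestream speed matches the B-29 example above, but the layer thickness is an assumed value for illustration:

```python
# Approximate laminar boundary-layer profile: u/U = 2*eta - eta**2,
# where eta = y/delta is the fractional height inside the layer.
def relative_speed_mph(y_in: float, delta_in: float, freestream_mph: float) -> float:
    eta = min(y_in / delta_in, 1.0)  # cap at the boundary-layer edge
    return freestream_mph * (2.0 * eta - eta * eta)

FREESTREAM_MPH = 310.0  # ~270 knots, as in the B-29 story above
DELTA_IN = 12.0         # assumed boundary-layer thickness, inches

for y in (0.5, 1.0, 3.0, 6.0, 12.0):
    print(f"{y:4.1f} in above the skin: "
          f"{relative_speed_mph(y, DELTA_IN, FREESTREAM_MPH):6.1f} MPH")
```

Close to the skin the relative wind drops to a small fraction of the freestream, which is exactly the effect the burn nozzles exploit.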

Normally, the combination of wing shape and angle of attack creates a pressure differential above and below the wing of only 3 to 5 percent. The entire NRL design creates a pressure differential of more than 35% and a coefficient of lift that is controllable between 0.87 and 9.7. This means that with the delta wing fully extended, the wing-shape bladders altering the angle of attack, and the wing-surface burn nozzles changing the lift coefficient, the Aurora can fly at speeds as low as 45 to 75 MPH without stalling – even at very high altitudes.
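You can see what a lift coefficient of 9.7 buys by rearranging the standard lift equation, L = ½ρv²SCL, for the minimum flying speed: v = √(2W/(ρSCL)). The weight, wing area and air density in this sketch are assumptions for illustration; the CL range is the one quoted above:

```python
# Minimum (stall) speed from the lift equation: v = sqrt(2*W / (rho * S * CL)).
import math

def stall_speed_ms(weight_n: float, rho: float, area_m2: float, cl: float) -> float:
    return math.sqrt(2.0 * weight_n / (rho * area_m2 * cl))

W = 250_000.0  # aircraft weight, N (assumed)
S = 300.0      # extended fan-wing area, m^2 (assumed)
RHO = 0.12     # high-altitude air density, kg/m^3 (assumed)

for cl in (0.87, 9.7):
    v = stall_speed_ms(W, RHO, S, cl)
    print(f"CL = {cl:4.2f}: minimum speed ~{v * 2.237:5.0f} MPH")

# Independent of the assumed numbers, v scales as 1/sqrt(CL), so going
# from CL 0.87 to 9.7 cuts the minimum flying speed by sqrt(9.7/0.87) ~ 3.3x.
```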

At the same time, it is capable of reducing the angle of attack, reshaping the wing into a very thin, nearly symmetric foil, and then sweeping the delta wing into a small fraction of its extended size so that it can achieve Mach 15 under scramjet power. For landing, takeoff and subsonic flight, it can adjust the wing for optimum fuel or performance efficiency while using the conventional jet engines.

My cohorts at NRL tell me that the new version of the Aurora is now making flights from the Woomera Test Range in the outback of South Australia to Johnston Atoll (the newest test flight center for black-ops aircraft and ships) – a distance of 5,048 miles – in just over 57 minutes, an average of more than 5,300 MPH including the relatively slow-speed climb to 65,000 feet. The Aurora then orbited over Johnston Atoll for 5½ hours before flying back to Woomera. In another test, the Aurora left Woomera loaded with fuel and a smart bomb. It flew to Johnston Atoll and orbited for 7 hours before a drone target ship was sent out from shore. It was spotted by the Aurora pilot, bombed with the laser-guided bomb, and then the pilot returned to Woomera.

I was also told that at least three of the precision strikes on Al Qaeda hideouts were, in fact, carried out by the Aurora and then credited to a UAV in order to maintain the cover.

The Aurora is the fastest, the slowest and the highest-flying spy aircraft ever made, and if the pilots don't make a mistake, you may never see it.

Untethered Planets Are Not What They Seem


Two seemingly unrelated recent discoveries were analyzed by a group at NASA with some surprising and disturbing implications.  These discoveries came from a new trend in astronomy and cosmology of looking at “voids”.

The trend is to look at areas of the sky that appear to have nothing in them. This is being done for three reasons.


(1) In 2009, the Hubble was trained on what was thought to be an empty hole in space in which no objects had ever been observed. The picture used the recently improved Wide Field and Planetary Camera #2 to make a Deep Field image. The image covered 2.5 arc minutes – the width of a tennis ball as seen from 100 meters away (a quick arithmetic check of that analogy appears after this list). The 140.2-hour exposure resulted in an image containing more than 3,000 distinct galaxies at distances going out to 12.3 billion light years. All but three of these were unknown before the picture was taken. This was such an amazing revelation that this one picture has its own Wikipedia page (Hubble Deep Field), and it altered our thinking for years to come.


(2) The second reason is that this image, and every other image or closer examination of a void, has produced new and profound discoveries. Observations using radio frequencies, infrared, UV and every other wavelength for which we have cameras, filters and sensors have resulted in new findings every time they are aimed at "voids".


(3) In general, the fields of astronomy and cosmology have been getting crowded with many more researchers than there are telescopes and labs to support them.  Hundreds of scientists in these fields do nothing but comb through the images and data of past collections to find something worth studying.  Much of that data has been reexamined hundreds of times and there is very little left to discover about it.  The new data from these examinations of voids has created a whole new set of raw data that can be examined from dozens of different perspectives to find something that all these extra scientists can use to make a name for themselves.
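About that tennis-ball analogy in reason (1): the small-angle formula θ ≈ d/L makes it easy to check. A minimal sketch, assuming a standard 6.7 cm tennis ball:

```python
# Angular size of a tennis ball at 100 m, in arc minutes.
import math

d = 0.067  # ball diameter, meters (standard tennis ball, assumed)
L = 100.0  # distance, meters

theta_arcmin = math.degrees(d / L) * 60
print(f"{theta_arcmin:.1f} arc minutes")  # ~2.3, close to the 2.5 quoted above
```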


To that end, Takahiro Sumi and his team at Osaka University recently examined one of these voids and found 10 Jupiter-sized planets, but the remarkable aspect is that these planets were "untethered" to any star or solar system. They were not orbiting anything. In fact, they seem to be moving in random directions at relatively high speeds, and 8 of the 10 are actually accelerating. Takahiro Sumi speculates that these planets might be the result of a star that exploded or collided, but that is just a guess.


In an unrelated study at the radio telescope array in New Mexico, Albert Swenson and Edward Pillard announced that they had found a number of anomalous RF and infrared emissions coming from several areas of space that fall into the category of voids. One of the void areas with the strongest signals was the same area that Takahiro Sumi had studied. Their study was unique because they cross-indexed a number of different wavelength measurements of the same area and found very weak, moving points of infrared emission that also appeared to be stronger sources of RF emission, with an unidentified energy emission in the 1.5 to 3.8 MHz region. This study produced a great deal of measurement data but made very few conclusions about what it all meant.


The abundance of raw data was ripe for one of those many extra grad students and scientists to examine and correlate to something. The first to do so was Eric Vindin, a grad student doing his doctoral thesis on the arctic aurora. He was examining something called the MF bursts in the auroral roar – an attempt to find the explicit cause of certain kinds of auroral emissions. What he kept coming back to was a high-frequency component present in the spectrograms of magnetic field fluctuations that were themselves expressed at significantly lower frequencies. Here is part of his conclusion:


“There is evidence that such waves are trapped in density enhancements in both direct measurements of upper hybrid waves and in ground-level measurements of the auroral roar for an unknown fine frequency structure which qualitatively matches and precedes the generation of discrete eigenmodes when the Z-mode maser acts in an inhomogeneous plasma characterized by field-aligned density irregularities.  Quantitative comparison of the discrete eigenmodes and the fine frequency structure is still lacking.”


To translate that for real people: Vindin is saying that he found a highly modulated high-frequency (HF) signal (what he called a "fine frequency structure") embedded in the magnetic field fluctuations that make up and cause the background emissions we know as the Auroral Kilometric Radiation (AKR). He could cross-index these modulations of the HF RF to changes in the magnetic field on a gross scale, but he was not able to identify the exact nature or source of these higher frequencies. He did rule out the HF RF coming from Earth or its atmosphere, and he found that the emissions fell in the range from 1.5 to 3.8 MHz. Vindin also noted that the HF RF emissions were very low power compared to the AKR and occurred slightly in advance of changes in the AKR. His study, published in April 2011, won him his doctorate and a job at JPL in July of 2011.


Vindin did not extrapolate his findings into a theory or even a conclusion, but the obvious implication is that these very weak HF RF emissions are causing the very large magnetic field changes in the AKR. If that is true, then it is a cause and effect that has no known correlation in any other theory, experiment or observation.


Now we come back to NASA and two teams of analysts, led by Yui Chiu and Mather Schulz, working as hired consultants to the Deep Space Mission Systems (DSMS) within the Interplanetary Network Directorate (IND) of JPL. Chiu's first involvement was to publish a paper critical of Eric Vindin's work. He went to great effort to point out that the relatively low frequency range of 1.5 to 3.8 MHz is so low in energy that it is highly unlikely to have extraterrestrial origins, and it is even more unlikely that it would have any effect on the earth's magnetic field. This was backed by a lot of math and physics showing that such a low frequency could not travel from outside the earth and still have enough energy to do anything – much less alter a magnetic field. He argued that there is no known science that would explain how an RF emission could alter a magnetic field at all. Chiu pointed out that NASA uses UHF and SHF frequencies with narrow-beam antennas and extremely slow modulations to communicate with satellites and space vehicles because it takes the higher energy in those much higher frequencies to travel the vast distances of space. It also takes very slow modulations to send any reliable intelligence on those frequencies; that is why it often takes several days to send a single high-resolution picture from a space probe. Chiu also noted that the received energies from our planetary vehicles are about as strong as a cell phone transmitting from 475 miles away – a power level in the nanowatt range. Unless the HF RF signal originated from an unknown satellite, it could not have come from some distant source in space.

  

The motivation for Chiu’s paper appears to have been a professional disagreement with Vindin shortly after Vindin came to work at JPL.  In October of 2011, Vindin published a second paper about his earlier study in which he addressed most of Chiu’s criticisms.  He was able to show that the HF RF signal was received by a polar-orbiting satellite before it was detected at an earth-bound antenna array.  The antenna he was using was a modified facility that was once part of the Distant Early Warning (DEW) Line – massive (200-foot-high) movable dish antennas installed in Alaska.  The DEW Line signals preceded but appeared to be synchronized with the aurora field changes.  This effectively proved that the signal was extraterrestrial.

  

Vindin also tried to address the nature of the HF RF signal and its modulations.  What he described was a very unusual kind of signal that the military has been playing with for years.

  

In order to reduce the possibility of a radio signal being intercepted, the military uses something called “frequency agility”.  This technique breaks the signal being sent into hundreds of pieces per second and transmits each piece on a different frequency.  The transmitter and receiver are synchronized so that the receiver jumps its tuning to match the transmitter’s changes in transmission frequency.  The jumps look random but actually follow a coded algorithm.  Someone listening on any one frequency hears only background noise with very minor and meaningless blips, clicks and pops, and because a listener has no way of knowing where the next bit of the signal will be transmitted, it is practically impossible to tune a receiver fast enough to intercept these transmissions.  Frequency-agile systems are actually in common use – you can even buy cordless phones that use this technique.
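To make the idea concrete, here is a toy sketch (mine, not any military system) of how two radios can hop in lockstep using nothing more than a shared seed.  The 1.5 to 3.8 MHz band is borrowed from Vindin’s observations; everything else is invented for illustration:

```python
import random

# Toy sketch of frequency hopping ("frequency agility"): transmitter and
# receiver derive the same pseudo-random channel sequence from a shared
# seed, so the receiver retunes in lockstep while an eavesdropper on any
# single channel hears only an occasional meaningless blip.

SHARED_SEED = 0xC0FFEE                # secret key both radios hold (made up)
CHANNELS_KHZ = range(1500, 3801, 5)   # 1.5 to 3.8 MHz in 5 kHz steps

def hop_sequence(seed, hops):
    rng = random.Random(seed)         # deterministic PRNG: same output at both ends
    return [rng.choice(CHANNELS_KHZ) for _ in range(hops)]

message = "HELLO"
tx = hop_sequence(SHARED_SEED, len(message))
rx = hop_sequence(SHARED_SEED, len(message))

for symbol, f_tx, f_rx in zip(message, tx, rx):
    assert f_tx == f_rx               # both ends jumped to the same channel
    print(f"symbol {symbol!r} sent and received on {f_tx} kHz")
```

A real system hops hundreds of times per second and keys the sequence cryptographically, but the synchronization trick is exactly this.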

  

As complex as frequency agility is, there are very advanced, very wide-band receivers and computer processors that can reconstruct an intelligent signal out of the chopped-up emission.  For that reason, the military has been working on the next version of agility.

  

In a much more recent and much more complicated use of frequency agility, they are attempting to combine it with agile modulation.  This method breaks up both the frequency and the modulation of the transmission’s signal intelligence into agile components.  The agile frequency modulation (FM) shifts from the base frequency to each of several sidebands and to first- and second-tier resonance frequencies, while also shifting the intermediate frequency (IF) up and down.  The effect is to make it essentially impossible to locate or extract any signal intelligence at all from an intercepted signal.  It all sounds like random background noise.

  

Although it is impossible to reconstruct a signal that is both frequency and modulation agile (called “FMA”), it is possible, with very advanced processors, to detect that an FMA signal is present.  This uses powerful math algorithms running on massive amounts of recorded data, with the analysis resolved many hours after the transmission ends.  Even then, it can only confirm to a high probability that an FMA signal is present, without providing any indication of what is being sent.

  

This makes it ideal for encrypted messages, but even our best labs have managed FMA transmission only when the transmitter and the receiver are physically wired together so they can synchronize their agile reconstruction correctly.  The NRL is experimenting with mixes of FMA and non-FMA and digital and analog emissions all being sent at the same time, but it is years away from being able to deploy a functional FMA system.

  

I mention all this because, as part of Vindin’s rebuttal, he was able to secure the use of the powerful NASA signal processing computers to analyze the signals he recorded, and he confirmed a 91% probability that the signal is FMA.  This has, of course, been a huge source of controversy because it appears to indicate that we are detecting a signal that we do not have the technology to create.  The NRL and NSA have been following all this with great interest and have independently confirmed Vindin’s claims.

  

What all this means is that we may never be able to reconstruct the signal to the point of understanding or even seeing text, images or other intelligence in it, but what it does confirm is that the signal came from an intelligent source and was created specifically for interstellar communications.  There is not even a remote chance that any natural process in the universe could have created these signals.  They have to be the deliberate creation of intelligent life.

  

What came next was a study by Mather Schulz that is and has remained classified.  I had access to it because of my connections at NRL and because I have a lot of history in R&D in advanced communications techniques.  Schulz took all these different reports and put them into a very logical and sequential argument that these untethered planets were not only the source of the FMA signals but are not planets at all.  They are planet-sized spaceships.

  

Once he came to this conclusion, he went back to each of the contributing studies to find further confirming evidence.  In the Takahiro Sumi study from Osaka University and in the Swenson and Pillard study, he discovered that they had detected that the infrared emissions were much stronger on the side away from the line of travel and that there was a faint trail of infrared emissions behind each of the untethered planets.

  

This would be consistent with the heat emissions from some kind of propulsion system pushing the spaceship along.  What form of propulsion could move a planet-sized spaceship is unknown, but the fact that we can detect the IR trail at such great distances indicates it is producing a very large trail of heated or ionized particles extending far behind the moving planets.  He found this on 8 of the 10 untethered planets, and he also noted that the two without these IR emissions are the only ones that are not accelerating.  That would be consistent with a propulsion system that is turned off while the spaceship coasts.

  

The concept of massive spaceships has always been one of the leading solutions to sub-light-speed travel.  The idea has been called the “generation ship” – a vessel capable of supporting a population large enough, for long enough, that multiple generations of people can survive in space.  This would allow survival for the decades or centuries needed to travel between star systems or even galaxies.  Once a planet is free of its gravitational tether to its star, it is free to move in open space.  Replacing the light and heat of its sun is not a difficult technological problem when you consider the possible use of thermal energy from the planet’s core – and a technology that has achieved this level of science would probably find numerous other viable solutions.

  

Schulz used a combination of the Very Large Array of interferometric antennas at Socorro, New Mexico along with the systems at Pune, India and Arecibo, PR to collect data, and then had the bank of Panther Cray computers at NSA analyze it to determine that the FMA signals were coming from the region of space that exactly matched the void measured and studied by Takahiro Sumi.  NSA was more than happy to let Schulz use its computers to prove that it had not dropped the ball and allowed someone else on earth to develop a radio signal that it could not intercept and decipher.

  

Schulz admitted that he could not narrow the detection down to a single untethered planet (or spaceship), but he could isolate it to the immediate vicinity of where they were detected.  He also verified the Swenson and Pillard finding that other voids had similar but usually weaker readings.  He pointed out that there may be many more signal sources from many more untethered planets but that, outside of these voids, the weak signals were being deflected or absorbed by intervening objects.  He admitted that finding the signals in other voids did not confirm that they also held untethered planets, but it does not rule out that possibility either.

  

Finally, Schulz set up detection apparatus to simultaneously measure the FMA signals using the worldwide network of radio telescopes while also recording magnetic, visual and RF signals from the Auroral Kilometric Radiation (AKR).  He got the visual images with synchronized high speed video recordings from the ISIS in cooperation with the Laboratory for Planetary Atmospherics out of the Goddard SFC.

  

Getting NSA’s help again, he was able to identify a very close correlation among these three streams of data, showing that it was indeed the FMA signal originating from these untethered planets that preceded – and apparently caused – corresponding changes in the lines of magnetic force made visible in the AKR.  The visual confirmation was not in shape or form changes in the aurora but in color changes that occurred at a much higher frequency than the apparent movements of the aurora lights.  What was being measured was the increase and decrease in the flash rate of individual visual-spectrum frequencies.  Even at high speed, the cameras could pick up only momentary fragments of the signal – like catching a single frame of a movie every 100 or 200 frames.  Despite this intermittent sampling, what was observed synchronized exactly with the magnetic and RF signals – giving a third source of confirmation.  Schulz offered only shallow speculation that the FMA signal combines agile frequencies and modulation methods that are far beyond our ability to decipher.
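For readers wondering how you establish that one data stream “precedes” another, the standard trick is cross-correlation: if signal A drives signal B with a delay, the correlation between them peaks at that lag.  Here is a minimal sketch with synthetic data standing in for the FMA and AKR streams (none of the real data is public, so this only illustrates the method):

```python
import numpy as np

# If stream B is a delayed, noisy copy of stream A, the dot product of A
# against shifted versions of B peaks at the true lag.

rng = np.random.default_rng(0)
n, lag_true = 5000, 40                        # samples; true lead of stream A
a = rng.standard_normal(n)                    # "FMA" stream (synthetic)
b = np.roll(a, lag_true) + 0.5 * rng.standard_normal(n)   # delayed noisy copy

lags = np.arange(-100, 101)
xcorr = [np.dot(a, np.roll(b, -k)) for k in lags]
print("estimated lag:", lags[int(np.argmax(xcorr))])       # ~ 40 samples
```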

  

This detection actually supports a theory that has been around for years – that a sufficiently high frequency, modulated in harmonic resonance with the atomic-level vibrations of the solar wind (the charged particles streaming out of the sun that create the aurora at the poles), can be used to create harmonics at very large wavelengths – essentially creating slow condensations and rarefactions in the AKR.  This is only a theory based on math models that seem to make it possible; the control of the frequencies involved is far beyond any known or even speculated technology, so it is mostly dismissed.  Schulz mentioned it only because it is the only known reference to a possible explanation for the observations – though it gains some credibility from the fact that the theory’s math model maps exactly to the observations.

  

Despite the low energy and low frequency of the signal, and despite the fact that we have no theory or science that can explain it, the evidence was conclusive.  These untethered planets appear to be moving under their own power and are emitting some unknown kind of signal that is somehow able to modulate our entire planet’s magnetic field.  The conclusion that they are actually very large spaceships, containing intelligent life capable of creating these strange signals, seems unavoidable.

  

The most recent report from Schulz was published in late December 2011.  The fallout and reactions to all this are still in their infancy.  I am sure they will not make this public for a long time, if ever.  I have already seen and heard about efforts to work on this at several DoD and private classified labs around the world.  I am sure this story is not over.

  

We do not yet know how to decode the FMA signals and we don’t have a clue how they affect the AKR, but our confirmed and verified observations point to only one possible conclusion – we are not alone in the universe, and whoever is out there has vastly greater technology and intelligence than we do.

  

IBAL – The latest in Visual Recon

  

The latest addition to reconnaissance is a new kind of camera that takes a new kind of picture.  The device is called a plenoptic camera or light-field camera.  Unlike a normal camera that takes a snapshot of a 2D view, the plenoptic camera uses a microlens array to capture a 4D light field.  This is a whole new way of capturing an image that actually dates back to 1992, when Adelson and Wang first proposed the design.  Back then, the image was captured on film with limited success, but it did prove the concept.  More recently, a Stanford University team built a 16-megapixel electronic camera with a 90,000-microlens array that proved the image could be refocused after the picture is taken.   Although this technology has already made its way into affordable consumer products, as you might expect, it has also been extensively studied and applied to military applications.

 

To appreciate the importance and usefulness of this device, you need to understand what it can do.  If you take a normal picture of a scene, the camera captures one set of image parameters – focus, depth of field, light intensity, perspective and one very specific point of view.  These parameters are fixed and cannot change, and the end result is a 2-dimensional (2D) image.  What the light-field camera does is capture the physical characteristics of the light of a given scene in enough detail that a computer can later recreate the image as if the original scene were totally reconstructed in the computer.  In technical terms, it captures the radiance – watts per steradian per square meter – along each ray of light.  This means it captures and can quantify the wavelength, polarization, angle, radiance and other scalar and vector values of the light.   The result is a multi-dimensional light-field function that a computer can use to recreate the scene as if you were looking at the original at the time the photo was taken.

 

This means that after the picture is taken, you can refocus on different parts of the image and zoom in without a significant loss of resolution.  If the light-field camera is capturing moving video of a scene, a computer can render an accurate 3-dimensional representation of what was captured.  For instance, using a state-of-the-art light-field camera to take an aerial light-field video of a city from a UAV drone at 10,000 feet, the data could be used to zoom in on details within the city – the text of a newspaper someone is reading, or the face of a pedestrian.  You could recreate the city in a dimensionally accurate 3D rendering that you could then traverse from a ground-level perspective in a computer model.  The possibilities are endless.
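The after-the-fact refocusing is less mysterious than it sounds.  The published method, usually called shift-and-add, shifts each sub-aperture image in proportion to its offset from the aperture center and averages; a parameter alpha selects the virtual focal plane.  The sketch below uses a tiny synthetic light field and illustrates only the public technique, not any military processing chain:

```python
import numpy as np

# Synthetic refocusing from a 4D light field L[u, v, y, x]: shift each
# sub-aperture image by an amount proportional to its (u, v) offset from
# the center, then average. alpha picks the virtual focal plane.

def refocus(lightfield, alpha):
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    shift = 1.0 - 1.0 / alpha                  # per-unit-aperture pixel shift
    for u in range(U):
        for v in range(V):
            dy = int(round((u - U // 2) * shift))
            dx = int(round((v - V // 2) * shift))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Tiny example: 5x5 angular samples of a 64x64 scene (random placeholder data).
lf = np.random.rand(5, 5, 64, 64)
near_focus = refocus(lf, alpha=0.8)            # focus closer than capture plane
far_focus = refocus(lf, alpha=1.25)            # focus farther away
print(near_focus.shape, far_focus.shape)
```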

 

As usual, it was NRL that started soonest and has developed the most useful and sophisticated applications for the light-field camera.  Because this camera produces its most useful results as a video camera, the NRL focused on that aspect early on.  The end result was the “IBAL” (pronounced “eyeball”) – for Imaging Ballistic Acquisition of Light.

 

The IBAL is a micro-miniature focused plenoptic camera that uses a masked synthetic aperture in front of an array of 240,000 microlenses, each capturing a 24-megapixel video image.  This is accomplished by a massively overclocked processor that takes just 8 seconds of video at a frame rate of 800 frames per second.  The entire device fits into the nose of an 80 mm mortar round or the M777 155mm howitzer, and it can also be fired from a number of other artillery and shoulder-launched weapons as a sabot round.  The shell is packed with a powerful lithium battery designed to provide up to 85 watts for up to two minutes, from ballistic firing to impact.  The round has gyro-stabilized fin control that keeps the camera pointed at the target in one of two modes.  The first mode is to fire the round at a very high angle – 75 to 87 degrees up.  This gives the round a very steep trajectory that lets it capture its images while descending from a few thousand feet of altitude; since the resolution is very high, it starts capturing as soon as it is aligned and pointed at the ground.  The second mode is to fire the IBAL at a low trajectory – 20 to 30 degrees of elevation.  In this mode the gyro keeps the camera pointed at the ground, through a prism, as the round traverses the battle zone.  In both cases, it uses the last few seconds of flight to transmit a compressed data burst on a UHF frequency to a nearby receiver.  The massive amount of data is compressed with the same kind of algorithm the intelligence community uses for satellite reconnaissance imagery.   One final aspect of the ballistic round is that it has a small explosive in the back that assures it is completely destroyed on impact.  It even has a backup phosphorous envelope that will ignite and melt all of the electronics and optics if the C4 does not go off.

 

Since the object is to recon and not attack, the actual explosive is quite small, and when it goes off the explosion is almost entirely contained inside the metal casing of the round.  Using the second, low-trajectory firing mode, the round passes over the battle zone and lands far beyond without attracting much attention.  In a more active combat environment, the high-trajectory mode attracts little attention; if noticed at all, it appears to be a dud.

 

The data is received by a special encrypted digital receiver that decodes it and feeds it into the IBAL processor station – a powerful laptop that can be integrated into a number of other visual representation systems, including 3D imaging projectors, 3D rendering tables and virtual-reality goggles.  The data can be used to recreate the captured images in a 3D model so detailed that measurements taken from the image are accurate to within one-tenth of an inch.

 

The computer is also able to overlay any necessary fire-control grid onto the image so that precise artillery fire can be vectored to a target.  The grid can be a locally created reference or simply very detailed latitude and longitude using GPS measures.  As might be expected, this imagery is fully integrated into the CED (combat environmental data) information network and the DRS (digital rifle system) described in my other reports.   This means that within seconds of firing the IBAL, a 3D image of the combat zone is available on the CED network for all the soldiers in the field to use.  It is also available for snipers planning their kill zones and for the artillery to fine-tune fire control.  Since it sees the entire combat zone from the front, overhead and back, it can be used to identify, locate and evaluate potential targets such as vehicles, mortar positions, communications centers, enemy headquarters and other priority targets.

 

Using this new imaging system in combination with all the other advances in surveillance and reconnaissance that I have described here and others that I have not yet told you about, there is virtually no opportunity for an enemy to hide from our weapons.

“SID” Told Me! The Newest Combat Expert

Sensor fusion is one of those high-tech buzzwords that the military has been floating around for nearly a decade. It is supposed to describe the integration and use of multiple sources of data and intelligence in support of decision management on the battlefield or combat environment. You might think of a true sensor fusion system as a form of baseline education. As with primary school, the information is not gathered to support a single job or activity but to give the end user the broad awareness and knowledge to adapt and make decisions about a wide variety of situations that might be encountered in the future. As you might imagine, supporting “a wide variety of situations that might be encountered in the future” takes a lot of information, and the collation, processing and analysis of that much information is one of the greatest challenges of a true sensor fusion system.

  

One of the earliest forms of sensor fusion was the Navy Tactical Data System or NTDS. In its earliest form, it allowed every ship in the fleet to see on its radar scopes the combined view of every other ship in the fleet. Since the ships might be separated by many miles, this effectively gave a radar umbrella that extended hundreds of miles in every direction – much further than any one ship could attain. It got a big boost when they added the radar of airborne aircraft flying Combat Air Patrol (CAP) from 18,000 feet of altitude. Now every ship could see as if it had radar that looked out hundreds of miles and covered thousands of square miles.

In the latest version, now called the Cooperative Engagement Capability (CEC), the Navy has also integrated fire-control radar so that any ship, aircraft or sub can fire on a target that can be seen by any other ship, aircraft or sub in the fleet – including ships with different types of radars, such as X-band, MMWL, pulsed Doppler, phased array, aperture synthesis (SAR/ISAR), FM-CW, even sonar. This allows a guided missile cruiser to fire a missile at a target it physically cannot see but that can be seen by some other platform somewhere else in the combat arena. Even a ship with no radar of its own at all can benefit from the CEC system and “see” what any other ship sees with its radar.  That is sensor fusion.

  

The end result is a system that supports a wide variety of situations, from the obvious combat defensive tactics and weapons fire control to navigation and air-sea rescue. Each use takes from the CEC system the portion of the total available information that its specific situation needs.

  

The Army has been trying to incorporate that kind of sensor integration for many years. So far, they have made strides in two areas: the use of UAVs (unmanned aerial vehicles) and helmet-mounted systems.  Both gather observed information at some remote command post where it is manually processed, analyzed, prioritized and then selectively distributed to other forces in the combat area. There are dozens of minor efforts that the Army calls sensor fusion, but each is really just a single set of sensors with a dedicated objective, feeding a specific system with very specific data. An example is the Guardian Angel program, designed to detect improvised explosive devices (IEDs) in Iraq and Afghanistan. Although it mixed several different types of detection devices and overlaid various imagery data, each sensor was specifically designed to support the single objective of the overall system. A true sensor fusion system gathers and combines data that will be used for multiple applications and situations.

  

A pure and fully automated form of this technology is sometimes referred to as multi-sensor data fusion (MSDF), and it had not been achieved – until now. MSDF has been the goal of DoD for a long time – so much so that they even have a Department of Defense (DoD) Data Fusion Group within the Joint Directors of Laboratories (JDL). The JDL defined MSDF as the “multilevel, multifaceted process of dealing with the automatic detection, association, correlation, estimation and combination of data and information from multiple sources with the objective to provide situation awareness, decision support and optimum resource utilization by and to everyone in the combat environment”. That means the MSDF must be useful not just to the command HQ and the generals and planners but to the soldiers on the ground and the tank drivers and the helo pilots actively engaged with the enemy in real time – not filtered or delayed by processing or collating the data at some central information hub.

  

There are two key elements of MSDF that make it really hard to implement. The first is the ability to make sense of the data being gathered. Tidbits of information from multiple sensors are like tiny pieces of a giant puzzle. Each one, by itself, provides virtually no useful information and becomes useful only when combined with hundreds or even thousands of other data points to form the big picture. It takes time and processing power to do that kind of collating, and therein lies the problem. If that processing power is centrally located, then the resulting big picture is no longer available in real time to an actively developing situation. Alternatively, if the processing power is given to each person in the field who might need the data, then carrying, maintaining and interpreting the big picture becomes a burden on every soldier in the combat environment. As the quantity, diversity and complexity of the data being integrated rises, the required processing power grows at an exponential rate – and the knowledge and skills demanded of the end user rise to the point that only highly trained experts can use such systems.

  

The second problem is the old paradox of information overload. On the one hand, it is useful to have as much information as possible to fully analyze a situation and be ready for any kind of decision analysis that might be needed. On the other hand, any single situation might actually need only a small portion of the total data available. Imagine a powerful MSDF network that can provide detailed information about everything happening everywhere in the tactical environment. If every end user had access to all of that data, they would have little use for most of it, because they are interested only in the portion that applies to them. But since no one knows what they will need now or in the future, it is important that they be able to access all of it. Give them that ability and you complicate the processing and the training needed to use it; limit it and you limit their ability to adapt and make decisions.  A lot of data is a good thing, but too much is a bad thing, and the line between the two is constantly moving.

  

I was a consultant to the Naval Research Labs (NRL) on a joint assignment to the JDL to help the Army develop a new concept for MSDF. When we first started, the Army had visions of a vast MSDF system that would provide everything to everyone, but when we began to examine the implications and limitations of such a system, it became clear that we would need to redefine their goals. After listening to them for a few weeks, I was asked to present my ideas and advice. I thought about it for a long time and then created just three slides. The first showed a graphic depiction of the GPS system. In front of two dozen generals and members of the Army DoD staff, I put up the first slide and asked them to just think about it. I waited a full five minutes. They were a room of smart people, and I could see on their faces when they realized that what they needed was a system like GPS: it provides basic and relatively simple information in a standardized format that is then used for a variety of purposes, from navigation to weapons control to location services.  The next question came quickly: what would a similar system look like for the Army in a tactical environment? That’s when I put up my next slide and introduced them to “CED” (pronounced as “SID”).

  

Actually, I called it the CED (Combat Environmental Data) network. In this case, the “E” for Environment means the physical terrain, atmosphere and human construction in a designated area – the true tactical combat environment. It uses an array of sensors that already existed, which I helped develop at the NRL for the DRS – the Digital Rifle System. As you might recall, I described this system and its associated rifle, the MDR-192, in two other reports that you can read. The DRS uses a specially designed sensor called the “AIR”, for autonomous information recon device. It gathers a variety of atmospheric data (wind, pressure, temperature, humidity) as well as a visual image, a laser range-finder scan of its field of view and other data such as vibrations, RF emissions and infrared scans. It also has an RF data transmitter and a modulated laser beam transmission capability. All this is crammed into a device 15 inches long and about 2.5 cm in diameter that is scattered, fired, air-dropped or hidden throughout the target area. The AIRs are used to support the DRS processing computer in the accurate aiming of the MDR-192 at ranges out to 24,000 feet, or about 4.5 miles.

  

The AIRs are further enhanced by a second set of sensors called the Video Camera Sights or VCS. The VCS consists of high-resolution video cameras combined with scanning laser beams whose data are combined in the DRS processing computer to render a true and proportional 3D image of the field of view.  The DRS computer integrates the AIR and VCS data so that an entire objective area can be recreated in fine 3D detail.  Since the area is surrounded with VCS systems and AIR sensors are scattered throughout it, the target area can be recreated so accurately that the DRS user can see almost everything in the area as if he were standing at almost any location within it.  The DRS user is able to accurately see, measure and ultimately target the entire area – even from the other side of a mountain.  The power of the DRS is the sensor fusion of this environment for the purpose of aiming the MDR-192 at any target anywhere in the target area.

  

My second slide showed the generals that, using AIR and VCS sensor devices combined with one new sensor of my design, an entire tactical zone could be fully rendered in a computer. The total amount of data available is massive, but the end user treats it like the GPS or the DRS system, pulling down only the data needed at that moment for a specific purpose.  That data and purpose can support a wide variety of situations encountered in the present or future by a wide variety of end users.

   

My third slide was simply a list of what the CED network would provide to the Army generals as well as to each and every fielded decision maker in the tactical area. I left this list on the screen for another five minutes and began hearing comments like “Oh my god”, “Fantastic!” and “THAT’S what we need!”

  

Direct and Immediate Benefits and Applications of the CED Network

  ·        Autonomous and manned weapons aiming and fire control

  ·        Navigation, route and tactical planning, attack coordination

  ·        Threat assessment, situation analysis, target acquisition

  ·        Reconnaissance, intelligence gathering, target identity

  ·        Defense/offense analysis, enemy disposition, camouflage penetration

  

My system was immediately accepted, and I spent the next three days going over it again and again with different levels of the Army and DoD. The only additional bit of information I added in those three days was the nature of the third device that I added to the AIR and VCS sensors.  I called it the “LOG”, for Local Optical Guide.

  

The LOG mostly gets its name from its appearance. It looks like a small log or a cut branch of a tree that has dried up.  In fact, great effort has gone into making it look like a natural log so that it will blend in.  There are actually seven different LOGs in appearance, but the insides are all the same.  Each contains four sensor modules: (1) a data transceiver that connects to the CED network and responds to input signals.  The transceiver sends a constant flow of images and other data, but it will also collect and relay data received from other nearby sensors.  To handle the mixing of data, all the transmitters are FM and frequency agile – meaning they transmit a tiny fraction of data on a VHF frequency and then hop to another frequency for the next few bits.  The embedded encryption keeps all the systems synchronized, but the effect is that it is nearly impossible to intercept, jam or even detect the presence of these signals; (2) six high-resolution cameras with night vision capability, located so that no matter how the LOG lands on the ground, at least two cameras will be useful for gathering information.  The camera lenses can be commanded to zoom from a panoramic wide angle to a 6X telephoto, but default to wide angle; (3) an atmospheric module that measures wind, temperature, humidity and pressure; and (4) an acoustic and vibration sensing module with six microphones, one on each surface, accurate enough to give precise intensity and crude directionality to sensed sounds.  There is also a fifth, self-destruct module that is powerful enough to completely destroy the LOG and injure anyone trying to dismantle it.

  

The LOG works in conjunction with the AIR for sound sensing of gunfire. Using the same technology applied in the Boomerang gunfire locator developed by DARPA and BBN Technologies, the CED system can locate the direction and distance to gunfire within one second of the shot.  Because the target area is covered with numerous LOG and AIR sensors, the CED gunfire locator is significantly more accurate than DARPA’s Boomerang system.
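The underlying method in Boomerang-class systems is time-difference-of-arrival (TDOA): each sensor timestamps the muzzle blast, and the position that best explains the timing differences is solved numerically.  The sketch below illustrates the principle with an invented four-sensor layout and a crude grid search; real solvers are, of course, far more refined:

```python
import numpy as np

# TDOA gunfire location: the shooter position that best explains the
# arrival-time differences across the sensor grid is found numerically.
# Sensor layout, shooter position and the grid search are all illustrative.

C_SOUND = 343.0                                   # speed of sound, m/s

sensors = np.array([[0, 0], [120, 10], [40, 150], [160, 140]], float)
true_shooter = np.array([90.0, 75.0])

# Simulated arrival times (any common clock offset cancels in differences).
t_arrival = np.linalg.norm(sensors - true_shooter, axis=1) / C_SOUND

def tdoa_residual(p):
    t = np.linalg.norm(sensors - p, axis=1) / C_SOUND
    return (t - t[0]) - (t_arrival - t_arrival[0])   # differences vs. sensor 0

# Coarse 1 m grid search over the covered area (a real system refines this).
xs, ys = np.meshgrid(np.linspace(0, 200, 201), np.linspace(0, 200, 201))
best, best_err = None, np.inf
for x, y in zip(xs.ravel(), ys.ravel()):
    err = np.sum(tdoa_residual(np.array([x, y])) ** 2)
    if err < best_err:
        best, best_err = (x, y), err

print("estimated shooter position:", best)        # ~ (90, 75)
```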

  

The total CED system consists of these three modules – LOG, AIR and VCS – and a receiving processing module that can take the form of a laptop, a handheld or a backpack system. Although the computer processor (laptop) used in the DRS was a very sophisticated analyzer of that system’s sensor inputs, the processors for the CED system are substantially more advanced in many ways.  The most important difference is that the CED system is a true network that places all of the sensory data on the air in an RF-transmitted cloud of information that saturates the target area and nearby areas.  It can be tapped into by any CED processor anywhere within range of the network, and each CED or DRS processor pulls out just the information it needs for the task at hand.  To see how this works, here are some examples of the various uses of the CED system:

  

SNIPER

Either a DRS or a CED processor can be used to support the sniper. More traditional snipers using standard rifles will tap into the CED network to obtain highly accurate wind, temperature, pressure and humidity data as well as precise distance measurements.  Using the XM25-style HEAB munitions that are programmed by the shooter, nearly every target within the CED combat area can be hit and destroyed.  The CED computers can feed data directly into the XM25/HEAB system so that the sniper does not have to use his laser range-finder to sight in the target.  He can also be directed to aim using the new Halo Sight System (HSS).  This is a modified XM25 fire-control sight that uses a high-resolution LCD thin-film filter to place a small blinking dot at the aim point of the weapon.  This is possible because the CED processor can precisely place both target and shooter and can calculate the trajectory based on inputs from the LOG, AIR and VCS sensor grid of the network.  It uses lasers from the AIRs to locate the shooter and images from the VCS and LOG sensors to place the target.  The rest is just mathematical calculation of the aim point to put an HEAB or anti-personnel 25mm round onto the target.  It is also accurate enough to support standard sniper rifles, the M107/M82 .50 cal. rifle or the MDR-192; any of these can be fitted with the HSS sight for automated aim-point calculations.
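To give a flavor of the “mathematical calculation” involved, here is a deliberately crude flat-fire holdover estimate of the kind such a processor might run.  It ignores drag and most real ballistics – the lag factor is a pure guess – so treat it as a sketch of the idea, not the DRS/HSS math:

```python
import math

# Crude flat-fire aim-point estimate from range plus sensed crosswind.
# Vacuum time-of-flight and a guessed drag-lag factor; illustrative only.

G = 9.81                                   # gravity, m/s^2

def aim_point(range_m, muzzle_v, crosswind):
    t = range_m / muzzle_v                 # time of flight, vacuum approximation
    drop = 0.5 * G * t * t                 # bullet drop over that time, meters
    lag = 0.1 * t                          # assumed drag lag factor (pure guess)
    drift = crosswind * lag                # crude crosswind drift, meters
    holdover_moa = math.degrees(math.atan2(drop, range_m)) * 60
    return holdover_moa, drift

moa, drift = aim_point(range_m=1200, muzzle_v=850, crosswind=4.0)
print(f"hold {moa:.1f} MOA high, {drift:.2f} m into the wind")
```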

  

In the case of the MDR-192, the rifle is mounted on a digitally controlled tripod linked directly to the DRS or CED computer. The effect is to create an autonomous small-caliber artillery weapon.  That means an operator of a CED (or DRS) computer who has tapped into the CED network can identify a target somewhere in the covered combat arena and send that data to any one of several MDR-192 rifles placed around the area.  Each autonomous MDR-192 has an adjustment range of 30 degrees left and right of centerline and 15 degrees up and down.  Since the range of the typical MDR-192 is up to 24,000 feet, four rifles can very effectively cover a target area of up to four square miles.  The computer data instructs the selected MDR-192 to move to the required aim point – accounting for all of the ballistic and environmental conditions – and fire.  As described in the report on the MDR-192 and DRS, the system can be accessed by an operator remotely located from both the rifles and the target area – as much as 5 miles away.

  

Recent tests of the CED system and the MDR-192 have proven their effectiveness. The only defense that the enemy has is to stay in an underground bunker.

  

Artillery

The CED network is the ultimate forward observer for artillery placement of smart weapons. Using the visual sensors of the LOG and VCS and the gunfire-locator sensors of the LOG and AIR, any target within the entire combat arena can be very precisely located.  It can then be identified with GPS coordinates for the dropping of autonomous weapons such as a cruise missile, or illuminated with a laser from a nearby AIR or MDR-192 for a smart weapon’s fire-control aim point.

  

Even standard artillery has been linked into the CED system. A modified M777 howitzer (155mm) can be linked in using a set of sensors strapped to the barrel that senses its aim point to within 0.0003 degrees in three dimensions.   The CED network data is sent to a relay transmitter and then up to 18 miles away to the M777 crew.  The M777 is moved in accordance with some simple arrows and lights until a red light comes on, indicating that the aim point has been achieved for the designated target – then they fire.  Tests have placed as many as 25 rounds within a 10-foot (3-meter) radius from 15 miles away using this system.

  

Intelligence and Reconnaissance

The CED system is also ideally suited to completely defining the enemy’s distribution and activity and covertly pre-identifying targets for a later assault or barrage. The AIR and LOG systems can pick up sounds that can be matched to the LOG and VCS images and video to place and identify points of activity, vehicles and radios.  The VCS and AIR imaging can map movements and identify specific types of equipment, weapons and vehicles in the area.  During battle, snipers and other gunfire can be located with the acoustic gunfire locator using the AIR and LOG sensors.  The LOG and VCS systems also have gun-flash identifiers that can distinguish muzzle flash in images – in complete darkness or the brightest daylight.

  

One of the remarkable additions to the CED processors is the ability to recreate an accurate 3D animation of the target area. The rendering is accurate enough that measurements taken from the 3D image will match the real-world layout to within fractions of an inch.  This is useful for passing the 3D rendering back to an HQ or forward planning area for use in planning, training and managing an assault.

  

The CED network has just finished field testing in several isolated combat areas in Afghanistan, and it has proven most effective. Work has already begun on improving the AIR, LOG and VCS sensors in an effort to consolidate, miniaturize and conceal them to a greater degree.  They are also working on an interface to an autonomous UAV that will add aerial views using laser, IR and visual sensors.

  

The troops that have used this system consider it the smartest and most advanced combat information system ever devised, and the comment “CED told me” is becoming recognized as the best possible source of combat information.

The Problems with Cosmology

Why the Universe does NOT add up!

In 2008, lead researcher Alexander Kashlinsky of NASA’s Goddard Space Flight Center in Greenbelt, and his team, completed a study of three years of data from a NASA satellite, the Wilkinson Microwave Anisotropy Probe (WMAP), using the kinematic Sunyaev-Zel’dovich effect. They found evidence of a common motion of distant clusters of galaxies of at least 600 km/s (about 1.3 million miles per hour) toward a 20-degree patch of sky between the constellations of Centaurus and Vela.

 

Kashlinsky and colleagues suggest that whatever is pulling on the mysteriously moving galaxy clusters might lie outside the visible universe.  Telescopes cannot see events earlier than about 380,000 years after the Big Bang, when the Cosmic Microwave Background (CMB) formed; this corresponds to a distance of about 46 billion (4.6×10¹⁰) light years. Since the matter causing the net motion in Kashlinsky’s proposal is outside this range, it would appear to be outside our visible universe.

Kashlinsky teamed up with others to identify some 700 clusters that could be used to detect the effect. The astronomers detected bulk cluster motions of nearly two million miles per hour, toward a 20-degree patch of sky between the constellations of Centaurus and Vela. Their motion was found to be constant out to at least about one-tenth of the way to the edge of the visible universe.

 

Kashlinsky calls this collective motion a “dark flow,” in analogy with more familiar cosmological mysteries: dark energy and dark matter. “The distribution of matter in the observed universe cannot account for this motion,” he said.

According to standard cosmological models, the motion of galaxy clusters with respect to the cosmic microwave background should be randomly distributed in all directions.  The finding contradicts conventional theories, which describe such motions as decreasing at ever greater distances: large-scale motions should show no particular direction relative to the background.  If the Big Bang theory is correct, this should not happen, so we must conclude that either (1) the measurements are wrong or (2) the Big Bang theory is wrong. Since they measured no small motion – two million miles per hour – by some 700 galaxy clusters all moving in the same direction, it seems unlikely that the observations are wrong. That leaves us to conclude that perhaps the whole Big Bang theory is wrong.

 

In fact, there are numerous indicators that our present generally accepted theory of the universe is wrong and has been wrong all along.   Certainly our best minds are trying to make sense of the universe, but when we can’t do so, we make up stuff to account for the aspects we cannot explain.

 

For instance, current theory suggests that the universe is between 13.5 and 14 billion years old.  This was developed from the Lambda-CDM concordance model of the expansion evolution of the universe and is strongly supported by high-precision astronomical observations such as the Wilkinson Microwave Anisotropy Probe (WMAP).  However, Kashlinsky’s team calculates that the source of the dark flow lies at least 46.5 billion light years away.  Taken at face value, light from that distance would have been traveling for more than three times the age of the known universe – whatever is out there would have to predate the Big Bang by more than 30 billion years.

 

Or perhaps we got it all wrong.  Consider the evidence and the assumptions we have drawn from it.

 

The Big Bang is based on Big Guesses and Fudge Factors


ΛCDM or Lambda-CDM is an abbreviation for Lambda-Cold Dark Matter. It is frequently referred to as the concordance model of Big Bang cosmology, since it attempts to explain cosmic microwave background observations as well as large-scale structure observations and supernova observations of the accelerating expansion of the universe. It is the simplest known model that is in general agreement with observed phenomena.

 

·         Λ (Lambda) stands for the cosmological constant, a dark energy term that allows for the current accelerating expansion of the universe.  Its current value of 0.74 implies that 74% of the energy density of the present universe is in this form.  That is an amazing statement – that 74% of all the energy in the universe is accounted for by this dark energy concept.  It is a pure guess based on what has to be present to account for the expansion of the universe.  Since we have not discovered a single hard fact about dark energy – we don’t know what it is, what causes it or what form it takes – Lambda is a made-up number that allows the math to match the observations in a crude manner.  We do not know whether dark energy is a single force or the effect of multiple forces, since we have no units of measure to quantify it.  It is supposed to be an expansion force that counters the effects of gravity, but it does not appear to be anti-gravity, nor does it appear to emanate from any one location or area of space.  We can observe the universe out to about 46 billion light years, and yet we have not found a single piece of observable evidence for dark energy other than its mathematical implications.

 

·         Dark matter is also a purely hypothetical factor that expresses the content of the universe that the model says must be present in order to account for why galaxies do not fly apart.   Studies show that there is not enough visible mass in most large galaxies to hold them together and to account for their rotational speeds, gravitational lensing and other large-structure observations.  The amount of mass needed is not just a little bit off: back in 1933, Fritz Zwicky calculated that it would take 400 times more mass than is observed in galaxies and clusters to account for the observed behavior.  This is not a small number.  Dark matter is said to account for 22% of the universe.  Since Zwicky trusted his math and observations to be flawless, he concluded that all the needed mass is, in fact, in each galaxy but we just can’t see it.  Thus was born the concept of dark matter.  Although we can see 2.71×10²³ miles into space, we have not yet observed a single piece of dark matter.  To account for this seemingly show-stopping fact, advocates say, “well, duh, it’s DARK matter – you can’t SEE it!”  However, it appears that it is not just dark but completely transparent, because areas of dense dark matter do not stop the stars behind them from being visible.  So, 22% of all the mass in the universe cannot be seen, is in fact transparent, has never been observed, and does not appear to have any direct interaction with any known mass other than through gravity.

 

·         The remaining 4% of the universe consists of 3.6% intergalactic gas; just 0.4% makes up all of the matter (and energy) in all the atoms (and photons) of all the visible planets and stars in the universe.  The arithmetic of this budget is summarized just below.
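For reference, the 74/22/4 split is simply the statement that the density parameters of the model sum to one for a flat universe:

```latex
% Density parameters in the concordance model (WMAP-era values quoted above):
%   dark energy + dark matter + ordinary matter (gas, stars, planets)
\[
  \Omega_\Lambda + \Omega_{\mathrm{dm}} + \Omega_b \;\approx\; 0.74 + 0.22 + 0.04 \;=\; 1
\]
```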

 

ΛCDM is a model.  It says nothing about the fundamental physical origin of dark matter, dark energy or the nearly scale-invariant spectrum of primordial curvature perturbations: in that sense, it is merely a useful parameterization of ignorance.

 

One last problem with modern cosmology: there is very poor agreement between quantum mechanics and cosmology.  On numerous levels and subjects, quantum mechanics does not scale up to account for cosmological observations, and cosmology does not scale down to agree with quantum mechanics.  Sir Roger Penrose, perhaps one of the pre-eminent mathematicians in the world, has published numerous studies documenting the failure of our math to accurately reflect our observed universe and vice versa.  He can show hundreds of cases where the math fails to account for observations and hundreds of observations that contradict the math we believe in.

 

The truth is that we have done the best we can, but we should not fool ourselves that we have discovered the truth.  Much as we once believed in the ether, astrology, a flat earth and the four humours, we must be willing to accept that notions like dark matter are ingenious and inventive explanations that account for observations but probably do not correspond to factual, realistic natural phenomena.

 

There is, however, a logical and quite simple explanation of all of the anomalies and observations that perplex cosmology today.  That simple explanation is described in the next report called “Future Cosmology”.

Fast Boat – No Power

 

I grew up around boats and have had several of my own – power and sail.  I also did the surfing scene in my youth, but that was back when the boards were 12 feet long and weighed 65 pounds or more.  When I had a sailing sloop, I was fascinated by being able to travel without an engine.  I began experimenting with what other kinds of thrust or moving force I could use to move me over water.  I eventually came up with something that is pretty neat.

 

My first attempt was to put an electric trolling motor and a small lawn mower battery on my 12-foot fiberglass surfboard.  Later, I added a solar panel to charge the battery.  A newer version that I tried about two years ago was much larger and made enough power that I could run the motor at low speed for several hours.  I put a contoured lounge chair and two tiny outriggers on it and traveled from Mobile, AL to Pensacola, FL, non-stop in one day.  I liked it, but it was not fast enough.

 

Surfing always surprised me with how fast you can go.  Even normal ocean and Gulf waves move faster than most boats – averaging about 25 MPH.  I wanted to make a boat that could use that power.  A boat featured in an article in Popular Science especially motivated me: the Suntory Mermaid II, an aluminum catamaran built by Yutaka Terao in 2007, has been tested to sustain a speed of 5 knots using an articulated fin (foil) activated by the up-and-down motion of the boat in the waves.  This obviously works, but it is slow and obviously depends on bobbing up and down.  I wanted a smoother ride and to go faster.  Much faster.  It took a few years, but I did it.

 

At first I took the purely scientific approach and tried to computer-model the Boussinesq equations along with the hull formula and other math to help design a method for keeping the boat at the optimum point on the wave.  I even got Plato to help, and this gave me some background, but the leap from model to design was too difficult to do purely on paper – still, I was confident I could figure it out by experimenting.

 

What I learned is that ocean waves vary by wavelength, and wavelength determines their speed.  The USS Ramapo calculated that waves it encountered were moving at 23 meters per second and carried energy of 17,000 kilowatts per one-meter length of wave crest.  That is 51 miles per hour and enough energy to move a super freighter – about twice the speed of the average wave.  Waves with a wavelength of about 64 meters in deep water will have a speed of about 10 m/s, or about 22 miles per hour – a very respectable speed for a boat.  The energy in a wave is proportional to the square of its height – so a 3m wave is 9 times more powerful than a 1m wave, but even a 1 meter wave has more than enough energy to move a boat hull through the water.
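Those numbers all fall out of standard linear deep-water wave theory, which is worth showing because it is the whole basis of the boat: phase speed follows the dispersion relation c = sqrt(g·λ/2π), and energy per square meter of surface is ρgH²/8.  A quick check in Python:

```python
import math

# Deep-water wave arithmetic: phase speed from the dispersion relation,
# energy density from linear wave theory (rho = seawater density).

G, RHO = 9.81, 1025.0

def phase_speed(wavelength_m):
    return math.sqrt(G * wavelength_m / (2 * math.pi))

def energy_density(height_m):
    return RHO * G * height_m ** 2 / 8        # joules per m^2 of sea surface

for lam in (8, 64, 340):
    c = phase_speed(lam)
    print(f"{lam:4d} m wavelength -> {c:5.1f} m/s ({c * 2.237:5.1f} mph)")

# Energy scales with the square of wave height:
print(energy_density(3.0) / energy_density(1.0), "x more energy in a 3 m wave")
```

Running it shows a 64 m wavelength gives the 10 m/s figure, and a roughly 340 m wavelength reproduces the Ramapo’s 23 m/s monsters.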

 

I started with a small 21-foot motorsailer with a squared-off stern and a deep-draft keel.    I selected it because it had a narrow hull and a deep draft for a boat this size.  It also had an unusual keel design – instead of a deep narrow keel, it extended from just aft of the bow down to a draft of nearly 5 feet, ran all the way back to the stern, and then rose vertically to the transom – giving almost 85 square feet of keel to reduce the lateral forces of wave and wind action.

 

I installed electric motor thrusters below the waterline on the port and starboard sides of the stern, with intakes facing down.  These were water-jet thrusters I salvaged from some old outboards with bad engines, driven by electric starter motors from cars.  This gave me near-instant yaw control so I could keep the stern of the boat facing the wave.

 

After I got the yaw thrusters working and tested, I replaced the inefficient starter motors with brushless DC motors.  My new water-jet thrusters, mounted fore and aft, look like a shrunk-down version of the Azimuth Stern Drives (ASD) or “Z” drives used in ASD tugs.  The gimbaled thruster housing extends outside the hull while the BLDC motors stay safely inside.

 

I then experimented with the transom/stern design and found that having a weather deck (one that could take on and shed a wave of water without sinking the boat) was essential – though it could also simply be a sealed deck that water could not get onto at all.  I started with the former and ended with the latter.  The obvious intent is to minimize the problem of broaching – when a wave overtakes a boat, pushes it sideways and capsizes it.

 

I also wanted to make sure that the pressure from the wave on the stern was strong and focused on creating thrust for the boat.  I called this addition the pushtram.  To build it, I tested several concave shapes for a fold-out transom extending down to the bottom of the keel.  It ended up taking the shape of a tall clam-shell that folds together to form a rudder but, when opened, presents a 4-foot-wide by 5-foot-deep parabolic pushing surface to the wave.

 

The innovation in the pushtram design came when I realized that facing the concave portion toward the bow instead of aft gave it a natural stability that keeps the boat pointed in the direction of wave travel.  As the boat points further away from perpendicular to the wave, the pushtram exerts more and more righting torque to bring the boat back to perpendicular.  This design all but eliminates the danger of broaching.

 

The lifting of the stern and plowing of the bow is also a problem, so I installed a set of louvers that close with upward travel and open with downward travel of the stern in the water.  This controls the fore-and-aft pitch of the boat as it moves in and out of the waves.  This “pitch suppressor” sticks out about 4 feet aft from the lowest point of the hull and is reinforced with braces to the top of the transom.

 

After some experimenting, I also added a motorized horizontal fin (foil) under the bow that increases its lift when the rear louvers close tightly, as controlled by the computer.  The bow-foil’s lift comes from a design I had developed for the US Navy that pumps oil into heavy rubber bladders to selectively reshape the lifting (airfoil) surface of the blade.  The all-electric control can change the upper and lower cambers of the foil in less than a second.  Combined with a small change in the angle of attack (to prevent cavitation), I could go from a lift coefficient of zero to more than 10.5 (per the Kutta-Joukowski theorem).  I also used my computer modeling to optimize laminar flow while satisfying the Kutta condition, keeping the drag coefficient below 0.15.
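To get a feel for what a morphing foil can lift, the classic lift equation is enough.  The foil area and speed below are my guesses for a boat this size, and a lift coefficient anywhere near 10.5 should be read as a generous upper bound:

```python
# Rough lift check using L = 0.5 * rho * V^2 * A * CL (classic lift equation).
# Area, speed and the CL values are illustrative guesses, not the Navy design.

RHO_SEAWATER = 1025.0                       # kg/m^3

def lift_newtons(speed_ms, area_m2, cl):
    return 0.5 * RHO_SEAWATER * speed_ms ** 2 * area_m2 * cl

for cl in (0.5, 2.0, 10.5):
    kgf = lift_newtons(5.0, 0.75, cl) / 9.81
    print(f"CL {cl:4.1f}: {kgf:8.0f} kgf of lift at 5 m/s")
```

Even at ordinary lift coefficients, water’s density means a small foil generates hundreds of kilograms of force, which is why the bow stays up.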

 

The effect of this weird underwater configuration was to let me keep the stern perpendicular to the wave front with the yaw jets and long keel, while the louvers and bow foil kept the stern down and the bow up as waves pushed the boat.  The computer controller for all this was the real innovation.

 

I used eight simple ultrasonic range finders taken from car parking sensors and placed them on the four sides of the boat – four pointing horizontally and four pointing down.  The horizontal ones gave me distance to the wave when it was visible to that sensor, and the ones pointing down gave me the freeboard, or height of the deck above the waterline.  I also installed a wind vane and anemometer for wind speed and relative direction.  I fed all this into a computer that used servos and relays to control the yaw jets, foil and rudder.
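The controller itself reduces to a very ordinary feedback loop.  This sketch uses hypothetical stand-in functions for the sensor and relay hardware and a made-up proportional gain; it shows the shape of the logic, not my actual code:

```python
import time

# Minimal control loop: read the rangefinders and wind instruments,
# compute a yaw error, drive the thrusters to keep the stern square
# to the wave. read_sensors() and command_yaw_thrusters() are stand-ins.

KP = 0.05                                   # proportional gain (illustrative)

def read_sensors():
    # stand-in for the 8 ultrasonic rangefinders + wind vane/anemometer
    return {"wave_bearing": 175.0, "stern_bearing": 180.0}

def command_yaw_thrusters(power):
    print(f"yaw thrusters: {power:+.2f}")   # stand-in for servo/relay output

for _ in range(3):                          # a few passes for the example
    s = read_sensors()
    error = (s["wave_bearing"] - s["stern_bearing"] + 180) % 360 - 180
    command_yaw_thrusters(KP * error)       # steer stern toward the wave front
    time.sleep(0.1)                         # roughly a 10 Hz loop
```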

 

I had modeled the management software in a simulated ocean-wave environment using a Monte Carlo analysis of the variable parameters.  It took four days of running, but the modeling found the optimum settings and response times for various combinations of input values.  I also developed settings to allow for angles other than 90 degrees to the following waves so I could put the boat on a reach to the winds.  This placed a heavy and constant load on the yaw thrusters, but I found that my boat was lightweight enough to go as much as 35 to 40 degrees left and right of perpendicular to the wave front.
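
As a sketch of what that tuning run might look like in miniature (the scoring function here is a placeholder; the real wave model would be far richer):

    import random

    def simulate(settings, waves):
        # Placeholder wave-riding score: how well the boat holds a wave
        # with this gain and response time.
        gain, response_ms = settings
        return sum(1.0 / (1.0 + abs(w - gain) + response_ms / 1000.0) for w in waves)

    random.seed(1)
    best, best_score = None, -1.0
    for trial in range(10000):                      # days of runtime in the original
        settings = (random.uniform(0.1, 2.0),       # controller gain
                    random.uniform(50, 500))        # response time, ms
        waves = [random.gauss(1.0, 0.3) for _ in range(100)]  # random wave field
        score = simulate(settings, waves)
        if score > best_score:
            best, best_score = settings, score
    print("best gain %.2f, response %.0f ms" % best)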

 

At first I kept the sail mast and the inboard motor of the motorsailer, but after gaining more confidence in the boat’s handling, I took both off.  I do keep a drop-down outboard motor for getting in and out of the harbor.

 

In operation, I use the drop-down outboard to get out of the harbor and into the Gulf, facing in the direction of wave travel.  While the outboard is still running, I open up the pushtram, lower the bow-foil and aft pitch-suppressor, and bring the computer online.  The software is preprogrammed to run a quick test of the thrusters and bow-foil and gives the boat a little wiggle to let me know it is all working.  I then run the outboard up to whatever speed is needed to get me on a wave crest and then shut it down.  Within a few waves, the boat settles into the perfect spot on the wave to receive the optimum benefit of gravity, wave motion and the system settings.  The end result is a boat that travels within +/- 40 degrees of the direction the wind is blowing at sustained speeds of up to 35 knots or more, all day long, without using any gas.

 

Waves being as inconsistent as they are, the thrusters, bow-foil and pitch-suppressor kick in every few minutes to correct for a change in wave or wind direction, or when I drop off a wave and have to pick up another.  Between the pitch-suppressor and the pushtram, it usually takes only 2 or 3 waves to get back up to speed again.  This happens slightly more often as I deviate from pure perpendicular using the thrusters, but it still keeps me moving at almost the speed of the waves about 80 to 90% of the time.

 

I recently tested an improvement that will get me to +/- 60 degrees to the wind’s direction so I can use the boat under a wider range of wind and wave conditions.  I found that with some redesigned shapes on the pushtram, I can achieve a stable heading nearly 60 degrees off the wind.  The innovation came when I applied the hydraulically reshapeable bow-foil idea to the pushtram.  By having the computer dynamically reshape the pushtram with the pumped oil bladders, I can create an asymmetric parabolic shape that produces a stable righting force at a specific angle away from the wind.

 

I also recently incorporated a program that takes GPS waypoint headings and finds a compromise heading between optimum wave riding and the direction I want to go.  This was not as hard as it sounds since I need only get within 60 degrees either side of the wind direction.  Using the computer, I calculate an optimum tack relative to the present wind that will reach a specific destination.  Because it is constantly taking in new data, it is also constantly updating the tack to accommodate changes in wind and wave direction.  It gives me almost complete auto-pilot control of the boat.  I even set it up with automatic geofencing so that if the system gets too far off track or the winds are not cooperating, it sounds an alarm so I can switch to other power sources.
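
A minimal sketch of that compromise-heading calculation, assuming the constraint is simply staying within 60 degrees of dead downwind (all names and numbers are mine, not the author's):

    def clamp_heading(bearing_to_waypoint, wind_dir, max_off_wind=60.0):
        # Steer as close to the waypoint bearing as the wave drive allows:
        # the boat must stay within +/- max_off_wind degrees of the
        # downwind direction (all angles in compass degrees).
        downwind = (wind_dir + 180.0) % 360.0
        # signed smallest difference between desired heading and downwind
        off = (bearing_to_waypoint - downwind + 180.0) % 360.0 - 180.0
        off = max(-max_off_wind, min(max_off_wind, off))
        return (downwind + off) % 360.0

    # Wind from the north (0 deg), waypoint bearing 100 deg: the best the
    # boat can do is 120 deg, 20 deg off the desired course.
    print(clamp_heading(100.0, 0.0))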

 

I began using a 120-watt solar panel that charges the batteries, with a small generator for backup.  I keep a few hours of fuel in the on-board tank for the outboard in case the waves and wind die or I need to cruise the inland waterways or the Intracoastal.

 

Once I’m in the sweet spot of the wave and traveling at a constant speed, the ride is smooth and steady. 

 

I have found that the power of the wave is sufficient that I can tow considerable additional drag and still not change my speed or stability.  I jury-rigged a paddle-wheel generator and easily produced about 300 watts of power with no changes to my computer controller or control-surface settings.  This, plus the solar panel, can now keep up with the usage of the electric thrusters on most days without depleting any of the battery reserve.

 

I am now working on a drop-down APU – auxiliary power unit – which will produce all the power I need on board, with enough left over to charge some large batteries.  My plan is then to use the battery bank to eliminate the need for the outboard motor and gas.  I figure I can get about 800 watts out of the APU and can feed it into a bank of 12 deep-cycle batteries.  When the winds are not right, I just turn the yaw thrusters into main propulsion and take off.

 

I recently took my boat on a trip from Jacksonville, Fla. (Mayport), up the coast to Nags Head and then on to Cape May, NJ.  There was an Atlantic high pressure system off South Carolina that was slowly moving north, so I got out in it and caught the northerly winds and waves.  The total distance was about 1,100 miles.  Being retired from the US Navy, I used the launching facilities at the Mayport Naval Station to put to sea about 8AM on a Monday morning.  I pulled into the Cape May Inlet about 7:30PM on Tuesday.  That was just under 36 hours of wave-powered travel at an average speed of about 27 knots.  Not bad for an amateur.  The best part is that I used just over two gallons of gas, and for most of the trip I just let the boat steer itself.

 

All the modeling in the world does not hold a candle to an hour in the real world.  I observed firsthand how rarely the waves run parallel to the last one and how often they don’t all go in the same direction.  I also observed groups of waves – the long-wavelength envelope of wave sets.  The effect of all that is that the boat did not ride just one wave but lost and gained waves constantly, at irregular intervals.  Sometimes I would ride a wave for as much as 20 minutes and sometimes for only 3 or 4 minutes.  A few times, I got caught in a mix-master of waves that had no focus and had to power out with the outboard.  This prompted me to speed up my plans for installing the APU and the bank of auxiliary batteries so I can make more use of the electric thrusters for main propulsion, and fold that into the computer controller to help maintain a steady speed.

 

I powered around to a friend’s place off Sunset Lake in Wildwood Crest.  He has a boat barn with a lift that allowed me to pull my boat out of the water and change the inboard propeller shaft.  I had taken the inboard engine out and the prop off last year but left the shaft.  This gave me tons of room because I had also taken out the oversize fuel tank.

 

I salvaged one of the electric motor/generators from a crashed Prius and connected it to the existing inboard propeller shaft.  I then mounted a 21″ Solas Alcup high-thrust, elephant-ear propeller.  This prop is not meant for speed, but it is highly efficient at medium and slow speeds.  Its primary advantage is that it produces a large amount of thrust when driven at relatively low RPM by the motor.  It also can easily be driven by the water flowing past it, turning the motor into a generator.

 

I used a hybrid transmission that lets me connect a high-torque 14.7 HP motor-generator and converter to the propeller shaft and to a bank of 12 deep-cycle batteries in a series-parallel arrangement that gives a high-current 72-volt source.  This combination gives me powerful thrust but also produces as much as 50 amps of charging current at shaft speeds that are readily achieved under wave power.
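
The text doesn't spell out the wiring, but the usual way to get 72 volts from twelve 12-volt batteries is two parallel strings of six in series.  A quick check, with an assumed amp-hour rating per battery:

    V_batt, Ah_batt = 12.0, 105.0    # per deep-cycle battery (Ah is my assumption)
    series, parallel = 6, 2          # 6 in series x 2 strings = 12 batteries

    V_pack = V_batt * series         # 72 V, matching the text
    Ah_pack = Ah_batt * parallel     # 210 Ah
    print(V_pack, "V,", Ah_pack, "Ah,", V_pack * Ah_pack / 1000, "kWh")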

 

Now I have a powerful electric motor on the shaft and a bank of deep-cycle batteries in the keel.  The motor-generator plus the solar panels and the APU easily create enough charging current to keep the batteries topped off while still giving me about 5 hours of continuous maximum-speed electric power with no other energy inputs.  In the daytime, with the solar panels and APU working, I can extend running time to about 9 hours.  If I get wave-powered travel for more than 6 hours out of every 24, I can run nearly non-stop.

 

 I am now working on a refined controller for all these changes.  The plan is to have the motor kick on if the speed drops below a preset limit.  The computer will also compute things like how fast and how far I can travel under electric power using only the batteries, solar panels, APU and motor-generator in various combinations.  I’ll also be adding a vertical axis wind turbine that I just bought.  It produces nearly 1 kW and is only 9 feet tall and 30″ in diameter.  For under $5,000, it will be mounted where the sail mast use to be but it will be on a dampened gimbal that will maintain it in an upright vertical position while the boat goes up and down the waves.  By my calculations, on a sunny day with a 10 knot wind, I should be able to power the electric drive all day long without tapping the batteries at all.
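
The speed-triggered motor logic would presumably need hysteresis so the motor doesn't chatter as the boat surges on each wave.  A sketch with placeholder thresholds (the author's presets aren't given):

    def motor_assist(speed_kts, motor_on, low=18.0, high=22.0):
        # Kick on below `low`, drop off above `high`; the gap between the
        # two keeps the motor from cycling on every wave surge.
        if not motor_on and speed_kts < low:
            return True
        if motor_on and speed_kts > high:
            return False
        return motor_on

    state = False
    for v in (25, 19, 17, 20, 23):
        state = motor_assist(v, state)
        print(f"{v} kts -> motor {'ON' if state else 'off'}")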

 

These changes will be made by mid-July 2010 and then I am reasonably confident that I can travel most any direction, day or night, for a virtually unlimited distance.

 

My next trip was planned as hugging the coastline from Cape May south to Key West, then around the Gulf and down to the Panama Canal, through to the Pacific and up the coast to San Francisco.  An investor there has challenged me: if I can make that trip, he will buy my boat for $1.5M and will build me a much larger version – a Moorings 4600 using a catamaran GRP hull.  Using a catamaran hull should boost the efficiency of the wave drive to almost perfection.

 

This trip was all set and then BP had to go and screw it up.  I figure I’ll make the trip in 2011.

The Fuel you have never heard of….

 

I have always been fascinated by the stories of people who have invented some fantastic fuel only to have the major oil companies suppress the invention by buying the patent or even killing the inventor.  The fascination comes from the fact that I have heard these stories all my life but have never seen any product that might have been invented by such a person.  That proves that the oil companies have been successful at suppressing the inventors… or it proves that such stories are simply lies.  Using Plato – my research software tool – I thought I would give it a try.  The results were far beyond anything I could have imagined.  I think you will agree.

 

I set Plato to the task of finding what might be changed in the fuel of internal combustion engines to produce higher miles per gallon (MPG).  It really didn’t take long to return the conclusion that if the burned fuel releases more energy in the burning, it gives better MPG for the same quantity of fuel.  It further discovered that if the explosion of the fuel releases its energy in a shorter period of time, it works better – but it warned that the engine timing becomes very critical.

 

OK, so what I need is a fuel or a fuel additive that will make the spark plug ignite a more powerful but faster explosion within the engine.  I let Plato work on that problem for a weekend and it came up with nitroglycerin (Nitro).  It turns out that Nitro works precisely because its explosion is so fast.  It is also a good chemical additive because it is made of carbon, hydrogen, nitrogen and oxygen, so it burns without smoke and releases only those elements or their compounds into the air.

 

Before I had a chance to worry about the sensitive nature of Nitro, Plato provided me with the answer to that as well.  It seems that ethanol or acetone will desensitize Nitro to workable safety levels.  I used Plato to find the formulas and safe production methods for Nitro and decided to give it a try.

 

Making Nitro is not hard but it is scary.  I decided to play it safe and made my mixing lab inside a large walk-in freezer.  I only needed to keep it below 50F and above 40F, so the freezer was actually off most of the time and stayed cool from the ice blocks in the room.  The cold makes the Nitro much less sensitive – but only if you don’t let it freeze.  If you do, it can go off just from thawing out.  My plan was to make a lot of small batches to keep it safe, until I realized that even in very small amounts there was enough to blow me up if it ever went off.  So I just made up much larger batches and ended up with about two gallons.

 

I got three gas engines – a lawn mower, a motorcycle and an old VW Bug.  I got some 87-octane gas with 10% ethanol in it.  I also bought some pure ethanol additive and put that in the mix.  I then added the Nitro.  The obvious first problem was determining how much to add.  I decided to err on the side of caution and began with very dilute mixtures – one part Nitro to 300 parts gas.  I made up just 100 ml of the mixture and tried it on the lawn mower.  It promptly blew up.  Not actually exploded, but the mixture burned so hot and hard that it burned a hole in the top of the cylinder, broke the crankshaft and burned off the valves.  That took less than a minute of running.

 

I then tried a 600:1 ratio in the motorcycle engine and it ran for 9 minutes on the 100 ml.  It didn’t burn up, but I could tell very little else about the effects of the Nitro.  I tried it again with 200 ml and determined that it was running very hot and probably would have blown a ring or head gasket if I had run it any longer.  I had removed the motorcycle engine from an old motorcycle for this experiment, but now I regretted that move: I had no means to check torque or power.  The VW engine was still in the Bug so I could actually drive it.  This opened up all kinds of possibilities.

 

I gassed it up and drove it with normal gas first.  I tried going up and down hills, accelerations, high-speed runs and pulling a chain attached to a tree.  At only 1,400 cc, it was rated at just 40 HP in new condition, and it now made much less than that using normal gas.

 

I had a Holley carb on the engine and tweaked it to a very lean mixture, and lowered the Nitro ratio to 1,200:1.  I had gauges for oil temperature and pressure, and vacuum and fuel-flow sensors to help monitor real-time MPG.  It ran great and outperformed all of the gas-only driving tests.  At this point I knew I was onto something, but my equipment was just too crude to do any serious testing.  I used my network of contacts in the R&D community and managed to find some guys at the Army vehicle test center at the Aberdeen Test Center (ATC).  A friend of a friend put me in contact with the Land Vehicle Test Facility (LVTF) within the Automotive Directorate, where they had access to all kinds of fancy test equipment and tons of reference data.  I presented my ideas and results so far and they decided to help me using “Special Projects” funds.  I left them with my data and they said come back in a week.

 

A week later, I showed up at the LVTF.  They said welcome to your new test vehicle – a 1998 Toyota Corona, one of the few direct-injection engines with a very versatile air-fuel control system.  They had already rebuilt the engine using ceramic-alloy cylinder-head tops that gave it much greater temperature tolerance and raised the compression ratio to 20:1.  That is really high, but they said my data supported it.  Their ceramic-alloy cylinder tops actually form the combustion chamber and create a powerful vortex swirl for the injected ultra-lean mixture.

 

We started out with the 1,200:1 Nitro ratio I had used, and they ran the Corona engine on a dynamometer to measure torque (ft-lbs) and power (HP).  The test pushed the performance almost off the charts.  We repeated the tests with dozens of mixtures, ratios, air-fuel mixes and additives.  The end results were amazing.

 

After a week of testing, we found that I could maintain higher than normal performance using a 127:1 air-fuel ratio and a 2,500:1 Nitro-to-gas ratio if the ethanol blend was boosted to 20%.  The mixture was impossible to detonate without the compression and spark of the engine, so the Nitro formula was completely safe.  The exhaust gases were almost totally gone – even the NOx emissions were so low that a catalytic converter was not needed.  Hydrocarbon exhaust was down in the range of a hybrid.  The usual problem of slow burn in ultra-lean mixtures was gone, so the engine produced improved power well up into high RPMs, and the whole engine ran at lower temperatures for the same RPM across all speeds.  The real thrill came when we repeatedly measured MPG values in the 120 to 140 range.

 

The rapid release and fast burn of the Nitro allowed the engine to run an ultra-lean mixture that gave it great mileage while avoiding the usual limitations of lean mixtures.  At richer mixtures, the power and performance were well in excess of what you’d expect of this engine.  It would take a major redesign to make an engine strong enough to withstand the torque and speeds possible with this fuel in a normal 14:1 air-fuel mixture.  Using my mix ratio of 127:1 gave me slightly improved performance but at better than 140 MPG.  It worked.  Now I am waiting for the buyout or the threats from the gas companies.

 

July 2010 Update:

 

The guys at ATC/LVTF contacted my old buddies at DARPA and some other tests were performed.  The guys at DARPA have a test engine that allows them to inject high-energy microwaves into the combustion chamber just before ignition and just barely past TDC.  When the Nitro ratio was lowered to 90:1, the result was a 27-fold increase in released energy.  We were subsequently able to reduce the quantity of fuel to a level that created the equivalent of 394 miles per gallon in a 2,600 cc 4-cylinder engine.  The test engine ran for 4 days at a speed and torque load equal to 50 miles per hour – and did that on 10 gallons of gas – a test equivalent of just less than 4,000 miles!  A new H2 Hummer was rigged with one of these engines and the crew took it for a spin – from California to Maine – on just over 14 gallons of gas.  They are on their way back now by way of northern Canada and are trying to get 6,000 miles on less than 16 gallons.

 

The government R&D folks have pretty much taken over my project and testing but I have been assured that I will be both compensated and protected.  I hope Obama is listening.

The Government knows Everything You have Ever Done!

Sometimes our paranoid government wants to do things that technology does not allow or that it does not know about yet. As soon as they find out, or the technology is developed, they want it and use it. Case in point is the paranoia that followed 11 Sept 2001 (9/11), in which Cheney and Bush wanted to be able to track and monitor every person in the US. There were immediate efforts to do this with the so-called Patriot Act, which bypassed a lot of constitutional and existing laws and rights – like FISA. They also instructed NSA to monitor all domestic radio and phone traffic, which was also illegal and against the charter of NSA. Lesser known monitoring was the hacking into computer databases and the monitoring of emails, voice mails and text messages by NSA computers. They have computers that can download and read every email or text message on every circuit from every Internet or phone user, as well as every form of voice communication.

Such claims of being able to track everyone, everywhere have been made before, and it seems that lots of people simply don’t believe that level of monitoring is possible. Well, I’m here to tell you that it not only is possible, it is all automated, and you can read all about the tool that started it all online. Look up “starlight” in combination with “PNNL” on Google and you will find references to a software program that was the first generation of the kind of tool I am talking about.

This massive amount of communications data is screened by a program called STARLIGHT, which was created by the CIA and the Army with a team of contractors led by Battelle’s Pacific Northwest National Lab (PNNL) at a cost of over $10 million. It does two things that very few other programs can do. It can process free-form text and images of text (scanned documents), and it can display complex queries in visual 3-D graphic outputs.

The free-form text processing means that it can read text in its natural form, as it is spoken, written in letters and emails, and printed or published in documents. For a database program to do this as easily and as fast as it handles the defined records and fields of a relational database is a remarkable design achievement. Understand, this is not just a word search – although that is part of it. It is not just a text-scanning tool; it can treat the text of a book as if it were an interlinked, indexed and cataloged database in which it can recall every aspect of the book (the data). It can associate, cross-link and find any word or phrase in relation to any parameter you can think of related to the book – page numbers, nearby words or phrases, word use per page, chapter or book, etc. By using the most sophisticated voice-to-text processing, it can perform this kind of expansive searching on everything written or spoken, emailed, texted or said on cell phones or landline phones in the US!

The visual presentation of that data is the key to being able to use it without information overload and to having the software prioritize the data for you. It does this by translating the database query parameters into colors and dimensional elements of a 3-D display. To view this data, you put on a special set of glasses similar to the ones that put a tiny TV screen in front of each eye. Such eye-mounted viewing is already available for watching video and TV – giving the impression you are looking at a 60-inch TV screen from 5 feet away. In the case of STARLIGHT, it gives a complete 3-D effect and more. It can sense which way you are looking, so it shows you a full 3-D environment that can be expanded to any size the viewer wants. And then it adds interactive elements. You can put on a special glove that can be seen in the projected image in front of your eyes. As you move this glove in the 3-D space you are in, the glove moves in the 3-D computer images that you see in your binocular eye-mounted screens. Plus, this glove can interact with the projected data elements. Let’s see how this might work with a simple example:

The first civilian (unclassified) application of STARLIGHT was for the FAA, to analyze private aircraft crashes over a 10-year period. Every scrap of information was scanned in from accident reports, FAA investigations and police records – almost all of it free-form text. This included full specs on the aircraft, passengers, pilots, type of flight plan (IFR, VFR), etc. It also included geospatial data – departure and destination airports, peak flight-plan altitude, elevation of impact, distance and heading data – and temporal data for the times of day, week and year that each event happened. This was hundreds of thousands of documents that would have taken years to key into a computer if a conventional database were used. Instead, high-speed scanners were used that read in reports at a rate of 200 double-sided pages per minute. A half dozen of these scanners completed the data entry in less than two months.

The operator then assigns colors to various ranges of data. For instance, he first assigned red and blue to male and female pilots and then looked at the data projected on a map. What popped up were hundreds of mostly red (male) dots spread out over the entire US map. Not real helpful. Next he assigned a spread of colors to all the makes of aircraft – Cessna, Beechcraft, etc. Now all the dots changed to a rainbow of colors with no particular concentration of any given color in any given geographic area. Next he assigned colors to hours of the day, doing 12 hours at a time – midnight to noon and then noon to midnight. Now something interesting came up. The colors assigned to 6AM and 6PM (green) and the shades of green just before and after 6AM or 6PM were dominant on the map. This meant that the majority of the accidents happened around dusk or dawn. Next the operator assigned colors to distances from the departure airport – red being within 5 miles, orange 5 to 10 miles, and so on, with blue being the longest (over 100 miles). Again a surprise in the image: the map showed mostly red or blue with very few colors in between. When he refined the query so that red was within 5 miles of either the departure or destination airport, almost the whole map was red.
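
The color-binning idea itself is simple enough to sketch. This toy version (my own illustration, with made-up records) maps one field of each accident record to a color the way the operator did with distance:

    # Sketch of the color-binning idea (illustrative only): map one field
    # of each accident record to a color, then plot and eyeball the mix.
    records = [
        {"dist_mi": 3, "hour": 6},     # toy accident records
        {"dist_mi": 7, "hour": 18},
        {"dist_mi": 140, "hour": 14},
    ]

    def color_for_distance(d):
        if d <= 5:    return "red"
        if d <= 10:   return "orange"
        if d <= 100:  return "yellow"
        return "blue"

    for r in records:
        print(r, "->", color_for_distance(r["dist_mi"]))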

Using these simple techniques, an operator was able to determine in a matter of a few hours that 87% of all private aircraft accidents happen within 5 miles of the takeoff or landing runway, 73% happen in the twilight hours of dawn or dusk, 77% happen with the landing gear lowered or the landing lights on, and 61% of the pilots reported being confused by ground lights. This gave the FAA the information it needed to improve approach lighting and navigation aids in the terminal control areas (TCAs) of private aircraft airports.

This highly complex analysis was accomplished by a programmer – not a pilot or an FAA investigator – and it collated hundreds of thousands of reports into useful data in a matter of hours.  That had never been done before.

As new and innovative as this was, it was a very simple application that used a limited number of visual parameters at a time. But STARLIGHT is capable of so much more. It can assign things like the direction and length of a vector, the color of a line or its tip, curvature, width and taper to various elements of a search. It can give one shape to one result and a different shape to another. This gives significance to “seeing” a cube versus a sphere, or to seeing rounded corners on a flat surface instead of square corners on an egg-shaped surface.
Everything visual can have meaning, but what is important is to spot anomalies – things that are different – and nothing does that faster than a visual image.

Having 80+ variables at a time that can be interlaced with geospatial and temporal (historical) parameters allows the program to search an incredible amount of data. Since the operator is looking for trends, anomalies and outliers, the visual representation is ideal for spotting them without the operator actually scanning the data itself. Because the operator sees an image that is devoid of the details of numbers or words, he can easily spot any aspect of the image that warrants a closer look.

In each of these trial queries, the operator can, using his gloved hand to point at any given dot, line or object, call up the original source of the information in the form of a scanned image of the accident report or reference source data. He can also touch virtual screen elements to bring up other data or query elements. For instance, he can merge two queries to see how many accidents near airports (red dots) had more than two passengers or were single-engine aircraft, etc. Someone looking on would see a guy with weird glasses waving his hand in the air, but in the eyes of the operator, he is pressing buttons, rotating knobs and selecting colors and shapes to alter his room-filling graphic 3-D view of the data.

In its use at NSA, they add one other interesting capability: pattern recognition. It can automatically find patterns in the data that would be impossible for any real person to find by looking at the tons of data. For instance, they put in a long list of words that are linked to risk assessments – such as plutonium, bomb, kill, jihad, etc. Then they let it search for patterns.  Suppose there are dozens of phone calls being made to coordinate an attack, but the callers are spread all over the US and every caller is calling someone different, so no one number or caller can be linked to a lot of risk words. STARLIGHT can collate these calls and find the common linkage between them, and then it can track the calls, callers and discussions in all other media forms.  If the callers are using code words, it can find those words and track them.  It can even find words that are not used in their normal context, such as referring to an “orange blossom” in an unusual manner – a phrase that was once used to describe a nuclear bomb.
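
A toy version of that linkage idea – entirely my own sketch, not STARLIGHT code – connects callers who share any risk word and then reads off the connected components:

    from collections import defaultdict

    calls = [                       # (caller, risk words heard on the call)
        ("A", {"orange blossom"}),
        ("B", {"orange blossom", "truck"}),
        ("C", {"truck"}),
        ("D", {"birthday"}),        # unrelated chatter
    ]

    word_to_callers = defaultdict(set)
    for caller, words in calls:
        for w in words:
            word_to_callers[w].add(caller)

    # Union the callers that share any word (tiny union-find).
    parent = {c: c for c, _ in calls}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for group in word_to_callers.values():
        group = sorted(group)
        for other in group[1:]:
            parent[find(other)] = find(group[0])

    clusters = defaultdict(list)
    for c, _ in calls:
        clusters[find(c)].append(c)
    print([v for v in clusters.values() if len(v) > 1])   # -> [['A', 'B', 'C']]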

Now imagine the list of risk words and phrases to be hundreds of thousands of words long. It includes phrases and code words and words used in other languages. It can include consideration for the source or destination of the call – from public phones or unregistered cell phones. It can link the call to a geographic location within a few feet and then track the caller in all subsequent calls. It can use voice print technology to match calls made on different devices (radio, CB, cell phone, landline, VOIP, etc.) by the same people. This is still just a sample of the possibilities.

STARLIGHT was the first generation and was only as good as the data that was fed into it through scanned documents and other databases of information. A later version, code-named Quasar, was created that used advanced data mining and an ERP (enterprise resource planning) system architecture to integrate direct feeds from legacy information-gathering systems as well as newer technologies.

(ERP is a special mix of hardware and software that allows a free flow of data between different kinds of machines and different kinds of software and data formats.  For instance, the massive COBOL databases at the IRS, loaded on older-model IBM mainframe computers, can now exchange data easily with NSA Cray computers using the latest and most advanced languages and database designs.  ERP also resolved the problem that each agency has a different encryption and data security format and process.  ERP does not change any of the existing systems, but it makes them all work smoothly and efficiently together.)

For instance, the old STARLIGHT system had to feed recordings of phone calls into a speech-to-text processor, and the resulting text was then fed into STARLIGHT. In the Quasar system, the voice monitoring equipment (radios, cell phones, landlines) feeds directly into Quasar, as does the direct feed of emails, telegrams, text messages, Internet traffic, etc.  Quasar was also linked using ERP to existing legacy systems in multiple agencies – FBI, CIA, DIA, IRS, and dozens of other federal and state agencies.

So does the government have the ability to track you? Absolutely! Are they doing so? Absolutely! But wait, there’s more!

Above, I said that Quasar was a “later version”. It’s not the latest version. Thanks to the Patriot Act and Presidential Orders on warrantless searches and the ability to hack into any database, NSA can now do so much more. This newer system is miles ahead of the relatively well known Echelon program of information gathering (which was dead even before it became widely known). It is also beyond another older program called Total Information Awareness (TIA). TIA was compromised by numerous leaks and died because the technology was advancing so fast.

The newest capability is made possible by the new bank of NSA Cray computers and memory storage that are said to make Google’s entire system look like an abacus.  NSA combined that with the latest integration (ERP) software and the latest pattern recognition and visual data representation systems.  Added to all of the Internet and phone monitoring and screening are two more capabilities, in a new program called “Kontur”. Kontur is the Danish word for profile. You will see why in a moment.

Kontur adds geospatial monitoring of every person’s location to the database. Since 2005, every cell phone broadcasts its GPS location at the beginning of every transmission, as well as at regular intervals even when you are not using it to make a call. This was mandated by the Feds supposedly to assist in 911 emergency calls, but the real motive was to be able to track people’s locations at all times. For the few people still using older-model cell phones, they employ “tower tracking”, which uses the relative signal strength and timing of the cell phone signal reaching each of several cell towers to pinpoint a person within a few feet.  Of course, landlines are easy to locate, as are all Internet connections.

A holdover from the Quasar program was the tracking of commercial data, which includes every purchase made by credit card and any purchase where a customer discount card is used – like at grocery stores. This not only gives the Feds an idea of a person’s lifestyle and income, but by recording what they buy, they can infer other behaviors. When you combine cell phone and purchase tracking with the ability to track other forms of transactions – banking, doctors, insurance, police and public records – there are relatively few gaps in what they know about you.

Kontur also mixed in something called geofencing, which allows the government to create digital virtual fences around anything they want. Then, when anyone crosses this virtual fence, they can be tracked. For instance, there is a virtual fence around every government building in Washington DC. Using predictive automated behavior monitoring and cohesion-assessment software combined with location monitoring, geofencing and sophisticated social behavior modeling, pattern mining and inference, they are able to recognize patterns of people’s movements and actions as threatening. Several would-be shooters and bombers have been stopped using this equipment.  You don’t hear about them because the government does not want to explain what alerted them to the bad guys’ presence.
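
A geofence at its simplest is just a point-in-circle test against a reported position. A minimal sketch (the coordinates and radius are made up):

    import math

    def inside_geofence(lat, lon, fence_lat, fence_lon, radius_m):
        # Circular virtual fence: flag anyone whose reported position
        # falls within radius_m of the fenced point (flat-earth approx,
        # fine for fences a few hundred meters across).
        m_per_deg = 111_320.0
        dx = (lon - fence_lon) * m_per_deg * math.cos(math.radians(fence_lat))
        dy = (lat - fence_lat) * m_per_deg
        return math.hypot(dx, dy) <= radius_m

    # Hypothetical fence around a building in Washington DC:
    print(inside_geofence(38.8977, -77.0366, 38.8977, -77.0365, 200))  # True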

To talk about the “profile” aspect of Kontur, we must first talk about why and how it is possible. It became possible only when the Feds were able to create very, very large databases of information and still make effective use of that data. It took NSA 35 years of computer use to get to the point of using a terabyte of data. That was back in 1990, using ferrite core memory. It took 10 more years to get to a petabyte of storage – that was in early 2001, using 14-inch videodisks and RAID banks of hard drives. It took four more years to create and make use of an exabyte of storage. With the advent of quantum memory using gradient echo and EIT (electromagnetically induced transparency), the NSA computers now have the capacity to store and rapidly search a yottabyte of data, and they expect to raise that to 1,000 yottabytes within two years.  A yottabyte is 1,000,000,000,000,000 gigabytes – 10^24 bytes, or roughly 2^80.
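
The arithmetic checks out:

    # Quick check of the storage arithmetic above:
    yottabyte_bytes = 10**24
    gigabyte = 10**9
    print(yottabyte_bytes // gigabyte)   # 1,000,000,000,000,000 gigabytes
    print(2**80)                         # ~1.21e24, the binary cousin (yobibyte)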

This is enough storage to store every book that has ever been written in all of history…..a thousand times over.  It is enough storage to record every word of every conversation by every person on earth for a period of 10 years.  It can record, discover, compute and analyze a person’s life from birth to death in less than 12 seconds and repeat that for 200,000 people at the same time.

To search this much data, they use a bank of 16 Cray XT Jaguar computers that do nothing but read and write to and from the QMEM – quantum memory. The look-ahead and read-ahead capabilities are possible because of the massively parallel processing of a bank of 24 other Crays, which gives an effective speed of about 270 petaflops. Speeds are increasing at NSA at a rate of about 1 petaflop every two to four weeks. This kind of speed is necessary for things like pattern recognition and making use of the massive profile database of Kontur.

In late 2006, it was decided that NSA and the rest of the intelligence and right-wing government agencies would go beyond real-time monitoring and begin developing a historical record of what everyone does. Being able to search historical data was seen as essential for back-tracking a person’s movements to find out what he has been doing and whom he has been seeing or talking with. This was so that no one would ever again accuse the government or the intelligence community of not “connecting the dots”.

But that means what EVERYONE does! As you have seen from the description above, they can already track your movements and all your commercial activities, as well as what you say on phones or in emails, what you buy and what you watch on TV or listen to on the radio. The difference now is that they save all of that data, and more, in a profile about you.

Using geofencing, they have marked out millions of locations around the world, including obvious things like stores that sell pornography, guns, chemicals or lab equipment. Geofenced locations include churches and organizations like Greenpeace and Amnesty International. They have moving geofences around people they are tracking, like terrorists, but also political opponents, left-wing radio and TV personalities, and leaders of social movements and churches. If you enter their personal space – close enough to talk – then you are flagged, and then you are geofenced and tracked.

If your income level is low and you travel to the rich side of town, you are flagged. If you are rich and travel to the poor side of town, you are flagged. If you buy a gun or ammo and cross the wrong geofence, you will be followed. The pattern recognition of Kontur might match something you said in an email with something you bought and somewhere you drove to determine that you are a threat.

Kontur is watching and recording your entire life. There is only one limitation on the system right now: the availability of soldiers or “men in black” to follow up on people who have been flagged is limited, so they are prioritizing whom they act upon. You are still flagged and recorded, but they act only on the ones judged to be a serious threat right now.  It is only a matter of time before they find a way to reach out to anyone they want and curb or destroy them. It might come in the form of a government-mandated electronic tag that is inserted under the skin or implanted at birth. They have been testing these devices on animals under the guise of tracking and identifying lost pets. They have tried twice to introduce them for everyone in the military or in prisons. They have also tried to justify putting them into kids for “safety”. They are still pushing them for use in medical monitoring. Perhaps this will take the form of a nanobot – so small that you won’t even know you have been “tagged”.

These tags need not be complex electronic devices.  Every merchant knows that RFID tags are so cheap that they are now installed at the manufacturing plant for less than 1 cent per item.  They consist of a special coil of wire or foil cut to a very specific length and folded into a special shape.  It can be activated and deactivated remotely.  The RFID tag is then scanned by an RF signal; if it is active and you have taken it out of the store, it sounds an alarm.  Slightly more sophisticated RFID tags can be scanned to reveal a variety of environmental, location, time and condition data.  All of this information is gathered by a device that has no power source other than the scanning beam from the tag reader.  A 1-cubic-millimeter tag – 1/10th the size of a TicTac – can collect and relay a huge amount of data, will have a nearly indefinite operating life and can be made to lodge in the body so you would never know it.

If they are successful in getting the population to accept these devices and then determine that you are a risk, they simply deactivate you by remotely popping open a poison capsule with a radio signal.  Such a device might be totally passive in a person who is not a threat, but it might be lethal, or it could be programmed to inhibit the motor-neuron system or otherwise disable a person deemed to be high-risk.

Certainly this sounds like paranoia, and you probably say to yourself that this can never happen in a free society.  If you think that, you have just not been paying attention.  Almost everything in this article can be easily researched online.  The code names Quasar and Kontur are not public knowledge yet, but if you look up the design parameters I have described, you will see that they are in common usage by NSA and others.  There is nothing in this article that cannot be verified by independent sources.

As I said at the beginning of this article, if a technology exists, is being used by the government or corporate America, and is public knowledge, then you can bet your last dollar that there is some other technology that is much more effective that is NOT public knowledge and is being used.

Also, you can bet that the public image of “protecting privacy” and “civil rights” places absolutely no limitations or restrictions on the government if they want to do something. The Bush/Cheney assault on our rights is the most recent example, but it is by no means rare or unusual.  If they want the information, laws against gathering it have no effect.  They will claim national security or classified necessity, or simply do it illegally, and if they get caught, they will deny it.

Here are just a few web links that might convince you that this is worth taking seriously.

http://www.democracynow.org/2006/3/1/how_major_corporations_and_government_plan

http://www.spychips.com/

http://starlight.pnl.gov/
http://en.wikipedia.org/wiki/Starlight_Information_Visualization_System
http://www.google.com/#q=starlight+pnnl&hl=en&prmd=v&source=univ&tbs=vid:1&tbo=u&ei=1zZGTNGkNYSBlAfTsISTBA&sa=X&oi=video_result_group&ct=title&resnum=4&ved=0CC8QqwQwAw&fp=d706bc2a5dba00d4

http://gizmodo.com/5395095/the-nsa-to-store-a-yottabyte-of-your-phone-calls-emails-and-other-big-brothery-stuff

http://www.greaterthings.com/News/Chip_Implants/index.html

http://computer.howstuffworks.com/government-see-website1.htm

http://www.newsweek.com/2010/02/18/the-snitch-in-your-pocket.html

http://venturebeat.com/2010/06/25/government-sites-to-track-behavior-target-content/

http://www.seattlepi.com/local/269969_nsaconsumer12.html

http://www.usatoday.com/news/washington/2006-05-11-nsa-reax_x.htm

Plato: Unlimited Energy – Here Already!

Plato: Unlimited Energy

 

If you are a reader of my blog, you know about Plato. It is a software program that I have been working on since the late 1980’s that does what I call “concept searches”. The complete description of Plato is in another story on this blog, but the short of it is that it will do web searches for complex, interlinked, related or supporting data that form the basis for a conceptual idea. I developed Plato using a variety of techniques including natural language queries, thesaurus lookups, pattern recognition, morphology, logic and artificial intelligence. It is able to accept complex natural language questions, search for real or possible solutions, and present the results in a form that logically justifies and validates the solution. Its real strength is that it can find solutions or possibilities that don’t yet exist or have not yet been discovered. I could go on and on about all the wild and weird stuff I have used Plato for, but this story is about a recent search for an alternative energy source… and Plato found one.

As a research scientist, I have done a considerable amount of R&D in various fields of energy production and alternative energy sources. Since my retirement, I have been busy doing other things and have not kept up with the latest, so I decided to let Plato do a search for me to find the latest state of the art in alternative energy and the status of fusion power. What Plato came back with is a huge list of references in support of a source of energy that is being used by the government but is being withheld from the public. This energy source is technically complex, but it is far more powerful than anything being used today short of the largest nuclear power plants. I have read over most of what Plato found and am convinced that this source of power exists and is being used, but is being actively suppressed by our government. Here is the truth:

On January 25, 1999, a rogue physicist researcher at the University of Texas named Carl Collins claimed to have achieved stimulated decay of nuclear isomers using a second-hand dental x-ray machine. As early as 1988, Collins was saying this was possible, but it took 11 years to get the funding and lab work to do it. It was later confirmed by several labs, including Dr. Belic at the Stuttgart Nuclear Physics Group. Collins’ results were published in the peer-reviewed Physical Review Letters. The science of this is complex, but what it amounts to is a kind of cold fusion. Nuclear isomers are atoms with a metastable nucleus. That means that when they are created in certain radioactive materials, the protons and neutrons (nucleons) in the nucleus are bonded or pooled together in what is called an excited state.

An analogy would be stacking balls into a pyramid. It took energy to get them into that state, but what Collins found is that it takes relatively little energy to destabilize the stack and release lots of energy. Hafnium and tantalum are two naturally occurring metastable elements that can be triggered to release their energy with relatively little external excitation.

Hafnium, for instance, releases a photon with an energy of 75 keV (75,000 electron volts), and one gram produces 1,330 megajoules of energy – the equivalent of about 700 pounds of TNT. A five-pound ball is said to be able to create a two-kiloton blast – the equivalent of 4,000,000 pounds of TNT. A special isomer of hafnium called Hf-178-m2 is capable of producing energy in the exawatt range, that is 10,000,000,000,000,000,000 (10^18) watts! This is far more than all the energy created by all the nuclear plants in the US. As a comparison, the largest energy producer in the world today is the Large Hadron Collider (LHC) near Geneva, which cost more than $10 billion and can create a beam with power estimated at 10 trillion watts (10^12) – but that power lasts for only about 30 nanoseconds (billionths of a second).
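
Those numbers are internally consistent if you assume the full isomer energy of Hf-178-m2 – about 2.45 MeV per nucleus, the commonly cited figure – is released, not just the 75 keV photon. A back-of-envelope check:

    # Back-of-envelope check of the 1,330 MJ/gram figure, assuming the
    # full ~2.45 MeV stored per Hf-178-m2 nucleus is released (the 75 keV
    # photon above is just one step of the cascade).
    AVOGADRO = 6.022e23
    eV = 1.602e-19          # joules per electron volt
    atoms_per_gram = AVOGADRO / 178.0        # Hf-178
    energy_J = atoms_per_gram * 2.45e6 * eV
    print(f"{energy_J/1e6:.0f} MJ per gram")              # ~1,330 MJ
    print(f"{energy_J / 4.184e6:.0f} kg TNT equivalent")  # ~317 kg, about 700 lb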

Imagine being able to create 1 million (10^6) times that energy level, but sustain it indefinitely. We actually don’t have a power grid capable of carrying that, but because we are talking about a generator that might be the size of a small house, this technology could be inexpensively replicated all over the US or the world to deliver as much power as needed.

These are, of course, calculated estimates based on extrapolation of Collins’ initial work and the follow-on experiments, but not one scientist has put forth a single peer-reviewed paper that disputes these estimates or the viability of the original experiment. It is also obvious that the mechanism of excitation would have to be larger than a dental x-ray machine in order to get 10^18 watts out of it. In fact, when Brookhaven National Lab conducted its Triggering Isomer Proof (TRIP) test, it used the National Synchrotron Light Source (NSLS) as the excitation.

Obviously, this was met with critical reviews and open hostility from the world of physics. The “Cold Fusion” fiasco was still fresh in everyone’s minds. It was in 1989 that Pons and Fleischmann claimed to have created fusion in a lab at temperatures well below what was then thought to be necessary. It took just months to prove them wrong, and the whole idea of cold fusion and unlimited energy was placed right next to astrology, perpetual motion and pet rocks.

Now Collins was claiming that he had done it again – a tiny amount of energy in and a lot of energy out. He was not reporting the microscopic “indications of excess energy” that Pons and Fleischmann claimed. Collins was saying he got large amounts of excess energy (more energy out than went in), many orders of magnitude above what Pons and Fleischmann claimed.

Dozens of labs across the world began trying to verify or duplicate his results. The biggest problem was getting hold of the hafnium needed to do the experiments – it is expensive and hard to come by, so it took mostly government-sponsored studies to afford it. Some confirmed his results, some had mixed results, and some discredited him.

In the US, DARPA was very interested because this had the potential to be a serious weapon – one that would give us a nuclear-bomb-scale explosion but would not violate the worldwide ban on nuclear weapons. The US Navy was very interested because it had the potential to be not only a warhead but also a new and better power source for their nuclear-powered fleet ships and subs.

By 2004, the controversy over whether it was viable or not was still raging, so DARPA, which had funded some of the labs that had gotten contradictory results, decided to have a final test. They called it the TRiggering Isomer Proof (TRIP) test, and it was funded to be done at Brookhaven National Lab.

This had created such news interest that everyone was interested in the results. NASA, Navy, Dept. of Energy (DOE), Dept of Defense (DoD), NRL, Defense Threat Reduction Agency, State Department, Defense Intelligence Agency (DIA), Argonne Labs, Arms Control and Disarmament Agency (ACDA), Los Alamos, MIT Radiation Lab, MITRE, JASON, and dozens of others were standing in line to hear the outcome of this test being conducted by DARPA.

So what happened in the test? No one knows. The test was conducted and DARPA put a lockdown on every scrap of news about the results. In fact, since that test, they have shut down all other government-funded contracts in civilian labs on isomer triggering. The only break in that cover has been a statement from the senior-most DOE scientist involved, Dr. Ehsan Khan, who said:

“TRIP had been so successful that an independent evaluation board has recommended further research….with only the most seasoned and outstanding individuals allowed to be engaged”.

There has been no peer review of the TRIP report. It has been seen by a select group of scientists, but no one has leaked anything about it. What is even more astounding is that none of those many other government agencies and organizations have raised the issue. In fact, any serious inquiry into the status of isomer-triggering research is met with closed doors, misdirection or outright hostility. The government has pushed it almost entirely behind the black curtain of black projects. Everything related to this subject is now either classified Top Secret or is openly discredited and denounced as nonsense.

This has not, however, stopped other nations or civilian labs and companies from looking into it. But even here, they cannot openly pursue isomer triggering or cold fusion. Research into such subjects is now called “low-energy nuclear reactions” (LENR) or “chemically assisted nuclear reactions” (CANR). Success in these experiments is measured in the creation of “excess heat”, meaning the reaction created more energy than was put into it. Plato found that the people and labs that have achieved this level of success include:

Lab or company – Researcher:

University of Osaka, Japan – Arata

ENEA Frascati, Rome, Italy – Vittorio Violante

Hokkaido University, Japan – Mizuno

Energetic Technology, LLC, Omer, Israel – Shaoul Lesin

Portland State University, USA – Dash

Jet Thermal Products, Inc., USA – Swartz

SRI, USA – McKubre

Lattice Energy, Inc., USA – E. Storms

In addition, the British and the Russians have both published papers, and intelligence reports indicate they may both be working on a TRIP bomb. The British have a group called the Atomic Weapons Establishment (AWE) that has developed a technique called Nuclear Excitation by Electron Transition and is actively seeking production solutions. The Russians may have created an entire isolated research center just for studying TRIP for both weapons and energy sources.

In addition to the obvious use of such a power source to wean ourselves off fossil fuels, there are lots of other motivations for seeking a high-density, low-cost power source: global warming, desalination, robotics, mass transportation, long-distance air travel, space exploration, etc.

These applications are normal, common-sense uses, but what application might motivate our government to suppress the news coverage of further research and to wage a disinformation and discrediting campaign against anyone who works on this subject? One obvious answer is its potential as a weapon, but since that is also well known and common sense, there must be some other reason the government does not want this pursued. What that is will not be found by searching for it directly. If it is a black project, it will not have Internet news reports on it, but it might have a combined group of indicators and seemingly disconnected facts that form a pattern when viewed in light of some common motive or cause. Doing that kind of searching is precisely what Plato was designed to do.

What my Plato program discovered is that there are a number of unexplained events and sightings that have a common thread. These events and sightings are all at the fringes of science, or are outright science fiction if you consider current common knowledge of science or listen to the government denounce and discredit the observers. Things like UFOs that move fast but make no noise, space vehicles that can approach the speed of light, underwater vessels reported to travel faster than the fastest surface ships, and beam weapons (light, RF, rail) that can destroy objects as far away as the moon. What they have in common is that, if you posit a compact, high-density source of extremely high power, these fantastic sightings suddenly become quite plausible.

A power source that can create 10 TeV (tera-electron-volts) is well within the realm of possibility for an isomer-triggered device and is powerful enough to create and/or control gravitons and the Higgs boson and the Higgs field. See my other blog stories on travel faster than light and on dark energy, and you will see that if you have enough power, you can manipulate the most fundamental particles and forces of nature, including gravity, mass and even time.

If you can control that much power, you can create particle-beam weapons, lasers and rail guns that can penetrate anything – even miles of earth or ocean. If you can create enough energy – about 15 TeV – you can create a negative graviton, essentially negative gravity, which can be used to move an aircraft silently at supersonic speeds. It also lets you break all the rules of normal aerodynamics and create aircraft that are very large, in odd shapes (like triangles and arcs), and still able to travel slowly. Collins estimated that a full-scale isomer-triggered generator could generate power in the 1,000 TeV range when combined with the proper magnetic infrastructure of a collider like the LHC.

Plato found evidence that this is exactly what is happening. The possibility that it is coincidence that all of these sightings have this one single thread in common is beyond logic or probability. The coincidence that these sightings and events have occurred by the hundreds in just the past few years – since the DARPA TRIP test – is way beyond chance. It is clear that DARPA put the wraps on this technology because of its potential as a weapon and as an unlimited high-density power source.

The fact that this has been kept hushed up is mostly due to the impact it would have on the economies of the world if we were suddenly given unlimited power that was not based on fossil fuels, coal or hydroelectric power. Imagine the instant availability of all the electricity you could use at next to nothing in cost. Markets would collapse in the wake of drops in everything related to oil, gas and coal. That is not a desirable outcome when we are already in such a bad financial recession.

Plato comes up with some wild ideas sometimes, and I often check them out to see if they are really true. I was given perhaps 75 references, of which I have listed only a few in this article, but enough that you can see that they are all there and true. I encourage you to search for all the key words, people and labs listed here. Prove this to yourself – it’s all true.

NASA Astrophysics Data System (ADS): Physical Review Letters Vol. 99, Issue 17, id. 172502, titled “Isomer Triggering via Nuclear Excitation by Electron Capture (NEEC)”, reported confirmed low-energy triggering with high-energy yields.

Brookhaven National Lab conducted a Triggering Isomer Proof (TRIP) test using their National Synchrotron Light Source (NSLS), in which they reported “a successful independent confirmation of this valuable scientific achievement,” as presented in a Sandia Report (SAND2007-2690, January 2008). DARPA funded this work but pulled the funding right after the test.

Government Secrets #2 – They Control You!!


After reading Government Secrets #1, you know that I had access to a lot of intelligence over a long career and gained a lot of insight into our government’s actions on the international political stage. What I observed first hand and in my historical research is that, repeatedly over decades, the US government has gone to great effort to create wars. You will never hear a military person admit this, because most of them are not part of the decision process that commits us to war; but because they believe we are always right, and because they will go to prison if they disobey, they will execute the orders to go to war with great gusto. We have a very warped view of our own history. In every war we are the heroes; we fought on the side of right and we did it honorably and with great integrity. Well, that is what the history books would have you believe. Did you ever learn that we issued orders to take no prisoners at the battle of Iwo Jima? Thousands of Japanese were shot with their hands raised in surrender. To be fair, some of them would feign surrender and then pop a grenade, but you won’t see this in our history books.

Did you know that our attack strategy in Europe was to destroy the civilian population? The worst example occurred on the evening of February 13, 1945, when Allied bombers and fighters attacked a defenseless German city, one of the greatest cultural centers of northern Europe. Within less than 14 hours, not only was it reduced to flaming ruins, but an estimated one-third of its inhabitants, more than half a million, had perished in what was the worst single-event massacre of all time. More people died there in the firestorm than died in Hiroshima and Nagasaki combined.

Dresden, known as the Florence of the North, was a hospital city for wounded soldiers. Not one military unit, not one anti-aircraft battery was deployed in the city. Together with the 600,000 refugees from Breslau, Dresden was filled with nearly 1.2 million people. More than 700,000 phosphorus bombs were dropped on those 1.2 million people – more than one bomb for every two people. The temperature in the center of the city reached 1,600 degrees centigrade (nearly 3,000 degrees Fahrenheit). More than 260,000 bodies and residues of bodies were counted, but those who perished in the center of the city can’t be traced because their bodies were vaporized or never recovered from the hundreds of underground shelters. Approximately 500,000 children, women, the elderly and wounded soldiers were slaughtered in one night.

Following the bomber attack, U.S. Mustangs appeared low over the city, strafing anything that moved, including a column of rescue vehicles rushing in to evacuate survivors. One assault was aimed at the banks of the Elbe River, where refugees had huddled during the night. The low-flying Mustangs machine-gunned those all along the river, as well as the thousands escaping in long columns of old men, women and children streaming out of the city.

Did you ever read that in your history books? Did you know that we deliberately avoided all attacks on Hiroshima and Nagasaki beforehand so as to ensure that the civilian population would not flee those cities?

This sparked my interest in “my war” – Viet Nam – and I began to study it in detail. I read about its start and how the famous Tonkin Gulf incident was a complete ruse that let Lyndon Johnson boost troop levels for political gain and out of a personal fear that America might be seen as weak. He had great faith in our might and our ability to win a quick and decisive victory, so he trumped up a fake excuse to get the famous Tonkin Gulf Resolution passed, giving him more power to send troops. The whole war was a political whim by a misguided politician, bolstered by the military-industrial complex that profited from massive arms sales – the same companies that happened to be the largest contributors to the political campaigns. More than 50,000 US lives and countless Vietnamese lives later, we left Viet Nam having had almost no effect on the political outcome of the original civil war effort to reunite the North and the South under communism – except that there were a lot fewer people to do it.

Even our basis for most of the cold war was mostly fake. For instance, I found pretty solid evidence that as early as the 1960’s there was a massive campaign to create a false missile-gap mentality in order to funnel massive money into the military.

Look up Operation Paperclip; it actually gave us a huge advantage in missile technology, so the whole basis for the cold war, from before the Cuban Missile Crisis to the present, is based on a lie. Despite having the largest nuclear warheads, Russia’s missiles were so poorly guided that an ICBM had only a 20% probability of hitting within the effective range of its warhead – meaning it could be expected to land anywhere within a 30-mile radius of its target. Our missiles, by contrast, are rated at less than 1,000 feet. In every crisis involving Russia in which we refused to back down, the Russians gave in because they knew they did not have a chance in a nuclear exchange with the US. There was never any real missile gap nor any real threat to our world from communism. It was a scapegoat for all our mistakes and expenditures.
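If you want to check the accuracy arithmetic yourself, the textbook way to relate a hit probability to guidance accuracy is the circular-normal (Rayleigh) model built around the CEP – the circular error probable, the radius containing half of all impacts. The 20% figure is my reading of the estimates; the model itself is standard:

```python
import math

def hit_probability(radius: float, cep: float) -> float:
    """P(impact within `radius`) under the circular-normal (Rayleigh) model,
    where CEP is the radius containing 50% of impacts."""
    return 1.0 - 2.0 ** (-(radius / cep) ** 2)

def cep_for_probability(radius: float, p: float) -> float:
    """CEP implied by a probability `p` of landing within `radius`."""
    return radius / math.sqrt(math.log2(1.0 / (1.0 - p)))

# A 20% chance of landing inside the warhead's effective radius R
# implies a CEP of about 1.76 R:
print(cep_for_probability(1.0, 0.20))  # ~1.76
print(hit_probability(1.0, 1.76))      # ~0.20 (round-trip check)
```

Under that model, a 20% chance of landing inside the warhead’s effective radius implies a CEP nearly twice that radius – very poor guidance indeed.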

Did you know about the testing of bio-weapons, nuclear weapons and designer drugs on our own US military? Do you know the truth about the start of Viet Nam? How about Angola, Nicaragua, the Congo, Grenada, Guatemala, Panama, El Salvador, Iran, Iraq, Israel, Argentina and dozens of others? Do you know the real story of the USS Liberty? The list of what is not fully known or understood by the US public is huge. I can guarantee that what you think happened – what is in the history books and the press – is NOT what really happened.

Here’s just one example of how the news is not really the news as it happened, but as our government wants us to hear it. Britain and Argentina went to war over the Falkland Islands in 1982. One incident we had a lot of intelligence about was the sinking of several British warships. One of these ships was hit and sunk by an Exocet air-to-surface missile despite the use of extensive electronic countermeasures. Or so it was reported in the news.

Because of my access to intelligence reports, I found out that the British use of electronic countermeasures was nearly flawless at diverting or confusing these missiles. The skipper of the HMS Sheffield, in the middle of a battle, ordered the electronic countermeasures equipment shut off because he could not get a message to and from Britain with it on. As soon as his equipment was off, Argentine Super Etendards launched the Exocet.

OK, this was a tragic screw-up by a British officer, but what our military planners and politicians did with it was the REAL tragedy. The bit about shutting off the electronic countermeasures equipment was deleted from all of the news reports, and only the effectiveness of the Exocet was allowed to be published by the US press. The Navy and the Air Force both used this event to create the illusion of an anti-missile defense gap in the minds of the public and politicians, and to justify the purchase of massive new defensive systems and ships at a cost of billions of dollars. All based on a false report.

In fact, an objective look at how we have been playing an aggressive game of manifest destiny with the world for the past 150 years would make you wonder how we can have any pride in our nation. From the enslavement of millions of blacks to the genocide of the American Indian to the forceful imposition of our form of government on dozens of sovereign nations, we have been playing the role of a worldwide dictator for decades. It has all been a very rude awakening for me.

The military-industrial complex that President Eisenhower warned us about is real, but latter-day analysts now call it the “military-industrial-congressional complex”. It is Congress and some of our Presidents that form the power side of the triangle of power, money and control.

The money buys the power because we have the best government in the world – for sale on a daily basis – and that sale is so institutionalized that it is accepted as a routine way of doing business. The bribing agents are called lobbyists, but there is little doubt that when they visit a congressman to influence his vote, they are clearly and openly bribing him with money or with votes. The congressmen, in return, vote to give tax money to the companies the lobbyists represent. Or perhaps they vote to let those companies retain their status, earnings or advantages even when that comes at the cost of damage to the environment, to other people or to other nations.

The control comes in the form of propaganda to sway and manipulate the masses, the military might to exert control over our enemies and allies, and the control of the workers and voters that empower the congressmen – thus making the interlocking triangle complete.

What is not well known is a basic psychological mechanism that the military-industrial-congressional complex employs that few people understand or even notice. Historical sociologists (people that study how societies think over time) have discovered that every successful society in history has had a scapegoat – a group of people, a country or a culture on which to blame all its problems.

Scapegoating is a hostile social-psychological discrediting routine by which people move blame and responsibility away from themselves and toward a target person or group. It is also a practice by which angry feelings and feelings of hostility may be projected, via inappropriate accusation, toward others. The target feels wrongly persecuted, receives misplaced vilification, blame and criticism, and is likely to suffer rejection from those the perpetrator seeks to influence. Scapegoating has a wide range of focus: from “approved” enemies of very large groups of people down to the scapegoating of individuals by other individuals. Distortion is always a feature.

In scapegoating, feelings of guilt, aggression, blame and suffering are transferred away from a person or group so as to fulfill an unconscious drive to resolve or avoid such bad feelings. This is done by displacing responsibility and blame onto another who serves as a target for blame both for the scapegoater and for his supporters.

Primary examples include 1930s Germany, in which Hitler used a variety of scapegoats to offset the German guilt and shame of World War I. He eventually chose the Jews, and the population of Germany readily accepted them as the evil cause of all their problems. The US did this in the South for more than a century after the civil war by blaming everything on the black population. And it is still true today for most successful countries: the Japanese hate the Koreans, the Arabs hate the Jews, in the southwest of the US the Mexicans are the targets while in the southeast it is still the blacks, the Turks hate the Kurds… and so it goes for nearly every country in the world and for all of history.

In some cases the scapegoat might be one religious belief blaming another, as in the Muslims blaming the Jews or the Catholics blaming the Protestants. These kinds of scapegoats can extend beyond national boundaries but are often confined to regional areas like the Middle East or Central Europe. Finally, there are the political and ideological scapegoats. For many years, the US has pitted conservatives against liberals and Democrats against Republicans. This often has the effect of stopping progress because each side blames the other for the lack of progress and then opposes any positive steps that might favor the other side or give them the credit. Unfortunately, this scapegoat blame-game ends up being the essence of the struggle for power and control.

What is not well understood or appreciated is that our government is very well versed in this scapegoating and blame-game as a means to avoid accountability and to confuse the objectives. By creating an enemy that we can blame all our insecurities on – as we did with communism in the cold war – we can justify almost any expense, any sacrifice demanded of the public. If you question or oppose the decisions, then you are branded a communist sympathizer and ostracized by society. Joseph McCarthy is the worst example of this, but it exists today when we say someone is not patriotic enough if they dare to question a funding allocation for Iraq or for a new weapon system.

We, the public, are being manipulated by a powerful and highly effective psychological mechanism, so well refined and developed that both the Democratic and Republican parties have active but highly secretive staffs composed of experts in social-psychological propaganda techniques that include, among others, scapegoating. In the Democratic Party this office is called the Committee for Public Outreach. In the Republican Party, the staff is called Specialized Public Relations. Even the names they choose make use of misdirection and reframing. Right now, the Democratic Party has the better group of experts, partly because it raided the staff of the Republican office of Specialized Public Relations back in 1996 by offering huge salary increases. By paying them half a million dollars per year plus bonuses that can reach an additional $50 million, they have secured the best propaganda minds in the world.

In both cases, the staffs are relatively unknown and work in obscure private offices located away from the main congressional buildings. Their products are passed along as quietly and with as low a profile as possible, and only to the most senior party members. The reports begin with clearly defined objectives – diverting public attention, countering fact-based reports, or justifying some political action or inaction – but as they work their way through the system of reviewers and writers, the objective stays the same while the method of delivery gets altered so that the intent is not at all obvious. It is here that the experts in psychology and social science tweak the wording or the events to manipulate the public, allies or voters.

The bottom line is that the federal government of the US has a long and verifiable history of lying, and the lies that have been discovered are perhaps 5% of the lies that have emanated from the government. If you care to look, you will find that a great deal of what you think you know about our nation’s history, our political motivations and accomplishments, and our current motives and justifications is not at all what you think it is. But I warn you – don’t begin this exploration unless you are willing to have your view of your country, and even of yourself, seriously shaken up. And if you don’t want to see the truth, then at least be open-minded enough to listen to what will be declared the radical views that oppose the popular political positions of the day.

A few of you doubt me?!!

 

I have gotten a number of comments about the science in my stories. Since I spent most of my life in hard-core R&D, science is my life and the way I talk. To read my stories, you have to be willing to either accept that the science behind them is fact or go look it up yourself. You will quickly find that there is damn little fiction, if any, in my stories. I take exception to people who say the science is wrong, so I’m going to self-analyze the one story I have gotten the most questions about.

 

In the story about the accidental weapon discovery, I described a C-130 with a multi-bladed prop – see US Patent 4171183. Also see http://usmilnet.com/smf/index.php?topic=9941.15 and http://www.edwards.af.mil/news/story.asp?id=123089573. As I said in the story, the long, telescoping blade is still classified, so there are no public pictures of it.

 

The ATL (airborne tactical laser) program is being run out of the ACTD program by the DUSD(AS&C), an office within OSD. The ACTD program is where the original project was started, in cooperation with the Naval Research Lab (NRL). The original objective was to improve the speed and range of long-distance transport by aircraft. It followed research showing that if the variable pitch of the prop were extended further out from the hub, then efficiency would improve.

 

Since a prop is a lifting wing that lifts horizontally, it must maintain a constant angle of attack (AoA) over the entire length of the blade. AoA is the angle between the chord line of the wing and the axis of the airflow over the blade. Since the relative speed of the prop increases with distance from the hub, the blade must twist progressively along its length – the blade angle growing flatter toward the tip – to hold the AoA constant, as the sketch below shows. This was the essential secret that the Wright Brothers discovered in 1902 and is the basic difference between a screw propeller and a wing propeller.
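Here is a minimal sketch of that twist calculation. The forward speed, RPM and angle of attack are illustrative numbers of my own choosing, not figures from the classified blade:

```python
import math

def blade_angle_deg(v_forward: float, rpm: float, r: float, aoa_deg: float) -> float:
    """Blade pitch angle (degrees from the prop disc) needed at radius r to hold
    a constant AoA. The relative wind is the vector sum of the forward speed
    and the blade's own rotational speed (omega * r)."""
    omega = rpm * 2 * math.pi / 60.0            # rad/s
    helix_angle = math.atan2(v_forward, omega * r)
    return math.degrees(helix_angle) + aoa_deg

# Illustrative only: 100 m/s forward speed, 1000 RPM, 4-degree AoA.
for r in (0.5, 1.0, 2.0, 4.0):                  # metres from the hub
    print(f"r = {r:>4} m -> blade angle = {blade_angle_deg(100, 1000, r, 4):5.1f} deg")
```

Run it and you will see the required blade angle fall from about 66 degrees near the hub to about 17 degrees further out – which is exactly why a fixed-angle screw cannot hold a constant AoA along its whole length.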

What was discovered in the development of vertical wind turbines is that blades as long as 50 feet but as thin as 5 inches could be made more efficient and with higher torque than conventional blades. In wind power, the added torque lets you turn a larger generator, and there it is the wind passing over the blade that makes it spin. In an aircraft, the engines would be spinning the blade to make it take a bigger (more efficient) bite out of the air, which would mean more thrust, or the ability to operate at higher altitude (in thinner air). Do a Google search for “Vertical Wind Turbine”. You’ll see designs like the WindSpire – 30 feet tall with blades less than 8 inches wide – efficient enough to produce about 2,000 kilowatt-hours per year, able to operate in 8 MPH winds and to handle 100 MPH gusts.
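You can put rough numbers on claims like that with the standard wind-power formula, P = ½ρAv³Cp. The rotor dimensions and capture efficiency below are my assumptions for a tall, narrow vertical-axis machine, not published WindSpire specs:

```python
def wind_power_watts(area_m2: float, wind_mps: float,
                     cp: float = 0.35, rho: float = 1.225) -> float:
    """Power captured by a turbine: P = 0.5 * rho * A * v^3 * Cp.
    Cp is the capture efficiency (the Betz limit is ~0.59; ~0.35 is typical)."""
    return 0.5 * rho * area_m2 * wind_mps ** 3 * cp

area = 6.1 * 1.2                  # assumed ~20 ft tall x ~4 ft wide rotor, in metres
for mph in (8, 12, 25):
    v = mph * 0.44704             # mph -> m/s
    print(f"{mph:>3} mph wind -> {wind_power_watts(area, v):7.1f} W")
```

A machine averaging a couple of hundred watts around the clock produces on the order of 2,000 kilowatt-hours in a year, which is why I read the WindSpire figure as an annual energy number rather than an instantaneous power rating.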

 

The guys at NRL took that and reversed it into an efficient propeller design for the C-130, in the hope that it would give similarly improved performance. The carbon-fiber telescoping blade was just a natural extension of that thinking.

 

As to the laser beam creating a wide range of frequencies, that is also easy to explain. The Doppler effect says that an increase in wavelength is received when a source of electromagnetic radiation is moving away from the observer, and a decrease in wavelength is received when the source is moving toward the observer. This is the basis for the redshift used by astronomers to examine the movement of stars. It is also the reason a train’s whistle has a rising pitch as it comes toward you and a falling pitch as it passes and moves away. This is basic high school physics.
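The train example is easy to put numbers on. This sketch uses the classical Doppler formula for a moving source and a stationary listener; 343 m/s is the speed of sound in room-temperature air:

```python
def observed_frequency(f_source: float, v_source: float, c: float = 343.0) -> float:
    """Classical Doppler shift for a moving source and a stationary listener.
    v_source > 0 means the source is approaching; c is the speed of sound (m/s)."""
    return f_source * c / (c - v_source)

# A 500 Hz train whistle at 30 m/s (~67 mph):
print(observed_frequency(500, +30))   # approaching: ~548 Hz (higher pitch)
print(observed_frequency(500, -30))   # receding:    ~460 Hz (lower pitch)
```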

 

As the laser beam was rotated, any observer in a lateral position to the aircraft would see one part of the rotating beam moving toward them (for example, the part above the prop hub) and another part moving away from them (in this example, the part below the prop hub). The bottom part would have a redshift to its visible light because it is moving away from the observer. The part of the beam nearest the hub, moving the slowest, would have the least redshift, but further out along the beam the speed increases and the redshift grows until the light shifts below the visible spectrum. This moves the light energy into the infrared, and as the sweep speed grows further, it shifts lower and lower. Since the laser beam extended for miles, and points along it were sweeping at speeds from a few hundred MPH to thousands of miles per second, the redshift along the beam path moved continuously down the electromagnetic spectrum past radar, TV and short-wave radio, down into the ELF range.
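Here is a minimal sketch of that sweep, treating each point along the beam as a light source receding at v = ωr. The prop RPM and laser frequency are illustrative assumptions of mine, not figures from the program:

```python
import math

C = 299_792_458.0                  # speed of light, m/s

def doppler_factor(beta: float) -> float:
    """Relativistic longitudinal Doppler factor f_observed / f_source for a
    source receding at beta = v/c (a negative beta means approaching)."""
    return math.sqrt((1 - beta) / (1 + beta))

omega = 1000 * 2 * math.pi / 60.0  # assumed 1000 RPM prop, in rad/s
f_laser = 5.6e14                   # ~green light, Hz
for r_km in (1, 100, 1000):
    v = omega * r_km * 1e3         # sweep speed at radius r
    beta = min(v / C, 0.999999)    # the geometric sweep speed saturates near c
    print(f"r = {r_km:>4} km: v/c = {beta:.4f}, "
          f"observed f = {f_laser * doppler_factor(beta):.3e} Hz")
```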

 

That portion of the beam above the hub was doing the same thing, but it was moving toward the observer in the lateral position and so was blue-shifted toward higher frequencies. As the light compressed into the blue and ultraviolet range, it became invisible to the naked eye, but it was still emitting energy at higher and higher frequencies – moving into X-rays and gamma rays toward the end of the beam.

 

The end result of this red and blue shifting of the laser light is a cone of electromagnetic radiation emanating from the hub of each of the two engines (on the C-130), or from the one engine on the retrofitted 707. The cone radiates out from the hub with a continuously changing frequency as it widens out behind the aircraft. The intensity of the emissions is directly proportional to the power of the laser and the speed of the props, so the highest and lowest frequencies were the most intense. These also happened to be the most destructive.

 

This is just one story that is firmly based in real and actual science. You have to be the judge of whether it is true or not, but I defy you to find any real flaw in the logic or science. As with all of my stories, I don’t talk about space-cadet, tin-foil-hat stuff. I have 40 years of hard-core R&D experience along with four degrees in math, computer modeling, physics and engineering, so I’m not your usual science writer – but whether it is science fiction or not is up to you to decide. Just don’t make that decision because you don’t believe or understand the science – that is the part that should not be questioned. If you doubt any of it, I encourage you to look it up. It will educate you and allow me to get these very important ideas across.

Government Secrets #1 – Be Afraid…Be Very Afraid

I was involved in a long career of classified work for the military and then did classified work for the government after I got out of the military. Doing classified work is often misunderstood by the public. If a person has a Top Secret clearance, that does not mean they have access to all classified information. In fact, it is not uncommon for two people to both have Top Secret (TS) clearances and still not be allowed to talk to each other. It has to do with what the government calls “compartments”. You are allowed your clearance only within certain compartments, or subject areas. For instance, a guy that has a TS for Navy weapons systems may not know, or be allowed to know, anything about Army weapon systems. If a compartment is very closely held – meaning it is separately controlled even among TS-cleared people – then it is given a special but often obscure name and additional controls. For instance, for years (back in the days of Corona, but not anymore) the compartments for satellite reconnaissance were called “talent-keyhole” and “byeman” and were usually restricted to people within the NRO – the National Reconnaissance Office.

These code words were abbreviated with two letters, so talent-keyhole became TK and byeman became BY. As a further safeguard, it is forbidden to tell anyone the code word for your compartment – you are only allowed to tell him or her the two-letter abbreviation. And you cannot ask someone if they are cleared for any particular compartment; you have to check with a third-party security force. So if you work in a place like CIA or NRL or NSA and you want to have a meeting with someone from another department, you meet them at the security station outside your department’s office area (every department has one). When they arrive, you ask the guard if they are cleared for “TK” and “BY”. The guard looks at the visitors’ badges and checks them against a picture logbook he keeps. The pictures and codes on the badges and in the logbook have to match; if they do, he gets out another book that lists just the visitor’s numeric badge number and looks up his clearances. If he has TK and BY after his badge number, then you are told that he can be admitted to your area for discussions on just the TK and BY programs and subjects. In some departments, visitors are given brightly colored badges identifying them as being cleared only for specific subject areas. This warns others in the department to cover their work or stop talking about other clearance areas when these visitors are nearby.
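If it helps to see the logic of that check, here is a toy model of the two-step verification. Every badge number, photo hash and clearance below is invented for illustration – the real process runs on guards and paper logbooks, not code:

```python
# Toy model of the badge check described above (all data is made up).
PHOTO_LOG = {"4417": "photo_hash_a1"}          # badge number -> photo on file
CLEARANCES = {"4417": {"TK", "BY"}}            # badge number -> compartments

def admit_visitor(badge_no: str, badge_photo_hash: str, requested: set) -> set:
    """Return the subset of requested compartments the visitor may discuss.
    Step 1: the badge must match the picture logbook.
    Step 2: every granted compartment must be listed against the badge number."""
    if PHOTO_LOG.get(badge_no) != badge_photo_hash:
        return set()                           # badge/photo mismatch: no entry
    return set(requested) & CLEARANCES.get(badge_no, set())

print(admit_visitor("4417", "photo_hash_a1", {"TK", "BY"}))  # {'TK', 'BY'}
print(admit_visitor("4417", "photo_hash_a1", {"TK", "XX"}))  # {'TK'} only
```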

There are hundreds of these coded compartments covering all aspects of military and civilian classified topics and programs. If you are high enough or if your work involves a lot of cross-discipline work, you might have a long string of these code words after your name….as I did.

If a program has any involvement with intelligence gathering (HUMINT – human intelligence, SIGINT – signals intelligence, or IMINT – imagery intelligence), then it may get additional controls that go well beyond the usual TS background checks. For instance, you might be subjected to frequent polygraph tests or be placed in the PRP – the Personnel Reliability Program. The PRP constantly monitors people’s lives to see if they ever get even close to being vulnerable or easy targets for spies. In the PRP, your phone might be tapped, your checking accounts are monitored, your debt and income are watched, and your computer is hacked. This is all with the intent of making sure you never get into debt or become psychologically unstable. The PRP administers a series of psychological tests every year that can take up to 3 days to complete. These tests can peer into your mind well enough that the examiners can feel reasonably confident you are mentally stable if the tests say so.

Because of my work, I had a TS clearance for more than 40 years and a string of two-letter codes after my name that went on for three or four lines on a typewritten page. I was in the “poly” program and in the PRP and some others that I still can’t talk about. The reason I had so many was that I was doing decision support using computer modeling – operations research, math modeling and simulations. This meant I had to have access to a huge range of information from a wide variety of intelligence sources as well as other kinds of R&D work. I then had to analyze this information, model it, and present it to senior decision-makers in an easy-to-understand form. This meant I was often briefing congressmen, senators, people from the CIA and FBI, and high-ranking officers from all of the services, the JCS and OSD, as well as the working-level analysts that were giving me their classified data for analysis.

Now I can begin telling you some of what I learned from being exposed to all of that intelligence over all those years, but I still have to be careful: although most of my restrictions have expired, some are still in effect, and I can’t violate them or I will join the ranks of the “disappeared”.

First, let me make it clear that the entire military is run by the topmost 1% of the people in the services combined with the top 5% within the federal government. Imagine a pyramid in which only the guys at the very top point decide where all the rest will go. I’ll call them the “Power Elite”.

There are just a handful of officers in the Pentagon and the JCS that make all of the decisions about what the services will do and what direction they will take – perhaps 50 officers total. These guys are so high in rank and so close to retirement that they have, for all intents and purposes, ceased being military people and are simply politicians that wear a uniform. They are essentially the liaison officers for the highest-ranking congressmen and the office of the President. They cater to these politicians in order to gain additional power through the control of more money, or to feather their nests for future involvement in the political arena.

There are, of course, a few – a very few – notable exceptions. Colin Powell and Dwight Eisenhower are two that come to mind. Officers like General Norman Schwarzkopf are not in this group because they chose not to seek political office or extend their power or control beyond doing their military jobs.

It is easy to see why all of the military is controlled by 1% of the officers. This is an organization built on the “chain of command”, and everyone is taught to follow orders. In fact, once you are in the military, you can go to jail if you do not follow orders, and in time of war you can be executed for not following them. The bulk of the military is so biased by the indoctrination and propaganda created and put out by the government that they willingly follow orders without questioning them.

What this 1% of high-ranking military and 5% of the federal government have in common is that they measure their success in money and power. The source of that money and power is commercial, industrial and business interests. By making decisions that favor these businesses, they ensure those businesses, in turn, empower and enrich those involved. What is truly tragic is that this is not a recent occurrence; rather, there has been a Power Elite in our government for many decades – going back to the mid-1800’s.

The 5% of the federal government refers to the most powerful members of the Executive branch – the President, VP, Secretary of Defense, Secretary of State, etc. – and the most powerful congressmen and senators. The newer, younger and less powerful legislators do not fall into this group because of the way the political parties are set up behind the scenes. The most senior congressmen and senators are put into positions of power and influence over the committees and programs that have the most influence on contracts, budget money and funding controls. When one congressman can control or seriously impact the budget for the entire military or any major commerce area, then he has control over all of the people in those areas. To see who these people are, list all of the congressmen and senators by length of service and take the top 5%, and you will have 99% of the list. Not surprisingly, this top 5% also includes some of the most corrupt members of congress – Murtha, Stevens, Rangel, Renzi, Mollohan, Don Young and others.

At the highest levels of security clearance, many people gain insights into how this Power Elite manipulates and twists the system to its gain. When Dick Cheney orchestrated the fake intelligence to support his war on Iraq, don’t think for a minute that the CIA, NSA and Pentagon did not know exactly what he was doing; but being the good little soldiers that they are – by law, not allowed to have a political opinion – they kept quiet. If they had not kept quiet, their personal careers would have been destroyed and their departments or agencies would have been punished with under-funded budgets for years to come.

The money and power come from lobbyists, donations of funds and promises of votes, so that the Power Elite can remain in power and extend their control and riches. A study by Transparency International found that of all the professions and jobs in the world, the one most likely to make you a millionaire the soonest is being a congressman or senator in the US. In a job that pays less than $200K per year, the net wealth of most congressmen and senators rises by 30-40% per year while they are active members of the legislature. That’s a fact!

So where’s the SciFi in all this? It’s just this: these members of the Power Elite have so much control that they can operate a virtual parallel government, functioning out of sight of the public and often in complete opposition to their publicly expressed policies. Of course, statements like this cannot be made without positive and verifiable evidence, and I can provide facts you can check and a long history of this occurring going back decades. Read about these incidents in the rest of this series – Government Secrets #2, #3 and #4.

Ocean Dumping – A Summary of Studies

Ocean Dumping – A Summary of 12 Studies Conducted between 1970 and 2001

By Jerry Botana

The dumping of industrial, nuclear and other waste into the oceans was legal until the early 1970’s, when it became regulated; however, dumping still occurs illegally everywhere. Governments worldwide were urged by the 1972 Stockholm Conference to control the dumping of waste in their oceans by implementing new laws. Following this recommendation, the United Nations met in London to begin the Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter, which was implemented in 1975. The International Maritime Organization was given responsibility for this convention, and a Protocol was finally adopted in 1996 – a major step in the regulation of ocean dumping.

The most toxic waste materials dumped into the ocean include dredged material, industrial waste, sewage sludge, and radioactive waste. Dredging contributes about 80% of all waste dumped into the ocean, adding up to several million tons of material each year. About 10% of all dredged material is polluted with heavy metals such as cadmium, mercury and chromium; hydrocarbons such as heavy oils; nutrients including phosphorus and nitrogen; and organochlorines from pesticides. Waterways – and therefore silt and sand – accumulate these toxins from land runoff, shipping practices, industrial and community waste, and other sources. This sludge is then dumped in the littoral zone of each country’s ocean coastline. In some areas, like the so-called “vanishing point” off the coast of New Jersey in the United States, such toxic waste dumping has been concentrated in a very small geographic area over an extended period of time.

In the 1970s, 17 million tons of industrial waste was legally dumped into the ocean by the United States alone. In the 1980’s, even after the Stockholm Conference, 8 million tons were dumped, including acids, alkaline waste, scrap metals, waste from fish processing, flue desulphurization sludge, and coal ash.

If sludge from the treatment of sewage is not contaminated by oils, organic chemicals and metals, it can be recycled as fertilizer for crops, but it is cheaper for treatment centers to dump this material into the ocean, particularly if it is chemically contaminated. The UN position is that properly treated sludge from cities does not contain enough contaminants to be a significant cause of eutrophication (an increase in chemical nutrients – typically compounds containing nitrogen or phosphorus – in an ecosystem) or to pose any risk to humans if dumped into the ocean. However, that position was based solely on an examination of the immediate toxic effects on the food chain and did not take into account how the marine biome would assimilate and be affected by this toxicity over time. The peak of sewage dumping was 18 million tons in 1980, a number that was reduced to 12 million tons in the 1990s.

Radioactive Waste

Radioactive waste is also dumped in the oceans and usually comes from the nuclear power process, the medical and research use of radioisotopes, and industrial uses. The difference between industrial waste and nuclear waste is that nuclear waste usually remains radioactive for decades or longer. The protocol for disposing of nuclear waste calls for special treatment, sealing it in concrete drums so that it doesn’t spread when it hits the ocean floor; however, poor containers and illegal dumping are estimated to account for more than 45% of all dumped radioactive waste.

Surprisingly, nuclear power plants produce by far the largest amount of radioactive waste but contribute almost nothing to the illegal (post-Stockholm Conference) ocean dumping. This is because the nuclear power industry is so closely regulated and accountable for its waste storage. The greatest accumulation of nuclear waste lies off the coast of southern Africa and in the Indian Ocean.

The dumping of radioactive material has reached a total of about 84,000 terabecquerels (TBq); a terabecquerel is a unit of radioactivity equal to 10^12 atomic disintegrations per second, or 27.027 curies. The curie (Ci) is an older unit of radioactivity, originally defined as the radioactivity of one gram of pure radium. The high points of nuclear waste dumping were in 1954 and 1962, but this nuclear waste only accounts for 1% of the total TBq that has been dumped in the ocean. The concentration of radioactive waste in the concrete drums varies, as does the ability of the drums to hold it. To date, it is estimated that the equivalent of about 2.27 million grams (about 5,000 pounds) of pure radium has been dumped on the ocean floor.
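The unit conversions behind those figures are easy to verify: 1 Ci is defined as 3.7 × 10^10 disintegrations per second, and one gram of pure radium is approximately one curie:

```python
# Convert the dumped-activity total into curies and a radium-equivalent mass.
TBQ_TO_CI = 1e12 / 3.7e10         # 1 TBq = 27.027 Ci

total_tbq = 84_000
total_ci = total_tbq * TBQ_TO_CI
grams_radium = total_ci           # ~1 Ci per gram of pure radium
pounds = grams_radium / 453.592

print(f"{total_ci:,.0f} Ci")                               # ~2,270,270 Ci
print(f"~{grams_radium / 1e6:.2f} million grams of radium equivalent")
print(f"~{pounds:,.0f} pounds")                            # ~5,000 lb
```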

Until it was banned, ocean dumping of radioactive waste was considered a safe and inexpensive way to get rid of tons of such materials. It is estimated that the 1960’s- and early 1970’s-era nuclear power plants in New Jersey (like Oyster Creek, located just 21 miles from the Barnegat Lighthouse) and 12 other nuclear power plants in Pennsylvania, New Jersey and New York dumped more than 100,000 pounds of radioactive material into the ocean off the New Jersey coast.

Although some claim the risk to human health is small, the long-term effects of nuclear dumping are not known, and some estimate up to 1,000 deaths in the next 10,000 years as a result of just the evaporated nuclear waste.

By contrast, biologists have estimated that the ocean’s biome has been and will continue to be permanently damaged by exposure to radioactive material. Large-scale and rapid genetic mutations are known to occur as radiation dosage levels increase. Plants, animals and micro-organisms in the immediate vicinity of leaking radioactive waste will experience the greatest and most radical mutations between successive generations. However, tests show that even long-term exposure to diluted radioactive waste will create accelerated mutations and adaptations.

The Problems with Ocean Dumping

Although policies on ocean dumping in the recent past took an “out of sight, out of mind” approach, it is now known that the accumulation of waste in the ocean is detrimental to marine and human health. Another unwanted effect is eutrophication, a biological process in which dissolved nutrients cause oxygen-depleting bacteria and plants to proliferate, creating a hypoxic, or oxygen-poor, environment that kills marine life. In addition to eutrophication, ocean dumping can destroy entire habitats and ecosystems when excess sediment builds up and toxins are released. Although ocean dumping is now managed to some degree, and dumping in critical habitats and at critical times is regulated, toxins are still spread by ocean currents. Alternatives to ocean dumping include recycling, producing less wasteful products, saving energy and converting dangerous material into more benign waste.

According to the United Nations Group of Experts on the Scientific Aspects of Marine Pollution, ocean dumping actually contributes less pollution than maritime transportation, atmospheric pollution, and land-based pollution like run-off. However, when waste is dumped, it is often close to the coast and very concentrated, as is the case off the coast of New Jersey.

Waste dumped into the ocean is categorized into the black list, the gray list, and the white list. On the black list are organohalogen compounds, mercury compounds and pure mercury, cadmium compounds and pure cadmium, any type of plastic, crude oil and oil products, refined petroleum and residue, highly radioactive waste, and any material made for biological or chemical warfare.

The gray list includes water highly contaminated with arsenic, copper, lead, zinc, organosilicon compounds, any type of cyanide, fluoride, pesticides, pesticide by-products, acids and bases, beryllium, chromium, nickel and nickel compounds, vanadium, scrap metal, containers, bulky wastes, lower-level radioactive material and any material that will affect the ecosystem due to the amount in which it is dumped.

The white list includes all other materials not mentioned on the other two lists. The white list was developed to ensure that materials on this list are safe and will not be dumped on vulnerable areas such as coral reefs.

In 1995, a Global Waste Survey and the National Waste Management Profiles inventoried waste dumped worldwide to determine what countries were dumping waste and how much was going into the ocean. Countries that exceeded an acceptable level would then be assisted in the development of a workable plan to dispose of their waste.

The impact of a global ban on ocean dumping of industrial waste was assessed in the Global Waste Survey Final Report the same year. In addition to giving the impact for every nation, the report concluded that the unregulated disposal of waste, the pollution of water, and the buildup of materials in the ocean were serious problems for a multitude of countries. It also concluded that dumping industrial waste anywhere in the ocean is like dumping it anywhere on land. The dumping of industrial waste had reached unacceptable levels in some regions, particularly in developing countries that lacked the resources to dispose of their waste properly.

The ocean is the basin that catches almost all the water in the world. Eventually, water evaporates from the ocean, leaves the salt behind, and becomes rainfall over land. Water from melted snow ends up in rivers, which flow through estuaries into saltwater. River deltas and canyons that cut into the continental shelf – like the Hudson Canyon and the Mississippi Cone – create natural channels and funnels that direct waste into relatively small geographic areas, where it accumulates into highly concentrated deposits of fertilizers, pesticides, oil, human and animal wastes, industrial chemicals and radioactive materials. For instance, feedlots in the United States produce more than 500 million tons of manure each year – exceeding the human waste output – about half of which eventually reaches the ocean basin.

Not only does the waste flow into the ocean, it also encourages algal blooms that clog the waterways, causing meadows of seagrass, kelp beds and entire ecosystems to die. A zone without any life remaining is referred to as a dead zone; these can be the size of entire states, as in the coastal zones of Texas and Louisiana and north-east of Puerto Rico and the Turks and Caicos Islands. All major bays and estuaries now have dead zones from pollution run-off. Often, pollutants like mercury, PCBs and pesticides are found in seafood meant for the dinner table and cause birth defects, cancer and neurological problems – especially in infants.

One of the most dangerous forms of dumping is of animal and human bodies. The decomposition of these bodies creates a natural breeding ground for bacteria and micro-organisms that are known to mutate into more aggressive and deadly forms with particular toxicity to the animals or humans that they fed on. The mid-Atlantic coast of the United States was a common dumping zone for animals – particularly horses – and human bodies up until the early 1900’s. Today, the most common area for human body dumping is India, where religious beliefs advocate burial in water. The results of this dumping may be seen in the rise of extremely drug-resistant strains of leprosy, dengue fever and the Necrotizing Fasciitis bacteria.

One of the largest deep-ocean dead zones is in the area between Bermuda and the Bahamas. This area was a rich and productive fishing ground in the 1700’s and early 1800’s, but by the early 20th century it was no longer productive, and by the mid-1900’s it was virtually lifeless below 200 feet of depth. This loss of all life seems to have coincided with the massive ocean dumping along the New Jersey and Carolina coasts.

Recreation

Water recreation is another aspect of human life compromised by marine pollution from human activities like roads, shopping areas, and development in general. Swimming is becoming unsafe: over 12,000 beaches in the United States have been quarantined due to contamination from pollutants. Developed areas like parking lots let runoff occur at a much higher volume than a naturally absorbent field. Everyday activities like driving cars and heating homes leak an estimated 28 million gallons of oil into lakes, streams and rivers. The hunt for petroleum through offshore gas and oil drilling leaks extremely dangerous toxins into the ocean and, luckily, is one aspect of pollution that has been halted by environmental laws.

Environmental Laws

In addition to the lack of underwater national parks, there is no universal law like the Clean Air Act or the Clean Water Act to protect United States ocean territory. Instead, there are many different laws, like the Magnuson-Stevens Fishery Conservation and Management Act, which apply only to certain aspects of overfishing and are relatively ineffective. That act, developed in the 1970’s, is not based on scientific findings and is administered instead by the regional fisheries councils. In 2000, the Oceans Act was implemented as a way to create a policy similar to the nationwide laws protecting natural resources on land. However, this act still needs further development and, like many of the conservation laws that exist at this time, it needs to be enforced.

The total effects of ocean dumping will not be known for years, but most scientists agree that, like global warming, we have passed the tipping point and the worst is yet to come.

Perpetual Motion = Unlimited Power….Sort of…

The serious pursuit of perpetual motion has always intrigued me. Of course I know the basic science of conservation of energy and the complexities of friction, resistance, drag and less-than-100% mechanical advantage that doom any pursuit of perpetual motion to failure… but still, I am fascinated by how close some attempts have come. One college professor built a four-foot-tall Ferris wheel and enclosed its drive mechanism in a box around the hub. He said it was not perpetual motion but that it had no inputs from any external energy source. It did, however, make a slight sound from inside that box. The students were to figure out how the wheel was turning without any apparent outside power source. It turned without stopping for more than two years and none of his students could figure out how. At the end of his third year, he revealed his mechanism. He was using a rolling-marble design that was common in perpetual motion machines but had been proven not to work on its own. What he added was a tiny IC-powered microcircuit feeding a motor that came out of a watch. A watch! The entire 4-foot-high Ferris wheel needed only the additional torque of a watch motor to keep it running for nearly four years!

This got me to thinking that if I could find a way to make up that tiny little additional energy input, I could indeed make perpetual motion. Unlike most of my other ideas, this was not something that could easily be simulated in a computer model first. Most of what does not work in perpetual motion is totally unknown until you build it. I also knew that the exchange of energy to and from mechanical motion was too inefficient to ever work, so I concentrated on other forms of energy exchange. Then I realized I had already solved this – back in 1963!

Back in 1963, I was a senior in high school. I had been active in science fairs since 1958 and wanted my last one to be the best. To make a long story short, I won the national science fair that year – sponsored by Bell Telephone. My project was “How Far Will Sound Travel?” and it showed that the accepted theory that sound diminishes by one over the square of the distance (the inverse square law) is, in fact, wrong in practice. Although that may occur in an absolutely perfect environment – a point source of emission in a perfectly spherical and perfectly homogeneous atmosphere – it never occurs in the real world.
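For reference, here is the textbook inverse-square prediction my measurements were tested against: an ideal point source loses a fixed 6 dB of sound pressure level for every doubling of distance.

```python
import math

def spl_drop_db(r1: float, r2: float) -> float:
    """Sound pressure level drop from r1 to r2 for an ideal point source in a
    homogeneous atmosphere (the textbook inverse-square model)."""
    return 20 * math.log10(r2 / r1)

# The ideal model predicts exactly 6 dB of loss per doubling of distance:
for r in (2, 4, 8, 16):
    print(f"1 m -> {r:>2} m: {spl_drop_db(1, r):4.1f} dB")
```

Real-world measurements deviate from this because of wind gradients, temperature layers, ground reflections and humidity – which is what my project documented.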

I used a binary-counting flashing-light circuit to time sound travel and a “shotgun” microphone with a VOX to trigger a measurement of the speed and power of the sound under hundreds of conditions. This gave me the ability to measure to 1/1000th of a second and down to levels that could distinguish between the compressions and rarefactions of individual sound waves. Bell was impressed and I got a free trip to the World’s Fair in 1964 and to Bell Labs in Murray Hill, NJ.

As a side project of my experiments, I attempted to design a sound laser – a narrow beam of sound that would travel great distances. I did. It was a closed, ten-foot-long, Teflon-lined tube containing a compressed gas – I used Freon. A transducer (a flat speaker) at one end would inject a single wavelength of a high-frequency sound into the tube. It would travel to the other end and back. At exactly 0.017621145 seconds, the transducer would pulse one more cycle, timed to coincide exactly with the return of the first, reflected pulse so that the two were additive, nearly doubling the amplitude. Since the inside of the tube was smooth and kept at a constant temperature, the losses in one pass through the tube were almost zero. In less than 5 minutes, these reinforcing waves would build the moving pulse to the point of concentrating nearly all of the gas in the tube into the wave front of a single pulse. This creates all kinds of problems, so I estimated that it would only be about 75% efficient – but that was still a lot.
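The timing rule is simple: re-trigger the transducer once per round trip of the tube, t = 2L/c. A quick check shows the quoted interval corresponds to a sound speed of roughly 346 m/s (about that of warm air); Freon carries sound more slowly, so treat the exact figure as illustrative:

```python
def round_trip_seconds(tube_length_m: float, sound_speed_mps: float) -> float:
    """Time for a pulse to travel down the tube and back. Re-triggering the
    transducer on this period stacks each new cycle onto the reflected one."""
    return 2 * tube_length_m / sound_speed_mps

# A 10-foot tube is 3.048 m.
print(round_trip_seconds(3.048, 346.0))   # ~0.01762 s (sound speed of warm air)
print(round_trip_seconds(3.048, 150.0))   # ~0.04064 s (a typical Freon-like gas)
```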

Using a specially shaped and designed series of chambers at the end opposite the transducer, I could rapidly open that end and emit the pulse in one powerful burst – so strong that the wave front of the sound pulse would be visible, and it would remain cohesive for hundreds of feet. It was dense enough that I computed it would have just over 5 million pascals (Pa) of pressure, or about 725 PSI. The beam would widen to a square foot at about 97 meters from the tube. This is a force sufficient to knock down a brick wall.

One way to make the kind of transducer I needed for this sound laser was to use a carefully cut crystal or ceramic disc. Through the reverse piezoelectric effect, such a disc expands uniformly when an electric field is applied. A lead zirconate titanate crystal would give me the right expansion while also being able to respond at the high frequency. The exit chambers were modeled after the parabolic chambers used in specially made microphones for catching bird sounds. The whole thing was perfectly logical and I modeled it in a number of math equations that I worked out on my “slip stick” (slide rule).

When I got to Bell Labs, I was able to get one scientist to look at my design and he was very intrigued by it. He said he had not seen anything like it but found no reason it would not work. I was asked back the next day to see two other guys who wanted to hear more about it. It was sort of fun, and a huge ego boost for me, to be talking to these guys about my ideas. In the end, they encouraged me to keep thinking and said they would welcome me to work there when I was old enough.

I did keep thinking about it and eventually figured out that if I could improve the response speed of the sensors and transducer, I could shorten the tube to inches. I also wanted more power out of it, so I researched which gas had the greatest density. Even that was not enough power or speed, so I considered using a liquid – water – but it turns out that water molecules act like foam rubber: past a certain point, they absorb too much of the pulse energy. The next logical phase of matter was a solid, but that meant there was nothing that could be emitted. I was stumped… for a while.

In the late 1970’s I figured: what if I extended the piezoelectric transducer crystal to the entire length of the tube – no air, just crystal – and then placed a second transducer at one end to pulse the crystal tube with a sound wave? As the wave travels the length of the crystal tube, the compressions and rarefactions of the sound pulse create stress and strain on the piezoelectric crystal, making it give off electricity by the direct piezoelectric effect. This is how a phonograph needle works as it bounces over the grooves of the record.

Since the sound pulse will reflect off the end of the tube and bounce back, it will create this direct piezoelectric effect hundreds – perhaps thousands – of times before it is dissipated as heat. As with my sound laser, I designed it to pulse on every single bounce to magnify the amplitude of the initial wave front, but now the speed was above 15,000 feet per second, so the pulses had to come every 0.0001333 seconds. That is fast, and I did not know if the technology of the day was up to the task. I also did not know what it would do to the crystal. I was involved in other work and mostly forgot about it for a long time.
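The same round-trip rule gives that pulse rate. Assuming a one-foot crystal rod (my reading of “inches”) and the 15,000 ft/s figure:

```python
# Round trip of a 1 ft rod at 15,000 ft/s: the transducer must re-trigger
# every 2 ft / 15,000 ft/s = 0.0001333 s, i.e. at about 7.5 kHz.
print(2 / 15_000)   # 0.0001333...
```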

In the late 1980’s, I was working for DARPA and had access to some great lab equipment and computers. I dug out my old notes and began working on it again. This time I had the chance to actually model it and run experiments in the lab. My first surprise was that these direct piezoelectric effects created voltages in the hundreds or even thousands of volts. I was able to get more than 10,000 volts from a relatively small crystal (8 inches long and 2 inches in diameter) using a hammer tap. I never thought it would create that much of a charge. If you doubt this, just take a look at the Mechanism section of the Wikipedia article on piezoelectricity.

When I created a simple prototype version of my sound laser using a tube of piezoelectric crystal, I could draw off a rapid series of pulses of more than 900 volts using a 1/16th-watt amplifier feeding the transducer. Using rectifiers and large capacitors, I was able to store this energy, charge some ni-cads, power a small transmitter and even light a bulb.

This was of great interest to my bosses and they immediately wanted to apply it to war fighting. A friend and I cooked up the idea of putting these crystals into the heels of army boots so that the pressure of walking generated electricity to power some low-power devices on the soldier. This worked great, but the wires, converter boxes, batteries, etc., ended up being too much to carry for the amount of power gained, so it was dropped. I got into other projects and dropped it as well.

Now flash forward to about 18 months ago and my renewed interest in perpetual motion. I dug out my old notes, computer models and prototype from my DARPA days. I updated the circuitry with some newer, faster IC circuits and improved the sensor and power take-off tabs. When I turned it on, I got sparks immediately. I then rebuilt the power control circuit and lowered the amplitude of the input sound into the transducer. I was now down to using only a 9-volt battery and about 30 mA of current drain to feed the amplifier. I estimate it is about a 1/40th-watt amplifier. The recovered power was used to charge a NiMH battery pack of 45 penlight cells of 1.2 volts each.

Then came my epiphany – why not feed the amplifier with the charging battery! DUH!

I did, and it worked. I then boosted the amplifier’s amplitude, redesigned the power take-off circuit and fed it into a battery bank arranged to give me a higher power density. It worked great. I then fed the battery back into an inverter to give me AC. The whole thing is about the size of a large briefcase and weighs about 30 pounds – mostly from the batteries and transformers. I am getting about 75 watts out of the system now, but I’m using a relatively small crystal. I don’t have the milling tools to make a larger, properly cut crystal, but my modeling says that I can get about 500 watts out of a crystal about 3 inches in diameter by about 12 inches long.

I call my device “rock power” and when I am not using it for power in my shop or on camping trips, I leave it hooked up to a 60-watt bulb. That bulb has been burning now for almost 7 months with no sign of diminishing. It works! Try it!!!

The Power of the Mind!

  

An idea that we often hear is that we really only use 10% of our brains and that if we used all of it we could do some pretty amazing stuff. It has been speculated that we might be able to do things like remote viewing, telekinesis, mental telepathy or seeing the future. This, of course, sounds like crazy talk from some wing-nut with a tinfoil hat, but the reality is that a great deal of very serious research has gone into this very subject.

In 1972, the CIA began a serious 24-year look into remote viewing and clairvoyance.  In 1981, the Defense Intelligence Agency (DIA) began serious studies in the same areas.  These programs had code names like Star Gate, Grill Flame, Center Lane, Sun Streak and others.  DoD kept looking at these subjects up thru June 1995.  Stanford Research Institute (SRI) of Menlo Park, CA, SAIC, the Institute for Advanced Studies in Austin and the American Institutes for Research (AIR) all have been or are still working on research in these areas.

The Cognitive Sciences Lab at Palo Alto, Calif. did extensive studies that were critical of the government’s studies of this subject (DIA and CIA).  Their conclusion, published in March of 1996, found “that a statistically significant effect had been demonstrated,” but they also pointed out that the CIA and DoD had ignored compelling evidence and had set the outcome of the studies before they began by relying on questionable National Research Council reviews.  “As a result, they have come to the wrong conclusion with regard to the use of anomalous cognition in intelligence operations and significantly underestimated the robustness of the basic phenomenon.”  The reasoning behind the government’s position on these studies has been shown to have nothing to do with the science or the efficacy of the research, but rather with the petty squabbling of top-heavy bureaucrats and mismanagement of the political and financial support.

In other words, studies conducted by numerous contractors, scientists and government labs over a period of three decades found important evidence that this was a viable field of study, but for unrelated reasons the studies and results were botched, and the net result is that the whole subject has been taboo for serious study or funding ever since.

All this is to say that there is much more to this subject than my personal interests.  Lots of very serious scientists, government agencies and academic research facilities have looked and are still looking into these various psychological properties of the mind, collectively grouped under headings like parapsychology, “anomalous cognition” and psi abilities.

If you read all these reports as carefully as I have, you will find that almost all of them did, in fact, find some statistically significant effect to a greater or lesser degree.  In fact, some of these studies found capabilities that defied both logic and conventional science, so much so that the scientists involved were ridiculed and derided to the point of nearly destroying their careers when they tried to get some recognition of their results.  For that reason, many of these kinds of studies are no longer as popular or abundant as they once were and are now done, if at all, in secret facilities, and those involved are very cautious to keep a low profile.  Since I have no interest in research money and have mostly retired from my R&D career, I have no qualms about telling of my adventures and successes – especially since they have led to such startling discoveries.

Most of this is completely verifiable from numerous Internet sources – including many reports from the government and R&D reports from and about programs described above.  For the most part, I have not so much blazed a new trail of research as much as I have combined various proven methods, techniques and processes in a variety of ways that probably were not tried before.  I have used special aids and tools to assist me that have proven to be effective by themselves but have a synergistic effect when combined with other aids and techniques.  In some cases, I have stumbled upon methods or techniques that have been well proven to work but I did not know about them beforehand.  If you doubt any of this, then do your own research on what I am trying and you will find it is all based on sound and proven science.  

What I am about to tell you will be hard to believe because we have all been told that this whole subject area is foolish nonsense and that only tricksters and con-men and deluded space cadets really believe in any of this.  If you are to understand the significance and why it is true, I have to give you some background and tell you the whole story of how I discovered this.  Let me start from the beginning…  

The truth is that humans dream but we don’t know why.  There are lots of theories.  The latest and most accepted is that it is the brain’s way of establishing and organizing our memories.  This sounds plausible until you consider the continuity, complexity and detail of some dreams that bear no relationship to any real-life experience.  It is also thought that dreams might be subconscious manifestations of our emotions but that does not explain the majority of dreams that appear to be about random events and places.  Some people believe that dreams are much more powerful and can tell the future or reveal a person’s innermost feelings.   

One generally accepted biological concept about dreams is that the conscious mind becomes inactive and the subconscious mind takes over.  The subconscious mind is that portion of the brain that is not directly controlled by willful and deliberate thoughts of a person.  It is the part of the brain that runs everything without being told to do so.  It keeps the heart beating, the blood flowing and controls the body’s reaction to temperatures, fear, surprise and other automatic reflexes.   

Some parts of the body seem to be controlled by both the conscious and the subconscious mind, like breathing and eye movement.  We can control these parts when we want to, but it seems that they shift into automatic most of the time.  During a dream, the real physical presence around the dreaming person can often be incorporated into the dream.  If you get cold in your bed, your mind might conjure up a dream that involves you getting cold.  If you hear sounds like dogs barking or bells, your dream might also have these sounds.  This implies that the subconscious mind is receptive to the body’s real senses and can incorporate the real world into the dream, and yet it can also modify those real-world sensations so that they appear in the dream in a totally different form.  Of course, this is simply anecdotal observation and is not a scientific analysis of what is really happening.

The truth is that our best scientists and researchers don’t know much about dreams beyond what we can observe.  But because we do this every night and there are so many different aspects of it, there is a lot of interest by the hard-core scientists as well as a lot of average people.  I was one of those that was intently curious and wanted to find out more. 

About nine years ago, I began reading and working with lucid dreaming.  This is a technique of training your conscious mind to remain aware and active during a dream so that it can direct and control the subconscious mind and your dreams.   

I had read about brain waves called delta, theta, alpha, beta and gamma that are related to various thought patterns in the brain.  Way back when I was in the Navy, I bought a surplus recording electroencephalograph (EEG) and all of the hookups.  I had played with it as an interface to my computer and eventually had gotten it to recognize binary responses to my thoughts.  I could, for instance, answer yes-no questions using the output of this EEG fed into an A-D converter and then into the game port on my old computer.  Now I got out that old EEG and began recording my night’s dreaming to map my REM and NREM sleep and to record my brain’s electrical activity.  I wired the EEG into some lights and into my computer so I could trigger various events with my brain waves.  It was fun to experiment and helped me define and refine my lucid dreaming.
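The logic of that yes-no trigger is simple enough to sketch. This is only an illustration of the idea, with a generic sample-acquisition function standing in for the real A-D hardware; the threshold value is made up.

```python
# A minimal sketch of a threshold-based yes/no EEG trigger.
# read_samples() stands in for the real A-D converter hookup.
def read_samples(n: int) -> list[float]:
    """Placeholder: return n raw EEG samples from the A-D converter."""
    raise NotImplementedError("wire this to your acquisition hardware")

def detect_answer(samples: list[float], threshold: float = 50.0) -> str:
    """Call it a 'yes' if the average rectified amplitude crosses the threshold."""
    avg = sum(abs(s) for s in samples) / len(samples)
    return "yes" if avg > threshold else "no"
```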

It took me about a year of practice before I had my first lucid dream and it was amazing. 

I tried keeping a log of my dreams and ordering my brain to remember to dream, but I found the best technique was to first relax all over and then to imagine that I was walking up a long stairway to a special sleep temple.  I enhanced this image by slightly rubbing my feet together as if I was taking the steps up those stairs.  When I reached the temple, I was asleep and dreaming and was aware I was doing it.  Over time, I could shorten this climb up those stairs and even got to the point that I could do it during the day while waiting, riding a bus or train, or riding in a car.  After a while, I could enter my dreams as if I was a spectator at a movie, but I gradually began trying to exert control over what I was dreaming.

I tried to enhance the lucid dreaming state with various drugs and herbs and teas and other foods.  I tried melatonin, kava kava, passionflower, St. John’s wort, and various herb teas with and without caffeine.  They all had some effect but I did not like the idea of having to take a drug to make this work so I stopped all of them except a multi-vitamin that gave me a bunch of “B” vitamins, fish oil, choline, and other stuff for an old guy like me. 

Eventually, I gained almost complete control of my dreams so that I could conjure up any event, environment or people I wanted and then will them to do something.  The nature of this control is a little weird.  The subconscious mind is still creating the dream and can take it in an independent direction if I don’t exert some willpower, but I can’t just command it to do my bidding.  I have to think it and want it to make it happen, and even then, it will do it in a way I might not have chosen if my conscious mind had full control.  For instance, if I want to go fishing, it will create the entire fishing environment before I can define the boat or where or what I want to fish for.

When my subconscious mind leaped ahead like that with something I did not want, it took me a long time to learn to back up and redesign the dream.  At first, I’d have to dock the boat and walk to another boat in order to change.  Now I have learned how to “reset” the dream and do an instant redesign to something more like I want.  My reset signal is a little weird and I found it by accident.  I dropped an LED flashlight in my bed covers one night.  While in a deep sleep, I suddenly lost the dream I was dreaming as if it had been erased.  I willed myself awake and found that the flashlight was on and was under the covers near my legs.  It was odd that it would have any effect on my legs, but I wired up a light to a pressure switch on my finger and then tried to reset a dream.  Eventually, I found that if I put the light under my legs, it would give me just the right amount of mind control over resetting my dreams.  Weird, but it works, so I used it.  Using this reset signal, I can exert a lot of control, but I have to work at controlling the switch.

At first, I felt thrilled by this newfound capability and would sometimes remain excited for hours after I woke up.  After a while, I realized this was not just being thrilled; I was feeling anxious and nervous.  I sometimes felt guilty for what I was seeing or felt anxiety over simply being in the dream.  This got to be a problem until I started making overt efforts, while I was in my dream state, to tell myself to be relaxed and calm when I woke.  I practiced meditation and yoga-like poses in my dreams to facilitate this effort, with very good results.  Eventually, I was able to calm down and enjoy my dreams and actually feel relaxed and calm after it was over – even if the dream itself was exciting.

I wanted to expand on my capabilities, so I contacted a psychology professor friend of mine that I met back in my government R&D days.  He is a good friend and knows how to keep a secret.  I won’t give you any information that will let you identify him because I don’t think he wants to be seen as being involved in any of this, but the truth is, he has been dabbling in it for years.  He pointed me toward lots of studies on the various brain waves and processes like REM and NREM sleep, slow-wave sleep, the stages of sleep, circadian rhythms, etc.  He told me about sleep inducements like raising body temperature (hot baths), a high-carbohydrate diet and exercise.  I told him I wanted to stay clear of any drugs, but he told me that lots of foods and over-the-counter drugs could affect sleep.  I must have spent months reading all about sleep, how it works and what causes it.  As with dreams, I found that there was a lot about what can be observed about sleep but not much about why we sleep.

One of the most amazing aspects of lucid dreams is that you can immerse your entire being in the dream so that it seems as real as if you were really there.  You get the sights, smells, feelings and taste of your dream world.  It reminds me very much of the holodeck they showed on the Star Trek TV series, in which the computer could create artificial environments, people and nature.

This realism is great most of the time, but it can also be very disturbing.  I foolishly dreamed I was in a shooting war against some gang members and I got shot.  The link between my conscious and subconscious mind was so powerful and complete that I felt the shock and pain in my dream as if I had really been shot, and it forced me to wake up.  When I did, my heart was racing, my body felt flushed and I was breathing very hard.  I don’t think I would really die if I died in my dream, but I have not wanted to test that theory.

At first it was fun to experiment with stuff.  I even used it to conjure up famous scientists, living and dead, to discuss some science problem I was having at work.  Amazingly, I often would solve relatively complex problems in these dreams and then take them to work and find out that they worked.  I also found that I could look at a book or magazine as fast as I could turn the pages while awake and then recall that book in my dream and see every page clearly and even do a word search of the contents.  It was sort of a weird kind of photographic memory that I could tap into only in my dreams.

I also explored other sensations and wild experiences.  I made myself able to fly, I swam faster than fish, I became super strong, and I gave myself other super powers.  I went through a phase of experimenting with sex and drugs – or what I imagined drugs would do to you.  I played out all the great movies I had ever watched.  Some of these experiences were so much fun that I really looked forward to getting a good night’s sleep and often wanted to remain asleep in the morning.

One interesting event happened about this time.  I go for a physical every year, but the one I had about this time showed me to be in much better shape than ever in the past.  My blood chemistry was that of a person half my age and I had no visible or detectable problems of any kind.  Since I am an old guy with the typical old-guy problems – arthritis, high cholesterol, high blood pressure, age spots, etc. – the doctor was both amazed and confused that I showed no signs of any of these maladies.  He wanted to do the tests over, but first asked me a lot of questions.  I mentioned that I had gotten a lot of sleep over the past two years, but I was careful not to mention my lucid dreaming.  He did run the tests again and told me that he added a few extras.

When he got back the results, we had another meeting.  He told me that I had elevated levels of something called dimethyltryptamine and decreased levels of cortisol.  These were not just slight changes but at levels that he thought were way out of the ordinary.  He also noted that I had unusually high levels of acetylcholine, serotonin, dopamine and norepinephrine and said, “no wonder you feel good, you are high on natural uppers”.  I tried to downplay the results and told him it was probably diet. 

After about two years of this, I got tired of all the weird stuff and decided to try to find something really useful I could do.  I wanted to try to do things like remote viewing, telekinesis, mental telepathy, mind reading or seeing the future.  I quickly discovered that this was not like trying to influence my dreams – I had to exert much more control over my subconscious mind in order to make it focus on what I was trying to do.  This required me to not just direct the subconscious mind with my conscious mind but to actually superimpose the two over each other so that I would have all the benefits of both.

As with all of this, I did my own research and found that the two sides of the brain are connected by the corpus callosum, a sort of communications superhighway between the two hemispheres of the brain.  It turns out that this inter-hemispheric communication is vitally important to my efforts to superimpose the conscious mind onto my subconscious mind.  I discovered that being left-handed, being a musician and having worked in both artistic and math-related careers were all contributors to my corpus callosum being very efficient in letting me overlap my subconscious mind with my conscious mind.

I can’t tell you how many hours it took, but about a year later, in 2007, I was able to begin to control the essence of my subconscious.   I began to have some success when I began using a sleep sound machine that gave me 60 choices of various sounds to sleep by.  I began with white noise but soon found that something with a background beat to it worked better.  To explore more sounds faster, I got two sound machines and put them on the left and right sides of my bed for stereo sound.  After a lot of fiddling with them, I found a range of frequencies that worked best and, surprisingly, they worked best when they were not both set to the same tone.  My psychology professor friend told me this was called binaural beats and entrainment and that it was a well-developed technique to create infrasound inside the brain.  I read up on this subject and soon was tweaking my sound machines to give me exactly what I needed.
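If you want to hear the effect without buying two sound machines, a few lines of code will generate a stereo test tone.  This is just a sketch of the basic idea – two pure tones offset by a few hertz, one per ear – and the frequencies are illustrative, not the exact settings I used.

```python
import math, struct, wave

RATE, SECONDS = 44_100, 10
LEFT_HZ, RIGHT_HZ = 200.0, 210.0   # 10 Hz difference = the perceived beat

# Write a 10-second stereo WAV file with one tone per channel.
with wave.open("binaural.wav", "wb") as w:
    w.setnchannels(2)   # stereo: left and right carry different tones
    w.setsampwidth(2)   # 16-bit samples
    w.setframerate(RATE)
    for n in range(RATE * SECONDS):
        t = n / RATE
        left = int(32_000 * math.sin(2 * math.pi * LEFT_HZ * t))
        right = int(32_000 * math.sin(2 * math.pi * RIGHT_HZ * t))
        w.writeframes(struct.pack("<hh", left, right))
```

Played through headphones (it has to be headphones, so each ear gets only its own tone), the brain perceives the 10 Hz difference as a slow beat.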

I also saw an advertisement for a relaxation aid that put flashing LED lights on the inside of some sunglasses so that you could see them with your eyelids closed.  I bought a pair and experimented with them.  I found that different combinations of sounds and flashing lights gave me different capabilities while in my dream state.  With some, I was very pensive and analytical and used them for problem solving and designing.  With other combinations, I was more aggressive and physical and could imagine building things and making decisions better.  I also found that when I used the right combination of lights, I could induce sleep almost immediately and by putting the lights on a timer, I could end my dreams on a set schedule.  My psychology professor friend told me this is called hypnagogia. 

After more than a year of experimentation, I began to have some success, but frankly, at first it was a big letdown.  I was able to feel my senses while asleep so that I could feel and hear the environment around my sleeping body.  At the time, I thought “no big deal.”  Almost by accident, I decided to see if I could feel my heart and suddenly I was actually inside my heart – looking at it beating.  I could touch it and feel it and even change its speed of beating at will.  I continued this exploration all over my body – lungs, brain, ears, eyes, etc.  I tested it by going to my lower left leg, where I had been shot.  Some tiny bullet fragments are still in my leg and I could actually see them.  I later matched up where I “saw” them to an X-ray of my leg and they agreed.  I had actually seen the inside of my real leg…and heart and lungs, etc.

Then I began trying to push the capabilities of my subconscious to do some of those wild paranormal tricks.  Using all the tricks and aids I had learned over the past 6 years, I was eventually able to totally dominate the subconscious mind with my conscious thought.  The results were amazing.  Every time I tried it I discovered something new I could do or experience.  Here are just a few…

Hyper-Senses – I was immediately aware of my surroundings at a level far beyond anything I could have imagined.  I could feel the variations in the thread covering of my bed sheets.  I could hear air moving in the room.  I could hear sounds from outside the house like I was using a massive hearing aid.  I could smell individual objects in the room like the wood dresser, the wool rug and the wall paint.  I could feel the vibrations of the furnace and, after a while, I could feel the vibrations of cars driving on the road a block away from my home.   I learned to open my eyes without waking out of my dream state and found I could see colors and detail I never thought possible.  And what was even more amazing was that I could do this selectively so that it was not all flooding my senses at once.  It was like listening to someone talking at a rock concert – only easier – I could tune in or tune out whatever I wanted to concentrate on.

As I explored these hyper senses, I realized that when I sensed something by smell or touch or hearing, I almost immediately imagined an image of that thing.  When a truck drove down the road, I heard it first, then smelled it, then felt it, and as I added each new sensation, I enhanced my image of it until I was convinced it was the garbage truck.  I then looked out the window and – for the first time – visually saw that it was indeed the garbage truck.  I did this with hundreds of things until I could “see” well beyond my visual range.

X-Ray vision – well, not exactly x-ray vision, more like having a selective virtual-reality vision.  Because I know what it looks like on the other side of a wall, I could look at the wall and then look thru it.  Because of my heightened senses, I could hear, feel, smell and sense things in the next room even if they had been moved since I was in the room last.  It was as if the 3-dimensional qualities of my vision were expanded to include hearing, smell and feelings.  I could “see” – in my mind’s eye – my dog as he was walking across the next room until he walked in my door and was visible.

Remote viewing (RV) – or at least something like RV.  My hyper senses and this x-ray vision combined to give me the ability to look outside my house and then into other houses and down the street.  The limit seemed to be about ¼ mile, but it was amazing.  I spent hours nosing around inside my neighbors’ houses, listening to their conversations and watching them do stuff.

These were all just variations on the hyper senses that my mind was giving me, but it was rapidly going beyond that.  I added these improved senses to the near-photographic memory of what I could do and see now, and added in a nearly perfect recall of my own memory, to create some really bizarre capabilities.  For instance, I discovered I could revisit a moment in my past and see it in detail even beyond what I experienced the first time the event happened.  I remembered going camping with my Dad when I was 12.  I could smell the pine trees and hear the nearby river and feel the heat of the sun on my face.  I could imagine the scene to the point of seeing it in 3-D and being able to walk around the scene and see myself back then.  I could play it like a videotape and slow or stop the action to study and see things that my senses recorded but that I had not remembered in all these years.  It was utterly amazing.

I was in an accident in which I was hit by a school bus that ran a stop sign.  I cannot remember anything that happened that day from before I got up in the morning until I woke up in the hospital room.  I used these newfound senses of the overlapped mind to revisit that day and follow myself up to the accident.  Just as the accident happened, my visual memory of it went blank but I still was mentally recording the sounds and smells and feelings of the events around me.  I was able to recreate the scene as if I was watching from above looking down on the scene and saw what happened to me.  I heard the bus driver crying when she thought she had killed me.  I felt the ambulance guys working on me and the ride in the gurney to the hospital and the loud sound of the siren blasting.  It was incredible to relive those long forgotten moments.  These discoveries kept me busy for weeks but I wanted to push the limits even more.   

Remembering my earlier studies of the brain waves called delta, theta, alpha, beta and gamma, and that they are related to various thought patterns in the brain, I wanted to see if I could sense these waves.  I invited a friend over for a late-night dinner.  She is a sound sleeper and I have had her over many times before, so I knew I could count on her being a good test subject.  She slept in the guest room, and after several glasses of wine, I knew she would sleep thru almost any noise.

I waited until she was well asleep and then entered my dream state and quickly moved into my conscious control state.  While remaining asleep, I walked into her room and sat in a chair by her bed.  I then concentrated on visualizing her brain waves.  To my surprise, I began to see a totally new dream scene.  It took me a minute or two to realize I was seeing her dream.  I was looking at her dream as if it was a 3-D movie being projected on a screen in front of me.  She was dreaming of swimming on a beach.  Within a minute or two, her dream changed to a completely different dream.  Now she was back in college and was studying for a test with some friends.  In another minute or two, it changed again.  I sat there for two hours watching her dream snippets come and go. 

She was a good friend, so I did not try to enter her dreams or influence them in any way.  I somehow thought that was not a good thing to do to a friend.  However, in the weeks that followed, I tried this same effort on several other people – mostly neighbors that I knew very little.  Soon, I was able to do it without leaving my bed.  I could use my remote viewing and hyper senses to “see” the person’s brain waves and then see their dreams.  I was even able to do this with my neighbor, but the range was very limited – to about 100 feet.  Once, I even went to a motel and roamed all of the nearby rooms and explored their dreams.

What I found, over time, is that most people dream in very short dreams that often seem disjointed and illogical.  It sometimes seemed like I was seeing a 2-minute excerpt of a longer movie while the film kept skipping and jumping thru scenes.  Only about once in every 25 or 30 tries did I find someone who would dream a coherent story line that I could follow and find interesting.  This disappointment prompted me to move into other new directions.

I figured if I could use my hyper-senses on someone asleep in the next room or next house, why not try it while they are awake?  I began tweaking my sound machines and lights and searching for the right conditions.  I found that noise from them talking or watching TV or listening to music overwhelmed the sensation of vibrations and made it impossible to read anything.  It was like trying to feel your heart beat while riding on a motorcycle.  I began looking for someone that liked to sit in a quiet room and remain awake.  I began buying books for all my neighbors in the hopes they would sit and read.

Unfortunately, this too was a disappointment.  I found that most people that are reading are paying attention to what they are reading – duh!  The effect was that the image in their mind was simply a dreamlike version of what they were reading and not of any thoughts that they might have about their lives.  Once in a while, I would get someone that would project themselves into the story, but even then, the story moved slowly – at the speed they were reading – and had very few personal insights.  It was time to move on to other efforts.

We are now up to about a year ago.  I have been doing this for more than ten years and have found it has almost dominated my life.  I started it after I retired and I retired with enough money to never have to work again but I have actually done little else.  I have not interacted with my neighbors and I have done relatively little community service or helping of others other than sending money to charities.  I decided I wanted to get active again outside of myself but I was not sure how or what I could do and I wanted to try to do something with my new found skills.  Everything I could think of seemed like it could not bridge the gap between my mental world and the real world.  I could imagine and dream and sense all kinds of things and situations but there was no way to translate that into the real world.  So that became my goal – find some way to manifest my mental capabilities in the real world. 

I got out my old EEG again and began to experiment with the brain-to-computer interface again.  This time, I was in full control of my dream state, my subconscious and my body.  I quickly found I could create code-sequence patterns in my delta waves.  I was able to consciously send patterns of two and three cycles of delta waves into the A-D converter and have the computer interpret them.  In my dream, I imagined that I created a typewriter with a cord that ran first to my head and then to an antenna, so that as I typed, my brain waves were being “transmitted” out in the proper pattern for each letter.  I then programmed my computer to recognize about 40 different code patterns based on the old teletype codes they used to use in TTY devices back in the 1950’s.  This gave me a way to type in my dreams but have the actual printing occur in the real world on a real computer.
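The decoding side of this is easy to picture in code.  Here is a minimal sketch of the idea, with a made-up three-pulse codebook standing in for the ~40 patterns I actually trained (the real mapping isn’t reproduced here):

```python
# A toy version of the pulse-pattern-to-character decoder. Each detected
# delta-wave burst is reduced to a tuple of long/short cycles (1/0), then
# looked up in a codebook. The patterns below are hypothetical examples.
CODEBOOK = {
    (1, 0, 1): "A",
    (1, 1, 0): "B",
    (0, 1, 1): "C",
    # ... the remaining patterns would fill out the alphabet
}

def decode(bursts) -> str:
    """Translate detected burst patterns into printable text."""
    return "".join(CODEBOOK.get(tuple(b), "?") for b in bursts)

print(decode([(1, 0, 1), (1, 1, 0)]))  # -> "AB"
```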

This is a simplified version of what took me a year to do but I eventually got it working.  I found that I could type in my dream state a whole lot faster than I can type in real life.  In fact, what you are reading right now has taken me about 5 minutes to type.  Several other stories on this blog were also typed this way.  I am going to start a novel next week and hope to have it done by the first of March (it is now Feb 15th). 

I hope you have enjoyed this description of my exploits over the past decade in lucid dreaming.  I can only say that if I can do it, anybody can.  Give it a try.

Nuclear Attack Communications

While I was active duty Navy, I was involved in a lot of strategic communication studies that involved researching and computer modeling to find ways to guarantee reliable communications under all circumstances – including things like the worst possible weather, earthquakes, terrorists and nuclear attack. We called it Communications Continuity.  The objective is to have assured communications in the trans- and post-attack phases of a nuclear war or other crisis.  That means that you cannot rely on any fixed installation, since it will be bombed in the opening salvo.  Likewise, all of the satellites and fixed communications centers, including phone and computer lines and all of the major nodes on the internet, will be destroyed.

Despite all this, there are entire networks that are designed to survive and work even after a major attack.  The most reliable is to use low frequency radio waves in the ELF range – Extremely Low Frequency, way below the AM broadcast band.  These frequencies have two very good characteristics:  they will punch thru the static and noise created by atom bomb blasts and they will penetrate into the water to reach submerged submarines.  Unfortunately, they also have two bad characteristics.  To make use of ELF effectively, you need a BIG antenna to receive and transmit, and you need a whole bunch of power – like in the multiple-megawatt range.

To receive ELF, subs use a trailing wire antenna that can drag behind the submerged sub by more than a mile, if needed.  Aircraft (like SAC bombers) have drop-down wires that can reach out 18,000 feet to snag an ELF signal.  Since these guys mostly receive only, they do not need the power of a megawatt transmitter to respond to these signals.  But someone has to have that power and a really big antenna.  It’s there, right under your nose, and you have probably seen it and did not know it.

One of the backups to the backups that the military uses to send ELF messages is the power lines that normally deliver power to your homes and businesses.  By cutting these wires at two ends and making some other minor changes, they can turn a stretch of highway telephone-pole power wires into a very long ELF antenna.  This lets them avoid using tuning systems to pump all that power into a ¼-wavelength antenna or a shorter, less efficient antenna.

The power comes from two 18-wheeler trucks.  One has fuel and a small command post and the other is one huge generator – capable of creating about 20 megawatts of electrical power.  A third vehicle is usually an RV with the crew quarters and other support.  These three vehicles travel in teams around the US – constantly in motion – driving along routes that have been surveyed to make ideal ELF antennas.  They are all disguised as normal 18-wheelers and have all the fake papers to let them move among all the other truckers on the road.

At last count, there are 24 of these teams covering an area of 350,000 square miles from Alaska to Florida and all of Canada.  They never stop.  There are dozens of crews that are rotated out every 45 days at special bases where they can get equipment spares and run tests.

Next time you have a totally unexplained power outage, look for two 18-wheelers and an RV traveling together or near each other.  You might have just witnessed a test of the emergency communications network.

Think this is far-fetched?  Consider this.
Each military and intelligence service has an office dedicated to this subject, as well as several entire organizations (DIA, DISA, DSS, NRO, NSA, CSC, etc.), but the overall office within DoD is the Assistant Secretary of Defense for Communications, Command, Control and Intelligence (ASD-C3I). There is also a new office called the Assistant Secretary of Defense for Networks and Information Integration (ASD-NII).
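As a sanity check on why ELF antennas have to be so enormous: taking 76 Hz as a representative ELF frequency (the figure usually cited for the Navy’s Project ELF), the wavelength works out to thousands of kilometers, so even a quarter-wave antenna is far beyond any single structure:

\[
\lambda = \frac{c}{f} = \frac{3\times10^{8}\ \text{m/s}}{76\ \text{Hz}} \approx 3.9\times10^{6}\ \text{m} \approx 3{,}900\ \text{km}, \qquad \frac{\lambda}{4} \approx 990\ \text{km}
\]

That is why real ELF installations settle for electrically short antennas and make up for the inefficiency with megawatts of power.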

Achieving the Speed of Light NOW

Scientists have been telling us for some time that it is impossible to achieve the speed of light.  The formula says that mass goes to infinity as you approach C, so the amount of power to go faster also rises to infinity.  The theory also says that time is displaced (slows) as we go faster.  We have “proven” this by tiny fractions of variations in the orbits of some of our satellites and in the orbit of Mercury.  For an issue within physics that is seen as such a barrier to further research, shouldn’t we see a more dramatic demonstration of this theory?  I think we should, so I made one up.

Let us suppose we have a weight on the end of a string.  The string is 10 feet long and we hook it up to a motor that can spin at 20,000 RPM.  The end of the string will travel 62.8 feet per revolution, or 1,256,637 feet per minute.  That is 3.97 miles per second, or an incredible 14,280 miles per hour.  OK, so that is only .0021% of C, but for only ten feet of string and a motor that we can easily create, that is not bad.
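For anyone who wants to check the arithmetic, the tip speed is just the circumference times the rotation rate:

\[
v = 2\pi r \cdot \frac{\text{RPM}}{60} = 2\pi \,(10\ \text{ft}) \cdot \frac{20{,}000}{60}\ \text{s}^{-1} \approx 20{,}944\ \text{ft/s} \approx 3.97\ \text{mi/s} \approx 14{,}280\ \text{mph}
\]

Against c ≈ 186,282 mi/s, that is 3.97 / 186,282 ≈ 0.0021% of the speed of light.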

There are motors that can easily get to 250,000 RPM and there are some turbines that can spin up to 500,000 RPM.  If we can explore the limits of this experimental design, we might find something interesting.   Now let’s get serious. 

Let’s move this experiment into space.  With no gravity and no air resistance, the apparatus can function very differently.  It could use string or wire or even thin metal tubes.  If we control the speed of the motor so that we do not exceed the limitations imposed by momentum, we should be able to spin something pretty fast.

Imagine a motor that can spin at 50,000 RPM with a string mechanism that can let out the string from the center as the speed slowly increases.  Now let’s, over time, let out 1 mile of string while increasing the speed of rotation to 50,000 RPM.  The end will now be traveling at nearly 19 million miles per hour, or 2.82% of C.

If we boost the speed up to 100,000 RPM and can get the length out to 5 miles, the end of the string will be doing an incredible 188,495,520 miles per hour.  That is more than 28% of the speed of light.
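These numbers are easy to verify yourself.  A minimal sketch:

```python
import math

C_MPH = 670_616_629  # speed of light in miles per hour

def tip_speed_mph(radius_miles: float, rpm: float) -> float:
    """Tip speed = circumference (2*pi*r) times revolutions per hour."""
    return 2 * math.pi * radius_miles * rpm * 60

# The three scenarios from the text: 10 ft, 1 mile, and 5 miles of string.
for radius, rpm in [(10 / 5280, 20_000), (1, 50_000), (5, 100_000)]:
    v = tip_speed_mph(radius, rpm)
    print(f"r = {radius:g} mi at {rpm:,} RPM -> {v:,.0f} mph ({v / C_MPH:.2%} of c)")
```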

What will that look like?  If we have spun this up correctly, the string (wire, tubes, ?) will be pulled taut by the centrifugal force of the spinning.  With no air for resistance and no gravity, the string should be a nearly perfect vector outward from the axis of rotation.  The only force that might distort this perfect line is momentum, but if we have spun this setup slowly, so that the weight at the end of the string is pulling the string out of the center hub, then it should be straight.

I have not addressed the issue of the strength of the wire to withstand the centrifugal force of the spinning weight.  Not that it is trivial, but for the purposes of this thought experiment, I am assuming that the string can handle whatever weight we use.

Let us further suppose that we have placed a camera exactly on the center of the spinning axis, facing outward along the string.  What will it see?  If the theory is correct, then despite the string being pulled straight by the centrifugal force, I believe we will see the string curve backward, and at some point it will disappear from view.  The reason is that as you move out on the string, its speed gets faster and faster and closer to C.  This will cause the relative time at each increasing distance from the center to be slower and appear to lag behind.  When viewed from the center-mounted camera, the string will curve.

If we could use some method to make the string visible for its entire length, its spin would cause it to eventually fade from view when the time at the end of the string is so far behind the present time at the camera that it can no longer be seen.  It is possible that it might appear to spiral around the camera, even making concentric overlapping spiral rings. 
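There is a simple way to formalize both effects, assuming an angular speed ω (in radians per second).  Light from a point at radius r takes r/c to reach the hub camera, during which the string turns through ωr/c, so the apparent shape is a spiral; separately, special relativity gives the time dilation at each radius:

\[
\theta(r) = -\frac{\omega r}{c} \ \ \text{(apparent lag angle, an Archimedean spiral)}, \qquad \gamma(r) = \frac{1}{\sqrt{1 - (\omega r / c)^2}}
\]

The first expression captures the curving and spiraling appearance described above; the second says a clock at the string’s end ticks slower than the hub clock by the factor γ.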

Suppose synchronized clocks were placed at the center and at the end of the string, with a camera at both ends so we could view the two images side-by-side at the hub.  Each camera would view a clock that started out synchronized; the only difference would be that one is now traveling at some percentage of C faster than the other.  I believe they would read different times as the spin rate increased.

But now here is a thought puzzle.  Suppose there is an electronic clock at the end of the string as described by the above paragraph but now instead of sending its camera image back to the hub, we send its actual reading by wires embedded in the string back to the hub where it is read side-by-side with a clock that has been left at the hub.  What will it read now?  Will the time distortion alter the speed of the electrons so that they do NOT show a time distortion at the hub?  Or will the speed of the electricity be constant and thus show two different times?  I don’t know.

Plato Rises!

This is one of two email exchanges I had with some other writer/scientists in which we explored the outer edge of science.  You have to remember that way back then, there were some ideas that have since died off – like the monuments and face on Mars and the popularity of UFOs. 

Some of this is part of an on-going dialog so it may seem like there is a part missing because this is a response to a previously received message.  I think you get the gist of what is going on.  Despite the fact that this is mostly from 9 years ago, the science described and projected then is still valid or has not yet been proven wrong. You might find these interesting. 

===============================

October 13, 1998

I want to thank you for letting me post your article about gravity shielding that appeared in the March ‘98 WIRED magazine.  Your comments on my article about lightning sprites and the blue-green flash are also appreciated.  In light of our on-going exchange of ideas, I thought you might be interested in some articles I wrote for my BBS and WEB forums on “bleeding edge science” that I hosted a while back.  Some of these ideas and articles date back to the mid-90’s, so some of the references are a little dated and some of the software that I use now is generally available as a major improvement over what I had then.

What I was involved with then can be characterized by the books and magazines I read: a combination of Skeptical Inquirer, Scientific American, Discover and Nature.  I enjoyed the challenge of debunking some space cadet who had made yet another perpetual motion machine or yet another 250-mile-per-gallon carburetor – both claiming that the government or big business was trying to suppress their inventions.  Several of my articles were printed on the bulletin board that pre-dated the publication of the Skeptical Inquirer.

I particularly liked all the far-out inventions attributed to one of my heroes – Nikola Tesla.  To hear some of those fringe groups, you’d think he had to be an alien implant working on an intergalactic defense system.  I got more than one space cadet upset with me by citing real science to shoot down his gospel of zero-point energy forces and free energy.

===============================

These articles deal with the fringe in that I was addressing the “science” behind UFO’s.

  I have done some analysis on life in our solar system other than Earth, and the odds against it are very high.  At least, life as we know it.  Even Mars probably did not get past early life stages before the O2 was consumed.  Any biologist will tell you that in our planet’s evolution, there were any number of critical thresholds of presence or absence of a gas or heat or water that, if crossed, would have returned the planet to a lifeless dust ball.  Frank Drake’s formulas are a testament to that.  The only reason that his formulas are used to “prove” life exists is the enormous number of tries that nature gets in the observable universe and over so much time.

  One potential perspective is that what may be visiting us as “UFO’s” could be a race or several races of beings that are 500 to 25,000 years more advanced than us.  Given the age of the universe and the fact that our sun is probably second or third generation, this is not difficult to understand.  Some planet somewhere was able to get life started before Earth, and they are now where we will be in the far distant future.

  Stanley Miller proved that life, as we know it, could form out of organic and natural events during the normal evolution of a class-M planet.  But Drake showed that the chances of that occurring twice in one solar system are very low.  If you work backwards from their formulas, given the event of Earth as an input to some solution of the equations, you would need something like 100 million planets to get even a slight chance of another planet with high-tech life on it.

  Taking this into consideration and then comparing it to the chances that the monuments on Mars are natural formations, or weighing any other claim of extraterrestrial life within our solar system, you must conclude that there is virtually no chance for other life in our solar system.  Despite this, there are many that point to “evidence” such as the appearance of a face and pyramids in Mars photographs.  It sounds a lot like an updated version of the “canals” that were first seen in the 19th century.  Now we can “measure” these observations with extreme accuracy – or so they would have you believe.

The so‑called perfect measurements and alignment that are supposedly seen on the pyramids and “faces” are very curious since even the best photos we have of these sites have a resolution that could never support such accuracy in measurements.  When you get down to “measuring” the alignment and sizes of the sides, you can pretty much lay the compass or ruler anywhere you want because of the fuzz and loss of detail caused by the relatively poor resolution.  Don’t let someone tell you that they measured down to the decimal value of degrees and to within inches when the photo has a resolution of meters per pixel!

   As for the multidimensional universe, I believe Stephen Hawking when he said that there are more than 3 dimensions; however, for some complex mathematical reasons, a fifth dimension would not necessarily have any relationship to the first four, and objects that have a fifth dimension would have units of the first four (l, w, h & time) that are very small – on the order of atomic units of scale.  This means that, according to our present understanding of the math, the only way we could experience more than 4 dimensions is to be able to be reduced to angstrom sizes and to withstand very high excitation from an external energy source.   Let’s exclude the size issue for a moment since that is a result of the math model that we have chosen in the theory and may not be correct.

  We generally accept that time is the 4th dimension after l, w, and h which seem to be related as being in the same units but in different directions.  If time is a vector (which we believe it is) and it is so very different than up, down, etc, then what would you imagine a 5th dimension unit to be?

  Most people think of “moving” into another dimension as being just some variation of the first 4, but this is not the case.  The next dimension is not capable of being understood by us because we have no frame of reference.

Hawking makes a much better explanation of this in one of his books, but suffice it to say that we do not know how to explore this question because we cannot conceive of the context of more than 4 dimensions.  The only way we can explore it is with math – we can’t even graph it because we haven’t got a 5-axis coordinate system.  I have seen a 10-dimensional formula graphed, but they did only 3 dimensions at a time.  Whatever relationship a unit called a “second” has with a unit called a “meter” may or may not be the same relationship the meter has with “???????” (whatever the units of the 5th dimension are called).  What could it possibly be?  You describe it for me, but don’t use any reference to the first 4 dimensions.  For instance, I can describe time or length without reference to any of the other known dimensions.  The bottom line is that this is one area where even a computer cannot help, because no one has been able to give a computer an imagination…yet.

  As for longevity, there has been some very serious research going on in this area, but it has recently been hidden behind the veil of AIDS research.  There is a belief that the immune system and other recuperative and self-correcting systems in the body wear out and slowly stop working.  This is what gives us old-age skin and gray hair.  This was an area that was studied very deeply up until the early 1980’s.  Most notable were some studies at the U. of Nebraska that began to make some good progress in slowing biological aging by a careful stimulation and supplementation of naturally produced chemicals.  When the AIDS problem surfaced, a lot of money was shifted into AIDS research.  It was argued that the issues related to biological aging were related to the immune issues of AIDS.  This got the researchers AIDS money and they continued their research; however, they want to keep a very low profile because they are not REALLY doing AIDS research.  That is why you have not heard anything about their work.

Because of my somewhat devious links to some medical resources and a personal interest in the subject, I have kept myself informed and have a good idea of where they are, and it is very impressive.  Essentially, in the inner circles of gerontology, there is general agreement that the symptomatology of aging is due to metabolic malfunction and not cell damage.  This means that it is treatable.  It is the treatment that is being pursued now and, as in other areas of medicine in which there is such a large multiplicity of factors affecting each individual’s aging process, successes are made in finite areas, one area at a time.  For instance, senility is one area that has gotten attention because of its mapping to metabolic malfunction induced by the presence of metals, along with factors related to the emotional environment.  Vision and skin condition are also areas that have had successes in treatments.

  When I put my computer research capability to look at this about a year ago, what I determined was that by the year 2024, humans will have an average life span of about 95‑103 years.  It will go up by about 5% per decade after that for the next century, then it will level out due to the increase of other factors.

Are They Really There?

Life is Easy to Make:

 Since 1953, with the Stanley Miller experiment, we have, or should have, discarded the theory that we are unique in the universe.  Production of organic life and even DNA and RNA has been shown to occur in simple mixtures of hydrogen, ammonia, methane and water when exposed to an electrical discharge (lightning).  The existence of most of these components has been frequently verified by spectral analysis of distant stars but, of course, until recently we couldn’t see the stars’ planets.  Based on the most accepted star and planet formation theories, most star systems would have a significant number of planets with these elements and conditions.

 Quantifying the SETI

 A radio astronomer, Frank Drake, developed some equations that were the first serious attempt to quantify the number of technical civilizations in our galaxy.  Unfortunately, his factors were very ambiguous, and various scientists have produced numbers ranging from 1 to 10 billion technical civilizations in just our galaxy.  This condition of a formula is referred to as an unstable or ill-conditioned system.  There are mathematical techniques to reduce the instability of such equations.  I attempted to do so to quantify the probability of the existence of intelligent life.

 I approached the process a little differently.  Rather than come up with a single number for the whole galaxy, I decided to relate the probability to distance from Earth.  Later, I added directionality.

 Using the basic formulas Drake used to start, I added a finite stochastic process using conditional probability.  This produces a tree of event outcomes for each computed conditional probability.  (The conditions being quantified were those in his basic formula: rate of star formation; number of planets in each system with conditions favorable to life; fraction of planets on which life develops; fraction of planets that develop intelligent life; fraction of those that evolve technical civilizations capable of interstellar communication; and the lifetime of such a civilization.)
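Those are essentially the factors of Drake’s famous equation, usually written as:

\[
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
\]

where R* is the rate of star formation, f_p the fraction of stars with planets, n_e the number of life-favorable planets per system, f_l, f_i and f_c the fractions developing life, intelligence and communications technology, and L the lifetime of a communicating civilization.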

 I then layered one more parameter onto this by increasing the probability of a particular tree path inversely with one over the square of the distance – that is, in proportion to the square of the distance.  This added a conservative estimate for the increasing probability of intelligent life as the distance from Earth increases and more stars and planets are included in the sample size.

 I Love Simulation Models

 I used the standard values used by Gamow and Hawking in their computations; however, I ignored Riemannian geometry and assumed a purely Euclidean universe.  Initially, I assumed the standard cosmological principles of homogeneous and isotropic distributions.  (I changed that later.)  Of course, this produced thousands of probable outcomes, but by using a Monte Carlo simulation of the probability distribution and the initial computation factors of Drake’s formula (within reasonable limits), I was able to derive a graph of probability of technical civilizations as a function of distance.
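To give a flavor of the Monte Carlo step, here is a minimal sketch.  The sampling ranges are illustrative stand-ins – my original input ranges aren’t reproduced here – so only the shape of the approach matters, not the numbers it prints:

```python
import random

# One Monte Carlo draw of Drake's equation with assumed factor ranges.
def drake_sample() -> float:
    R_star = random.uniform(1, 10)      # star formation rate (stars/year)
    f_p    = random.uniform(0.2, 1.0)   # fraction of stars with planets
    n_e    = random.uniform(0.5, 3.0)   # habitable planets per such system
    f_l    = random.uniform(0.1, 1.0)   # fraction where life develops
    f_i    = random.uniform(0.01, 1.0)  # fraction developing intelligence
    f_c    = random.uniform(0.01, 0.5)  # fraction achieving communication
    L      = random.uniform(1e2, 1e6)   # civilization lifetime (years)
    return R_star * f_p * n_e * f_l * f_i * f_c * L

samples = sorted(drake_sample() for _ in range(100_000))
print("median N:", samples[len(samples) // 2])
print("90% interval:", samples[5_000], "to", samples[95_000])
```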

 But I Knew That

 As was predictable before I started, the graph is a rising, non‑linear curve, converging on 100%.  Even though the outcome was intuitive, what I gained was a range of distances with a range of corresponding probabilities of technical civilizations.  Obviously, the graph converges to 100% at infinite distances but surprisingly, it is above 99% before leaving the Milky Way Galaxy.  We don’t even have to go to Andromeda to have a very good chance of there being intelligent life in space.  Of course, that is not so unusual since our galaxy may have about 200 billion stars and some unknown multiple of planets.
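One standard way to formalize why such a curve must rise toward 100% is to treat civilizations as uniformly scattered with some mean spatial density ρ; then the chance of at least one within distance R is:

\[
P(R) = 1 - e^{-\rho \cdot \frac{4}{3}\pi R^{3}}
\]

which starts near zero and converges on 100% as R (and the enclosed volume) grows – exactly the behavior of the graph described above.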

 Then I Made It Directional

 I toyed with one other computation.  The homogeneous and isotropic universe used by Einstein and Hawking is a mathematical convenience to allow them to relate the structure of the universe to their theories of space-time.  These mathematical fudge-factors are not consistent with observation at small orders of magnitude in distance from Earth – out to the limits of what we can observe – about 15 billion light years.  We know that there is inhomogeneity, or lumps, in the stellar density at these relatively close distances.  The closest lump is called the Local Group, with 22 galaxies, but it is on the edge of a supercluster of 2500 galaxies.  There is an even larger group called the Great Attractor that may contain tens of thousands of galaxies.

So I altered my formula to take into account the equatorial-system direction (right ascension & declination) of the inhomogeneous clustering.  Predictably, this just gave me a probability of intelligent life based on a vector rather than a scalar measure.  It did, however, move the distance for any given probability much closer – in the direction of clusters and superclusters.  So much so that at about 351 million light years, the probability is virtually 100%.  At only about 3 million light years, the probability is over 99%.  That is well within the Local Group of galaxies.

 When you consider that there are tens of billions of stars and galaxies within detection range of Earth, and some unknown quantity beyond detection – it is estimated that the stars number as many as a 1 followed by 21 zeros; that is more than all the grains of sand in all the oceans, beaches and deserts in the entire world – and that in each of those galaxies there are billions of stars, you can begin to see that the formula to quantify the number of technical civilizations in space results in virtually 100% no matter how conservative you make the input values.  It can do no less than prove that life is out there.