Category Archives: Science… "Fiction?"

Nearly everything that H.G. Wells wrote, and most of what Isaac Asimov wrote, was less fiction than prediction – a logical extension of science and discovery. If we take our minds to the science that lies just beyond the current leading edge, what will we see?

2011 – The Year the Government Got Healthy

The discoveries and creations of 2011 will have far-reaching effects for decades to come. These advances in biology, nanotechnology, computer science, materials science and programming are truly 21st century science. The latest issue of Discovery magazine details the top stories as they have been released to the public, but, as you have learned from this blog, there is much that is never released. Every research study or major development lab in the US that is doing anything of interest to any part of the government is watched and monitored very closely. Obviously, every government-funded research project has government "oversight" that keeps tabs on the work, but this monitoring applies to every other civilian R&D lab and facility as well. I have described the effects of this monitoring in several of my reports but I have not specifically spelled it out. Now I will.

The government has a network of labs and highly technically trained spies that monitor all these civilian R&D projects as they are developed. These guys are a non-publicized branch of the Federal Laboratory Consortium (FLC), which provides cover for the operation behind the guise of supporting technology transfer from the government to the civilian market – when, in fact, its real goal is just the opposite.

The labs in the FLC are a mix of classified portions of existing federal labs – such as NRL, Ft. Detrick, Sandia, Argonne, Brookhaven, Oak Ridge, PNNL, Los Alamos, SEDI and about a dozen others – and a lot of government-run and -controlled civilian labs such as Lawrence Livermore, NIH and dozens of classified college, university and corporate labs that give the appearance of being civilian but are actually almost all government manned and controlled.

The spy network within the FLC is perhaps its least known aspect. Not even Congress knows much about it. It is based in Washington but has offices and data centers all over. The base operation comes under an organization within the Dept. of Homeland Security (DHS) called the Homeland Security Studies and Analysis Institute (HSSAI). The public operations of HSSAI are run by Analytic Services, Inc., but the technology spy activities are run by the Office of Intelligence and Analysis (OIA) Division of DHS.

Within the OIA, the FLC technology spies come under the Cyber, Infrastructure and Science Director (CIS) and are referred to as the National Technology Guard (NTG), which is run like a quasi-military operation. In fact, most of these NTG spies were trained by the Department of Defense (DoD) and many are simply on loan from various DoD agencies.

This is a strange and convoluted chain of command, but it works fairly efficiently, mostly because the lines of information flow, funding and management are very narrowly defined by the extremely classified nature of the work. What all these hidden organizations, fake fronts and secret labs do is allow the funding for these operations to be folded into numerous other budget line items and disguised behind very official, humanitarian and publicly beneficial programs. This is necessary because some of the lab work they get involved in can become quite expensive – measured in the billions of dollars.

The way this network works is fairly simple. Through the FLC and other public funding and information resources, leading-edge projects are identified within HSSAI. HSSAI then decides to "oversight", "grab" or "mimic" the details of the R&D project. If they implement "oversight", that means OIA and CIS keep records of what the R&D project is doing and how it is progressing. If they "grab" it, that means the NTG is called upon to obtain copies of everything created, designed and/or discovered during the project. This is most often done with cyber technology – hacking the computers of everyone involved in the project. It is the mimic that gets the most attention in the OIA.

If a project is tagged as a mimic or “M” project, the HSSAI mates a government lab within the FLC to be the mimic of the R&D project being watched. The NTG usually embeds spies directly in the civilian R&D project as workers and the OIA/CIS dedicates a team of hackers to grab everything and pass it directly to the mated FLC lab. The NTG spies will also grab samples, photos, duplicates and models of everything that is being accomplished.

What is kind of amazing is that this is all done in real time – that is, there is almost no delay between what is being done in the civilian R&D lab and what is being done to copy that work in the government lab. In fact, the payoff comes when the government lab can see where a project is going and can leap ahead of the civilian R&D lab in the next phase of the project. This is often possible because of the constraints of funding, regulations, laws and policy that the civilian labs must follow but the government labs can ignore. This is especially true in the biological sciences, in which civilian labs must follow mandated protocols that can sometimes delay major breakthroughs by years. For instance, a civilian lab has to perform mouse experiments and then monkey experiments and then petition for human testing. That process can take years. If a treatment looks promising, the government lab can skip straight to human testing – and has done so many times.

Let me give you an example from recent news. The newest advances in science are being made in the convergence areas between two sciences. Mixing bioscience with any other science is called bioconvergence, and it is the most active area of new technologies. This example is the bioconvergence of genetics and computers. The original project was begun as a collaboration between a European technology lab based in Germany and an American lab based in Boston. The gist of the research is that they created a computer program that uses a series of well-known cell-level diagnostic tests to determine whether a cell is a normal cell or a cancer cell. The tests combine a type of genetic material called microRNA with a chemical marker that can react to six specific microRNAs. The markers can then be read by a computer sensor that can precisely identify the type of cell. This is accomplished by looking at the 1,000+ different microRNA sequences in the cell. The computer knows what combination of too much or too little of the six microRNAs identifies each distinct type of cell.
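
To make that logic a little more concrete, here is a minimal sketch of the kind of threshold test being described. The microRNA names and threshold values are invented placeholders purely for illustration; the real system reportedly weighs 1,000+ sequences, not six hard-coded rules.

# A minimal sketch of the classification logic described above.
# The microRNA names and thresholds are made up for illustration only.
CANCER_SIGNATURE = {
    "miR-A": ("high", 500.0),   # expression must be above this level
    "miR-B": ("high", 300.0),
    "miR-C": ("low", 50.0),     # expression must be below this level
    "miR-D": ("low", 80.0),
    "miR-E": ("low", 40.0),
    "miR-F": ("high", 250.0),
}

def is_target_cell(expression_levels):
    """Return True only if every marker matches the signature."""
    for mirna, (direction, threshold) in CANCER_SIGNATURE.items():
        level = expression_levels.get(mirna, 0.0)
        if direction == "high" and level <= threshold:
            return False
        if direction == "low" and level >= threshold:
            return False
    return True

# One cell's readout from the sensor (arbitrary numbers):
cell = {"miR-A": 612.0, "miR-B": 340.0, "miR-C": 12.0,
        "miR-D": 33.0, "miR-E": 8.0, "miR-F": 410.0}
print(is_target_cell(cell))   # True -> flag this cell for the kill gene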

Once that is accomplished, they can define, identify and isolate specific individual cancer cells. If it is a cancer cell, the same program creates a gene that is custom designed to turn off the reproductive ability of that specific cancer cell. This synthetic gene, for a protein called hBxi, promotes cell death by stopping the cell's ability to split, divide and/or reproduce. There are several chemical safeguards built into the process that prevent healthy cells from being targeted. The whole project is being called the "Geniom RT Analyzer for microRNA quantification analysis for biomarkers of disease and treatment," but the lab guys just call it "biologic" for short.

Nearly all of the separate aspects of this project are well known, but in the past it has taken months or years to cross-index the various aspects of the 1,000 or more microRNA sequences and then months or years more to devise a response. Using this biologic computer program mated to a biochemical logic "circuit", the process takes a few hours. The biocomputer analyzes millions of combinations and then defines exactly how to tag and destroy the bad cells.

In keeping with standard protocols, human testing will begin around 2015 and it could take until 2025 before this is a commercially available treatment for cancer. FLC identified the value of this treatment very early on and created a mimic lab at Ft. Detrick, Maryland, at the National Interagency Confederation of Biological Research (NICBR). The NICBR has a long history of managing non-weapons-related biological research. It provides an easy funding path and a nice cover for some of the most advanced and most classified medical research performed by the US.

The NICBR mimic lab kept pace with the progress being made by the biologic project until it could project ahead and see the benefits in other areas. NICBR, of course, had the computer analysis program as soon as it was completed and had duplicated the biochip and Geniom analyzer hardware just as fast. Once it had proved that the process worked, it began to make much greater progress than the biologic labs because it had more money, fewer limitations and access to immediate human test subjects. As successes began to pile up, they added more staff to help modify the biologic system by creating new biochips and revising the Geniom analyzer and analysis software. Within a few months in mid-2011, they had geared up to a staff of over 100 using four different labs at Ft. Detrick, churning out new capabilities on a weekly and then on a daily basis.

By the time the biologic lab was making its preliminary reports public in Science magazine in September 2011, the NICBR lab was just finishing its first human tests, which were entirely successful. By the middle of October 2011, they had all but eliminated false positives and began optimizing the circuits to identify new cell types. Using a flood of new and redefined biochips and modifications to the software, they expanded the microRNA analysis to other complex cell states and, by the middle of November, had successful tests on 16 different types of cancer and were adding others at a rate of 3 to 5 per week, while parallel efforts worked on other applications of the same biologic process.

Since the core analysis is actually a computer program, and the microRNA sequences defined by the multiplex primer extension assay (MPEA) for a vast number of different cell types are well known, this process can be expanded to cover other applications just by altering the computer program, the biochip and MPEA, and the synthetic protein gene that is applied. They also quickly discovered that the computer processing power was there to perform many of these tests simultaneously by using multiple biochips, MPEAs and CCD cameras for reading the biochips. This allowed analysis of dozens of cancers and other cell types at once, letting the computer define and concoct the appropriate response for each.
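
As a rough illustration of the "many tests at once" idea, the sketch below runs the same kind of classification over several chip readouts in parallel. The function names and data are placeholders I made up; this is not the actual NICBR or Geniom software.

# Illustrative only: scanning several biochip readouts in parallel.
# classify_chip() stands in for the full microRNA analysis of one chip.
from concurrent.futures import ProcessPoolExecutor

def classify_chip(chip_readout):
    # Placeholder rule: flag any cell whose "miR-A" level is above 500.
    return [cell_id for cell_id, levels in chip_readout.items()
            if levels.get("miR-A", 0.0) > 500.0]

def analyze_all(chip_readouts):
    # One worker process per chip, so dozens of chips can be read at once.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(classify_chip, chip_readouts))

if __name__ == "__main__":
    chips = [{"cell-1": {"miR-A": 612.0}, "cell-2": {"miR-A": 120.0}},
             {"cell-3": {"miR-A": 505.0}}]
    print(analyze_all(chips))   # [['cell-1'], ['cell-3']]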

The NICBR report at the end of November described their latest extension of the applications of this technology to regenerative medicine to allow the almost immediate repair of bad hearts, destroyed lung tissues and other failed organs. Essentially any cell in the body that can be uniquely defined by its microRNA can be targeted for elimination, replacement or enhancement. The NICBR lab is expanding and adding new applications almost daily and the limits to what is possible won’t be reached for years.

At the end of November, the first report from NICBR had made its way up through OIA/CIS and HSSAI to a very select group of DoD intelligence officers – some military, some civil service and some civilians (by invitation only). This is a group that does not show up on any organization chart or budget line item. They are so deep cover that even their classified security compartment name is classified. Unofficially, they call themselves the Red Team, and the reports they create are called Red Dot Reports, or RDRs. (They use a red dot to identify projects that they believe have immediate applications and high priority.) They advise the JCS and the president on where and how to direct black-ops R&D funds and how to develop and use the resulting technology. They are not the final word, but they do act as a buffer between what is really going on in the labs and those that might benefit from or take advantage of the technology.

This group imagined the application of the biologic technology in the role of prolonging the lives of key individuals in the military and government. Anyone with a life-threatening disease like cancer can now be cured. Anyone with a failing or damaged organ can use this technology to direct synthetic genes or designer stem cells into the near-immediate repair or replacement of the damaged cells. Almost immediately, each of the members began naming the most senior military officers and political leaders who might benefit from this biologic technology.

Now comes the part you will never hear made public. The Red Team members are highly trained and very capable of keeping secrets, but they are also human, and they know that technology like this can mean life or death to some people – and for that, those people might do anything. It is still not known who did it first, but someone on the Red Team contacted a senior US Senator that he knew had recently been diagnosed with prostate cancer. (In fact, there are 11 members of Congress and two Cabinet members that currently have cancer of one form or another. This is not something they want made public if they want to be re-elected, so it is very confidential.) Traditional treatment involves surgery, radiation and chemotherapy, and even then you have only reduced your chances of a recurrence. With the biologic technology, you skip past all that unpleasant treatment and go straight to being cured without any chance of recurrence. For that, anyone would be most grateful, and it is obvious that whoever on the Red Team leaked this news did so to gain favor with someone who could benefit him a great deal.

Once it was known that the news had leaked out, almost every one of the Red Team members made contact with someone they thought would benefit from the biologic technology. By the second week of December, dozens of people were clamoring for the treatment and promising almost anything to get it. Word of this technology and its benefits is still spreading to leaders and business tycoons around the world, and the Red Team is trying desperately to manage the flood of bribes and requests for treatment.

As you read this, the NICBR is treating its sixth Congressman for various cancers, and there is a line of more than 30 behind these six. The lab has enlisted the aid of two other departments to set up and begin treatments within Ft. Detrick, and plans are in the works to create treatment centers in five other locations – all of them on very secure military installations – plus one that will be set up at the Air Force base on Guam to treat foreign nationals. By the end of January, these facilities will be operational, and it is expected that there will be a list of over 500 people waiting for treatments for cancer or damaged or failed organs. I have heard that the price charged to corporate tycoons is $2 million, but the treatment is also being traded with political leaders in other countries for various import/export concessions or political agreements.

This will all be kept very, very secret from the public because there are millions of people that would want treatments, and that would create incredible chaos. The biologic equipment is only about $950,000 for a complete system, not counting the payments for patents to the original researchers. But that is not what is holding it back from going public. If it got out that the government had this technology, they would have to admit to having stolen it from the Boston group, and that would imply that they are doing and have done this before – which is completely true. They do not want to do that, so they are going to let the original researchers work their way through the system of monkey testing for 3 years, then human trials for 3 or 4 years, and then the FDA approval process, which will take another 2 to 3 years – and they will get to market about when they estimated: about 2025.

In the meantime, if you hear about some rich and famous guy or some senior Congressman making a miraculous recovery from a serious illness or a failing body part, you can bet it was because they were treated by a biologic device that is unavailable to the general public for the next 15 years or so.

<<Addendum>>

<< You are probably wondering how I know all this detail about some of the best-kept secrets in the US government. As I have mentioned in numerous other reports, I worked in government R&D for years and almost all of it was deep-cover classified. My last few years of work were in the field of computer modeling and programming of something called sensor fusion. The essence of this type of computer programming is the analysis of massive amounts of inputs or calculations leading to some kind of quantified decision support. This is actually a pretty complex area of math that most scientists have a hard time translating to their real-world metrics and results.

When the CIS staff at HSSAI first got tagged to support the mimic of the biologic lab work, they needed some help programming the biologic analysis using the photo CCD input data and the massive permutations and combinations of microRNA characteristics. I was asked to provide some consulting on how to do that. The task was actually pretty simple because those guys at the Boston biologic lab were pretty smart and had already worked out the proper algorithms. I just reverse engineered their logic back to the math and then advanced it forward to the modified algorithms needed for other cell detections.

In the process of helping them, I was also asked to advise and explain the processing to the other government offices involved – the OIA, the NICBR, FLC, HSSAI and even the Red Team. I was privy to the whole story. I am writing it here for you to read because I think it is a great disservice to the general public not to let them have access to the very latest medical technology – especially when it can save lives. If I get in legal trouble for this, then it will really go public, so I am sure the government is hoping that I will reach only a very few people with my little blog and will not create any real problems. That is their hope and they are probably right. >>

Bombs Away!

The Air Force is working overtime to redefine its role in warfare in light of UAVs, drones and autonomous weapons. What is at stake is the very nature of what the AF does. For the past 40 years, it has based its power and control on a three-legged table – bombers, fighters and missiles. Its funding and status in DoD is based on keeping all three alive and actively funded by Congress.

The dying Cold War and the end of the threat from Russia have largely diminished the role of ICBMs. The AF is trying to keep that leg alive by defining new roles for those missiles, but it will almost certainly lose the battle for all but a few of the many silos that are still left.

The fighter role is actively being redefined right now as UAVs take over attack and recon missions. There is still the queasy feeling that we do not want to go totally robotic, and a general sentiment that we still need a butt in the seat for some fighter missions such as intercept and interdiction of targets of opportunity, but even those are being reviewed for automation. There is, however, no denying that the AF will keep this responsibility – even if non-pilots perform it.

The role of bomber is the one that is really in doubt. If the Army relies on the Warthog for close combat support and the Navy uses A-6s and F-18s for attack missions, then the strategic bomber is all that is left for the AF, and that is the role most easily automated with standoff weapons and autonomous launch-and-forget missiles. The high-altitude strategic bomber that blankets a target area is rapidly becoming a thing of the past because of the nature of our enemy and because of the use of surgical strikes with smart bombs. To be sure, there are targets that need blanket attacks and carpet-bombing, but a dropped dumb bomb is notorious for missing its target, and using hundreds of smart weapons would be too costly compared to the alternatives.

The AF is groping for solutions. One that is currently getting a lot of funding is to lower the cost of smart bombs so that they can, indeed, be used in large numbers – justifying a manned bomber aircraft – and still be cost-effective. To that end, a number of alternatives are being tried. Here is one that I was involved in as a computer modeler for CEP (circular error probable) and percent damage modeling (PDM). CEP and PDM are the two primary factors used to justify the funding of a proposed weapon system, and then they are the first values measured in prototype testing.

CEP describes the probability of the weapon hitting its target. CEPs for cruise missiles are tens of feet. CEPs for dumb bombs are hundreds or even thousands of feet and are often larger than the kill radius of the bomb, making it effectively useless against the target while maximizing collateral damage. PDM is the amount of damage done to a specific type of target given the weapon's power and factoring in the CEP. The PDM for a cruise missile may be between 70% and 90% depending on the target type and range (PDM decreases for cruise missiles as range to target increases). The PDM for a dumb (unguided) bomb is usually under 50%, making the use of many bombs necessary to assure target destruction. In WWII, the PDM of our bombers was less than 10%, and in Vietnam it was still under 30%. The AF's problem is to improve those odds. Here is how they did it.
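
To see why a low PDM forces large drops, here is a quick back-of-the-envelope calculation. It assumes each bomb's result is independent, which is a simplification real planners would qualify, and it uses the rough PDM figures quoted above.

# How many bombs does it take to reach a 90% chance of destroying the
# target?  Assumes independent drops; PDM values are the rough figures
# quoted above.
def prob_destroyed(pdm, n_bombs):
    """Chance that at least one of n bombs destroys the target."""
    return 1.0 - (1.0 - pdm) ** n_bombs

for pdm in (0.10, 0.30, 0.50, 0.80):
    n = 1
    while prob_destroyed(pdm, n) < 0.90:
        n += 1
    print(f"PDM {pdm:.0%}: about {n} bombs for a 90% chance of a kill")

With a 10% PDM you need on the order of 20 bombs per target, which is roughly the WWII situation; with an 80% PDM a pair will usually do.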

Whether we call them smart bombs, precision guided munitions (PGMs), guided bomb units (GBUs) or some other name, they are bombs that steer to a target by some means. The means range from GPS to laser to infrared to RF to several other methods of sending guidance signals. JDAM is one of the latest smart bombs, but there are dozens of others. JDAM kits run about $35,000 on top of the cost of making the basic bomb. In other words, the devices (fins, detection, guidance) that make it a smart bomb add about $35,000 to the cost of a dumb bomb. The AF's goal was to reduce this to under $5,000, particularly in a multiple-drop scenario.

They accomplished this in a program they code named TRACTOR. It starts with a standard JDAM or other PGM that uses the kind of guidance needed for a specific job. The PGM is then modified with a half-dome shaped device that is attached to the center of the tail of the JDAM. This device looks like a short rod about 1 inch in diameter with a half dome at one end and a bracket for attaching it to the JDAM at the other end. It can be attached with glue, clamps or screws. It extends about 6 inches aft of the fins and is very aerodynamic in shape.

Inside the dome is a battery and a small processor along with a matrix of tiny laser-emitting diodes that cover the entire inside of the dome. It can be plugged into the JDAM's systems or run independently, and it can be modified with add-on modules that give it additional capabilities. This is called the LDU – laser direction unit.

The other side of this device is a similar looking half dome that is attached to the nose of a dumb bomb using glue or magnets. There is a plug-in data wire that then connects to a second module that is attached to the rear of the dumb bomb. This second unit is a series of shutters and valves that can be controlled by the unit on the nose. This is called the FDU – following direction unit.

Here is how it works. The LDU is programmed with the pattern in which the bombs should hit the ground. It can create a horizontal line of bombs perpendicular or parallel to the flight path of the JDAM, or they can be made to form a tight circle or square pattern. By using the JDAM as the base reference unit and keying off its position, all the rest of the bombs can be guided by their FDUs to assume a flight pattern that is best suited for the target. The FDU bombs essentially assume a flight formation during their descent based on instructions received from the LDU. This flight formation is preprogrammed into the LDU based on the most effective pattern needed to destroy the target or targets.

A long line of evenly spaced bombs might be used to take out a supply convoy, while a grid pattern might be used to take out a large force of enemy troops on foot, dispersed on the ground by several yards each. It is even possible to have all the bombs nail the exact same target by having them all form a line behind the LDU JDAM bomb in order to penetrate into an underground bunker.

It is also possible to create a pattern in which the bombs take out separate but closely spaced targets, such as putting a bomb onto each of 9 houses in a tightly packed neighborhood that might have dozens of houses. Controlling the relative distance from the reference LDU and making sure that that one bomb is accurate also accurately places all the other bombs on their targets. This effectively creates multiple smart bombs in an attack in which only one bomb is actually a PGM.
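
A simplified sketch of how those formation offsets might be precomputed is shown below. The spacing numbers, pattern names and coordinate convention (meters along-track and cross-track from the lead bomb) are mine, invented for illustration, not anything from an actual TRACTOR specification.

# Illustrative offset tables for the patterns described above, expressed
# as (along-track, cross-track) positions in meters relative to the lead
# LDU bomb.  All values are invented for illustration.
def line_pattern(n_bombs, spacing):
    """Evenly spaced line along the flight path (e.g. a supply convoy)."""
    return [(i * spacing, 0.0) for i in range(n_bombs)]

def grid_pattern(rows, cols, spacing):
    """Rectangular blanket (e.g. troops dispersed in the open)."""
    return [(r * spacing, c * spacing) for r in range(rows) for c in range(cols)]

def column_pattern(n_bombs, spacing):
    """Every bomb on the same aim point, trailing the lead bomb so they
    strike in sequence (e.g. punching into a buried bunker)."""
    return [(-i * spacing, 0.0) for i in range(n_bombs)]

print(line_pattern(5, 30.0))     # 5 bombs, 30 m apart along the track
print(grid_pattern(3, 3, 25.0))  # a 3 x 3 blanket over a small area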

The pattern alignment is accomplished through the lasers in the LDU sending coded signals to each bomb to assume a specific place in space relative to the LDU as the bombs fall toward the target. The coded signals cause the FDU bombs to align along specific laser tracks sent out by the LDU and at specific distances from the LDU. The end result is that they can achieve any pattern they want without regard to how the bombs are dropped – as long as there is enough altitude to accomplish the alignment. It is even possible for an LDU dropped from one bomber to control the FDUs on bombs dropped by a second bomber.

The low cost was achieved by the use of easily added-on parts for existing bomb types and by using innovative control surfaces that do not rely on delicate vanes and flaps. The FDU uses rather robust but cheap solenoids that move a spoon-shaped surface from being flush with the FDU module to being extended out into the slipstream of air moving over the bomb. By inserting this spoon up into the airflow, it creates drag that steers the bomb in one direction. There are eight of these solenoid-powered spoons strapped onto the FDU that can be used separately or together to steer or slow the bomb into its proper place in the desired descent flight pattern.
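
Here is a very rough sketch of the kind of decision an FDU might make on each control cycle: given how far it is from the slot the LDU assigned it, extend the spoon closest to the needed correction. The eight 45-degree spoon positions, the deadband and the steering convention are all my assumptions for illustration, not a description of the actual hardware.

# Illustrative spoon-selection logic for an FDU.  Assumes eight spoons at
# 45-degree intervals around the tail and that deploying a spoon drags the
# bomb toward that side; both are assumptions made for this sketch.
import math

SPOON_ANGLES = [i * 45.0 for i in range(8)]   # degrees around the tail

def spoon_to_deploy(error_x, error_y, deadband=1.0):
    """error_x / error_y: meters between where the bomb is and where the
    LDU's laser track says it should be.  Returns the angle of the spoon
    to extend, or None if the bomb is already close enough."""
    if math.hypot(error_x, error_y) < deadband:
        return None                            # keep all spoons flush
    wanted = math.degrees(math.atan2(error_y, error_x)) % 360.0
    return min(SPOON_ANGLES,
               key=lambda a: abs((a - wanted + 180.0) % 360.0 - 180.0))

print(spoon_to_deploy(4.0, -2.0))   # 315.0 -> extend the spoon at 315 degrees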

Since these LDU and FDU devices are all generic and are stamped out using surface-mount devices (SMDs), the cost of the LDU is under $3,000 and the FDU is under $5,000. Twenty-five dumb bombs can be converted into an attack of 25 smart bombs for a total cost of about $110,000. If all of them had to be JDAMs, the cost would have been $875,000 – a savings of more than 87%.
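
For what it's worth, that savings figure checks out against the totals quoted above, assuming all 25 weapons would otherwise have carried full JDAM kits:

# Checking the quoted savings from the article's own numbers.
converted_cost = 110_000           # 25 bombs with TRACTOR add-ons (figure above)
all_jdam_cost = 25 * 35_000        # 25 bombs each with a full $35,000 JDAM kit
savings = 1.0 - converted_cost / all_jdam_cost
print(f"${all_jdam_cost:,} vs ${converted_cost:,} = {savings:.0%} saved")   # 87%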

These have already been tested and are being deployed as fast as they can be made.

Update 2012:

A recent innovation to the TRACTOR program was initiated in March of 2012 with the advent of miniaturized LDUs and FDUs that can be easily attached to the individual bomblets in a cluster bomb. These new add-ons are small enough, and custom fitted to the bomblets, that they can be added to the cluster bombs very quickly. In practice, a separate LDU bomb is dropped with the cluster bomb, and the cluster bomb is dropped from a much higher altitude than normal. This gives the individual bomblets time to form complex patterns that enhance their effectiveness. For instance, an anti-runway cluster bomb would line up the bomblets in a staggered zig-zag pattern. If the intent is area denial to personnel and tanks, the submunitions would be directed into an evenly spaced blanket covering a wide but defined area. This allows the placement of the mines into a pattern much wider than would normally be achievable with a standard cluster bomb drop, which is usually limited to only slightly wider than the flight path of the dropping aircraft. Now a single drop can cover two or three square miles if the bomblets are dropped from above 15,000 feet.

A similar deployment technique is being developed for the dispersion of clandestine sensors, listening devices, remote cameras and other surveillance systems and devices.

Power from Dirt

Part of the year, I live in Vermont, where there is a lot of interest in renewable energy sources. They want to use wind or solar or wood or biofuels, but almost all the tree-huggers skip the part where all those renewable energy sources combined would not meet the demand and we would still need a coal, gas or nuclear power plant to make up the difference. I decided to try to make something that really could supply enough energy for a household but would also work year-round, be independent of weather and temperature, and use a fuel that is cheap and renewable. That is a big set of requirements, and it took me several months to work out how to do it. It turns out that it can be done with dirt and some rocks and a little electronics.

As I have said many times, I worked for NRL and then DARPA while I was active duty in the Navy and then for other labs and in my own R&D company when I got out of the military. While I was at DARPA, they worked on an idea of using piezoelectric devices in the shoes of soldiers to provide electricity to low powered electronics. It turned out to be impractical but it showed me the power of piezoelectric generators.

I also worked at NRL when they were looking into thermal-electric generators to be used on subs and aircraft. Both subs and planes travel where the outside is really cold and the inside is really hot, and that temperature differential can be used to create electricity. I had a small involvement in both of these projects and learned a lot about harvesting energy from micro-watt power sources. I also learned why they did not work well or could not be used in most situations back then, but that was 22 years ago and a lot has changed since. I found that I could update some of these old projects and get some usable power out of them.

I'll tell you about the general setup and then describe the details. The basic energy source starts out with geothermal. I use convective static fluid dynamics to move heat from the earth up to the cold (winter) air above ground level – giving me one warm surface (about 50 degrees year-round) and one cold surface at whatever the ambient air temperature is in the winter.

I then used a combination of electro- and thermal-mechanical vibrators attached to a bank of piezoelectric crystal cylinders feeding into a capture circuit to charge a bank of batteries and a few supercapacitors. This, in turn, powers an inverter that provides power for my house. The end result is a system that works in my area for about 10 months of the year, uses no fuel that I have to buy, has virtually no moving parts, and works 24/7 in all weather, day and night. It gives me about 5,000 watts continuous and about 9,000 watts surge, which covers almost all the electrical needs in my house – including the pump on the heater and the compressors on the freezer and refrigerator. I'll admit that I did get rid of my electric stove in order to get "off the grid" entirely. I use propane now, but I am working on an alternative for that also. So, if you are interested, here's how I did it.

The gist of this is that I used geothermal temperature differentials to create a little electricity. That was used to stimulate some vibrators that flexed some piezoelectric material to create a lot more electricity. That power was banked in batteries and capacitors to feed some inverters that in turn powered the house. I also have a small array of photovoltaic (PV) solar panels and a small homemade windmill generator. And I have a very small hydroelectric generator that runs off a stream in my back yard. I use a combination of deep-cycle RV, AGM, lithium and NiMH batteries in various packs and banks to collect and save this generated power. In total, on a good day, I get about 9,500 watts out. On a warm, cloudy, windless and dry day, I might get 4,000 watts, but because I charge the system 24/7 and use it heavily for only a few hours per day, it meets all my needs with power to spare.

Today it was 18 degrees F outside. Last night it was 8. From now until mid-April, the air will be much colder than the 50-degree ground. Then we have a month or less in which the air temp is between 40 and 60, followed by about 3 months in which the temps are above 70. Then another month of 40-60 before it gets cold again. That gives me from 20 to more than 40 degrees of temperature differential for 10 months of the year.

Using these two temperatures, I hooked up a bank of store-bought, off-the-shelf solid-state thermal electric devices (TEDs). These use "Peltier" elements (first discovered in 1834) to convert electricity into heat on one plate and cold on another. You can also reverse the process and apply heat and cold to the two plates and it will produce electricity. That is called the "Seebeck effect", named after the guy that discovered it in 1821. It does not produce a lot of electricity, but because I had an unlimited supply of temperature differential, I could hook up a lot of these TEDs and bank them to get about 160 volts at about 0.5 amps on an average day with a 20-degree differential between the plates. That's about 80 watts. With some minor losses, I can convert that to 12 volts at about 6 amps (72 watts) to power lots of 12-volt devices, or I can get 5 volts at about 15 amps (75 watts) to power a host of electronics.
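
A quick arithmetic check of those numbers, with converter efficiencies that are my assumption rather than anything the author measured:

# Rough check of the TED bank figures above.  The converter efficiencies
# are assumed values, not measurements.
bank_volts, bank_amps = 160.0, 0.5
raw_watts = bank_volts * bank_amps                     # 160 V x 0.5 A = 80 W

for out_volts, efficiency in [(12.0, 0.90), (5.0, 0.94)]:
    out_watts = raw_watts * efficiency
    print(f"{out_volts:g} V rail: ~{out_watts:.0f} W, "
          f"about {out_watts / out_volts:.0f} A")      # ~72 W @ 6 A, ~75 W @ 15 A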

Then I dug a hole in the ground – actually, you have to drill a hole in the ground. Mine is 40 feet deep, but the deeper the better. It has to be about 10-12 inches in diameter. If you have a lot of money and can customize the parts, you can use a smaller-diameter hole. I salvaged the cooling coils off the back of several commercial-grade freezers to get the copper pipes with those thin metal heat-sink fins attached to them. I cut and reshaped these into a tightly packed cylinder that was 10″ in diameter and nearly four feet long, containing nearly 40 feet of copper pipe in a wad of spiraled and overlapping tubes – so it would fit in my 40-foot-deep by 12-inch-diameter hole. Down that deep, the hole filled with water, but the water was still about 50 degrees. I wrapped the heat sinks in several layers of wire fence material – aluminum screen with about ¼″ holes. I used two long copper tubes of 1-inch diameter to connect the two ends of the coil to the surface as I sank it to the bottom. All the joints were soldered and then pressure tested to make sure they did not leak.

Just before and after it was sunk, I pushed some marble-sized pea rocks into the hole. This assured that there would be a free flow of water around the heat-sink coils without them becoming packed with clay. I bought a 100-foot commercial-grade water hose to slip over the two pipes and cover them from the surface down to the sunken coils. This hose has a thick, hard rubber outside and soft rubber inside, with a 1.75-inch inside diameter. It was designed for use with heavy-duty pumps to pump out basements or ponds. It served as a good sleeve to protect the copper tubes and to insulate the pipes. To insulate further, I bought a can of spray expanding foam – the kind you use to fill cracks that hardens into a stiff Styrofoam. I cut the can open and caught the stuff coming out in a bucket. I then diluted it with acetone and poured it down between the hose and the copper pipes. It took about 18 days to dry and harden, but it formed a really good insulating layer so the pipes would not lose much heat or cold while the fluid moved up and down in them. The two copper pipes sticking out were labeled "UP" and "DOWN", and I attached the down pipe to the bottom of a metal tank container.

The next part is another bit of home brew. I needed a large, thin metal sandwich through which to run the "hot" fluid. Having one made would cost a fortune, but I found what I needed at a discount store: a very thin cookie sheet for baking cookies in the oven. Its gimmick is that it is actually two thin layers separated by about a quarter inch of air space. This keeps the cookies from getting too hot on the bottom and burning. I bought 16 of these sheets and carefully cut and fused them into one big interconnected sheet that allowed the fluid to enter at one end, circulate between the layers of all the sheets, and exit at the other end. Because these sheets were aluminum, I had to use a heliarc (also known as TIG or GTAW welding – and I actually used argon, not helium), but I was trained by some of the best Navy welders that work on aircraft airframes. The end product was almost 6 x 6 feet with several hose attachment points into and out of the inner layer.

I then made a wood box with extra insulation all around that would accommodate the metal sandwich sheet. The sheet was then hooked up to the UP hose at one end and to the top of the tank/container that was connected to the DOWN hose. Actually, each was connected to splitters and to several inlet and outlet ports to allow the flow to pass thru the inner sandwich along several paths. This made a complete closed loop from the sunken coils at the bottom of the hole, up the UP tube to the 6 x 6 sheet, then thru the tank to the DOWN tube and back to the coils.

Now I placed my bank of Peltier solid-state thermal-electric modules (SSTEMs) across the 6×6 sheet, attaching one side of the SSTEMs to the 6×6 sheet and the other side to a piece of aluminum that made up the lid of the box the sandwich sheet was in. This gave me one side heated (or cooled) by the sandwich sheet with fluid from the sunken coils, while the other side of the SSTEMs was cooled (or heated) by the ambient air. The top of the flat aluminum lid also had a second sheet of corrugated aluminum welded to it to help it dissipate heat.

So, if you are following this, starting from the top, there is a sheet of corrugated aluminum that is spot welded to a flat sheet that forms the top of the box lid. Between these two sheets, which are outside the box and exposed to the air, there are air gaps where the sine-wave-shaped corrugated aluminum meets the flat sheet. This gives a maximum amount of surface area exposed to the air. In winter, the plates are the same temperature as the ambient air. In summer, the plates have the added heat of the air and the sun.

The underside of this flat aluminum sheet (that makes up the box lid) is attached to 324 Peltier SSTEMs wired in a combination of series and parallel to boost both voltage and current. The lower side of these SSTEMs is connected to the upper layer of the thin aluminum cookie-sheet sandwich. This cookie sheet has a sealed cavity that is later filled with fluid. The lower side of this cookie sheet is pressed against the metal side of a stack of three-inch-thick sheets of Tyvek house insulation. The sides and edges of all of these layers are also surrounded by the Tyvek insulation.

I then poured 100% pure car antifreeze into the tank on the copper up/down tubes. I had to use a pump to force the antifreeze down to the coils and back up thru the cookie sheet to the tank. I ran the pump for about 6 hours to make sure that there was no trapped air anywhere in the system. The tank acts like an expansion tank and keeps the entire loop free of trapped air. The antifreeze was the thick kind – almost like syrup – that would not freeze at any temperature and carried more heat than water would.

It actually began to work very fast. The large flat hollow sheet on top filled with fluid and got cold from the ambient air. This cooled the antifreeze, and the cold fluid wanted to sink down the DOWN pipe to the sunken coils at the bottom of the hole. The coils, meanwhile, were warming the fluid down there to 54 degrees, and that fluid wanted to rise up the UP pipe. As soon as the heated fluid got to the top, it cooled in the hollow sheet and sank down the DOWN tube again. This is called dynamic convective thermal fluid circulation, or what some just call thermosiphoning.
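
To get a feel for how much low-grade heat a loop like this can move, here is a rough estimate. The flow rate is purely my guess (the author never states one), and the glycol properties are approximate.

# Rough estimate of the heat moved by the thermosiphon loop.
# The flow rate is a guess; glycol properties are approximate.
flow_lpm = 2.0                  # assumed loop flow, liters per minute
density = 1.11                  # kg/L, roughly, for concentrated glycol
specific_heat = 3300.0          # J/(kg*K), roughly, for concentrated glycol
delta_t = 20.0                  # K, the ground-to-air differential described above

mass_flow = flow_lpm / 60.0 * density               # kg/s
heat_watts = mass_flow * specific_heat * delta_t
print(f"~{heat_watts:.0f} W of heat moved")         # ~2,400 W of low-grade heat

Peltier-type modules typically convert only a few percent of the heat crossing them into electricity, so a couple of kilowatts of heat flow is roughly consistent with the 80 watts of electrical output described below.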

The transfer of heat up to the surface creates a continuous temperature differential across the plates of the Peltier SSTEMs, and they then create about 160 volts of DC electricity at about 0.5 amps, or about 80 watts. I needed to use a solar panel controller to manage the power down to a usable 12 to 14 volts to charge a bank of batteries. But I am not done yet.

I added a second flat aluminum sheet on top of the corrugated aluminum – like a sandwich. This added to the surface area to help with heat dissipation, but it was also there to let me attach 100 piezoelectric vibrators. These small, thin 1.5″-diameter disks give off a strong vibration when as little as 0.5 volts is applied to them, but they can take voltages up to 200 volts. They were 79 cents each from a surplus electronics online store; I bought 100 of them and spaced them in rows on the aluminum lid. Along each row, I placed a small tube of homemade piezoelectric crystals. I'm still experimenting with these crystals, but I found that a combination of Rochelle salt and sucrose works pretty well and, more importantly, I can make it myself. I'd rather use quartz or topaz, but that would cost way too much.

The crystal cylinders have embedded wires running along their length and are aligned along the rows of piezoelectric vibrators. They are held in place and pressed onto the vibrators by a second corrugated aluminum sheet. This gives a multi-layer sandwich that will collectively create electricity.

One batch of the SSTEMs is wired to the 100 piezoelectric vibrators while the rest of the SSTEMs feed the solar controller to charge the batteries. I had to fiddle with how many SSTEMs it took to power the vibrators, since they will work on a very small amount of power but do a better job when driven at a higher level.

The vibrators cause a rapid oscillation in the cylinders of Rochelle salt and sucrose which in turn give off very high frequency, high voltage electricity. Because the bank of cylinders is wired in both series and parallel, I get about 1,500 volts at just over 200 milliamps, or about 300 watts of usable electricity.

It takes an agile-tuned filter circuit to take that down to a charging current for the batteries. I tried to make such a device but found that a military-surplus voltage regulator from an old prop-driven aircraft did the job. This surplus device gives me a continuous 13.5 volts DC at about 22 amps of charging power, fed into a bank of deep-cycle AGM batteries.

I found that the piezo vibrators had a secondary and very unexpected benefit. Since the vibration is also felt in the circulating antifreeze and the SSTEMs, it seems to have made them function more efficiently. There is more heat transfer in the dynamic convective thermal fluid circulation than the normal formulas and specs would dictate, but I think that is because the vibration of the fluid makes the thermal transfer in the cookie-sheet panel more efficient. The SSTEMs are boosted in output by several watts. So when everything is running, I am getting about 340 watts of charging power on a continuous basis. Of course this fluctuates as the temperature differential changes, but I rarely get less than 250 watts and sometimes get as much as 400 watts.
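
Taken at face value, those charging numbers work out to a useful amount of energy over a day:

# What the quoted charging power works out to over a full day.
charge_watts = 340.0                        # continuous figure quoted above
daily_kwh = charge_watts * 24.0 / 1000.0    # ~8.2 kWh banked per day
print(f"{charge_watts:.0f} W continuous is about {daily_kwh:.1f} kWh per day")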

A local recreational vehicle (RV, trailer, camper and boat) dealer removes the very large deep-cycle AGM batteries from his high-end RVs' solar systems even when they have been used very little. He lets me test and pick the ones I want and sells them for $20 each. I have 24 of them now, banked to charge off the thermal and piezoelectric devices and then feed several inverters that give me power for lights, heaters, fans, freezers, TVs, etc. The inverters I use now give me up to 5,000 watts continuous and up to 12,000 watts of surge (for up to 2 hours), but I have set the surge limit to 9,000 watts so I do not damage the batteries. The 24 deep-cycle batteries could give me 5,000 watts continuously for up to several days without any further charging, but so far I have found that I am using only about 25% to 35% of the system capacity about 80% of the time and about 80% of the capacity 20% of the time. The high usage comes when the freezer or refrigerator compressors kick on and when the heater boiler pumps kick on. As soon as these pumps start and get up to speed, the load drops back to a much lower value. The rest of the time, I am just running CFL and LED lights and my computer.

I finished this setup in September of 2011, and it worked better and better as the winter temperatures dropped in November and December. I had to make one adjustment. The piezo vibrators made too much noise, and the aluminum plates made them sound even louder. I have since added some noise dampening, and now I can't hear it unless I am outside and standing near it. The dampening I used was two 4′x8′ sheets of thick heavy-duty foam used in horse and cattle stalls to keep the animals from standing on freezing-cold ground. These were $30 each and have isolated the sheets from the wood and metal frame but still allow the vibrators to do their thing on the piezo tubes and the cookie-sheet SSTEMs.

I have lots of meters and gauges on the system to monitor temperatures and power outputs and levels and so far nothing seems to be fading or slowing. There are slight changes in the charge levels of the batteries due to changes in the ambient air temperature but that has been less than +/- 10% so far. I was concerned that the cold antifreeze would freeze the water around the sunken coils but so far that has not happened. I think it is because there is a fairly rapid turnover of water at that depth and the coils just don’t have a chance to get that cold.

I’m also going to experiment with rewiring the whole thing to give me perhaps 60 volts output into the bank of batteries that are wired in series to make a 60 volt bank. This is the way that electric cars are wired and I have obtained a controller out of a Nissan Leaf that uses a bank of batteries in a 60 volt configuration. It should be more efficient.

The whole system cost me about $950 in batteries, fittings, hoses and chemicals, plus a lot of salvaged used and discarded parts. I already had the inverters and solar controller. I also had a friend who drilled the hole for me – that would have cost about $400. The rest I got out of salvage yards or stripped off old appliances or cars. It took about 3 weeks to build, working part time and weekends. I estimate that if you had to pay for all of the necessary parts and services to build the system, it would cost about $3,000. By the end of next year, I will have saved about half that much in electricity. As it is, I will have a full payback by about March of 2013.

I still have a hookup to the city power lines, but since installing this system I have used only about 10 kilowatt-hours. I think that was mostly when I used my arc welder and did not want to suck too much out of the batteries. A side benefit has also been that since September, when I first started using it, there have been 4 power outages – one lasted for two days….for my neighbors. I never lost power.

I have not done it yet, but I can also set up an electric meter that allows me to sell electricity back to the power company. When I integrate this whole system with my solar PV array, I might do that, but for now I can store unused power in the batteries for later use, and since I won't run out of fuel, I don't need to recover any extra ongoing costs.

Since this system has no expendable fuel, no moving parts, no maintenance and no costs, I expect it will be functional for the next 15 to 20 years – maybe longer. Actually, I can’t think of why it will ever stop working.

Sept 2012 Update:

This system has been running for a year now. My power company bill is averaging about $28/mo., and it goes lower almost every month. I am still running several AC units during the summer and have used the arc welder and electric heaters in the winter.

My estimate of costs and maintenance was a little optimistic, as I discovered my 79-cent piezo vibrators were worth every penny – they lasted about 6 months. I have since replaced them with a bunch of salvaged parts used in klaxon alarms on board Navy ships. These normally run on 28 volts, but I did not need them to be loud, so I found that if I fed them 4 volts I got the vibration I needed without the noise, and they are so under-powered that they will likely last for years.

During the Spring and Fall, the system was not too productive because the temperature differential was usually less than 10 degrees but in the hottest part of the summer, I was back up to over 300 watts of total output with differentials of 20+ degrees between the 50 degree ground and the 70 to 85 degree summer heat.

I was not hit by the floods of last spring, but my place in the woods experienced torrential rains and the water table rose to nearly the surface. Through all of that, my system continued to work – in fact, I noticed a slight improvement in performance, since the temperature exchange rate improved with the heavy flow of underground water.

I still have not hooked up a reversing electric meter, but I did calculate that I would have made a net $290 over the past year instead of paying out a $28/mo. average. If I added in my solar PV system, my small $400 vertical wind generator and the 60 to 100 watts I get from a tiny hydroelectric setup on a small stream that runs thru my property, I would have had a net gain of over $1,000. Not bad for a bunch of junk and surplus parts and a little help from the dirt under my lawn.

The Aurora Exists but It's Not What You Think

The Aurora is the new jet that people have been saying is the replacement for the SR-71. It is real, but it isn't what you'd think it is. First, a little history.

The U-2 spy plane was essentially a jet-powered glider. It had very long wings and a narrow body that could provide lift with relatively little power. It used the jet engine to take it very high into the air, and then it would throttle back to near idle and stay aloft for hours. The large wings were able to get enough lift in the high, thin air of the upper atmosphere partly because it was a very lightweight plane for its size. Back in the early 60's, being high was enough protection and still allowed the relatively low-resolution spy cameras to take good photos of the bad guys.

When Gary Powers' U-2 got shot down, it was because the Soviets had improved their missile technology in both targeting and range and because we gave the Russians details about the flight – but that is another story. The US stopped the U-2 flights but immediately began working on a replacement. Since sheer altitude was no longer a defense, they opted for speed, and the SR-71 was born. Technically, the SR-71 (Blackbird) was not faster than the missiles, but because of its speed (about Mach 3.5) and its early attempt at stealth design, by the time the spy plane was spotted and a missile launch facility had been coordinated, it was out of range of the missiles.

The CIA and the Air Force used the Blackbird until the early 1980's, when it was retired from spying and used only for research. At the time, the official word for why it was retired was that satellite and photographic technology had advanced to the point of not needing it any more. That is only partially correct. A much more important reason is that the Russians had new missiles that could shoot down the SR-71. By this time, Gorbachev was trying to mend relations with the west and move Russia into a more internationally competitive position, so he openly told Reagan that he had the ability to shoot down the SR-71 before he actually tried to do it. Reagan balked, so Gorbachev conducted a "military exercise" in the spring of 1981 in which the Russians made sure that the US was monitoring one of their old low-orbit satellites, and then, during a phone call to Reagan, the satellite was "disabled" – explosively.

At the time it was not immediately clear how they had done it, but it wasn't long before the full details were known. A modified A-60 aircraft code-named "SOKOL-ESHELON," which translates to "Falcon Echelon", flying out of the Beriev airfield at Taganrog, had shot down the satellite with an airborne laser. When Reagan found out the details, he ordered the Blackbird spy missions to stop, but he demanded that Gorbachev give him some assurance that the A-60 would not be developed into an offensive weapon. Gorbachev arranged for an "accident" in which the only operational A-60 was destroyed by a fire, and the prototype and test versions were mothballed and never flew again.

The spy community – both the CIA and DoD – did not want to be without a manned spy capability, so they almost immediately began researching a replacement. In the meantime, the B-1, B-2 and F-117 stealth aircraft were refined and stealth technology was honed to near perfection. The ideal spy aircraft would be able to fly faster than the SR-71, higher than the U-2 and be more invisible than the F-117, but it also had to have a much longer loiter time over its targets or it would not be any better than a satellite.

These three requirements were seen as mutually exclusive for a long time. The introduction and popularity of unmanned autonomous vehicles also slowed progress, but both the CIA and DoD wanted a manned spy plane. The CIA wanted it to loft more sophisticated equipment into the complex monitoring of a dynamic spy situation. DoD wanted it to reliably identify targets and then launch and guide a weapon for precision strikes. For the past 30 years, they have been working on a solution.

They did create the Aurora, which uses the most advanced stealth technology along with the latest in propulsion. This, at least, satisfied two of the ideal spy plane requirements. It started with a very stealthy delta-wing design using an improved version of the SR-71 engines, giving it a top speed of about Mach 4.5 and a ceiling of over 80,000 feet, but that was seen as still too vulnerable. In 2004, following the successful test of NASA's X-43 scramjet reaching Mach 9.8 (about 7,000 MPH), DoD decided to put a scramjet on the Aurora. Boeing had heard that DoD was looking for a fast spy jet and attempted to bust into the program with its X-51A, but DoD wanted to keep the whole development secret, so they dismissed Boeing and pretended there was no interest in that kind of aircraft. Boeing has been an excluded outsider ever since.

In 2007, DARPA was testing a Mach 10 prototype called the HyShot – which was actually the test bed for the engine planned for the Aurora. It turns out that there were a lot of technological problems to overcome, which made it hard to settle on a working design in the post-2008 crashed economy and with the competition from the UAVs, while also trying to keep the whole development secret. They needed to get more money and find somewhere to test that was not being watched by a bunch of space cadets with tin foil hats that have nothing better to do than hang around Area 51, Vandenberg and Nellis.

DoD solved some of these issues by bringing in some resources from the British and getting NASA to foot some of the funding. This led to the flight tests of the HiFire in 2009 and 2010 out of the Woomera Test Range in the outback of South Australia. The HiFire achieved just over 9,000 MPH, but it also tested a new fuel control system that was essentially the last barrier to production in the Aurora. They used a pulsed laser to ignite the fuel while maintaining the hypersonic flow of the air-fuel mixture. They also tested the use of high-velocity jets of compressed gas injected into the scramjet to get it started. These two innovations allowed the transition from the two conventional jet engines to the single scramjet engine to occur at a lower speed (below Mach 5) while also making the combination more efficient at very high altitudes. By late 2010, the Aurora was testing the new engines in the Woomera Test Range and making flights in the 8,000 to 9,700 MPH range.

During this same period, the stealth technology was refined to the point that the Aurora has an RCS (radar cross-section) of much less than 1 square foot. That is roughly the radar image of a can of soda, well below the detection and identification threshold of most radars in service today. It can fly directly into a radar saturated airspace and not be detected. Because of its altitude and speed and the nature of the scramjet, it also has an undetectable infrared signature, and it is too high to be heard. It is, for all intents and purposes, invisible.
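
To give a sense of why that tiny radar cross-section matters so much, here is a quick back-of-the-envelope sketch using the standard radar range equation. The reference radar, its 400 km detection range and the RCS figures are my own illustrative assumptions, not numbers from any actual system:

```python
# In the standard radar range equation, detection range scales with the
# fourth root of the target's radar cross-section. All numbers below are
# illustrative assumptions, not figures from any real radar or aircraft.

def detection_range_scale(reference_range_km: float,
                          reference_rcs_m2: float,
                          target_rcs_m2: float) -> float:
    """Scale a radar's detection range from a reference RCS to a new RCS."""
    return reference_range_km * (target_rcs_m2 / reference_rcs_m2) ** 0.25

# Suppose a radar can detect a 5 m^2 fighter-sized target at 400 km.
fighter_range_km = 400.0
fighter_rcs = 5.0
soda_can_rcs = 0.01   # about 0.1 sq ft, on the order of a soda can

print(detection_range_scale(fighter_range_km, fighter_rcs, soda_can_rcs))
# ~85 km: the same radar sees the small target at roughly a fifth of the
# range, which is the whole point of driving RCS down so hard.
```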

This solved two of the three spy plane criteria, but they still had not achieved a long loiter time. Although the scramjet is relatively fuel efficient, it is really only useful for getting to and from the surveillance site. Once over the spy area, the best strategy is to fly as slowly as possible. Unfortunately, wings that can fly at Mach 10 to Mach 12 cannot support the aircraft at much slower speeds – especially in the thin air at 80,000 feet.

Here is where the big surprise pops up. Thanks to the guys at NRL and a small contribution I made to a computer model, the extended loiter time problem is something they began working on back in 2007. It started when they retrofitted the HyShot engine into the Aurora; NRL then convinced the DARPA program manager to also retrofit the Aurora’s delta wings with a swing capability, similar to the F-14 Tomcat. The result is a wing that expands like a folding Japanese fan. In fast flight mode, the wing is tucked into the fuselage, making the aircraft look like the long tapered blade of a stiletto knife. In slow flight mode, the wings fan out wider than an equilateral triangle, with a much larger wing surface area.

As with any wing, it is a compromise between flying fast and flying slow. The swing wing gave the Aurora a range increase from reduced drag while using the scramjet. It also allowed the wing loading to be reduced slightly, giving it more lift at slower speeds and in thinner air. However, most of the engineers on the project agreed that these gains were relatively minor and not worth the added cost in construction and maintenance. This was not a trivial decision, as the mechanism also added weight and took up valuable space in the fuselage that was needed for the modified scramjet and added fuel storage. Outside of NRL, only two people were told why they needed to do this wing modification and how it could be done. Those two were enough to get the funding, and NRL won the approval to do it.

What NRL had figured out was how to increase lift on the extended wing by a factor of 10 or more over a conventional wing. This was such a huge increase that the aircraft could shut off its scramjet, run one or both of its conventional jet engines at low idle speeds and still stay aloft – even at extreme altitudes. Normally that would require a major change in wing shape and size to radically alter the airfoil’s coefficient of lift, but then the wing would be nearly useless for flying fast. A wing made to fold from one type of wing (fast) to another (slow) would also be too complex and heavy to use in a long-range recon role. The solution NRL came up with was ingenious, and it turns out it partly used a technology I worked on earlier when I was at NRL.

They designed a series of bladders and chambers in the leading edge of the wing that could be selectively expanded by pumping in hydraulic fluid, altering the shape of the wing from a nearly symmetric cambered foil to that of a high lift foil. More importantly, it also allowed a change in the effective angle of attack (AoA) and therefore the coefficient of lift. They could achieve an AoA change without altering the orientation of the entire aircraft, which kept drag very low. This worked well and would be enough at lower altitudes, but in the thin air at 80,000+ feet, the partial vacuum created by the wing is weakened. To solve that, they devised a way to create a much more powerful vacuum above the wing.
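
To see why reshaping the foil and shifting the effective angle of attack buys so much lift, here is a minimal sketch using classical thin-airfoil theory. The angles and the zero-lift shift are my own illustrative numbers, not anything from the NRL design:

```python
import math

# Thin-airfoil theory says the lift coefficient grows by about 2*pi per
# radian of angle of attack, and camber shifts the zero-lift angle.
# The specific angles below are my own illustrative assumptions.

def lift_coefficient(aoa_deg: float, zero_lift_aoa_deg: float) -> float:
    """Thin-airfoil estimate: CL = 2*pi*(alpha - alpha_zero_lift)."""
    return 2 * math.pi * math.radians(aoa_deg - zero_lift_aoa_deg)

# Nearly symmetric foil at a small angle of attack (fast flight):
print(lift_coefficient(aoa_deg=2.0, zero_lift_aoa_deg=0.0))    # ~0.22

# Bladders inflated: more camber (zero-lift angle pushed negative)
# plus a higher effective angle of attack (slow, high-altitude loiter):
print(lift_coefficient(aoa_deg=10.0, zero_lift_aoa_deg=-4.0))  # ~1.54
```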

When they installed the swing-wing, there were also some additions to the plumbing between the engines and the wing’s suction surface (the upper surface, at the point of greatest thickness). This plumbing consists of very small, lightweight tubing that mixes methane and other gases from an on-board cylinder with superheated, pressurized jet fuel to create a highly volatile mixture, which is then fed to special diffusion nozzles strategically placed on the upper wing surface. The nozzles atomize the mixture into a fine mist and spray it under high pressure into the air above the wing. The nozzles and the pumped fuel mixture are timed to stagger in a checkerboard pattern over the surface of the wing. This causes the gas to spread in an even layer across the length of the wing, but only for about 2 or 3 inches above the surface.

A tiny spark igniter near each nozzle causes the fuel to burn in carefully timed bursts. The gas mixture is specially designed to rapidly consume the air as it burns – creating a very strong vacuum. While the vacuum peaks at one set of nozzles, another set is fired. The effect is a little like a pulse jet in that it works in a rapid series of squirt-burn-squirt-burn explosions, but they occur so fast that they blend together, creating an even distribution of enhanced vacuum across the wing.

You would think that traveling at high Mach speeds would simply blow the fuel off the wing before it could have any vacuum effect. Surprisingly, this is not the case. Because of the boundary layer effect of laminar air flow, the relative speed of the air moving above the wing gets slower and slower as you get closer to the wing surface. This is due to the friction of the wing-air interface and results in remarkably slow relative air movement within 1 to 3 inches of the wing. This trick of physics was known as far back as WWII, when crew members on B-29s, flying at 270 knots, would stick their heads out of a hatch and scan for enemy fighters with binoculars. If they kept within about 4 or 5 inches of the outer fuselage surface, the only effect was that their hair got blown around. On the Aurora, the effect keeps the high vacuum in close contact with the optimum lifting surface of the wing.

Normally, the combination of wing shape and angle of attack creates a pressure differential above and below the wing of only 3 to 5 percent. The complete NRL design creates a pressure differential of more than 35% and a coefficient of lift that is controllable between 0.87 and 9.7. This means that with the delta wing fully extended, the wing shape bladders altering the angle of attack and the wing surface burn nozzles boosting the lift coefficient, the Aurora can fly at speeds as low as 45 to 75 MPH without stalling – even at very high altitudes.
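
For a rough feel of what those numbers imply, here is the standard lift equation solved for minimum level-flight speed. The aircraft weight and wing area are purely my assumptions; only the lift coefficients (a conventional ~1.5 versus the claimed 9.7) come from the description above:

```python
import math

# The standard lift equation, L = 0.5 * rho * V^2 * S * CL, solved for the
# minimum level-flight speed: V_min = sqrt(2*W / (rho * S * CL)), so V_min
# scales as 1/sqrt(rho * CL). Weight and wing area are my own assumptions.

def v_min_ms(weight_n: float, rho_kg_m3: float, area_m2: float, cl: float) -> float:
    return math.sqrt(2 * weight_n / (rho_kg_m3 * area_m2 * cl))

W = 30_000 * 9.81        # assume a 30-tonne aircraft
S = 350.0                # assumed extended-wing area, m^2
rho_80k = 0.025          # approximate air density near 80,000 ft, kg/m^3

print(v_min_ms(W, rho_80k, S, cl=1.5) * 2.237)   # ~470 MPH, conventional wing
print(v_min_ms(W, rho_80k, S, cl=9.7) * 2.237)   # ~185 MPH with CL = 9.7
print(math.sqrt(9.7 / 1.5))                      # ~2.5x cut in minimum speed
# A lighter airframe or a bigger wing pushes the number lower still; the
# 1/sqrt(CL) scaling is the point of the factor-of-ten lift increase.
```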

At the same time, it is capable of reducing the angle of attack, reshaping the wing into a thin, nearly symmetric cambered profile and then sweeping the delta wing into a small fraction of its extended size so that it can reach Mach 15 under scramjet power. For landing, takeoff and subsonic flight, it can adjust the wing for optimum fuel or performance efficiency while using the conventional jet engines.

My cohorts at NRL tell me that the new version of the Aurora is now making flights from the Woomera Test Range in the outback of South Australia to Johnston Atoll (the newest test flight center for black ops aircraft and ships) – a distance of 5,048 miles – in just over 57 minutes, which includes the relatively slow climb to 65,000 feet. The Aurora then orbited over Johnston Atoll for 5 ½ hours before flying back to Woomera. In another test, the Aurora left Woomera loaded with fuel and a smart bomb. It flew to Johnston Atoll and orbited for 7 hours before a drone target ship was sent out from shore. It was spotted by the Aurora pilot, hit with the laser-guided bomb, and then the pilot returned to Woomera.
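
A quick bit of arithmetic on that first leg, using only the distance and time quoted above:

```python
# Average speed for the Woomera-to-Johnston Atoll leg, from the quoted
# figures; 767 MPH is just a round sea-level speed of sound for scale.

distance_mi = 5_048
block_time_hr = 57 / 60                 # "just over 57 minutes"
avg_mph = distance_mi / block_time_hr
print(round(avg_mph))                   # ~5,314 MPH average, climb included
print(round(avg_mph / 767, 1))          # ~Mach 6.9 equivalent at sea level
# Since the climb-out is flown on conventional jets, the cruise portion has
# to run well above that average, consistent with the 8,000 to 9,700 MPH
# test-flight numbers mentioned earlier.
```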

I was also told that at least three of the precision strikes on Al Qaeda hideouts were, in fact, carried out by the Aurora and then credited to a UAV in order to maintain the cover.

The Aurora is both the fastest and the slowest high-altitude spy aircraft ever made, and if the pilots don’t make a mistake, you may never see it.

Whack-a-Mole comes to the Battlefield

An old idea has been updated and brought back in the latest military weapon system.  Back in Vietnam, the firebases and forward positions were under constant sneak attack from the Vietcong under the cloak of night.  The first response to this was what they called the Panic Minute.  This was a random minute, chosen several times per day and night, in which every soldier would shoot his weapon for one full minute.  They would shoot into the jungle without having any particular target.  We know it worked sometimes because patrols would find bodies just beyond the edge of the clearing.  But it also failed many times, and firebases were being overrun on a regular basis.

The next response was Agent Orange.  It was originally called a “defoliant” and was supposed to just make the trees and bushes drop their leaves.  Of course, the actual effect was to kill all plant life, often leaving the soil infertile for years afterward.  They stopped using it when they began to notice that it was not particularly good for humans either: it acted as a neurotoxin, causing all kinds of problems in soldiers who were sprayed or who walked through it.

The third and most successful response to these sneak attacks was a top secret program called Sentry.  Remember when this was – the mid-to-late ’60s and early ’70s.  Electronics were not what they are now.  The Walkman, a portable cassette player, was not introduced until 1979.  We were still using 8-track cartridge tapes and reel-to-reel recorders.  All TVs used tubes and the integrated circuit was in its infancy.  Really small spy cameras were about the size of a pack of cigarettes, and really small spy-type voice transmitters were about half that size.  Of course, then as now, the government and the military had access to advances that had not yet been introduced to the public.

One such advance was the creation of the sensors used in the Sentry program.  They started with a highly sensitive vibration detector.  We would call them geophones now, but back then they were just vibration detectors.  Then they attached a very high frequency (VHF) transmitter that would send a clicking sound whenever the detector was activated by vibrations.

The first version of this was called the PSR-1 Seismic Intrusion Detector – and it is fully described on several internet sites.  It was a backpack-sized device connected to geophones the size of “D” cell batteries.  It worked and proved the concept, but it was too bulky and required the sensors to be connected to the receiver by wires.  The next version was much better.

  

What was remarkable about the next attempt was that they were able to embed the sensor, transmitter and batteries inside a package of hard plastic coated on the outside with a flat tan or brown irregular surface.  All of this was about the size of one penlight battery.  This gave them the outward appearance of being just another rock or dirt clod, and it was surprisingly effective.  These “rocks” were molded into a number of distinct shapes depending on the transmitting frequency.

  

The batteries were also encased in the plastic and the unit was totally sealed.  It was “on” from the moment of manufacture until the batteries died about two months later.  A box of them contained 24 sensors on 24 different frequencies with 24 different click patterns, and they were shipped in crates of 48 boxes.  The receiver was a simple radio with what looked like a compass needle on it – an adaptation of the RFDF (radio frequency direction finder) used on aircraft.  It would point the needle toward an active transmitter and feed the clicking to its speaker.

  

In the field, a firebase would scatter these rocks in the jungle around the perimeter, keeping a record of the direction in which each frequency of rock was thrown from the base.  All of the No. 1 rocks from 6 to 10 boxes were thrown in one direction, all of the No. 2 rocks in the next direction, and so on.  The vibration detectors picked up the slightest movement within a range of 10 to 15 meters (30 to 50 feet).  The firebase guards would set up the receiver near the middle of the sensor deployment and monitor it 24 hours a day.  When it began clicking and pointing in the direction of the transmitting sensors, the guard would call for a Panic Minute aimed in that direction.  It was amazingly effective.

  

In today’s Army, they call this Geophysical MASINT (measurement and signature intelligence), and the devices have not actually changed much.  The “rocks” still look like rocks, but now they carry sensors other than just seismic ones.  They can detect specific sounds, chemicals and light and can transmit more than just clicks to computers.  The received quantitative data is fed into powerful laptop computers and can be displayed as fully analyzed, in-context information with projections of what is happening.  It can even recommend what kind of response to take.

  

These sensor “rocks” are dispersed at night by UAVs or dropped by recon troops and are indistinguishable from local rocks.  Using multiple sensors and reception from several different rocks, it is possible to locate the source of the sensor readings to within a few feet – much the same way the phone companies can track your location by triangulation from multiple cell towers.  Using only these rocks, accuracy can be narrowed to within ten feet or less, but when all this data is integrated into the Combat Environmental Data (SID) network, targets can be identified, confirmed and located within 2 or 3 feet.
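
For anyone curious how a handful of cheap fixed sensors can pin a target down that tightly, here is a minimal multilateration sketch. The sensor layout, the target position and the assumption that each rock can report a range estimate are all mine, purely for illustration:

```python
import numpy as np

# Minimal sketch of locating a source from several fixed sensors by
# multilateration, the same idea as cell-tower triangulation. The sensor
# positions, the "true" target and the pretend range measurements are all
# illustrative assumptions.

sensors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
target = np.array([11.0, 19.0])
ranges = np.linalg.norm(sensors - target, axis=1)   # pretend measurements

# Linearize by subtracting the first sensor's range equation from the rest,
# which yields a small least-squares problem A @ p = b for the position p.
r0 = ranges[0]
A = 2 * (sensors[1:] - sensors[0])
b = (r0**2 - ranges[1:]**2
     + np.sum(sensors[1:]**2, axis=1) - np.sum(sensors[0]**2))
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
print(estimate)   # -> [11. 19.], recovering the assumed position
```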

  

What the Army has done with all this data is create a near-automated version of Whack-a-Mole by integrating artillery and the Digital Rifle System (DRS) into the SID and rock sensor network.  The result is the ability to set up a kill zone (KZ) that can be as big as 30 miles in diameter.  The KZ is sprinkled with the sensor rocks and the AIR systems of the DRS and linked by the SID network to strategically placed DRS rifles and digitally controlled artillery.  When these various systems and sensors are all in place, the Army calls it a WAK zone (pronounced “whack”) – hence the nickname Whack-a-Mole.

  

The WAK zone computers are programmed with recognition software for specifically targeted people, sounds, chemicals and images that constitute a confirmed kill target.  When the WAK zone computers make that identification, they automatically program the nearest DRS rifle or the appropriate artillery piece to fire on the target.  For now, the actual fire command is still left to a person, but the system is fully capable of a fully automatic mode.  In several tests in Afghanistan, it has not made any identification errors, and the computerized recommendation to shoot has always been confirmed by a manual entry from a live person.
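
Here is a minimal sketch of what that split between automatic identification and a manual fire release might look like in code. The data structure, the confidence threshold and the confirmation step are entirely my own invention, not the Army’s software:

```python
from dataclasses import dataclass

# A toy illustration of "automatic identification, manual fire release".
# Everything here (the dataclass, the threshold, the confirm step) is my
# own sketch, not anything from the actual WAK zone software.

@dataclass
class Detection:
    sensor_ids: list
    confidence: float      # fused score from acoustic/seismic/chemical hits
    position_ft: tuple     # estimated location from multilateration

CONFIRM_THRESHOLD = 0.95   # assumed confidence needed to recommend a shot

def recommend(detection: Detection) -> bool:
    """Machine side: decide whether to recommend engagement."""
    return detection.confidence >= CONFIRM_THRESHOLD

def fire_if_confirmed(detection: Detection, operator_confirms: bool) -> str:
    """Human side: nothing fires without a manual confirmation."""
    if recommend(detection) and operator_confirms:
        return f"engage target at {detection.position_ft}"
    return "hold fire"

d = Detection(sensor_ids=[12, 14, 17], confidence=0.97, position_ft=(412, -88))
print(fire_if_confirmed(d, operator_confirms=True))   # engage ...
print(fire_if_confirmed(d, operator_confirms=False))  # hold fire
```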

  

Studies and contractors are already working on integrating UAVs into the sensor grids so that KZs of hundreds of miles in diameter can be defined.  The UAVs would provide not only aerial visual, IR and RF detection but would also carry the kill weapon.

  Whack-a-Mole comes to the battlefield!

 

Untethered Planets Are Not What They Seem

  

Two seemingly unrelated recent discoveries were analyzed by a group at NASA with some surprising and disturbing implications.  These discoveries came from a new trend in astronomy and cosmology of looking at “voids”.

The trend is to look at areas of the sky that appear to have nothing in them.  This is being done for three reasons.

  

(1) In 1995, the Hubble was trained on what was thought to be an empty hole in space in which no objects had ever been observed.  The picture used the then recently installed Wide Field and Planetary Camera 2 to make a Deep Field image.  The image covered 2.5 arc minutes – the width of a tennis ball as seen from 100 meters away.  The 140.2-hour exposure resulted in an image containing more than 3,000 distinct galaxies at distances out to 12.3 billion light years.  All but three of these were unknown before the picture was taken.  This was such an amazing revelation that this one picture has its own Wikipedia page (Hubble Deep Field), and it altered our thinking for years to come.

  

(2) The second reason is that this image, and every other image or close examination of a void, has produced new and profound discoveries.  Radio frequencies, infrared, UV and every other wavelength for which we have cameras, filters and sensors have turned up new findings every time they are pointed at a “void”.

  

(3) In general, the fields of astronomy and cosmology have become crowded, with many more researchers than there are telescopes and labs to support them.  Hundreds of scientists in these fields do nothing but comb through the images and data of past collections to find something worth studying.  Much of that data has been reexamined hundreds of times, and there is very little left to discover in it.  The examinations of voids have created a whole new set of raw data that can be studied from dozens of different perspectives by all these extra scientists looking to make a name for themselves.

  

To that end, Takahiro Sumi and his team at Osaka University recently examined one of these voids and found 10 Jupiter-sized planets, but the remarkable aspect is that these planets were “untethered” to any star or solar system.  They were not orbiting anything.  In fact, they seem to be moving in random directions at relatively high speeds, and 8 of the 10 are actually accelerating.  Takahiro Sumi speculates that these planets might be the result of a star that exploded or collided, but that is just a guess.

  

In an unrelated study at the radio telescope array in New Mexico, Albert Swenson and Edward Pillard announced that they had found a number of anomalous RF and infrared emissions coming from several areas of space that fall into the category of voids.  One of the void areas with the strongest signals was the same area that Takahiro Sumi had studied.  Their study was unique because they cross-indexed a number of different wavelength measurements of the same area and found very weak, moving points of infrared emission that appeared to be stronger sources of RF emission, with an unidentified energy emission in the 1.5 to 3.8 MHz region.  The study produced a great deal of measurement data but drew very few conclusions about what it meant.

  

The abundance of raw data was ripe for one of those many extra grad students and scientists to examine and correlate to something.  The first to do so was Eric Vindin, a grad student doing his doctoral thesis on the arctic aurora.  He was examining something called the MF-bursts in the auroral roar – an attempt to find the explicit cause of certain kinds of auroral emissions.  What he kept coming back to was a high frequency component present in the spectrograms of magnetic field fluctuations that were expressed at significantly lower frequencies.  Here is part of his conclusion:

  

“There is evidence that such waves are trapped in density enhancements in both direct measurements of upper hybrid waves and in ground-level measurements of the auroral roar for an unknown fine frequency structure which qualitatively matches and precedes the generation of discrete eigenmodes when the Z-mode maser acts in an inhomogeneous plasma characterized by field-aligned density irregularities.  Quantitative comparison of the discrete eigenmodes and the fine frequency structure is still lacking.”

  

To translate that into something real people can understand, Vindin is saying that he found a highly modulated high frequency (HF) signal – what he called a “fine frequency structure” – embedded in the fluctuations of the earth’s magnetic field that make up and cause the background emissions we know as the Auroral Kilometric Radiation (AKR).  He can cross-index these modulations of the HF RF to changes in the magnetic field on a gross scale but has not been able to identify the exact nature or source of these higher frequencies.  He did rule out that the HF RF was coming from Earth or its atmosphere.  He found the signals in the range from 1.5 to 3.8 MHz.  Vindin also noted that the HF RF emissions were very low power compared to the AKR and occurred slightly in advance of (sooner than) the changes in the AKR.  His study, published in April 2011, won him his doctorate and a job at JPL in July of 2011.

  

Vindin did not extrapolate his findings into a theory or even a conclusion, but the obvious implication is that these very weak HF RF emissions are causing the very large magnetic field changes in the AKR.  If that is true, then it is a cause-and-effect relationship that has no known correlation in any other theory, experiment or observation.

  

Now we come back to NASA and two teams of analysts led by Yui Chiu and Mather Schulz, working as hired consultants to the Deep Space Mission Systems (DSMS) within the Interplanetary Network Directorate (IND) of JPL.  Chiu’s first involvement was to publish a paper critical of Eric Vindin’s work.  He went to great lengths to point out that the relatively low frequency range of 1.5 to 3.8 MHz is so low in energy that it is highly unlikely to have extraterrestrial origins, and it is even more unlikely that it would have any effect on the earth’s magnetic field.  This was backed by a lot of math and physics showing that such a low frequency could not travel from outside the earth and still have enough energy to do anything – much less alter a magnetic field.  He showed that there is no known science that would explain how an RF emission could alter a magnetic field.  Chiu pointed out that NASA uses UHF and SHF frequencies with narrow beam antennas and extremely slow modulations to communicate with satellites and space vehicles because it takes the higher energy in those much higher frequencies to travel the vast distances of space.  It also takes very slow modulations to send any reliable intelligence on those frequencies, which is why it often takes several days to send a single high resolution picture from a space probe.  Chiu also argued that the received energy from our planetary vehicles is about as strong as a cell phone transmitting from 475 miles away – a power level in the nanowatt range.  Unless the HF RF signal originated from an unknown satellite, it could not have come from some distant source in space.
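
Chiu’s “cell phone at 475 miles” comparison is easy to sanity-check with the Friis free-space equation. The handset power, frequency and receiving dish gain below are my assumptions; the point is only the order of magnitude:

```python
import math

# Friis free-space received power: Pr = Pt * Gt * Gr * (lambda / (4*pi*d))^2.
# A ~0.5 W handset at 900 MHz received by a large ground dish is my own
# assumed setup, not a figure from Chiu's paper.

def friis_received_power_w(pt_w, gt, gr, freq_hz, distance_m):
    lam = 3.0e8 / freq_hz
    return pt_w * gt * gr * (lam / (4 * math.pi * distance_m)) ** 2

pt = 0.5                      # watts, handset-class transmitter (assumed)
gt = 1.0                      # handset antenna, roughly isotropic
gr = 2.5e5                    # ~54 dBi, a large ground dish (assumed)
d = 475 * 1609.34             # 475 miles in meters

pr = friis_received_power_w(pt, gt, gr, 900e6, d)
print(f"{pr:.2e} W  ({pr*1e9:.2f} nW)")
# ~1.5e-10 W, a fraction of a nanowatt: the vanishingly small power level
# Chiu was talking about.
```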

  

The motivation for this paper by Chiu appears to have been a professional disagreement he had with Vindin shortly after Vindin came to work at JPL.  In October of 2011, Vindin published a second paper about his earlier study in which he addressed most of Chiu’s criticisms.  He was able to show that the HF RF signal was received by a polar orbiting satellite before it was detected at an earth-bound antenna array.  The antenna he was using was a modified facility that was once part of the Distant Early Warning (DEW) Line of massive (200-foot-high) movable dish antennas installed in Alaska.  The DEW Line signals preceded but appeared to be synchronized with the auroral field changes.  This effectively proved that the signal was extraterrestrial.

  

Vindin also tried to address the nature of the HF RF signal and its modulations.  What he described was a very unusual kind of signal that the military has been playing with for years.

  

In order to reduce the possibility of a radio signal being intercepted, the military uses something called “frequency agility”.  This is a technique that breaks the signal being sent into hundreds of pieces per second and transmits each piece on a different frequency.  The transmitter and receiver are synchronized so that the receiver jumps its tuning to match the transmitter’s changes in transmission frequency.  The hops look random, but they actually follow a coded algorithm.  Someone listening on any one frequency hears only background noise with very minor, meaningless blips, clicks and pops.  Because a listener has no way of knowing where the next bit of the signal is going to be transmitted, it is practically impossible to tune a receiver fast enough to intercept these transmissions.  Frequency agile systems are actually in common use – you can even buy cordless phones that use the technique.
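
A toy sketch of the basic idea, assuming an arbitrary channel plan: both ends derive the same pseudo-random hop list from a shared seed, so the receiver always knows where to listen next while an eavesdropper on any single channel hears only fragments:

```python
import random

# Toy frequency-hopping sketch. The 50-channel plan and the hop count are
# arbitrary assumptions; the point is that transmitter and receiver compute
# the same pseudo-random sequence from a shared seed.

CHANNELS_MHZ = [round(902.0 + 0.5 * k, 1) for k in range(50)]  # assumed plan

def hop_sequence(shared_seed: int, hops: int):
    """Deterministic hop list that both ends can compute independently."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS_MHZ) for _ in range(hops)]

tx_hops = hop_sequence(shared_seed=0xC0FFEE, hops=8)
rx_hops = hop_sequence(shared_seed=0xC0FFEE, hops=8)
print(tx_hops)                # looks random to anyone without the seed
print(tx_hops == rx_hops)     # True: both ends stay in lockstep
```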

  

As complex as frequency agility is, there are very advanced, very wide-band receivers and computer processors that can reconstruct an intelligible signal out of the chopped-up emission.  For that reason, the military has been working on the next version of agility.

  

In a much more recent and much more complicated use of frequency agility, they are attempting to combine it with agile modulation.  This method breaks up both the frequency and the modulation of the signal intelligence into agile components.  The agile frequency modulation shifts from the base frequency to each of several sidebands and to first- and second-tier resonance frequencies, while also shifting the intermediate frequency (IF) up and down.  The effect is to make it essentially impossible to locate or detect any signal intelligence at all in an intercepted transmission.  It all sounds like random background noise.

  

Although it is currently impossible to reconstruct a frequency-agile signal that is also modulation agile (called “FMA”), it is possible, with very advanced processors, to detect that an FMA signal is present.  This uses powerful math algorithms running over several hours of recorded data on powerful computers, with the analysis resolved many hours after the end of the transmission.  And even then, it can only confirm to a high probability that an FMA signal is present, without providing any indication of what is being sent.
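
As a loose illustration of how you can confirm that something is there without recovering any content, here is a plain energy detector. It is my own stand-in, certainly not the classified processing being described: it simply compares the energy of a recording against what noise alone would produce:

```python
import numpy as np

# Toy "presence without content" detector: compare the total energy of a
# recording against the distribution expected from noise alone. All the
# parameters and the buried test tone are arbitrary assumptions.

rng = np.random.default_rng(7)
n = 100_000
noise_only = rng.normal(0, 1.0, n)
hidden_signal = 0.3 * np.sin(2 * np.pi * 0.173 * np.arange(n))  # well below the noise
recording = noise_only + hidden_signal

def band_energy(x: np.ndarray) -> float:
    return float(np.mean(x ** 2))

# Baseline: energy statistics of many noise-only captures of the same length.
baseline = np.array([band_energy(rng.normal(0, 1.0, n)) for _ in range(200)])
threshold = baseline.mean() + 3 * baseline.std()

print(band_energy(recording) > threshold)   # True: "something is there"
# ...but nothing in this test tells you what the buried signal says.
```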

  

This makes it ideal for encrypted messages, but even our best labs have only been able to make it work when the transmitter and the receiver are physically wired together so they can synchronize their agile reconstruction correctly.  The NRL is experimenting with mixes of FMA and non-FMA, digital and analog emissions all being sent at the same time, but it is years away from being able to deploy a functional FMA system.

  

I mention all this because, as part of Vindin’s rebuttal, he was able to secure the use of the powerful NASA signal processing computers to analyze the signals he recorded and confirmed that there is a 91% probability that the signal is FMA.  This has, of course, been a huge source of controversy because it appears to indicate that we are detecting a signal that we do not have the technology to create.  The NRL and NSA have been following all this with great interest and have independently confirmed Vindin’s claims.

  

What all this means is that we may never be able to reconstruct the signal to the point of understanding or even seeing text, images or other intelligence in it, but what it does absolutely confirm is that the signal came from an intelligent being and was created specifically for interstellar communications.  There is not even a remote chance that anything in the natural world, or in the natural universe, could have created these signals through natural processes.  It has to be the deliberate creation of intelligent life.

  

What came next was a study by Mather Schulz that is and has remained classified.  I had access to it because of my connections at NRL and because I have a lot of R&D history in advanced communications techniques.  Schulz took all these different reports and assembled them into a very logical, sequential argument that these untethered planets were not only the source of the FMA signals, but that they are not planets at all.  They are planet-sized spaceships.

  

Once he came to this conclusion, he went back to each of the contributing studies to find further confirming evidence.  In the Takahiro Sumi study from Osaka University and in the Swenson and Pillard study, he discovered that the infrared emissions were much stronger on the side away from the line of travel and that there was a faint trail of infrared emissions behind each of the untethered planets.

  

This would be consistent with the heat emissions from some kind of propulsion system pushing the spaceship along.  What form of propulsion would be capable of moving a planet-sized spaceship is unknown, but the fact that we can detect the IR trail at such great distances indicates that it is producing a very large trail of heated or ionized particles extending a long way behind the moving planets.  The fact that he found this on 8 of the 10 untethered planets was encouraging, but he also noted that the two that do not show these IR emissions are the only two that are not accelerating.  This would also be consistent with the heat emissions of a propulsion system that is turned off while the spaceship coasts.

  

The concept of massive spaceships has always been one of the leading solutions to sub-light-speed interstellar travel.  The idea has been called the “generation ship” – a vessel capable of supporting a population large enough, and for a long enough period of time, to allow multiple generations of people to survive in space.  This would allow survival for the decades or centuries needed to travel between star systems.  Once a planet is freed from the gravitational tether to its star, it would be free to move in open space.  Replacing the light and heat from their sun is not a difficult technological problem when you consider the possible use of thermal energy from the planet’s core.  Of course, a technology that has achieved this level of advanced science would probably find numerous other viable solutions.

  

Schulz used a combination of the Very Large Array of interferometric antennas at Socorro, New Mexico, along with the systems at Pune, India and Arecibo, PR, to collect data, and then had the bank of Panther Cray computers at NSA analyze it to determine that the FMA signals were coming from the region of space that exactly matched the void measured and studied by Takahiro Sumi.  NSA was more than happy to let Schulz use their computers to prove that they had not dropped the ball and allowed someone else on earth to develop a radio signal that they could not intercept and decipher.

  

Schulz admitted that he cannot narrow the detection down to a single untethered planet (or spaceship), but he can isolate it to the immediate vicinity of where they were detected.  He also verified the Swenson and Pillard finding that other voids had similar but usually weaker readings.  He pointed out that there may be many more signal sources from many more untethered planets but that, outside of these voids, the weak signals are being deflected or absorbed by intervening objects.  He admitted that finding the signals in other voids does not confirm that they also contain untethered planets, but he pointed out that it does not rule out the possibility either.

  

Finally, Schulz set up detection apparatus to simultaneously measure the FMA signals using the network of worldwide radio telescopes while taking magnetic, visual and RF readings from the Auroral Kilometric Radiation (AKR).  He got the visual images from synchronized high speed video recordings from the ISIS in cooperation with the Laboratory for Planetary Atmospherics out of the Goddard SFC.

  

Getting NSA’s help again, he was able to identify a very close correlation among these three streams of data, showing that it was indeed the FMA signal originating from these untethered planets that preceded, and apparently caused, corresponding changes in the lines of magnetic force made visible in the AKR.  The visual confirmation was not based on shape or form changes in the AKR but on color changes that occurred at a much higher frequency than the apparent movements of the aurora lights.  What was being measured was the increase and decrease in the flash rate of individual visual spectrum frequencies.  Despite the high speed nature of the images, they were still only able to pick up momentary fragments of the signal – sort of like catching a single frame of a movie every 100 or 200 frames.  Despite the intermittent nature of the visual measurements, what was observed synchronized exactly with the other magnetic and RF signals – giving a third source of confirmation.  Schulz offered only some very shallow speculation that the FMA signal is, in fact, a combined agile frequency and modulation signal that uses both frequencies and modulation methods far beyond our ability to decipher.
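
The lead/lag comparison at the heart of that analysis is just cross-correlation. Here is a minimal sketch with invented stand-in data and an assumed 12-sample delay, purely to show the method:

```python
import numpy as np

# Minimal lead/lag sketch: cross-correlate two synchronized data streams and
# read off which one leads. The synthetic "FMA envelope" and "AKR response"
# are invented stand-ins with an assumed 12-sample lag.

rng = np.random.default_rng(3)
n, true_lag = 2_000, 12
fma_envelope = rng.normal(0, 1, n)
akr_response = np.roll(fma_envelope, true_lag) + 0.5 * rng.normal(0, 1, n)

# Full cross-correlation; the index of the peak gives the relative delay.
xcorr = np.correlate(akr_response - akr_response.mean(),
                     fma_envelope - fma_envelope.mean(), mode="full")
estimated_lag = int(np.argmax(xcorr)) - (n - 1)
print(estimated_lag)   # 12 -> the "FMA" stream leads the "AKR" stream
```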

  

This detection actually supports a theory that has been around for years – that a sufficiently high frequency, modulated in harmonic resonance with the atomic-level vibrations of the solar wind (the charged particles streaming out of the sun that create the aurora at the poles), can be used to create harmonics at very large wavelengths – essentially creating slow condensations and rarefactions in the AKR.  This is only a theory based on some math models that seem to make it possible, but the control of the frequencies involved is far beyond any known or even speculated technology, so it is mostly dismissed.  Schulz mentions it only because it is the only known reference to a possible explanation for the observations.  It gains some validity because the theory’s math model maps exactly onto the observations.

  

Despite the low energy, low frequency signal, and despite the fact that we have no theory or science that can explain it, the evidence is conclusive.  These untethered planets appear to be moving under their own power and are emitting some unknown kind of signal that is somehow able to modulate our entire planet’s magnetic field.  The conclusion that they are actually very large spaceships, containing intelligent life capable of creating these strange signals, seems unavoidable.

  

The most recent report from Schulz was published in late December 2011.  The fallout and reactions to all this are still in their infancy.  I am sure they will not make this public for a long time, if ever.  I have already seen and heard about efforts to work on this at several DoD and private classified labs around the world.  I am sure this story is not over.

  

We do not yet know how to decode the FMA signals, and we don’t have a clue how they affect the AKR, but our confirmed and verified observations point to only one possible conclusion – we are not alone in the universe, and whoever is out there has vastly better technology and intelligence than we do.

  

The Fuel you have never heard of….

 

I have always been fascinated by the stories of people who have invented some fantastic fuel only to have the major oil companies suppress the invention by buying the patent or even killing the inventor.  The fascination comes from the fact that I have heard these stories all my life but have never seen any product that might have been invented by such a person.  That either proves that the oil companies have been successful at suppressing the inventors… or it proves that such stories are simply lies.  Using Plato – my research software tool – I thought I would give it a try.  The results were far beyond anything I could have imagined.  I think you will agree.

 

I set Plato to the task of finding what might be changed in the fuel of internal combustion engines to produce higher miles per gallon (MPG).  It really didn’t take long to return a conclusion: if the burned fuel releases more energy, it gives better MPG for the same quantity of fuel.  It further discovered that if the explosion of the fuel releases its energy in a shorter period of time, it works better, but it warned that engine timing becomes very critical.

 

OK, so what I needed was a fuel or a fuel additive that would make the spark plug ignite a more powerful but faster explosion within the engine.  I let Plato work on that problem for a weekend and it came up with nitroglycerin (Nitro).  It turns out that Nitro works precisely because its explosion is so fast.  It is also a reasonable chemical additive because it is made of only carbon, hydrogen, nitrogen and oxygen, so it burns without smoke and releases only those elements or their compounds into the air.

 

Before I had a chance to worry about the sensitive nature of Nitro, Plato provided me with the answer to that as well.  It turns out that ethanol or acetone will desensitize Nitro to workable safety levels.  I used Plato to find the formulas and safe production methods for Nitro and decided to give it a try.

 

Making Nitro is not hard, but it is scary.  I decided to play it safe and set up my mixing lab inside a large walk-in freezer.  I only needed to keep it below 50F and above 40F, so the freezer was actually off most of the time and stayed cool from the ice blocks in the room.  The cold makes the Nitro much less sensitive, but only if you don’t let it freeze.  If you do, it can go off just from thawing out.  My plan was to make a lot of small batches to keep it safe, until I realized that even a very small amount was enough to blow me up if it ever went off.  So I just made up much larger batches and ended up with about two gallons.

 

I got three gas engines – a lawn mower, a motorcycle and an old VW Bug.  I got some 87 octane gas with 10% ethanol in it.  I also bought some pure ethanol additive and put that in the mix.  I then added the Nitro.  The obvious first problem was determining how much to add.  I decided to err on the side of caution and began with a very dilute mixture – one part Nitro to 300 parts gas.  I made up just 100 ml of the mixture and tried it in the lawn mower.  It promptly blew up.  Not an actual explosion, but the mixture burned so hot and powerfully that it burned a hole in the top of the cylinder, broke the crankshaft and burned off the valves.  That took less than a minute of running.

 

I then tried a 600:1 ratio in the motorcycle engine and it ran for 9 minutes on the 100 ml.  It didn’t burn up, but I could tell very little else about the effects of the Nitro.  I tried it again with 200 ml and determined that it was running very hot and probably would have blown a ring or head gasket if I had run it any longer.  I had removed the motorcycle engine from an old motorcycle for this experiment, but now I regretted that move: I had no means to check torque or power.  The VW engine was still in the Bug, so I could actually drive it.  This opened up all kinds of possibilities.

 

I gassed it up and drove it with normal gas first.  I tried going up and down hills, accelerations, high speed runs and pulling a chain attached to a tree.  At only 1,400 cc, it was rated at just 40 HP when new, and it had much less than that now on normal gas.

 

I had a Holley carb on the engine, tweaked it to a very lean mixture and lowered the Nitro ratio to 1,200:1.  I had gauges for oil temperature and pressure and had vacuum and fuel flow sensors to help monitor real-time MPG.  It ran great and outperformed all of the gas-only driving tests.  At this point I knew I was onto something, but my equipment was just too crude to do any serious testing.  I used my network of contacts in the R&D community and managed to find some guys at the Army vehicle test center at the Aberdeen Test Center (ATC).  A friend of a friend put me in contact with the Land Vehicle Test Facility (LVTF) within the Automotive Directorate, where they had access to all kinds of fancy test equipment and tons of reference data.  I presented my ideas and results so far, and they decided to help me using “Special Projects” funds.  I left them with my data and they said come back in a week.

 

A week later, I showed up at the LVTF.  They said welcome to my new test vehicle – a 1998 Toyota Corona.  It has one of the few direct injection engines with a very versatile air-fuel control system.  They had already rebuilt the engine using ceramic-alloy tops on the cylinder heads that gave them much greater temperature tolerance and increased the compression ratio to 20:1.  This is really high, but they said my data supported it.  Their ceramic-alloy cylinder tops actually form the combustion chamber and create a powerful vortex swirl for the injected ultra-lean mixture.

 

We started out with the 1,200:1 Nitro ratio I had used, and they ran the Corona engine on a dynamometer to measure torque (ft-lbs) and power (HP).  The test pushed the performance almost off the charts.  We repeated the tests with dozens of mixtures, ratios, air-fuel mixes and additives.  The end results were amazing.

 

After a week of testing, we found that I could maintain higher than normal performance using a 127:1 air-fuel ratio and a 2,500:1 Nitro-to-gas ratio if the ethanol blend is boosted to 20%.  The mixture was impossible to detonate without the compression and spark of the engine, so the Nitro formula was completely safe.  The exhaust gases were almost totally gone – even the NOx emissions were so low that a catalytic converter was not needed.  Hydrocarbon exhaust was down in the range of a hybrid.  The usual problem of slow burn in ultra-lean mixtures was gone, so the engine produced improved power well up into high RPMs, and the whole engine ran at lower temperatures for the same RPM across all speeds.  The real thrill came when we repeatedly measured MPG values in the 120 to 140 range.
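
For what it’s worth, here is the blending arithmetic for those ratios, treating them as simple volume fractions (my simplification) for an arbitrary 10-gallon tank:

```python
# Blending arithmetic using the ratios quoted above (2,500:1 Nitro to gas
# by volume, 20% ethanol). Treating the ratios as simple volume fractions
# is my own simplification; the tank size is arbitrary.

def blend_for_tank(tank_gal: float, ethanol_frac: float, nitro_ratio: float):
    """Return gallons of gasoline, ethanol and Nitro for one tank."""
    gasoline = tank_gal * (1 - ethanol_frac)
    ethanol = tank_gal * ethanol_frac
    nitro = gasoline / nitro_ratio
    return gasoline, ethanol, nitro

gasoline, ethanol, nitro = blend_for_tank(10.0, 0.20, 2_500)
print(f"{gasoline:.2f} gal gasoline, {ethanol:.2f} gal ethanol, "
      f"{nitro * 3785:.1f} ml Nitro")
# -> 8.00 gal gasoline, 2.00 gal ethanol, about 12 ml of Nitro per tank
```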

 

The rapid release and fast burn of the Nitro allowed the engine to run an ultra-lean mixture that gave it great mileage without any of the usual limitations of lean mixtures.  At richer mixtures, the power and performance were well in excess of what you’d expect from this engine.  It would take a major redesign to make an engine strong enough to withstand the torque and speeds possible with this fuel at a normal 14:1 air-fuel mixture.  Using my mix ratio of 120+:1 gave me slightly improved performance at better than 140 MPG.  It worked.  Now I am waiting for the buyout or the threats from the gas companies.

 

July 2010 Update:

 

The guys at ATC/LVTF contacted my old buddies at DARPA and some other tests were performed.  The guys at DARPA have a test engine that allows them to inject high energy microwaves into the combustion chamber just before ignition and just barely past TDC.  When the Nitro ratio was lowered to 90:1, the result was a 27-fold increase in released energy.  We were subsequently able to reduce the quantity of fuel used to a level that created the equivalent of 394 miles per gallon in a 2,600 cc 4-cylinder engine.  The test engine ran for 4 days at a speed and torque load equal to 50 miles per hour – and did it on 10 gallons of gas – a test equivalent of just less than 4,000 miles!  A new H2 Hummer was rigged with one of these engines and the crew took it for a spin – from California to Maine – on just over 14 gallons of gas.  They are on their way back now by way of northern Canada and are trying to get 6,000 miles on less than 16 gallons.
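
A quick cross-check of those claims using only the quoted numbers (the California-to-Maine road distance of roughly 3,000 miles is my own estimate):

```python
# Cross-checking the mileage claims; the ~3,000-mile coast-to-coast road
# distance is my own rough assumption, the rest comes from the text above.

print(394 * 10)            # test equivalent: 3,940 miles on 10 gallons
print(round(3_000 / 14))   # coast-to-coast run: ~214 MPG implied
print(round(6_000 / 16))   # return-trip goal: 375 MPG
```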

 

The government R&D folks have pretty much taken over my project and testing, but I have been assured that I will be both compensated and protected.  I hope Obama is listening.

New Power Source Being Tested in Secret

The next time you are driving around the Washington DC Beltway, the New York State Thruway, I-80 through Nebraska, I-5 through California or any of a score of other major highways in the US, you are part of a grand experiment to create an emergency source of electric power.  It is a simple concept but complex in its implementation and revolutionary in its technology.  Let me explain from the beginning…

We cannot generate electricity directly.  We have to use chemical, mechanical, solar or nuclear energy and then convert that energy to electricity – often making more than one conversion, such as nuclear to heat to steam to mechanical to electrical.  These conversion processes are inefficient and expensive to do in large quantities.  The plants are also very difficult to build because of environmental groups, inspections, regulations, competition with utilities and investment costs.

The typical warfare primer says to target the infrastructure first.  Wipe out the utilities and you seriously degrade the enemy’s ability to coordinate a response.  The US government has bunkers and stored food and water, but it has to rely mostly on public utilities or emergency generators for electricity.  Since the public utilities are also a prime target, that leaves only the emergency generators, but they require large quantities of fuel that must be stored until needed.  A 10-megawatt generator might use 2,500 gallons of fuel per day.  That mandates a huge storage tank of fuel that is also in demand by cars and aircraft.  This is not the kind of tenuous link of survivability that the government likes to rely on.

The government has been looking for years for ways to bypass this reliance on utilities outside its control, and this sharing of fuel, with the goal of creating a power source that is exclusively theirs and can be counted upon when all other forms of power have been destroyed.  They have been looking for ways to extend their ability to operate during and after an attack for years.  For the past ten years or more, they have been building and experimenting with one that relies on you and me to create their electricity.

The theory is that you can create electricity from a small source of very powerful energy – such as nuclear – or from a very large source of relatively weak energy – such as water or wind.  The difficulty, complexity and cost rise sharply as you go from the weak energy sources to the powerful ones.  You can build thousands of wind generators for the cost of one nuclear power plant.  That makes the weak energy sources more desirable to invest in.  The problem is that it takes a huge amount of a weak energy source to create any large volume of electricity.  Also, the nature of a clandestine source of power means that they can’t put up a thousand wind generators or build a bunch of dams.  The dilemma comes in trying to balance the high power needs with a low cost while keeping it all hidden from everyone.  Now they have done all that.

If you have traveled much on interstate highways, you have probably seen long sections of the highway being worked on, in which they cut rectangular holes (about 6 feet long by 18 inches wide by nearly four feet deep) in the perfectly good concrete highway and then fill them up again.  In some places, they have done this for hundreds of miles – cutting these holes every 20 to 30 feet – tens of thousands of holes throughout the interstate highway system.  Officially, these holes are supposed to fix a design flaw in the highway by adding missing thermal expansion sections to keep the highway from cracking up during very hot or very cold weather.  But that is not the truth.

There are three problems with that logic.  (1) The highways already have expansion gaps built into the design.  These are the black lines – filled with compressible tar – that create those miles of endless “tickety-tickety-tick” sound as you drive over them.  The concrete is laid down in sections with as much as 3 inches between sections, filled in with tar.  The entire sections expand and contract with the weather and squeeze the tar up into those irritating repeating bumps.  No other thermal expansion is needed.

(2) The holes they cut (using diamond saws) are dug out to below the gravel base and then refilled with poured concrete.  When they are done, the only sign it happened is that the new concrete is a different color.  Since they refilled the hole with the same kind of concrete they took out, the filling has the same thermal expansion qualities as the original, so there is no gain.  If there were thermal problems before, they would have the same problems after the “fix”.  It makes no sense.

(3) Finally, the use of concrete in our US interstate system was based on the design of the Autobahn in Germany, which the Nazis built prior to WWII.  Dozens of years of research were done on the Autobahn, and more on our own highway system, before we built the 46,000 miles of the Eisenhower National System of Interstate and Defense Highways, as it was called back in 1956.  The need for thermal expansion was well known and designed into every mile of highway and every section of overpass and bridge ever built.  The idea that they forgot that basic aspect of physics and construction is simply silly.  Ignoring, for a moment, that this is a highly unlikely design mistake, the most logical fix would have been to simply cut more long, narrow lines into the concrete and fill them with tar.  Digging an 18-inch wide by 6-foot long by 40-inch deep hole is entirely unneeded.

OK, so if they are not for thermal expansion, what are they?  Back in 1998, I was held up for hours outside of North Platte, Nebraska, while traffic was funneled into one lane because they were cutting 400 miles of holes in Interstate 80.  It got me thinking, and I investigated off and on for the next 7 years.  The breakthrough came when I made contact with an old retired buddy of mine who worked in the now defunct NRO – National Reconnaissance Office.  He was trying to be cool but told me to take a close look at the hidden parts of the North American Electric Reliability Corporation (NERC).  I did.

It took several years of digging, and I found out NERC has its fingers in a lot of pots that most people do not know about, but when I compared their published annual budget (they are a nonprofit corporation) with budget numbers by department, I found about $300 million unaccounted for.  As I dug further, I found out they get a lot of federal funding from FERC and the Department of Homeland Security (DHS).  The missing money soon grew to over $900 million because much of it was “off the books”.

In all this digging, I kept seeing references to Alqosh.  When I looked it up, I found it was the name of a town in northwest Iraq where it was believed that Saddam had a secret nuclear power facility.  That intelligence was proved wrong during the inspections that led up to the second Iraq war, but the name kept appearing in NERC paperwork.  So I went looking again and found that it is also a derivation of an Arabic name meaning “the God of Power”.  It suddenly fell into context with the references I had been seeing.  Alqosh is not a place but the name of the project or program that had something to do with these holes being cut in the highway.  Now I had something to focus on.

As I dug deeper, some of it by means I don’t want to admit to, I found detailed descriptions of Alqosh within NERC and its links to DoD and DHS.  Here’s what I found.

The concrete that was poured into those holes was a special mixture containing a high concentration of piezoelectric crystals.  These are rocks (quartz), ceramics and other materials that produce electricity when they are physically compressed.  The mix was enhanced with some custom designed ceramics that also create electricity.  The exact mixture is secret, but I found out that it contains berlinite, quartz, Rochelle salt, lead zirconate titanate, polyvinylidene fluoride, sodium potassium niobate and other ingredients.

The mix of quartz, polymers and ceramics is unique, with a very specific intent in mind.  Piezoelectric materials produce electricity when they are compressed or squeezed – this is called the direct piezoelectric effect (it is how a phonograph needle works).  But they also have exactly the opposite behavior: the lead zirconate titanate crystals and other ceramics in the mix expand and contract in the presence of electricity – the reverse piezoelectric effect.  This is how tiny piezoelectric speakers work.

The concrete mix was designed, in part, to create electricity when compressed by a car passing over it.  Some of these materials react immediately and some delay their response by up to several seconds, creating a sort of damped wave of voltage spikes passing back and forth through the material over a period of time.  While some of the mix is creating electricity, other parts – the specially designed ceramics – flex in physical size when they sense the electricity from the quartz materials.  As with the quartz crystals, some of these ceramics delay their responses by up to several seconds, like time-release capsules.  The flexing ceramics, in turn, continue the vibrations that cause the quartz to keep creating electric pulses.

The effect is sort of like pushing a child’s swing.  The first push or vibration comes from the car passing.  That, in turn, creates electricity that makes some of the materials flex and vibrate more.  This push creates more electricity, and the cycle repeats in an escalating manner until, like the swing, it is producing high waveforms of peak power spikes.

The end result of this unique mix of chemicals, crystals, ceramics and polymers is what is called a piezoelectric transformer, which uses acoustic (vibration) coupling – initiated by a passing car – to step up the generated voltages by over 1,500-to-1 at a resonance frequency of about 1 megahertz.  A passing car initiates a series of high voltage electrical pulses that develop constructive resonance with the pressure of subsequent passing cars, so that the voltage peaks of this resonance can top out at or above 12,700 volts and then taper off in a constant frequency, decreasing amplitude damped wave until regenerated by the next car or truck.  Multiple-axle vehicles can produce powerful signals that can resonate for several minutes.
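
Here is a toy model of that “child’s swing” build-up: a resonator that rings down after each excitation and gets topped back up by the next axle. The ring-down time, kick voltage and axle timings are arbitrary assumptions chosen only to show how overlapping kicks stack up:

```python
import numpy as np

# Toy build-up model: each passing axle adds an exponentially decaying
# voltage "kick", and closely spaced kicks stack on top of one another.
# The ring-down time, kick size and axle timings are arbitrary assumptions.

tau_s = 2.0                   # assumed ring-down time constant of one patch
kick_v = 4_000                # assumed peak voltage from one axle strike

def envelope(t_s: np.ndarray, kick_times_s: list) -> np.ndarray:
    """Voltage envelope: every axle adds an exponentially decaying kick."""
    v = np.zeros_like(t_s)
    for t_kick in kick_times_s:
        active = t_s >= t_kick
        v[active] += kick_v * np.exp(-(t_s[active] - t_kick) / tau_s)
    return v

t = np.linspace(0.0, 10.0, 1_000)             # ten seconds of traffic
axle_strikes = [0.5, 1.2, 1.9, 2.5, 3.0]      # assumed vehicle/axle timings
print(round(envelope(t, axle_strikes).max()))  # just over 12,000 V once kicks pile up
```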

Once all this electricity is created, the re-bar at the bottom of the hole also has a special role to play.  It contains a special spiral coil of wire hidden under an outer layer of conducting polymers.  By careful design of the coil and insulating wires, these re-bars create a simple but highly effective “resonant tank circuit”.  The simplest form of a tank circuit is a coil of wire and a single capacitor.  The inductance of the coil and the capacitance of the capacitor determine the resonant frequency of the circuit.  Every radio and every transmitter ever made has had a tank circuit in it of one sort or another.

The coils of wire on the re-bar create an inductor, and the controlled conducting material in the polymer coatings creates a capacitor, tuned to the same resonant frequency as the piezoelectric transformer.  This makes for a highly efficient harmonic oscillator that can sustain the “ring” (series resonance voltage magnification over a protracted time) for several minutes even without further injection of energy.  In other words, a passing car can cause one of these concrete patches to emit a powerful high frequency signal for as much as 10 to 20 minutes, depending on the size, weight and speed of the vehicle.
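
To put rough numbers on the “tuned re-bar” idea, the resonant frequency of a simple LC tank is f = 1/(2π√(LC)). Assuming a coil inductance of about 25 microhenries (my guess, not a measured value), the matching capacitance for the ~1 MHz resonance mentioned above works out as follows:

```python
import math

# LC tank resonance: f = 1 / (2*pi*sqrt(L*C)). The 25 microhenry coil is
# my own assumption; the ~1 MHz target comes from the description above.

def resonant_frequency_hz(l_henry: float, c_farad: float) -> float:
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

def capacitance_for(f_hz: float, l_henry: float) -> float:
    return 1.0 / ((2 * math.pi * f_hz) ** 2 * l_henry)

L = 25e-6                                   # assumed coil inductance, henries
C = capacitance_for(1e6, L)
print(f"{C*1e12:.0f} pF")                            # about 1,000 pF needed
print(f"{resonant_frequency_hz(L, C)/1e6:.2f} MHz")  # sanity check: 1.00 MHz
```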

The final element of this system is the collection of that emitted RF energy.  In some areas, such as the Washington DC Beltway, there is a buried cable running parallel to the highway that is tuned to receive this electrical energy and pass it into special substations and junction boxes that integrate the power into the existing grid.  These special substations and junction boxes can also divert both this piezoelectric energy and grid power into lines that connect directly to government facilities.

In more rural areas, the power collection is done by a receiver that hides in plain sight.  Almost all power lines have one or more heavy cables running along the uppermost portion of the poles or towers.  These topmost cables are not connected to the power lines; they are most often used as lightning protection and are tied to earth ground.

Along those power lines that parallel highways that have been “fixed” with these piezoelectric generators, this line has been replaced with a specially designed cable that acts as a very efficient tuned antenna to gather the EMF and RF energy radiated by the modified highway re-bar transmitters.  This special cable is able to pick up the radiated piezoelectric energy from as far away as 1 mile.  In a few places, this specialized cable has been incorporated into the fence that lines both sides of most interstate highways.  Whether by buried cable, power line antenna or fence-mounted collector, the thousands of miles of these piezoelectric generators pump their power into a nationwide electric power grid without anyone being aware of it.  The combined effect of the piezoelectric concrete mix, the re-bar lattice and the tuned resonant pickup antennas is to create a highly efficient RF energy transmitter and receiver with a power output that is directly dependent upon the vehicle traffic on the highway.  For instance, the power currently created by rush hour traffic along the Washington DC beltway is unbelievable.  It is the most effective and efficient generator in the US, creating as much as 1.6 megawatts from the inner beltway alone.

The total amount of power being created nationwide is a secret, but a report that circulated within DARPA following the 9/11 attacks said that 67 hidden government bunker facilities were brought online and fully powered in preparation to receive evacuated government personnel.  The report, which was focused on continuity of services, mentioned that all 67 facilities, with a total demand of an estimated 345 megawatts, “used 9% of the available power of Alqosh”.  By extrapolation, that means the Alqosh grid can create about 3,800 megawatts, or about the power of two large nuclear power plants (the arithmetic is sketched after the list below). So why is it secret?  Three reasons.  (1) The government doesn’t want the bad guys or the American public to know that we can create power from our highways.   They don’t want the bad guys to know because they don’t want it to become a target.  They don’t want the general public to know because they frankly do not want to share any of this power with the public – even if commercial utility power rates get extraordinarily high and fossil fuel or coal pollution becomes a major problem.

(2) Some of the materials in the concrete mix are not exactly healthy for the environment, not to mention that millions of people have had their travel plans messed up by the highway construction.  Rain runoff and mixing with hydrocarbons are known to create some pretty powerful toxins – in relatively small quantities, but the effects of long-term exposure are unknown. (3) It’s not done yet.  The system is still growing but it is far from complete.  A recent contract was released by NERC to install “thermal expansion” sections into the runways of the 24 largest airports in the US.  There is also a plan to expand into every railroad, metro, commuter train, subway and freight train system in the US.  A collaboration between DARPA, NERC and DHS recently produced a report that targets 2025 to complete the Alqosh grid with a total capacity of 26,000 megawatts of generating power.
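
As promised above, here is the extrapolation behind the Alqosh capacity figure, as a quick Python check that simply takes the DARPA report’s numbers at face value:

bunker_demand_mw = 345.0        # stated demand of the 67 bunker facilities
fraction_of_grid = 0.09         # "used 9% of the available power of Alqosh"

total_capacity_mw = bunker_demand_mw / fraction_of_grid
print(f"Implied Alqosh capacity: {total_capacity_mw:,.0f} MW")   # roughly 3,800 MW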

The task of balancing the high power needs of the government with a low cost while keeping it all hidden from everyone has been accomplished.  The cost has been buried in thousands of small highway and power line projects spread out over the past 10 years.  The power being created will keep all 140 of the hidden underground bunkers fully powered for weeks or months after a natural disaster or terrorist attack has destroyed the utilities.  The power your government uses to run its lights and toasters during a serious national crisis may just be power that you created by evacuating the city where the crisis began.

A few of you doubt me?!!

 

I have gotten a number of comments about the science of my stories. Since I spent most of my life in hard-core R&D, science is my life and the way I talk. To read my stories, you have to be willing to either accept that the science behind them is fact or go look it up yourself. You will quickly find that there is damn little fiction, if any, in my stories. I take exception to people who say the science is wrong, so I’m going to self-analyze one of the stories I have gotten the most questions about.

 

In the story about the accidental weapon discovery, I described a C-130 with a multi-bladed prop – see US Patent 4171183. Also see http://usmilnet.com/smf/index.php?topic=9941.15 and http://www.edwards.af.mil/news/story.asp?id=123089573. As I said in the story, the long, telescoping blade is still classified, so there are no public pictures of it.

 

The ATL (airborne tactical laser) program is being run out of the ACTD program by the DUSD(AS&C), an office within OSD. The ACTD program is where the original project was started in cooperation with the Naval Research Lab (NRL). The original objective was to improve the speed and range of long-distance transport by aircraft. It followed research showing that if the variable pitch of a prop were extended further outward from the hub, efficiency would improve.

 

Since a prop is a lifting wing that lifts horizontally, it must maintain a constant angle of attack (AoA) over the entire length of the blade. AoA is the angle between the chord line of the wing and the relative flow of air over the blade. Since the relative speed of the prop changes as a function of distance from the hub, the blade must twist, its pitch angle changing as you move further out the blade. This was the essential secret that the Wright Brothers discovered in 1902 and is the basic difference between a screw propeller and a wing propeller.
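
To see why the blade must twist, here is a minimal Python sketch of the textbook relationship: the helix (advance) angle at each blade station is atan(V / ωr), so holding a constant angle of attack means the geometric blade angle has to change as you move outward. The airspeed, RPM and AoA values below are arbitrary examples, not numbers from any actual program.

import math

def blade_angle_deg(radius_ft, airspeed_fps, rpm, aoa_deg):
    """Geometric blade angle = advance (helix) angle + desired angle of attack."""
    omega = rpm * 2.0 * math.pi / 60.0            # rotation rate, rad/s
    tangential_speed = omega * radius_ft          # blade speed at this station, ft/s
    helix_angle = math.degrees(math.atan2(airspeed_fps, tangential_speed))
    return helix_angle + aoa_deg

# Example: 300 ft/s forward speed, 1,000 RPM, 4 degree angle of attack
for r in (2, 4, 6, 8, 10):
    print(f"r = {r:2d} ft  blade angle = {blade_angle_deg(r, 300.0, 1000.0, 4.0):5.1f} deg")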

What was discovered in the development of vertical wind turbines is that blades as long as 50 feet but as thin as 5 inches could be made more efficient and with higher torque than conventional blades. In wind power, the added torque allows you to turn a larger generator, but there it is the wind passing over the blade that makes it spin. In an aircraft, the engine spins the blade to make it take a bigger (more efficient) bite out of the air, which would mean being able to create more thrust or to operate at a higher altitude (in thinner air). Do a Google search for “Vertical Wind Turbine”. You’ll see designs like the WindSpire, which is 30 feet tall with blades less than 8 inches wide and is efficient enough to produce about 2,000 kilowatt-hours per year, can operate in 8 MPH winds and can handle 100 MPH gusts.

 

The guys at NRL took that and reversed it into an efficient propeller design for the C-130 in the hopes that it would give a similar improved performance. The carbon-fiber telescoping blade was just a natural extension of that thinking.

 

As to the laser beam creating a wide range of frequencies, that is also easy to explain. The Doppler Effect says that an increase in wavelength is received when a source of electromagnetic radiation is moving away from the observer, and a decrease in wavelength is received when the source is moving toward the observer. This is the basis for the Red Shift (redshift) used by astronomers to examine the movement of stars. It is the reason a train’s whistle sounds higher in pitch as it comes toward you and lower in pitch as it passes and moves away from you. This is basic high school physics.

 

As the laser beam was rotated, any observer in a lateral position to the aircraft would see one part of the rotating beam rotating toward them (for example, the part above the prop hub) and another part rotating away from them (in this example, the part below the prop hub). The bottom part would have a redshift to its visible light because it is moving away from the observer. The part of the prop that is moving the slowest, near the hub, would have the least redshift, but as the observer looked at the light coming from the laser beam further out on the prop, the speed would increase and the redshift would be greater, until the Doppler shift was so great that the light shifted to a frequency below the visible spectrum. This would move the light energy into the infrared, but as the emitting point along the beam moved faster and faster, it would shift lower and lower. Since the laser beam extended for miles and points along the beam were traveling at speeds from a few hundred MPH to thousands of miles per second, the redshift along the beam path constantly moved down the electromagnetic spectrum past radar, TV, short wave radio and down into the ELF range.
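
For those who want to check the shift I am describing, here is a small Python sketch of the relativistic longitudinal Doppler formula applied to a point on the receding half of the beam. The source wavelength (a green 532 nm laser) and the sample speeds are my own illustrative assumptions; the point is simply that the received frequency drops as the emitting point recedes faster.

import math

C = 299_792_458.0                 # speed of light, m/s

def receding_frequency(f_source_hz, speed_m_s):
    """Relativistic longitudinal Doppler shift for a source moving directly away."""
    beta = speed_m_s / C
    return f_source_hz * math.sqrt((1.0 - beta) / (1.0 + beta))

f_green = C / 532e-9              # about 5.6e14 Hz, assumed 532 nm source
for v in (1e3, 1e6, 1e7, 1e8, 0.9 * C):
    f = receding_frequency(f_green, v)
    print(f"v = {v:10.3e} m/s  received f = {f:10.3e} Hz")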

 

That portion of the prop above the hub was doing the same thing but it was moving toward the observer in the lateral position and so it was giving a blue shift – toward higher frequencies. As the light frequencies compressed into the blue and ultraviolet range, it became invisible to the naked eye but it still was emitting energy at higher and higher frequencies – moving into X-rays and gamma rays at speeds toward the end of the beam.

 

The end result of this red and blue shift of the light from the laser beam is a cone of electromagnetic radiation emanating from the hub of each of the two engines (on the C-130) or the one engine on the retrofitted 707. This cone radiates out from the hub with a continuously changing frequency of electromagnetic emissions as it widens out behind the aircraft. The intensity of the emissions is directly proportional to the power of the laser and the speed of the prop, so the highest and lowest frequencies are the most intense. These also happen to be the most destructive.

 

This is just one story that is firmly based in real and actual science. You have to be the judge if it is true or not but I defy you to find any real flaw in the logic or science. As with all of my stories, I don’t talk about space cadet and tin foil hat stuff. I have 40 years of hard core R&D experience along with four degrees in math, computer modeling, physics and engineering so I’m not your usual science writer but whether it is science fiction or not is up to you to decide. Just don’t make that decision because you don’t believe or understand the science – that is the part that should not be questioned. If you doubt any of it, I encourage you to look it up. It will educate you and allow me to get these very important ideas across to people.

Perpetual Motion = Unlimited Power….Sort of…

The serious pursuit of perpetual motion has always intrigued me. Of course I know the basic science of conservation of energy and the complexities of friction, resistance, drag and less-than-100% mechanical advantage that dooms any pursuit of perpetual motion to failure…but still, I am fascinated at how close some attempts have come. One college professor built a four-foot-tall Ferris wheel and enclosed its drive mechanism in a box around the hub. He said it was not perpetual motion but that it had no inputs from any external energy source. It did, however, make a slight sound out of that box. The students were to try to figure out how the wheel was turning without any apparent outside power source. It turned without stopping for more than two years and none of his students could figure out how. At the end of his third year, he revealed his mechanism. He was using a rolling-marble design that was common for perpetual motion machines but that also had been proven not to work. What he added was a tiny IC-powered microcircuit feeding a motor that came out of a watch. A watch! The entire four-foot-high Ferris wheel needed only the additional torque of a watch motor to keep it running for nearly four years!

This got me to thinking that if I could find a way to make up that tiny little additional energy input, I could indeed make perpetual motion. Unlike most of my other ideas, this was not something that could easily be simulated in a computer model first. Most of what keeps a perpetual motion design from working is unknown until you actually build it. I also knew that the exchange of energy to and from mechanical motion was too inefficient to ever work, so I concentrated on other forms of energy exchange. Then I realized I had already solved this – back in 1963!

Back in 1963, I was a senior in high school. Since 1958, I had been active in science fairs and wanted my last one to be the best. To make a long story short, I won the national science fair that year – sponsored by Bell Telephone. My project, “How Far Will Sound Travel?”, showed that the accepted theory that sound diminishes by one over the square of the distance (the inverse square law) is, in fact, wrong. Although that may occur in the absolutely perfect environment of a point source of emission in a perfectly spherical and perfectly homogeneous atmosphere, it never occurs in the real world.

I used a binary counting flashing-light circuit to time sound travel, and a “shotgun” microphone with a VOX to trigger a measurement of the speed and power of the sound under hundreds of conditions. This gave me the ability to measure to 1/1000th of a second and down to levels that were able to distinguish between the compressions and rarefactions of individual sound waves. Bell was impressed and I got a free trip to the World’s Fair in 1964 and to Bell Labs in Murray Hill, NJ.

As a side project of my experiments, I attempted to design a sound laser – a narrow beam of sound that would travel great distances. I did. It was a closed ten-foot-long Teflon-lined tube that contained a compressed gas – I used Freon. A transducer (a flat speaker) at one end would inject a single wavelength of a high-frequency sound into the tube. It would travel to the other end and back. At exactly 0.017621145 seconds, the transducer would pulse one more cycle at exactly the same time that the first pulse reflected and returned to it. This was timed to exactly coincide with the first pulse so that it was additive, making the first pulse nearly double in amplitude. Since the inside of the tube was smooth and kept at a constant temperature, the losses in one pass through the tube were almost zero. In less than 5 minutes, these reinforcing waves would build the moving pulse to the point where nearly all of the gas in the tube was contained in the single wave front of one pulse. This creates all kinds of problems, so I estimated that it would only be about 75% efficient, but that was still a lot.
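
The pulse timing quoted above is just the round-trip travel time of sound in the tube. Here is a short Python back-calculation; working backward from the 0.017621145-second figure gives the effective sound speed implied for the gas (the tube length is the ten feet described above).

FEET_TO_METERS = 0.3048

def round_trip_time(length_m, sound_speed_m_s):
    """Time for one pulse to travel down the tube and back."""
    return 2.0 * length_m / sound_speed_m_s

tube_length_m = 10.0 * FEET_TO_METERS        # the ten-foot tube described above
quoted_time_s = 0.017621145                  # timing quoted in the text

implied_speed = 2.0 * tube_length_m / quoted_time_s
print(f"Implied speed of sound in the gas: {implied_speed:.0f} m/s")   # about 346 m/s
print(f"Check: t = {round_trip_time(tube_length_m, implied_speed):.9f} s")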

Using a specially shaped and designed series of chambers at the end opposite the transducer, I could rapidly open that end and emit the pulse in one powerful burst that would be so strong that the wave front of the sound pulse would be visible, and it would remain cohesive for hundreds of feet. It was dense enough that I computed it would have just over 5 million pascals (Pa) of pressure, or about 750 PSI. The beam would widen to a square foot at about 97 meters from the tube. This is a force sufficient to knock down a brick wall.

One way to make the kind of transducer that I needed for this sound laser was to use a carefully cut crystal or ceramic disc. Using the property of reverse piezoelectric effect, the disc will uniformly expand when an electric field is applied. A lead zirconate titanate crystal would give me the right expansion while also being able to respond to the high frequency. The exit chambers were modeled after some parabolic chambers that were used in specially made microphones used for catching bird sounds. The whole thing was perfectly logical and I modeled it in a number of math equations that I worked out on my “slip stick” (slide rule).

When I got to Bell Labs, I was able to get one scientist to look at my design and he was very intrigued with it. He said he had not seen anything like it but found no reason it would not work. I was asked back the next day to see two other guys that wanted to hear more about it. It was sort of fun and a huge ego boost for me to be talking to these guys about my ideas. In the end, they encouraged me to continue thinking and that they would welcome me to work there when I was old enough.

I did keep thinking about it and eventually figured out that if I could improve the speed of response of the sensors and transducer, I could shorten the tube to inches. I also wanted more power out of it, so I researched which gas had the greatest density. Even this was not enough power or speed, so I imagined using a liquid – water – but it turns out that water molecules are like foam rubber and, after a certain point, they absorb the pulses and energy too much. The next logical phase of matter was a solid, but that meant that there was nothing that could be emitted. I was stumped…for a while.

In the late 1970’s I figured, what if I extended the piezoelectric transducer crystal to the entire length of the tube – no air – just crystal? Then place a second transducer at one end to pulse the crystal tube with a sound wave. As the wave travels the length of the crystal tube, the compressions and rarefactions of the sound wave pulse create stress or strain on the piezoelectric crystal, making it give off electricity by the direct piezoelectric effect. This is how a phonograph needle works as it rides the grooves of the record.

Since the sound pulse will reflect off the end of the tube and bounce back, it will create this direct piezoelectric effect hundreds of times – perhaps thousands of times – before it is reduced by the transfer into heat. As with my sound laser, I designed it to pulse every single bounce to magnify the amplitude of the initial wave front but now the speed was above 15,000 feet per second so the pulses had to come every 0.0001333 seconds. That is fast and I did not know if current technology was up to the task. I also did not know what it would do to the crystal. I was involved in other work and mostly forgot about it for a long time.

In the late 1980’s, I now was working for DARPA and had access to some great lab equipment and computers. I dug out my old notes and began working on it again. This time I had the chance to actually model and create experiments in the lab. My first surprise was that these direct piezoelectric effects created voltages in the hundreds or even thousands of volts. I was able to get more than 10,000 volts from a relatively small crystal (8 inches long and 2 inches in diameter) using a hammer tap. I never thought it would create this much of a charge. If you doubt this, just take a look at the Mechanism paragraph in Wikipedia for Piezoelectricity.

When I created a simple prototype version of my sound laser using a tube of direct piezoelectric crystal, I could draw off a rapid series of pulses of more than 900 volts using a 1/16th watt amplifier feeding the transducer. Using rectifiers and large capacitors, I was able to save this energy and charge some ni-cads, power a small transmitter and even light a bulb.
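
To give a feel for how much energy each captured pulse represents, here is a small Python sketch using the standard capacitor energy formula E = ½CV². The capacitor value is an illustrative assumption, not the one I actually used.

def capacitor_energy_joules(capacitance_f, volts):
    """Energy stored in a capacitor charged to a given voltage: E = 0.5 * C * V^2."""
    return 0.5 * capacitance_f * volts ** 2

C_assumed = 100e-6     # assumed 100 microfarad storage capacitor
V_pulse = 900.0        # pulse voltage mentioned above

energy = capacitor_energy_joules(C_assumed, V_pulse)
print(f"Energy per full charge at {V_pulse:.0f} V: {energy:.1f} J")   # about 40 J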

This was of great interest to my bosses and they immediately wanted to apply it to war fighting. A friend of mine and I cooked up the idea of putting these crystals into the heels of army boots so that the pressure of walking created electricity to power some low-power devices on the soldier. This worked great, but the wires, converter boxes, batteries, etc. ended up being too much to carry for the amount of power gained, so it was dropped. I got into other projects and I dropped it also.

Now flash forward to about 18 months ago and my renewed interest in perpetual motion. I dug out my old notes, computer models and prototype from my DARPA days. I updated the circuitry with some newer, faster IC circuits and improved the sensor and power take-off tabs. When I turned it on, I got sparks immediately. I then rebuilt the power control circuit and lowered the amplitude of the input sound into the transducer. I was now down to using only a 9-volt battery and about 30 mA of current drain to feed the amplifier. I estimate it is about a 1/40th-watt amplifier. The recovered power was used to charge a NiMH pack of 45 penlight cells of 1.2 volts each.

Then came my epiphany – why not feed the amplifier with the charging battery! DUH!

I did and it worked. I then boosted the amplifier’s amplitude, redesigned the power take-off circuit and fed it into a battery that was banked to give me a higher power density. It worked great. I then fed the battery back into an inverter to give me AC. The whole thing is about the size of a large briefcase and weighs about 30 pounds – mostly from the batteries and transformers. I am getting about 75 watts out of the system now but I’m using a relatively small crystal. I don’t have the milling tools to make a larger properly cut crystal but my modeling says that I can get about 500 watts out of a crystal of about 3 inches in diameter by about 12 inches long.

I call my device “rock power” and when I am not using it for power in my shop or on camping trips, I leave it hooked up to a 60 watt bulb. That bulb has been burning now for almost 7 months with no signs of it diminishing. It works! Try it!!!

Dark Matter’s Dark Secret

We all know how Edwin Hubble made his measurements of the movement of distant objects and concluded that the universe was expanding.  This caused researchers to wonder if we would expand forever (open), re-collapse (closed) or reach some future steady state (flat).   It also implied that we must have been smaller in the past, and therefore the big bang theory was supported.  What did happen and what will happen depends a lot on the average density of the universe and the exact rate of expansion.

We are in the middle of the Sloan Digital Sky Survey to quantify these values in greater detail, but we already know enough to say that the visible matter in the universe is not enough to account for what we observe.  In fact, the “missing mass problem” has been around since 1933 and follows from the application of the virial theorem to galactic movements.
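
The “missing mass problem” comes straight out of the virial theorem. For readers who want to see the flavor of the argument, here is a minimal Python estimate, M ≈ σ²R/G, using round numbers of the sort Zwicky had for the Coma cluster; the velocity dispersion and radius below are approximate illustrative values, not a fit to modern data.

G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2
SOLAR_MASS = 1.989e30               # kg
MPC = 3.086e22                      # meters per megaparsec

def virial_mass(velocity_dispersion_m_s, radius_m):
    """Order-of-magnitude virial mass estimate: M ~ sigma^2 * R / G."""
    return velocity_dispersion_m_s ** 2 * radius_m / G

sigma = 1.0e6                       # ~1,000 km/s line-of-sight dispersion (assumed)
radius = 3.0 * MPC                  # ~3 Mpc cluster radius (assumed)

mass = virial_mass(sigma, radius)
print(f"Virial mass estimate: {mass / SOLAR_MASS:.1e} solar masses")
# Far more than the mass visible in the cluster's galaxies - the "missing mass".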

As science and math have done many times before, we speculate on a solution and then go looking for proof that that solution exists.  So we created the “dark matter component” and its counterpart – dark energy.  Since this is entirely an imaginary creation, we have given it properties that fit current observations – chiefly that it is entirely invisible even though, together with dark energy, it makes up 96% of the universe.  It emits and reflects no electromagnetic radiation, so we have no idea what it looks like.

Despite supporting this imaginary construct, cosmologists and astronomers will admit that they cannot suggest what extrapolation of any known physics could account for something that is responsible for so much mass in the universe and yet cannot be detected by any normal observation.

Our only inference that it is there comes from observed gravitational effects on visible matter.    In other words, we have a hole in a theory that we have filled with something that cannot be seen or detected by any means.  We also have a detected gravitational anomaly in a group of formulas that predict various galactic motions.  We have neatly solved both problems by linking them to unknown and imaginary attributes of dark matter.

Ah, but math and observations, in this case, are not consistent, because we do not see the same level of correlation between galactic rotation curve anomalies and the gravitational implications from galaxies that have a large visible light component.  We also do not see a uniform distribution of dark matter throughout space, or even within galaxies.  The amount of gravitational anomaly attributed to dark matter does not seem to scale consistently with the number of stars in a galaxy.

In fact, in globular clusters and galaxies such as NGC 3379, there seems to be little or no dark matter at all, while other galaxies have been discovered (VIRGOHI21) that appear to be almost entirely dark matter.   Another recent study showed that there are 10 to 100 times fewer small galaxies than predicted by the dark matter theory of galaxy formation.  We can’t even agree on whether there is any dark matter in our own Milky Way galaxy.

So this imaginary solution has become a unifying concept among most astrophysicists but only if you keep allowing for a long list of inconsistencies and logical anomalies that get dismissed by saying that we don’t know what dark matter is. 

Fortunately, the flip side of cosmology is quantum physics, and scientists in that field have not been satisfied with expressions of human ignorance and have tried to seek out a plausible answer.  Unfortunately, they have not had a lot of success when candidate solutions are put under intense analysis.   Direct detection experiments such as DAMA/NaI and EGRET have mostly been discounted because they cannot be replicated (shades of Cold Fusion).  The neutrino was a candidate for a while but has mostly been discounted because it moves too fast.  In fact, most relativistic (fast-moving) particles cannot be used because they do not account for the clumps of dark matter observed.   Studies and logic have ruled out baryonic particles, electrons, protons, neutrons, neutrinos, WIMPs, and many others.

Up to this point, all this is historical fact and can be easily confirmed.  What we have is a typical scientific anomaly in which a lot of people really fear thinking outside the box.  The box of traditional and institutional thinking.  All of the particle solutions sought so far are simply looking at the heaviest or most massive particles known and asking if that could be dark matter. 

Despite the thinking that dark matter itself is imaginary, why not expand the possibilities to some truly wild ideas?  What if there are black holes the size of atoms but with the gravitational pull of a pound of lead?  Would the solar wind of bright galaxies blow such small objects away from the galaxy center?  That would account for the reduced dark matter detected in high light-to-mass galaxies.  The math to show whether this is possible can be applied to include or dismiss this idea very quickly, and perhaps that has been done.  But there is an even better candidate.

Dark Matter and even Dark Energy can be accounted for by the presence of the Higgs Field and the Higgs Boson.   This takes the dark matter search out of the realm of finding an object or particle that exhibits unseen mass and puts it into the realm of being caused by the force of gravity itself. 

The Higgs field is a non-zero vacuum expectation value that permeates every place in the universe at all times and plays the role of giving mass to every elementary particle, including the Higgs boson itself.   If the detected gravitational anomalies are caused by changes in the source of mass itself, then a number of the problems and inconsistencies of dark matter are resolved.

The Higgs field can impart mass to other elementary particles and thus, by extension, to the macro-matter that eventually creates the observed massive gravitational fields around certain galaxies.  The variation in the effects of dark matter might simply be the non-homogeneous distribution of the Higgs field itself or of the particles it acts upon.

This is somewhat supported by what we know that the Higgs field does to elementary particles.  For instance, the Top Quark is an elementary particle that is about the same size as the electron and yet it has over 200,000 times the mass of the electron as a result of the effects of the Higgs field.  We do not know why this occurs but it is firmly established that the Higgs field does NOT impart mass based on size, atomic weight, volume, spin or any other known characteristic of the fundamental particles that we currently know about.  It isn’t a big stretch of the imagination to think that there might be other kinds of interactions between the Higgs field and other matter that is not linear or homogeneous.
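
The mass ratio quoted for the Top Quark is easy to verify from published particle masses; here is a quick Python check using the commonly cited values:

TOP_QUARK_MASS_GEV = 172.76        # published top quark mass, GeV/c^2
ELECTRON_MASS_GEV = 0.000511       # electron mass, GeV/c^2 (0.511 MeV)

ratio = TOP_QUARK_MASS_GEV / ELECTRON_MASS_GEV
print(f"Top quark / electron mass ratio: {ratio:,.0f}")   # well over 200,000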

Some of the components of the Higgs field, specifically the Goldstone bosons, are infraparticles that interact with soft photons, which might account for the reduced dark matter detected in high light-to-mass galaxies.  I still like the idea that the high light-to-mass galaxies have a low dark matter component because of the solar wind blowing away the particles that the Higgs field acts on, or thinning the Higgs field itself.  What specific component of the “solar wind” is responsible for this outward pressure or push is unknown, but such an action does fit observations.

Since we have confirmed the existence of the Higgs boson and the Higgs field, it is perhaps possible to predict what kind of repulsive force it might impart but an extension of the scalar field theory for dark energy might imply applicable consequences for electroweak symmetry breaking in the early universe or some variation of the quintessence field theory.  What we call vacuum energy, the quintessence field, dark energy and the Higgs Field might actually be all variations of the same theme.  One interesting coincidence is that all of these have been speculated to have been created at about the same time period after the big bang event, i.e. very early on in the expansion phase.

The bottom line is that we have far too many reasonable and logical opportunities to explore alternative concepts to explain the gravitational anomalies of the virial theorem to galactic movements without resorting to the distraction of creating a terra incognita label for our lack of imagination and knowledge.

Accidental Weapon Discovery!

A New C-130 Prop leads to an incredible weapon of awesome power! I could get put in jail or worse for revealing this but someone has to stop them before they kill a lot of people.  I have to start from the beginning to make you understand how some very complex physics is now being developed into the worst weapon man has ever conceived.  It all started with a recent discovery by the NRL that has the military scientists scrambling to explain what they have seen.  Here is the whole story:

The venerable C-130 is a time-tested four-engine, turbo-prop aircraft design that we have simply not been able to improve upon.  It is rugged, can land on almost anything, carries tons of weight and has very long range.  Each new model upgrade has been given a new letter and we are now up to the C-130V.  There is, however, a new prototype being tested at NRL.  The soon-to-be C-130W was to have only two high-efficiency, high-torque, high-bypass turbo-prop engines driving a new synchronized two-bladed, 21-foot prop through a new fluid version of a variable ratio transmission (FVRT).  A two-engine, two-bladed prop would seem to be a throwback in design, since one of the previous NRL C-130 prototypes used a ten-bladed prop, but this new blade is very special and, in combination with the FVRT, was expected to be a better design.

The prop blade is very thin and lightweight – only about 4 inches at its widest point – and spins at an incredible 45,000 RPM because the high-torque engines are able to achieve incredible gear ratios with the FVRT.  The blade telescopes outward from the hub after takeoff to reach its full 21 feet without banging into the ground.  A thin carbon-fiber cable running down the center of the blade, controlled from within the hub, holds the blade to the hub and allows it to be extended and still flex as the speed increases.  At the tips and along the blade are tiny electric circuits that send back data about speed, temperature, humidity, air density and other conditions that let the computers tweak the blade shape and engine speeds.  The reason all this is important will become clear shortly.

The light weight, shape and design make for a blade that can withstand the very high speeds and still function.  In fact, it is the blade that actually sets the peak speed of the prop, not the engine.  The mix of torque, FVRT and air density causes the blade to spin up to a maximum speed and then hold a constant speed for a given set of conditions.  As the air thins at high altitude, the prop spins faster, but eventually it is supposed to reach a maximum speed – or so they thought – let me explain.

As with most props, it does not move the aircraft by pushing air out the back like a jet but by pulling it forward using the forward horizontal “lift” of the prop blade.  At slow speeds, the shorter blade twists to give a greater angle of attack and bite into more air, but at its fully extended length and highest speeds, something different happens.

At full speed, the fully extended tips of the blades are moving at 19 miles per second – that is more than 67,000 miles per hour, or about 1.1% of the speed of light.  It has long since passed Mach 1; in fact it is moving at Mach 61!  In actuality, Mach is meaningless at these speeds because, if the plane is not moving very fast, the blade spins in a near vacuum.  The air does not have enough time to close in on the space where the blade was before the other blade spins into the same space.  In fact, the maximum blade speed was initially thought to occur while the aircraft was on the ground, creating this near vacuum of very little air density so the blade spun faster.  Then, as it began to move faster after takeoff, the speed of rotation actually slowed and then sped up again.  Once it is moving, the blade works better and better as the speed of the aircraft increases, and so far they have not found a limit on how fast it will go.  They have taken a test C-130 up to just under Mach 1 but were afraid that its wings and large tail could not structurally withstand the turbulence of trans-sonic flight.  Newer carbon-fiber swept wings, along with a new nose and tail, are being developed.

The new engines, blade and transmission are really interesting, but that is not the cause of all the buzz.  What is causing a stir is the unusual speed the prop has attained in full-speed flight.  The blade started going much faster than anyone had predicted.  In fact, in a test in 2002, as its speed passed 67,000 RPM, it was shut down manually by the pilot for fear of it flying apart.  They have spent nearly two years trying to figure out why it is doing this.
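
The tip speed figure is straightforward to reproduce from the blade length and RPM given above; here is a short Python check of just that arithmetic:

import math

BLADE_LENGTH_FT = 21.0       # fully extended blade length (hub to tip)
RPM = 45_000.0               # stated rotation speed

def tip_speed_mph(radius_ft, rpm):
    """Tangential speed of the blade tip in miles per hour."""
    feet_per_minute = 2.0 * math.pi * radius_ft * rpm
    return feet_per_minute * 60.0 / 5280.0

mph = tip_speed_mph(BLADE_LENGTH_FT, RPM)
print(f"Tip speed: {mph:,.0f} mph ({mph / 3600.0:.1f} miles per second)")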

High-speed rotation was the intent of the design all along, so the bearings, shaft, hub strength and mounting are all very robust and were found to be safe up to 100,000 RPM in test-bed tests.  Simulated testing based on materials strength indicates that it should be safe up to 150,000 RPM, but the entire airframe has not been tested to that level of vibration or speed.

Recently, a special test-bed flight platform was fixed to the nose of an old KC-135 (Boeing 707 jet airframe).  A specially designed blade was built with extra strength and mounted on a new engine with an FVRT.  The blade was made so it telescoped down in size until it was extended to its full length in flight.  This allowed the KC-135 to take off.   It also allowed for the testing of props longer than 21 feet.  Once at an altitude of 55,000 feet – normally too high for most prop planes – the new prop and engine were started and gradually run up to maximum.  It passed 50,000 RPM within a few minutes and continued to climb in speed for more than an hour.  The KC-135 increased in speed with the assist of the prop until the pilot shut off his four jet engines and let the aircraft be driven only by the one prop in the front.  The speed dropped initially but eventually was back to its former speed and accelerating – passing 500 knots within another hour.  High-speed cameras were aimed at the prop from several angles on the KC-135’s wing and blade-tip data was recorded.  After flying more than 4 hours at over 500 knots with a steady increase in prop speed, the pilot brought all the jet engines back online, spun down the prop and retracted it for landing.

Upon examination of the camera footage and blade-tip data, they think they have discovered the reason for the over-speed prop.  The end of the prop was slowly changing shape in a totally unexpected way.  More specifically, it appeared to be bending and flowing backward, as if it were trailing a ribbon behind the blade tip.  As the speed increased, the blade appeared to get shorter and shorter while the part that trailed behind got longer and longer – as if it were bending backward.

This visual evidence is counter to everything known about metal in the presence of this much centrifugal force.  The spinning blade should have such a huge force pulling outward that nothing should be bending – especially at right angles to the prop and parallel to the line of flight.  Finally an explanation was found: the tip of the prop is traveling at over 1% of the speed of light, and that gives it a different temporal (time) relationship to the rest of the aircraft.  Time slows down as you approach the speed of light, so the tip of the prop is actually in an earlier time than the rest of the aircraft.  Even at 1% of the speed of light, there is a measurable and visible difference.  I have not done the math, but the Lorentz-Einstein math says that relativistic time stops at the speed of light, so if we assume that we get 1% time-space distortion at 1% of the speed of light, we can see and calculate the prop distortions.

At 500 MPH, their analysis shows that the tips of the prop are moving at around 100,460 MPH in a circle and also moving forward at 733 feet per second.  If the very tip is actually not in the same time as the rest of the prop, and time is distorted by 1%, then it will appear to be about 7 feet (88 inches) behind (slower than) the rest of the prop.  In other words, what their high-speed cameras showed was a prop that curved backward so that the very tip was stretched and bent back by 88 inches.  As the speed of the prop and the speed of the aircraft increased, the length of the curvature and the amount of the prop blade involved increased.  What the cameras were seeing is the same prop, but as it appeared slightly in the past and therefore slightly behind where the prop is now.  As more and more of the prop reached higher and higher speeds, it appeared to be further and further behind, giving the false impression that it was bending.  It isn’t actually bending; we are just seeing it where it was in a recent past time.

But that is not all.  The extensive instrumentation of the test prop showed that it actually got easier to spin as it went faster.  The outward centrifugal forces should make the prop appear to get heavier, but the instruments showed that the prop was actually getting lighter as it spun faster.  That is contrary to what was expected, since the centrifugal forces should have increased its apparent weight, but the measurements don’t reflect that.  An extensive study has revealed why.   Relative to measurements taken in the present time – relative to the aircraft – there is a portion of the prop blade that is effectively missing because it is spinning in a different time!  The tip is effectively not there NOW because it has moved into an earlier time.  Since it is effectively missing as far as present-time measurements of torque, air resistance, friction, momentum, inertia and centrifugal force are concerned, the engine can spin the remaining part of the prop blade more easily.  But as the engine sees less load, it can spin faster for the same amount of fuel, and as it goes faster, more of the blade moves into the past.  Essentially, as the prop speeds up, more and more of the prop is moving into an earlier and earlier time – so the prop continues to go faster and faster.

But now it gets really weird.  Since the Lorentz-Einstein math says that relativistic space-time effects are a constant and apply to everything, the guys at NRL hooked up a laser to the hub so that it pointed directly at a very tiny reflector attached at the very end of the prop.  The idea was to use this laser to measure the length distortions of the prop as a result of the centrifugal forces and, to a lesser extent, to measure the flex distortions.  Because of the dual carbon-fiber cables inside the blade and the carbon-fiber blade shell, it was not expected to show much distortion – and it didn’t…or rather, not in the way they thought it would.

Despite the visual distortion of the shape of the blade shown by the high-speed cameras and explained above, the laser showed that the blade was still straight with no bend or distortion.  This confirmed the idea that it was not actually changing shape but was changing time.  In a few frames of the camera shots, higher humidity made the beam visible, and it showed that the laser beam bent in exactly the same way that the prop did.  The laser beam curved backward and remained exactly parallel along the apparently bent prop blade.

Quite by accident, one of the reflectors broke off and the laser beam extended past the end of the prop – out into the air – and the cameras showed that it continued to bend until it disappeared completely.  At the time, this was seen as a curiosity, so a much more powerful laser was installed, the cameras were re-pointed and the experiment repeated.  This time the clearly visible beam curved back until it was parallel to the flight path and then, at about 4,200 feet out, it just disappeared.  At that point, it was so far in the past that it effectively was not of this time and could not be photographed.

Observable evidence was limited at this point, so some of the findings were a function of calculations.  The laser light was increased in power to try to create a more visible beam.  Theoretically, this beam extended well beyond the prop by miles – when stopped, it was measured to be still strong and visible as much as 30 miles from the aircraft.  At a rotation speed of 50,000 RPM, the laser light at 30 miles was moving at 84.45% of the speed of light, meaning that the light beam was experiencing an 84% distortion of the space-time continuum – changing a number of its propagation properties.

The accepted theory of science is that light is made up of both waves (like the frequencies of the colors of the spectrum) and particles (photons, which have no mass – or so we thought).  The laser light was a single-frequency light from a tunable distributed-feedback fiber laser having both thermal and piezoelectric control elements, giving a single frequency, wavelength and intensity.  Using such a beam of uniform intensity, high spatial purity and high conversion efficiency, we were able to use the light as a benchmark for precise spectral analysis.  What we expected was minor nano-level changes, but what happened was beyond anything we had imagined.

The time distortion created a cone-shaped vortex that extended back from the plane in both space and time – effectively blanketing the entire countryside with a virtually continuous flood of coverage from the beam.  The Doppler shift on the frequency of the laser light from the rotating beam altered the signal over the full range of rotation speeds from the hub to the outermost limits of the beam – dispersing a beam of mixed frequencies that ranged from its broadcast light frequency up to cosmic-particle frequencies.  The trailing edge of the blade was also emitting the light-beam signal, but the Doppler effect caused the shift to go down in frequency from its broadcast light frequency down thru all the radio frequencies and into ripples of induced direct current.  Essentially this cone-shaped beam was making a powerful sonic-boom kind of coverage, but instead of sound, the landscape was bathed in electromagnetic (EMF) radiation – radio waves and light of virtually every frequency from DC to light frequencies and beyond.

It was quite by accident that we discovered that some of the emissions were in the X-ray and gamma-ray range (measured using Compton scattering) and that the ionizing frequencies were having an effect on almost everything.  Upon exploring this further, we could not measure the shortest and longest wavelengths with the equipment we had.  After some calculations, we estimated that we were creating frequencies in the vicinity of the Planck length.  In other words, we were artificially creating the radiation frequencies that normally exist within and between atomic particles.  We could measure down to around 10 picometers, but it was obvious that there was something else there.  The energy needed for these intense particles would normally be in the range of 100 keV, but we were seeing them being created by this cone of EMF without the benefit of a massive accelerator.

The effects of the pass-over of all of these frequencies were startling.  Since there are harmonic frequencies for virtually everything in existence, and this plane was putting out every known frequency from DC to gamma rays and beyond, the destructive harmonic frequency of thousands, perhaps millions, of objects was reached.  In addition, the super-high-frequency, leading-edge (compressed) wave front was bombarding everything with intense, high-energy ionizing EMF radiation of a kind that has only rarely been seen in events like electron-positron annihilation and radioactive decay – on the order of 10 to 20 Sv (sieverts)!

After the fly-over (mostly in the Nevada desert north of Las Vegas), the ground under the flight path was found not to contain any hard rocks or crystals.  Only sandstone and sedimentary rocks.  Anything that was hard or crystalline was shattered into smaller pieces – dust that was finer than sand, more like talcum powder.  Compounds were broken into their component atomic parts and atomic bonding was being destroyed within molecules.  Anything that could flex, bend or absorb the intense vibrations was mostly unaffected, but even most of the plants were wilted and limp.  Those items that were hard broke apart.  The weak signal, large-area dispersal and the very short duration of exposure are the only things that kept everything from sand to mountains from crumbling.

Several military ground vehicles were in the area and were totally immobilized.  The steel in their vehicles was instantly weakened to the point of falling apart.  “It had the consistency and strength of a Ritz cracker,” said one of the workers.  Even one man’s diamond ring turned to fine, shiny dust.  The men were seriously injured by what appeared to be massive bleeding, but they are keeping all that very secret.  We, in the electronics room, suspect they were reacting to a massive dose of radiation in the X-ray and gamma-ray region.  I don’t even want to think about what happened to their teeth and bones.

Now NRL is discussing how to control the beam and its effects but is struggling with the relativistic effects of the time-space distortions and the control of the laser beam.  I hope you will take this seriously.  I could get in a lot of trouble for posting this.  If you doubt any of this, check it out.  Do the math.  Read up on the Lorentz-Einstein math, Doppler and the aerodynamics of prop blades.  The FVRT is not commercially available yet but will be soon.  The two-bladed prop is still hush-hush but can be found in dozens of aerodynamics books.
What is dangerous is that this plane, using this beam and prop as a weapon, could be made to increase the beam power and destroy everything under it and on either side of it for miles – rocks, glass, buildings, people – turning everything into a fine powdery dust or an oozing mass of jelly. We have enough weapons, and this is one that kills and destroys everything.  I can only hope that by letting people know what is happening, we can stop more deaths.

How to Travel at the Speed of Light…or Faster.

I wrote this as a dissertation for a Theoretical Physics seminar last year (Boston). It was peer reviewed but not printed or accepted for the seminar because my credentials were not sufficient to meet its standards. I did, however, get a positive commentary from Julius Wess, who was given an expanded copy of this article because of his interest in supersymmetry and his work at DESY.

Nov 16, 2004

Super String Theory Update

In articles that have appeared in various publications within the past few months (Jan to June 2004), the Super String Theory has been studied at a much greater level of detail. The Super String Theory is an extension of the Standard Model and is also known as the Supersymmetry Standard Model (SSM). The SSM is generally accepted as the most accurate model to date; however, it fails to explain mass fully, and it is the one theory that could join quantum mechanics and Newtonian physics into a Unified Field Theory – a single model of all matter and energy in the universe.

 

One aspect of the SSM is that it predicts that there is a pervasive cloud of particles everywhere in space. A hundred years ago, this might have been called the “ether” but we now refer to this as the Higgs Field and the particles in this field are called Higgs Bosons.

 

This cloud of very small particles (Higgs Bosons) creates a field (the Higgs Field) that interacts with matter to create mass and gravity. The existence of this field is predicted by the Lagrangian function of the Standard Model, which provides a description of the Higgs field as a quantum field. The Higgs field permeates all reality, and the interaction between this field and other matter (electrons, other bosons, etc.) is what creates the effect we call mass. A dense solid interacts with this field more than a less dense solid, creating all of the physical characteristics we attribute to mass – weight, momentum, inertia, etc.

 

The existence of the Higgs Field and the Higgs Boson was nearly proven in 2000, but the CERN synchrotron wasn’t quite powerful enough. Newer designs that are being built now should prove this concept within the next 3 to 5 years. To date, every physical prediction that we can test of the SSM and the implications of the Higgs field have been shown to be true. Let us speculate for a moment on the possibilities.

 

The Higgs field’s interaction with matter is what gives us the physical characteristics we attribute to mass – weight, momentum, inertia, etc. Imagine a jet aircraft flying in the air. As we move it faster, the air resists so that it takes a lot of energy to move a large (or heavy) object. If we go too fast, friction will heat up the surface of the wings as it does with re-entry vehicles from space. With jet aircraft, there is a sound barrier that builds up air in front of the aircraft and resists further increases in speed. The energy to move faster increases significantly as you get closer to the sound barrier and then when you exceed it, the energy to fly faster drops back down.

 

When you remove the air – such as in space – now you need very little energy from the engine to move very large objects or to go very fast. The resistance of the air and gravity is gone and the smallest push or thrust will make even a very large object move or go faster.

This is all fact. Now let’s speculate for a moment….

 

What if Michelson and Morley’s 1887 experiment to find the “ether” medium that light traveled on was right, but on such a different scale that they failed to detect what they were looking for? After all, we still are not certain exactly what the Higgs field is – but what if that is their ether? It would certainly explain the dual personality of light – acting like both waves and particles. It might also help explain dark matter and dark energy in the universe – but I digress. Let’s speculate for a moment and imagine that the Higgs field does exist (not that big a stretch of the imagination) and that it is the medium that keeps light from going any faster than….the speed of light.

 

Now suppose you didn’t have a Higgs field or could turn it off? If the Higgs field is not there at all, there is no mass, no momentum, no inertia and no weight. If an object has no mass or very little then even a small amount of thrust will push it very fast. Using an ion engine that has a low but very fast thrust, you should be able to push a massless object rapidly to the speed of light and perhaps beyond.

 

Think about it. Other than the E=mc² formula and the math that was derived from observations in a Higgs-field universe, why is there an upper limit on speed? Why can’t we go faster than light IF the mass is low and the thrust is fast enough? Suppose, like so many things in physics, the limits we have put on what is possible exist only because of the limits we have put on our thinking. In other words, if relativity is flawed or misunderstood with respect to its framing of the conditions of the math, then perhaps in a different frame of reference the math is wrong and it does not take infinite energy to push an object to the speed of light.

 

Now back to facts. A careful read of special relativity will reveal that Einstein said, “the speed of light is constant when measured in any inertial frame”. If, as has been speculated, the Higgs field is responsible for the physical characteristics we attribute to mass – weight, momentum, inertia, etc., and it was possible to somehow remove the Higgs field, and therefore remove the inertial frame, then even the special theory of relativity says that light speed is no longer a constant.

 

Does it make any sense to even consider this perspective in light of all of the experiments and math that have proven General and Special Relativity over and over again? The answer is yes, if you consider one thing. If the Higgs field permeates all reality, and the interaction between this field and other matter (electrons, other bosons, etc.) is what creates the effect we call mass, then how could we imagine that there is any other frame of reference? At the time of Einstein, the Higgs field was unknown, so the absence of the Higgs field could not even be speculated about. Now it can be. Or we can imagine frames of reference that might allow objects to alter, interact with or somehow bypass the effects of the Higgs field. For instance…..

 

We have speculated that there are particles called tachyons that have a LOWER limit of the speed of light, based on the assumption that they have no mass. If a space ship could be made to have no mass, what would its speed limit be?

 

If the Higgs field is now acting like air and creating a barrier that appears to us to be the limiting factor in the speed of light, then perhaps faster than light travel is possible in the absence of a Higgs field.

 

How do we get the Higgs field to go away? I don’t know, but in 25 or 50 or 75 years, we might know. One hint of a possibility is a startling new find called two-dimensional light. These are called plasmons, and they can be triggered when light strikes a patterned metallic surface. In March 2006, the American Physical Society gave demonstrations of plasmons and plasmonic science. They demonstrated, for instance, a plasmon microscope that was capable of imaging at scales smaller than the wavelength of the light being used to view the object. This is like seeing a marble by firing beach balls at it.

 

Using a combination of metamaterials, nano-optics, microwaves and plasmonics, David Schurig and David R. Smith at Duke University and their British colleagues created something (in October 2006) that can cause microwaves to move along and around a surface. The effect is exactly like a Klingon cloaking device from Star Trek or like Harry Potter’s Cloak of Invisibility. This is not speculation; they have done it. Similar work at Imperial College London and SensorMetrix of San Diego is developing metamaterials capable of rerouting visible light, acoustic waves and other electromagnetic waves.

 

This is technology today. What will we be able to do in 50 years? Might we be able to sort of pry open a hole in the Higgs field by bending or rerouting the field around an object? You might call this a warped Higgs field or simply a warp field.

 

If we can warp the Higgs field in a controlled manner, then the temporal implications are another matter but travel at or faster than the speed of light might be possible.

 

OK, so how do you warp the Higgs field?

 

(Of course, we are way out in the realm of speculation, but isn’t this the way that crazy things like black holes and supernovas were first imagined? If our minds can fathom the remotest possibility now, then perhaps when the containment technology and power densities (energies) above the Fermi scale catch up with our imaginations, we can see if it works.)

 

One aspect of the Supersymmetry Standard Model (SSM) is that the strings all vibrate. In fact, every particle and field has a vibration frequency. It is one characteristic attributed to the “spin” of a particle. With sufficient energy, it may be possible to create harmonic vibrations in these particles. One aspect of the cloaking device mentioned above is that it uses destructive interference to null out the electromagnetic fields of one path and replace it with emissions from another path. This allows them to hide an object while substituting other sensor data that simulates the object not being there at all. The essence is that by controlling the vibrations or frequency on the nanometer scale, they can manipulate light. Is it possible to extend this thinking to the Higgs field? If so, we might be able to manipulate the Higgs field on, in and around a surface.

It is hard to imagine that something like Bernoulli's Principle of fluid flow would work on the scale of the Higgs field's interaction with a surface moving at high speed, but it serves as a possible analogy for an area of exploration. Actually, this is not all that unreasonable.

 

I have flown in some big military planes. The C-130 has an overhead escape hatch near the flight deck. When we flew in the South Pacific on a hot day, we would open this hatch and stick our heads out. There is something called laminar air-flow around the aircraft. As the plane moves through the air at 250 MPH, the air going past it is moving at about that speed (assuming no wind); however, in the last 6 to 8 inches as you move closer to the surface of the plane, the wind slows down (relative to the aircraft) due to friction with the surface. This speed drops rapidly in the last 3 or 4 inches, so the wind passing over the fuselage within the last 2 inches is moving relatively slowly – about 30 to 60 MPH. You can stick your head up enough to get your eyes above the edge of the hatch and it won't even blow your sunglasses off. I've done it.
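The numbers in that story are a recollection, not a measurement, but the shape of the effect is easy to sketch. Near a surface, a laminar boundary layer's speed grows roughly in proportion to height above the skin; the thickness and freestream speed below are assumptions made only to illustrate the falloff.

```python
# Crude near-wall approximation of a laminar boundary layer: the relative wind
# grows roughly linearly with height above the skin until it reaches the
# freestream speed. Thickness and speed are assumptions, not C-130 data.
U_freestream = 250.0      # aircraft speed through the air, mph
delta = 8.0               # assumed boundary-layer thickness, inches

for y in [0.5, 1, 2, 4, 6, 8]:                     # height above the fuselage skin, inches
    u = U_freestream * min(y / delta, 1.0)
    print(f"{y:4.1f} in above the skin: ~{u:5.1f} mph relative wind")
```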

 

What if the Higgs field could be warped by sub-nanometer-scale wave manipulations, or made to behave around a space ship the way laminar air-flow behaves around an aircraft? What if we helped it a little by pushing that field out away from the ship's surface just a bit? Here's how.

 

As with recent studies using standing waves to isolate and manipulate objects, it may be possible to find a harmonic frequency that creates compressions and rarefactions in these particle vibrations or fields. If the surface of a vehicle were the emitter and it was properly synchronized, the rarefactions of the standing wave would create a layer of empty space around the vehicle totally devoid of Higgs bosons and therefore with no Higgs field – i.e., a warp field.
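To make the standing-wave picture concrete, here is the textbook superposition of two counter-propagating waves: the amplitude envelope has fixed nulls (nodes) every half wavelength. The wavelength is arbitrary, and nothing here involves the Higgs field itself – only the wave geometry the speculation borrows.

```python
import numpy as np

# Standing wave = sum of two identical waves traveling in opposite directions.
# Its envelope, |2*sin(k*x)|, has permanent nulls (nodes) every half wavelength.
# The wavelength is an arbitrary unit; only the geometry matters here.
wavelength = 1.0
k = 2 * np.pi / wavelength
x = np.linspace(0.0, 2.0 * wavelength, 9)      # positions across two wavelengths

envelope = np.abs(2 * np.sin(k * x))
for xi, e in zip(x, envelope):
    tag = "  <-- node (permanent cancellation)" if e < 1e-9 else ""
    print(f"x = {xi:4.2f} wavelengths: envelope amplitude {e:4.2f}{tag}")
```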

 

To understand the impact of reducing or eliminating the Higgs field, let’s look at an example.

 

Since light is made up of photons, and photons in motion carry momentum (even though they have no rest mass), and since photons travel at the speed of light, turning on a flashlight in the absence of a surrounding Higgs field would instantly move the flashlight to the speed of light. The reason is that the photons coming out of the beam carry momentum and are moving at the speed of light – momentum all in one direction. The equal and opposite reaction is for the flashlight to move in the direction opposite to the way the beam is pointing. Normally the very tiny momentum of the photons would have very little effect on the relatively heavy flashlight, but if the flashlight had no mass at all, it would be like putting a rocket engine on a feather. The photons would have the effect of a powerful rocket engine, making the massless flashlight accelerate to speeds nearly equal to the photons moving in the opposite direction.
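For scale, the recoil on the flashlight follows from the standard photon-momentum relation p = E/c, so a beam of power P pushes back with force F = P/c. The flashlight wattage and mass below are assumed values for illustration only.

```python
# Photon recoil: a beam of power P carries momentum away at a rate F = P / c.
# Flashlight power and mass are assumed values, chosen only for illustration.
c = 299_792_458.0          # speed of light, m/s
P = 5.0                    # assumed beam power, watts
m = 0.2                    # assumed flashlight mass, kg

F = P / c
print(f"Recoil force from a {P:.0f} W beam: {F:.2e} N")          # ~1.7e-8 N
print(f"Acceleration of a {m} kg flashlight: {F / m:.2e} m/s^2") # utterly negligible
# As the mass is taken toward zero, the same tiny force produces an arbitrarily
# large acceleration -- which is the whole point of the thought experiment.
```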

 

Since our imaginary vehicle with the vibrating surface also has no mass at all in the absence of a surrounding Higgs field, it could be any size and the same flashlight could also move it to the speed of light.

But what about the people?

 

Now you ask how you could possibly withstand the acceleration from zero to the speed of light in a second or less. That is easy if you also have no mass. Momentum, inertia and even the pull of gravity depend on an object having mass. If you have no mass, you cannot have inertia or momentum.

 

Imagine for a moment throwing a heavy ball. When you let go of the ball it continues in the direction it is thrown. Now imagine throwing a feather. Actually it is quite hard to throw a feather because the moment you let go of it, it will stop moving forward and drift slowly downward. It has so little mass that any inertia or momentum it has would be quickly overcome by air resistance – regardless of speed.

 

If you were in a giant space craft but had a device that could create the absence of a surrounding Higgs field, you would have no mass – no momentum, no inertia and no reaction to gravity. A 90-degree turn at 1,000 mph (or any speed) would not be a problem, because you cannot experience the "g" forces that an object with mass would experience. Hence, it is possible to make these radical turns and fantastic accelerations without killing everyone.
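For contrast, here is what such a turn costs a craft that does have mass, using the ordinary centripetal relation a = v²/r; the turn radius is an assumed figure.

```python
# Centripetal acceleration in a hard turn: a = v^2 / r.
# The turn radius is an assumed figure; the point is only the scale of the g-load.
v_mph = 1000.0
v = v_mph * 0.44704        # mph to m/s
r = 500.0                  # assumed turn radius, meters (a very tight turn)
g = 9.81                   # standard gravity, m/s^2

a = v ** 2 / r
print(f"{v_mph:.0f} mph through a {r:.0f} m turn: about {a / g:.0f} g")  # ~41 g
# An ordinary crewed craft could not survive that; a truly massless craft,
# per the speculation above, would feel nothing, since the force F = m*a vanishes.
```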

 

If, as we have speculated, it is the Higgs field particles (bosons), like air particles, that create what we see as the barrier to going faster than the speed of light, then when we shine that flashlight, the photons that come out will travel faster than light until they enter the Higgs field, where they will slow back down to the speed of light. Since we are using the light (photon) thrust in the absence of a surrounding Higgs field, the flashlight might also accelerate the imaginary massless vehicle with the vibrating surface to speeds faster than the speed of light.

 

Alternatively, imagine a warp field creating a massless vehicle that is powered by the graviton-beam engine described earlier in this report. If you can control this warp field, you can create any degree of mass you like. So you tune it to have the mass of a feather and then tune the graviton beam to have the attractive or repulsive force of a planet-size object, or perhaps the force of a black hole. Now you have as much power as can be obtained and controlled, applied to moving an object at speeds greater than the speed of light.

The September 2002 Jupiter event allowed Ed Fomalont of the National Radio Astronomy Observatory in Charlottesville, Virginia to argue that gravity's propagation speed is no greater than lightspeed. This is because gravity, so one theory says, interacts with the Higgs field as a direct result of the Equivalence Principle in the context of Lorentz symmetry, so the nature of the gravity field can be attributed to the Higgs-Goldstone field. This has been postulated from several mathematical and experimental directions and is generally treated as established.

The idea that the Higgs-Goldstone boson may account for gravity and mass is what makes the use of some kind of warp field a possible solution for faster-than-light travel. Note that this approach does not rely on the deformation of space-time, wormholes, multi-dimensional space or even violations of the equations of general relativity. Remember that Einstein's math was based on an inertial frame, and this proposition removes that frame of reference.

General relativity (GR) explains these features by suggesting that gravitational force (unlike electromagnetic force) is a geometric effect of curved space-time, in which it is the space-time distortion itself that propagates at light speed. Problems with the causality principle also exist for gravitational radiation in this connection, such as explaining how the external fields between binary black holes manage to continually update without benefit of communication with the masses hidden behind event horizons. These causality problems would be solved without any change to the mathematical formalism of GR, but only to its interpretation, if gravity is once again taken to be a propagating force of nature in flat space-time with the propagation speed indicated by observational evidence and experiments. Such a change of perspective requires no change in the assumed character of gravitational radiation or its lightspeed propagation.

Although faster-than-light force propagation speeds do violate Einstein's special relativity (SR), they are in accord with Lorentzian relativity, which has never been experimentally distinguished from SR – at least, not in favor of SR. Indeed, far from upsetting much of current physics, the main changes induced by this perspective are beneficial to areas where physics has been struggling, such as explaining experimental evidence for non-locality in quantum physics, the dark matter issue in cosmology, and the possible unification of forces. Recognition of a light-speed Higgs field propagation of gravity, as indicated by recent experimental evidence, may be the key to taking conventional physics to the next plateau.

Although certainly in the realm of wild speculation, it is still not beyond imagination, nor in conflict with proven science, that the graviton beam engine described in another article, in combination with a massless vehicle wrapped in a warped Higgs field, could achieve speeds well in excess of light.

 

As crazy as this sounds, it is consistent with our present knowledge of physics. No, it is not proven, but it is not disproven, and even in its speculative form it can be seen as compliant with existing math and theories.

 

The missing elements are a sufficient energy source to manipulate Higgs bosons and a control mechanism to create the harmonic vibrating surfaces. It is easy to imagine that in 50 or 100 years we will have the means to do this.

 

It is also easy to imagine that a civilization on a distant planet that began its life a few million years before we did could easily have resolved these problems and created devices that can be used in interplanetary travel.

Intergalactic Space Travel

Sometimes it is fun to reverse-engineer something based on an observation or description.  This can be quite effective at times because it not only offers a degree of validation or contradiction of the observation, it also can force us to brainstorm and think outside the box.

As a reasonably intelligent person, I am well aware of the perspective of the real scientific community with regard to UFOs.  I completely discount 99.95% of the wing-nuts and ring-dings that espouse the latest abduction, crop circle or cattle mutilation theories.  On the other hand, I also believe Drake's formulas about life on other worlds, and I can imagine that what we find impossible, unknown or un-doable may not be for a civilization that got started 2 million years before us – or maybe just 2 thousand years before us.  Such speculation is not the foolish babbling of a space cadet but rather reasoned thinking outside the box – keeping an open mind to all possibilities.

In that vein, as well as with a touch of tongue in cheek, I looked for a topic to try my theory of reverse-engineering on that would test its limits.  With all the hype about the 50th anniversary of Roswell and the whole UFO fad in the news, I decided to try this reverse-engineering approach on UFOs and the little green (gray) men that are supposed to inhabit them.

As with most of my research, I used Plato to help me out.  If you don’t know what Plato is, then go read my article on it, titled, Plato – My Information Research Tool.

Here goes:

 

What is the source of their Spacecraft Power? 

Assumptions: 

Again, with the help of Plato, I researched witnesses from all over the world. It is important to get them from different cultures to validate the reports.  When the same data comes across cross-cultural boundaries, the confidence level goes up. Unfortunately, the pool of contactees includes a lot of space cadets and dingalings that compound the validation problem.  I had to run some serious research to get at a reliable database of witnesses.  I found that the consistency and reliability of the reports seemed to increase as the witnesses' credit ratings, home prices and/or tax returns went up. When cross-indexed with a scale of validity based on professions and activities after their reports, my regression analysis came up with a projected 93% reliability factor for a selected group of 94 witnesses.

The descriptions that are common are these:

The craft makes little or no noise.  It emits a light or lights that sometimes change colors.  There is no large blast of air or rocket fuel ejected.  Up close, witnesses have reported being burned as if sunburned.  The craft is able to move very slowly or very fast and can turn very fast.  The craft is apparently unaffected by air or the lack of it.

We can also deduce that: the craft crossed space from another solar system; they may not have come from the closest star; their craft probably is not equipped for multi-generational flight; there may be more than one species visiting us.

What conclusions can be drawn from these observations?

If you exclude a force in nature that we have no knowledge of, then the only logical conclusion you can come to is that the craft use gravity for propulsion.  Feinberg, Feynman, Heinz Pagels, Fritzsch, Weinberg, Salam and lately Stephen Hawking have all studied, described or supported the existence of the spin-two gauge boson called the graviton.  Even though the Standard Model, supersymmetry and other theories are arguing over issues of spin, symmetry, color and confinement, most agree that the graviton exists.

That gravity is accepted as a force made up of the exchange of fundamental particles is a matter of record.  The Weinberg-Salam theory of particle exchange at the boson level has passed every unambiguous test to which it has been submitted.  In 1979, they received the Nobel Prize in Physics for their model.

 Repulsive Gravity:

We know that mass and energy are really the same, that there are four fundamental interactions, and that the interactions take place by particle exchange.  Gravity is one of these four interactions.  IF we can produce a graviton, we can perhaps control and alter it – altering it in the same way we can produce a POSITRON through the interaction of photons with energies greater than 1.022 MeV with matter.  The positron is antimatter: similar to an electron but with a positive charge.  Positrons were observed as early as 1932.
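That 1.022 MeV figure is just twice the electron's rest energy, which is easy to verify from the standard constants:

```python
# Pair-production threshold: a photon needs at least 2 * m_e * c^2
# to create an electron-positron pair (in the field of a nucleus).
m_e = 9.1093837e-31        # electron rest mass, kg
c = 299_792_458.0          # speed of light, m/s
eV = 1.602176634e-19       # joules per electron-volt

threshold_MeV = 2.0 * m_e * c ** 2 / eV / 1e6
print(f"Pair-production threshold: {threshold_MeV:.3f} MeV")   # ~1.022 MeV
```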

It seems logical that we can do the same with gravitons.  Gravity is, after all, the only force for which no repulsive counterpart has been observed, and yet it doesn't appear to be so very different from the other three fundamental interactions.

Einstein and Hawking have pointed out that gravity can have a repulsive force as well as an attractive force.  In his work with black holes, Hawking showed that quantum fluctuations in an empty de Sitter space could create a virtual universe with negative gravitational energy.  By means of the quantum tunneling effect, it can cross over into the real universe. Obviously, this is all mathematical theory, but parts of it are supported by observed evidence.  The tunneling effect is explained by quantum mechanics and the Schrödinger wave equations and is applied in current technology involving thin layers of semiconductors.  The de Sitter-Einstein theory is the basis of the big bang theory and current views of space-time.

The bottom line is that if we have enough energy to manipulate gravitons, it appears that we can create both attractive and repulsive gravitons.  Ah, but how much power is needed?

 Recipe to Make Gravity

We actually already know, in principle, how to make gravitons.  Several scientists have described it.  It would take a particle accelerator capable of about 10 TeV (10 trillion electron volts) and an acceleration chamber about 100 km long filled with superconducting magnets.

The best we can do now is with the CERN and Fermilab synchrotrons.  In 1989, Fermilab's collider reached 1.8 TeV.  The Superconducting Super Collider (SSC) that was under construction in Ellis County, Texas would have given us 40 TeV, but our wonderful "education president," the first Mr. Bush, killed the project in August 1992.  With the SSC, we could have created, manipulated and perhaps altered a graviton.

 We Need A Bigger Oven

The reason we are having such a hard time doing this is that we don't know how else to reach these particle energies than with big SSC-style projects.  Actually, that's not true.  What is true is that we don't know how to reach them SAFELY except with big SSC-style projects.  A nuclear explosion would do it easily, but we might have a hard time hiring lab technicians to observe the reaction.

What do you think we will have in 50 or 100 or 500 years? Isn't it reasonable to assume that we will have better, cheaper, faster, more powerful and smaller ways of creating high-energy sources? Isn't it reasonable to assume that a civilization that may be 25,000 years ahead of us has already done that?  If they have, then it would be an easy task to create gravitons out of other energy or matter and to concentrate, direct and control the force to move a craft.

 Silent Operation

Now let's go back to the observations.  The movement is silent.  That fits – gravity is not a propulsive force based on the thrust of a propellant.  I imagine the gravity engine to be more like a gimbaled searchlight, the beam being an attractive or repulsive graviton beam with a shield or lens to direct it in the direction they want to move.

 Sunburns from the UFOs

How about the skin burns on close witnesses – as if by sunburn? OK, let's assume the burn was exactly like sunburn – i.e., caused by ultraviolet light (UVL).  UVL is generated by transitions in atoms in which an electron in a high-energy state returns to a less energetic state by emitting an energy burst in the form of UVL.  Now we have to get technical again.  We also have to step into the realm of speculation, since we obviously have not made a gravity engine yet.  But here are some interesting subjects that show a remarkable degree of coincidence between the high-energy control a graviton particle accelerator would need and the observed sunburn effects.

The BCS theory (Bardeen, Cooper & Schrieffer) states that in superconductivity, the "quantum-mechanical zero-point motion" of the positive ions allows the electrons to lower their energy state.  The release of energy is not absorbed as heat, implying it is not in the infrared range.  The more recent so-called high-temperature ceramic and organic superconducting compounds are also based on electron energy states.  Suppose a by-product of using superconductors in their graviton particle accelerator is the creation of UVL?

Perhaps the gimbaled graviton beam engine is very much like a light beam.  A MASER is the microwave counterpart of a LASER: it emits microwave energy that is coherent, at a single wavelength and phase.  Such coherency may be necessary to direct the graviton beam, much like directing the steering jets on the space shuttle for precision docking maneuvers.

A maser's energy is made by raising electrons to a high-energy state and then letting them jump back to the ground state.  Sound familiar?  The amount of energy per transition is the only difference between the microwave process and the UVL process – microwave photons carry far less energy than UV photons, but the mechanism is the same. Suppose the process is less than perfect, or that it has a fringe effect that produces UVL at the outer edges of the energy field used to create the graviton beam.  Since the Grays would consider it exhaust, they would not necessarily shield it or even worry about it.
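To put numbers on the "same mechanism, different energy" point, photon energy is E = h·c/λ; the two wavelengths below are representative choices, not values taken from any report.

```python
# Photon energy E = h * c / wavelength for a representative microwave and UV photon.
# The two wavelengths are illustrative choices, not measurements from any report.
h = 6.62607015e-34         # Planck constant, J*s
c = 299_792_458.0          # speed of light, m/s
eV = 1.602176634e-19       # joules per electron-volt

for name, wavelength_m in [("microwave (3 cm)", 0.03), ("ultraviolet (300 nm)", 300e-9)]:
    E = h * c / wavelength_m / eV
    print(f"{name:22s}: {E:.2e} eV per photon")
# Microwave photons come out near 4e-5 eV, UV photons near 4 eV --
# the same electron-transition mechanism, separated by a factor of ~100,000 in energy.
```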

 But it has got to GO FAST! 

Finally, we must discuss the speed.  The nearest star is Proxima Centauri at about 1.3 parsecs (about 4.3 light years).  The nearest globular cluster is Omega Centauri at about 20,000 light years, and the nearest large galaxy is Andromeda at about 2.2 million light years.  Even at the speed of light, these distances are out of reach for a commuter crowd of explorers.  But just as the theory of relativity shows us that matter and energy are the same thing, it shows that space and time are one and the same.  If space and time are related, so is speed.   This is another area that can get really technical, and the best recent reference is Hawking's A Brief History of Time.  In it he explains that it may be possible to travel from point A to point B by simply curving the space-time continuum so that A and B are closer.  In any case, we must move fast to do this kind of playing with time and space, and the most powerful force in the universe is gravity.  Let's take a minor corollary:

 Ion Engine

In the mid-60s, a new engine was invented in which an electrically charged ion stream formed the reaction mass for the thrusters.  The most thrust it could produce was about 1/10th HP, with a projected maximum of 1 HP if work continued on improving the design.  It was weak, but its Isp (specific impulse – a rating of efficiency) was superior.  It could operate for years on a few pounds of fuel.  It was speculated that if a Mars mission were to leave Earth orbit and accelerate using an ion engine for half the trip and then decelerate for the other half, it would get there 5 months sooner than if it had not been used.  The gain came from the high-velocity exhaust of the ion engine giving a small but continuous gain in speed.

Suppose such a small engine had 50,000 HP and could operate indefinitely.  Acceleration would be constant and rapid.  It might be possible to get to .8 or .9 of C (80% or 90% of the speed of light) over time with such an engine.  This is what a graviton engine could do.  At these speeds, relativistic effects would kick in.   We now have all the ingredients.
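A rough sketch of the "weak but continuous thrust" arithmetic: pick an assumed thrust and ship mass, see how long a constant Newtonian acceleration takes to reach 0.9 C, and then how long the cruise to the nearest star would be at that speed. The thrust and mass are invented numbers, and relativity (ignored here) stretches the final approach to C considerably.

```python
# How long does a constant thrust take to reach 0.9 c, and what does the cruise
# to Proxima Centauri then look like? Thrust and ship mass are assumed figures;
# relativistic corrections are ignored in the acceleration estimate.
c = 299_792_458.0              # m/s
thrust = 4.0e5                 # assumed thrust, newtons (far beyond any real ion engine)
mass = 1.0e5                   # assumed ship mass, kg

a = thrust / mass                              # ~4 m/s^2, a comfortable 0.4 g
years_to_09c = (0.9 * c / a) / (86400 * 365.25)
cruise_years = 4.3 / 0.9                       # light years / (fraction of c)

print(f"Acceleration: {a:.1f} m/s^2")
print(f"Time to reach 0.9 c (Newtonian): {years_to_09c:.1f} years")
print(f"Cruise to Proxima Centauri (4.3 ly) at 0.9 c: {cruise_years:.1f} years")
```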

Superstring theory and other interesting versions of the space-time continuum and space-time curvature are still in their infancy.  We must explore them in our minds, since we do not have the means to experiment in reality.  We make great gains when we have a mind like Stephen Hawking working on the ideas.  We lose so much when we have politicians like Bush (Sr. or Jr.) stop projects like the SSC.  We can envision the concept of travel and the desire and purpose, but we haven't yet resolved the mechanism.  The fact that what we observe in UFOs is at least consistent with some hard-core leading-edge science is encouraging.

It really surprises me that we haven't begun serious research into this subject.  A lot of theoretical work has already been done, and the observed evidence fits the math.

Alien Life Exists

October 13, 1998

I want to thank you for letting me post your article about gravity shielding that appeared in the March '98 WIRED magazine.  Your comments on my article about lightning sprites and the blue-green flash are also appreciated.  In light of our ongoing exchange of ideas, I thought you might be interested in some articles I wrote for my web forum on "bleeding edge science" that I hosted a while back.  Some of these ideas and articles date back to the mid-90s, so some of the references are a little dated, and some of the software I use now is generally available as a major improvement over what I had then.

What I was involved with then can be characterized by the books and magazines I read: a combination of Skeptical Inquirer, Scientific American, Discovery and Nature.  I enjoyed the challenge of debunking some space cadet who had made yet another perpetual motion machine or yet another 250 mile-per-gallon carburetor – both claiming that the government or big business was trying to suppress their inventions.  Several of my articles were printed on the bulletin board that pre-dated the publication of the Skeptical Inquirer.

I particularly liked all the far-out inventions attributed to one of my heroes – Nikola Tesla.  To hear some of those fringe groups, you’d think he had to be an alien implant working on an intergalactic defense system.  I got more than one space cadet upset with me by citing real science to shoot down his gospel of zero-point energy forces and free energy.

Perhaps the most fun is taking some wing ding who has a crazy idea and bouncing it against what we know in hard science.  More often than not, they use fancy science terms and words that they do not really understand to try to add credibility to their ravings.  I have done this so often, in fact, that I thought I'd take on a challenge and try to play the other side for once.  I'll be the wing nut and spin a yarn about some off-the-wall idea, but I'll do it in such a way that I really try to convince you that it is true.  To do that, I'm going to use everything I know about science.  You be the judge of whether this sounds like a space cadet or not.

===============================

 

Are They Really There?

Life is Easy to Make:

Since 1953, with the Stanley Miller experiment, we have, or should have, discarded the theory that we are unique in the universe.  Production of organic building blocks of life – amino acids and related compounds – has been shown to occur in simple mixtures of hydrogen, ammonia, methane and water when exposed to an electrical discharge (lightning).  The existence of most of these components has been frequently verified by spectral analysis of distant stars, but, of course, until recently we couldn't see those stars' planets.  Based on the most accepted star and planet formation theories, most star systems would have a significant number of planets with these elements and conditions.

 Quantifying the SETI

A radio astronomer, Frank Drake, developed an equation that was the first serious attempt to quantify the number of technical civilizations in our galaxy.  Unfortunately, its factors are very loosely constrained, and various scientists have produced numbers ranging from 1 to 10 billion technical civilizations in just our galaxy.  A formula in this condition is referred to as unstable, or ill-conditioned.  There are mathematical techniques to reduce the instability of such equations, and I attempted to apply them to quantify the probability of the existence of intelligent life.

I approached the process a little differently.  Rather than come up with a single number for the whole galaxy, I decided to relate the probability to distance from Earth.  Later I added directionality.

Using the basic formulas Drake used as a starting point, I added a finite stochastic process using conditional probability. This produces a tree of event outcomes for each computed conditional probability.  (The conditions being quantified were those in his basic formula: rate of star formation; number of planets in each system with conditions favorable to life; fraction of planets on which life develops; fraction of planets that develop intelligent life; fraction of planets with intelligent life that evolve technical civilizations capable of interstellar communications; and the lifetime of such a civilization.)

I then layered one more parameter onto this by scaling the probability of a particular tree path with the square of the distance (the inverse of a 1/d² relation).  This added a conservative estimate for the increasing probability of intelligent life as the distance from Earth increases and more stars and planets are included in the sample size.

 I Love Simulation Models

I used standard values used by Gamow and Hawking in their computations; however, I ignored Riemannian geometry and assumed a purely Euclidean universe.  Initially, I assumed the standard cosmological principles of homogeneity and isotropic distributions.  (I changed that later.)  Of course this produced thousands of probable outcomes, but by using a Monte Carlo simulation of the probability distribution and the initial computation factors of Drake's formula (within reasonable limits), I was able to derive a graph of the probability of technical civilizations as a function of distance.
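This is not the author's model, just a bare-bones sketch of the approach described: draw Drake-type factors at random from assumed ranges, count the stars enclosed within a given distance (assuming a uniform, Euclidean star density), and convert the expected number of civilizations into a probability of at least one. Every range and density below is an assumption chosen only to show the shape of the curve.

```python
import math
import random

# Bare-bones sketch of the distance-based Drake calculation described above --
# NOT the original model. Factor ranges and the star density are assumptions.
random.seed(1)
STAR_DENSITY = 0.004            # assumed stars per cubic light year (rough local figure)

def random_drake_factor():
    """Product of Drake-type fractions, each drawn from an assumed range."""
    f_planets   = random.uniform(0.2, 1.0)     # stars with suitable planets
    f_life      = random.uniform(0.01, 1.0)    # planets where life starts
    f_intel     = random.uniform(0.001, 0.1)   # ...that develop intelligence
    f_tech      = random.uniform(0.01, 0.5)    # ...that become technological
    f_alive_now = random.uniform(1e-7, 1e-4)   # fraction of time such a civilization exists
    return f_planets * f_life * f_intel * f_tech * f_alive_now

def prob_within(distance_ly, trials=20_000):
    """Average probability of at least one other civilization inside a sphere."""
    stars = STAR_DENSITY * (4.0 / 3.0) * math.pi * distance_ly ** 3
    total = 0.0
    for _ in range(trials):
        expected = stars * random_drake_factor()
        total += 1.0 - math.exp(-expected)
    return total / trials

for d in (100, 1_000, 10_000, 100_000):
    print(f"within {d:>7,} ly: P(at least one other civilization) ~ {prob_within(d):.3f}")
```

Even with deliberately conservative ranges, the curve rises steadily toward 1 as the distance grows, which is the qualitative result described in the next section.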

 But I Knew That

As was predictable before I started, the graph is a rising, non-linear curve, converging on 100% if you go out far enough in distance.  Even though the outcome was intuitive, what I gained was a range of distances with a range of corresponding probabilities of technical civilizations.  Obviously, the graph converges to 100% at infinite distances, but what was really surprising is that it is above 99% before leaving the Milky Way Galaxy.  We don't even have to go to Andromeda to have a very good chance of there being intelligent life in space.  Of course, that is not so unusual, since our galaxy may have about 200 billion stars and some unknown multiple of planets.

 Then I made It Directional

I toyed with one other computation.  The homogeneous and isotropic universe used by Einstein and Hawking is a mathematical convenience to allow them to relate the structure of the universe to their theories of space-time. These mathematical fudge-factors are not consistent with observation at smaller scales – out to the limits of what we can observe, about 15 billion light years.  We know that there are inhomogeneities, or lumps, in the stellar density at these relatively close distances.  The closest lump is called the Local Group, with 22 galaxies, and it sits on the edge of a supercluster of 2,500 galaxies.  There is an even larger group, called the Great Attractor, that may contain tens of thousands of galaxies.

By altering my formula, I took into account the equatorial-system direction (right ascension and declination) of the inhomogeneous clustering.  Predictably, this just gave me a probability of intelligent life based on a vector rather than a scalar measure.  It did, however, move the distance for any given probability much closer – in the direction of clusters and superclusters.  So much so that at about 351 million light years, the probability is virtually 100%.  At only about 3 million light years, the probability is over 99%. That is well within the Local Group of galaxies.

When you consider that there are tens of billions of galaxies within detection range of Earth, and some unknown quantity beyond detection, each holding billions of stars – the total number of stars has been estimated at a 1 followed by 21 zeros, more than all the grains of sand in all the oceans, beaches and deserts in the entire world – you can begin to see that the formula to quantify the number of technical civilizations in space results in virtually 100% no matter how conservative you make the input values.  It can do no less than prove that life is out there.

Alien Life

I presented the following to a Mensa conference on the paranormal (at Malvern) as a sort of icebreaker, tongue-in-cheek fun discussion.  It turned into the most popular (unofficial) discussion at the conference and created more than two years of follow-on discussions.

———————————————————————————————————

January 11, 1998

Sometimes it is fun to reverse-engineer something based on an observation or description.  This can be quite effective at times because it not only offers a degree of validation or contradiction of the observation, it also can force us to brainstorm and think outside the box.

As a reasonably intelligent person, I am well aware of the perspective of the real scientific community with regard to UFOs.  I completely discount 99.5% of the wing-nuts and ring-dings that espouse the latest abduction, crop circle or cattle mutilation theories.  On the other hand, I also believe Drake's formulas about life on other worlds, and I can imagine that what we find impossible, unknown or un-doable may not be for a civilization that got started 2 million years before us – or maybe just 2 thousand years before us.  Such speculation is not the foolish babbling of a space cadet but rather reasoned thinking outside the box – keeping an open mind to all possibilities.

In that vein, as well as with a touch of tongue in cheek, I looked for a topic to try my theory of reverse-engineering on that would test its limits and the limits of Plato.  (Plato is the name of my automated research tool.)  With all the hype about the 50th anniversary of Roswell and the whole UFO fad in the news, I decided to try this reverse-engineering approach on UFOs and the little green (gray) men that are supposed to inhabit them.

What I found was quite surprising.

Who are the Aliens and Where do they come from? 

 Assumptions:

1.      I began this by first verifying that the most common description of aliens (GREYS) has a high probability of being accurate.  I collected data from all over the world using keyword searches of newspaper stories going back several years and then ran some cross-checks on those who did the reporting.  I discarded any eyewitnesses who had any previous recorded sightings or were connected to any organization that supported or studied UFOs.  Of the 961 left, I ran a Monte Carlo analysis on the statistical chances that they had contact with or communicated with other UFO people or with each other.  I then did a regression analysis on their descriptions and the circumstances of their sightings.  All this filtering left me with a small sample size of only 41 descriptions, but I was much more confident that I had as credible a group of "witnesses" as I could find.

2.      The surprising result was a 93% correlation (coefficient of correlation) showing that what they described was the same or very similar and that they were reporting the truth as they knew it.  The truth, in this case, can be compared to a baseline or reference description of the classical or typical aliens.  When I did this, I found that I had a group so consistent as to have a collective 91% reliability factor as compared to the baseline or reference description.  That is very high – just ask any lawyer.  The assumption here is that the reference description and the eyewitnesses are telling the truth.  If we consider that these 41 descriptions came from countries all over the world, and in some cases from areas that did not have mass-media news services, it would be more implausible to imagine that they all had conspired or collaborated than that they told the truth.

3.      I also believe in evolution and that its basic concepts are common throughout the universe.

 

Now back to the most common description of aliens (GREYS): whitish gray skin; large eyes; small nose, ears and mouth; small in stature (3-4 ft); large, pear-shaped head; small, thin and fragile body and hands; bi-pedal (two legs).  Less reliable (74%) is that they make noises that don't sound like speech or words and sometimes don't talk at all.

OK, this may or may not be true.  It could somehow have been a description that was dreamed up years ago and has become so universally known that all 41 of my witnesses have heard and repeated the exact same description.  Unlikely, but possible.  But let us proceed anyway – as if this were a valid description of real aliens from reliable witnesses.

 From only this data, I deduced that:

Their planet is smaller than Earth, has a heavier atmosphere, and is farther from its sun or circles a dimmer sun than ours. They evolved from life on a planet at least 5 million years older than ours.  And I think I know why they are here.

 

OK Sherlock, WHY?

 Eyes: The eyes are big because the light where they evolved is weak, i.e., dim or far away from their sun.  They need big eyes to see in the dim light.  That’s a normal evolutionary response.  This might also account for the pale skin color.

Nose: The nose is small because the atmosphere is heavy.  A small intake of their air is enough to get the oxygen they need to breathe.  This also accounts for the small chest.  How big would your lungs be if we had 60% oxygen in our air instead of 21%?  This can also account for how a large brain can survive in a small body.  The head is 10% of the body weight and volume, but it uses 40% of the blood oxygen.  A very small creature cannot have a very large head unless the blood carries a very high content of oxygen.

I say oxygen is what they breathe because witnesses seem to agree that they have been seen without helmets or breathing apparatus.  This would also imply that they are carbon-based creatures like us.

 Mouth: The mouth can be small for three reasons.  The body is small and they may not have to eat much.  The air is thick and they can make noises with little effort so they don’t need a big voice passage.  If they have evolved direct mental telepathy, the mouth is not needed to communicate.

 Head: The large head obviously relates to a large brain.  The large brain in that small a body equates to a long evolution.  It might take a long evolution and large brain to figure out how to travel long distances in space.  The triangle or pear‑shaped head is simply a match of large brain to a small mouth and body.

Morals: If they have evolved to the point of a large brain and extended space travel, they probably have a very different social order than we do.  We tend to compare them to how we would act if we were them, and that just doesn't work.  They are not going to view us the way we would if we were in their place.  The stupid idea that all they want to do is conquer us and dominate the Earth is our projection of our own ideas and fears onto them.  If you had the technology to travel the universe, what possible gain would there be to dominating a primitive society?  Why? What for?

Use of our planet and its resources?  Not when there are hundreds of billions of planets out there.  If you had the technology to travel the universe, wouldn't you also have the technology to terraform any planet you found?  We can already sketch out how this might be done, so it is easy to imagine that futuristic beings would know how.

Slave Labor?  Not likely.  We already have robots that can do fantastic things.  In 1000 years we will have robots to do almost anything we want.  Why use reluctant and technically inferior slaves when you can whip up a robot to do the work?

There is virtually no technical or social problem that we can imagine that a society that is 1000 or more years advanced from us could not easily resolve.

 These aliens are also very non‑aggressive.  Psychologists have long since discovered that learning plays a role in the development of aggressive behavior.   This is observed in all races of mankind as well as in lower animals.

As IQ goes up, all 13 different kinds of aggressive behavior go down.  If they have hurt people in their explorations, it is inadvertent or unintentional – the same way we don't set out to harm the primitive tribes that we study in social and medical research.

Eating: They may have very different physical requirements also.  If our health-food fad were to really take hold, we might get to the point of being able to separate the pleasure of eating from the need to eat.  If the pleasure of eating were satisfied in some other way, such as a pill or some sort of external stimulus, then only the nutritional need would be left as a reason to eat.  Even today we can substitute pills and artificial supplements for real food.  It might even be possible to evolve food and people so that you take in food that metabolizes entirely, and in just the right quantity, so that there is no waste.  The end result would be that we would eat very little and would have no human waste product at all. The digestive system would change, and the elimination organs (bladder, intestines and kidneys) would shrink.  The effect would be to reduce the size of the pelvis and lower body – much as we see in the typical description of a GREY.

 Behavior: They probably also have evolved different requirements for mental existence and thought.  For instance, if you extend Maslow’s Hierarchy of Prepotency above “Self‑Actualization”, what’s next? Altruism? Spontaneous and Total Empathy? Adaptive Radiation? If you have satisfied the motives for power and security and can do anything with technology, what’s next? Perhaps it is to study another planet, the same way we are fascinated by a primitive culture in the Brazilian jungles.  Perhaps they would study us the way we study ants in a colony or bees.  We might be that relatively primitive to them.

We have recently gained insight into how much damage we do when we inject modern society's thinking and technology into primitive cultures.  If we evolve for another 500 years and can go explore space and come across a primitive culture that is still warlike and cannot go out into space, wouldn't we just observe?  If we are trying to do that now, then in 500 years we would not only be committed to that concept, but our technology would be good enough to allow us to observe without being obtrusive.  Imagine what we would think and would be able to do in 25,000 years.

Now imagine what “they” are thinking as they visit us.

Trans-Dimensional Travel

These articles deal with the fringe in that I was addressing the "science" behind so-called UFOs.

I have done some analysis on life in our solar system other than on Earth, and the odds against it are very high – at least, life as we know it.  Even Mars probably did not get past the early stages of life before the O2 was consumed.  Any biologist will tell you that in our planet's evolution there were any number of critical thresholds of presence or absence of a gas or heat or water (or magnetic field or magma flow) that, if crossed, would have returned the planet to a lifeless dust ball.

Frank Drake's formulas are a testament to that.  The only reason his formulas are used to "prove" life exists is the enormous number of tries that nature gets to make in the observable universe and over so much time.

One potential perspective is that what may be visiting us as "UFOs" could be a race, or several races, of beings that are 500 to 25,000 years or more advanced than us.  Given the age of the universe and the fact that our sun is probably a second- or third-generation star, this is not difficult to understand.  Some planet somewhere was able to get life started before Earth, and they are now where we will be in the far distant future.

Stanley Miller showed that the building blocks of life, as we know it, could form out of organic and natural events during the normal evolution of a class M planet.  But Drake's numbers show that the chances of that occurring twice in one solar system are very slim.  If you work backwards from the formulas, taking the existence of Earth as one input, you would need something like 100 million planets to get even a slight chance of another planet with high-tech life on it.

Taking this into consideration, and weighing it against the chances that the "monuments" on Mars are anything but natural formations, or against any other claim of extraterrestrial life within our solar system, you must conclude that there is virtually no chance for other life in our solar system.  Despite this, there are many who point to "evidence" such as the appearance of a face and pyramids in Mars photographs.  It sounds a lot like an updated version of the "canals" that were first seen in the 19th century.  Now we can "measure" these observations with extreme accuracy – or so they would have you believe.

The so‑called perfect measurements and alignment that are supposedly seen on the pyramids and “faces” are very curious since even the best photos we have of these sites have a resolution that could never support such accuracy in measurements.  When you get down to “measuring” the alignment and sizes of the sides, you can pretty much lay the compass or ruler anywhere you want because of the fuzz and loss of detail caused by the relatively poor resolution.  Don’t let someone tell you that they measured down to the decimal value of degrees and to within inches when the photo has a resolution of meters per pixel!

As for the multidimensional universe: I believe Stephen Hawking when he says that there are more than 3 dimensions; however, for some complex mathematical reasons, a fifth dimension would not necessarily have any relationship to the first four, and objects that have a fifth dimension would have extents in the first four (length, width, height and time) that are very small – on the order of atomic scales.  This means that, according to our present understanding of the math, the only way we could experience more than 4 dimensions is to be reduced to angstrom sizes and to withstand very high excitation from an external energy source.   Let's exclude the size issue for a moment, since that is an artifact of the math model chosen in the theory and may not be correct.

We generally accept that time is the 4th dimension after length, width and height, which seem to be related, being in the same units but in different directions.  If time is a vector (which we believe it is) and it is so very different from up, down, etc., then what would you imagine a 5th-dimension unit to be?

Most people think of "moving" into another dimension as being just some variation of the first 4, but this is not the case.  The next dimension is not capable of being understood by us because we have no frame of reference.

Hawking explains this much better in one of his books, but suffice it to say that we do not know how to explore this question because we cannot conceive of the context of more than 4 dimensions.  The only way we can explore it is with math – we can't even graph it because we haven't got a 5-axis coordinate system.  I have seen a 10-dimensional formula graphed, but only 3 dimensions at a time.

Whatever relationship a unit called a "second" has with a unit called a "meter" may or may not be the same relationship the meter has with "???????" (whatever the units of the 5th dimension are called).  What could it possibly be?  You describe it for me, but don't use any reference to the first 4 dimensions.  For instance, I can describe time or length without reference to any of the other known dimensions.  The bottom line is that this is one area where even a computer cannot help, because no one has been able to give a computer an imagination… yet.  However, it is an area so far beyond our thinking that perhaps we should not speculate about them coming from another dimension.

Let’s look at other possibilities.    To do that, take a look at the other article on this blog titled, “Intergalactic Space Travel”.

Achieving the Speed of Light NOW

Scientists have been telling us for some time that it is impossible to achieve the speed of light.  The formula says that mass goes to infinity as you approach C, so the amount of power needed to go faster also rises to infinity.  The theory also says that time is displaced (slows) as we go faster.  We have "proven" this by tiny fractions of variations in the orbits of some of our satellites and in the orbit of Mercury.  For an issue within physics that is seen as such a barrier to further research, shouldn't we see a more dramatic demonstration of this theory?  I think we should, so I made one up.

Let us suppose we have a weight on the end of a string.  The string is 10 feet long and we hook it up to a motor that can spin at 20,000 RPM.  The end of the string will travel 62.8 feet per revolution, or 1,256,637 feet per minute.  That is 3.97 miles per second, or an incredible 14,280 miles per hour.  OK, so that is only 0.0021% of C, but for only ten feet of string and a motor that we can easily build, that is not bad.

There are motors that can easily get to 250,000 RPM and there are some turbines that can spin up to 500,000 RPM.  If we can explore the limits of this experimental design, we might find something interesting.   Now let’s get serious. 

Let’s move this experiment into space.  With no gravity and no air resistance, the apparatus can function very differently.  It could use string or wire or even thin metal tubes.  If we control the speed of the motor so that we do not exceed the limitations imposed by momentum, we should be able to spin something pretty fast.

Imagine a motor that can spin at 50,000 RPM with a string mechanism that can let out the string from the center as the speed slowly increases.  Now let's, over time, let out 1 mile of string while increasing the speed of rotation to 50,000 RPM.  The end will now be traveling at nearly 19 million miles per hour, or about 2.8% of C.

If we boost the speed up to 100,000 RPM and can get the length out to 5 miles, the end of the string will be doing an incredible 188 million miles per hour.  That is more than 28% of the speed of light.
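The rim-speed arithmetic is simple to check: tip speed is just circumference times revolutions per unit time. Here are the three cases from the text, expressed as a fraction of C.

```python
import math

# Tip speed of a spinning radius: circumference * revolutions per hour.
C_MPH = 670_616_629.0          # speed of light, miles per hour

def tip_speed_mph(radius_miles, rpm):
    return 2.0 * math.pi * radius_miles * rpm * 60.0

cases = [
    ("10 ft string at 20,000 RPM",   10.0 / 5280.0,  20_000),
    ("1 mile string at 50,000 RPM",  1.0,            50_000),
    ("5 mile string at 100,000 RPM", 5.0,           100_000),
]
for label, radius, rpm in cases:
    v = tip_speed_mph(radius, rpm)
    print(f"{label:29s}: {v:15,.0f} mph  ({100.0 * v / C_MPH:.4f}% of C)")
```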

What will that look like?  If we have spun this up correctly, the string (wire, tubes, ?) will be pulled taut by the centrifugal force of the spinning.  With no air for resistance and no gravity, the string should be a nearly perfect vector outward from the axis of rotation.  The only force that might distort this perfect line is momentum, but if we have spun this setup up slowly, so that the weight at the end of the string is pulling the string out of the center hub, then it should be straight.

I have not addressed the issue of the strength of the wire needed to withstand the centrifugal force of the spinning weight.  Not that it is trivial, but for the purposes of this thought experiment, I am assuming that the string can handle whatever weight we use.

Let us further suppose that we have placed a camera exactly on the center of the spinning axis, facing outward along the string.  What will it see?  If the theory is correct, then despite the string being pulled straight by the centrifugal force, I believe we will see the string curve backward, and at some point it will disappear from view.  The reason is that as you move out along the string, its speed gets higher and higher, closer and closer to C.  This will cause the relative time at each increasing distance from the center to run slower and appear to lag behind.  When viewed from the center-mounted camera, the string will curve.

If we could use some method to make the string visible for its entire length, its spin would cause it to eventually fade from view when the time at the end of the string is so far behind the present time at the camera that it can no longer be seen.  It is possible that it might appear to spiral around the camera, even making concentric overlapping spiral rings. 

Suppose synchronized clocks were placed at the center and at the end of the string, and then we placed a camera at both ends so we could view the two images side by side at the hub.  Each camera would view a clock that started out synchronized, and the only difference would be that one is now traveling at some percentage of C faster than the other.  I believe they would read different times as the spin rate increased.
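The size of the disagreement between the two clocks can be estimated with the ordinary Lorentz factor, γ = 1/√(1 − v²/c²), applied to the tip speeds worked out above (circular motion complicates the bookkeeping, but circulating-particle experiments show the same γ governs the slowdown).

```python
import math

# Time dilation of the tip clock relative to the hub clock:
# gamma = 1 / sqrt(1 - (v/c)^2); the tip clock runs slow by a factor of gamma.
for frac_c in (0.000021, 0.028, 0.28):          # the tip speeds estimated above
    gamma = 1.0 / math.sqrt(1.0 - frac_c ** 2)
    lag = 3600.0 * (1.0 - 1.0 / gamma)          # seconds lost by the tip clock per hub hour
    print(f"v = {frac_c:8.6f} c: gamma = {gamma:.8f}, tip clock loses ~{lag:.3g} s per hour")
```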

But now here is a thought puzzle.  Suppose there is an electronic clock at the end of the string as described by the above paragraph but now instead of sending its camera image back to the hub, we send its actual reading by wires embedded in the string back to the hub where it is read side-by-side with a clock that has been left at the hub.  What will it read now?  Will the time distortion alter the speed of the electrons so that they do NOT show a time distortion at the hub?  Or will the speed of the electricity be constant and thus show two different times?  I don’t know.

Plato – The Birth of an Automated Research Tool

In the early '80s, I was in Mensa and was trying to find some stimulating discussions of the outer limits of science.  I was an R&D manager for the Navy and was working for NRL on some very interesting but highly classified research.  I was careful to avoid any talk about my work, but I really wanted to explore areas that I could talk about.  This was one of several attempts to do that.  I sent the message below to a young professor at Lawrence Livermore National Laboratory, who was running a Mensa discussion forum on ARPANET, in the hope of getting something started.  He was working with artificial intelligence in math and robotic systems at the time.

Remember, this was written in 1984.  The Apple Mac was one year old.  TCP/IP had just been introduced on ARPANET. Windows 1.0 was introduced in 1985, but I did not begin using it until version 3.1 came out.  The fastest processor was an Intel 286.  Almost all software ran in DOS.  This message was originally sent via UUCP, but I saved it as ASCII text onto tapes and then later translated it to disks with the idea of someday writing a book, but I never did.   Enjoy…

Dennis,

This is my first contact with one of the Mensa discussion forums.  I found a few guys who were willing to talk to me, but it seems I ticked off a lot of others with my lack of due respect for puzzles and my references to the "wing nuts and space cadets" that inhabit and comment on most of the Mensa forums.   🙂   I eventually formed my own forum, web site and discussion groups, and a bunch of us proceeded to talk our way into a lot of business together.

=====================================================================  September 9, 1984 

Hi.  I'm new to this board, but I have an interest in the subjects you discuss.  I'd like to open a dialog with some of you about your ideas – what you are interested in and what you have analyzed or studied that may be interesting.  I'm no Mensa guru, but I do like a mental challenge and the application of science; more importantly, I think there is a difference between knowledge and wisdom.  I seek the latter.

Who am I: I guess what I need to do first is try to tell you who I am and perhaps establish a little credibility, so that you won't think I really am a National Enquirer writer or some wing nut with wild ideas.  Then I'll present some simple but somewhat radical ideas to start with and see how it goes.  If there is any interest in going further, I'd love to get into some really heavy stuff about life, existence and the future.  I am particularly interested in discussing cosmology and the human animal, but that is for later.

I’ve been developing a methodology for predicting events and narrowly defined aspects of certain subjects based on social and technical inputs from a vast information correlation program I use……But that should wait until I find out if anyone is even interested in this stuff. 

I have been working in the Washington DC area for a number of years.   I am a researcher who deals in analysis and logic.  I enjoy a mental challenge similar to what I perceive many Mensa types like, but I don't enjoy meaningless math puzzles or secret-decoder-ring stuff.  I prefer to ask about and pursue the real mysteries of life and nature.

I have a few technical degrees and have traveled and been schooled all over the world.  That was mostly a product of my parents being in the military and my early jobs.  I became interested in computers as soon as they came out.  I helped build an Altair at the University of New Mexico's engineering fair in 1971-72.  That was where the first "microcomputer" was created; the Altair came a few months later.  It introduced me to computers, but I immediately switched over to the software aspects of computers rather than become a hardware hacker.  I got an EE degree first, so I understand the hardware; I just think it's secondary to getting the job done.  Then I got a CS degree and began to see the possibilities.  I did 40 credit hours of studies in computer simulations and loved it.  I was using math so much in my CS degree that I discovered that for one more semester I could also get a BS in Applied Math – which I did.  Then I discovered that with just one more semester I could get a degree in Physics, so I did that too.  By then my parents were out of money and I had to get a job.  Ever since then I have been messing with computers. I was particularly fascinated by the speed of computers.  I won an award one time for being the only student who solved a particular math problem using an algorithm that would fit into 2K of RAM.  I did it simply by adding one to a variable and checking to see if that solved the equation – if it didn't, I added one more.  It worked.  While working on one of the first OCR studies, I was captivated by the fact that the computer could find any text, no matter how much it had to search, in seconds – a search that might take a person years.    That has been a driving force ever since.
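For what it's worth, the 2K-of-RAM trick described above is just brute-force search: start at zero and test each integer until one satisfies the equation. A toy version, with a made-up equation, looks like this:

```python
# Brute-force "add one and test" search, in the spirit of the 2K-of-RAM story above.
# The equation being solved is made up purely for the example.
def solve_by_counting(predicate, limit=1_000_000):
    x = 0
    while x <= limit:
        if predicate(x):
            return x
        x += 1
    return None                     # no solution at or below the limit

# Example: smallest non-negative integer x with x^2 + 3x = 2160.
answer = solve_by_counting(lambda x: x * x + 3 * x == 2160)
print(answer)                       # 45
```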

What is my Resource Tool? I liked software, but I wanted to get to the things I could see a computer could do – not spend my time writing code.  I became good at modifying and interfacing existing software to do what I wanted.  I found that this was much easier than writing my own code.  I got the original WordStar to talk to VisiCalc and dBASE on an old CP/M Kaypro so that I could get documents that updated themselves automatically.  That was fun, but I wanted to apply the effort more to real-world applications.

The programming was slow because I tend to think in pictures, and I wanted the programming to think in pictures also.  I found a program that would reverse-engineer a source-code listing into a flowchart of the program.  It was crude, but it worked.    I figured it would be even better if you could go the other way – input a flowchart and get a compiler to write the code.     I bought a flowchart program and a Fortran compiler and made them talk to each other so that I could use the graphics of the flowchart program to create a chart of my program flow and then feed it into the compiler to get object code.   I have improved on it over the last several years so that I can input natural-language variables and verbs and it interprets them for me.  If it doesn't understand some variable relationship and can't figure it out by seeing it in context, it stops and asks me.  I can now spend most of my time using a program instead of writing it.

 CLICK! Necessity is the Mother of Innovation

The first real application of this program came when I became a player in the stock market and discovered that it was easy to improve my investment decisions if I could get my hands on the right information. The information was available; there was just no way to find it, link it and give it structure and purpose using the speed of the computer. That was the start of my effort to create a better information search and retrieval system.

 The Hardware + Software

In short, I created some special searching software that helps me find anything about anything and then automatically identifies links, relationships and implications for me. I know that sounds like a bunch of pie in the sky, but it really isn't all that hard to do. There are, in fact, several programs on the market now that do the same thing; I just did it first, on a Radio Shack TRS-80 in 1979, then again on an Apple II+ in 1983, again in 1987 on a Mac, and most recently on an MS-DOS machine (from PC to XT to 286 and now a 386).

My method has evolved over the years and now uses some fuzzy logic and thesaurus-lookup techniques along with smart indexing and interfacing to my CD-ROM and hard disk databases. I built it over several years in modular form, as I added new hardware or new processing capabilities. The flowchart compiler helped me move the code from one machine to another, since the source code (the flow chart itself) remained essentially the same; only the compiler code changed. I now have a mini-LAN of four computers, and it will pass tasks to the other computers, in the form of macros, so I can get parallel searches going on several different information resources at the same time. That also lets me proceed with the analysis while some slow peripheral, like the tape deck, is searching.
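
In modern terms, the parallel-search idea amounts to fanning one query out to several resources at once and collecting whatever comes back first. The sketch below uses a thread pool; the resource names and the search function are placeholders, not the actual macro-driven LAN setup described above:

```python
# Sketch of fanning one query out to several information resources in parallel.
# The resources and search function are stand-ins for the macro/LAN setup above.
from concurrent.futures import ThreadPoolExecutor
import time, random

def search_resource(resource, query):
    time.sleep(random.uniform(0.1, 0.5))          # simulate a slow peripheral or line
    return f"{resource}: 3 hits for '{query}'"    # placeholder result

def parallel_search(query, resources):
    with ThreadPoolExecutor(max_workers=len(resources)) as pool:
        futures = [pool.submit(search_resource, r, query) for r in resources]
        return [f.result() for f in futures]

if __name__ == "__main__":
    sources = ["CD-ROM index", "tape deck", "hard disk", "online gateway"]
    for line in parallel_search("cold fusion", sources):
        print(line)
```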

 De Facto Credibility

This search software will also interface with script files for on-line searches on IQUEST, DIALOG and about 270 other services, including several military and government databases and gateways (NTIS, DTIC, FEDLINK, etc.) that I have access to as a function of my job. For the CompuServe Information System (CIS), the command structure that programs like TAPCIS use makes it easy to initiate an on-line search. The slowest part is waiting for the responses from the dial-up service I am using, but at work I can use some really fast lines on ARPANET.
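
A script-driven session of that kind boils down to a send/expect loop. The sketch below replays a hypothetical login-and-search script against a stubbed connection; the prompts and commands are invented and are not the real IQUEST, DIALOG or TAPCIS syntax:

```python
# Sketch of a send/expect script runner for a dial-up search session.
# Prompts and commands are invented; each real service had its own syntax.

class StubConnection:
    """Stands in for a modem/telnet link so the example is self-contained."""
    def __init__(self):
        self.log = []
    def read_until(self, prompt):
        return prompt                     # pretend the service sent the prompt
    def write(self, text):
        self.log.append(text)             # record what we "typed"

def run_script(conn, script):
    for expect, send in script:
        conn.read_until(expect)           # wait for the service's prompt
        conn.write(send + "\r\n")         # reply as a user would have typed

search_script = [
    ("User ID:", "GUEST"),
    ("Password:", "********"),
    ("Command:", "SEARCH 'room temperature superconductor'"),
    ("Command:", "LOGOFF"),
]

conn = StubConnection()
run_script(conn, search_script)
print("".join(conn.log))
```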

I also have access to a few foreign databases that are the equivalent of our UPI, AP and CIS's IQUEST. The European databases (based in Germany) have lots of technical data, and the Japanese databases have collated worldwide news reports from about 30 nations. I use some lines from Cable & Wireless that I am able to bill to my job. The translation services allow me to issue a search in English and read the response in English even though the data searched is in one of several other languages. I can get into a lot of this stuff for free, but there is also a lot that costs money. That's one of the reasons I got permission to start using all these resources at work.

 Plato is Born

Still, the on-line search costs are why I tried to build up my own research capabilities. I use a page-feeder scanner and OCR software to read in books and other texts to add to the databases I can search. There is a used bookstore near me that sells books dirt cheap or will take trades for other stuff (non-books). That makes it possible for me to buy a book, rip it apart, feed it into the page-feed scanner and then throw the book away. Since I never, ever let anyone else use the database and never quote directly out of the texts, it's not a copyright violation.
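
In today's terms, that scan-and-store pipeline might look like the sketch below, using the Pillow and pytesseract packages as stand-ins for the page-feed scanner's own OCR software; the folder and archive names are hypothetical:

```python
# Sketch of the book-to-database OCR pipeline. pytesseract and Pillow stand in
# for the original scanner's OCR software; folder names are hypothetical.
from pathlib import Path
from PIL import Image        # pip install pillow
import pytesseract           # pip install pytesseract (Tesseract must be installed)
import zipfile

def ocr_book(page_dir, archive_path):
    """OCR every scanned page image and store the text, compressed, in one archive."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as archive:
        for page in sorted(Path(page_dir).glob("*.png")):
            text = pytesseract.image_to_string(Image.open(page))
            archive.writestr(page.stem + ".txt", text)

ocr_book("scans/some_book", "library/some_book.zip")
```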

400 CD-ROMs, 90 (14-inch) laser disks, 250 or so tapes and perhaps 5,000 disks of compressed (zipped) text files give me immediate access to my own database of about 500 gigabytes of text, or roughly 500 million pages. Some of this has line drawings, but most of it is pure text because the OCR software only translates the text, not the images. That is a loss, but if I think an image is important, I scan it and save it on a disk. Add to this on-line access to about 3,500 databases, including some I can get to at work, containing perhaps 50,000 times as much as I have, and you get some idea of how powerful my search capability can be. I call my search program "Plato".

 Concept Searches: With Plato, I am able to input a search "concept" instead of a search "syntax". It will automatically cross-reference and expand the search into related subjects based on parameters I set. It took a long time to learn how to phrase my search syntax, but I usually get back just the data I want. Plato saves the search paths, resources and references in bibliography format in case I need to refer back to the source.
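
Concept expansion of that general kind can be approximated with a small thesaurus table: each query term is widened to its set of related terms before the search is issued. This is a generic sketch, not Plato's actual logic, and the thesaurus entries are examples only:

```python
# Sketch of thesaurus-based concept expansion: each query term is widened to
# its related terms before the search runs. Thesaurus entries are examples only.

THESAURUS = {
    "propulsion": {"propulsion", "thrust", "drive", "engine"},
    "antigravity": {"antigravity", "gravity shielding", "gravity control"},
}

def expand_query(terms):
    """Widen each query term to the set of its related thesaurus terms."""
    expanded = set()
    for term in terms:
        expanded |= THESAURUS.get(term.lower(), {term.lower()})
    return expanded

def concept_search(terms, documents):
    wanted = expand_query(terms)
    return [doc_id for doc_id, text in documents.items()
            if any(w in text.lower() for w in wanted)]

docs = {"a1": "Notes on gravity shielding experiments",
        "b2": "Diesel engine maintenance manual"}
print(concept_search(["antigravity"], docs))   # ['a1']
```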

When you think about it, these are all pretty simple, commonly used techniques found in lots of commercially available software. Searching compressed (zipped) text data is done very well by Lotus Magellan. Lots of search software is available, but I settled on a mix of GOPHER and FOLIO VIEWS with some added fuzzy logic and thesaurus-lookup techniques, which I enhanced after seeing spell checkers that looked up words phonetically and with missing letters. The interfacing was simply a matter of finding hooks in other programs or putting front-ends on them to get them to talk to each other. If all else fails, I just use a script file and/or keyboard macro in a BAT or BIN file to simulate manually typing in and reading out the text. That always works.
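
The phonetic-lookup trick borrowed from those spell checkers is usually some variant of Soundex. A simplified version (it skips the full h/w adjacency rules, so it is an approximation rather than the textbook algorithm) looks like this:

```python
# Simplified Soundex-style phonetic code, in the spirit of the spell-checker
# lookups mentioned above (the full h/w adjacency rules are omitted).

CODES = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
         **dict.fromkeys("dt", "3"), "l": "4", **dict.fromkeys("mn", "5"), "r": "6"}

def soundex(word):
    word = word.lower()
    first = word[0].upper()
    digits = [CODES.get(ch, "") for ch in word]
    collapsed = []
    for d in digits:
        if d and (not collapsed or d != collapsed[-1]):
            collapsed.append(d)           # collapse adjacent duplicate codes
        elif not d:
            collapsed.append("")          # vowels separate repeats, so keep a marker
    code = [d for d in collapsed if d]
    if code and CODES.get(word[0], "") == code[0]:
        code = code[1:]                   # the first letter is kept literally, not coded
    return (first + "".join(code) + "000")[:4]

print(soundex("Smith"), soundex("Smyth"))     # S530 S530 -- phonetic match
print(soundex("Robert"), soundex("Rupert"))   # R163 R163
```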

 Linking Information Resources: There are lots of programs that can search one database or a selected set of data sources. All I did was add a few extra features (script and macro files) to move from one reference to another and to quantify the validity of the data, plus some interfacing software that I wrote to make other programs, which already do parts of this, work together. Using the research techniques and capabilities that Plato allows, I have been able to identify some very interesting linkages and cross-references to concepts that may be of interest to people in this forum. I have also been able to fairly easily dismiss some of the quackery and screwballs that sometimes frequent these idea exchanges.

 And Then What?

I am a serious and scientific researcher, and I am not interested in the nuts and liars who grab scientific or technical words at random and make up their own versions of reality. On the other hand, I consider the majority of science to be somewhat boring. I may not KNOW everything, but I don't need to if I can find out what I need to know in only a few minutes on the computer. Besides, even if the answer to any question is right there on the screen, I still have to read it, and after a while that mounts up to a lot of reading.

It's like having a dictionary. Any time you wanted to know what a word meant, you'd look it up, but most people wouldn't sit around all day looking up words just for fun. Now imagine the same thing with a very good set of encyclopedias. There would be a lot more information, but after a while, just knowing that you can find it would be enough. Now imagine a set of encyclopedias that contains 87 billion, 500 million pages of text! That's how big my dictionary is. OK, so it's not really that big, but we are talking about the size of hundreds of libraries.

The one advantage I think I have over many people is that I believe the answers to most of our questions are out there somewhere. Many people don't even think to ask if they believe the answer is not available. Let me give you an example. I worked as a part-time consultant to government contractors for a while, and I often dealt with clients who were preparing a proposal for a contract. When I tell them that I can get detailed information about what their competitors are doing, most think I can't, or that it would have to be done by illegal means. I can, and it's legal. In many cases I can get not only what the competitors are going to bid but also their cost structures and their past performance. I can even get the salaries of the people doing the bidding. After a while, my clients start asking me to get information that it would never have occurred to them to ask for before I came on the scene.

 Monotony: Getting back to that incredibly large dictionary, it might be fun to look up stuff for a while, but pretty soon you would stop looking up random subjects and try to find some real challenges. I got to that point about four years ago, shortly after I finished the prototype of my first PC-based search software. I have expanded its capabilities as new databases became available. The addition of the scanner to read in hardcopy text was a big improvement; I was able to select books in topic areas I wanted or to fill in gaps in coverage. The scanners have been running, on average, about 2-4 hours a day for the last several years.

 The Hawking Incident

As I added new data, it was fun for a few days to search for some incredibly minuscule detail, or to try out a fuzzy search and chase down some concept. I particularly liked writing to Stephen Hawking and telling him I thought I had determined the size of the universe. He was very polite when he said, "I know!"

That incident was one of many where I began following a trail of information that made me believe I had "uncovered" some new idea or concept that "I" had not heard before, only to find out upon deeper research that it had already been discovered. With all this information, it is a very humbling thought to realize that someone out there already knows at least some part of all of it. I guess there is something to be said for being able to consolidate and cross-reference all of this information and focus it down for a single person. It has the net effect of allowing me to ask questions that lead me into areas I would never have known to explore.

It is very useful to integrate across scientific study areas. For instance, medical people seem to know very little about electronics or physics, and vice versa. The result is that scientists in each field limit their view of the world by seeing it only from their own field of study. Only in the last few years has there begun to be a cross-mixing: things like a tiny pill made of SMDs (surface-mount devices) that a patient swallows, with a sensor array and a transmitter that sends data to a receiver outside the body. The term "non-invasive" gets redefined. It seemed like ages before they began to introduce virtual reality into medical systems and robotics, and yet it seemed to me a perfectly natural mix. I felt that as soon as a movie like TRON was made, it would be only a matter of time before robotics, animation and computer graphics were combined into a 3-D viewer, but it seems that it is only now catching on.

But What Has All This Got to Do with You?

It is at this point that I must choose a topic to discuss with the people of this forum. I enjoy almost any intellectual discussion, from religion to cosmology to human potential, but I prefer a topic that is perhaps a little further out than most of these and that mixes a lot of hard-core science and math with some logic and speculation.

I am very curious about the fringes of science: the areas where conventional science is afraid or unwilling to conduct real research but that have an unusual following of "believers".

_____________________________________________________________

So, Dennis, what do you think?

___________________________________________________

2007 Update:

In the late 1990s, I updated Plato with a modern Windows GUI and object-oriented OCX files and modules. I expanded into a dBase DBMS engine and SQL interfaces. I was able to multiplex multiple modems using some ISP software so I could use multiple input lines; later, I extended this to multiple computers on a TCP/IP network using broadband. It still relied on macros and keyboard simulators to interface with other commercial and proprietary software, but its parallel operations added up to massive processing power. I have continued to make use of a lot of web sites and online services that I can access as a result of my government work, which gives me a huge advantage over simple web searches. I have also improved the bi-directional translation capability so I can tap into databases created in other countries.
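
Passing search tasks to other machines over TCP/IP can be sketched with the Python standard library's XML-RPC modules. This is only an illustration of the general idea, not the actual macro-and-keyboard-simulator mechanism described above; the ports are arbitrary and both "workers" run on localhost so the example is self-contained:

```python
# Sketch of distributing one query to several worker machines over TCP/IP,
# using standard-library XML-RPC. Real use would run one worker per LAN machine;
# here both workers run locally and the port numbers are arbitrary examples.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def search(query):
    return f"worker result: 2 hits for '{query}'"     # placeholder search

def start_worker(port):
    server = SimpleXMLRPCServer(("127.0.0.1", port), logRequests=False)
    server.register_function(search, "search")
    threading.Thread(target=server.serve_forever, daemon=True).start()

ports = [8101, 8102]
for p in ports:
    start_worker(p)

# Dispatcher: send the same query to every worker and collect the answers.
results = [ServerProxy(f"http://127.0.0.1:{p}").search("zero point energy") for p in ports]
print(results)
```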

I have also since expanded its ability to search for themes, concepts and related ideas while improving its ability to quantify the relevance of those findings. It still takes hours to resolve most of my searches, but I let it work overnight and sometimes over the weekend. The end result is a very useful tool that I find helpful but, as noted above, it is not perfect and still falls far short of the human mind.
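
Quantifying relevance in that way can be as simple as a term-frequency score weighted by how rare each term is across the whole collection. The sketch below is a generic TF-IDF-style ranking, not the actual Plato scoring method, which is not described in detail here; the sample documents are invented:

```python
# Minimal TF-IDF-style relevance score, as one generic way to "quantify the
# relevance" of findings. Not the actual Plato method; sample docs are invented.
import math
from collections import Counter

def rank(query_terms, documents):
    """Return (doc_id, score) pairs sorted by descending relevance."""
    tokenized = {d: text.lower().split() for d, text in documents.items()}
    n_docs = len(documents)
    scores = {}
    for doc_id, words in tokenized.items():
        counts = Counter(words)
        score = 0.0
        for term in (t.lower() for t in query_terms):
            tf = counts[term] / max(len(words), 1)                     # term frequency
            df = sum(1 for w in tokenized.values() if term in w)       # document frequency
            idf = math.log((n_docs + 1) / (df + 1)) + 1                # smoothed rarity weight
            score += tf * idf
        scores[doc_id] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

docs = {
    "report1": "gravity wave detection experiment results",
    "report2": "stock market wave analysis",
    "report3": "gravity anomaly measurements and gravity maps",
}
print(rank(["gravity", "wave"], docs))   # report1 ranks highest
```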