Big Brother is Watching

And He knows Everything You have Ever Done!

Sometimes our paranoid government wants to do things that technology does not yet allow, or that they do not know about yet. As soon as they find out, or the technology is developed, they do it. Case in point: the paranoia that followed 11 Sept 2001 (9/11), in which Cheney and Bush wanted to be able to track and monitor every person in the US. There were immediate efforts to do this with the so-called Patriot Act, which bypassed a lot of constitutional protections and existing laws and rights, like FISA. They also instructed the NSA to monitor all radio and phone traffic, which was also illegal and against the NSA's charter. Lesser known was the hacking into computer databases and the monitoring of emails by NSA computers. They have computers that can download and read every email on every circuit from every Internet user, as well as every form of voice communication.

Such claims of being able to track everyone, everywhere have been made before, and it seems that lots of people simply don't believe that level of monitoring is possible. Well, I'm here to tell you that it not only is possible, it is all automated, and you can read all about the tool that started it all online. Look up "starlight" in combination with "PNNL" on Google and you will find references to a software program that was the first generation of the kind of tool I am talking about.

This massive amount of communications data is screened by a program called STARLIGHT, which was created by the CIA, the Army, and a team of contractors led by Battelle's Pacific Northwest National Lab (PNNL). It does two things that very few other programs can do: it can process free-form text, and it can display complex queries in visual 3-D outputs.

The free-form text processing means that it can read text in its natural form, as it is spoken, written in letters and emails, and printed or published in documents. For a database program to do this as easily and as fast as it handles the formally defined records and fields of a relational database is a remarkable design achievement. Understand that this is not just a word search, although that is part of it. It is not just a text-scanning tool; it can treat the text of a book as if it were an interlinked, indexed and cataloged database in which it can recall every aspect of the book (data). It can associate and find any word or phrase in relation to any parameter you can think of related to the book: page numbers, nearby words, word use per page, chapter or book, and so on (a toy sketch of that kind of indexing appears at the end of this passage). By using the most sophisticated voice-to-text processing, it can perform this kind of expansive searching on everything written or spoken, emailed, texted or said on cell phones or landline phones in the US!

The visual presentation of that data is the key to being able to use it without information overload and to have the software prioritize the data for you. It does this by translating the database query parameters into colors and dimensional elements of a 3-D display. To view this data, you have to put on a special set of glasses similar to the ones that put a tiny TV screen in front of each eye. Such eye-mounted viewing is already available for watching video and TV, giving the impression that you are looking at a 60-inch TV screen from 5 feet away. In the case of STARLIGHT, it gives a completely 3-D effect and more. It can sense which way you are looking, so it shows you a full 3-D environment that can be expanded to any size the viewer wants. And then they add interactive elements.
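Here is a minimal sketch, in Python, of what that kind of free-form indexing looks like in principle. This is not STARLIGHT code; the sample pages, function names and window size are all invented for the illustration.

    from collections import defaultdict

    def build_index(pages):
        """Map every word to the (page, position) pairs where it occurs."""
        index = defaultdict(list)
        for page_no, text in enumerate(pages, start=1):
            for position, word in enumerate(text.lower().split()):
                index[word.strip('.,!?')].append((page_no, position))
        return index

    def word_use_per_page(index, word):
        """Count how often a word appears on each page."""
        counts = defaultdict(int)
        for page_no, _ in index.get(word, []):
            counts[page_no] += 1
        return dict(counts)

    def nearby_words(pages, index, word, window=3):
        """Return the words found within `window` positions of each occurrence."""
        hits = []
        for page_no, position in index.get(word, []):
            words = pages[page_no - 1].lower().split()
            lo = max(0, position - window)
            hits.append((page_no, words[lo:position] + words[position + 1:position + window + 1]))
        return hits

    pages = ["The quick brown fox jumps over the lazy dog.",
             "A fox was seen near the airport at dawn."]
    idx = build_index(pages)
    print(word_use_per_page(idx, "fox"))   # {1: 1, 2: 1}
    print(nearby_words(pages, idx, "fox"))

A plain word search is just the simplest case of this; the point is that once the text is indexed, every question about position, proximity and frequency becomes a cheap lookup.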
Interaction comes by way of a special glove that can be seen in the projected image in front of your eyes. As you move this glove in the 3-D space you are in, it moves in the 3-D computer images that you see in your binocular eye-mounted screens, and it can interact with the projected data elements. Let's see how this might work with a simple example.

The first civilian application of STARLIGHT was for the FAA, to analyze private aircraft crashes over a 10-year period. Every scrap of information was scanned in from accident reports, FAA investigations and police records; almost all of it was in free-form text. This included full specs on the aircraft, passengers, pilot, type of flight plan (IFR, VFR) and so on. It also included geospatial data listing departure and destination airports, peak flight plan altitude, elevation of impact, and distance and heading data, as well as temporal data for the times of day, week and year that each event happened. This was hundreds of thousands of documents that would have taken years to key into a conventional database. Instead, high-speed scanners were used that read in reports at a rate of 200 double-sided pages per minute, and a half dozen of these scanners completed the data entry in less than one month.

The operator then assigned colors to various ranges of the data. For instance, he first assigned red and blue to male and female pilots and looked at the data projected on a map. What popped up were hundreds of mostly red (male) dots spread out over the entire US map. Not real helpful. Next he assigned a spread of colors to all the makes of aircraft (Cessna, Beechcraft, etc.). Now all the dots changed to a rainbow of colors with no particular concentration of any given color in any given geographic area. Next he assigned colors to hours of the day, doing 12 hours at a time: Midnight to Noon and then Noon to Midnight. Now something interesting came up. The colors assigned to 6AM and 6PM (green) and the shades of green just before and after 6AM or 6PM were dominant on the map. This meant that the majority of the accidents happened around dusk or dawn. Next the operator assigned colors to distances from the departing airport: red within 5 miles, orange 5 to 10 miles, and so on, with blue being the longest (over 100 miles); a toy sketch of this color-bucketing step appears below. Again, a surprise in the image: the map showed mostly red or blue with very few colors in between. When he refined the query so that red meant within 5 miles of either the departure or destination airport, almost the whole map was red.

Using these simple techniques, an operator was able to determine in a matter of a few hours that 87% of all private aircraft accidents happen within 5 miles of the takeoff or landing runway, 73% happen in the twilight hours of dawn or dusk, 77% happen with the landing gear lowered or with the landing lights on, and 61% of the pilots reported being confused by ground lights. This gave the FAA the information it needed to improve approach lighting and navigation aids in the terminal control areas (TCAs) of private aircraft airports.

This was a very simple application that used a limited number of visual parameters at a time, but STARLIGHT is capable of so much more. It can assign things like the direction and length of a vector, the color of the line or its tip, and curvature, width and taper to various elements of a search. It can give one shape to one result and a different shape to another.
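As a rough illustration of that color-bucketing step from the FAA example, here is a minimal sketch in Python. The accident records, coordinates and color ranges are invented for the example; they are not FAA data.

    # Illustrative sketch only: color-coding one query dimension (distance from
    # the departure airport) the way the FAA example above describes.

    def distance_color(distance_miles):
        """Bucket a distance into the red-to-blue spread described in the text."""
        if distance_miles <= 5:
            return "red"
        if distance_miles <= 10:
            return "orange"
        if distance_miles <= 25:
            return "yellow"
        if distance_miles <= 100:
            return "green"
        return "blue"

    def near_either_airport(dist_departure, dist_destination, limit=5.0):
        """The refined query: flag red if within `limit` miles of either runway."""
        return dist_departure <= limit or dist_destination <= limit

    # Each record: (latitude, longitude, miles from departure, miles from destination)
    accidents = [(40.64, -73.78, 2.1, 88.0),
                 (33.94, -118.40, 97.5, 4.2),
                 (41.98, -87.90, 55.0, 60.0)]

    # In the real tool each tuple becomes a colored dot on a map; printing the
    # colors is enough here to see how the clustering would show up visually.
    for lat, lon, d_dep, d_dest in accidents:
        print((lat, lon), distance_color(d_dep), near_either_airport(d_dep, d_dest))

Swapping the bucketing function for pilot sex, aircraft make or hour of day gives the other passes described above; the visual clustering does the rest.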
Shape carries meaning too: there is significance in "seeing" a cube versus a sphere, or rounded corners on a flat surface instead of square corners on an egg-shaped surface. Everything visual can have meaning. Having 20 or more variables at a time that can be interlaced with geospatial and temporal (historical) parameters allows the program to search an incredible amount of data. Since the operator is looking for trends, anomalies and outliers, the visual representation of the data is ideal for spotting them without the operator having to scan the data itself. Because the operator is seeing an image that is devoid of the details of numbers or words, he can easily spot some aspect of the image that warrants a closer look.

In each of these trial queries, the operator can use his gloved hand to point to any given dot and call up the original source of the information in the form of a scanned image of the accident report. He can also touch virtual screen elements to bring out other data or query elements. For instance, he can merge two queries to see how many accidents near airports (red dots) had more than two passengers, or were single-engine aircraft, and so on. Someone looking on would see a guy with weird glasses waving his hand in the air, but in his eyes he is pressing buttons, rotating knobs and selecting colors and shapes to alter his 3-D view of the data.

In its use at NSA, they add one other interesting capability: pattern recognition. It can automatically find patterns in the data that would be impossible for any real person to find by looking at the data. For instance, they put in a long list of words that are linked to risk assessments, such as plutonium, bomb, kill, jihad, etc., and then they let it search for patterns. Suppose there are dozens of phone calls being made to coordinate an attack, but the callers are spread all over the US. Every caller is calling someone different, so no one number or caller can be linked to a lot of risk words. STARLIGHT can collate these calls and find the common linkage between them, and then it can track the calls, the callers and their discussions in all other media forms (a toy sketch of this kind of linking appears at the end of this passage). Now imagine the list of risk words and phrases being tens of thousands of words long, including code words and words used in other languages. It can include consideration of the source or destination of the call, such as public phones or unregistered cell phones. It can link the call to a geographic location within a few feet and then track the caller in all subsequent calls. It can use voice print technology to match calls made on different devices (radio, CB, cell phone, landline, VOIP, etc.).

This is still just a sample of the possibilities. STARLIGHT was the first generation and was only as good as the data that was fed into it through scanned documents and other databases of information. A later version, code named Quasar, was created that used advanced data mining and ERP (enterprise resource planning) system architecture to integrate direct feeds from information gathering resources. For instance, the old STARLIGHT system had to feed recordings of phone calls into a speech-to-text processor and then feed the resulting text into STARLIGHT. In the Quasar system, the voice monitoring equipment (radios, cell phones, landlines) feeds directly into Quasar, as does the direct feed of emails, telegrams, text messages, Internet traffic, etc.

So does the government have the ability to track you? Absolutely! Are they? Absolutely! But wait, there's more!
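Here is a minimal sketch, in Python, of the risk-word linking idea described above. The phone numbers, transcripts and word list are invented for the example; real pattern recognition would obviously be far more sophisticated.

    from collections import defaultdict
    from itertools import combinations

    RISK_WORDS = {"plutonium", "bomb", "jihad", "detonator"}

    calls = {
        "202-555-0101": "meet thursday bring the detonator",
        "310-555-0144": "package ready the bomb parts shipped",
        "206-555-0177": "weather is nice see you at the game",
    }

    # Map each risk word to the callers who used it.
    word_to_callers = defaultdict(set)
    for caller, transcript in calls.items():
        for word in set(transcript.split()) & RISK_WORDS:
            word_to_callers[word].add(caller)

    # Any caller who used a risk word gets flagged; flagged callers are then
    # linked pairwise into one cluster for a closer look at their other traffic.
    flagged = sorted({c for callers in word_to_callers.values() for c in callers})
    links = list(combinations(flagged, 2))
    print("flagged callers:", flagged)
    print("links to review:", links)

No single caller here says very much, but once the calls are collated the common thread between otherwise unrelated numbers is what gets surfaced.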
Above, I said that Quasar was a "later version". It is not the latest version. Thanks to the Patriot Act, Presidential Orders on warrantless searches, and the ability to hack into any database, NSA can now do so much more. This newer system is miles ahead of the relatively well-known Echelon program of information gathering (which was dead even before it became widely known). It is also beyond another older program called Total Information Awareness (TIA). This new capability is made possible by the bank of NSA Cray computers and memory storage that are said to make Google's entire system look like an abacus, combined with the latest integration (ERP) software and the latest pattern recognition and visual data representation systems.

Added to all of the Internet and phone monitoring and screening are two more additions, rolled into a new program called "Kontur". Kontur is the Danish word for Profile. You will see why in a moment.

Kontur adds geospatial monitoring of a person's location to their database. Since 2005, every cell phone broadcasts its GPS location at the beginning of every transmission, as well as at regular intervals even when you are not using it to make a call. This was mandated by the Feds supposedly to assist in 911 emergency calls, but the real motive was to be able to track people's locations at all times. For those few still using older model cell phones, they employ "tower tracking", which uses the relative signal strength and timing of the cell phone signal reaching each of several cell phone towers to pinpoint a person to within a few feet.

A holdover from the Quasar program was the tracking of commercial data, which includes every purchase made by credit card and any purchase where a customer discount card is used, like at grocery stores. This not only gives the Feds an idea of a person's lifestyle and income; by recording what they buy, it lets the Feds infer other behaviors. When you combine cell phone and purchase tracking with the ability to track other forms of transactions, like banking, doctors, insurance, police and public records, there are relatively few gaps in what they can know about you.

Kontur also mixed in something called geofencing, which allows the government to create digital virtual fences around anything they want. Then, when anyone crosses this virtual fence, they can be tracked (a toy sketch of such a fence check appears at the end of this passage). For instance, there is a virtual fence around every government building in Washington DC. Using predictive automated behavior monitoring and cohesion assessment software, combined with location monitoring, geofencing and sophisticated social behavior modeling, pattern mining and inference, they are able to recognize patterns of people's movements and actions as threatening. Several would-be shooters and bombers have been stopped using this equipment.

To talk about the "Profile" aspect of Kontur, we must first talk about why and how it became possible, because it became possible only when the Feds were able to create very, very large databases of information and still make effective use of that data. It took NSA 35 years of computer use to get to the point of using a terabyte (10^12 bytes) of data. That was back in 1990, using ferrite core memory. It took 10 more years to get to a petabyte (10^15 bytes) of storage; that was in early 2001, using 14-inch videodisks and RAID banks of hard drives. It took four more years to create and make use of an exabyte (10^18 bytes) of storage.
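As a rough illustration of the geofencing idea above, here is a minimal sketch in Python of a radius-style fence check. The coordinates, fence list and radius are invented for the example; a real system would presumably use precise polygons and a live feed of position reports rather than single lookups.

    from math import radians, sin, cos, asin, sqrt

    def haversine_miles(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points, in miles."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 3956 * 2 * asin(sqrt(a))

    # Each fence: (name, center latitude, center longitude, radius in miles)
    fences = [("government building", 38.8977, -77.0365, 0.25)]

    def check_fences(lat, lon):
        """Return the names of every fence the given position falls inside."""
        return [name for name, flat, flon, radius in fences
                if haversine_miles(lat, lon, flat, flon) <= radius]

    # A phone reporting its GPS position near the fenced building trips the fence.
    print(check_fences(38.8978, -77.0360))   # ['government building']
    print(check_fences(38.90, -77.10))       # []

The crossing test itself is trivial; the power comes from running it continuously against every reported position and every fence at once.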
With the advent of quantum memory using gradient echo and EIT (electromagnetically induced transparency), the NSA computers now have the capacity to store and rapidly search a yottabyte (10^24 bytes) of data, and they expect to be able to raise that to 1,000 yottabytes within two years. To search this much data, they use a bank of Cray XT Jaguar computers that do nothing but read and write to and from the QMEM (quantum memory). The look-ahead and read-ahead capabilities are possible because of the massively parallel processing of a bank of other Crays, which gives an effective speed of about 270 petaflops. Speeds are increasing at NSA at a rate of about 1 petaflop every two to four weeks. This kind of speed is necessary for things like pattern recognition and making use of the massive profile database of Kontur.

In late 2006, it was decided that NSA and the rest of the intelligence and right-wing government agencies would go beyond real-time monitoring and begin developing a historical record of what everyone does. Being able to search historical data was seen as essential for back-tracking a person's movements to find out what he has been doing and whom he has been seeing or talking with. This was so that no one would ever again accuse them of not "connecting the dots". But that means what EVERYONE does! As you have seen from the description above, they can already track your movements and all your commercial activities, as well as what you say on phones or in emails, what you buy, and what you watch on TV or listen to on the radio. The difference now is that they save this data in a profile about you. All of that and more.

Using geofencing, they have marked out millions of locations around the world, including obvious things like stores that sell pornography, guns, chemicals or lab equipment. Geofenced locations also include churches and organizations like Greenpeace and Amnesty International. They have moving geofences around people they are tracking: terrorists, but also political opponents, left-wing radio and TV personalities, and leaders of social movements and churches. If you enter their personal space, close enough to talk, you are flagged, and then you are geofenced and tracked. If your income level is low and you travel to the rich side of town, you are flagged. If you are rich and travel to the poor side of town, you are flagged. If you buy a gun or ammo and cross the wrong geofence, you will be followed. The pattern recognition of Kontur might match something you said in an email with something you bought and somewhere you drove in your car to determine that you are a threat (a toy sketch of this kind of cross-source flagging appears at the end of this passage). Kontur is watching and recording your entire life.

There is only one limitation to the system right now. The availability of soldiers or "men in black" to follow up on people who have been flagged is limited, so they are prioritizing whom they act upon. You are still flagged and recorded, but they are only acting on the ones judged to be a serious threat right now. It is only a matter of time before they find a way to reach out to anyone they want and curb or destroy them. It might come in the form of a government-mandated electronic tag that is inserted under the skin or implanted at birth. They have been testing these devices on animals under the guise of tracking and identifying lost pets. They have tried twice to introduce them to everyone in the military. They have also tried to justify putting them into kids for "safety".
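Here is a minimal sketch, in Python, of what that kind of cross-source flagging and prioritization could look like. The profile fields, weights and threshold are all invented for the example; they are not Kontur's actual rules.

    def threat_score(profile):
        """Combine signals from email text, purchases, and location history."""
        score = 0
        score += 2 * profile.get("risk_words_in_email", 0)
        if profile.get("bought_gun_or_ammo"):
            score += 3
        score += 2 * profile.get("geofence_crossings", 0)
        return score

    profiles = [
        {"name": "A", "risk_words_in_email": 0, "bought_gun_or_ammo": False, "geofence_crossings": 0},
        {"name": "B", "risk_words_in_email": 3, "bought_gun_or_ammo": True, "geofence_crossings": 2},
    ]

    # With limited "men in black" to act on flags, only the highest scores get
    # acted on first; everyone else stays flagged and recorded in the profile.
    THRESHOLD = 5
    queue = sorted((p for p in profiles if threat_score(p) >= THRESHOLD),
                   key=threat_score, reverse=True)
    print([p["name"] for p in queue])   # ['B']

The point of the sketch is simply that several weak signals from different sources add up to a flag, and the sorted queue is what the limited follow-up capacity works from.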
They are still pushing these implants for use in medical monitoring. Perhaps this will take the form of a nanobot. If they are successful in getting the population to accept these devices, and they then determine you are a risk, they simply deactivate you by remotely popping open a poison capsule with a radio signal. Such a device might be totally passive in a person who is not a threat, but it could be made lethal, or it could be programmed to inhibit the motor-neuron system or otherwise disable someone deemed to be high-risk. Watch out for things like this. It's the next thing they will do. You can count on it.
