"Today, the Machine acts like a very large computer with top-level functions that operate at approximately the clock speed of an early PC. It processes 1 million emails each second, which essentially means network email runs at 1 megahertz. Same with Web searches. Instant messaging runs at 100 kilohertz, SMS at 1 kilohertz. The Machine's total external RAM is about 200 terabytes. In any one second, 10 terabits can be coursing through its backbone, and each year it generates nearly 20 exabytes of data. Its distributed "chip" spans 1 billion active PCs, which is approximately the number of transistors in one PC.
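The "clock speed" analogy above is just a change of units: an event rate in events per second is read directly as a frequency in hertz. A minimal sketch of that conversion, using the article's own estimates (not measurements):

```python
# The article's estimated event rates for "the Machine" (events per second).
# Reading a rate of N events/sec as N hertz gives the clock-speed analogy.
RATES_PER_SECOND = {
    "email": 1_000_000,            # 1 million emails/sec -> 1 MHz
    "instant messaging": 100_000,  # -> 100 kHz
    "SMS": 1_000,                  # -> 1 kHz
}

def as_clock_speed(events_per_second: float) -> str:
    """Express an event rate as a frequency, by direct analogy."""
    for unit, scale in (("GHz", 1e9), ("MHz", 1e6), ("kHz", 1e3)):
        if events_per_second >= scale:
            return f"{events_per_second / scale:g} {unit}"
    return f"{events_per_second:g} Hz"

for name, rate in RATES_PER_SECOND.items():
    print(f"{name}: {as_clock_speed(rate)}")
```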
This planet-sized computer is comparable in complexity to a human brain. Both the brain and the Web have hundreds of billions of neurons (or Web pages). Each biological neuron sprouts synaptic links to thousands of other neurons, while each Web page branches into dozens of hyperlinks. That adds up to a trillion "synapses" between the static pages on the Web. The human brain has about 100 times that number - but brains are not doubling in size every few years. The Machine is.
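The trillion-"synapse" figure follows from simple order-of-magnitude arithmetic. A sketch of that back-of-the-envelope calculation, taking the article's rough estimates ("hundreds of billions" of pages, roughly ten links each as a floor) as inputs:

```python
# Order-of-magnitude check on the paragraph's figures. The inputs are the
# article's estimates, not measured values.
web_pages = 100 * 10**9   # "hundreds of billions" of pages, taken as ~1e11
links_per_page = 10       # "dozens of hyperlinks", taken as ~10 for a floor

web_links = web_pages * links_per_page  # ~1e12: the trillion "synapses"
brain_synapses = 100 * web_links        # "about 100 times that number"

print(f"Web 'synapses': {web_links:.0e}")
print(f"Brain synapses: {brain_synapses:.0e}")
```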
Since each of its "transistors" is itself a personal computer with a billion transistors running lower functions, the Machine is fractal. In total, it harnesses a quintillion transistors, expanding its complexity beyond that of a biological brain. It has already surpassed the 20-petahertz threshold for potential intelligence as calculated by Ray Kurzweil. For this reason some researchers pursuing artificial intelligence have switched their bets to the Net as the computer most likely to think first. Danny Hillis, a computer scientist who once claimed he wanted to make an AI "that would be proud of me," has invented massively parallel supercomputers in part to advance us in that direction. He now believes the first real AI will emerge not in a stand-alone supercomputer like IBM's proposed 23-teraflop Blue Brain, but in the vast digital tangle of the global Machine.
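The "quintillion transistors" claim is the product of the two estimates the passage gives: about a billion networked PCs, each itself holding about a billion transistors. Spelled out:

```python
# The fractal tally: a billion PC-"transistors", each a PC with a billion
# transistors of its own. Both figures are the article's estimates.
pcs_on_the_net = 10**9        # "1 billion active PCs"
transistors_per_pc = 10**9    # "a billion transistors" in each PC

machine_transistors = pcs_on_the_net * transistors_per_pc

print(f"{machine_transistors:.0e} transistors")  # 1e+18, i.e. a quintillion
```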
In 10 years, the system will contain hundreds of millions of miles of fiber-optic neurons linking the billions of ant-smart chips embedded into manufactured products, buried in environmental sensors, staring out from satellite cameras, guiding cars, and saturating our world with enough complexity to begin to learn. We will live inside this thing.
One great advantage the Machine holds in this regard: It's always on. It is very hard to learn if you keep getting turned off, which is the fate of most computers. AI researchers rejoice when an adaptive learning program runs for days without crashing. The fetal Machine has been running continuously for at least 10 years (30 if you want to be picky). I am aware of no other machine - of any type - that has run that long with zero downtime. While portions may spin down due to power outages or cascading infections, the entire thing is unlikely to go quiet in the coming decade. It will be the most reliable gadget we have.
And who will write the software that makes this contraption useful and productive? We will. In fact, we're already doing it, each of us, every day. When we post and then tag pictures on the community photo album Flickr, we are teaching the Machine to give names to images. The thickening links between caption and picture form a neural net that can learn. Think of the 100 billion times per day humans click on a Web page as a way of teaching the Machine what we think is important. Each time we forge a link between words, we teach it an idea. Wikipedia encourages its citizen authors to link each fact in an article to a reference citation. Over time, a Wikipedia article becomes totally underlined in blue as ideas are cross-referenced. That massive cross-referencing is how brains think and remember. It is how neural nets answer questions. It is how our global skin of neurons will adapt autonomously and acquire a higher level of knowledge.
The human brain has no department full of programming cells that configure the mind. Rather, brain cells program themselves simply by being used. Likewise, our questions program the Machine to answer questions. We think we are merely wasting time when we surf mindlessly or blog an item, but each time we click a link we strengthen a node somewhere in the Web OS, thereby programming the Machine by using it.
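The use-strengthens-the-link idea can be made concrete with a toy model. This is a hedged illustration only, not any real Web protocol: each traversal of a hyperlink bumps a weight on that edge, Hebbian-style, so the most-used paths come to dominate.

```python
# Toy model of "programming the Machine by using it": clicking an edge
# strengthens it. Page names are made up for illustration.
from collections import Counter

link_weights: Counter = Counter()

def click(source: str, target: str) -> None:
    """Record one traversal of a hyperlink, strengthening that edge."""
    link_weights[(source, target)] += 1

# Three readers follow the citation; one wanders off elsewhere.
for _ in range(3):
    click("article.html", "reference.html")
click("article.html", "unrelated.html")

# The oft-followed edge now outweighs the rarely followed one.
print(link_weights[("article.html", "reference.html")])  # 3
print(link_weights[("article.html", "unrelated.html")])  # 1
```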
What will most surprise us is how dependent we will be on what the Machine knows - about us and about what we want to know. We already find it easier to Google something a second or third time rather than remember it ourselves. The more we teach this megacomputer, the more it will assume responsibility for our knowing. It will become our memory. Then it will become our identity. In 2015 many people, when divorced from the Machine, won't feel like themselves - as if they'd had a lobotomy."