In 1965, Intel co-founder Gordon E. Moore noted a trend. He observed that the number of components that could fit onto an integrated circuit was doubling roughly every year and predicted that the trend would continue for at least another decade.
Forty years later, it appears that Moore may have stumbled upon the beginning of the end of humankind.
There is a hypothetical concept in the study of emerging technology: As computing power grows at an exponential rate, it is believed there will come a point at which technological change becomes so rapid and profound that it marks the beginning of post-humanity.
Throughout history, the human brain has used invention to spur progress. Mankind is limited in its physical abilities and depends on its tools to bridge the gap between concept and reality. Every tool in existence extends human reach past those limits. People fly in aircraft because they cannot swim across oceans, use phones because their voices cannot carry for miles and use guns because their hands do not kill fast enough.
There is one entity, however, that the mind has never sought to improve: itself.
It is not yet understood exactly how the human brain works. On the other hand, people have gradually come to understand why it works.
Extensive studies have documented the brain’s reactions to stimuli such as communication, risk assessment and problem-solving situations. People have created robots that can mimic human instinct with uncanny precision. So much has been gained from a language humans still do not speak: the language of the human mind.
Imagine a world in which human beings are fluent in that language. Imagine a world in which computers are capable, as San Diego State University computer science professor Vernor Vinge put it, of “waking up.”
Vinge discussed the possibility of science improving on human intellect, and named the point at which a machine’s processes surpass the intellectual activity of any living being the Singularity. This “ultraintelligent machine,” he claims, borrowing a term from the statistician I.J. Good, will be mankind’s final invention.
According to the Singularity Institute for Artificial Intelligence, modern computer chips process information at a rate 10 million times greater than that of the human brain. In other words, humans with modern digital processing capabilities could have compressed the roughly 300 years of the Renaissance into about a quarter of an hour.
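As a back-of-the-envelope check, assuming a roughly 300-year Renaissance (14th to 17th century) and taking the Institute's 10-million-fold figure at face value, the compression works out to minutes, not hours:

```python
# Assumed figures: a ~300-year Renaissance and the cited
# 10-million-fold processing-speed advantage.
RENAISSANCE_YEARS = 300
SPEEDUP = 10_000_000

# Convert the era's span into hours, then divide by the speedup.
hours = RENAISSANCE_YEARS * 365.25 * 24   # about 2.63 million hours
compressed_hours = hours / SPEEDUP
compressed_minutes = compressed_hours * 60

print(f"{compressed_hours:.2f} hours (~{compressed_minutes:.0f} minutes)")
# prints: 0.26 hours (~16 minutes)
```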
The real point of interest, however, is not the completion of the first artificially intelligent being, but the generations that follow it. Measured against such machine speeds, the rate of human innovation is glacially slow.
But if ultraintelligent machines were free to improve themselves, they could – in theory – develop beings more intellectually capable than we could ever be. This would bear out Vinge’s idea of mankind’s last great invention: a time when computers will proclaim, “Thanks for everything, but we’ll take it from here.”
The Institute of Electrical and Electronics Engineers is holding a nationwide competition among electrical engineering students in 2009, asking participants to construct a robot capable of sorting and storing recyclable waste as efficiently as possible.
USF will represent the Southeast division with a team of six engineering students led by Mark Mniece and including Mohamad Khawaja, Eric Davidson, Souad Rochdi, Moez Oueslati and Jose Salazar.
Salazar said that the significance of his team’s engineering efforts “is working together with people to learn and understand how the world functions and create solutions.”
Presently, those solutions are entirely dependent on people’s participation, but as humans introduce greater levels of complexity into the world, they may also be contributing to a future in which they are only minor players in a much bigger game.
Mohammed Ibrahim is a senior majoring in pre-med biology.