Technological Singularity
The technological singularity, or simply the singularity, is a theoretical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence that will "radically change human civilization, and perhaps even human nature itself." Since the capabilities of such an intelligence may be difficult for an unaided human mind to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which—from the perspective of the present—the future course of human history is unpredictable or even unfathomable.
The first use of the term "singularity" in this context was by mathematician John von Neumann, who in the mid-1950s spoke of "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue". The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil cited von Neumann's use of the term in a foreword to von Neumann's classic The Computer and the Brain.
Proponents of the singularity typically postulate an "intelligence explosion", where superintelligences design successive generations of increasingly powerful minds, that might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass that of any human.
Kurzweil predicts the singularity to occur around 2045, whereas Vinge predicts some time before 2030. At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial general intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040. His own prediction on reviewing the data is that there is an 80% probability that the singularity will occur between 2017 and 2112.
What Is the Singularity?
The acceleration of technological progress has been the central feature of this century. We are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):
o The development of computers that are "awake" and superhumanly intelligent. (To date, most controversy in the area of AI relates to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
o Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
o Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
o Biological science may find ways to improve upon the natural human intellect.
The discourse about Artificial Intelligence is often polarized. There are those who, like Singularity booster Ray Kurzweil, imagine our robo-assisted future as a kind of technotopia, an immortal era of machine-assisted leisure. Others, Barrat included, are less hopeful, arguing that we must proceed with extreme caution down the path towards Artificial Intelligence—lest it lap us before we even realize the race is on. Many of the building blocks of functional AI are by definition "black box" systems—methods for programming with comprehensible outputs, but unknowable inner workings—and we might find ourselves outpaced far sooner than we expect.
The scary thing, Barrat points out, isn't the fundamental nature of Artificial Intelligence. We are moving unquestionably towards what might turn out to be quite innocuous technology indeed—even friendly. What's scary is that countless corporations and government agencies are working on it simultaneously, without oversight or communication. Like the development of atomic weapons, AI is a potentially lethal technology, and the percentage of researchers actively considering its implications, or taking steps to ensure its safe implementation, is alarmingly small. (James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era)