The Technological Singularity refers to the point at which superhuman intelligence is created, resulting in a snowball effect of ever more powerful intelligences. This could lead to the extinction or liberation of humanity, depending on the attitudes of the superhuman intelligences towards humans. The Singularity may be reached through intelligence amplification (IA) or artificial intelligence (AI), and experts predict it could happen between 2010 and 2030. Because the initial motivations of early superhuman intelligence may determine the fate of humanity, some regard the Singularity as a practical engineering goal towards which significant progress can be made.
The Technological Singularity, or simply the “Singularity”, is a multifaceted concept in futurism with several overlapping and sometimes conflicting definitions. The most appropriate and prominent definition of the Singularity was given by Vernor Vinge in his essay, The Coming Technological Singularity: it refers to the point at which superhuman intelligence is technologically created. These superhuman intelligences could then apply their brainpower and expertise to the task of creating additional or more powerful superhuman intelligences, resulting in a snowball effect with consequences beyond our current ability to imagine.
The term “Technological Singularity” was coined by analogy with the singularity at the center of a black hole, where the forces of nature become so intense and unpredictable that our ability to calculate the behavior of matter under these circumstances drops to zero. Often mentioned alongside the idea of superhuman intelligence in Singularity dialogue is the notion of accelerating technological change. Some have argued that as the slope of technological progress increases, it will culminate in an asymptote, visually similar to a mathematical singularity.
However, this notion of a singularity is not what Vinge intended; he was referring to the emergence of superhuman intelligence, along with superhuman thinking speeds. (Here “intelligence” includes the ability to understand and create concepts, transform data into theories, make analogies, be creative, and so on.) Although superhuman intelligences creating further superhuman intelligences would indeed accelerate technological progress, that progress would not become infinite in the sense suggested by a mathematical singularity.
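To make the contrast concrete, here is a small illustrative calculation (not one Vinge himself gives): exponential growth, $x(t) = x_0 e^{kt}$, accelerates without bound yet remains finite at every finite time. A true mathematical singularity instead requires something like hyperbolic growth, $\frac{dx}{dt} = x^2$, whose solution $x(t) = \frac{1}{t_\ast - t}$ diverges to infinity at the finite time $t = t_\ast$. Vinge's Singularity presupposes only the first kind of runaway acceleration, not a literal divergence of the second kind.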
Since superhuman intelligences would, by definition, be smarter than any human being, our ability to predict what they would be capable of with a given amount of time, matter, or energy is limited at best. A superhuman intelligence might be able to fashion a functioning supercomputer from cheap and readily available components, or develop full-fledged nanotechnology with nothing more than an atomic force microscope. Because the ability of a superhuman intelligence to design and manufacture technological gadgets would quickly surpass the best efforts of human engineers, a superhuman intelligence may very well be the last invention humanity ever needs to make. Due to their superhuman genius and rapidly developing technologies, the actions of intelligences emerging from a Technological Singularity could lead to the extinction or liberation of our entire species, depending on the attitudes of the most powerful superhuman intelligences towards human beings.
Oxford philosopher Nick Bostrom, director of the Oxford Future of Humanity Institute and the World Transhumanist Association, argues that how superhuman intelligences treat humans will depend on their initial motivations at the time of their creation. A kind superhuman intelligence, wanting to preserve its kindness, would breed kind (or kinder) versions of itself as the spiral of self-improvement continued. The result could be a paradise in which superhuman intelligences solve the world’s problems and offer consensual intelligence enhancement to human beings. On the other hand, a malignant or indifferent superhuman intelligence could do much the same thing, resulting in our accidental or deliberate destruction. For these reasons, the Technological Singularity may be the most important milestone our species will ever face.
Several paths to superhuman intelligence have been proposed by Singularity analysts and proponents. The first is IA, or intelligence amplification, which takes an existing human and transforms him or her into something more than human via neurosurgery, brain-computer interfacing, or perhaps even brain-brain interfacing. The other is AI, or artificial intelligence, the creation of a dynamic cognitive system that surpasses humans in its ability to form theories and manipulate reality. When either of these technologies will reach the threshold level of sophistication needed to produce superhuman intelligence is uncertain, but a variety of experts, including Bostrom, cite dates between 2010 and 2030 as likely.
Because the Singularity may be closer than many assume, and because the initial motivations of early superhuman intelligence may determine the fate of our species, some philosopher-activists (“Singularitarians”) see the Singularity not merely as a subject for speculation and discussion, but as a practical engineering goal towards which significant progress can be made in the present day. Thus, in 2000, the Singularity Institute for Artificial Intelligence was founded by Eliezer Yudkowsky to work solely towards this goal.