Most mainstream AI researchers hope that strong AI can be developed
by combining the programs that solve various sub-problems. Hans
Moravec wrote in 1988:
"I am confident that this bottom-up route to artificial intelligence will one day
meet the traditional top-down route more than half way, ready to
provide the real world competence and the commonsense knowledge
that has been so frustratingly elusive in reasoning programs. Fully
intelligent machines will result when the metaphorical golden spike
is driven uniting the two efforts."
However, even this fundamental philosophy has been disputed; for
example, Stevan Harnad of Princeton concluded his 1990 paper on
the symbol grounding problem by stating:
"The expectation has often been voiced that top-down (symbolic) approaches
to modeling cognition will somehow meet bottom-up (sensory)
approaches somewhere in between. If the grounding considerations
in this paper are valid, then this expectation is hopelessly modular
and there is really only one viable route from sense to symbols:
from the ground up. A free-floating symbolic level like the software
level of a computer will never be reached by this route (or vice
versa) nor is it clear why we should even try to reach such
a level, since it looks as if getting there would just amount to
uprooting our symbols from their intrinsic meanings (thereby merely
reducing ourselves to the functional equivalent of a programmable
computer)."
Modern artificial general intelligence research
The term "artificial general intelligence" was used as
early as 1997, by Mark Gubrud in a discussion of the implications
of fully automated military production and operations. The term
was re-introduced and popularized by Shane Legg and Ben Goertzel
around 2002. AGI research activity in 2006 was described by Pei
Wang and Ben Goertzel as producing publications and preliminary
results. The first summer school in AGI was organized in Xiamen,
China in 2009 by Xiamen University's Artificial Brain Laboratory
and OpenCog. The first university course was given in 2010 and 2011
at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented
a course in AGI in 2018, organized by Lex Fridman and featuring
a number of guest lecturers.
As yet, most AI researchers have devoted little attention to AGI,
with some claiming that intelligence is too complex to be completely
replicated in the near term. However, a small number of computer
scientists are active in AGI research, and many of this group
contribute to a series of AGI conferences. The research is extremely
diverse and often pioneering in nature.
Timescales
In the introduction
to his 2006 book, Goertzel says that estimates of the time needed
before a truly flexible AGI is built vary from 10 years to over
a century, but the 2007 consensus in the AGI research community
seems to be that the timeline discussed by Ray Kurzweil in The Singularity
is Near (i.e. between 2015 and 2045) is plausible. However, mainstream
AI researchers have given a wide range of opinions on whether progress
will be this rapid. A 2012 meta-analysis of 95 such opinions found
a bias towards predicting that the onset of AGI would occur within
16–26 years for modern and historical predictions alike. It
was later found that the dataset listed some experts as non-experts
and vice versa.
Organizations explicitly
pursuing AGI include the Swiss AI lab IDSIA, Nnaisense, Vicarious,
Maluuba, the OpenCog Foundation, Adaptive AI, LIDA, and Numenta
and the associated Redwood Neuroscience Institute. In addition,
organizations such as the Machine Intelligence Research Institute
and OpenAI have been founded to influence the development path of
AGI. Finally, projects such as the Human Brain Project have the
goal of building a functioning simulation of the human brain. A
2017 survey of AGI categorized forty-five known active R&D
projects that explicitly or implicitly (through published
research) pursue AGI, with the largest three being DeepMind, the
Human Brain Project, and OpenAI.
In 2017, Ben Goertzel
founded the AI platform SingularityNET with the aim of facilitating
democratic, decentralized control of AGI when it arrives.
In 2017, researchers
Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on
publicly available and freely accessible weak AI such as Google
AI or Apple's Siri, among others. At the maximum, these AIs reached
an IQ value of about 47, which corresponds approximately to a six-year-old
child in first grade. An adult averages about 100. Similar
tests had been carried out in 2014, with the IQ score reaching a
maximum value of 27.
In 2019, video game programmer
and aerospace engineer John Carmack announced plans to research
AGI.
In 2020, OpenAI developed
GPT-3, a language model capable of performing many diverse tasks
without specific training. According to Gary Grossman in a VentureBeat
article, while there is consensus that GPT-3 is not an example of
AGI, it is considered by some to be too advanced to classify as
a narrow AI system.
Brain simulation
Whole brain emulation
A widely discussed approach to achieving general intelligent action
is whole brain emulation. A low-level brain model is built by scanning
and mapping a biological brain in detail and copying its state into
a computer system or another computational device. The computer
runs a simulation model so faithful to the original that it will
behave in essentially the same way as the original brain, or for
all practical purposes, indistinguishably. Whole brain emulation
is discussed in computational neuroscience and neuroinformatics,
in the context of brain simulation for medical research purposes.
It is discussed in artificial intelligence research as an approach
to strong AI. Neuroimaging technologies that could deliver the necessary
detailed understanding are improving rapidly, and futurist Ray Kurzweil
in the book The Singularity Is Near predicts that a map of sufficient
quality will become available on a similar timescale to the required
computing power.
Early estimates
[Figure: Estimates of how much processing power is needed to emulate a human
brain at various levels (from Ray Kurzweil, and from Anders Sandberg
and Nick Bostrom), plotted against the fastest TOP500 supercomputer by year.
Note the logarithmic scale and the exponential trendline, which assumes
computational capacity doubles every 1.1 years. Kurzweil believes that mind
uploading will be possible at the level of neural simulation, while the
Sandberg–Bostrom report is less certain about where consciousness arises.]
For low-level brain simulation,
an extremely powerful computer would be required. The human brain
has a huge number of synapses. Each of its roughly 10^11 (one hundred billion)
neurons has on average 7,000 synaptic connections (synapses) to
other neurons. It has been estimated that the brain of a three-year-old
child has about 10^15 synapses (1 quadrillion). This number declines
with age, stabilizing by adulthood. Estimates for an adult vary,
ranging from 10^14 to 5×10^14 synapses (100 to 500 trillion).
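The counts above can be cross-checked with a line of back-of-the-envelope arithmetic, multiplying the neuron count by the average per-neuron synapse figure:

```python
# Back-of-the-envelope check of the synapse counts quoted above.
neurons = 1e11               # ~10^11 neurons in the human brain
synapses_per_neuron = 7e3    # ~7,000 synaptic connections per neuron (average)

total_synapses = neurons * synapses_per_neuron
print(f"total synapses ~ {total_synapses:.0e}")  # prints "total synapses ~ 7e+14"
```

The product, about 7×10^14, falls between the upper adult estimate (5×10^14) and the three-year-old figure (~10^15), which is consistent with a rough average applied over counts that decline with age.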
An estimate of the brain's processing power, based on a simple
switch model for neuron activity, is around 10^14 (100 trillion)
synaptic updates per second (SUPS). In 1997, Kurzweil looked at
various estimates for the hardware required to equal the human brain
and adopted a figure of 10^16 computations per second (cps). (For
comparison, if a "computation" were equivalent to one floating-point
operation, the measure used to rate current supercomputers,
then 10^16 computations would be equivalent to 10 petaFLOPS,
achieved in 2011.) He used this figure to predict that the necessary
hardware would be available sometime between 2015 and 2025, if the
exponential growth in computer power at the time of writing continued.
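Kurzweil's extrapolation can be sketched numerically. The 1997 baseline of roughly 1 teraFLOPS (about the fastest supercomputer of that era) and the 1.1-year doubling time are illustrative assumptions, not figures from the text:

```python
import math

# Kurzweil's 1997 target: 10^16 computations per second, treating one
# computation as one floating-point operation (the text's comparison).
target_flops = 1e16          # = 10 petaFLOPS

# Assumed 1997 baseline: ~1 teraFLOPS (an assumption for illustration).
baseline_flops = 1e12

# Assumed doubling time for peak computing capacity, in years.
doubling_years = 1.1

doublings = math.log2(target_flops / baseline_flops)   # ~13.3 doublings needed
year_reached = 1997 + doubling_years * doublings
print(f"target reached around {year_reached:.0f}")     # prints "target reached around 2012"
```

Under these assumptions the crossover lands near 2011–2012, close to the year the 10-petaFLOPS mark was actually reached by a supercomputer.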
Modelling the neurons in more detail
The artificial neuron model assumed by Kurzweil and used in many
current artificial neural network implementations is simple compared
with biological neurons. A brain simulation would likely have to
capture the detailed cellular behaviour of biological neurons, presently
understood only in the broadest of outlines. The overhead introduced
by full modeling of the biological, chemical, and physical details
of neural behaviour (especially on a molecular scale) would require
computational power several orders of magnitude larger than Kurzweil's
estimate. In addition, the estimates do not account for glial cells,
which are at least as numerous as neurons and may outnumber them
by as much as 10:1, and which are now known to play a role in
cognitive processes.