With so much conversation these days about artificial intelligence, and about what we might expect as scientists and researchers construct ever-more complex machines, it seems a good time to consider not only what it means to be “intelligent,” but also what weight the term “artificial” carries when the two words are used together. In recent years, cognitive scientists and AI researchers have made significant progress in producing machines that perform specific tasks with remarkable, specialized skill, and in very narrow domains these systems have outperformed humans in circumstances previously thought to be beyond such artificial constructs.
While the hoopla and publicity surrounding such events tend toward hyperbole and sensational headlines, there is genuine achievement underneath it all that warrants our attention and could be described as commendable in the context of modern scientific research. Most media consumers and television viewers have encountered the commercials for IBM’s Watson, and have likely seen reports of Watson’s abilities and accomplishments. There is much to admire in the work that produced such a system, and the benefits are fairly straightforward as presented in the advertisements, though it is also clear that the ads have been designed to feature the most benign and easy-to-understand characteristics of a system that accomplishes its tasks using artificial intelligence. The underlying science, the potential risks, and the limits of such research are rarely discussed in such ads.
To make sense of it all, and to think about what exactly is being accomplished with artificial intelligence, what forces and processes are being employed, and how the results compare to other cognitive achievements, especially human intelligence and human cognitive processes, we have to understand the most important differences between a system like Watson and the cognitive processes and brain physiology of modern humans. While some striking similarities exist between the basic architecture of neural networks in the brain and modern AI systems, not a single project currently underway is anywhere near an equivalent level of general capability, or even a basic understanding of what it takes to create a human mind. That is not to say the undertaking is impossible, nor is it impossible to imagine how human minds might eventually make great leaps, both in constructing advanced systems and in reaching a deeper level of understanding. After all, the human mind is pretty stunning all by itself!
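To make the analogy concrete: the “neural networks” used in modern AI are loose mathematical abstractions of biological neurons, a weighted sum of inputs passed through a nonlinearity. A minimal sketch, purely for illustration (the input values and weights below are arbitrary examples of my own, not anything from a real system):

```python
# A single artificial "neuron": a weighted sum of inputs passed
# through a nonlinearity. Modern AI systems stack such units by
# the millions; each one is only a loose mathematical abstraction
# of a biological neuron.
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, loosely analogous to synaptic integration.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid "firing" function, squashing the output into (0, 1).
    return 1.0 / (1.0 + math.exp(-activation))

# Example: three inputs with hand-picked (hypothetical) weights.
output = neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1)
print(round(output, 3))  # a value strictly between 0 and 1
```

The gulf between this toy unit and the electrochemical behavior of a living neuron is part of the point: the resemblance is architectural, not biological.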
What is most discouraging, from my point of view, is how much emphasis is placed on the mechanics of intelligence, the structural underpinnings of physical systems, instead of a more holistic and comprehensive approach to increasing our understanding. A recent article in the Wall Street Journal (Review, March 19-20, 2017) by Yale University computer science professor David Gelernter posits that “…software can simulate feeling. A robot can tell you it’s depressed and act depressed, although it feels nothing.” Whether or not this approach might bring us closer to “machines that can think and feel,” success seems like a long shot. If all we can do is “simulate” a human mind, is that really accomplishing anything?
Professor Gelernter goes to great lengths to describe the levels of a functional human mind, giving us valuable insight into the way our own minds work: how we shift between levels of awareness, and how we make such good use of our unique brand of intelligence. He then suggests that AI could create these same circumstances in a “computer mind,” and that it could “…in principle, build a simulated mind that reproduced all of the nuances of human thought, and (which) dealt with the world in a thoroughly human way, despite being unconscious.” He takes great pains to enumerate the ways in which the “spectrum” of a human mind operates, and then concludes that “Once AI has decided to notice and accept this spectrum–this basic fact about the mind–we will be able to reproduce it in software.”
We cannot reduce what it means to feel to the astonishingly complex machinations of the human brain, any more than we can boil down the complexity of the human brain to the point where an artfully written piece of software can recreate anything close to human feelings, to what it actually feels like to be a living, breathing, cognitive human being. As Hamlet explains, “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.” Shakespeare’s intimation of the limits of even human thought should give us pause when we consider the prospect of producing thought artificially.
—more to come—
According to Webster’s Unabridged Dictionary, intelligence is defined as:
1. capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.
2. manifestation of a high mental capacity: “He writes with intelligence and wit.”
In a recent study conducted at the University of Western Ontario, researchers acknowledged the limitations of current scientific research, but offered a basis for suggesting factors to consider. They “looked into the brain areas that are activated by tasks that are typically used to test for intelligence,” and reported their results:
“…based on the set of brain areas that might contribute to those tasks. However don’t get too excited, the methods used have severe limitations and we are still only at the hypothesis level. We do not know how these areas contribute to performance in intelligence tests and we do not know why they are activated and how they interact together to create the behavior.”
According to a recently published neuroscientific paper, “a broader definition was agreed to by 52 prominent researchers on intelligence:”
“Intelligence is a very general capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test‑taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—‘catching on’, ‘making sense’ of things, or ‘figuring out’ what to do. Intelligence, so defined, can be measured, and intelligence tests measure it well.”
Reviewing the many related brain structures involved in cognitive functioning, researchers concluded that:
“…variations in these structures and functions may be “endophenotypes” for intelligence — that is, they might be intermediate physiological markers that contribute directly to intelligence. Therefore, genes involved in intelligence might be more closely linked to these variations in brain structure and function than to intelligence itself. In fact, in all studies to date, the genetic influences on these structures and functions were highly correlated with those on general intelligence.”
–excerpts from “The Neuroscience of Human Intelligence Differences,” by Ian J. Deary, Lars Penke, and Wendy Johnson
There are a number of individuals today who are beginning to make associations between the technological advances of modern science and some of the ancient esoteric traditions like yoga, in an attempt to explain our subjective experience of consciousness:
“If hypothetical machinery inside neurons fails to explain qualia, (the ‘what-it’s-like’ quality of experience) must we then consider the molecules that make up the neuronal machinery, or the atoms inside the molecules, or the subatomic particles inside the atoms? Where is the difference that causes the qualia of subjective experience? A less problematic explanation is possible. German scientist, Gottfried Leibniz, postulated irreducible quanta of consciousness he termed ‘monads.’ Matter does not create consciousness. Instead, matter is animated by monads. It seems hardly a coincidence that Leibniz’ monads would perfectly fit between the moments of time that lead to Kaivalya, (Yoga term for enlightenment or nirvana.)
Ultimately, Kaivalya is an ineffable experience. But the claim of yoga is that it provides means to experience what is outside of the individualized mind. The experience of going through the center of consciousness and emerging, as it were, on the other side is very much one of turning inside out. In our ordinary consciousness we are turned outwards towards the world-image which we externalized around us.
In going through our consciousness the entire process is reversed, we experience an inversion…that which was without becomes within. In fact, when we succeed in going through our center of consciousness and emerge on the other side, we do not so much realize a new world around us as a new world within us. We seem to be on the surface of a sphere having all within ourselves and yet to be at each point of it simultaneously…the outstanding reality of our experience…is the amazing fact that nothing is outside us.”
–excerpts from an article by Donald J. DeGracia, Associate Professor of Physiology at the Wayne State University School of Medicine, Detroit, in EdgeScience Magazine #16, November 2013
Recent research in artificial intelligence has begun to approach what might be described as a kind of tipping point, where the lines begin to blur between what is clearly a type of machine intelligence, like the current offerings in robotics and self-driving cars, and something more akin to the kind of intelligence that talks back to you or responds in a conversational manner, like Apple’s “Siri” or the Windows 10 personal assistant, “Cortana.” Many of these innovations are driven by the hope of eventually developing AI technologies to the point where they function so much like the human brain that communicating with them will be virtually indistinguishable from communicating with another living person.
While this is an enormously appealing concept to our modern sensibilities, one currently fueling a huge amount of research in the industry, even supposing it were possible to produce a device or platform commensurate with the trillions of connections between neurons in the human brain, characterizing the resulting machine as either “intelligent” or “conscious” requires us to re-examine what those terms mean. Our current understanding of them, even as they apply to humans, is still far from comprehensive or complete, and tracing the development of “human” or “biological” intelligence through the millennia reveals a key component of the challenge in creating an artificial version that might qualify as equivalent.
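The sheer scale of those “trillions of connections” is worth pausing on. The figures below are commonly cited order-of-magnitude estimates of my own choosing, not numbers from this essay, and the size assumed for an artificial network is a hypothetical round figure for a large system of this era:

```python
# Back-of-envelope comparison of the brain's connectivity with a
# large artificial network. All figures are rough, commonly cited
# order-of-magnitude estimates, not precise measurements.
NEURONS_IN_BRAIN = 8.6e10    # ~86 billion neurons
SYNAPSES_IN_BRAIN = 1.0e14   # ~100 trillion synaptic connections
ARTIFICIAL_PARAMS = 1.0e9    # a hypothetical billion-parameter network

ratio = SYNAPSES_IN_BRAIN / ARTIFICIAL_PARAMS
print(f"The brain has roughly {ratio:,.0f} times as many connections "
      "as a billion-parameter artificial network.")
```

Even granting every generous assumption, the arithmetic alone suggests how far current hardware remains from the connectivity of a single human brain, before any question of consciousness is raised.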
Early humans and their fellow primates and mammals, along with all the various species endowed with sufficiently complex neural structures and central nervous systems, at some point possessed a brain or other neural configuration of adequate strength, size, and architecture to retain memories and to process the data gleaned through their available senses. These structures, from the most primitive to the most sophisticated, eventually provided the necessary support for adaptive learning, allowing each organism to make efficient use of that information and to produce a range of results, commensurate with its species-specific capacities and habitat, that enhanced its survival in its respective environment.
Once our ancient ancestors reached a certain level of development, through the accumulation of incremental evolutionary changes, they acquired enhanced cognitive talents sufficient for what we describe as “human intelligence,” which eventually led to the ability to reason and plan well enough to override emotional distractions, needs, and desires, and to awaken to a penetrating level of subjective self-awareness. As any parent of a healthy child can tell you, intelligence does not appear immediately, even in modern human children. Even in the advantageous circumstances and environments in which these amazing cognitive human creatures develop, it still requires a minimal degree of relevant experience in the world to accumulate a useful and functional knowledge base, to hone learning skills, and to build a collection of memories that enhance whatever cognitive, genetic, and other physiological resources a child brings to the process.
As a consequence of the random combination of chromosomes in human reproduction, there is sufficient diversity in the human genome that each child begins with a relatively unique set of genetic circumstances. This diversity is necessary for the health of our species, and as a result we observe a full range of endowment, bestowing on our descendants anything from a general baseline capacity for cognitive development to a potential for enhanced intellectual development, right from the start. A vast array of cultural and environmental variables can either promote or inhibit whatever potential is present, and throughout human history we have observed how a nurturing or disadvantageous environment, as well as individual initiative or apathy, can alter the equation in either direction.
It seems likely, in view of these contributing factors, that it is through a combination of innate cognitive talent, genetic endowment, and environmental conditions that intelligence either flourishes or falters, in much the same way it has since the earliest neural structures appeared in the ancestors of the creatures existing today. In every case, whatever degree of potential existed within a particular species was either successfully developed and exploited for survival, or thwarted by circumstances from developing well enough to sustain a niche, resulting in that species’ extinction.
Our challenge in the 21st century is to determine which contributing factors for increasing intelligence can safely be selected by humans for incorporation into what we are currently describing as “artificial intelligence” or “machine intelligence.” Unfortunately, no matter what we are ultimately able to do, in my view we won’t be able to incorporate our humanity fully into machines, nor will we be able to artificially endow them with the experience of “being human.” Our awareness of existing as human beings clearly requires a variety of nominally functional, finely-tuned, and integrated biological systems, each of which is currently essential; but there is so much more to being a subjectively aware human person. There must be something that it is like to be human, something which cannot be precisely replicated by any technological advancement or created through sheer engineering genius. The subjective experience of human consciousness makes use of our very human capacity for intelligence, as well as the penetrating awareness provided by an astonishing array of electrochemical processes in our miraculous brains; but what we are accessing is not PRODUCED by the brain, but rather PERCEIVED by it.
It’s interesting to me how some scientists and thinkers in the various fields of artificial intelligence believe that it is simply a matter of achieving a sufficient degree of complexity in the structures we devise for processing the voluminous data, together with sufficiently pliable, flexible, and interactive software driven by the necessary algorithms, and we will eventually produce a sentient, intelligent, and conscious machine.
In his fascinating and expansive book, “The Universe in a Nutshell,” Stephen Hawking posits that if “very complicated chemical molecules can operate in humans to make them intelligent,” it should follow that “equally complicated electronic circuits can also make computers act in an intelligent way.” He goes on to say that electronic circuits share a problem with the chemical processes in our brains: processing data at a useful speed. He also rightly points out that computers currently have less computational power than “a humble earthworm,” and while they “have the advantage of speed…they show no sign of intelligence.” And he reminds us that, even with our capacity for what we call intelligence, “the human race does not have a very good record of intelligent behavior.”
The possession of a capacity for intelligence of any sort, artificial or otherwise, is clearly not a “stand-alone” feature sufficient to sustain a species in and of itself. As we have observed throughout the evolutionary history of the natural world, constructing and sustaining a successful organism requires the development of a range of compensatory and complementary abilities and potentials, commensurate with the designs and functions of a particular species, in order to achieve a requisite degree of balance.
In the case of Homo sapiens, our particular brand of human intelligence, as we currently understand it, appears to be primarily the result of human evolution and progress throughout our history as upright, bipedal, and increasingly cognitive beings. As a result, our species is apparently uniquely well-suited to its evolutionary niche, and currently dominates among living organisms largely for this very reason. While we share much in common with our primate and mammalian family of creatures, and bearing in mind that we are equally indebted to all living things and to the Earth itself for our continued ability to sustain ourselves, intelligence appears to exist in remarkably adaptive and unique forms along each of the evolutionary paths of the species families that coexist with us today.
It would be arrogant to suggest that our variety of intelligence is in any way superior to that enjoyed by other organisms on our planet, except in the context of its usefulness to our specific nature as humans. Our highly adaptive nature is fairly well-suited to the requirements of our species, and while one might reasonably argue that our inclinations and intelligence are lacking in one way or another, human intelligence has, even with its limitations, foibles, and perceived deficits, managed to keep pace with our continued evolution thus far. Provided we persist in developing and adapting to our ever-changing circumstances, there is cause for optimism, in my view.
What we tend to miss in most of our estimations of what sort of artificial intelligence might emerge from our efforts is that, no matter what results are forthcoming, it will very likely be profoundly different from our own, regardless of how faithfully we aim to recreate the mental processes and physiological structures of our exquisitely adaptive brains.