What’s Symbol Grounding?

The symbol grounding problem concerns how words become associated with their meanings and, by extension, how consciousness is related to understanding symbolic meaning. It is most often discussed in the context of artificial intelligence and belongs to the field of semiotics, which also encompasses syntactics and semiosis. The problem was first defined in 1990 by Stevan Harnad, who used John Searle’s “Chinese room” thought experiment to illustrate it: an AI can manipulate symbols without understanding their meaning, so it cannot be said to possess consciousness or to achieve semiosis.

Symbol grounding is the connection of symbols, such as written or spoken words, with the objects, ideas, or events to which they refer. The symbol grounding problem concerns how words become associated with their meanings and, by extension, how consciousness is related to understanding symbolic meaning. Because it touches on these questions of meaning and consciousness, the problem is often discussed in the context of artificial intelligence (AI).

The study of symbols, together with the processes by which they acquire meaning and are interpreted, is known as semiotics. Within this field, the branch called syntactics deals with the properties and governing rules of symbolic systems, such as the formal properties of language. The symbol grounding problem falls within the scope of semiotics, which also includes semiosis, the process by which an intelligence comes to understand the world through signs.

The symbol grounding problem was first defined in 1990 by Stevan Harnad of Princeton University. In short, it asks how the understanding of the meanings of symbols within a system, such as a formal language, can be made intrinsic to that system. While the system’s operator may understand the meanings of the symbols, the system itself does not.

Harnad references John Searle’s classic “Chinese room” thought experiment to illustrate his point. In this experiment, Searle, who has no knowledge of the Chinese language, is given a set of rules that allows him to correctly answer, in written Chinese, questions that are also posed in written Chinese. An observer outside the room might conclude that Searle understands Chinese very well, yet Searle manipulates the Chinese symbols without ever understanding the meaning of the questions or the answers.

According to Harnad, the experiment may be analogous to an AI. A computer might be able to produce correct answers to external prompts, but it acts according to its programming, just as Searle follows the rules given to him in the thought experiment. An AI can manipulate symbols that have meaning to an outside observer, but it lacks a semantic understanding of those symbols. Therefore, the AI cannot be said to possess consciousness, because it does not actually interpret the symbols or understand what they refer to; it does not achieve semiosis.
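To make the analogy concrete, here is a minimal Python sketch (not from the original article; the question-and-answer pairs and the function name are hypothetical) of a program that, like Searle in the room, produces plausible answers by rule lookup alone, with no representation of what the symbols refer to.

```python
# Illustrative sketch of ungrounded symbol manipulation.
# The "rule book": input strings mapped to output strings. The pairs
# below are invented placeholders, not part of any real system.
RULES = {
    "What color is the sky?": "The sky is blue.",
    "What is two plus two?": "Two plus two is four.",
}

def chinese_room(question: str) -> str:
    """Return the answer dictated by the rule book, or a default reply.

    Like Searle in the room, this function only matches symbol strings
    to symbol strings; nothing in it is grounded in sky, color, or number.
    """
    return RULES.get(question, "I do not know.")

if __name__ == "__main__":
    print(chinese_room("What color is the sky?"))  # looks like understanding
    print(chinese_room("What is two plus two?"))   # but is only lookup
```

However convincing the output may look to an outside observer, nothing in the program connects the strings to the things they name; that missing connection is exactly what the symbol grounding problem asks about.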



