A&O notes on ABSTRACTION

ABSTRACTION in MIND

Tenenbaum et al. (2011)[i] observe that “We build rich causal models, make strong generalizations, and construct powerful abstractions, whereas the input data are sparse, noisy, and ambiguous—in every way far too limited. A massive mismatch looms between the information coming in through our senses and the outputs of cognition.” [ii]

Read Nobel Laureate neuroscientist Eric Kandel’s comments on abstract art in Reductionism in Art and Brain Science (2016).

NEUROSCIENCE  

Abstract concepts in the primate brain (by Peter Stern)

Do primates have neurons that encode the conceptual similarity between spaces that differ by their appearance but correspond to the same mental schema? Baraduc et al. recorded from monkey hippocampal neurons while the animals explored both a familiar environment and a novel virtual environment that shared the same general structure as the familiar environment but displayed never-before-seen landmarks. About one-third of hippocampal cells showed significantly correlated firing for both familiar and novel landscapes. These correlations hinged on space or task elements, rather than on immediate visual information. The functional features of these cells are analogous to human concept cells, which represent the meaning of a specific stimulus rather than its apparent visual properties. (Science, this issue p. 635)
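The test summarized above amounts to correlating each cell’s spatial firing-rate map in the familiar maze with its map in the novel maze and asking whether that correlation exceeds chance. The Python sketch below is only an illustration of that idea, not the authors’ analysis pipeline; the map size, the toy goal-centered field, and the bin-shuffling null are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

def map_correlation(map_familiar, map_novel):
    """Pearson correlation between two spatial firing-rate maps (same binning)."""
    a, b = map_familiar.ravel(), map_novel.ravel()
    keep = ~np.isnan(a) & ~np.isnan(b)           # ignore unvisited (NaN) bins
    return float(np.corrcoef(a[keep], b[keep])[0, 1])

def is_schema_like(map_familiar, map_novel, n_shuffles=1000, alpha=0.05):
    """Is the familiar/novel map correlation larger than a bin-shuffled null?"""
    observed = map_correlation(map_familiar, map_novel)
    null = np.empty(n_shuffles)
    flat = map_novel.ravel().copy()
    for i in range(n_shuffles):
        rng.shuffle(flat)                        # destroy spatial correspondence
        null[i] = map_correlation(map_familiar, flat.reshape(map_novel.shape))
    return observed > np.quantile(null, 1 - alpha)

# Toy cell: two 20 x 20 rate maps sharing a goal-centered field plus noise,
# standing in for the familiar and novel mazes with different landmarks.
ys, xs = np.mgrid[0:20, 0:20]
field = np.exp(-((xs - 12) ** 2 + (ys - 7) ** 2) / 20.0)
familiar = field + 0.1 * rng.random((20, 20))
novel = field + 0.1 * rng.random((20, 20))
print(is_schema_like(familiar, novel))           # True for this toy cell

In this toy case the shared goal-centered field drives a high map correlation across the two “environments,” which is the signature the study attributes to schema-like cells.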

Abstract

“Concept cells in the human hippocampus encode the meaning conveyed by stimuli over their perceptual aspects. Here we investigate whether analogous cells in the macaque can form conceptual schemas of spatial environments. Each day, monkeys were presented with a familiar and a novel virtual maze, sharing a common schema but differing by surface features (landmarks). In both environments, animals searched for a hidden reward goal only defined in relation to landmarks. With learning, many neurons developed a firing map integrating goal-centered and task-related information of the novel maze that matched that for the familiar maze. Thus, these hippocampal cells abstract the spatial concepts from the superficial details of the environment and encode space into a schema-like representation.” (http://science.sciencemag.org/content/363/6427/635)

INTRODUCTION

“The human hippocampus is home to concept cells that represent the meaning of a stimulus—a person or an object—rather than its immediate sensory properties (1). This invariance involves an abstraction from the percept to extract only relevant features and attribute an explicit meaning to them (2, 3). Whereas concept cells are emblematic of the human hippocampus, place cells, which fire when the animal is in a particular place, are typical of rodent hippocampus (4). Place and concept cells share properties, such as stimulus selectivity. Concept cells are specific to one person or object, and place cells are selective to one position within an environment. Furthermore, place cells identified in one environment are silent in a different environment (5). Exceptions to this likely stem from resemblance or common elements across spaces (6–10). However, in humans and rodents, it is unknown whether hippocampal cells can represent a spatial abstraction. We tested this possibility in monkeys, in which hippocampal neurons develop high-level spatial representations (11). We hypothesized that spatial abstraction involves elementary schemas (12, 13), extracting commonalities across experiences beyond superficial details to signify interrelations among elements (14, 15). We accordingly trained macaques to explore a virtual maze with a joystick in search of an invisible reward whose location had to be triangulated with respect to visible landmarks (Fig. 1, A and B, and fig. S1). After monkeys were proficient in this familiar maze (more than 90% correct, fig. S2), they were tested in an isomorphic novel maze bearing never-before-seen landmarks (Fig. 1B), presented for each session after or before the familiar maze. Thereupon, animals rapidly displayed flexible spatial inference and rapidly reached good performance [figs. S2 and S3 and (11)], indicating that they had constructed a schema of the task (fig. S3) (16) rather than a series of stimulus–response associations (learning set). We tested whether this process results in environment-specific memories or in a single schema for both spaces.”

 



[i] Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to Grow a Mind: Statistics, Structure, and Abstraction. Science, 331(6022), 1279–1285.

[ii] REVIEW: Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to Grow a Mind: Statistics, Structure, and Abstraction. Science, 331(6022), 1279–1285. DOI: 10.1126/science.1192788

 

ABSTRACT

In coming to understand the world—in learning concepts, acquiring language, and grasping causal relations—our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?
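The abstract’s claim that probabilistic inference over structured representations can explain both how abstract knowledge guides learning from sparse data and how that knowledge is itself acquired can be made concrete with a toy hierarchical Bayesian model. The sketch below is illustrative only; the marbles-and-bags setup, grid approximation, and flat hyperprior are assumptions for this example, not the review’s model or code. After seeing several bags that are each nearly uniform in colour, the learner infers the abstract regularity that bags tend to be uniform, and so predicts confidently from a single draw out of a new bag.

import numpy as np
from scipy.special import betaln, comb

# Observed bags: (black_draws, total_draws). Every bag so far was all one colour.
bags = [(10, 10), (0, 10), (10, 10), (0, 10), (10, 10)]

# Grid over hyperparameters: mu = overall colour bias across bags,
# alpha = within-bag uniformity (small alpha means bags tend to be one colour).
mus = np.linspace(0.05, 0.95, 19)
alphas = np.logspace(-1, 2, 31)

def bag_loglik(k, n, a, b):
    """Beta-Binomial log-likelihood of k black draws out of n."""
    return np.log(comb(n, k)) + betaln(k + a, n - k + b) - betaln(a, b)

log_post = np.zeros((len(alphas), len(mus)))     # flat prior over the grid
for i, alpha in enumerate(alphas):
    for j, mu in enumerate(mus):
        a, b = alpha * mu, alpha * (1 - mu)
        log_post[i, j] += sum(bag_loglik(k, n, a, b) for k, n in bags)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# New bag: one draw, and it is black. Predict the colour of the next draw by
# averaging the posterior-predictive over the inferred hyperparameters.
k_new, n_new = 1, 1
pred = 0.0
for i, alpha in enumerate(alphas):
    for j, mu in enumerate(mus):
        a, b = alpha * mu, alpha * (1 - mu)
        pred += post[i, j] * (k_new + a) / (n_new + a + b)
print(f"P(next marble from the new bag is black) = {pred:.2f}")  # well above 0.5

The design point is the hierarchy: the hyperparameters (the “overhypothesis” about bag uniformity) are learned from many bags, and it is that learned abstraction, not the single new observation alone, that licenses the strong generalization.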

The Challenge: How Does the Mind Get So Much from So Little?

For scientists studying how humans come to understand their world, the central challenge is this: How do our minds get so much from so little? We build rich causal models, make strong generalizations, and construct powerful abstractions, whereas the input data are sparse, noisy, and ambiguous—in every way far too limited. A massive mismatch looms between the information coming in through our senses and the outputs of cognition.

Consider the situation of a child learning the meanings of words. Any parent knows, and scientists have confirmed (1, 2), that typical 2-year-olds can learn how to use a new word such as “horse” or “hairbrush” from seeing just a few examples. We know they grasp the meaning, not just the sound, because they generalize: They use the word appropriately (if not always perfectly) in new situations. Viewed as a computation on sensory input data, this is a remarkable feat. Within the infinite landscape of all possible objects, there is an infinite but still highly constrained subset that can be called “horses” and another for “hairbrushes.” How does a child grasp the boundaries of these subsets from seeing just one or a few examples of each? Adults face the challenge of learning entirely novel object concepts less often, but they can be just as good at it (Fig. 1).
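This word-learning feat is often modeled as Bayesian inference over a hypothesis space of candidate word meanings, with a “size principle” under which smaller hypotheses gain support as more consistent examples accumulate. The sketch below is a minimal illustration under invented data; the tiny taxonomy, object names, and uniform prior are assumptions for this example, not the actual model or stimuli behind the studies the review cites.

# Bayesian word learning with the "size principle" (illustrative toy example).
hypotheses = {
    "dalmatians": {"dalmatian1", "dalmatian2", "dalmatian3"},
    "dogs":       {"dalmatian1", "dalmatian2", "dalmatian3",
                   "poodle", "terrier", "lab"},
    "animals":    {"dalmatian1", "dalmatian2", "dalmatian3", "poodle", "terrier",
                   "lab", "cat", "horse", "pig", "cow", "goat", "sheep"},
}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}      # uniform prior

def posterior(examples):
    """p(h | examples): the size principle gives p(x | h) = 1/|h| when x is in h."""
    scores = {}
    for h, ext in hypotheses.items():
        if all(x in ext for x in examples):
            scores[h] = prior[h] * (1.0 / len(ext)) ** len(examples)
        else:
            scores[h] = 0.0
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

def prob_generalize(examples, new_object):
    """Probability that the new object is also covered by the word being learned."""
    return sum(p for h, p in posterior(examples).items()
               if new_object in hypotheses[h])

# One labeled dalmatian leaves the meaning broad and uncertain; three labeled
# dalmatians make the narrow meaning far more likely, so generalization to a
# poodle drops sharply.
print(prob_generalize(["dalmatian1"], "poodle"))                              # ~0.43
print(prob_generalize(["dalmatian1", "dalmatian2", "dalmatian3"], "poodle"))  # ~0.12

The same machinery captures the child’s behavior described above: a handful of examples is enough to shrink an infinite-seeming space of candidate meanings to a tightly constrained one, because each additional consistent example multiplies the advantage of the smaller hypothesis.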