Study identifies our "inner pickpocket"

Researchers at the University of Cambridge, Central European University and Columbia University have discovered that one of the reasons successful pickpockets are so effective is that they can identify objects they have never seen before simply by touching them. Similarly, we can anticipate what an object in a shop window will feel like simply by looking at it.

In both scenarios, we rely on the brain's ability to break the continuous flow of information coming from our senses into distinct chunks. The pickpocket's sense of touch interprets the sequence of small indentations on the fingertips as a series of well-defined objects in a pocket or bag, while the shopper's visual system interprets incoming photons as light reflecting off the objects in the window.

Our ability to pick out distinct objects from a cluttered scene by touch or sight, and to accurately predict how they will feel based on how they look (or how they will look based on how they feel), is critical to the way we interact with the world.

By performing intelligent statistical analyses of previous experiences, the brain can immediately identify objects without the need for well-defined boundaries or other specialized cues, and can predict unknown properties of new objects. The results are reported in the open-access journal eLife.

"We are observing how the brain acquires the continuous flow of information it receives and segments it into objects," said Professor Máté Lengyel of the Cambridge Engineering Department, who co-directed the research. "The common view is that the brain receives specialized cues: like edges or occlusions, about where a thing ends and something else starts, but we have discovered that the brain is a really intelligent statistical machine: it looks for patterns and finds construction building blocks for objects. "

Lengyel and his colleagues designed scenes made up of abstract shapes with no visible boundaries between them, and asked participants either to observe the shapes on a screen or to "separate" them along a tear line that ran either through or between the objects.

Participants were then tested on how well they could predict both the visual properties of these puzzle pieces (how familiar genuine pieces looked compared with chimeric pieces assembled from parts of two different pieces) and their tactile properties (how hard it would be to physically pull the scenes apart again in different directions).

The researchers found that participants were able to form the correct mental model of the puzzle pieces from visual or tactile (haptic) experience alone, and could immediately predict tactile properties from visual ones and vice versa.

"These results challenge classic visions of how we extract and learn objects in our environment," said Lengyel. "Instead, we showed that the general-purpose statistical calculations known to work even in younger children are powerful enough to reach such cognitive enterprises. In particular, the participants in our study were not selected to be professional pickpockets – so these results suggest also that c & # 39; a secret, a statistically expert nonsense in all of us ".

The research was funded in part by the Wellcome Trust and the European Research Council.

Reference:

Gábor Lengyel et al. "Unimodal statistical learning produces multimodal object-like representations." eLife (2019). DOI: 10.7554/eLife.43942
