Tweaking AI software to function like a human brain improves computers' learning ability


Computer-based artificial intelligence can function more like human intelligence when programmed to use a much faster technique for learning new objects, say two neuroscientists who designed a model to mirror human visual learning.

In the journal Frontiers in Computational Neuroscience, Maximilian Riesenhuber, PhD, professor of neuroscience at Georgetown University Medical Center, and Joshua Rule, PhD, a postdoctoral scholar at UC Berkeley, explain how the new approach vastly improves the ability of AI software to quickly learn new visual concepts.

"Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples," says Riesenhuber. "We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing."

Humans can quickly and accurately learn new visual concepts from sparse data, sometimes just a single example. Even three- to four-month-old babies can easily learn to recognize zebras and distinguish them from cats, horses, and giraffes. But computers typically need to "see" many examples of the same object to know what it is, Riesenhuber explains.

The key change was designing software to identify relationships between entire visual categories, rather than taking the more standard approach of identifying an object using only low-level and intermediate information such as shape and color, Riesenhuber says.

«The computational power of the brain’s hierarchy lies in the potential to simplify learning by leveraging previously learned representations from a databank, as it were, full of concepts about objects,» he says.
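The idea of reusing previously learned high-level representations to recognize a new category from one example can be illustrated with a common technique from the few-shot learning literature: nearest-prototype classification in a pretrained feature space. The sketch below is not the authors' actual model; it is a minimal toy, assuming a simulated "pretrained encoder" (random category vectors plus noise) standing in for the brain's, or a deep network's, learned feature hierarchy.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Hidden "true" directions for each category in a shared high-level
# feature space (a stand-in for representations built by prior learning).
true_vecs = {name: rng.normal(size=dim)
             for name in ["cat", "horse", "giraffe", "zebra"]}

def embed(name, noise=0.3):
    """Simulated pretrained encoder: a noisy, normalized sample of a
    category's high-level representation."""
    v = true_vecs[name] + rng.normal(scale=noise, size=dim)
    return v / np.linalg.norm(v)

# Familiar categories get prototypes averaged over many examples
# (the "databank" of prior concepts).
prototypes = {n: np.mean([embed(n) for _ in range(50)], axis=0)
              for n in ["cat", "horse", "giraffe"]}

# One-shot learning: a single zebra example becomes the zebra prototype.
prototypes["zebra"] = embed("zebra")

def classify(x):
    """Assign x to the category whose prototype is most similar (dot product)."""
    return max(prototypes, key=lambda n: float(np.dot(x, prototypes[n])))

# Because the new concept is defined relative to a rich feature space,
# fresh zebra images are recognized from that single stored example.
hits = sum(classify(embed("zebra")) == "zebra" for _ in range(20))
print(f"{hits}/20 new zebras recognized from one example")
```

The design point matches the quote above: learning the new concept costs only one stored vector, because the heavy lifting was done by the previously learned representation space.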


Story Source: Materials provided by Georgetown University Medical Center. Note: Content may be edited for style and length.
