Self-taught AI may have a lot in common with the human brain

For the past ten years, many of the most impressive AI systems have been taught using massive amounts of labeled data. An image might be labeled “tabby cat” or “tiger cat,” for example, to “train” an artificial neural network to correctly distinguish a tabby from a tiger. The strategy has been both astonishingly successful and woefully inadequate.

This “supervised” training requires human labor to label the data, and neural networks often take shortcuts, learning to associate labels with minimal and sometimes superficial information. For example, a neural network might use the presence of grass to identify photos of cows, since cows are often photographed in fields.

“We’re training a new generation of algorithms that are just like undergraduates [who] didn’t show up for class the whole semester and then, the night before the final exam, crammed by rote memorization,” said Alexei Efros, a computer scientist at the University of California, Berkeley. “They didn’t really study the material, but they did well on the exam.”

Furthermore, for researchers interested in the intersection of animal and machine intelligence, this “supervised learning” may be limited in what it can reveal about biological brains. Animals, including humans, do not learn from labeled data sets. For the most part, they explore the environment on their own, and in doing so they gain a rich and robust understanding of the world.

Now, some computational neuroscientists have begun to explore neural networks trained with little or no human-labeled data. These “self-supervised learning” algorithms have proved enormously successful at modeling human language and, more recently, at image recognition. In recent work, computational models of the mammalian visual and auditory systems built with self-supervised learning have matched brain function more closely than their supervised counterparts. To some neuroscientists, the artificial networks seem to be starting to reveal some of the actual methods our brains use to learn.

Flawed supervision

Brain models inspired by artificial neural networks came of age about 10 years ago, around the same time that a neural network named AlexNet revolutionized the task of classifying unknown images. Like all neural networks, AlexNet consists of layers of artificial neurons, computational units whose connections to one another can vary in strength, or “weight.” If the network fails to classify an image correctly, the learning algorithm updates the weights of the connections between neurons to make that misclassification less likely in the next round of training. The algorithm repeats this process many times with all the training images, tweaking the weights until the network’s error rate is acceptably low.
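As a rough illustration of that loop, here is a minimal sketch of supervised training in PyTorch. It is not AlexNet; the tiny network, the random “images” and labels, and the hyperparameters are placeholders chosen only to show how the weights get nudged to reduce errors on human-provided labels.

```python
# Minimal sketch of supervised image classification (not AlexNet).
# The tiny network and random "images"/labels are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Fake labeled dataset: 256 small 3x32x32 "images", each with one of 10 labels.
images = torch.randn(256, 3, 32, 32)
labels = torch.randint(0, 10, (256,))

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),  # one output score per class
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for start in range(0, len(images), 32):      # mini-batches of 32
        x = images[start:start + 32]
        y = labels[start:start + 32]
        logits = model(x)
        loss = loss_fn(logits, y)                # error against the human-made labels
        optimizer.zero_grad()
        loss.backward()                          # compute how each weight should change
        optimizer.step()                         # nudge weights to reduce the error
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```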

According to Alexei Efros, a computer scientist at the University of California, Berkeley, most modern AI systems rely too much on human-created labels. “They didn’t really study the material,” he said. (Photo courtesy of Alexei Efros)

Around the same time, neuroscientists developed the first computational models of the primate visual system using neural networks like AlexNet and its successors. The union looked promising: When monkeys and artificial neural networks were shown the same images, for example, the activity of the real neurons and the artificial ones showed an intriguing correspondence. Artificial models of hearing and odor detection followed.

But as the field grew, researchers recognized the limitations of supervised training. For instance, in 2017 Leon Gatys, a computer scientist then at the University of Tübingen in Germany, and his colleagues took a photo of a Ford Model T and overlaid a leopard-skin pattern on it, producing a bizarre but easily recognizable image. A leading artificial neural network correctly classified the original image as a Model T but treated the modified image as a leopard. It had fixated on texture and had no understanding of the shape of a car (or a leopard, for that matter).

Self-supervised learning strategies are designed to avoid such problems. In this approach, humans don’t label the data. Rather, “the labels come from the data itself,” said Friedemann Zenke, a computational neuroscientist at the Friedrich Miescher Institute for Biomedical Research in Basel, Switzerland. Self-supervised algorithms essentially create gaps in the data and ask the neural network to fill them in. In a so-called large language model, for instance, the training algorithm shows the neural network the first few words of a sentence and asks it to predict the next word. When trained on a massive corpus of text gathered from the internet, the model appears to learn the syntactic structure of the language, demonstrating impressive linguistic ability, all without external labels or supervision.
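A minimal sketch of that next-word-prediction idea follows; it is not any particular large language model, and the toy corpus, vocabulary, and recurrent network are stand-ins chosen for brevity. The key point is that the “label” for each position is simply the next token in the text itself.

```python
# Minimal sketch of self-supervised next-word prediction.
# The toy corpus, vocabulary, and small model are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
tokens = torch.tensor([idx[w] for w in corpus])

# Inputs are every token but the last; targets are the same sequence shifted by one.
# No human labels are needed: the data supplies its own supervision.
inputs, targets = tokens[:-1], tokens[1:]

class NextWordModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h)  # a score for every vocabulary word at each position

model = NextWordModel(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(inputs.unsqueeze(0))          # shape (1, seq_len, vocab)
    loss = loss_fn(logits.squeeze(0), targets)   # predict the next token in the text
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, ask the model what word usually follows "the".
with torch.no_grad():
    logits = model(torch.tensor([[idx["the"]]]))
    print(vocab[logits[0, -1].argmax()])
```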
