Google has a plan to stop its new AI from being dirty and rude

When Silicon Valley CEOs announce their company’s next big thing, the focus is usually on the positive. In 2007, Apple’s Steve Jobs praised the first iPhone’s “revolutionary user interface” and “breakthrough software.” Google CEO Sundar Pichai took a different tack at his company’s annual developer conference on Wednesday, when he announced a beta test of Google’s “most advanced conversational artificial intelligence to date.”

Pichai said the chatbot, called LaMDA 2, can converse on any topic and performed well in tests with Google employees. He announced an upcoming app called AI Test Kitchen that will make the bot available for outsiders to try. But Pichai added a stern warning. “While we have improved safety, the model may still produce inaccurate, inappropriate or offensive responses,” he said.

Pichai’s wavering pitch illustrates the mix of excitement, confusion, and worry surrounding a series of recent breakthroughs in the capabilities of machine-learning software that processes language.

The technology has already improved autocomplete and web search. It has also created new categories of productivity apps that help workers by generating fluent text or programming code. And when Pichai first disclosed the LaMDA project last year, he said it could eventually be put to work in Google’s search engine, virtual assistant, and workplace apps. Despite those dizzying promises, however, it is unclear how to reliably control these new AI wordsmiths.

Google’s LaMDA, or Language Model for Dialogue Applications, is an example of what machine-learning researchers call a large language model. The term describes software that builds up a statistical sense of the patterns of language by processing huge amounts of text, usually scraped from online sources. LaMDA, for example, was initially trained on more than a trillion words drawn from online forums, question-and-answer sites, Wikipedia, and other web pages. That vast trove of data helps the algorithm perform tasks such as generating text in different styles, interpreting new text, or functioning as a chatbot. If these systems work as promised, they will be nothing like today’s frustrating chatbots. Currently, Google Assistant and Amazon’s Alexa can perform only certain pre-programmed tasks, and they deflect when presented with something they don’t understand. What Google is now proposing is a computer you can actually talk to.
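LaMDA itself is not publicly available, but the basic idea of a model that has built a statistical sense of language from web text can be illustrated with any openly available large language model. Below is a minimal sketch using the open source Hugging Face Transformers library and the small GPT-2 model as a stand-in; the model and the prompt are illustrative choices, not anything Google has described using.

```python
# Minimal sketch: next-word prediction with an openly available
# language model (GPT-2 here as a stand-in; LaMDA is not public).
from transformers import pipeline

# A text-generation pipeline wraps tokenization, the model's
# next-token probabilities, and sampling into a single call.
generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt by repeatedly choosing statistically
# plausible next tokens, patterns it absorbed from web text.
result = generator(
    "The weather in Paris today is",
    max_new_tokens=20,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

The same next-word machinery, scaled up and tuned on dialogue, is what lets a system like LaMDA hold an open-ended conversation rather than follow a script.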

Chat transcripts released by Google show that LaMDA can be, at least some of the time, informative, thought-provoking, and even funny. Testing the chatbot prompted Google vice president and AI researcher Blaise Agüera y Arcas to write in a personal essay last December that the technology could provide new insights into the nature of language and intelligence. “The idea of having a ‘who’ instead of an ‘it’ on the other side of the screen can be hard to shake,” he wrote.

When Pichai announced the first version of LaMDA last year, and again on Wednesday, he made clear that he sees it as potentially offering a path to voice interfaces far broader than the often frustratingly limited capabilities of services like Alexa, Google Assistant, and Apple’s Siri. Now Google’s leaders seem convinced they may have finally found a way to make computers you can genuinely talk with.

Meanwhile, large language models have proven fluent in filth, nastiness, and plain old racism. Scraping billions of words of text from the web inevitably sweeps in a lot of unsavory content. OpenAI, the company behind the language generator GPT-3, has reported that its creation can perpetuate stereotypes about gender and race, and it requires clients to implement filters to screen out objectionable content.
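The article does not describe what those filters look like in practice. As a purely hypothetical illustration of the idea, client-side screening can be as simple as scoring each generated response and withholding anything above a threshold. Everything in the sketch below, including the toy scorer, the 0.5 threshold, and the placeholder message, is an assumption for demonstration, not OpenAI’s actual mechanism.

```python
# Purely hypothetical sketch of client-side output filtering:
# generated text is screened before it ever reaches the user.

BLOCKED_MESSAGE = "[response withheld by content filter]"

def toxicity_score(text: str) -> float:
    """Stand-in scorer: a real deployment would call a trained
    toxicity classifier here. This toy version just counts a few
    obviously hostile words so the sketch runs end to end."""
    hostile = {"hate", "stupid", "ugly"}
    words = text.lower().split()
    return sum(w.strip(".,!?") in hostile for w in words) / max(len(words), 1)

def screen(generated: str, threshold: float = 0.5) -> str:
    """Return the model's text only if it scores below the threshold."""
    if toxicity_score(generated) >= threshold:
        return BLOCKED_MESSAGE
    return generated

print(screen("What a lovely day for a walk."))   # passes through
print(screen("I hate you, stupid ugly bot!"))    # withheld
```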
