Hitting the Books: Why we need to treat tomorrow’s robots like tools

Don’t be swayed by the sweet dial tones of tomorrow’s AI or the siren song of the singularity. No matter how closely AI and robots come to look and act like humans, they will never actually be human, argue Paul Leonardi, Duca Family Professor of Technology Management at UC Santa Barbara, and Tsedal Neeley, Naylor Fitzhugh Professor of Business Administration at Harvard Business School, in their new book The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI, and therefore they should not be treated like humans. In the excerpt below, the pair argue that treating machines like people hinders our interactions with advanced technology and gets in the way of its further development.

Harvard Business Review Press

Excerpted from The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI by Paul Leonardi and Tsedal Neeley. Reprinted with permission from Harvard Business Review Press. Copyright 2022 Harvard Business School Publishing Corporation. All rights reserved.


Treat AI like a machine, even if it looks like a human

We are used to interacting with computers visually: buttons, drop-down lists, sliders, and other features let us give commands to the machine. Advances in artificial intelligence, however, are shifting our interactions with digital tools toward more natural, human-like exchanges. So-called conversational user interfaces (UIs) let people use digital tools by writing or talking, much the way we interact with other people, as in Burt Swanson’s “conversation” with the assistant Amy. When you say “Hey Siri,” “Hello Alexa,” or “OK Google,” you’re using a conversational UI. The growth of tools controlled by conversational UIs is staggering. Every time you call an 800 number and are asked to spell your name, answer “yes,” or say the last four digits of your Social Security number, you are interacting with an AI through a conversational UI. Conversational bots have become ubiquitous, partly because they make good business sense and partly because they let us access services more efficiently and conveniently.

For example, if you’ve booked a train trip through Amtrak, you may have already interacted with an AI chatbot. Its name is Julie, and it answers more than 5 million questions a year from more than 30 million passengers. You can book rail travel with Julie just by saying where you’re going and when. Julie can pre-fill forms on Amtrak’s scheduling tool and provide guidance through the rest of the booking process. Amtrak’s investment in Julie has yielded an 800 percent return: the company saves over $1 million in customer service costs each year by using Julie to handle low-level, predictable questions. Bookings have increased by 25 percent, and bookings made through Julie generate 30 percent more revenue than bookings made through the website, because Julie is good at upselling customers!

One reason for Julie’s success is that Amtrak makes it clear to users that Julie is an AI agent, and it tells you up front why it decided to use AI rather than connect you directly with a human. That means people orient to Julie as a machine, not mistakenly as a person. They don’t expect too much from it, and they tend to ask questions in ways that elicit helpful answers. Amtrak’s decision may sound counterintuitive, since many companies try to pass their chatbots off as real people, and it would seem that interacting with a machine as though it were a human should be the way to get the best results. A digital mindset requires a shift in how we think about our relationship to machines. Even as they become more human-like, we need to think of them as machines, ones that require explicit instructions and are focused on narrow tasks.

x.ai, the company that makes Amy, lets you schedule a meeting at work or invite a friend to your kid’s basketball game simply by emailing Amy (or her counterpart Andrew) with your request, as if they were a live personal assistant. Yet the company’s CEO, Dennis Mortensen, observed that more than 90 percent of the inquiries the company’s help desk receives come from people who are trying to use natural language with the bots and struggling to get good results.

Perhaps that’s why scheduling a simple meeting with a new acquaintance became so annoying for Professor Swanson, who kept trying to use colloquialisms and the conventions of informal conversation. Beyond the way he talked, he made a lot of assumptions about his interaction with Amy that would have been perfectly reasonable with a human assistant. He assumed Amy could understand his scheduling constraints and that “she” could discern his preferences from the context of the conversation. Swanson was informal and casual; the bot doesn’t get that. It doesn’t understand that when you’re asking for someone’s time, especially if they’re doing you a favor, it doesn’t work to change the meeting logistics frequently or abruptly. It turns out that interacting casually with an intelligent robot is much harder than we think.

Researchers have tested the idea that treating a machine like a machine works better than trying to treat it like a human. Stanford professor Clifford Nass and Harvard Business School professor Youngme Moon conducted a series of studies in which people interacted with anthropomorphic computer interfaces. (Anthropomorphism, the assignment of human attributes to inanimate objects, is a major issue in AI research.) They found that individuals tend to overuse human social categories, applying gender stereotypes to computers and identifying racially with computer agents. Their findings also showed that people exhibit over-learned social behaviors, such as politeness and reciprocity, toward computers. Importantly, people tend to engage in these behaviors, treating robots and other intelligent agents as though they were people, even when they know they are interacting with computers rather than humans. It seems that our collective impulse to relate to people often creeps into our interactions with machines.

The problem of mistaking computers for humans is compounded when we interact with artificial agents through conversational UIs. Take, for example, a study we conducted with two companies that used AI assistants to provide answers to everyday business queries. One used an anthropomorphized AI that was human-like. The other did not.

Employees at the company that used the anthropomorphic agent routinely got angry with it when it failed to return useful answers. They often said things like “He sucks!” or “I’d expect him to do better” when referring to the results the machine gave them. Most importantly, their strategies for improving their relationship with the machine mirrored the strategies they would use with other people in the office. They would ask their questions more politely, rephrase them in different words, or try to time their questions strategically for when they thought the agent would be, in one person’s words, “less busy.” None of these strategies was particularly successful.

In contrast, employees at the other company reported much greater satisfaction with their experience. They typed in search terms as though they were addressing a computer and spelled out their requests in detail to make sure that an AI that could not “read between the lines” or pick up on nuance would heed their preferences. The second group often remarked on how surprised they were when their queries returned useful or even unexpected information, and they chalked up any problems to the ordinary errors you expect from a computer.

For the foreseeable future, the data are clear: treating technology as technology, no matter how human-like or intelligent it appears, is the key to successfully interacting with machines. A big part of the problem is that these tools set the expectation that they will respond in human-like ways, and they lead us to assume that they can infer our intentions, which they cannot do. Interacting successfully with a conversational UI requires a digital mindset that understands we are still some way from effective human-like interaction with technology. Recognizing that an AI agent cannot accurately infer your intentions means it’s important to spell out each step of the process and to be clear about what you want to accomplish.

