The future of digital assistants is queer

At its simplest, queering the smart wife could mean giving digital assistants a range of personalities that more accurately represent the many femininities that exist around the world, rather than the pleasing, subservient persona that many companies choose to adopt.

Strengers added that Q would be a classic example of a queer device, “but it’s not the only solution.” Another option is to introduce masculinity in a different way. One example is Pepper, a humanoid robot developed by SoftBank Robotics that is usually referred to with he/him pronouns and can recognize faces and basic human emotions. Another is Jibo, a robot launched in 2017 that also uses male pronouns and was marketed as a social robot for the home, though it has since found a second life as a device focused on healthcare and education. Given the gentle masculinity of Pepper and Jibo, the former answering questions politely and often coming across as playful, the latter frequently whimsical and approaching users with an endearing demeanor, Strengers and Kennedy see them as a positive step in the right direction.

Queering digital assistants could also mean creating bot-specific personalities to replace humanized ones. When asked about its gender, Eno, the Capital One banking chatbot launched in 2019, playfully replies: “I’m binary. I don’t mean I’m both, I mean I’m actually just ones and zeroes. Think of me as a bot.”

Similarly, Kai, an online banking chatbot developed by Kasisto, abandons human characteristics entirely. Jacqueline Feldman, the Massachusetts-based writer and user experience designer who created Kai, explained that the bot was “designed to be genderless.” Rather than assuming a nonbinary identity, as Q does, it assumes a bot-specific identity and uses the pronoun “it.” “From my perspective as a designer, a bot can be beautifully designed and charming in new ways that are specific to bots, without pretending to be human,” she said.

When asked whether it is a real person, Kai replies: “A bot is a bot is a bot. Next question, please,” making clear to users that it is not human and is not pretending to be. If asked about its gender, it answers: “As a bot, I’m not a human. But I learn. That’s machine learning.”

Being a bot does not mean Kai accepts abuse. A few years ago, Feldman discussed how Kai was deliberately designed with the ability to deflect and shut down harassment. For example, if a user repeatedly harasses the bot, Kai responds with something like “I’m envisioning white sand and a hammock, please try me later!” “I really tried to give the bot some dignity,” Feldman told the Australian Broadcasting Corporation in 2017.
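The deflect-and-disengage behavior described here can be sketched as a simple escalation policy. This is a minimal illustrative sketch, not Kasisto’s actual implementation: the keyword list, strike threshold, and reply strings are all hypothetical assumptions.

```python
# Hypothetical sketch of a harassment-deflection policy like the one
# described for Kai. The lexicon, threshold, and replies are invented
# for illustration; Kasisto's real system is not public.

ABUSIVE_TERMS = {"stupid", "idiot", "shut up"}  # placeholder lexicon

DEFLECTIONS = [
    "A bot is a bot is a bot. Next question, please.",
    "I'm envisioning white sand and a hammock, please try me later!",
]


class DeflectionPolicy:
    def __init__(self, threshold: int = 2):
        self.threshold = threshold  # strikes before the bot disengages
        self.strikes = 0

    def respond(self, message: str):
        """Return a deflection string if the message is abusive, else None."""
        if any(term in message.lower() for term in ABUSIVE_TERMS):
            self.strikes += 1
            # Escalate: mild deflection first, firmer disengagement on repeats.
            idx = min(self.strikes, len(DEFLECTIONS)) - 1
            return DEFLECTIONS[idx]
        self.strikes = 0  # reset the counter on civil messages
        return None
```

The design choice mirrors what Feldman describes: the bot never mirrors the abuse back, it simply redirects, and repeated harassment earns a firmer brush-off rather than continued engagement.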

Nonetheless, Feldman believes there is a moral imperative for bots to identify themselves as bots. “There’s a lack of transparency when companies design [bots], and it’s easy for a person interacting with one to forget that it’s a bot,” she said, and gendering bots or giving them a human voice makes that much harder. Since many consumer experiences with chatbots can be frustrating, Feldman also believes that endowing bots with human qualities may be a case of “over-design.”
