In 2019, border guards on the borders of Greece, Hungary, and Latvia began testing an artificial intelligence-powered lie detector. The system, called iBorderCtrl, analyzes facial movements to try to spot signs that a person is lying to border agents. The trial was backed by nearly $5 million in EU research funding and drew on nearly 20 years of research at Manchester Metropolitan University in the UK.
The trial sparked controversy. Polygraphs and other technologies that attempt to infer deception from physical attributes have been widely declared unreliable by psychologists. Before long, errors were reported from iBorderCtrl, too. According to media reports, its lie-prediction algorithm didn’t work, and the project’s own website acknowledged that the technology “may pose risks to fundamental human rights.”
This month, Silent Talker, a spin-off from Manchester Metropolitan University that made the technology underlying iBorderCtrl, was dissolved. But that’s not the end of the story. Lawyers, activists, and lawmakers are pushing for an EU law to regulate artificial intelligence that would ban systems claiming to detect human deception in migration settings, citing iBorderCtrl as an example of what can go wrong. Former Silent Talker executives could not be reached for comment.
Whether to ban AI lie detectors at the border is among the questions EU member-state officials and members of the European Parliament are weighing as they consider the Artificial Intelligence Act. The legislation aims to protect EU citizens’ fundamental rights, such as the right to live free from discrimination and the right to seek asylum. It labels some use cases of AI “high risk,” others “low risk,” and bans some outright. Those lobbying for changes to the AI Act include human rights groups, trade unions, and companies like Google and Microsoft, which want the act to distinguish between those who build general-purpose AI systems and those who deploy them for specific uses.
Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for the act to ban the use of AI polygraphs at the border that measure things like eye movements, tone of voice, or facial expressions. Statewatch, a civil liberties nonprofit, released an analysis warning that the AI Act as written would allow the use of systems like iBorderCtrl, adding to Europe’s existing “publicly funded border AI ecosystem.” The analysis calculated that over the past two decades, roughly half of the 341 million euros ($356 million) in EU funding for the use of artificial intelligence at the border, such as profiling migrants, went to private companies.
Petra Molnar, associate director of the nonprofit Refugee Law Lab, said the use of AI polygraphs at the border effectively creates new immigration policy through technology, treating everyone as suspect. “You have to prove you’re a refugee, and until you do, you’re considered a liar,” she said. “That logic underpins everything. It underpins AI lie detectors, and it underpins more surveillance and pushback at borders.”
Molnar, an immigration lawyer, said people often avoid eye contact with border or immigration officials for innocuous reasons, such as culture, religion, or trauma, but doing so is sometimes misread as a sign that a person is hiding something. Humans often struggle to communicate across cultures or to talk with people who have experienced trauma, she said, so why would anyone believe a machine can do better?