This chatbot aims to steer people away from child abuse material

Using a chatbot is more direct and perhaps more engaging, says Donald Findlater, director of the Stop It Now helpline operated by the Lucy Faithfull Foundation. After the chatbot appeared more than 170,000 times in March, 158 people clicked through to the helpline’s website. While that number is “moderate,” Findlater says, those people have taken a significant step. “They’ve overcome a lot of hurdles to do this,” Findlater says. “Anything that stops people from just starting their journey is a measure of success,” adds the IWF’s Hargreaves. “We know people are using it. We know they’re making referrals, we know they’re accessing services.”

Pornhub has a troubled reputation for the moderation of videos on its website, with reports detailing how women and girls had videos of themselves uploaded without their consent. In December 2020, Pornhub removed more than 10 million videos from its website and began requiring people who upload content to verify their identity. Last year, 9,000 pieces of CSAM were removed from Pornhub.

“The IWF chatbot is another layer of protection that ensures users know they will not find such illegal material on our platform, and refers them to Stop It Now to help change their behavior,” a Pornhub spokesperson says, adding that the company has “zero tolerance” for illegal material and clear policies around CSAM. People involved in the chatbot project say Pornhub is taking part voluntarily and is not being paid to do so, and that the system will run on Pornhub’s UK website for the next year before being evaluated by outside academics.

John Perrino, a policy analyst at the Stanford Internet Observatory who was not involved in the project, says there has been a rise in recent years in new tools built around “safety by design” to combat harms online. “It’s an interesting collaboration, in terms of policy and public perception, to help and point users toward healthy resources and healthy habits,” Perrino says. He adds that he has not seen a similar tool developed for a pornography website before.

There is already some evidence that this kind of technical intervention can divert people away from potential child sexual abuse material and reduce the number of searches for CSAM online. For example, as far back as 2013, Google partnered with the Lucy Faithfull Foundation to introduce warning messages when people search for terms that could be linked to CSAM. In 2018, Google said searches for child sexual abuse material were “13 times lower” as a result of the warnings.

An independent 2015 study found that search engines that took steps to block terms linked to child sexual abuse saw the number of searches drop dramatically, compared with those that didn’t. A set of advertisements designed to direct people searching for CSAM to a helpline in Germany saw 240,000 website clicks and more than 20 million impressions over a three-year period. And a 2021 study of warning pop-up messages on gambling websites found the nudges had a “limited” impact.

Those involved with the chatbot stress that they don’t see it as the only way to stop people from finding child sexual abuse material online. “The solution is not a panacea to stop the demand for child sexual abuse on the internet. It is deployed in a particular setting,” Sexton says. However, if the system proves successful, he adds, it could be rolled out to other websites or online services.

“They’ll also be looking elsewhere, whether it’s on various social media sites, or on various gaming platforms,” Findlater says. However, for that to happen, the triggers that cause the chatbot to pop up would have to be evaluated and the system rebuilt for each specific website it appears on. For instance, the search terms Pornhub uses would not work on a Google search. “We can’t transfer a set of warnings to another context,” Findlater says.
