In 2020, I helped create the Signal for Help, a hand signal that communicates to friends, family, and bystanders that “I need you to check in on me in a safe way.” Our team promoted the Signal for Help across social media, anticipating a pandemic-related rise in already high rates of gendered violence, and it went viral in November 2021 during a charged time of anxiety, stay-at-home directives, and the proliferation of video calling.
Cases of women and girls using the Signal for Help in dangerous situations have made the news. For example, a woman used it during a traffic stop to get help with her abusive husband, and another used it to notify staff at a gas station that she was being held against her will by a violent ex-boyfriend. As a result, well-meaning people have been trying to integrate the Signal for Help with digital technology. A company that makes AI camera tools reached out to ask about building recognition of the Signal for Help into its security system, and similar amateur attempts have been discussed on social media.
The appeal is clear: Automatic detection could be useful for a well-intentioned friend or coworker on the other side of a video call who might miss seeing someone using the Signal for Help. It’s admirable that people want to help those who may be in danger, but these new applications of technology misunderstand the purpose and use of the Signal for Help.
Such efforts are part of a growing trend of using AI to recognize distress: Experiments identifying distress in livestock like chickens, cattle, and pigs yield promising results because AI seems to disentangle a cacophony of animal shrieks, clucks, and grunts better than the naked ear.
But humans are not chickens or cattle. The intention to abuse and control can transform luddites into experts. In dangerous relationships, there’s always the question of who’s in charge of the tech.
The Signal for Help is an intentionally ephemeral tool, designed to help people communicate without uttering a word and without leaving a digital trace. I’m being hurt … I can’t say it out loud … will you be there for me while I figure it out? Impermanence is an important feature, given the way abusers tend to control and manipulate. They lurk and stalk and monitor devices. Women’s shelters routinely help survivors deal with hacked smartphones, unwanted location tracking and voice recording apps, hidden cameras, and the like. Message boards, social media, and even word-of-mouth can help abusers violate the people they claim to love. In the case of the Signal for Help, abusers might use the very AI mechanism designed for safety to alert them that the person they’re hurting is trying to use it.
And there are other problems with AI tools that aim to detect distress in humans, tools that include software to scan student emails and web searches for signs of self-harm and violence, and to identify student confusion, boredom, and distraction in virtual classrooms. On top of ethical and privacy concerns, their deployment hinges on the belief that we can reliably perceive when someone is in trouble and act on it in a way that will truly help them. These tools operate on a positivist belief that when a human is in distress, they express it outwardly in predictable ways, and that when they express it, they desire a specific kind of intervention.
But research shows that the assumption that human facial expressions reliably align with emotions is not one we can wholeheartedly trust. Mismatches between body and emotion may be more pronounced in unhealthy relationships. People being abused speak of dissociation, of needing to “leave their bodies” to survive. Some describe the lengths they go to in order to obscure their offense, injury, and pain, because they must placate abusers and the bystanders who back them up. They talk about how conscious they are of every inflection and twitch, of how they chew, blink, and breathe, and how they get punished for merely existing in a way that irritates their abusers.