Instagram ‘systematically fails’ to protect women from online abuse in DMs, study says
By Ryan General
A recent report by the U.S.-based nonprofit Center for Countering Digital Hate (CCDH) says Instagram “systematically fails” to protect women from online abuse and harassment.
According to CCDH’s “Hidden Hate” report, the Meta-owned image-sharing platform failed to act on 90% of abusive direct messages that women reported through its own tools, and did not resolve reports of image-based sexual abuse within 48 hours.
The study involved an analysis of 8,717 messages received by five high-profile women on Instagram, including South Asian women’s rights advocate and culture magazine Burnt Roti founder Sharan Dhaliwal.
“Social media is a really important way we establish our brand, maintain relationships, and transact commerce,” Imran Ahmed, chief executive of the CCDH, told the Washington Post. “Are we now saying the cost for women doing that is this level of abuse?”
The report found that Instagram failed to act on the majority of misogynistic abuse, including unsolicited nude photos and videos, violent messages and death threats.
Dhaliwal, who has over 10,000 Instagram followers, said she has received hundreds of “d*ck pics” via the platform’s DM system.
“It’s a power play … it’s about them feeling they have power and can walk away from that saying: ‘I did that’,” the writer said.
Some abusers even use Instagram’s video call function to harass female users.
Dhaliwal shared that a user tried to call her after sending her photos of male genitalia. Just days later, another user sent her 42 messages that included sexual content before attempting to video call her as well.
The study further noted that 227 of the 253 accounts that sent abusive messages remained active at least a month after being reported.
Jacey Kan, an advocacy officer for Hong Kong nonprofit Association Concerning Sexual Violence Against Women, told the South China Morning Post the platform has been used as a tool where “targeting, coercion, and non-consensual distribution happen.”
The Hong Kong-based nonprofit has seen cases of image-based abuse across different apps and platforms in Hong Kong rise from 44 cases in 2019 to 200 in 2021.
In response to the spike, the group launched a service called “Ta-DA”, which is focused on assisting victims in asking platforms to remove offensive content.
According to Kan, more than 80% of the 309 links submitted to the group were taken down after its follow-up with the platforms.
Meta’s head of women’s safety, Cindy Southworth, released a statement expressing disagreement with the report’s conclusions.
“We do agree that the harassment of women is unacceptable,” she said. “That’s why we don’t allow gender-based hate or any threat of sexual violence, and last year we announced stronger protections for female public figures.”
Feature Image via solenfeyissa