A new Instagram filter will reportedly help crack down on bullying on the site by flagging words associated with bullying behavior.
On the heels of Facebook publishing its Community Standards, which outline what kinds of language constitute hate speech, its photo-sharing site Instagram shared a blog post unveiling its new method of protecting users from bullying.
Like the offensive content filter that Instagram debuted in 2017, the bullying filter will hide comments containing certain flagged words, a list that has not been released to the public. As Instagram explains, “This new filter hides comments containing attacks on a person’s appearance or character, as well as threats to a person’s well-being or health.”
Instagram’s blog post also vowed to protect the young public figures who use the site against bullying language:
We are also expanding our policies to guard against bullying young public figures on our platform. Protecting our youngest community members is crucial to helping them feel comfortable to express who they are and what they care about.
According to the blog post, the filter will “also alert us to repeated problems so we can take action.”
The New York Times reported that the new feature could even lead to users being banned from the platform entirely if they engage in bullying. The Times also reported that Instagram will rely on DeepText software to “review words for context and meaning, much as the human brain determines how words are used.”
Given the recent controversy surrounding Facebook’s “hate speech” glitch and concerns over the newly published Community Standards, the use of software to reduce bullying on Instagram could raise more concerns about the subjective nature of language.