
Instagram today released two new features aimed at curbing bullying on the platform: a warning when you try to post an abusive comment, and a "Restrict" function to limit another person's interaction with you.
Adam Mosseri, Head of Instagram, said the AI-powered warning feature has already stopped some people from posting abusive comments during the early testing period:
In the last few days, we started rolling out a new feature powered by AI that notifies people when their comment may be considered offensive before it's posted. This intervention gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification.
People often use tricks such as symbols or alternative spellings to fool the AI and post abusive comments. Instagram hasn't shared any details on how it plans to counter that, nor has it specified whether the feature is available in languages other than English. We've asked the company for more information, and we'll update the post accordingly.

The social network is also testing another feature called "Restrict," which lets you limit a person's interaction with you. If you restrict someone, they will still be able to comment on your posts, but their comments will be visible only to them. You can then review a restricted person's comment and choose to make it visible to others.

Instagram said people often don't block, unfollow, or report their bullies because "it could escalate the situation." A restricted person also won't be able to see when you're active on the platform or when you've read their direct messages.
In April, the platform started demoting offensive posts as a measure to curb hate speech. We'll have to wait and see whether these new features are effective, and whether the social network's AI is strong enough to catch disguised abusive comments.