Instagram has announced a new feature that will alert parents when their teenage children repeatedly search for terms related to suicide or self-harm within a short period. The move comes as pressure mounts on governments to adopt regulations similar to Australia’s ban on social media use by those under 16.
Instagram, owned by Meta Platforms Inc., said it will begin notifying parents enrolled in its optional supervision setting when their teens attempt to access content related to suicide or self-harm. The alerts will go live next week in Canada, the United States, Britain, and Australia.
The platform said the alerts extend its efforts to shield teens from potentially harmful content on Instagram. It reiterated its strict prohibition on content that promotes or glorifies suicide or self-harm, noting that it already blocks such searches and directs users to support resources.
Governments worldwide are increasingly focused on protecting children from online harm, especially after the AI chatbot Grok was linked to the creation of non-consensual sexualized images. Britain and Australia have moved to consider or impose restrictions aimed at improving children’s online safety, and Spain, Greece, and Slovenia have said they will explore limits of their own.
In the UK, efforts to keep children off pornography websites have raised privacy concerns for adults and sparked disputes with the US over limits on free speech and the reach of regulation. Instagram’s “teen accounts” for users under 16 require parental approval to change settings, and parents can opt into expanded monitoring with their teenager’s consent. The accounts are designed to block sensitive content, including sexual or violent material.
