Meta Introduces New Instagram Safety Features to Protect Adolescents
In a landmark move, Meta has removed thousands of Instagram accounts that were leaving obscene comments on, or requesting images from, adult-run accounts featuring children under 13.
As part of its ongoing efforts to create a safer digital space for its users, Meta has introduced a fresh set of safety features on Instagram, one of its most widely used apps, aimed at protecting teenagers and children worldwide. These enhancements focus particularly on direct messages (DMs), where many inappropriate or risky interactions begin.
With online safety a growing concern for both parents and young users, Instagram’s latest update is designed to offer more transparency, protection, and control to its younger audience and their guardians.
New Features’ Rollout
1. One of the standout features of this update is a contextual tool that adds transparency before a conversation even starts. When a teenager tries to send a direct message to someone for the first time, Instagram will display key details about the recipient, including the date the account joined the platform. This helps teens make better-informed decisions by flagging potentially suspicious accounts, especially recently created ones, which are more likely to be fake or impersonating someone else.
2. Another critical upgrade is the introduction of a streamlined “Block and Report” button. This dual-action feature allows teens to both block and report a user in a single step, eliminating the need to perform these actions separately. It’s a quicker, more efficient way to stop unwanted contact immediately and to notify Instagram about potentially harmful or suspicious behavior.
3. In addition to these tools, Instagram has added a further layer of protection by displaying safety reminders to teens before they begin a new chat, even with people they already follow. These reminders serve as gentle prompts encouraging young users to reflect before engaging with someone on the platform. Most importantly, teens are advised not to share any personal information unless they feel completely safe and sure.
However, Meta isn’t restricting these new features to teen accounts alone: it will extend the safety upgrades to adult-run profiles that primarily feature children, such as parenting pages, child-influencer accounts, or accounts managed by guardians of children under 13. These accounts will now be subject to tighter DM restrictions and will automatically have features like “Hidden Words” activated. This setting filters offensive or inappropriate language from both messages and comments, offering an extra layer of moderation for vulnerable accounts.
Raising Awareness
To ensure users notice the changes, Instagram will place a notification banner at the top of affected users’ feeds, informing them that their safety settings have been updated. This aims to make users more engaged with, and aware of, their privacy controls without having to dig through settings manually.
These new features complement Instagram’s existing suite of safety tools, including Nudity Protection, Location Warnings, and Safety Notices, all of which aim to build a more secure and positive environment for young users. Meta’s continued investment in safety reflects its commitment to minimizing online risks for teens and children, while equipping them and their families with the knowledge and tools needed to navigate the digital space.
With these thoughtful, layered changes, Instagram is taking a proactive step toward making its platform not only more user-friendly for young people, but also a space where families can feel more secure about their children’s online presence.