Manila, Philippines – Child rights organization Plan International Pilipinas released an assessment that provides a comprehensive overview of the online platform usage, preferences, behaviors, and vulnerabilities of Filipino children and youth, along with recommendations to strengthen child protection mechanisms across these platforms.
Presented to the public in celebration of Safer Internet Day on February 26, the 2024 assessment gathers insights from 590 respondents, including 494 participants from focus group discussions and 80 from online surveys, all aged 10–24. It also incorporates perspectives from 16 key informant interviews with relevant stakeholders and child protection actors. The assessment covers Cavite, Manila, Baguio, Iloilo, Guimaras, Leyte, and Davao.
Findings show that children and young people primarily use digital platforms for communication (64.3%), education (47.7%), leisure (37.2%), and stress reduction (34.3%). Among mainstream platforms, Facebook (33.2%) and Messenger (23.3%) remain the most widely used, followed by TikTok (23%), Instagram (13.9%), and Twitter/X (6.6%).
For non-mainstream platforms, the anime streaming and video-sharing site BiliBili had the highest usage among respondents at 52.8%, followed by the messaging app Signal (19.7%), the live-streaming platform BigoLive (18.2%), the chat service OmeTV (6.8%), and the messaging app Tox (2.5%). This signals a significant shift, with young users migrating from mainstream platforms toward lesser-known, less-regulated digital spaces that offer more privacy and autonomy, away from the scrutiny of their parents and other adults.
However, this change in digital behavior comes with increased risks, including the overexposure of personal information, as young users share details such as their location, age, or photos without understanding the consequences. Additionally, many bypass age restrictions using “alter” accounts, adult profiles, or borrowed devices, exposing them to inappropriate content, targeted ads, and online threats.
Economic barriers also contribute to this shift, as children and youth from lower-income backgrounds rely on free platforms with weaker safety measures, while their peers access safer, paid options.
In terms of safety, respondents identified TikTok, Facebook, and Messenger as relatively child-friendly due to their accessibility and ease of use. However, they also raised concerns about online risks, including exposure to harmful content, cyberbullying, and potential exploitation.
They flagged certain platforms as non-child-friendly, including online games with violent themes (Valorant, Call of Duty), messaging apps and chat services (Telegram, Discord, OmeTV), and live-streaming services (BigoLive). Apps such as Roblox and Fortnite, among others, were considered unsafe due to explicit content, toxic online behavior among their users, and the risk of online addiction. Additionally, mobile wallet apps and platforms with payment features (GCash, PayPal) were seen as unsuitable for children, with respondents highlighting concerns about scams and other security risks.
The assessment revealed distinct patterns in online usage trends across different age groups and genders, highlighting the need for age-appropriate safeguards. Children aged 10–14 primarily used Roblox, Mobile Legends, Facebook, and YouTube, while those aged 15–17 were more active on social media and messaging apps for communication and leisure. Adolescents aged 18–24 frequently engaged with Facebook, TikTok, and online shopping applications.
Online platforms likewise played a role in social interactions, with some young women aged 18–24 admitting to using them for dating. Alarmingly, the assessment found that children as young as 10–14 have been using dating and networking apps like OmeTV, Tinder, and Litmatch, which are primarily designed for adults.
To address these challenges, the assessment offers targeted recommendations to tech companies, including simplifying terms of service with video tutorials and multilingual options, implementing enforceable age-based content ratings (e.g., “Rated 13+”), and scaling up safety features on their platforms. It also calls for strengthened digital literacy programs targeting children, parents, and educators. For the government, it recommends implementing and enforcing stricter regulations and monitoring mechanisms.
“This assessment goes beyond merely identifying risks—it’s a roadmap that highlights the gaps that leave children vulnerable to harassment, exploitation, and abuse,” said Pebbles Sanchez-Ogang, Executive Director of Plan International Pilipinas. “Today, we stand alongside tech industry leaders, government allies, and our partners in communities. We recognize the crucial role each of you has played—and continue to play—in protecting children online and offline. However, to truly strengthen these efforts, we must ensure children’s voices are meaningfully integrated into the policies and protective measures that shape their digital lives.”
“Safe spaces we know before are not safe anymore. Child protection must start at home,” said Undersecretary Angelo Tapales, Executive Director of the Council for the Welfare of Children. He highlighted that the alarming proliferation of “deepfakes” or digitally manipulated videos makes children more susceptible to online exploitation.
During the event, a Roundtable Discussion on Artificial Intelligence (AI) and Child Protection explored the opportunities and risks posed by emerging technologies for children and young people, as well as measures to mitigate potential harm.
PCol. Jay D. Guillermo, Chief of the Cyber Response Unit of the Philippine National Police Anti-Cybercrime Group (PNP ACG), noted that “discussions on AI-related crimes began as soon as the internet emerged and information started being programmed and automated. The rapid advancements in AI have only made these processes faster and more sophisticated.” He added that law enforcement must continuously adapt as AI-driven offenses become more complex.
Melissa Vergel De Dios, Chief Sustainability Officer of PLDT and Smart, stressed the urgency of protecting children online, calling them “the most vulnerable” among their users. She highlighted proactive efforts to combat online exploitation, including blocking 1.6 million sites and URLs linked to abusive material, with AI-generated content increasingly detected in these efforts.
Atty. Yves Gonzales, Head of Government Affairs & Public Policy at Google Southeast Asia, underscored the company’s strict platform policies, stating, “Google’s AI tools explicitly prohibit the creation or distribution of child sexual abuse and exploitation materials (CSAEM),” with violations reported to the U.S.-based National Center for Missing & Exploited Children (NCMEC). He also highlighted ongoing updates to national legislation, including Republic Act No. 11930, or the Anti-Online Sexual Abuse or Exploitation of Children (OSAEC) and Anti-Child Sexual Abuse or Exploitation Materials (CSAEM) Act, which explicitly bans AI-generated CSAEM.