Addressing technological issues at the intersection of online safety, youth well-being, and policy development

VYS helps you build healthier online communities. We are your partner in meeting your goals around trust & safety, youth safety and well-being, thoughtful content moderation, and frameworks for building safe products.

VYS was founded by Vaishnavi J after more than a decade of working on the most interesting trust & safety challenges around online child safety and privacy, age-appropriate design, policy development and enforcement, and product guidance.

HOW CAN VYS HELP YOU?

  • Support your trust & safety strategy as it relates to youth populations, ranging from community standards to product development

  • Catch key child safety and well-being challenges arising from your product design or rules of engagement early, and help mitigate them

  • Evaluate and inform your brand safety strategy to protect your brand’s reputation and support healthy user experiences

  • Support your product roadmapping priorities and provide product development frameworks grounded in youth-centred design

  • Provide practical guidance on best practices around youth regulation, including ensuring your proposed regulation is future-proof

  • Provide expert perspective and commentary on ongoing trust & safety challenges, supporting a more informed public discussion

Contact us for more details

ABOUT VAISHNAVI

Vaishnavi is an expert in online child safety and privacy, age-appropriate design, policy development and enforcement, and product guidance. Most recently, Vaishnavi was the head of youth policy at Meta, developing age-appropriate content and product policies across Instagram, Facebook, VR, and messaging services. She previously led Twitter’s video content policies and was the company’s first safety lead in the Asia-Pacific region. She also served as Google’s child safety and privacy lead for the Asia-Pacific region, developing global networks of youth experts and supporting local teams with digital literacy policy programs.

Vaishnavi now advises US and international companies, civil society groups, and policymakers, offering thoughtful and timely counsel on their trust & safety challenges. She is regularly featured as an expert commentator in outlets such as NPR, CNN, the Washington Post, the Wall Street Journal, the New York Times, and Rolling Stone. Vaishnavi holds a BA in Political Economy from UC Berkeley and an LLM in IT & IP Law from the University of Hong Kong.

Meta

I began my career at Meta as the head of safety and wellbeing for Instagram, collaborating with external experts as well as internal policy and product teams to ensure Instagram was a safe and healthy space for its community. Given Instagram's largely young demographic, my work focussed on addressing the sexual exploitation of minors, suicide and self-injury, teen mental health, non-consensual intimate images, human exploitation, and bullying & harassment.

Some examples of my work at Instagram (in partnership with a number of teams):

  • Advocated for and drove the launch of reporting options for child safety violations at the profile, comment, and hashtag levels on Instagram. I also supported the content policy team’s update of the minor sexualisation policy, making the internal case for its value to Instagram

  • Developed a safety assessment and harms framework around a highly visible new product, Instagram Reels, ahead of launch, and flagged which product mitigations were critical to protecting the community

  • Conceptualised and convened Instagram's first standing group of experts in the area of eating disorders, to advise on our approach to addressing these issues across both products and policies

  • Worked cross-functionally to develop a strong case for revisiting the way Instagram treated public figure harassment on the platform under its content policies, accounting for the needs of underrepresented groups facing outsized attacks due to their identity

  • Advocated for and drove the launch of an inform treatment that surfaces resources within Search when people search for terms related to suicide, self-injury, or eating disorders

  • Regularly represented the company with senior policymakers, civil society groups, academic experts, advertisers, and the press

Over time, my responsibilities grew: first to head of youth wellbeing across all Meta apps, and then to head of youth policy for all Meta apps. This meant expanded responsibility for the safety and wellbeing of youth on apps like Meta Quest (VR), Messenger, Facebook, and WhatsApp.

Some examples of my work across Meta apps with my team:

  • Expanded Meta's Youth Advisors to include several new experts in the fields of youth media, parenting, and youth development, to help inform the company's product and policy development

  • Provided counsel and guidance to product teams on a number of youth products, including the first set of parental supervision tools for Instagram and Meta Quest (VR), nudges to take a break after long periods of scrolling on Instagram, and improvements to prevent unwanted contact from suspicious adults to minors

  • Regularly initiated and drove cross-functional policy frameworks to guide product development at Meta, drawing on advice from youth development experts, emerging regulatory obligations, and best practices from across the industry

  • Partnered with multiple cross-functional teams to develop an approach towards age-based ranking and recommendations policies

  • Regularly represented the company with senior policymakers, civil society groups, academic experts, advertisers, and the press

Twitter

I was Twitter's first head of safety policy for the Asia-Pacific region, working out of Singapore (also my hometown!) to develop content policies that kept communities safe across APAC. Some interesting cases I grappled with included the potential toxicity of celebrity fan clubs, the first instances of deepfake celebrity nudes in South Korea, and the Christchurch mosque shootings, where I was the first policy responder. I eventually moved to San Francisco to take on a global role managing the company's approach to live video. There I worked on a number of interesting projects, including the company's approach to tweets kept up in the public interest, a harms framework for assessing how to penalise violations of differing prevalence and severity, and optimisation across policy, operations, and enforcement teams. I also represented Twitter's positions on key safety issues with media, regulatory, civil society, and internal teams.

Google

At Google, I focussed on child safety, privacy, and security for the public policy team, supporting the company's priorities across APAC, the Middle East, Africa, and Russia. Working from Hong Kong, I provided subject matter expertise and counsel on these issues to local policy teams throughout the region. I also ran policy campaigns supporting the empowerment of women through technology, worked with local policy teams to rapidly launch digital literacy programs, and represented Google’s priorities around key online safety issues at international fora such as UNESCO, IGF, IAPP, and APEC.