
Trust and Safety Manager at WellSaid Labs

Who You Are: An empathetic Trust & Safety Manager 

The WellSaid Labs Ethics team oversees the implications of our technology to protect our stakeholders from harm, including team members, customers, voice talent, and our broader community.

We are looking for a Trust and Safety Manager to support our content moderation efforts and ensure that use of our technology is in line with our user Terms of Service. In this role, you will be responsible for managing a third-party content review team that reviews content created on the platform by users, keeping the platform safe from any harmful, overtly sexual, or illegal use. You will work closely with the engineering, product, and customer teams to mitigate scams, fraud, Terms of Service violations, and a variety of other issues dealing with the integrity of our product and the safety of our team. You will be involved in a wide array of technical and non-technical projects at any one time.

How You’ll Contribute:  

In your day-to-day, you will help manage our third-party moderation relationships to ensure consistency and high standards for our customers. You will also help create transparent documentation of violations in user content to share with the team and stakeholders, and stay on top of trending content concerns so we can continue to be thoughtful about our Terms of Service.

You will also…

  • Team protection/security: Establish best practices for protecting our team from harm & build preventative measures
  • Gather & measure analytics on moderation and identity verification program performance
  • Strategize moderation program & identity verification improvements, including testing new content use cases
  • Work with Engineering to communicate and prioritize engineering needs
  • Research resources and solutions for moderation system and identity verification improvements
  • Run Terms of Service compliance center for our API customers
  • Partner with the ML team to evaluate the performance of ML moderation tooling

What We’re Looking For 

To thrive in this role, you have ideally managed a successful content moderation effort within a user-facing application. You have worked well cross-functionally with other teams within an organization and with third-party vendors to maintain a high standard of quality.

You also have some combination of the following:

  • 2+ years of experience in Trust & Safety
  • Subject-matter expertise (SME) in moderating text-based content
  • Experience with problem-solving, planning, and execution in a self-directed environment
  • Experience managing third-party vendors
  • Experience building performance metrics and flagging KPIs for our teams and customers
  • Experience transitioning ambiguity into repeatable, explainable processes

Link
