We have partnered with a generative AI technology company at the forefront of AI in the media industry. The company specializes in products that enhance video creation and editing across industries such as filmmaking, post-production, advertising, and visual effects.
The Trust and Safety Lead will be responsible for developing and implementing strategies that ensure the safety, security, and ethical use of the company's generative AI technologies. This role involves collaborating with cross-functional teams to identify and mitigate risks, establish best practices, and promote a culture of trust and safety within the organization.
Responsibilities:
- Develop and lead the implementation of trust and safety policies and procedures for generative AI products.
- Conduct risk assessments and audits to identify potential safety and ethical issues.
- Collaborate with product, engineering, legal, and compliance teams to address safety concerns and ensure regulatory compliance.
- Monitor and analyze user feedback and incident reports to continuously improve safety measures.
- Lead incident response efforts, including investigation, resolution, and communication of safety-related incidents.
- Stay up-to-date with industry trends, emerging threats, and best practices in AI safety and ethics.
- Provide training and guidance to employees on trust and safety protocols.
- Advocate for user privacy and data protection in all AI initiatives.
Qualifications:
- 5+ years of experience in a trust and safety role, preferably within the tech or AI industry.
- Strong understanding of AI technologies and their ethical implications.
- Ability to work collaboratively in a fast-paced, dynamic environment.
- Passion for promoting ethical AI and protecting user safety.
- JD preferred, but not required.