AI Red-Teamer — Adversarial AI Testing
2 days ago

Job description
hackajob is collaborating with Mercor to connect them with exceptional tech professionals for this role.
Location: Remote. Geography restricted to the USA, Taiwan, and Malaysia; additional countries are considered on a case-by-case basis.
Type: Full-time or part-time contract work
Language Skills Required: Native-level fluency in both English and Chinese (Mandarin) is required for this position.
Why This Role Exists
At Mercor, we believe the safest AI is the one that's already been attacked — by us.
We are assembling a red team for this project: human data experts who probe AI models with adversarial inputs, surface vulnerabilities, and generate the red-team data that makes AI safer for our customers.
This project involves reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources. Topics will be clearly communicated before you are exposed to any content.
What You'll Do
- Red team conversational AI models and agents: jailbreaks, prompt injections, misuse cases, bias exploitation, multi-turn manipulation
- Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks
- Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent
- Document reproducibly: produce reports, datasets, and attack cases customers can act on
Who You Are
- You bring prior red-teaming experience (AI adversarial work, cybersecurity, socio-technical probing)
- You're curious and adversarial: you instinctively push systems to their breaking points
- You're structured: you use frameworks and benchmarks, not just random hacks
- You're communicative: you explain risks clearly to technical and non-technical stakeholders
- You're adaptable: you thrive on moving across projects and customers
Nice-to-Have Specialties
- Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction
- Cybersecurity: penetration testing, exploit development, reverse engineering
- Socio-technical risk: harassment/disinformation probing, abuse analysis, conversational AI testing
- Creative probing: backgrounds in psychology, acting, or writing that support unconventional adversarial thinking
What Success Looks Like
- You uncover vulnerabilities that automated tests miss
- You deliver reproducible artifacts that strengthen customer AI systems
- Evaluation coverage expands: more scenarios tested, fewer surprises in production
- Mercor customers trust the safety of their AI because you've already probed it like an adversary
Why Join Mercor
- Build experience in human-data-driven AI red teaming at the frontier of safety
- Play a direct role in making AI systems more robust, safe, and trustworthy
Compensation: The contract rate for this project will reflect the level of expertise required, the sensitivity of the material, and the scope of work.