If you’re considering clinical AI training work, the onboarding process can feel opaque from the outside. It’s simpler than it looks: large parts are automated, platforms reuse your application across roles, and the work on your side totals roughly two to four hours spread over a few weeks.
The main thing to understand upfront: you are not applying for a single job. You are joining a talent pool. There’s no single interview followed by a clear offer. Instead, platforms use a staged, assessment-led model designed to scale safely across thousands of candidates. Approval means you’re eligible — work arrives when a project matches your profile.
Realistically, onboarding takes one to four weeks, and a first paid task can arrive anywhere from immediately to several months later. That range is normal and doesn’t reflect your performance.
Who this is for
This guide is for regulated healthcare professionals applying to platforms such as Mercor or Micro1 for clinical AI training; AI output evaluation or review; prompt, rubric, or gold-answer creation; and safety, quality, or domain-expert oversight.
The onboarding stages
Stage 1 — Application and eligibility screening
You submit your CV or LinkedIn profile, confirm your clinical background, and upload credentials or registration documents depending on the role. Most platforms run an automated background check (Mercor posted me a Disclosure Scotland certificate a few days after my check ran) and identity verification via a mobile app: passport scan, face photo, chip verification. Thorough, but straightforward. At this stage platforms are checking eligibility and completeness, not ranking you against other candidates.
Stage 2 — AI-led interview and assessment
This is the most consequential stage and where most filtering happens. Expect scenario-based questions, evaluation of AI-generated responses, and tasks testing clinical judgement, safety awareness, and boundary-setting. How these interviews are structured and scored — and how to approach them — is covered in detail in the companion guides below.
Stage 3 — Domain or task-specific evaluation
Selective. Some roles require additional validation: reviewing example outputs, writing or refining prompts, applying a scoring rubric, or explaining why a given answer is unsafe. More common for clinical safety roles, regulated domains, and senior or specialist reviewers.
Stage 4 — Human review and verification
Not universal. Where it happens, this is usually a spot check of AI-scored results, credential verification, or a short clarifying message. Not a traditional interview: a validation step.
Stage 5 — Pool acceptance
You receive onboarding documents, sign platform agreements including NDAs, and complete any compliance acknowledgements. You are now eligible. Work comes next, on the platform’s timeline.
What happens after approval
Work is matched to you based on project demand, your domain fit, prior performance scores, and availability. Early on, expect small or calibration tasks and gaps between offers. This is deliberate — platforms build confidence in quality before scaling volume.
Most clinicians who end up doing well experienced an uneven start.
Time to first paid task
Realistic expectations
Silence doesn’t mean rejection
Possible outcomes
How to approach this psychologically
How this fits with the rest of the guide
This page covers the journey and outcomes. The companion pages cover preparation and execution.
Written by
Sean Key
Digital Health Senior Programme Manager · 29 years’ NHS & private sector experience
Sean has spent nearly three decades delivering complex digital programmes across the NHS and private healthcare, from LIMS and PACS deployments to primary care, urgent care, mental health, and national interoperability work. Not a clinician. His perspective is that of a practitioner who understands how digital health actually gets built, procured, and adopted.
