Disclaimer: Links on this site are referral links and I may earn a fee from Mercor or Micro1 if you click them. I do not work for Micro1 or Mercor.


Typical Clinician AI Onboarding Process

If you’re considering clinical AI training work, the onboarding process can feel opaque from the outside. It isn’t, once you know its shape: large parts are automated, platforms reuse your application across roles, and the actual hands-on work is roughly two to four hours spread over a few weeks.

The main thing to understand upfront: you are not applying for a single job. You are joining a talent pool. There’s no single interview followed by a clear offer. Instead, platforms use a staged, assessment-led model designed to scale safely across thousands of candidates. Approval means you’re eligible — work arrives when a project matches your profile.

Realistically, onboarding takes one to four weeks and a first paid task could come immediately or after several months. That range is normal and doesn’t reflect your performance.

Who this is for

This guide is for regulated healthcare professionals applying to platforms such as Mercor or Micro1 for clinical AI training; AI output evaluation or review; prompt, rubric, or gold-answer creation; and safety, quality, or domain-expert oversight.

The onboarding stages

Stage 1 — Application and eligibility screening

Minutes → few days

You submit your CV or LinkedIn profile, confirm your clinical background, and upload credentials or registration documents depending on the role. Most platforms run an automated background check (Mercor posted me a Disclosure Scotland certificate a few days after my check ran) and identity verification via a mobile app — passport scan, face photo, chip verification. Thorough, but straightforward. At this stage platforms are checking eligibility and completeness, not ranking you against other candidates.

Stage 2 — AI-led interview and assessment

30–60 min to complete

This is the most consequential stage and where most filtering happens. Expect scenario-based questions, evaluation of AI-generated responses, and tasks testing clinical judgement, safety awareness, and boundary-setting. How these interviews are structured and scored — and how to approach them — is covered in detail in the companion guides below.

Stage 3 — Domain or task-specific evaluation

Days → few weeks · Selective

Some roles require additional validation: reviewing example outputs, writing or refining prompts, applying a scoring rubric, or explaining why a given answer is unsafe. More common for clinical safety roles, regulated domains, and senior or specialist reviewers.

Stage 4 — Human review and verification

Days → 1 month · Not universal

Where it happens, this is usually a spot check of AI-scored results, credential verification, or a short clarifying message. Not a traditional interview — a validation step.

Stage 5 — Pool acceptance

Immediate → few days

You receive onboarding documents, sign platform agreements including NDAs, and complete any compliance acknowledgements. You are now eligible. Work comes next, on the platform’s timeline.

What happens after approval

Work is matched to you based on project demand, your domain fit, prior performance scores, and availability. Early on, expect small or calibration tasks and gaps between offers. This is deliberate — platforms build confidence in quality before scaling volume.

Most clinicians who go on to do well experience an uneven start.

Time to first paid task

Realistic expectations

Fastest: 1–2 weeks
Common: 3–6 weeks
Sometimes: several months

Silence doesn’t mean rejection.

Possible outcomes

Active — approved and working. Regular or semi-regular tasks; performance feedback influences volume over time.

Waiting — approved but quiet. You passed onboarding but see few tasks initially; this reflects market demand, not your ability. It often changes.

Partial — conditional approval. Approved for certain task types only; scope can expand with good performance.

Rejected — feedback is limited. Mercor currently provides no feedback. Micro1’s rejection email can be replied to for feedback, which I found genuinely useful. Common reasons include safety or boundary issues and inconsistent answers. Micro1 currently allows re-application after 30 days.

How to approach this psychologically

📁 Portfolio mindset — freelance expert work, not employment.

🎯 Quality over speed — careful judgement beats fast answers.

Expect gaps — variability early on is completely normal.

🔇 Silence ≠ failure — demand-driven matching creates quiet spells.

How this fits with the rest of the guide

This page covers the journey and outcomes. The companion pages cover preparation and execution.

Applied Clinical Judgement


Written by

Sean Key

Digital Health Senior Programme Manager  ·  29 years’ NHS & private sector experience

Sean has spent nearly three decades delivering complex digital programmes across the NHS and private healthcare — from LIMS and PACS deployments to primary care, urgent care, mental health, and national interoperability work. He is not a clinician; his perspective is that of a practitioner who understands how digital health really gets built, procured, and adopted in the real world.

Last Reviewed: