AI in Recruitment: Powerful Potential, but Are We Forgetting the Basics of Reliability and Validity?
- James Shimmen

Artificial Intelligence is transforming recruitment. From automating screening to analysing video interviews, AI‑powered tools promise faster hiring, better candidate matching, and improved efficiency. At GFB, we’re genuinely excited about the possibilities AI brings—its potential to streamline processes and uncover insights that were previously out of reach is extraordinary.
But excitement should not eclipse caution.
As new AI‑driven hiring tools enter the market—often packaged with bold claims about accuracy, predictive power, and “bias‑free” results—it’s essential that organisations remember the fundamentals of good assessment practice: reliability, validity, fairness, and defensibility. These standards have long been the bedrock of psychometrics, and yet many AI tools currently used in selection processes don’t come close to meeting them.
Here we explore the contrast between how psychometric assessments are validated and how AI recruitment tools are evaluated today—and why due diligence is more crucial than ever.
Psychometric Assessments: Built on Decades of Rigorous Validation
Traditional psychometric assessments—ability tests, personality questionnaires, situational judgement tests—are held to strict professional standards. To be considered credible, they must demonstrate:
1. Reliability
Can the tool produce consistent results over time, across different groups, and across different versions of the test?
2. Validity
Does it measure what it claims to measure? Does it predict the right outcomes? Is it job‑relevant?
3. Fairness and Bias Control
Are results equivalent across demographic groups? Does the tool disadvantage any protected group?
4. Transparency
Are the underlying constructs, scoring methods, and validation evidence openly shared and peer‑reviewed?
5. Ongoing Review and Oversight
In the UK, the British Psychological Society (BPS) provides independent review and accreditation for assessments through the Psychological Testing Centre. Their guidelines and test reviews evaluate:
- reliability and validity evidence
- fairness and diversity considerations
- testing materials and usability
- technical manuals and supporting evidence
This oversight gives employers confidence that BPS‑approved tools meet widely recognised professional and ethical standards.
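To make the reliability standard above concrete: one piece of evidence a BPS-style review expects is internal consistency, commonly reported as Cronbach's alpha. The sketch below (with purely illustrative response data, not from any real assessment) computes alpha from per-item scores; values above roughly 0.7 are conventionally expected of a credible scale.

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: internal-consistency reliability across test items.

    item_scores: one inner list per item, aligned by respondent.
    """
    k = len(item_scores)
    # Sum of per-item variances vs. variance of respondents' total scores
    item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(items) for items in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical responses: 3 items, 5 respondents (each row is one item)
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```

The point is not the arithmetic but the expectation: for an accredited psychometric tool, numbers like this are published, reviewed, and challengeable.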
AI Recruitment Tools: Innovation Outpacing Verification
AI tools for recruitment—including CV‑screening algorithms, video‑interview analysis tools, chatbot assessors, or “fit” prediction models—often lack this level of scrutiny.
Many AI vendors promote:
- “objective” or “science‑backed” results
- “bias‑free” decision‑making
- high accuracy or predictive power
Yet critical questions often go unanswered:
Where is the reliability evidence?
Does the tool produce stable, repeatable results? If a candidate completes a video interview twice, would the AI rate them similarly?
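The test–retest question above is straightforward to check if a vendor will supply two independent scoring runs for the same candidates. A minimal sketch (with hypothetical scores, not real vendor data) correlates the two runs; the Pearson r is a standard test–retest reliability estimate, and psychometric instruments are typically expected to reach roughly 0.7 or higher.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two score lists (test-retest reliability)."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical AI scores for the same ten candidates, interviewed twice
run_1 = [72, 65, 88, 54, 91, 60, 77, 83, 49, 70]
run_2 = [70, 68, 85, 58, 90, 55, 80, 81, 52, 73]

r = pearson_r(run_1, run_2)
print(f"Test-retest reliability: r = {r:.2f}")
```

If a vendor cannot produce an analysis of this shape, claims of consistency are assertion, not evidence.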
Where is the construct validity?
What exactly does the tool measure? If facial expressions or voice patterns are analysed, what behavioural or psychological construct do these features map onto?
How is job relevance established?
Is there robust data linking the AI’s outputs to real performance metrics within specific roles?
How was bias tested and mitigated?
Does the vendor provide transparent demographic impact data? Or are claims of “bias‑free” simply marketing?
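One basic analysis a vendor should be able to show here is an adverse impact check. The sketch below (illustrative numbers, not from any real tool) compares selection rates across two demographic groups and applies the widely used four-fifths rule of thumb, under which a ratio below 0.8 flags potential adverse impact for investigation.

```python
def selection_rate(selected, applicants):
    """Proportion of a group's applicants that the tool recommends."""
    return selected / applicants

def impact_ratio(rate_a, rate_b):
    """Ratio of the lower selection rate to the higher one."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening outcomes for two demographic groups
group_a = selection_rate(selected=45, applicants=100)  # 0.45
group_b = selection_rate(selected=30, applicants=100)  # 0.30

ratio = impact_ratio(group_a, group_b)
print(f"Impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact - investigate before deployment")
```

A failing ratio is not proof of unlawful discrimination, but it is exactly the kind of transparent demographic evidence that "bias-free" marketing claims should be backed by.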
Where is the independent oversight?
Unlike psychometrics, most AI hiring tools are not evaluated by bodies like the BPS. Regulatory frameworks in the UK are emerging, but oversight is still catching up with innovation.
In many cases, organisations find themselves relying on the vendor’s own internal validation—something psychometricians would never accept as sufficient.
Why This Matters: Legal, Ethical, and Reputational Risk
Hiring decisions carry legal weight. In the UK, employers must ensure that any tool used in selection:
- does not discriminate against protected groups
- is job‑relevant and evidence‑based
- can stand up to external scrutiny
Using tools with unclear validity or untested algorithms increases exposure to:
- legal challenge (e.g., discrimination claims)
- reputational harm if AI decisions are shown to be biased
- poor hiring outcomes due to inaccurate predictions
No matter how sophisticated AI becomes, employers remain accountable for the fairness and defensibility of their selection processes.
We’re Pro‑AI… But Pro‑Evidence Too
Let’s be clear: here at GFB, we are definitely not “anti‑AI.” Quite the opposite—we are excited about how AI can complement traditional assessment approaches, enhance efficiency, and make data‑driven decision‑making more accessible.
But we also want to open a responsible conversation.
New tools with glossy interfaces and big promises can be tempting. But when hiring decisions—and people’s careers—are at stake, due diligence isn’t optional. Organisations must ask the same tough questions of AI tools that they would ask of any other assessment:
- Is it reliable?
- Is it valid?
- Is it fair?
- Has it been independently reviewed?
- Can we defend using it?
AI may be new, but the principles of good assessment are not.
Final Thoughts: Don’t Be Blinded by the Shine
AI offers extraordinary potential, but its credibility must match its claims. The future of recruitment will absolutely involve AI—but it should be responsible, transparent, and scientifically robust.
The message is simple: Adopt AI enthusiastically—but evaluate it rigorously. That’s how we build hiring processes that are innovative and fair.
Need help finding reliable, well‑validated tools to support your recruitment? Get in touch here.