AI, Assessments, and Integrity: Why Cheating Isn’t a New Problem — But the Stakes Are Higher Than Ever
- James Shimmen


Concerns about candidates cheating on assessments are not new. Ever since organisations began moving testing online, questions around fairness, integrity, and trust have naturally followed. What has changed, however, is the ease and accessibility of tools that can be misused — particularly generative AI — and the external pressures candidates are facing in an increasingly competitive job market.
With AI just a browser tab away and roles attracting record numbers of applicants, it’s understandable that some candidates may feel tempted to take a risk. But while the technology has evolved, so too have the ways assessment providers detect, deter, and prevent malpractice. The reality is that the assessment industry has been preparing for this moment far longer than many people realise.
Cheating Concerns Didn’t Start With AI
When online testing first emerged, scepticism was inevitable. Compared to traditional, supervised, in-person assessments, early remote tests raised understandable concerns:
How do we know the right person completed the test?
How do we stop candidates looking up answers?
What prevents someone else helping them off-camera?
These questions existed long before large language models and chat-based AI tools came onto the scene. In response, assessment providers have spent years investing in test design, psychometric robustness, and technical controls aimed at making cheating both difficult and detectable.
How Online Assessments Have Evolved Over Time
Rather than relying on a single safeguard, modern online testing uses a layered approach that combines design, data, and technology.
Item banking and large question pools were among the earliest advancements. By rotating the questions drawn from extensive banks, providers make it far less likely that two candidates receive identical tests or benefit from shared content.
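As a loose illustration of this idea, here is a minimal sketch of drawing a randomised form from an item bank; the Item shape and assembleForm function are illustrative assumptions rather than any provider’s actual API.

```typescript
// Illustrative sketch only: a simplified item-bank draw, not a real vendor API.
interface Item {
  id: string;
  topic: string;
  difficulty: number; // e.g. 1 (easiest) to 5 (hardest)
}

// Assemble a test form by sampling one random item per required topic,
// so two candidates are unlikely to see the same set of questions.
function assembleForm(bank: Item[], topics: string[]): Item[] {
  return topics.map((topic) => {
    const pool = bank.filter((item) => item.topic === topic);
    if (pool.length === 0) {
      throw new Error(`No bank items for topic: ${topic}`);
    }
    return pool[Math.floor(Math.random() * pool.length)];
  });
}
```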

Adaptive assessments took this a step further. These tests adjust question difficulty in real time based on a candidate’s responses, making it extremely difficult to outsource answers or predict what will come next.
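As a rough sketch of the underlying idea, the toy loop below simply steps difficulty up or down one level per answer; real adaptive tests rely on psychometric models such as item response theory, so treat this as an illustration only.

```typescript
// Toy staircase adaptivity: step difficulty up after a correct answer,
// down after an incorrect one. Real adaptive tests use psychometric
// models (e.g. item response theory), not this simplification.
const MIN_DIFFICULTY = 1;
const MAX_DIFFICULTY = 5;

function nextDifficulty(current: number, lastAnswerCorrect: boolean): number {
  const step = lastAnswerCorrect ? 1 : -1;
  return Math.min(MAX_DIFFICULTY, Math.max(MIN_DIFFICULTY, current + step));
}
```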
Verification testing also became a key control. Candidates might complete an unsupervised assessment early in the process, followed by a shorter, supervised or proctored test later on to confirm consistency in performance.
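Loosely illustrated, the consistency check behind verification testing amounts to comparing the two scores against measurement error; the 1.5 multiplier below is an arbitrary assumption for illustration, not an industry standard.

```typescript
// Toy consistency check between an unsupervised score and a later
// supervised verification score. The 1.5x standard-error threshold is an
// arbitrary illustrative choice; real providers use proper statistical models.
function isScoreConsistent(
  unsupervisedScore: number,
  verificationScore: number,
  standardErrorOfMeasurement: number
): boolean {
  const drop = unsupervisedScore - verificationScore;
  return drop <= 1.5 * standardErrorOfMeasurement;
}
```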
Video and remote proctoring introduced identity verification and session monitoring, creating a digital equivalent of test-centre supervision while maintaining the flexibility candidates expect.
These measures were not reactions to AI — they were responses to long-standing concerns about online assessment integrity.
Why AI Has Changed the Conversation
AI has intensified scrutiny because it feels different. Answers can be generated quickly, quietly, and without a second person present. Combined with economic uncertainty and intense job competition, it’s easy to see why some candidates may justify using AI as a “necessary advantage”.
But using AI in this way carries real consequences:
Mismatch risk: If AI inflates results, candidates may be placed into roles they are not actually prepared for.
Fairness issues: It undermines honest candidates who complete assessments independently.
Long-term impact: Being flagged for malpractice can damage trust and future opportunities.
To address this, assessment providers have extended existing controls and introduced new, AI-era safeguards.
Modern Safeguards: From Prevention to Detection
Many platforms now focus not only on identifying cheating after the fact, but on actively preventing it during the assessment experience.
Examples include:
Disabling copy-and-paste functionality
Requiring full-screen mode, making it harder to switch tabs or consult other tools
Tracking candidate activity patterns, such as:
Browser off-focus detection, identifying when a candidate toggles between tabs or applications
Print screen detection, recording attempts to capture the assessment
Multi-face detection, flagging when more than one individual appears on camera
Periodic image capture throughout the session
IP tracking, identifying suspicious location changes
Code replay and session analysis, flagging unusual behaviours such as excessively long pauses or abnormal completion times
Individually, these signals may not prove malpractice. Together, they form a powerful audit trail that makes AI misuse far easier to detect than many candidates assume.
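As an illustration of how some of these browser-side signals can be captured, the sketch below uses standard web APIs (the visibilitychange event, clipboard events, and the Fullscreen API); the reportSignal helper is a hypothetical placeholder, and production proctoring platforms are considerably more sophisticated.

```typescript
// Sketch of browser-side integrity signals using standard web APIs.
// `reportSignal` and its destination are hypothetical placeholders.
function reportSignal(type: string, detail: string = ""): void {
  // In a real platform this would post to the assessment backend.
  console.log(`[integrity] ${type} ${detail} at ${new Date().toISOString()}`);
}

// Off-focus detection: fires when the candidate switches tabs or minimises.
document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") {
    reportSignal("tab-or-window-hidden");
  }
});

// Clipboard blocking: prevent copy, cut, and paste inside the assessment.
["copy", "cut", "paste"].forEach((evt) => {
  document.addEventListener(evt, (e) => {
    e.preventDefault();
    reportSignal("clipboard-blocked", evt);
  });
});

// Full-screen enforcement: flag when the candidate leaves full-screen mode.
document.addEventListener("fullscreenchange", () => {
  if (!document.fullscreenElement) {
    reportSignal("fullscreen-exited");
  }
});
```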
Why It’s Time to Be Explicit Again
For a period, it’s possible that we became complacent. Years of investment in test design, security measures, and proctoring technologies made online assessments increasingly robust. Item banks grew deeper, adaptive testing became smarter, and verification processes became more reliable. Cheating didn’t disappear, but it became harder, riskier, and easier to spot, and that very success may have lulled the industry into a false sense of security.
The rapid rise of generative AI has disrupted that equilibrium.
Assessment providers and hiring organisations can no longer rely solely on the strength of the technology behind the test. While controls and detection methods remain critical, silence or ambiguity around AI use creates a grey area that candidates may interpret as permission. In a tight job market, where pressure is high and competition fierce, even well‑intentioned candidates may convince themselves that “a little help” is acceptable if the rules are not explicit.
This is why confronting the issue head-on matters.
Clear, unambiguous communication is now essential. Candidates need to hear — early and often — that using AI to assist with assessments is not acceptable, why that boundary exists, and what the consequences are. This is not about policing behaviour for the sake of control; it’s about protecting fairness, validity, and the credibility of the hiring process for everyone involved.
Assessment integrity cannot be implied anymore — it has to be actively reinforced. That means updating instructions, revisiting candidate guidance, training hiring teams to communicate expectations confidently, and being transparent about how misuse can be detected. When expectations are clear, most candidates will respect them. When they are vague, temptation fills the gap.
The industry has faced moments like this before, and each time it has responded by raising standards rather than lowering them. This moment is no different. The path forward isn’t to retreat from online assessment or fear AI — it’s to reassert the principles that make assessments meaningful in the first place: authenticity, fairness, and trust.
For candidates, the message needs to be simple: the risk is higher than it appears, and authenticity still matters. For organisations, continued transparency and investment in robust assessment design is the key to maintaining trust — now and in the future.
If you’d like to review your current process, clarify candidate guidance, or understand how to better safeguard against misuse, we’re here to help. Contact us to start a conversation about building a fair, future-ready assessment process.



