AI-positive people may be more at risk of manipulation, report claims
15 Feb 2026
People with a positive view of AI may be at higher risk of being misled by AI tools, a new report claims.
Researchers from Lancaster University, private firm Cognitive Consultants International (CCI-HQ) and the government Defence Science and Technology Laboratory asked nearly 300 participants to judge the authenticity of 80 faces, half of which were real and half created by AI.
Their study, entitled ‘Examining Human Reliance on Artificial Intelligence in Decision Making’ and published in Scientific Reports, provided text guidance, labelled as coming either from humans or from AI, predicting whether each face was real or fake.
Examples of guidance included such statements as “based on the predictions of 100 humans with expertise in facial recognition, this is a synthetic face” and “based on an algorithm trained to classify real and synthetic faces, the prediction is this is a real face”.
However, the guidance was correct in just 50% of cases, and participants were unaware of the manipulation of both real versus synthetic faces and correct versus incorrect guidance.
Participants were also asked to complete the human trust scale and the General Attitudes towards Artificial Intelligence Scale (GAAIS), to measure their inclination to trust people and their attitudes towards AI.
The results revealed that, among participants who received AI guidance, more positive attitudes toward AI were associated with a reduced ability to discriminate between real and synthetic faces.
Lead author Lancaster University’s Dr Sophie Nightingale said: “The public are increasingly being offered AI solutions to help them to navigate decision making in the real world. But our findings suggest that AI-driven support tools may be uniquely placed to engender biases in humans and may ultimately impair rather than elevate decision making.”
The research team also included Joe Pearson, formerly of Lancaster University, Itiel Dror from Cognitive Consultants International (CCI-HQ) and Emma Jayes, Georgina Mason and Grace-Rose Whordley from the Defence Science and Technology Laboratory.
Pic: Example stimuli with facial images shown as silhouettes due to licensing permissions: top = synthetic face, AI condition; bottom = real face, human condition.
Real faces were obtained from the Flickr-Faces-HQ Dataset, made available by NVIDIA Corporation under a Creative Commons BY-NC-SA 4.0 license.