AI Tools Fuel Spike in Student Legal Complaints
UK universities are seeing a sharp rise in formal complaints and legal threats from students, with many now arriving in the form of Letters Before Action (LBAs). In 2024, the Office of the Independent Adjudicator (OIA) received 3,613 complaints from students in England and Wales, a 15% increase on the previous year and the largest annual jump in a decade. Financial remedies have surged as well: nearly £2.5 million was awarded to students in 2024, more than double the amount in 2023, according to Times Higher Education. While many of these cases stem from real issues (pandemic disruptions, strikes, misinformation about courses), AI is also making it easier for students to escalate disputes into claims.
ChatGPT-generated “lawyer letters”
Traditionally, drafting a robust LBA would require legal expertise or hiring a solicitor. Now, AI chatbots can produce a polished LBA in seconds. For students who feel wronged, it’s tempting to skip straight to an AI-crafted legal letter demanding redress, but entrusting ChatGPT with this task is risky. A 2025 study at the University of Southampton found people were as willing to rely on legal advice from ChatGPT as from a human lawyer. Armed with what looks like professional legal advice, students feel empowered to press their case forcefully.
Inflated claims and unrealistic demands
Universities report that student-drafted complaint letters are growing longer and more aggressive, often clearly based on templates from the internet or chatbots. Commonly, these AI-assisted letters overstate the student’s legal position. For example, a simple grievance about class quality might balloon into a multi-page letter citing consumer law and demanding large sums in compensation. According to the sector watchdog, larger payouts (over £5,000) do happen, but they are rare.
Seasoned advisers urge students to keep demands “reasonable, realistic and justifiable,” since exorbitant claims rarely succeed. Unfortunately, AI doesn’t exercise such caution: if instructed to draft a strong demand, it may “default to aggressive posturing” without regard to the actual merits of the case. This can give students unrealistic expectations. Data from Times Higher Education shows that most escalated cases do not succeed (in 2024, 78% of academic appeals were not upheld as justified), but by the time that becomes clear, considerable time and effort may have been spent.
“Ghost cases” and AI hallucinations
AI-written letters sometimes cite laws and precedents that don’t exist. Generative AI has a known tendency to “hallucinate” (to make up convincing-sounding but false information). In the context of student complaints, a chatbot might cite, say, a clause of an “Education Act 2022” or a precedent about tuition refunds that it has fabricated. Lawyers warn that ChatGPT may misquote rules or suggest incorrect procedures, and its confident tone can make the content dangerously convincing to non-lawyers. A student who sees a neat (albeit fictitious) legal reference in their AI-drafted letter is likely to believe they’ve found a smoking gun, which makes it much harder for the university to reason with them.
Tougher negotiations for universities
Universities cannot simply dismiss these AI-generated letters: an LBA must be answered carefully to avoid legal risk, which consumes significant staff time and expense. Moreover, students coached by AI are often less willing to compromise, and some who might otherwise have accepted a university’s decision become determined to escalate. A student whose AI tool has told them “you have a very strong case” will be harder to placate with a mild apology or a goodwill gesture; the negotiation starts from a more adversarial position.
Going forward
There have been calls to educate litigants (including students) on the limits of AI as a source of legal advice, essentially to improve AI literacy. Some experts suggest universities could pre-empt formal complaints by making internal processes more responsive and sympathetic, so students don’t feel the need to “lawyer up” with ChatGPT. Indeed, the OIA’s annual report notes that a common driver of escalated complaints is a “perceived lack of humanity” in universities’ initial handling of issues. For their part, universities are training staff to recognize AI-crafted letters and respond in a firm but understanding tone: inviting the student to discuss the matter in person, for example, moves the dispute into a setting where an AI-generated letter cannot speak for them.
For university administrators and lawyers, the key is to adapt: ensure complaint processes are clear and empathetic, address legitimate issues swiftly, and develop strategies for handling AI-augmented complaints. For students, the message is to use these powerful tools with caution. ChatGPT can draft a letter, but it doesn’t guarantee you have a winnable case. Unrealistic expectations can waste time and erode trust. The technology may be new, but the old advice still stands: sound legal guidance and open communication are the best route to a fair outcome.
This article was written by Rebecca Quinn, a Partner in the HCR Law Commercial team, and Kaush Kuralla, a Trainee Solicitor in the Commercial team.
HCR Law is a provider on The National Education Legal Services Framework contract managed by Dukefield Procurement.