CAMPUS NOTE

The Dark Side Of AI In Universities: Are We Celebrating Cheating?

22/07/2025 03:55 PM
Opinions on topical issues from thought leaders, columnists and editors.
By: Assoc Prof Dr Azmi Abdul Latiff

Artificial Intelligence (AI) has become the shiny new tool in today’s classrooms. From helping university students correct English grammar in their assignments to assisting lecturers in creating materials, AI is reshaping education at lightning speed, even pushing search engines like Google aside.

But lurking beneath the excitement is a worrying trend: the growing overreliance on AI and the celebration of cheating through it.

Recently, I came across a post on social media where a local university student openly boasted about using AI to complete assignments.

He claimed he could complete assigned tasks even at the eleventh hour by relying on AI to generate essays and reports, without getting caught by his lecturers. Shockingly, this was not whispered in shame but shared proudly, as if it were a clever achievement.

This growing overreliance on AI tools to complete assignments raises serious concerns. Universities risk producing graduates who not only lack the professional competence to perform in the real world but who may also emerge with questionable work ethics and underdeveloped personal responsibility.

Are universities, knowingly or unknowingly, breeding a culture where AI cheating is celebrated rather than condemned?

AI: A double-edged sword

There’s no denying that AI tools like ChatGPT, Grammarly and Quillbot have transformed how students approach learning and assignments. They offer instant feedback, improve accuracy, and help learners overcome writer’s block.

ChatGPT, for instance, is akin to a resourceful friend who knows everything under the sun, or a servant who will prepare anything at the ‘master’s’ request.

Nevertheless, AI tools are meant to support learning, not replace it. When students use AI to generate entire assignments, it defeats the purpose of education. They skip the very processes education is designed to cultivate – critical thinking, creativity, problem-solving, and self-expression. They may submit polished, impressive work but walk away with shallow learning.

In English language proficiency classes, for instance, the goal is not just to produce correct sentences in assignments but to ‘own’ the language, to communicate confidently, and to express one’s own ideas. A student who depends entirely on AI tools will struggle to perform in real-life interactions, workplace communications, or even oral exams.

Why is this happening?

Several forces are pushing students down this slippery slope.

First, the pressure to perform is immense. Faced with tight deadlines and high expectations, some students view AI as an easy way out.

Second, peer influence plays a big role. When students see others using AI to get ahead without adverse consequences, they start believing it’s normal or even clever.

Third, many students are unaware of the ethical lines they are crossing. To them, using AI may not feel like cheating because it is not copy-pasting from another student or the internet; it is “just an app”.

Finally, there’s an institutional problem: some universities have not yet set clear policies on AI use, leaving both students and teachers in a grey zone of what is allowed and what is not.

The risks we overlook

When universities fail to address this issue, they risk not only the integrity of assignments but also their entire reputation.

Graduates may leave their universities with AI-polished degrees, but their lack of real-world skills, such as communication and interpersonal skills, will quickly be exposed in the job market.

Employers will notice. Industries will complain. The institution's credibility and the country’s higher education system will slowly erode.

Beyond that, overreliance on AI erodes the teacher-student relationship. Instead of seeing lecturers as mentors guiding their learning, students begin viewing them as obstacles to outsmart, celebrating every AI-generated assignment that slips past undetected. This undermines the entire spirit of education.

What needs to change?

Universities must act now to reclaim the narrative.

First, set clear AI guidelines: Define what counts as acceptable assistance (e.g., grammar checks) and what crosses the line (e.g., full essay generation).

Second, teach ethical AI use. Integrate discussions about responsible AI practices into the curriculum.

Third, design better assessments. Assessments requiring in-person oral presentations, handwritten reflective journals, and in-class tests and discussions are much harder to fake with AI.

Next, train lecturers to detect AI-generated work and, more importantly, design learning tasks that promote originality and critical engagement.

Most importantly, we need to rebuild the value of effort and learning among students. Education should not be reduced to chasing grades or tricking systems. It should be about growth, discovery, and building human capacities that no machine can replicate.

Technology should serve, not take control

AI is here to stay, but so are the values of integrity, effort, and human learning. As educators and institutions, we must ensure that technology serves these values, not undermines them.

To students: please stop celebrating the clever use of AI to cheat, and start championing the honest, sometimes messy, but ultimately rewarding process of real learning.

-- BERNAMA

Assoc Prof Dr Azmi Abdul Latiff is Dean of the Centre for Language Studies at Universiti Tun Hussein Onn Malaysia (UTHM).

(The views expressed in this article are those of the author(s) and do not reflect the official policy or position of BERNAMA)