Can Teaching Ethical Behaviour Solve Gen-AI Academic Misconduct?
- sasha97518
- Jul 12
- 4 min read
Updated: Jul 13
This blog explores an attempt to teach and encourage honest and ethical behaviour, with the aim of understanding student behaviour for assessment purposes. It also examines the AI declaration method.
Why This Question Matters:
Generative AI (Gen-AI) is now woven into everyday study practices. Educators face a tough dilemma: how do we uphold academic integrity? One solution, widely advocated but largely untested, is to double down on explicit ethics education and the cultivation of personal honesty.
To test this, a pedagogically structured approach was implemented at a university where I had the opportunity to contribute. The design comprised four distinct stages.
Process:
The first stage engaged students with the Engineers Australia Code of Ethics and Guidelines on Professional Conduct.
The second stage involved students participating in targeted learning activities that fostered awareness of AI ethics, including principles of ethical integrity. These activities centred on the 32 ethical implications.
The third stage introduced an assessment-for-learning task that had been deliberately stress-tested and shown to be highly susceptible to Gen-AI use. The task required students to write a personal reflection on specific learning experiences. Given the highly individualised nature of the required response, it was explicitly communicated that the use of any AI or Gen-AI tools was strictly prohibited.
The first three stages were completed in 2024. At that time, only a small number of AI-generated submissions were detected, suggesting either early signs of student compliance or a lack of confidence in, or acceptance of, the available tools. However, in 2025, a significant shift in behaviour became evident: at least three-quarters of the submissions demonstrated some form of AI involvement, directly contradicting the task’s expectations. This clear escalation over just 12 months highlighted a growing confidence in, and dependency on, the technology, prompting the implementation of stage four.
In stage four, the course coordinator composed a personal letter to students, written as a reflective account of the consequences of unauthorised AI use and the broader implications for genuine learning. It is crucial to note that no penalties were issued, as this was an assessment-for-learning task. Instead, this stage was facilitated as a guided tutorial activity. Students were first asked to read the letter. They then anonymously responded to a series of questions intended to promote deep critical thinking about the incident and its ethical implications. The final part of the activity was a structured class discussion in which students openly explored the themes that emerged.
Results:
Of the responding students, 9% stated they had not used AI, while 13% reported being unaware that the tools they used were not permitted. A striking 78% openly admitted to using AI.
On reflection, 43% maintained that AI is a valid tool and should be embraced, not avoided, while 48% stated that the assessment-for-learning process motivated them to do better and to be more thoughtful in the future.
46% used AI because they believed it would improve the quality of their work, while 15% lacked confidence in their ability to complete the task despite substantial scaffolding and support.
100% of students agreed they had been ethically prepared to use AI, yet many chose to bypass that guidance.
These results indicate that while teaching and encouraging honest and ethical behaviour contributes to the solution, it is no longer sufficient on its own. From an academic integrity perspective, the ship has sailed. The behavioural shift from 2024 to 2025 suggests that student confidence in and reliance on AI will likely become even more entrenched in 2026.
The AI Declaration Method:
Some have proposed that requiring students to submit AI-use declarations alongside their work could introduce an added layer of academic integrity. In theory, this encourages transparency and accountability. However, when trialled in another subject, nearly 5% of students were found to have submitted false declarations, primarily through the inclusion of hallucinated or fabricated references (a common byproduct of unacknowledged Gen-AI use). This figure likely underrepresents the true extent of the issue, as many cases may have gone undetected. This again highlights a critical vulnerability: systems based on trust alone are inherently open to misuse, especially when the risk of detection remains low.
Observations:
Several key patterns emerged from the data. High-performing students were significantly less likely to use AI when explicitly instructed not to, and less likely to submit false declarations. While it remains possible that some were simply more skilled at covering their tracks, the available evidence does not support this assumption. In contrast, lower-performing students, particularly those under pressure to pass, such as international students facing financial or time constraints, were more likely to breach AI-use policies. This trend suggests a complex intersection between academic ability, external pressures, and ethical decision-making, and highlights a crucial area for further research into more equitable, supportive responses.
Ramifications of the Results:
Cultural Shift Requires Systemic Change: A growing dependency on AI tools points to a wider cultural transformation in learning. Where competency attainment must be assured, hoping that students will refrain from AI use simply because they are told to is not feasible. This reinforces the long-standing principle that assessment of learning must be designed with secure or supervised implementation as a core requirement. In contrast, unsecured or unsupervised assessments must begin from the assumption that Gen-AI use is inevitable and respond accordingly. As the technology evolves, any alternative approach will become hard to justify if competency assurance is the main goal of the degree.
Assessment Design Must Evolve: At the AAIEEC, we have done the stress testing. Table 10 of our popular paper highlights the short- and long-term changes that can be applied to strengthen or redesign assessment approaches.
Assessment and Policy Are Both Needed: At the AAIEEC, we have also outlined the policy considerations needed to strengthen integrity.
Reflective Learning Opportunities Remain Valuable: Even in cases of misconduct, structured reflection and discussion can yield important ethical insights and growth, suggesting a role for restorative approaches.
Scaffolding Alone Isn’t Enough: Confidence and perception of quality still drive AI use despite support. Future interventions may need to focus more on developing student agency, metacognition, and trust in their own ability.
Performance Pressure Overrides Ethical Guidance: Despite ethical preparation and task scaffolding, many students turned to AI primarily out of a strong desire to achieve higher grades. This indicates that academic pressures may outweigh ethical considerations, suggesting that integrity-focused interventions must also address assessment cultures that prioritise outcomes over learning.
Dr Sasha Nikolic
University of Wollongong
12 July 2025
