Implementing AI in Assessment
I have systematically incorporated generative AI tools into my assessment procedures across various Master’s level programmes. Based on this experience, I outline three effective methodologies for enhancing feedback whilst maintaining academic integrity. A critical prerequisite for implementation is ensuring that students understand AI assists the assessment process, and that all submissions are anonymised before being processed through external AI platforms.
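To make the anonymisation prerequisite concrete, a minimal redaction step might look like the sketch below. This is an illustration only: the identifier patterns and the helper name are assumptions, and pattern matching alone is not a complete anonymisation procedure.

```python
import re

def redact_submission(text: str, student_names: list[str]) -> str:
    """Strip obvious identifiers before a submission is sent to an
    external AI platform. Illustrative only: a real anonymisation
    process needs institutional checks, not just pattern matching."""
    # Replace known student names with a placeholder.
    for name in student_names:
        text = text.replace(name, "[STUDENT]")
    # Redact typical student ID numbers (8 digits here; the format
    # is an assumption and should match the institution's scheme).
    text = re.sub(r"\b\d{8}\b", "[ID]", text)
    # Redact email addresses.
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    return text
```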
Parallel Assessment Approach
The most comprehensive strategy employs AI as a collaborative evaluation tool while preserving academic expertise as the central authority. This parallel assessment system functions through a structured workflow:
- I conduct an independent evaluation of the student submission, developing professional judgements regarding strengths and limitations.
- Concurrently, I prompt the AI tool to analyse the same submission using specified assessment parameters.
- I then synthesise both evaluations through a comparative analysis, effectively creating a dialogic engagement with the AI similar to consultation with a junior colleague.
This comparative methodology proves particularly valuable for identifying evaluative blind spots: the AI occasionally highlights aspects I have overlooked, providing a complementary perspective. However, I frequently observe that the AI is more lenient in its evaluations and sometimes fails to recognise discipline-specific nuances that require subject expertise. To address these limitations, I use platforms offering paragraph-level editing capabilities (such as ChatGPT’s canvas mode), which enable targeted refinement of AI-generated feedback.
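For markers who prefer to script the AI side of this workflow rather than work through a chat interface, step two might look like the following sketch, assuming the OpenAI Python SDK; the model name, prompt wording, and assessment parameters are illustrative, not a prescribed setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ASSESSMENT_PROMPT = """You are assisting with Master's-level marking.
Evaluate the submission against these parameters:
- engagement with relevant theory
- quality of argumentation and evidence
- structure and academic writing
For each parameter, note strengths and limitations, then suggest an
indicative grade band. The final mark is set by the human marker."""

def ai_evaluation(submission_text: str) -> str:
    """Obtain an independent AI evaluation to set alongside the
    marker's own judgement (the 'junior colleague' reading)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable model would do
        messages=[
            {"role": "system", "content": ASSESSMENT_PROMPT},
            {"role": "user", "content": submission_text},
        ],
    )
    return response.choices[0].message.content
```

Keeping the assessment parameters explicit in the prompt makes the subsequent comparison with the marker's own evaluation systematic rather than ad hoc.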
Feedback Enhancement Through AI Refinement
An alternative approach positions AI primarily as a writing assistant for developing comprehensive feedback. With this methodology, AI transforms concise evaluative notes into detailed, structured assessments. When I identify specific weaknesses, such as inadequate theoretical engagement, I prompt the AI to elaborate these observations into constructive guidance with actionable improvement strategies. I then review the generated text and make corrections to ensure that it conveys the intended message and that the recommendations are genuinely useful to students.
This process effectively converts professional judgements into detailed feedback whilst preserving the integrity of academic evaluation. Furthermore, AI assistance structures comments in supportive, implementation-oriented formats that facilitate student understanding and application.
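A sketch of the note-expansion step, under the same assumptions as above (OpenAI Python SDK, illustrative model and helper names); instructing the model not to add new judgements is one way of preserving the integrity of the original evaluation.

```python
from openai import OpenAI

client = OpenAI()

def expand_feedback(notes: str, submission_excerpt: str) -> str:
    """Turn terse marker notes into structured, supportive feedback.
    The marker still reviews and corrects the output afterwards."""
    prompt = (
        "Expand the marker's notes below into detailed, supportive "
        "feedback for the student. For each weakness, explain why it "
        "matters and give one actionable improvement strategy. Do not "
        "add judgements that are not present in the notes.\n\n"
        f"Marker notes:\n{notes}\n\n"
        f"Relevant excerpt:\n{submission_excerpt}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```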
Rubric-Guided Assessment
The third approach incorporates structured assessment rubrics to guide AI evaluation. By providing detailed descriptors for performance criteria across quality levels, I establish frameworks that align AI assessment with pedagogical objectives.
When evaluating research papers, for example, my assessment criteria include specific descriptors for each performance level, compatible with Moodle’s rubric functionality. These descriptors guide the AI in generating targeted feedback on specific evaluation parameters, significantly improving assessment accuracy whilst maintaining necessary academic oversight. Nevertheless, every output must be reviewed by a human marker to confirm that the machine-generated assessment is valid. Essay-marking programmes such as GoGrader are built on the same logic, and in my experience they have made significant mistakes in rubric-guided assessment.
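As a sketch of how such descriptors might be supplied to a model, the fragment below encodes two criteria with per-level descriptors and asks for evidence-quoting feedback; the rubric content, criteria, model name, and helper name are all illustrative assumptions.

```python
import json
from openai import OpenAI

client = OpenAI()

# Illustrative rubric fragment in a criteria-by-level shape similar
# to a Moodle rubric. The descriptors here are assumptions.
RUBRIC = {
    "Theoretical engagement": {
        "excellent": "Integrates and critiques relevant theory throughout.",
        "good": "Applies relevant theory with some critical discussion.",
        "adequate": "Mentions relevant theory with limited application.",
        "poor": "Little or no engagement with relevant theory.",
    },
    "Methodological rigour": {
        "excellent": "Design, sampling and limitations fully justified.",
        "good": "Design mostly justified; limitations acknowledged.",
        "adequate": "Design described but weakly justified.",
        "poor": "Design unclear or unjustified.",
    },
}

def rubric_guided_feedback(submission_text: str) -> str:
    """Ask the model to assess against explicit level descriptors;
    the output must still be checked by a human marker."""
    prompt = (
        "Assess the submission against this rubric. For each criterion, "
        "name the level that best fits, quote brief evidence from the "
        "submission, and give one concrete suggestion.\n\n"
        f"Rubric:\n{json.dumps(RUBRIC, indent=2)}\n\n"
        f"Submission:\n{submission_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```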
This rubric-structured approach ensures consistency across student cohorts while making it straightforward to intervene when the AI inadequately captures disciplinary nuances or fails to recognise sophisticated analytical work.