Empowering Social Sciences Educators on the Use of Artificial Intelligence in the Classroom

Key Takeaways

Issues to Consider

  • When integrating AI into assessment practices, several issues require careful attention. Student privacy demands rigorous anonymisation of all work before AI processing, and disclosing how AI is used in evaluation raises important questions about transparency and academic integrity.
  • Current AI tools demonstrate inconsistent ability to recognise disciplinary nuances and sophisticated conceptual work, potentially undermining evaluation quality. Many systems exhibit bias toward generosity in assessment, requiring constant vigilance from academic staff. The technology’s limitations in evaluating creativity, originality, and complex argumentation can create assessment blind spots if relied upon too heavily. Furthermore, teachers must navigate the learning curve associated with effective AI prompting and output refinement.
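The anonymisation step above can be partially automated before any text reaches an external AI tool. The sketch below is a minimal illustration, assuming the instructor has a roster of student names; the function name `anonymise` and the placeholder format are our own. Real anonymisation would also need rules for student IDs, email addresses, and other identifying details.

```python
import re

def anonymise(text: str, names: list[str]) -> str:
    """Replace each known student name with a neutral placeholder
    before the text is sent to any external AI tool."""
    for i, name in enumerate(names, start=1):
        # Case-insensitive replacement of the full name.
        text = re.sub(re.escape(name), f"[STUDENT_{i}]", text, flags=re.IGNORECASE)
    return text

essay = "Maria Lopez argues that Weber's concept of authority rests on legitimacy."
print(anonymise(essay, ["Maria Lopez"]))
```

This catches only names supplied in advance; a spot check of each anonymised text remains advisable before submission to an AI system.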

Recommendations

Based on practical implementation experience, several recommendations emerge for effective AI integration:

  • To ensure transparency and manage student expectations, it is highly recommended to inform students about how AI is used in the course. Include a dedicated section on AI usage in the course syllabus—ideally under a clearly labelled “AI Policy.” This section should outline what AI tools are employed by the instructor, how they support assessment or feedback, and what uses are permitted for students. This not only promotes ethical practice and clarity but also reinforces students’ trust in the fairness and consistency of their learning environment.
  • Adopt a hybrid approach where AI supports rather than replaces academic judgment, particularly through parallel assessment methodologies that maintain your expertise as the central authority.
  • Develop clear rubrics with explicit performance descriptors to guide AI evaluation, significantly improving alignment with disciplinary expectations.
  • Consider whether the feedback belongs to a formative assessment or a summative evaluation. If students can still improve their assignments, ask the AI to provide feedback; if it is a final assessment, request an evaluation. This choice of wording makes a real difference to the output the AI produces.
  • Implement regular quality checks comparing AI-generated feedback with your own evaluation to identify and address potential limitations or biases. 
  • Consider a phased implementation, beginning with formative assessments before expanding to summative evaluation contexts.
  • Document time savings and quality improvements to understand the added value of the AI tools while maintaining vigilance regarding potential drawbacks.
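Two of the recommendations above, grounding the AI in an explicit rubric and choosing "feedback" versus "evaluation" wording by assessment stage, can be combined in a single reusable prompt template. The following is a hedged sketch, not a prescribed implementation: the function name `build_prompt` and the example rubric are illustrative, and the assembled string would be pasted into or sent to whichever AI tool the course uses.

```python
def build_prompt(submission: str, rubric: str, formative: bool) -> str:
    """Assemble a prompt whose key verb differs by assessment stage:
    'feedback' invites suggestions for revision, while 'evaluation'
    asks for a judgement against the rubric."""
    task = (
        "Provide constructive feedback the student can use to revise this draft."
        if formative
        else "Provide a final evaluation of this submission against the rubric."
    )
    return f"Rubric:\n{rubric}\n\nTask: {task}\n\nSubmission:\n{submission}"

rubric = "Argument clarity (40%), use of evidence (40%), structure (20%)"
print(build_prompt("The essay text goes here.", rubric, formative=True))
```

Keeping the rubric inside the prompt, rather than relying on the AI's general notion of quality, is what improves alignment with disciplinary expectations; the template makes that choice explicit and repeatable across a cohort.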