Empowering Social Sciences Educators on the Use of Artificial Intelligence in the Classroom

Glossary of key terms

🔎 AI Bias – an anomaly in the output of an AI system, caused by prejudices and/or erroneous assumptions made during the system development process, or by prejudices in the training data; as a result, the system’s outputs cannot be generalised widely.

🔎 AI Literacy – the ability to understand, use, and critically evaluate AI tools and their outputs. AI literacy encompasses skills like recognizing AI biases, verifying information, and integrating AI responsibly into learning processes.

🔎 Algorithm – a set of rules, steps, or instructions for solving a problem or performing a task. In Artificial Intelligence, the algorithm tells the machine how to find answers to a question or solutions to a problem. Machine Learning systems use many different types of algorithms; common examples include decision trees, clustering algorithms, classification algorithms, and regression algorithms.
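
To make the idea concrete, the short sketch below shows an explicit, hand-written algorithm in Python; the grade thresholds and the function name are invented purely for illustration. Every rule is specified in advance by a person, which is the main contrast with Machine Learning algorithms, whose rules are inferred from data.

```python
# A hand-written algorithm: a fixed set of rules, specified in advance,
# that turns a numeric score into a letter grade. The thresholds and the
# function name are illustrative assumptions only.

def letter_grade(score: float) -> str:
    """Apply a fixed sequence of rules to classify a score."""
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    elif score >= 60:
        return "D"
    return "F"

print(letter_grade(85))  # prints "B"
```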

🔎 Chatbot – a computer program designed to simulate conversation with a human user, usually over the internet; especially one used to provide information or assistance to the user as part of an automated service.

🔎 Critical AI Use – an approach that emphasizes thoughtful and reflective use of AI tools in education. It involves questioning AI outputs, understanding their limitations, and ensuring AI serves as a complement to human cognition rather than a replacement.

🔎 Ethical AI – the development, deployment, and use of AI in a way that ensures compliance with ethical norms, including fundamental rights as special moral entitlements, ethical principles, and related core values. It is the second of the three core elements (alongside lawful and robust AI) necessary for achieving Trustworthy AI.

🔎 Hallucination (in AI) – large language models, such as those behind ChatGPT, are unable to identify whether the phrases they generate make sense or are accurate. This can lead to plausible-sounding but inaccurate text, known as ‘hallucinations’. Hallucinations can also result from biases in training datasets or from the model’s lack of access to up-to-date information.

🔎 Large Language Models (LLMs) – advanced AI models trained on vast amounts of text data to understand and generate human-like language. These models underpin many GenAI tools in education.

🔎 Machine Learning (ML) – a branch of artificial intelligence (AI) and computer science focused on developing systems that learn and adapt without following explicit instructions, imitating the way humans learn and gradually improving in accuracy. ML systems use algorithms and statistical models to analyse and draw inferences from patterns in data.
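
As a minimal sketch (assuming the scikit-learn library and a toy dataset of study habits invented for illustration), the example below shows a decision tree inferring a pattern from labelled examples instead of following hand-written rules:

```python
# A minimal Machine Learning sketch, assuming scikit-learn is installed.
# The tiny dataset (hours studied, classes attended -> pass/fail) is
# invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row describes one fictional student: [hours studied, classes attended]
X = [[1, 2], [2, 3], [8, 9], [9, 10], [3, 1], [7, 8]]
y = ["fail", "fail", "pass", "pass", "fail", "pass"]  # known outcomes

model = DecisionTreeClassifier()
model.fit(X, y)  # the model infers its own rules from the examples

print(model.predict([[6, 7]]))  # predicted outcome for a new student
```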

🔎 Prompt – an input or instruction given to a GenAI model to elicit a desired response. Crafting effective prompts helps guide the AI to produce relevant and accurate outputs.

🔎 Prompt Engineering – the practice of designing and refining prompts to achieve specific outcomes from GenAI models, often involving experimentation to guide the AI’s response effectively.
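
The sketch below illustrates the idea with an invented example: the same request written first as a vague prompt and then refined with a role, audience, length, and output format. The prompts and the send_to_model() helper are hypothetical placeholders for whatever GenAI tool is actually in use.

```python
# A prompt-engineering sketch: the same request as a vague prompt and as a
# refined prompt. The wording and the send_to_model() helper are hypothetical
# placeholders; substitute the GenAI tool or API actually being used.

vague_prompt = "Write about the French Revolution."

refined_prompt = (
    "You are a secondary-school social sciences teacher. Write a 150-word "
    "summary of the main causes of the French Revolution, aimed at "
    "15-year-old students, as three bullet points."
)

def send_to_model(prompt: str) -> str:
    # Placeholder: replace with a call to your chosen GenAI tool or API.
    raise NotImplementedError

# send_to_model(refined_prompt)  # typically yields a more relevant response
```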

