WP2 AI LITERACY TOOLKIT
UNIVERSITY OF NICOSIA
ALL PARTNERS
SEPTEMBER 30, 2024

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them. Project number: 2023-1-NL01-KA220-HED-000155675.
This work is published under the responsibility of the INFINITE Project consortium. The opinions and arguments employed herein do not necessarily reflect the official views of the European Commission.

The INFINITE AI Literacy Toolkit by the INFINITE Project is licensed under CC BY-NC-SA 4.0. To view a copy of this licence, visit: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).

This licence requires that re-users give credit to the creator. It allows re-users to distribute, remix, adapt, and build upon the material in any medium or format, for non-commercial purposes only. If others modify or adapt the material, they must license the modified material under identical terms.
● BY: Credit must be given to the creator.
● NC: Only non-commercial use of the work is permitted. Non-commercial means not primarily intended for or directed towards commercial advantage or monetary compensation.
● SA: Adaptations must be shared under the same terms.

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them. Project Number: 2023-1-NL01-KA220-HED-000155675.
Document description

Due date of deliverable: 26/09/2024
Submission date: 26/09/2024
File name: WP2_AI Literacy Toolkit
Deliverable responsible: University of Nicosia
Reviewer(s):
Deliverable title:
Revision number: 01
Status: Draft
Dissemination level: PU
Key words: Toolkit, Artificial Intelligence, Higher Education, Professional Practices, Pedagogical Practices

Revision History

Version | Date | Reviewer(s) | Comments
1.0 | 27/08/2024 | Document Reviewer | Relevant information about revision
2.1 | 30/09/2024 | Document Reviewer | Relevant information about revision
2.2 | 09/10/2024 | Document Reviewer | Feedback from partners

Contributors

Organisation | Name(s)
University of Nicosia | Eleni Trichina, Efi Nisiforou
University of Groningen | Francisco José Castillo Hernández, Lucy Avraamidou
University College Dublin | Levent Gorgu, Eleni Mangina
ALL DIGITAL | Selin Tagmat
CARDET | Eleni Shaili
University of the Aegean | Apostolos Kostas, Alivisos Sofos, Dimitrios Spanos, Filippos Tzortzoglou
Table of Contents

Revision History
Contributors
Table of Contents
Section 1: Introduction
Section 2: Theoretical Background
Section 3: AI-based Tools
Section 4: Collection of Guidelines on How HE Academics Can Leverage the Power of AI for Improved Professional and Pedagogical Practices
Section 5: AI Readiness Checklist
Section 6: Case Studies
References
Section 1: Introduction

The INFINITE AI Literacy Toolkit is an interactive support package that helps Higher Education (HE) academics integrate Artificial Intelligence (AI) tools into their professional and pedagogical practice. The specific objectives are to:
● raise awareness about the affordances and challenges of AI for stimulating innovative professional and pedagogical practice in HE;
● compare national/European data, results, and needs in terms of the integration of AI-based approaches in HE;
● equip HE academics with practical guidelines and best practices on how to select and integrate data- and AI-based tools for professional and pedagogical practice;
● encourage HE academics to use AI tools with ethical responsibility and integrity in their professional and pedagogical practice;
● promote HE institutions' digital transformation by preparing the HE community to leverage AI for professional and pedagogical practice.

The Toolkit will be a foundational guide on best practices, which can be easily adopted and adapted by HEIs. The research activity that is part of this WP provides the partnership with a deep understanding of, and expertise on, the affordances and complexities of using AI-powered tools, which in turn supports high-quality deliverables that meet the target audience's needs. A platform will also be given to key people in the field to express themselves freely and suggest the changes they wish to see in the HE sector.
Section 2: Theoretical Background

This section presents the theoretical background of the Toolkit, with definitions of key terms and notions related to the use of AI in HE, along with the role of these advanced technologies in education, their challenges, and their benefits. Outlining the definitions early on sets a common ground for using the Toolkit: it allows all readers and users to be on the same page, regardless of their current knowledge level.

Glossary of key terms

Adaptive Learning
Adaptive learning is a pedagogical approach that utilises advanced technology, specifically machine-learning algorithms, to offer personalised learning experiences tailored to individual students' needs, preferences, knowledge level and learning style. It uses data-driven algorithms and AI to dynamically adjust the content, the delivery, and the pace of instruction based on students' performance and engagement. By adapting to the specific requirements of each student, adaptive learning promotes effective and efficient learning, increases engagement, and enhances educational outcomes (Gligorea et al., 2023).

Artificial Intelligence (AI)
Artificial Intelligence (AI) in education is a promising field that has attracted researchers' attention. AI is the machine's capacity to think like a human, learn and evolve (Limna et al., 2022). In educational practices, AI creates new opportunities, potentials, and challenges. It can support administrative tasks such as grading, as well as teaching and learning activities such as feedback provision. To some extent, AI can act like a tutor by explaining concepts, giving feedback, and modifying teaching, as in the case of adaptive systems, but it can also serve as a pedagogical tool that students use during the learning process (e.g., for cognitive tasks and scaffolding) (Hwang et al., 2020).
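To make the Adaptive Learning entry above more concrete, the sketch below illustrates, in a highly simplified form, how an adaptive system might choose the difficulty of the next exercise from a learner's recent performance. It is an illustrative assumption rather than a description of any specific product; the function name and threshold values are hypothetical.

```python
from typing import List

def next_difficulty(recent_scores: List[float], current_level: int,
                    min_level: int = 1, max_level: int = 5) -> int:
    """Pick the difficulty level of the next exercise.

    recent_scores: fraction of correct answers (0.0-1.0) on the last few items.
    current_level: the learner's current difficulty level.
    The thresholds below are illustrative placeholders, not empirically tuned.
    """
    if not recent_scores:
        return current_level        # no data yet: keep the current level
    accuracy = sum(recent_scores) / len(recent_scores)
    if accuracy > 0.85:             # learner is comfortable: increase the challenge
        return min(current_level + 1, max_level)
    if accuracy < 0.50:             # learner is struggling: reduce the challenge
        return max(current_level - 1, min_level)
    return current_level            # otherwise stay at the same level

# Example: a learner at level 3 who answered 90% of recent items correctly
print(next_difficulty([1.0, 0.8, 1.0, 0.8], current_level=3))  # -> 4
```

Real adaptive platforms replace such fixed thresholds with statistical learner models, but the underlying control loop of measuring performance and adjusting content, delivery and pace is the same.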
Automatic grading system
An automatic grading system is a professional computer programme based on AI that simulates a teacher's behaviour to assign grades to student tasks in an educational setting. It evaluates student knowledge, analyses responses, provides feedback, and creates personalised training programmes, and it is used in many AI education applications. Automatic grading systems provide the student with an evaluation score during his/her test. This method can assist teachers in better understanding their students' learning situations, while students become more aware of their learning achievement and mastery of knowledge. Overall, automatic grading systems can deal with the complexities of the teaching context and support students' learning process by giving them feedback and guidance (Limna et al., 2022; Yufeia et al., 2020).

Automation
Automation refers to computer systems carrying out tasks that typically require human intervention. By automating repetitive tasks like timetabling, attendance, and enrolment, schools and teachers can free up time for more meaningful interactions with students (European Commission, 2022).

Bias
Bias is a predisposition towards or against something that can manifest in AI systems in various ways. Data-driven AI, often built using machine learning, can inherit biases present in the training data. Logic-based AI, such as rule-based systems, may reflect the biases of the knowledge engineer who defines the rules. Bias is not always harmful; it can be beneficial in certain contexts. However, when it leads to discriminatory or unfair outcomes, it is a concern. It can arise unintentionally, due to limited exposure to diverse situations, or intentionally, if a system is designed to favour a particular group (European Commission, 2022).
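As a simple illustration of the automatic grading entry above, the sketch below scores a short free-text answer against a rubric of expected key points and returns feedback. It is a deliberately minimal, rule-based stand-in; the rubric, marks and feedback messages are hypothetical, and real grading systems typically combine such checks with machine-learning models.

```python
def grade_answer(answer: str, rubric: dict) -> dict:
    """Score a short answer by checking which expected key points it mentions.

    rubric maps a key point (keyword or phrase) to the marks it is worth.
    Returns the total score, the maximum score and feedback per key point.
    """
    answer_lower = answer.lower()
    feedback, score = [], 0
    for key_point, marks in rubric.items():
        if key_point.lower() in answer_lower:
            score += marks
            feedback.append(f"Covered: '{key_point}' (+{marks})")
        else:
            feedback.append(f"Missing: '{key_point}' (0/{marks})")
    return {"score": score, "max_score": sum(rubric.values()), "feedback": feedback}

# Hypothetical rubric for the question "What does overfitting mean?"
rubric = {"training data": 2, "generalise": 2, "unseen data": 1}
result = grade_answer("The model memorises the training data and fails to generalise.", rubric)
print(result["score"], "/", result["max_score"])   # -> 4 / 5
for line in result["feedback"]:
    print(line)
```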
Chatbots
Chatbots, often referred to as dialogue systems or conversational agents, are programmes that communicate with people through text or voice commands in a way that mimics human-to-human conversation (European Commission, 2022). They are increasingly used in HE through various AI technologies. Their strength lies in their ability to engage users in a natural, conversational tone. For example, Georgia State University implemented a text-based chatbot called "Pounce" to assist students with tasks such as registration, admissions, financial aid, and other administrative processes (Akgun & Greenhow, 2021).

Facial recognition systems
Facial recognition systems are utilised to track and analyse students' facial expressions. These systems offer valuable insights into student behaviour during learning activities, enabling educators to respond accordingly. This, in turn, supports teachers in adopting learner-focused approaches and enhancing student engagement (Akgun & Greenhow, 2021).

Learning Analytics
The digital tools used in a course make participants' activities and interactions available as data, providing teachers and learning designers with vast amounts of information about learners' progress. By collecting and analysing such data properly, education stakeholders can take informed, practical action (Klašnja-Milićević et al., 2020).

Personal data
Information relating to an identified or identifiable natural person, either directly or indirectly, by reference to one or more elements specific to that person (European Commission, 2022).
Personalised learning systems
Personalised learning systems, also referred to as adaptive learning platforms or intelligent tutoring systems, are typical and valuable applications of AI to support students and teachers. These platforms give students access to a range of learning materials based on their specific learning needs and subjects (Akgun & Greenhow, 2021).

Predictive analytics
Predictive analytics refers to the use of statistical algorithms and machine learning techniques to make predictions about the future using current and historical data (European Commission, 2022). In education, it is primarily employed to recognise and uncover patterns related to students by leveraging statistical data. For instance, these systems can be used to identify university students who may be in danger of failing or dropping out of a course. By pinpointing these individuals, educators can step in and provide the necessary support to help them succeed (Akgun & Greenhow, 2021).

Virtual Assistant
A virtual personal assistant is a software application that can respond to spoken commands and carry out actions like dictation, reading aloud, and calendar management (European Commission, 2022).

Virtual Reality
Virtual reality technology uses computer-generated imagery and haptic feedback to create a sense of presence in a simulated world. It provides immersive experiences that can be customised to individual needs and preferences (European Commission, 2022).
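To illustrate the predictive analytics entry above, the sketch below trains a small logistic regression model to flag students at risk of failing, using scikit-learn. The feature names, numbers and threshold are purely invented, chosen only to show the shape of such a pipeline rather than any real institutional model.

```python
# A minimal at-risk-student prediction sketch with scikit-learn; the data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: attendance rate, average assignment score, weekly LMS logins (hypothetical features)
X = np.array([
    [0.95, 82, 14], [0.60, 55, 3], [0.88, 74, 10], [0.40, 38, 1],
    [0.92, 90, 12], [0.55, 48, 2], [0.70, 65, 6],  [0.30, 30, 1],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = failed / dropped out, 0 = passed

# Scale the features, then fit a logistic regression classifier
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Estimated risk for two new (invented) students
new_students = np.array([[0.50, 45, 2], [0.90, 80, 11]])
risk = model.predict_proba(new_students)[:, 1]
for prob in risk:
    print(f"estimated risk: {prob:.2f}", "-> flag for support" if prob > 0.5 else "")
```

In practice such a model would be trained on institutional records under the data protection and fairness safeguards discussed in Section 4, and its predictions would inform, not replace, human judgement.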
Section 3: AI-based tools

The ever-evolving field of AI is transforming how we approach learning, working, and even creating. This new landscape offers a plethora of AI-based tools designed to empower researchers, learners, educators, and collaborators. From automating research tasks to fostering lifelong learning, these tools hold immense potential to streamline workflows, spark creative ideas, and enhance the overall learning and assessment experience. Based on the desk research conducted in Cyprus, Greece, the Netherlands, Ireland and Belgium, we list and explain below some of these possibilities, exploring AI applications for research (like Elicit), lifelong learning (like ChatGPT), collaboration (like Bit.ai), and teaching, learning, and assessment (including grading with Gradescope, student support with Adaptiv, and even writing assistance with AI tools like ChatGPT, Gemini and Quillbot).

The table below presents a summary of the AI-based tools. We have grouped the tools according to the type of support they offer, i.e., which teaching and learning aspects they can augment.

Personalised Learning & Assessment
Support offered:
● For students (adaptive learning, self-assessment)
● For teachers (offers recommendations for personalised teaching and accommodations, analyses student work)
AI tools: ALEKS, Century, Comproved, DreamBox by Discovery Education, Engage, Knewton Alta, Smart Sparrow, Simbound

Teaching, Learning and Assessment
Support offered:
● For teachers (assists teachers in the design of a course, the creation of the material, and managing coursework and grading)
AI tools: ClassVR, Cognii, Course Hero, Designs.ai, Dodona, Dwengo Simulator, Fast ForWord, Gradescope, MATHia

Conversational Learning & Skills Development
Support offered:
● For teachers (improves communication and practical skills)
AI tools: Alelo, AutoTutor, Braille AI Tutor, Dwengo Simulator, Linguineo

Research & Writing Assistance
Support offered:
● For students & teachers (supports students, teachers, and researchers with research and writing tasks)
AI tools: ASReview, Bing Chat, ChatGPT, ChatPDF, Connected Papers, Consensus, Elicit, Gemini, Grammarly, Quillbot, ResearchRabbit, Squire AI

Learning Collaboration & Knowledge Management
Support offered:
● For students & teachers (collaborate effectively and manage knowledge resources)
AI tools: Bit.ai, NOLEJ

Other tools
Support offered:
● For students & teachers (support content creation, accessibility, and improving the learning experience)
AI tools: Bing Image Creator, Cognii Chatbot, DALL-E, Deepl, D-ID, Ivy Chatbot, Midjourney, Nuance Dragon, Quizlet, Sonix, zSpace
This list provides a broad picture of how AI is impacting various aspects of education. The potential applications continue to evolve, offering possibilities for personalised learning, enhanced research capabilities, and improved teaching support.

Section 4: Collection of Guidelines on how HE academics can leverage the power of AI for improved professional and pedagogical practices

As the results of the desk research conducted under WP2 have revealed, AI-based tools have broad applications in HE, benefiting both teachers and students. They can streamline administrative tasks, inform data-driven decisions, and personalise learning. These tools also assist with assessment and feedback, enhancing student engagement and virtual support. This potential can significantly improve teaching quality, administrative efficiency and the overall learning experience.

While AI offers many advantages, its integration into education raises ethical, legal, technological, and implementation concerns. These challenges require clear guidelines, training, and a focus on responsible use. The study also emphasises the need for critical evaluation of AI tools due to potential reliability and effectiveness issues. Therefore, given that AI applications could lead to harmful consequences, HE staff should ensure that the AI tools they use are reliable, fair, safe, and trustworthy, and that the data involved are secure and protect the privacy of individuals.

The guidelines provided can help HE staff understand the affordances of AI and raise awareness of the possible risks, so that all stakeholders engage positively, critically and ethically with AI systems to maximise their potential.
The guidelines provided below are based on shared guidelines such as the OECD Framework for the Classification of AI Systems [1], the OECD AI Principles [2], the EC's Ethics Guidelines for Trustworthy AI [3], the EC's 2022 Ethical guidelines on the use of AI and data in teaching and learning for educators [4], and the recent UNESCO AI competency frameworks [5].

According to the guidelines and frameworks above, several key principles underpin the ethical use of AI and data in teaching, learning, and assessment. These can be categorised under guidelines related to understanding AI systems, ethical considerations, and guidelines related to practical implementation.

Understanding AI Systems
● Assess Purpose: Clearly define the intended purpose of any AI tool you plan to use. Align it with your educational objectives and the needs of your students.
● Evaluate Autonomy: Determine the level of autonomy the AI system has. This will help you understand the extent of human oversight required and potential risks.
● Consider Environment: Be aware of the social, cultural, and legal context in which the AI system operates. This will help you anticipate potential challenges and ensure appropriate use.
● Assess AI Competency: Evaluate your own AI literacy and consider professional development opportunities to deepen your understanding of AI applications in education.

[1] https://www.oecd.org/en/publications/oecd-framework-for-the-classification-of-ai-systems_cb6d9eca-en.html
[2] https://oecd.ai/en/ai-principles
[3] https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
[4] https://education.ec.europa.eu/news/ethical-guidelines-on-the-use-of-artificial-intelligence-and-data-in-teaching-and-learning-for-educators
[5] https://unesdoc.unesco.org/ark:/48223/pf0000391104; https://www.unesco.org/en/articles/generation-ai-navigating-opportunities-and-risks-artificial-intelligence-education
Ethical Considerations
● Beneficial Use: Ensure that AI tools are used to benefit students and enhance their learning experience. Focus on personalised learning, fostering critical thinking, and addressing inequalities.
● Transparency: Explain to students how AI systems work and how they are used in the learning process. Encourage students to critically evaluate AI outputs. You could also consider using open-source AI tools that are transparent and allow for customisation and modification.
● Fairness: Avoid using AI tools that could create biases or discrimination. Ensure all students have equal access to resources and opportunities, addressing potential gender, socioeconomic, or ability-based disparities.
● Privacy and Data: Respect students' privacy and handle their data responsibly. Adhere to data protection regulations and obtain informed consent when collecting or using student data.
● Human Agency: Maintain human oversight and allow students to have a say in their learning process. Encourage students to explore AI responsibly and creatively.
● Democratic Values: Ensure that AI tools are used in education in a way that aligns with democratic principles, so that AI promotes and supports democratic values such as freedom of expression and inquiry (open discussion), equality of opportunity and access, and accountability.

Practical Implementation
● Professional Development: Seek training and professional development on AI to understand its capabilities and limitations. Stay updated on the latest developments in AI and adjust your practices accordingly. Embrace lifelong learning and encourage a culture of continuous learning among students.
● Critical Evaluation: Evaluate AI tools carefully, considering their effectiveness, reliability, alignment with your educational goals, and potential impact on student learning outcomes.
● Student Engagement: Involve students in the decision-making process regarding AI use in the classroom. Encourage them to explore AI responsibly and participate in discussions about its potential benefits and risks.
● Ethical Dilemmas: Be prepared to address ethical dilemmas that may arise from AI use and have a plan for responding to such situations. Develop a culture of open discussion and ethical decision-making in the classroom.
● Promote AI Literacy: Integrate AI literacy into your curriculum, encouraging students to understand how AI works, its potential benefits and risks, and how to use it responsibly.
● Discuss with colleagues: Collaborate with other educators to make more informed decisions and ensure a more consistent approach to using AI and data systems across schools.
● Collaborate with other schools: Share experiences and best practices and learn how other schools have implemented AI systems. This can also be useful in identifying and dealing with reliable providers of AI and data systems that adhere to the key requirements for trustworthy AI.

Figure 1 below presents a proposed framework that visualises the key principles for ethical and effective AI use in HE. A strong foundation in understanding AI systems is crucial, as it enables educators to assess the purpose, autonomy, and environmental context of AI tools. Building upon this foundation, ethical considerations, such as ensuring beneficial use, transparency, fairness, privacy, and human agency, must guide the implementation of AI. Finally, practical guidelines, including professional development, critical evaluation, student engagement, addressing ethical dilemmas, promoting AI
literacy, and fostering collaboration, provide a roadmap for educators to successfully integrate AI into their classrooms while upholding ethical standards and maximising its benefits for students.

Practical Example: Using AI-powered Adaptive Learning for Personalised Instruction

Scenario: A primary school wants to personalise maths instruction for students using an Intelligent Tutoring System (ITS). The school implements an ITS that adapts maths problems to each student's individual learning pace and style. The system uses data on student performance, engagement, and errors to predict their knowledge level and tailor subsequent problems accordingly.

Implementation following the Framework:

Understanding AI Systems
● Purpose: The school clearly defines the purpose: to provide personalised maths instruction and track student progress.
● Autonomy: The ITS has a degree of autonomy in adapting problems, but human teachers still oversee the learning process and provide guidance.
● Environment: The school considers the age and developmental level of students, ensuring the ITS is appropriate for their cognitive abilities.
● AI Competency: Teachers receive training on the ITS to understand its capabilities and limitations, as well as how to interpret student data.
Ethical Considerations
● Beneficial Use: The ITS is used to help students achieve their maths learning goals and close any knowledge gaps.
● Transparency: Teachers explain to students how the ITS works and how it adapts to their individual needs. The system provides clear feedback on student progress.
● Fairness: The ITS is designed to avoid bias in its recommendations, ensuring all students have equal access to resources and support.
● Privacy and Data: The school ensures that student data is handled securely and in compliance with privacy regulations.

Practical Implementation
● Professional Development: Teachers receive ongoing training on the ITS to stay updated on its features and best practices.
● Critical Evaluation: The school regularly evaluates the effectiveness of the ITS in improving student learning outcomes and addresses any issues or concerns.
● Student Engagement: The ITS is designed to be engaging and interactive, with features like gamification and real-time feedback to motivate students.
● Ethical Dilemmas: The school has a plan to address ethical dilemmas that may arise, such as concerns about overreliance on AI or potential biases in the system.
● Promote AI Literacy: Students are taught about how AI works and how it is used in the ITS, fostering understanding and critical thinking.
● Discuss with colleagues: Teachers collaborate with each other to share experiences and best practices in using the ITS.
Figure 1: Visualised Framework. The figure shows the three layers of the framework and their elements: Understanding AI Systems (Assess Purpose, Evaluate Autonomy, Consider Environment, Assess AI Competency); Ethical Considerations (Beneficial Use, Transparency, Fairness, Privacy and Data, Human Agency, Democratic Values); and Practical Implementation (Professional Development, Critical Evaluation, Student Engagement, Ethical Dilemmas, AI Literacy, Collaboration).
Section 5: AI Readiness Checklist

This section provides a comprehensive checklist designed to help HE academics assess their level of readiness in using AI for professional and pedagogical practices. Based on existing tools such as the Readiness Assessment for Faculty Members by the National Science Foundation and the Association for Computing Machinery, and the AI Readiness Self-Assessment Tool by the AI Education Project at Harvard University, this checklist aims to provide a comprehensive framework for educators to evaluate their understanding, skills, and preparedness to effectively integrate AI into their teaching, learning, and assessment processes.

AI Readiness Checklist

For each criterion below, answer Yes or No and add any comments.

1. AI Awareness and Understanding
● Are you familiar with key AI concepts (e.g., machine learning, neural networks)?
● Do you understand how AI is influencing HE and your discipline?
● Have you explored AI-enhanced tools for teaching, assessment and learning?
● Do you recognise the ethical implications of AI in educational contexts (e.g., bias, fairness)?
● Are you aware of how AI can assist in research workflows (e.g., data analysis, automation)?
● Are you aware of the potential benefits and challenges of using AI in education?
● Can you identify examples of AI-powered educational tools and applications?
2. Pedagogical Integration of AI
● Have you considered how AI tools can enhance your teaching methods (e.g., project-based learning)?
● Do you use AI to personalise learning experiences for students?
● Have you used or explored AI-driven educational tools like intelligent tutoring systems or virtual assistants?
● Can the AI tools you use provide adaptive learning pathways for students based on their progress?
● Are you integrating AI-related content into your curriculum to improve student AI literacy?
● Do the AI tools align with your specific learning objectives and outcomes?
● Does the AI tool provide formative feedback and learning analytics to assess student performance?
● Are AI-based insights being used to improve student engagement and success rates?

3. Professional Development in AI
● Have you participated in professional development workshops or courses on AI in education?
● Do you engage with AI research communities or attend AI-related academic conferences?
● Are you actively seeking AI-focused educational resources or collaborations with AI experts?
● Are you prepared to integrate new AI technologies into your teaching practice?
● Have you considered how AI can enhance your research methodologies or teaching strategies?
● Do you collaborate with other faculty members or industry experts on AI-related projects?
● Are you committed to staying up-to-date on the latest developments in AI and its applications in education?

4. Ethical Use of AI in Education and Research
● Are you aware of the ethical implications of using AI in education?
● Do you consider data privacy when using AI tools in education?
● Are the AI tools you use compliant with data protection regulations (e.g., GDPR)?
● Are there clear policies on how student data is handled, stored, and anonymised by AI tools?
● Can students and educators control the AI tool's data collection and usage?
● Are you aware of any biases that may exist in the AI algorithms used in your classroom?
● Does the AI tool promote fairness, diversity, and inclusivity?
● Is there transparency in how AI decisions are made (e.g., in grading, feedback)?
● Are ethical implications considered when integrating AI into research (e.g., automation of analysis, bias)?
5. Institutional Support and AI Ecosystem
● Does your institution provide resources for AI education (e.g., funding, infrastructure, training)?
● Is there institutional support for integrating AI into teaching (e.g., LMS integration, AI tool licences)?
● Are there policies and frameworks in place to support AI ethics and responsible use?
● Are faculty members encouraged to engage with AI-related research or curriculum development?
● Does your institution offer collaborative opportunities to work on AI-related projects?
● Is there administrative support for developing and funding AI-driven teaching initiatives?
● Does your institution have partnerships with AI companies or research institutions?

After completing the AI Readiness Checklist, it is essential to reflect on your responses to identify areas of strength and areas where further development is needed. Consider questions such as: In which areas of AI do you feel most confident? Where do you see opportunities for growth? What kind of support, whether institutional, technical, or pedagogical, do you require to advance your AI readiness? Additionally, it is crucial to reflect on the ethical implications of AI in education, the potential benefits and risks, collaboration opportunities, and ensuring AI accessibility and inclusivity for all students. This self-reflection will help you tailor your professional development and AI integration efforts to meet your specific needs and goals.
Section 6: Case Studies

This section provides thirty-six (36) national/EU case studies that offer evidence-based paradigms of AI tool integration in HEIs, along with its affordances and challenges for professional and pedagogical practice.

Case Study 1: An integrated framework for developing and evaluating an automated lecture style assessment system

General information
Dimitriadou, E., & Lanitis, A. (2023). An integrated framework for developing and evaluating an automated lecture style assessment system. arXiv (Cornell University). https://doi.org/10.48550/arxiv.2312.00201
The study aims to develop and evaluate an integrated system that provides an automated evaluation of an instructor's lecture style. The system is intended to help teachers by giving instant feedback on their lecturing style, in order to improve quality and enhance student learning experiences.

Description of case
The proposed application analysed and extracted measurable biometric characteristics from video cameras and audio sensors using machine learning. These characteristics included facial expressions, body activity, speech rate and tone, hand movements, and facial pose. Combined, these features provided a score reflecting the quality of the lecture style. The system's performance was evaluated by comparing its assessments with human evaluations and through feedback from education officers, teachers, and students.

Lessons learned
The results indicated that the system effectively provided automated feedback that participants received well. It performed comparably to humans and, in some cases, even outperformed them. Participants appreciated the application's utility in enhancing lecture delivery through immediate feedback.

Implications for practice
With similar or even smaller differences between AI-driven and human evaluation of lecture quality, the system can be used in natural settings (e.g., a university classroom) to support teachers in improving their lecturing and increasing student engagement. The researchers aim to further improve the system by refining the biometric metrics used in the automated lecture-style evaluation
system, expanding its capabilities to include additional and wearable cameras, and conducting real-time testing in classroom settings.

Case Study 2: Student action recognition for improving teacher feedback during tele-education

General information
Dimitriadou, E., & Lanitis, A. (2024). Student action recognition for improving teacher feedback during tele-education. IEEE Transactions on Learning Technologies, 17, 569–584. https://doi.org/10.1109/tlt.2023.3301094
The aim of the research was to develop and evaluate a student action recognition system, reviewing students' behaviour, participation and disaffection, intended to support teacher feedback during distance education. The system was designed to monitor student actions in online courses while protecting student privacy and providing real-time feedback to educators about student engagement without direct visual contact.

Description of case
An AI system was developed to recognise specific student actions using deep neural network architectures such as GoogleNet, Inception-v3, and Faster R-CNN. The system used videos of student actions, processed locally on student devices, to train these networks. The effectiveness of the system was assessed through a comprehensive user evaluation involving students, parents, and educators, who provided feedback via online questionnaires and interviews.

Lessons learned
The results indicated that the system was effective in recognising student actions and was well received by all stakeholders. Educators, in particular, found it useful for improving interaction and engagement in online settings. The system was well accepted due to the personal data protection measures applied.

Implications for practice
The AI system could enhance the effectiveness of online learning and distance education by providing insights into student behaviour, thus facilitating better educational outcomes.
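Case Study 2 relies on image-classification networks that are fine-tuned to recognise a small set of student actions. The sketch below shows, under stated assumptions, what such fine-tuning typically looks like in PyTorch; it uses a ResNet-18 backbone with random dummy frames and four hypothetical action classes purely for illustration, whereas the study itself used GoogleNet, Inception-v3 and Faster R-CNN trained on real video data.

```python
# Illustrative fine-tuning loop for frame-level student action recognition.
# Assumptions: 4 hypothetical action classes; random tensors stand in for video frames.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_ACTIONS = 4  # e.g., writing, raising hand, looking away, absent (hypothetical labels)

model = models.resnet18()                      # randomly initialised; pretrained weights could be loaded
model.fc = nn.Linear(model.fc.in_features, NUM_ACTIONS)  # replace the classification head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch: 8 RGB frames of 224x224 pixels with random action labels
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_ACTIONS, (8,))

model.train()
for epoch in range(2):                         # tiny loop just to show the mechanics
    optimizer.zero_grad()
    logits = model(frames)                     # shape: (batch, NUM_ACTIONS)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.3f}")
```

Frame-level classification like this is only one ingredient; the study combined it with object detection and on-device processing of the videos to preserve student privacy.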
Case Study 3: Ensuring academic integrity and trust in online learning environments: A longitudinal study of an AI-centered proctoring system in tertiary educational institutions

General information
Fidas, C., Belk, M., Constantinides, A., Portugal, D., Martins, P., Pietron, A. M., Pitsillides, A., & Avouris, N. (2023). Ensuring academic integrity and trust in online learning environments: A longitudinal study of an AI-centered proctoring system in tertiary educational institutions. Education Sciences, 13(6), 566. https://doi.org/10.3390/educsci13060566
The research aimed to enhance the credibility of online examinations in HE by identifying scenarios/cases that threaten the credibility of online exams and proposing AI-driven solutions to address these threats. A longitudinal study involving stakeholders from three European HE institutions was conducted.

Description of case
The researchers designed and implemented an intelligent system titled TRUSTID. The system incorporates advanced biometric technologies for identity verification and continuous monitoring. Students first register their biometric data, such as facial and vocal characteristics, which TRUSTID continuously uses to verify the student's identity throughout the exam. The system is privacy-friendly, allowing students to securely control their personal biometric information. Additionally, TRUSTID monitors behavioural patterns and physical examination contexts, detecting unusual activities that may be related to cheating. This integrated system ensures that the same student remains present throughout the test and supports examiners by offering real-time alerts and a secure, user-friendly interface for data security.

Lessons learned
The TRUSTID system, evaluated by stakeholders, showed resilience against impersonation attacks and received positive feedback in terms of usability and user experience. The system was robust in monitoring student behaviours and identifying anomalies, receiving positive feedback from students and instructors for its usability and ease of use. Privacy concerns were addressed with a privacy-preserving biometric wallet, allowing secure control and sharing of biometric data. Overall, the TRUSTID system was well received across various stakeholder groups, showing its potential applicability and effectiveness in maintaining academic integrity in online educational settings.
Implications for practice
The system has the potential to enhance the integrity of online examinations by using advanced biometric verification methods to prevent common threats such as impersonation and cheating.

Case Study 4: Automated Feedback to Students in Data Science Assignments: Improved Implementation and Results

General information
Galassi, A., & Vittorini, P. (2021). Automated feedback to students in data science assignments: Improved implementation and results. In CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, July 11–13, 2021, Bolzano, Italy. Association for Computing Machinery (ACM), New York, NY, USA, 8 pages.
The research discusses the development and implementation of an automated feedback system for assignments in data science. This system focuses on grading assignments that involve R language commands, their outputs, and natural language comments. The primary objective is to improve students' learning experiences by providing fast and detailed feedback that can identify mistakes and offer improvement suggestions. The study evaluated the effectiveness of this system using student feedback collected through standardised and custom questionnaires.

Description of case
The research presents a case study on the development, implementation, and evaluation of an automated feedback system for data science assignments at the University of L'Aquila, Italy. The system was specifically designed to grade assignments involving R language commands, their outputs, and accompanying natural language comments. It used static code analysis and machine learning techniques to evaluate the correctness and quality of the R code and the associated comments. The system provided feedback with explanations for grading decisions, identification of errors, and suggestions for improvements. This feedback was intended to be detailed and instructive, to help students learn from their mistakes.

Lessons learned
● Increased engagement: The automated feedback system led to higher levels of student engagement, as students could receive immediate feedback and make corrections quickly.
● Perceived usefulness: Students found the feedback useful in understanding their mistakes and learning how to correct them.
● Clear error identification: The system was effective in clearly identifying errors and providing impactful suggestions for improvement.
● Impact: The results show that the automatic feedback provided by the system was useful for students to understand their mistakes, to identify the correct statistical method to solve the problem, and to verify their preparation for the final exam. Furthermore, most students used the tool iteratively to improve their solutions; only a few used it once before submitting their solution or just to see the exercises.

Implications for practice
These findings highlight the AI system's potential for accurately grading student work in data science courses, with slight improvements observed when combining sentence embeddings with distance-based features.

Case Study 5: An AI-Based System for Formative and Summative Assessment in Data Science Courses

General information
Amelio, A., & De Medio, C. (2021). An AI-based system for formative and summative assessment in data science courses. International Journal of Artificial Intelligence in Education, 31, 159–185. https://doi.org/10.1007/s40593-020-00230-2
The paper discusses an AI-based system designed for formative and summative assessments in data science courses. The system automates the grading process and provides feedback to both students and professors. The study's aim is to evaluate the system's effectiveness by comparing the time taken for grading, the accuracy of the grading, and the impact on student learning outcomes.

Description of case
The study evaluated time efficiency (grading manually versus grading with the AI tool), grading accuracy (comparing the AI tool's accuracy with that of manual grading), learning outcomes (the impact of automated feedback on student performance in final exams), and the usability of the tool, based on students' feedback.

Lessons learned
The system was expected to enhance student learning by offering timely and accurate feedback. Model performance showed only a slight improvement when distance-based features were included along with sentence embeddings, which suggests that sentence embeddings
alone were effective in representing the semantic content of the answers, especially when the answers and correct solutions had high lexical overlap. The system was useful for both formative and summative assessments. In formative assessments, students used the tool for homework and received automated feedback, which was later compared to manual feedback. In summative assessments, exams were corrected either manually or through the AI system, allowing for a comparison of performance between human and AI grading.

Implications for practice
● Efficiency in grading: the AI system reduces grading time, allowing instructors to focus on other educational tasks, and ensures consistent, unbiased evaluations.
● Enhanced student feedback: it provides immediate, detailed feedback, helping students learn and improve continuously.
● Scalability: it facilitates handling large classes, making it ideal for MOOCs and courses with large enrolments.
● Focus on learning: it frees up instructor time to offer personalised support and improve teaching strategies.

Case Study 6: Enhancing Authentic Assessment in Higher Education: Leveraging Digital Transformation and Artificial Intelligence

General information
Perla, L., & Vinci, V. (2023). Enhancing authentic assessment in higher education: Leveraging digital transformation and artificial intelligence. In AIxIA 2023 - 1st International Workshop on High-performance Artificial Intelligence Systems in Education (pp. 1-15). CEUR Workshop Proceedings. http://ceur-ws.org/VolXXXX/paperXXX.pdf
The study focuses on implementing authentic assessment in HE through digital transformation and AI. It explores the integration of AI-based tools to improve the authenticity, personalisation, and flexibility of assessment methods, emphasising the shift towards hybrid teaching and online learning.

Description of case
The study explores the integration of digital transformation and AI to enhance authentic assessment in HE. The research focuses on leveraging AI to improve assessment methods, making them more aligned with real-world tasks and challenges. The authors propose that AI can be used to create more dynamic, personalised, and effective evaluation tools that better reflect students' learning progress and skills. This approach aims to move beyond traditional assessment techniques and incorporate digital technologies to support both students and educators in achieving more meaningful educational outcomes.
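Case Studies 4 and 5 both describe systems that compare a student's free-text answer against a reference solution, with Case Study 5 reporting that sentence embeddings alone captured much of the semantic content of the answers. The sketch below shows a minimal version of that idea using the sentence-transformers library; the model name, example texts and pass threshold are illustrative assumptions and do not reproduce the systems evaluated in those studies.

```python
# Minimal semantic-similarity scoring of a student answer against a reference solution.
# Assumes the sentence-transformers package is installed; model and threshold are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small general-purpose embedding model

reference = "A p-value below 0.05 leads us to reject the null hypothesis at the 5% level."
student = "Since p < 0.05 we reject the null hypothesis."

emb_ref, emb_student = model.encode([reference, student])

# Cosine similarity between the two sentence embeddings
similarity = float(np.dot(emb_ref, emb_student) /
                   (np.linalg.norm(emb_ref) * np.linalg.norm(emb_student)))

PASS_THRESHOLD = 0.7  # hypothetical cut-off; real systems calibrate this against human grades
print(f"similarity = {similarity:.2f}")
print("provisional mark:", "accept" if similarity >= PASS_THRESHOLD else "flag for manual review")
```

As Case Study 5 notes, similarity scores of this kind work best when the answer and the correct solution overlap lexically; in practice they are combined with additional features and human oversight before a grade is assigned.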