Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them. Project number: 2023-1-NL01-KA220-HED-000155675.

Case 4: Automated feedback to students in data science assignments: improved implementation and results

General information

Alessandra Galassi & Pierpaolo Vittorini, CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, July 11–13, 2021, Bolzano, Italy, Association for Computing Machinery (ACM), New York, NY, USA, 8 pages.

The research discusses the development and implementation of an automated feedback system for assignments in data science. The system focuses on grading assignments that involve R language commands, their outputs, and natural-language comments. The primary objective is to improve students' learning experience by providing fast, detailed feedback that identifies mistakes and offers suggestions for improvement. The study evaluated the effectiveness of the system using student feedback collected through standardised and custom questionnaires.

Description of case

The research presents a case study on the development, implementation, and evaluation of an automated feedback system for data science assignments at the University of L’Aquila, Italy. The system was specifically designed to grade assignments involving R language commands, their outputs, and accompanying natural-language comments. It used static code analysis and machine learning techniques to evaluate the correctness and quality of the R code and the associated comments. The system provided feedback with explanations for grading decisions, identification of errors, and suggestions for improvement. This feedback was intended to be detailed and instructive, so that students could learn from their mistakes.

Lessons learned

Increased engagement: the automated feedback system led to higher levels of student engagement, as students could receive immediate feedback and make corrections quickly.

Perceived usefulness: students found the feedback useful for understanding their mistakes and learning how to correct them.

Clear error identification: the system was effective in clearly identifying errors and providing actionable suggestions for improvement.

Impact: the results show that the automatic feedback provided by the system helped students understand their mistakes, identify the correct statistical method for the problem, and check their preparation for the final exam. Furthermore, most students used the tool iteratively to improve their solutions; only a few used it once before submitting their solution or merely to view the exercises.

Implications for practice

These findings highlight the AI system's potential to grade student work in data science courses accurately, with slight improvements observed when sentence embeddings were combined with distance-based features; a minimal sketch of such a grading pipeline is given below.
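The following Python sketch illustrates, under stated assumptions, how a grader of this kind might combine a check of an R command's output with a comment score built from a sentence embedding and a distance-based feature. It is not the authors' implementation: the sentence-transformers library, the all-MiniLM-L6-v2 model, the check_output and score_comment functions, the 0.7/0.3 weighting, and the 0.6 threshold are all illustrative choices, not details taken from the paper.

```python
# Minimal sketch of an automated grader for a single R exercise.
# Assumptions (not from the paper): sentence-transformers with the
# "all-MiniLM-L6-v2" model stands in for whichever embedding model the
# authors used; weights and thresholds are illustrative only.
from difflib import SequenceMatcher
from sentence_transformers import SentenceTransformer, util

_model = SentenceTransformer("all-MiniLM-L6-v2")


def _normalize(text: str) -> str:
    """Collapse whitespace so cosmetic differences in R output are ignored."""
    return " ".join(text.split())


def check_output(student_output: str, reference_output: str) -> dict:
    """Compare the output of a student's R command with the reference output."""
    correct = _normalize(student_output) == _normalize(reference_output)
    feedback = ("Output matches the expected result."
                if correct else
                "Output differs from the expected result: check the statistical "
                "method and the arguments passed to the R function.")
    return {"correct": correct, "feedback": feedback}


def score_comment(student_comment: str, reference_comment: str) -> dict:
    """Score a natural-language comment by combining a sentence-embedding
    similarity with a distance-based feature."""
    # Semantic similarity: cosine between the two comment embeddings.
    emb = _model.encode([student_comment, reference_comment], convert_to_tensor=True)
    embedding_sim = util.cos_sim(emb[0], emb[1]).item()
    # Distance-based feature: normalized character-level similarity.
    distance_sim = SequenceMatcher(None, student_comment.lower(),
                                   reference_comment.lower()).ratio()
    # Weighted combination (illustrative weights).
    score = 0.7 * embedding_sim + 0.3 * distance_sim
    feedback = ("Comment correctly interprets the result."
                if score >= 0.6 else
                "Comment does not fully match the expected interpretation: "
                "revisit how the test statistic and p-value should be read.")
    return {"score": round(score, 2), "feedback": feedback}


if __name__ == "__main__":
    print(check_output("p-value = 0.03", "p-value = 0.03")["feedback"])
    print(score_comment(
        "The p-value is below 0.05, so we reject the null hypothesis.",
        "Since p < 0.05 the null hypothesis is rejected.",
    )["feedback"])
```

In a deployment along these lines, the feedback strings would be returned to the student after every submission, which is what enables the iterative correct-and-resubmit behaviour reported in the study.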