Xavier Ochoa
New York University (NYU, USA)
Learning Analytics: Looking Ahead
What will Learning Analytics become in the coming years? This keynote argues that the future of the field will be defined by two major interconnected shifts: the consolidation of multimodal evidence and the growing role of Artificial Intelligence. As learners increasingly interact through speech, writing, gesture, collaboration, and physical action, Learning Analytics must expand its lens to capture the full richness of human learning. At the same time, Artificial Intelligence is transforming what can be modeled, interpreted, and supported, enabling more responsive and personalized forms of feedback and intervention. Moreover, AI could bring a paradigm shift in how Learning Analytics is practiced. This talk offers a forward-looking perspective on how multimodal and intelligent Learning Analytics can help us better understand complex learning processes while also confronting critical challenges around validity, fairness, ethics, and the role of human judgment in educational decision-making.
Biography
Xavier Ochoa is an Associate Professor of Learning Analytics in the Department of Administration, Leadership, and Technology at the Steinhardt School of Culture, Education, and Human Development. He holds a Ph.D. in Engineering (Computer Sciences) from the University of Leuven, Belgium (2008), an M.Sc. in Applied Computer Sciences from the Vrije Universiteit Brussel, Belgium (2002), and a B.S. in Computer Science from Escuela Superior Politécnica del Litoral (ESPOL), Ecuador (2000).
Since the beginning of his research career, Xavier has worked at the intersection of education and technology. His research focuses on leveraging cutting-edge technology to augment human capabilities in education and drive innovation in pedagogical practices. He began his career with quantitative analysis and modeling of the production, discovery, consumption, and reuse of digital learning materials. This work led him to the emerging field of Learning Analytics. Currently, Xavier’s work is centered on Multimodal Learning Analytics, a sub-field that combines advanced quantitative techniques from Learning Analytics with recent advances in Artificial Intelligence (AI) and low-cost sensors. His goal is to develop and study tools that enhance awareness, self-reflection, and decision-making for students and instructors in both physical and online learning environments, especially for the development of 21st Century skills. He leads the Augment-Ed research group.
Xavier Ochoa has served as Vice-President of the Society for Learning Analytics Research (SoLAR) and as Editor-in-Chief of the Journal of Learning Analytics, and he currently chairs the ACM Learning@Scale group. He was a founding member and long-time coordinator of the Latin American Community on Learning Technologies (LACLO). He has received several awards, including the Best Researcher Award (2012 and 2018) and the Best Professor Award (2013) at ESPOL, as well as recognition as the 5-year Best Researcher in Computer Science by the Institute of Electrical and Electronics Engineers (IEEE) in Ecuador (2014). He has also received multiple Best Paper Awards at international conferences, including LAK. Previously, Xavier was a Principal Professor at ESPOL (2002-2018) in Ecuador, where he directed the Information Technology Center (CTI) and the Teaching and Learning Technology (TEA) research group.
Kshitij Sharma
Norwegian University of Science and Technology (NTNU, Norway)
Seeing, Hearing, Helping: Multimodal AI and the Ethics of Learning Data
Combining multimodal data (MMD) with advanced artificial intelligence (AI) techniques provides new opportunities to understand and support complex learning phenomena in educational settings. Multimodal data sources, such as eye-tracking, speech, gesture, and facial expressions, offer rich insights into learners' cognitive and emotional states. For instance, eye-tracking data and the linguistic or prosodic features of speech can help identify a student’s level of expertise, while video data can provide valuable indicators of engagement, motivation, or confusion. These insights, when processed through AI models, can generate actionable feedback: timely and tailored suggestions that guide learners toward improved performance and deeper understanding. In this talk, I will present examples of educational systems that utilize actionable feedback to enhance learning processes and outcomes. Such systems are designed to respond in real time to learner behaviours, adapting content or instructional strategies based on detected needs. However, many existing studies in the learning analytics and AI in education domains have been conducted in narrowly defined or controlled environments, limiting the generalizability of their findings. Moreover, the ethical implications of using sensitive data modalities, such as EEG signals, eye-tracking, or facial recognition, need careful consideration. Issues such as consent, privacy, bias, and data misuse must be addressed to ensure responsible research and deployment. I will discuss methodological strategies for using AI and multimodal data in education research, including ways to improve generalizability across diverse learning contexts and thereby enhance the practical relevance and impact of these technologies on a broader scale.
Biography
Kshitij Sharma is an Associate Professor in the Department of Computer Science at the Norwegian University of Science and Technology (NTNU). His background is in Human-Computer Interaction and collaborative/cooperative learning. In particular, his doctoral work used multimodal data (EEG, eye-tracking, facial expressions, audio, dialogues, blood pressure, skin conductance, heart rate) to explain and predict the differences between expert and novice groups, good and poor students, and functional and non-functional groups. The main context for the application of his research has been education. His research interests lie primarily in Applied Machine Learning, Artificial Intelligence, and Human-Computer Interaction (HCI), with a heavy emphasis on group behaviour and physiological data such as eye-tracking, EEG, and facial expressions (theoretical and practical methods in digital interaction). He seeks to understand the relations between users’ data (EEG, eye-tracking, system log data, users’ actions) and the profile of the user (expertise, motivation, strategy, performance) through empirical experimentation (controlled experiments) and mixed-methods analysis (utilizing a multitude of digital technologies). The knowledge gained from these studies is then used to provide feedback to the group, or to adapt to the group’s needs in a proactive manner. To this end, his studies have combined eye-tracking and users’ actions to produce more comprehensive results through data science, statistics, and machine learning practices.
After finishing his doctoral studies in 2015, he began developing methods based on Extreme Value Theory (EVT), a methodological space for computing features from abnormalities in data emerging from collaborative work. EVT is well suited for big-data time series, and the results show an improvement over contemporary feature extraction methods in terms of their predictive capabilities. During the same period, he has expanded his application area beyond collaborative and educational technologies, conducting studies in the contexts of e-commerce, information systems, and entertainment computing.