Technology

The data scientist putting humans at the center of educational AI

Roberto Martinez-Maldonado is a data science researcher at Monash University, Australia. Roberto researches technologies that utilize Artificial Intelligence (AI) to support teachers. He is also exploring ethical issues related to the use of AI in the classroom. Annie Brookman-Byrne talks to Roberto about how AI can be designed to protect the autonomy and safety of learners and teachers, and the many challenges that need to be overcome.

Annie Brookman-Byrne: What technologies are you integrating into the classroom?

Roberto Martinez-Maldonado: My research aims to deepen society’s understanding of the social and technological aspects of using AI in education. I work with teachers, students and other educational stakeholders to design learning analytics dashboards that can be used in physical learning environments.

With my team, I have deployed various types of sensors and AI algorithms in classrooms and teacher-training environments. For example, we use location sensors to track how teachers move through the classroom and how effectively they use the space. We have used wristbands to measure students’ physiological stress levels during learning tasks, with the aim of finding ways to teach students to cope with stress. We have also used personal audio devices paired with state-of-the-art AI services to automatically transcribe student conversations and determine whether they showed the development of effective teamwork.

Of course, integrating these technologies may raise ethical concerns, especially if they are perceived or used as tools for monitoring or evaluating teachers. I have been working with educators and students to design interfaces to ensure that AI educational tools reflect their values and practices. My hope is that this will ensure that these rapidly evolving technologies remain supportive tools that retain the agency of students and teachers, rather than posing a threat or attempting to replace the irreplaceable human dimension of education.

ABB: What are the potential benefits and concerns around generative AI, such as ChatGPT?

RMM: AI is entering its second wave of innovation. Generative AI, especially large language models, is opening up new ways to analyze valuable student data. For example, emerging AI technologies can analyze spoken conversations in learning environments through automatic transcription and automated analysis. In the past, these tasks took hours of a researcher’s time. Now, we are streamlining the process, providing students and teachers with insights that used to be available only to researchers. Providing summaries of students’ conversations at various stages of learning activities may prompt reflection and facilitate formative assessment. Teachers can identify patterns in their classes and use this information to provide informed feedback or adjust their teaching methods to help students achieve desired learning outcomes.

However, the new wave of AI is not without its drawbacks. Generative AI tools are already influencing assessment practices. Many of these tools are publicly available, and learners often use them without knowing their limitations or potential impact on learning trajectories. While AI can provide super tools to empower us, these technologies also have the potential to become tools of control and surveillance, crippling learners and educators.

“While AI can provide super tools to empower us, these technologies also have the potential to become tools of control and surveillance, crippling learners and educators.”

ABB: What are the other key challenges in integrating AI into education?

RMM: A major challenge is interpreting the output of AI. While AI can suggest or predict learning actions, the reasons why algorithms make certain choices are often opaque, making many AI systems seem like mysterious black boxes.

Another challenge is how to effectively balance personalized learning with maintaining a degree of standardization so that everyone has access to the same learning opportunities.

The data collected by AI systems is also a concern. How do we ensure the integrity of data when it is used for assessment purposes? Furthermore, as education platforms continue to collect and manage large amounts of learner data, how do we ensure that this information is used ethically and that the privacy and security of teachers and learners are protected?

AI systems can inherit biases related to culture, gender, and socio-economic status from the data they are trained on. This must be addressed to maintain equity and fairness in the education system.

In addition, it is crucial to achieve synergy between teachers and AI tools. AI technologies must support educators rather than replace them, especially as economic pressures may drive the automation of various educational processes and diminish the role of teachers.

“It’s critical that AI technologies support educators rather than replace them.”

ABB: How are you addressing these challenges with a human-centered approach and what more needs to be done?

RMM: Today’s young learners are facing an evolving and uncertain future. As artificial intelligence continues to reshape the workforce landscape, it is important to design with this uncertain future in mind. I strongly believe in a human-centered approach when integrating, designing, and researching AI in education. To this end, I have been building strong partnerships with stakeholders in education. Together, we aim to ensure that future AI applications develop in harmony with real educational needs and aspirations. When designing human-computer interfaces, I make sure that the views of educators and learners are taken into account.

More broadly, external checks to ensure system trustworthiness and reliability are commonplace in fields such as aviation and healthcare, but these mechanisms have not yet been applied to AI in education. A crucial first step for the industry as a whole is to draft an ethical code of conduct for AI, but we must also put in place mechanisms to prevent AI systems deployed in schools from undermining the agency of teachers and students.

“Predicting the future is challenging, and it is vital that those affected by AI in education play an active role in shaping it.”

ABB: Are you hopeful about the future of the field?

RMM: Given my long, and by no means exhaustive, list of challenges facing the field, I may not sound very optimistic about the future. You could say I’m on the fence. We live in a hyper-connected society that is nevertheless becoming increasingly fragmented, and it may be naïve to expect governments to reach a general consensus on developing AI in ways that minimize social harm. Strong partnerships between industry, educators, families, and researchers are therefore critical to positively shaping the future we want, rather than simply reacting to each new AI innovation. Predicting the future is challenging, and it is vital that those affected by AI in education play an active role in shaping it.
