What should students, parents, and teachers know about AI?

Back in 2011, former judge Kay Firth-Butterfield began thinking about how humans live and work with artificial intelligence (AI). She is a senior researcher at the University of Texas, where she studies the use and governance of AI. In 2014, she became the world’s first AI Ethics Officer (at Lucid Holdings LLC) and is a leading expert in the field of responsible AI. She led the AI and machine learning effort at the World Economic Forum for more than five years, where she was responsible for helping to guide countries and businesses in the responsible use of new technologies. She is a member of the U.S. Council of Governments and the UNESCO Advisory Board.

Today, she is CEO of Good Tech Advisory, which works with governments, charities, businesses, and academia to help them use AI responsibly and productively while remaining legally compliant. She has long been recognized as a leading woman in the field of AI governance and received the TIME 100 Influencer Award in February of this year.

In an interview with BOLD, she discusses the challenges that schools, universities, students, and teachers face in responding to advances in AI.

Helena Pozniak: Is the use of AI in education inevitable?

Kay Firth-Butterfield: Yes, it’s inescapable now. Students will use it for homework. We have to focus on how to make it safe for them to use, rather than banning it. Generative AI makes information on the Internet more accessible; as it develops, it is effectively a brain sitting next to you. But getting it right is crucial – our children need to be educated to work and live with AI, with humans remaining in control. Large language models contain many biases, so all users must be trained on what these machines can do for us and where their limits lie.

HP: What are the dangers of AI for children?

KFB: One of the things that worries me is that children develop their own beliefs, values and attitudes by the age of seven, so we have to think carefully about policy in the early years.

We do need to understand the impact of the educational toys we give them. Arguably, computers are better at influencing, suggesting, and manipulating behavior than humans are. We must also know where children’s data are stored, whether devices can be hacked, and whether individual children can be identified.

“But getting it right is crucial – our children need to be educated to work and live with AI.”

Most AI toys aimed at young children (such as ‘smart dolls’) are made in China. If you want a connected toy to ‘talk’ to your child, then the toy must collect data from the interaction. Where is this information stored and is it secure? We don’t know, so there are huge data privacy issues.

We also need to discuss the extent to which we are willing to let technology ‘take care’ of our children. What if the connected doll becomes a child’s best friend and then “dies”? How would the child react? Would this “death” be harder on the child than a teddy bear falling to pieces? What if your best friend were a machine? Does interacting with these machines prepare children for the future? We don’t know yet, but we are testing it on the most vulnerable among us – children. Which brings us back to the need for widespread AI education, so that parents can make informed decisions about the toys and tools their kids use.

HP: What about older kids?

KFB: Artificial intelligence must be considered at all levels. We are educating the children of the future, who are likely to pursue multiple careers. As technology changes, they must be equipped to get the best out of it. We have to get everyone to understand AI so we can really engage in the debate about what kind of future we want for our children and grandchildren.

HP: Don’t we already understand AI? Why do we need AI education?

KFB: One of the biggest problems is that AI’s capabilities have outstripped almost everyone’s understanding of it. People don’t always understand what’s going on when AI is applied to education, hospitals, and our voting systems. It’s important for everyone – especially teachers – to understand this. Teachers desperately need training. In addition, we are seeing growing distrust of AI. Education will help people understand what they should be wary of and what they can use safely.

HP: What are the essential components of a responsible AI policy for schools?

KFB: One of the first actions is to educate children about what it means to interact with generative AI models. Schools must also ask: does AI increase your knowledge, or does it make you lazier? If children are going to use AI, they must learn to use it properly.

It’s fun for kids to interact with AI, but what does that mean in terms of privacy and data? Students need to realize that some of these tools can be hacked, and schools need to install guardrails to keep sensitive information from being uploaded. We’ve already seen this backfire in the corporate world: in April 2023, engineers at South Korea’s Samsung uploaded sensitive code to ChatGPT, leading the company to ban the use of generative AI on its devices and intranet, and a number of U.S. banks to restrict its use. Any generative model trained on the Internet uses the data that has been uploaded to it.
