When I was in graduate school back in 1993, we had to walk 30 minutes across campus to gain access to the university’s one computer lab with crude, first-generation internet access. Today, almost every student has a lightning-fast, web-enabled computer in their pocket—and they’re using it to access transformative generative AI (GenAI) tools with capabilities we could only dream of back in the ‘90s.
Take a look at Arizona State University, which just announced a groundbreaking partnership with OpenAI, the creator of ChatGPT, to bring AI into the classroom. The idea is to leverage AI to elevate teaching and enable students to keep pace with the fast-moving world they now inhabit.
“Artificial intelligence systems are here to stay, and we are optimistic about their ability to become incredible tools that help students to learn,” said ASU president Michael Crow.
That’s a breath of fresh air: until now, too many educators have focused mainly on whether GenAI would help students cheat on assignments. Not that it’s a baseless concern — at least nine out of 10 students now admit to using ChatGPT-type tools to complete their homework. In response, though, many schools and universities are rolling out ChatGPT bans and draconian punishments for kids caught getting a GenAI assist on their assignments.
I am convinced the educational system is failing our kids when it employs these severe measures. As a former Silicon Valley executive, I’ve been excited to watch the rapid rise of AI technologies — but I’m alarmed at how few young people know how to use them effectively in professional settings. I’m not alone: Laura Newinski, KPMG’s chief operating officer, recently discovered that her company’s 800 interns had no idea how to use GenAI tools in the real world, because their professors had discouraged — and in some cases outright banned — their use.
That’s a big problem. By pushing back against GenAI, schools and colleges aren’t just failing to prepare graduates for the modern workplace, they’re also missing out on what promises to be an incredible force multiplier for forward-thinking educators.
Using AI tools, teachers can more rapidly diagnose students’ strengths and weaknesses, and create curricula tailored to their unique needs. Carnegie Learning’s intelligent math tutoring, for instance, already leverages this kind of hyper-personalization to improve outcomes. Intuit and Khan Academy, meanwhile, are using AI tools in Khan Academy’s financial education course with the aim of providing free financial literacy resources to high school students across the United States.
As teachers embrace AI tools — which are capable of digesting and drawing upon the sum total of human knowledge — they’ll find it far easier to ensure students get up-to-date lessons and learning materials. New information can be integrated into classrooms in real time, guaranteeing that students aren’t stuck using outdated textbooks, and making it easier for teachers to keep up with new ideas and discoveries. Complex subjects can be greatly simplified: a kid who struggles to make sense of King Lear, for instance, can use a voice-operated AI app to instantly get a personalized and age-appropriate plot recap as exciting as any Netflix series.
Of course, there are valid concerns about putting AI tools in kids’ hands. (Just ask Taylor Swift about the potential risks involved.) But the same could be said of giving kids internet access: the dangers are real, but the need to keep young people safe shouldn’t mean we deny them the right to use the defining technologies of our time. What’s needed is leadership to make AI safer for kids: Common Sense Media’s efforts to develop a classroom-ready AI rating system, for example, are a big step in the right direction.
It could also be argued that something is lost along the way when young people rely on AI tools to write their book reports or essays. This is not a new sentiment: there has always been a tension between embracing new ideas and approaches, and insisting on doing things the way they have always been done. In the past, our children were taught to write in cursive, for example, but now — given that even young children have, or are furnished with, computers to write on — it is no longer part of the curriculum in many schools.
The point is that teachers can either adopt new technologies or try to pretend they have no place in educational settings. Too often they choose to stick with outdated materials and methods, and I believe that is a huge mistake. Instead of telling students that AI is antithetical to learning, we need to help them understand how to work with it, so they will be ready for the world that awaits them.
The bottom line: We can’t reverse technological progress. We must decide whether to resist it, or climb on board and embrace the opportunities it brings. As ASU’s partnership with OpenAI shows, when educators are pragmatic and optimistic about what lies ahead, it’s possible to use AI to elevate education and make our classrooms more inclusive and effective for everyone.
David B Wamsley is a former Silicon Valley tech executive and currently the CEO of Rosebud Communications.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
Image and article originally from www.nasdaq.com.