Saturday, June 10, 2023

The AI revolution already transforming education

Schools and universities are using ChatGPT in the classroom, but will it devalue the fundamentals of learning?

Bethan Staton and Madhumita Murgia in London 

Rachel Evans, director of digital learning & innovation at Wimbledon High School © FT montage: Anna Gordon/OpenAI


When Lauren started researching the British designer Yinka Ilori for a school project earlier this year, she was able to consult her new study pal: artificial intelligence.

After an hour of scouring Google for information, the 16-year-old pupil asked an AI tool called ChatGPT, which generates answers to typed questions, to write a paragraph about Ilori.

It replied with fascinating details about the artist’s life that were new and — she later confirmed — factually correct.

“Some of the things it brought up I hadn’t found anywhere online,” says Lauren, a pupil at Wimbledon High School, a private girls’ school in south London.

“I was actually surprised about how it was able to give me information that wasn’t widely available, and a different perspective.”

Since it arrived on the scene last year, ChatGPT, a powerful, freely available AI tool capable of writing sophisticated responses to prompts, has provoked intense speculation about the long-term repercussions for a host of industries and activities.

But nowhere has the impact been felt more immediately than in education. 

Overnight, rather than labouring through traditional exercises designed to develop and assess learning, students could simply instruct a computer to compose essays, answer maths questions or complete complex coursework assignments, and pass the results off as their own.

As a result, schools and universities have been forced into a fundamental rethink of how they conduct both tuition and academic testing.

Worries about AI-based plagiarism have pushed a number of institutions to opt for an outright ban on bots like ChatGPT.

But enforcing this is difficult, because methods for detecting when the technology has been used remain unreliable.

Given how pervasive the technology already is, some educators are instead moving in the opposite direction and cautiously experimenting with ways to use generative AI to enhance their lessons.

Many students are keen for them to take this approach. 

For Lauren and her friends, months of playing around with ChatGPT have convinced them there is more to be gained from generative AI than simply cheating. 

And with the technology threatening to overhaul the jobs market and become a permanent communication tool in everyday life, they are anxious to be prepared for the turbulence to come.

But these experiments raise the question of whether it is possible to open the door to AI in education without undercutting the most important features of human learning: what it actually means to be numerate and to be literate.

“We don’t yet understand what generative AI is going to do to our world,” says Conrad Wolfram, the European co-founder of Wolfram Research, the company behind the WolframAlpha answer engine, who has long pushed for an overhaul of the way maths is taught.

“So it’s hard to work out yet how it should affect the content of education.”

AI enters the chat

When ChatGPT was launched by San Francisco-based tech company OpenAI in November 2022, the company’s 300-odd-person team, backed by Microsoft, expected it to be a small-scale experiment that would help them build better AI systems in the future.

What happened next left them stunned.

Within weeks, ChatGPT, a tool based on software known as a large language model, was being used by more than 100mn people globally. 

Now, it is being tested inside law firms, management consultancies, news publishers, financial institutions, governments and schools, for mental health therapy and legal advice, to write code, essays and contracts, summarise complex documents, and run online businesses.

For lecturers at the University of Cambridge, the timing of ChatGPT’s launch — as students headed home for Christmas holidays — was convenient.

“We were able to take stock,” says Professor Bhaskar Vira, the university’s pro-vice-chancellor for education. 

In the discussions that followed, teaching staff observed as other universities took action on ChatGPT, in some cases banning the technology, in others offering students guidance.

By the time students returned, the university had decided a ban would be futile. 

“We understood it wasn’t feasible,” Vira says. 

Instead, the university sought to establish fair use guidelines. 

“We need to have boundaries so they have a very clear idea of what is permitted and not permitted.”

Professor Bhaskar Vira, the University of Cambridge’s pro-vice-chancellor for education, has sought to establish fair use guidelines for ChatGPT © University of Cambridge


Their assessment was correct. 

A survey by Cambridge student newspaper Varsity last month found almost half of all students had used ChatGPT to complete their studies.

One-fifth used it in work that contributed to their degree and 7 per cent planned to use it in exams. 

It was the equivalent, said one student, of “dropping one of your cleverer mates a message” asking for help.

Ayushman Nath, a 19-year-old engineering student at Cambridge’s Churchill College, discovered ChatGPT on TikTok like many of his peers. 

At first, people were posting funny videos of the chatbot telling jokes, but then slowly there was a shift.

Nowadays, Nath says it is common for students to paste in long articles or academic papers and ask for summaries, or to brainstorm ideas on a broad topic. 

He has used it to research a report on batteries for electric cars, for example. 

“You can’t use it to replace fundamental knowledge from scientific papers. 

But it’s really useful for quickly developing a high-level understanding of a complex topic, and coming up with ideas worth exploring,” he says.

However, Nath quickly learned that you cannot trust it to be 100 per cent accurate: “I remember it gave me some stats about electric vehicle batteries, and when I asked for citations, it told me it made them up.”

Accuracy is one of the major challenges with generative AI. 

Language models are known to “hallucinate”, meaning they fabricate facts, sources and citations in unpredictable ways, as Nath discovered.

There is also evidence of bias in AI-written text, including sexism, racism and political partisanship, learned from the corpus of internet data, including social media platforms like Reddit and YouTube, that companies have used to train their systems.

Underpinning this is the “black box” effect, which means it is not clear how AI comes to its conclusions. 

“It can give you false information . . . it’s a vacuum that sucks a bunch of content off the internet and reframes it,” says Jonathan Jones, a history lecturer at the Virginia Military Institute. 

“We found a lot more myth and memory than hard truths.”

‘There is no going back’

Earlier this year at the Institut auf dem Rosenberg, one of Switzerland’s most elite boarding schools, 12th-grade student Karolina was working on an assignment for her sociolinguistics class. 

The project was on regional accents in Britain and their effects on people’s social standing and job prospects.

What she handed in was not an essay but a video, featuring an analytical dialogue on the subject between two women in the relevant accents. 

The script was based on Karolina’s own research. 

The women were not real: they were avatars generated by Colossyan Creator, AI software from a London-based start-up. 

“I watched it and I was in awe,” says Anita Gademann, Rosenberg’s director and head of innovation. 

“It was so much more impactful in making the point.”

Gademann says the school has encouraged students’ use of AI tools, following other institutions including the International Baccalaureate and Wharton, the University of Pennsylvania’s business school.

“There is no going back,” she says. 

“Children are using tech to study and learn, with or without AI.”

Over the past year, the school has observed that students’ assignments have become a lot more visual. 

Alongside written work, students regularly submit images or videos created by AI-powered art generators like Dall-E or Midjourney. 

The visuals themselves are a learning opportunity, says Gademann, citing a history class that evaluated anachronisms in AI-generated pictures of the Middle Ages.


There have been other successes: through repeated use, ChatGPT has improved the writing standard of students who previously struggled. 

“They are thinkers, they are intelligent, they can analyse, but [putting] something on paper, it’s hard,” Gademann says.

At Rosenberg, roughly 30 per cent of grades are already earned through debate and presentations. 

Gademann says the advent of generative AI has made it clear that standardised testing models have to change: “If a machine can answer a question, we shouldn’t be asking a human being to answer this same question.”

This overarching dilemma, of how far assessments should be reshaped for AI, has become a pressing one.

Despite their problems, large language models can already produce university-level essays, and easily pass standardised tests such as the Graduate Management Admission Test (GMAT) and the Graduate Record Examinations (GRE), required for graduate school, as well as the US Medical Licensing Exam.

The software even received a B grade on a core Wharton School MBA course, prompting business school deans across the world to convene emergency faculty meetings on their future.

Earlier this year, Wolfram, the AI pioneer, twinned ChatGPT with a plug-in called WolframAlpha, and asked it to sit the maths A-level, England’s standard mathematics qualification for 18-year-olds.

The answer engine achieved 96 per cent.


Conrad Wolfram says education in the UK is hopelessly behind technological advances © Andreas Gebert/Picture-Alliance/dpa/AP Images


For Wolfram, this was further proof that maths education in the UK, where he is based, is hopelessly behind technological advances, forcing children to spend years learning longhand sums that can be easily done by computers.

Instead, Wolfram argues, schools should be teaching “computational literacy”: how to solve tricky problems by asking computers complex questions and letting them do the tedious calculations.

This means students can step up “to the next level”, he says, and spend time using more human capabilities, such as being creative or thinking strategically.

Teaching young people to enjoy knowledge, rather than learn it by rote, will better prepare them for a future world of work, Wolfram adds, predicting that menial jobs will be automated while humans take on higher-skilled supervisory roles.

“The vocational is the conceptual.”

‘Learning loss’

While AI tools are being rapidly implemented by students, and even integrated into the curriculum at some schools such as Rosenberg, the risks and limitations of the software remain clear.

A coalition of state and private schools in the UK are so concerned about the speed at which AI is developing that they are setting up a cross-sector body to advise “bewildered” educators on how best to use the technology.

In a letter to The Times, the group also said they have “no confidence” that large digital companies are capable of regulating themselves.

Anna Mills, a writing instructor at the College of Marin, a community college in California, has spent a year testing language models, the technology underlying ChatGPT, including OpenAI’s most advanced model, GPT-4.

Her main concern is that automating young people’s day-to-day lessons by allowing AI to do the legwork could lead to “learning loss”, a decline in essential literacy and numeracy skills.

At Wimbledon High School, where the use of AI is led by Rachel Evans, its director of digital learning and innovation, Lauren’s classmate Olivia has enjoyed using ChatGPT as a “creative spark” but is worried this risks eroding her own abilities. 

“When you actually want to start that yourself . . . it’s going to be really challenging if you haven’t had that practice.”

Rada, Lauren and Olivia of Wimbledon High School have mixed views about ChatGPT’s usefulness as a coursework aid © Anna Gordon/FT


Her friend Rada is less worried. 

She has found ChatGPT unreliable for giving answers, but useful for helping to structure her arguments. 

“It’s not good at answers, but it’s good at ‘flufferising’ them,” she says, referring to the chatbot’s ability to turn rough ideas into something more digestible.

Mills agrees that AI-produced essays are often articulate and well-structured, but they can lack originality and ideas. 

That, she says, should force educators to interrogate what students should get from essay tasks. 

“We assign writing because we think it helps people learn to think. 

Not to create more student essays,” she adds. 

“It’s the mainstay process that academia has developed to help people think and communicate and get further in their understanding. 

We want students to engage in that.”

Senior leaders at the Harris Federation, which runs 52 state-funded primary and secondary schools in London, are excited about the potential for generative AI to help students with research, as well as to free up teachers’ time by generating lesson plans or marking work.

Yet the federation’s chief executive, Sir Dan Moynihan, is concerned the technology could present an “equity issue”. 

Not only may poorer students struggle to access paid-for AI technology that will make work easier, he says, but schools with tight budgets may also use AI to cut corners in ways that are not necessarily best for learning.

“I’m not a pessimist, but we have to collectively avoid this becoming a dystopian thing,” says Moynihan. 

“We need to make sure we don’t end up with AI working with large numbers of kids [and] teachers acting as pastoral support, or maintaining discipline.”

Life-changing technology

However, there are those who point out that educators are only just beginning to think of ways the technology might be used in classrooms.

In September 2022, entrepreneur Sal Khan, the founder of Khan Academy, a non-profit whose free online tutorials are viewed by millions of children globally, was approached by OpenAI to test out its new model GPT-4, which underpins the paid-for version of ChatGPT.

After Khan, who also runs a bricks-and-mortar private school in the heart of Silicon Valley, spent a weekend playing with it, he realised it was not just about producing answers: GPT-4 could provide rationales, prompt the student in a Socratic way and even write its own questions. 

“I always thought it would be 10-20 years before we could even hope to give every student an on-demand tutor,” says Khan. 

“But then I was like, wow, this could be months away.”

Sal Khan, the founder of Khan Academy, describes ChatGPT as a simplistic layer on top of a ‘very powerful technology that could be misused’ © Dai Sugano/MediaNews Group/Getty Images


By March, Khan’s team had developed an AI tutor, called Khanmigo, that had gone from “almost nothing to a fairly compelling tutor”.

Khan pays OpenAI a fee to cover the computational cost of running the AI system, roughly $9-$10 per month per user.

The AI tutor uses GPT-4 to debate with students, coach them on subjects ranging from physics to English, and answer questions as pupils complete tutorials.

Asking the software to provide an explanation for its answers increases its accuracy and improves the lesson, he says. 
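Khan’s observation mirrors a widely used prompting technique: asking a model to reason step by step instead of returning a bare answer. Below is a minimal sketch of that idea in Python, using the OpenAI client library; the model name, tutor prompt and sample question are illustrative assumptions, not Khan Academy’s actual Khanmigo configuration.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # A hypothetical Socratic-tutor instruction -- an assumption for
    # illustration, not Khan Academy's real system prompt.
    SOCRATIC_PROMPT = (
        "You are a patient tutor. Do not give the final answer outright. "
        "Explain your reasoning step by step, then end with one guiding "
        "question that nudges the student toward the next step."
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SOCRATIC_PROMPT},
            {"role": "user", "content": "Why doesn't a heavier ball fall faster than a lighter one?"},
        ],
    )

    # The reply walks through the reasoning rather than asserting a result.
    print(response.choices[0].message.content)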

The product is being rolled out to hundreds of teachers and children across Khan’s physical and virtual schools, and up to 100,000 pupils across 500 US school districts partnered with Khan Academy will have access to it by the end of 2023.

Khan describes ChatGPT as the gateway to a “very powerful technology” that can be misused. 

However, if it is adapted to be “pedagogically sound, with clear oversight and moderation filters”, language models can be revolutionary.

“I don’t say lightly, I think it’s probably the biggest transformation of our life . . . especially in education,” Khan says. 

“You’re going to be able to awaken people’s curiosity, get them excited about learning. 

They’re going to have an infinitely patient tutor with them, always.”

Back in Wimbledon, Lauren and her classmates are becoming aware that generative AI, while useful, is no substitute for some of the most important and rewarding parts of the learning process.

“One of our main takeaways was the importance of being stuck,” says Lauren.

“Generally in life you need to be able to overcome little hurdles to feel proud of your work.”

“It’s so vital not to ban the use of it in education, but instead . . . learn how to use it through proper, critical thinking,” her classmate Olivia adds. 

“Because it will be a tool in our futures.” 
