April 3, 2025

袭明社区论坛
WSJ: When Your Child Uses AI to Cheat --- More students are hiding their secret weapon from parents and teachers

张大军 (@26791pwpadmin)
Topic starter

A high-school senior from New Jersey doesn't want the world to know that she cheated her way through English, math and history classes last year.

Yet her experience, which the 17-year-old told The Wall Street Journal with her parent's permission, shows how generative AI has taken root in America's education system, allowing a generation of students to outsource schoolwork to software with access to the world's knowledge.

Educators see benefits to using artificial intelligence in the classroom. Yet teachers and parents are left on their own to figure out how to stop students from using the technology to short-circuit learning. Companies providing AI tools offer little help.

The New Jersey student told the Journal why she used AI for dozens of assignments last year: Work was boring or difficult. She wanted a better grade. A few times, she procrastinated and ran out of time to complete assignments.

The student turned to OpenAI's ChatGPT and Google's Gemini to brainstorm ideas and review concepts, which many teachers allow. More often, though, AI completed her work. Gemini solved math homework problems, she said, and aced a take-home test. ChatGPT did calculations for a science lab. It produced a tricky section of a history term paper, which she rewrote to avoid detection.

The student was caught only once.

Around 400 million people use ChatGPT every week, OpenAI said. Students are the most common users, according to the company, which offers a free version and advanced services costing as much as $200 a month. OpenAI hopes students will get into a lifelong habit of consulting ChatGPT whenever they have a question, a role played by Google for almost three decades.

Of students who reported using AI, nearly 40% of those in middle and high schools said they employed it without teachers' permission to complete assignments, according to a survey last year by Impact Research. Among college students who use AI, the figure was nearly half. An internal analysis published by OpenAI said ChatGPT was frequently used by college students to help write papers.

Students, operating on screens outside adult supervision, are left to decide whether to use or resist AI tools that can clandestinely deliver top grades. Age restrictions set by AI companies are easily circumvented.

As with the unknowns that accompanied the introduction of social media a generation ago, there has been limited research on AI's academic merits and pitfalls for students, including how often it enables cheating.

"This is a gigantic public experiment that no one has asked for," said Marc Watkins, assistant director of academic innovation at the University of Mississippi.

The New Jersey student passed her classes last year, but she said she learned far less than she could have. She quit illicit AI use for her senior year. "I have tried to take a step back," she said, "and use my brain."

AI companies play down the idea that academic dishonesty is their problem. "OpenAI did not invent cheating," said Siya Raj Purohit, who belongs to the education team at the company. "People who want to cheat will find a way."

Journal owner News Corp has a content-licensing partnership with OpenAI.

Many educators worry that easy access to AI chatbots tempts students to avoid challenging academic work. Rapid advances in the technology have made AI-generated writing difficult to detect in student work when used with a bit of cunning.

"There are probably lots of students, K-12 and higher ed, who used ChatGPT to do their homework last night without learning anything," John B. King Jr., chancellor of the State University of New York system and the former education secretary, said at an education technology conference in October. "That's scary."

King shared a conference stage with Purohit, who offered a provocative response. Perhaps critical thinking and communication skills should be measured by the ability to use AI well, she said. "What is the value of an essay?" she asked, rhetorically, drawing from a recent discussion with a professor at the Wharton School.

Daniel Willingham, a cognitive psychologist at the University of Virginia, has an answer. "Writing requires a type of thinking that other types of exercises don't," he said. "Writing prompts you to explain more carefully if you're explaining, argue more completely if you're arguing."

Jody Stallings, who teaches 8th-grade English in South Carolina, has his classes read Harper Lee's "To Kill a Mockingbird." Each day to start class, he asks students to write answers to questions about what they have read so far. The exercise gets students to think about the book, Stallings said, and refine those thoughts through writing.

Tech supporters have high hopes that AI will radically improve learning.

"Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need," OpenAI CEO Sam Altman said last year in a blog post.

The company's vice president of education, Leah Belsky, suggested that schools combat cheating by welcoming AI into the classroom. "Educators who incorporate AI into their teaching and assignments can successfully shift it from a tool students use without disclosure to a fully integrated, guided part of their learning process," she said.

Organizations and companies have launched AI-powered tutors designed to help students learn without a teacher present. Some educators have picked up AI tools to help them write lesson plans, worksheets or letters home.

Sandy Mangarella, a high-school English teacher in New Jersey, said chatbots help improve her lessons and devise new classroom activities. "It's kind of like having a colleague to talk to," she said.

The Education Department, various states, nonprofits, and companies -- including OpenAI -- have published broad guidance about how teachers can use AI responsibly, noting for instance that chatbot-generated information isn't always correct. Mostly, though, cheating is mentioned only briefly or not at all.

Jacob Moon, a high-school English teacher in Coosa County, Ala., said he rarely used to see evidence of cheating in his class. So far this school year, though, Moon has caught roughly two dozen students using AI for assignments, including essays.

"What scares me as a teacher the most," Moon said, is "what happens when they go into college and the workforce?"

Chris Prowell, a sophomore at the school, said classmates use AI to complete assignments all the time. But he doesn't, fearing he would be ill-prepared for college. Rampant AI cheating, he said, "discredits people that actually work hard on something."

Some educators are skeptical that students can use AI responsibly while doing work at home. Joshua Allard-Howells, a high-school English teacher in Sonoma County, Calif., said AI cheating spread quickly among his students last year.

He now requires them to write their first drafts in class by hand, with computers and phones prohibited and out of reach. The change has produced an unexpected benefit, he said: Students take more time on their work, and their writing is more authentic.

The downside: he can't assign homework. "If I do, it just gets cheated on," he said.

Dozens of companies advertise apps they claim can write essays and complete homework with AI software that can't be detected. You.com ran a Facebook ad in July that featured an image of a marketing student wearing a backpack, headphones and braces: "I wrote my essay in minutes, citations included, with You's Research Assistant."

The search-and-research tool was valued by investors at nearly $1 billion. CEO Richard Socher and a spokeswoman for the company didn't respond to requests for comment.

Around the start of the academic year, Estonian company Aithor ran promotions on Facebook and Instagram for its writing assistant. The ads promised "Flawless essays in a click," alongside two graduation-cap emojis.

"In practice, we provide a starting framework that students still need to refine and personalize," Anatoly Terentyev, Aithor's chief marketing officer wrote in an email. The company was reviewing the language in its ads, he said.

"Teachers hate us," is the advertising slogan for Caktus AI. Harrison Leonard, the company's CEO, said the phrase refers to teachers who resist change. College students already know how to write, Leonard said, so Caktus AI is helping students prepare for work by learning to use AI. He said he wasn't creating a cheating tool.

Caktus AI presents itself differently on social media.

"The past three years I was playing football at a good college and really hated doing my school work and going to workouts and all that annoying work. So I built a software for myself that wrote all of my essays and did all of my homework problems instantly," said a Reddit post from Caktus AI's account.

Leonard, who played football for Notre Dame, didn't respond to follow-up questions about the post. Earlier, he had said, "I can't control the way the students interact with the platform."

Patricia Webb, an English professor at Arizona State University, typically bars AI use in her classes. Yet on some writing assignments, she suspects 20% to 40% of her students use it anyway, based on the writing styles she observes.

But without definitive evidence, she said, she rarely confronts those students.

Her attempted remedy is assigning writing based on personal experiences or interviews, which are more difficult to outsource to AI.

OpenAI has developed a tool that can reliably detect ChatGPT-generated writing but has declined to release it, the Journal found. An internal survey found that nearly 30% of users would use ChatGPT less if OpenAI rolled out the feature.

Some teachers turn to third-party AI detection tools. Yet often the software identifies student writing as genuine when it isn't and, in some instances, labels legitimate student work as AI-generated.

In 2023, Max Spero launched Pangram Labs, an AI-detection operation focused on helping companies weed out AI-generated product reviews.

In a Journal experiment, ChatGPT generated a ninth-grade essay on the themes of William Golding's "Lord of the Flies." The essay was reviewed by Pangram Labs' software, which identified the writing as almost certainly AI-generated.

The essay was then fed into HumanizeAI.pro, an app that boasts it can "transform your AI-generated content into natural, human-like text." Pangram Labs was less sure about this version. In one instance, the program said there might be some AI writing involved. On a later try with the same text, it declared the essay "fully human-written."

Pangram Labs is working to "defeat the humanizers," Spero said.
