Search This Blog

Showing posts sorted by date for query cheating. Sort by relevance Show all posts
Showing posts sorted by date for query cheating. Sort by relevance Show all posts

Friday, July 11, 2025

Chegg: A Critical History of a Disruptor Turned Controversy Machine

Chegg, once hailed as a Silicon Valley disruptor democratizing access to education, has undergone a profound and troubling transformation since its founding in 2005. What began as a textbook rental company evolved into a billion-dollar homework help empire—an empire that, critics argue, has done more to undermine academic integrity than to foster genuine learning. Its business model capitalized on the structural weaknesses of American higher education and, in the process, normalized a shadow system of paid cheating.

Origins: Textbooks, Student Debt, and Disruption

Chegg was born at the intersection of inflated textbook costs and the neoliberal university. Founders Osman Rashid and Aayush Phumbhra sought to bring the efficiencies of the sharing economy to the campus bookstore. In its early years, Chegg attracted investor attention by promising cheaper textbook rentals—a modest but important service in an era of spiraling student debt.

But as textbook rentals became commodified, Chegg pivoted. By the early 2010s, it was building a suite of digital services: step-by-step solutions, tutoring, and subscription-based homework help under its Chegg Study brand. When Chegg went public in 2013, it promoted itself not just as a tech company, but as a partner in “student success.” In reality, it had found a way to turn student desperation into a profitable SaaS model.

Homework Help or Cheating-as-a-Service?

Chegg’s transformation into a homework help platform would eventually earn it a darker moniker: “Cheating-as-a-Service.”

Nowhere is this critique more powerfully detailed than in education journalist Derek Newton’s Cheat Sheet, a Substack project dedicated to exposing the industrial-scale cheating facilitated by platforms like Chegg, Course Hero, and Studypool. Newton, who has tracked the issue since 2019, documented case after case in which students used Chegg not to learn—but to submit answers for graded assignments and exams. Faculty across disciplines and institutions began reporting widespread cheating enabled by Chegg, especially during the remote learning surge triggered by COVID-19.

In one issue of Cheat Sheet, Newton wrote:

“Chegg isn’t an education company. It’s a cheating company. It monetizes academic dishonesty, obfuscates accountability, and deflects responsibility while raking in millions in subscription revenue.”

According to Newton, Chegg’s "ask an expert" function—where students submit specific questions and receive solutions within minutes—became a tool of choice for real-time cheating during online exams. Despite university honor codes, many students saw Chegg as a normalized part of academic life. Meanwhile, Chegg’s refusal to proactively block cheating or cooperate fully with universities left institutions scrambling.

Pandemic Profits and Ethical Collapse

During the COVID-19 pandemic, as universities shifted online, Chegg’s subscriber base soared. Students confined to Zoom classrooms flocked to digital platforms for support—or shortcuts. By 2021, Chegg had nearly 7 million subscribers and posted annual revenues of $776 million. Its stock price peaked above $100 in February 2021.

But that growth came with growing backlash. Professors and academic integrity officers called for investigations. Some universities demanded IP logs and timestamps from Chegg in academic misconduct cases. In response, Chegg adopted a policy of releasing user data only under subpoena—shifting the burden to faculty and administrators.

Chegg, for its part, insisted it was simply offering "study support" and denied facilitating cheating. But the evidence presented in Newton’s Cheat Sheet and other academic publications told a different story.

Collapse, AI Disruption, and Image Repair

In 2023, a new threat emerged: OpenAI’s ChatGPT. Free, flexible, and fast, ChatGPT began to supplant Chegg for the same user base. In a rare moment of corporate honesty, Chegg CEO Dan Rosensweig told investors that ChatGPT was impacting the company’s subscriber growth. Wall Street panicked. Chegg’s stock plummeted, its valuation shrank, and the company began rounds of layoffs—first 4% of its workforce, then 23% in 2024.

Desperate to stay relevant, Chegg pivoted again—this time toward “CheggMate,” its proprietary AI chatbot built in partnership with OpenAI. Yet the damage to its brand, and its future, was already apparent.

By 2025, Chegg was struggling to define its purpose in a rapidly changing education tech landscape. Its subscription model had been undermined by free AI. Its name remained tainted by years of academic dishonesty. And efforts to shift into AI tutoring raised further concerns about data privacy, surveillance, and automation in learning.

A Mirror of Higher Education’s Failures

Chegg’s rise and fall cannot be understood in isolation. It thrived in a system where students are overburdened, instructors are underpaid, and administrators look the other way as long as graduation rates and tuition dollars remain stable. Its gig-based backend—where underpaid "experts" supply answers for a global audience—mirrors the adjunctification of academic labor itself.

Derek Newton’s Cheat Sheet and other critical reporting have exposed how edtech platforms exploit the credibility crisis in higher education. The real scandal isn’t just that Chegg exists—it’s that the ecosystem made it necessary.

Conclusion

Chegg’s legacy may one day be viewed not as a revolution in learning, but as a symptom of higher education’s marketized decline. Like diploma mills and for-profit colleges before it, Chegg served the needs of students abandoned by the system—but did so at the cost of academic trust and intellectual growth.

As the AI era unfolds, and companies like Chegg scramble to reposition themselves, the Higher Education Inquirer will continue to ask: who profits, who pays, and who is left behind?


Sources

  • Derek Newton, Cheat Sheet newsletter: https://cheatsheet.substack.com

  • Chegg Inc. 10-K and Investor Calls (2015–2025)

  • The Chronicle of Higher Education, “Is Chegg Helping or Hurting?”

  • Inside Higher Ed, “Chegg, ChatGPT, and the New Arms Race in EdTech”

  • Bloomberg, “Chegg Warns of ChatGPT Threat”

  • Reddit threads: r/Professors, r/College, r/AcademicIntegrity

  • The Markup, “Chegg’s Gig-Economy Model and Academic Labor”

  • The Atlantic, “The Cheating Economy”

  • Higher Education Inquirer Archives on EdTech and Academic Integrity

Thursday, May 15, 2025

The Epic, Must-Read Coverage in New York Magazine (Derek Newton)


The Epic, Must-Read Coverage in New York Magazine
 
READ IN APP
 

Issue 364

Subscribe below to join 4,663 (+6) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.

The Cheat Sheet is free. Although, patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.


New York Magazine Goes All-In, And It’s Glorious

Venerable New York Magazine ran an epic piece (paywall) on cheating and cheating with AI recently. It’s a thing of beauty. I could have written it. I should have. But honestly, I could not have done much better.

The headline is brutal and blunt:

Everyone Is Cheating Their Way Through College

To which I say — no kidding.

The piece wanders around, in a good way. But I’m going to try to put things in a more collected order and share only the best and most important parts. If I can. Whether I succeed or not, I highly encourage you to go over and read it.

Lee and Cheating Everything

The story starts with Chungin “Roy” Lee, the former student at Columbia who was kicked out for selling cheating hacks and then started a company to sell cheating hacks. His story is pretty well known at this point, but if you want to review it, we touched on it in Issue 354.

What I learned in this story is that, at Columbia, Lee:

by his own admission, proceeded to use generative artificial intelligence to cheat on nearly every assignment. As a computer-science major, he depended on AI for his introductory programming classes: “I’d just dump the prompt into ChatGPT and hand in whatever it spat out.” By his rough math, AI wrote 80 percent of every essay he turned in.

And:

“Most assignments in college are not relevant,” [Lee] told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort.

The article says Lee’s admissions essay for Columbia was AI too.

So, for all the people who were up in arms that Columbia would sanction a student for building a cheating app, maybe there’s more to it than just that. Maybe Lee built a cheating app because he’s a cheater. And, as such, has no place in an environment based on learning. That said, it’s embarrassing that Columbia did not notice a student in such open mockery of their mission. Seriously, embarrassing.

Continuing from the story:

Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.

Also embarrassing for Columbia. But seriously, Lee has no idea what he is talking about. Consider this:

Lee explained to me that by showing the world AI could be used to cheat during a remote job interview, he had pushed the tech industry to evolve the same way AI was forcing higher education to evolve. “Every technological innovation has caused humanity to sit back and think about what work is actually useful,” he said. “There might have been people complaining about machinery replacing blacksmiths in, like, the 1600s or 1800s, but now it’s just accepted that it’s useless to learn how to blacksmith.”

I already regret writing this — but maybe if Lee had done a little more reading, done any writing at all, he could make a stronger argument. His argument here is that of a precocious eighth grader.

OpenAI/ChatGPT and Students

Anyway, here are sections and quotes from the article about students using ChatGPT to cheat. I hope you have a strong stomach.

As a brief aside, having written about this topic for years now, I cannot tell you how hard it is to get students to talk about this. What follows is the highest quality journalism. I am impressed and jealous.

From the story:

“College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.

More:

Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school.

And:

After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”

This really is where we are. These students are not outliers.

Worse, being as clear here as I know how to be — 95% of colleges do not care. At least not enough to do anything about it. They are, in my view, perfectly comfortable with their students faking it, laughing their way through the process, because fixing it is hard. It’s easier to look cool and “embrace” AI than to acknowledge the obvious and existential truth.

But let’s keep going:

now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences?

Please mentally underline the “no consequences” part. These are not bad people, the students using ChatGPT and other AI products to cheat. They are making an obvious choice — easy and no penalty versus actual, serious work. So long as this continues to be the equation, cheating will be as common as breathing. Only idiots and masochists will resist.

Had enough? No? Here:

Wendy, a freshman finance major at one of the city’s top universities, told me that she is against using AI. Or, she clarified, “I’m against copy-and-pasting. I’m against cheating and plagiarism. All of that. It’s against the student handbook.” Then she described, step-by-step, how on a recent Friday at 8 a.m., she called up an AI platform to help her write a four-to-five-page essay due two hours later.

Of course. When you ask students if they condone cheating, most say no. Most also say they do not cheat. Then, when you ask about what they do specifically, it’s textbook cheating. As I remember reading in Cheating in College, when you ask students to explain this disconnect, they often say, “Well, when I did it, it was not cheating.” Wendy is a good example.

In any case, this next section is long, and I regret sharing all of it. I really want people to read the article. But this, like so much of it, is worth reading. Even if you read it here.

More on Wendy:

Whenever Wendy uses AI to write an essay (which is to say, whenever she writes an essay), she follows three steps. Step one: “I say, ‘I’m a first-year college student. I’m taking this English class.’” Otherwise, Wendy said, “it will give you a very advanced, very complicated writing style, and you don’t want that.” Step two: Wendy provides some background on the class she’s taking before copy-and-pasting her professor’s instructions into the chatbot. Step three: “Then I ask, ‘According to the prompt, can you please provide me an outline or an organization to give me a structure so that I can follow and write my essay?’ It then gives me an outline, introduction, topic sentences, paragraph one, paragraph two, paragraph three.” Sometimes, Wendy asks for a bullet list of ideas to support or refute a given argument: “I have difficulty with organization, and this makes it really easy for me to follow.”

Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. Wendy delivered a tidy five-page paper at an acceptably tardy 10:17 a.m. When I asked her how she did on the assignment, she said she got a good grade. “I really like writing,” she said, sounding strangely nostalgic for her high-school English class — the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be? ” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”

I asked Wendy if I could read the paper she turned in, and when I opened the document, I was surprised to see the topic: critical pedagogy, the philosophy of education pioneered by Paulo Freire. The philosophy examines the influence of social and political forces on learning and classroom dynamics. Her opening line: “To what extent is schooling hindering students’ cognitive ability to think critically?” Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question. “I use AI a lot. Like, every day,” she said. “And I do believe it could take away that critical-thinking part. But it’s just — now that we rely on it, we can’t really imagine living without it.”

Unfortunately, we’ve read this before. Many times. Use of generative AI to outsource the effort of learning is rampant.

Want more? There’s also Daniel, a computer science student at the University of Florida:

AI has made Daniel more curious; he likes that whenever he has a question, he can quickly access a thorough answer. But when he uses AI for homework, he often wonders, If I took the time to learn that, instead of just finding it out, would I have learned a lot more? At school, he asks ChatGPT to make sure his essays are polished and grammatically correct, to write the first few paragraphs of his essays when he’s short on time, to handle the grunt work in his coding classes, to cut basically all cuttable corners. Sometimes, he knows his use of AI is a clear violation of student conduct, but most of the time it feels like he’s in a gray area. “I don’t think anyone calls seeing a tutor cheating, right? But what happens when a tutor starts writing lines of your paper for you?” he said.

When a tutor starts writing your paper for you, if you turn that paper in for credit you receive, that’s cheating. This is not complicated. People who sell cheating services and the people who buy them want to make it seem complicated. It’s not.

And the Teachers

Like the coverage of students, the article’s work with teachers is top-rate. And what they have to say is not one inch less important. For example:

Brian Patrick Green, a tech-ethics scholar at Santa Clara University, immediately stopped assigning essays after he tried ChatGPT for the first time. Less than three months later, teaching a course called Ethics and Artificial Intelligence, he figured a low-stakes reading reflection would be safe — surely no one would dare use ChatGPT to write something personal. But one of his students turned in a reflection with robotic language and awkward phrasing that Green knew was AI-generated. A philosophy professor across the country at the University of Arkansas at Little Rock caught students in her Ethics and Technology class using AI to respond to the prompt “Briefly introduce yourself and say what you’re hoping to get out of this class.”

Students are cheating — using AI to outsource their expected learning labor — in a class called Ethics and Artificial Intelligence. And in an Ethics and Technology class. At what point does reality’s absurdity outpace our ability to even understand it?

Also, as I’ve been barking about for some time now, low-stakes assignments are probably more likely to be cheated than high-stakes ones (see Issue 64). I don’t really get why professional educators don’t get this.

But returning to the topic:

After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,”

To read about Jollimore’s outstanding essay, see Issue 346.

And, of course, there’s more. Like the large section above, I regret copying so much of it, but it’s essential reading:

Many teachers now seem to be in a state of despair. In the fall, Sam Williams was a teaching assistant for a writing-intensive class on music and social change at the University of Iowa that, officially, didn’t allow students to use AI at all. Williams enjoyed reading and grading the class’s first assignment: a personal essay that asked the students to write about their own music tastes. Then, on the second assignment, an essay on the New Orleans jazz era (1890 to 1920), many of his students’ writing styles changed drastically. Worse were the ridiculous factual errors. Multiple essays contained entire paragraphs on Elvis Presley (born in 1935). “I literally told my class, ‘Hey, don’t use AI. But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out,’” Williams said.

Williams knew most of the students in this general-education class were not destined to be writers, but he thought the work of getting from a blank page to a few semi-coherent pages was, above all else, a lesson in effort. In that sense, most of his students utterly failed. “They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays. And I get it, because I hated writing essays when I was in school,” Williams said. “But now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.”

By November, Williams estimated that at least half of his students were using AI to write their papers. Attempts at accountability were pointless. Williams had no faith in AI detectors, and the professor teaching the class instructed him not to fail individual papers, even the clearly AI-smoothed ones. “Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT, and the departmental stance was, ‘Well, it’s a slippery slope, and we can’t really prove they’re using AI,’” Williams said. “I was told to grade based on what the essay would’ve gotten if it were a ‘true attempt at a paper.’ So I was grading people on their ability to use ChatGPT.”

The “true attempt at a paper” policy ruined Williams’s grading scale. If he gave a solid paper that was obviously written with AI a B, what should he give a paper written by someone who actually wrote their own paper but submitted, in his words, “a barely literate essay”? The confusion was enough to sour Williams on education as a whole. By the end of the semester, he was so disillusioned that he decided to drop out of graduate school altogether. “We’re in a new generation, a new time, and I just don’t think that’s what I want to do,” he said.

To be clear, the school is ignoring the obvious use of AI by students to avoid the work of learning — in violation of stated policies — and awarding grades, credit, and degrees anyway. Nearly universally, we are meeting lack of effort with lack of effort.

More from Jollimore:

He worries about the long-term consequences of passively allowing 18-year-olds to decide whether to actively engage with their assignments.

I worry about that too. I really want to use the past tense there — worried about. I think the age of active worry about this is over. Students are deciding what work they think is relevant or important — which I’d wager is next to none of it — and using AI to shrug off everything else. And again, the collective response of educators seems to be — who cares? Or, in some cases, to quit.

More on professors:

Some professors have resorted to deploying so-called Trojan horses, sticking strange phrases, in small white text, in between the paragraphs of an essay prompt. (The idea is that this would theoretically prompt ChatGPT to insert a non sequitur into the essay.) Students at Santa Clara recently found the word broccoli hidden in a professor’s assignment. Last fall, a professor at the University of Oklahoma sneaked the phrases “mention Finland” and “mention Dua Lipa” in his. A student discovered his trap and warned her classmates about it on TikTok. “It does work sometimes,” said Jollimore, the Cal State Chico professor. “I’ve used ‘How would Aristotle answer this?’ when we hadn’t read Aristotle. But I’ve also used absurd ones and they didn’t notice that there was this crazy thing in their paper, meaning these are people who not only didn’t write the paper but also didn’t read their own paper before submitting it.”

You can catch students using ChatGPT, if you want to. There are ways to do it, ways to limit it. And I wish the reporter had asked these teachers what happened to the students who were discovered. But I am sure I know the answer.

I guess also, I apologize. Some educators are engaged in the fight to protect and preserve the value of learning things. I feel that it’s far too few and that, more often than not, they are alone in this. It’s depressing.

Odds and Ends

In addition to its excellent narrative about how bad things actually are in a GPT-corrupted education system, the article has a few other bits worth sharing.

This, is pretty great:

Before OpenAI released ChatGPT in November 2022, cheating had already reached a sort of zenith. At the time, many college students had finished high school remotely, largely unsupervised, and with access to tools like Chegg and Course Hero. These companies advertised themselves as vast online libraries of textbooks and course materials but, in reality, were cheating multi-tools. For $15.95 a month, Chegg promised answers to homework questions in as little as 30 minutes, 24/7, from the 150,000 experts with advanced degrees it employed, mostly in India. When ChatGPT launched, students were primed for a tool that was faster, more capable.

Mentioning Chegg and Course Hero by name is strong work. Cheating multi-tools is precisely what they are.

I thought this was interesting too:

Students talk about professors who are rumored to have certain thresholds (25 percent, say) above which an essay might be flagged as an honor-code violation. But I couldn’t find a single professor — at large state schools or small private schools, elite or otherwise — who admitted to enforcing such a policy. Most seemed resigned to the belief that AI detectors don’t work. It’s true that different AI detectors have vastly different success rates, and there is a lot of conflicting data. While some claim to have less than a one percent false-positive rate, studies have shown they trigger more false positives for essays written by neurodivergent students and students who speak English as a second language.

I have a few things to say about this.

Students talk to one another. Remember a few paragraphs up where a student found the Trojan horse and posted it on social media? When teachers make efforts to stop cheating, to try catching disallowed use of AI, word gets around. Some students will try harder to get away with it. Others won’t try to cheat, figuring the risk isn’t worth it. Simply trying to stop it, in other words, will stop at least some of it.

I think the idea that most teachers think AI detectors don’t work is true. It’s not just teachers. Entire schools believe this. It’s an epic failure of messaging, an astonishing triumph of the misinformed. Truth is, as reported above, detectors do vary. Some are great. Some are junk. But the good ones work. Most people continue to not believe it.

And I’ll point out once again that the “studies have shown” thing is complete nonsense. As far as I have seen, exactly two studies have shown this, and both are deeply flawed. The one most often cited has made-up citations and research that is highly suspicious, which I pointed out in 2023 (see Issue 216). Frankly, I’ve not seen any good evidence to support this idea. As journalism goes, that’s a big miss in this story. It’s little wonder teachers think AI detectors don’t work.

On the subject of junk AI detectors, there’s also this:

I fed Wendy’s essay through a free AI detector, ZeroGPT, and it came back as 11.74 AI-generated, which seemed low given that AI, at the very least, had generated her central arguments. I then fed a chunk of text from the Book of Genesis into ZeroGPT and it came back as 93.33 percent AI-generated.

This is a failure to understand how AI detection works. But also ZeroGPT does not work. Again, it’s no wonder that teachers think AI detection does not work.

Continuing:

It’s not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students’ essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.

I don’t have nearly the bandwidth to get into this. But — sure. I have no doubt.

Finally, I am not sure if I missed this at the time, but this is important too:

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. In its first year of existence, ChatGPT’s total monthly visits steadily increased month-over-month until June, when schools let out for the summer. (That wasn’t an anomaly: Traffic dipped again over the summer in 2024.) Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human. Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education.

As I have said before, OpenAI is not your friend (see Issue 308). It’s a cheating engine. It can be used well, and ethically. But so can steroids. So could OxyContin. It’s possible to be handed the answers to every test you’ll ever take and not use them. But it is delusional to think any significant number of people don’t.

All wrapped up, this is a show-stopper of an article and I am very happy for the visibility it brings. I wish I could feel that it will make a difference.

Wednesday, April 30, 2025

The Role of U.S. Higher Education in Mass Surveillance: A Cornerstone of Authoritarianism and Fascism

In the digital age, surveillance has become a pervasive aspect of daily life. It reaches far beyond the government’s watchful eye; it infiltrates our personal spaces, our interactions, and even our educational institutions. In the United States, universities and colleges—typically seen as bastions of free thought and intellectual exploration—have, over the years, quietly embraced practices that align more with authoritarian surveillance than the fostering of academic freedom. The result is an unsettling convergence of education, technology, and control that deserves close scrutiny.

The Rise of Mass Surveillance in U.S. Higher Education

Mass surveillance is not a concept confined solely to government agencies or the private sector. U.S. colleges and universities have increasingly adopted surveillance technologies, often in ways that blur the lines between student safety, security, and governmental overreach. The methods used are diverse: from sophisticated digital monitoring of online activity to the installation of cameras throughout campuses, as well as the tracking of students’ movements and behaviors.

On-Campus Surveillance

Many campuses are equipped with thousands of security cameras, often without students' knowledge of the exact extent of their monitoring. These cameras track students' movements around buildings, dorms, and even outdoor spaces. Security personnel, working alongside private contractors, have access to this footage, creating a network of real-time surveillance. Additionally, some universities have partnered with police departments or government entities to share data from campus surveillance, effectively extending the government’s reach into spaces historically seen as separate from state control.

In some instances, universities have utilized facial recognition technology—a tool that, while growing in popularity among law enforcement and private corporations, is still highly controversial due to concerns about privacy, accuracy, and racial biases. Campuses like the University of California, Berkeley, and George Washington University have implemented or explored the use of facial recognition, drawing criticism from civil rights groups who argue that such technologies contribute to surveillance regimes that disproportionately target marginalized communities.

Digital Surveillance: Monitoring Online Activity

In the realm of digital surveillance, universities have also emerged as key players. The rapid digitization of academic spaces has made it easier for educational institutions to monitor and record students' online activities, including emails, internet browsing habits, and even participation in online discussions. These tools, ostensibly designed to protect students from online threats or cheating, can also be used to track the political views or social connections of students and faculty members.

University systems that monitor students' academic behavior are often integrated with third-party services that collect vast amounts of data. Companies like Google, which provide software for research and communication, have been instrumental in creating environments where personal data can be easily harvested and stored. As a result, students and faculty members are under constant scrutiny, even if they are unaware of the depth of data being collected on them.

Off-Campus Surveillance and Law Enforcement

While much of the surveillance happens on university grounds, the cooperation between educational institutions and law enforcement extends far beyond campus boundaries. Many universities share information with federal agencies like the FBI or local police departments, creating a synergy of surveillance that goes beyond the walls of academia. This collaboration is often justified as part of maintaining national security or preventing crimes, but it carries profound implications for privacy and civil liberties.

After the 9/11 attacks, for example, universities in the U.S. were encouraged to collaborate with federal intelligence agencies under the auspices of the USA PATRIOT Act and other anti-terrorism measures. This led to the surveillance of students’ political activities, associations, and even participation in protests. While much of this occurred covertly, the ramifications were far-reaching, particularly for marginalized groups who found themselves disproportionately surveilled due to their activism.

Surveillance of International Students: A First Step Toward Widespread Control

One of the most chilling aspects of surveillance on U.S. campuses is the specific targeting of international students. Historically, international students have been a vulnerable demographic in the context of surveillance and control. This began in earnest post-9/11, when the U.S. government imposed stricter regulations on foreign students, requiring universities to report on students' status, academic performance, and even their physical locations.

The Student and Exchange Visitor Information System (SEVIS) was established to track international students in real-time, linking student data to immigration and law enforcement agencies. While this system was presented as a means of ensuring national security, it effectively treated international students as suspects, placing them under heightened scrutiny. Universities, in turn, became instruments of surveillance, forced to comply with federal mandates to report any changes in a student's enrollment status, academic performance, or even the duration of their stay in the U.S.

For international students, this surveillance has been particularly invasive, as their movements—whether related to academic matters or personal lives—are constantly monitored by both their institutions and government entities. The stigma of being under the microscope contributes to a sense of alienation and powerlessness. It also encourages conformity, making it difficult for international students to freely express political or ideological dissent for fear of jeopardizing their academic status or immigration status.

The Threat of TPUSA’s Professor Watchlist

Another troubling element of surveillance within higher education is the growing trend of surveillance outside official university systems. Conservative student groups, particularly Turning Point USA (TPUSA), have taken it upon themselves to monitor and track the activities of professors whose political views they deem “liberal” or “left-wing.” One of TPUSA’s most controversial initiatives has been the creation of the Professor Watchlist, which compiles and publicly names professors accused of engaging in “liberal indoctrination” or promoting “liberal agendas.”

While TPUSA claims the Professor Watchlist is a tool to expose bias in academia, its purpose appears to be less about fostering academic debate and more about intimidating faculty members and curbing academic freedom. Professors listed on the watchlist are often subjected to harassment, threats, and, in some cases, professional repercussions, as conservative groups or donors seek to pressure universities into disciplining or firing faculty. The Watchlist represents a form of extrajudicial surveillance—non-governmental in origin but with highly political aims.

The real danger of such initiatives lies in their ability to undermine the independence of higher education. It is not just the professors listed who are impacted, but the entire academic community. Faculty members may begin to self-censor, avoiding controversial or politically sensitive topics for fear of being targeted, and students may find their ability to engage in free inquiry increasingly stifled.

The Professor Watchlist serves as a reminder that surveillance of academic institutions is not just the work of government agencies or private corporations; it is also deeply politicized, with various ideological groups using the tools of surveillance to exert control over education and the intellectual freedoms that it should represent.

Little Resistance: The Silence of Academia

Despite these troubling developments, resistance within academia has been minimal. Universities, which are supposed to serve as protectors of free speech, intellectual diversity, and civil liberties, have largely failed to challenge the growing surveillance apparatus both on and off their campuses. This silence is not without reason—many academic institutions have willingly participated in these surveillance efforts, citing concerns over campus security, student safety, and the desire to combat terrorism.

Additionally, many students and faculty members have become desensitized to surveillance. A generation raised in the digital age, where privacy is increasingly an afterthought and constant connectivity is the norm, may not fully grasp the implications of mass surveillance. Those who do speak out often find themselves at odds with institutional priorities or are silenced by threats of punishment, surveillance of their own activities, or other forms of retaliation.

The fear of retribution has also led to a chilling effect on dissent. Students who voice political opinions, especially those that challenge the status quo, may find themselves under increased scrutiny. This environment creates a culture where conformity reigns, and open discourse is stifled, not necessarily by overt repression, but by the omnipresent surveillance that discourages any behavior that might be deemed "out of line."

Mass Surveillance as a Tool of Authoritarianism and Fascism

The convergence of surveillance practices on college campuses with broader state interests should not be dismissed as incidental. Throughout history, mass surveillance has been a hallmark of authoritarian and fascist regimes. From Stalinist Russia to Nazi Germany, the power to monitor and control individuals through surveillance has been a tool used by oppressive governments to stifle dissent, control behavior, and consolidate power.

In a fascist regime, surveillance serves not just as a means of security, but as a tool of indoctrination and social control. The existence of surveillance constantly reminds individuals that they are being watched, creating a pervasive sense of fear and self-censorship. The same mechanism is increasingly visible in today’s U.S. higher education system, where students and faculty members may unconsciously internalize the need to comply with institutional norms, which are often shaped by external pressures from governmental and corporate entities.

The Implications for Democracy

The implications of this trend are far-reaching. When educational institutions no longer stand as a safe space for the free exchange of ideas, when they themselves become complicit in the surveillance of their own communities, it erodes the very foundation of democratic society. Free thought and intellectual exploration—the core tenets of higher education—cannot thrive in an atmosphere of constant monitoring and fear.

Mass surveillance on campuses also reinforces systemic inequalities. As surveillance technologies disproportionately affect marginalized groups—whether due to racial profiling, political dissent, or nationality—it contributes to a broader structure of control that undermines the principles of equal treatment and justice. In a society where the surveillance state extends into universities, it’s not hard to imagine a future where academic freedom becomes a thing of the past, with institutions serving instead as instruments of political and corporate control.

Conclusion

The role of U.S. higher education in the rise of mass surveillance—both on and off-campus—raises serious concerns about privacy, freedom, and the future of democratic values. Universities, which once stood as symbols of intellectual autonomy, are now complicit in the surveillance mechanisms that have come to define authoritarian and fascist regimes. The lack of widespread resistance from within academia only exacerbates the situation, highlighting the need for a renewed commitment to the values of free thought and privacy.

If we are to preserve the integrity of higher education as a space for critical thinking and dissent, we must confront the creeping normalization of surveillance in these institutions. It’s time for students, faculty, and administrators to take a stand, not just against the overt surveillance on campus, but against the creeping authoritarianism that it represents in the broader context of our society. The fight for academic freedom and privacy is not just a fight for the rights of students and educators—it’s a fight for the soul of democracy itself.

Monday, February 10, 2025

We asked AI to tell on itself... (Glen McGhee)

This is what AI told us about its own limitations.  

1. Illusion of Learning: Teachers may perceive an illusion of progress due to improved homework scores when students use ChatGPT. A study in Turkey found that while homework scores increased with ChatGPT use, final exam performance actually decreased by 17% for students using unprompted ChatGPT[4], that is, without guiding prompts, simply navigating it on their own]. This suggests that teachers might observe apparent progress that doesn't translate to actual learning.

2. Overestimation of Knowledge: Students using ChatGPT may experience "Illusory Knowledge," where they believe they've learned more than they actually have[4]. This could lead teachers to overestimate class progress based on students' self-reported understanding [[I have personally experienced this while teaching -- student head nods in synch with my points -- only to learn later that this was a student reflex to pacify the teacher. As a teacher, I find this befuddling.]]

3. Reduced Critical Thinking: Teachers might notice a decline in students' critical thinking and problem-solving abilities due to overreliance on ChatGPT[6]. This could create a false impression of progress in content knowledge while masking deficits in crucial cognitive skills.

4. Difficulty in Assessment: The use of ChatGPT poses challenges for teachers in accurately assessing student abilities, potentially leading to a skewed perception of class progress[1]. [[Yes!]]

It's important to note that research on this specific aspect of teacher perception is limited, and more studies are needed to fully understand the phenomenon of perceived illusory progress in classrooms using ChatGPT.

Citations:
[1] https://asiacall.info/proceedings/index.php/articles/article/download/81/64/757
[2] https://community.openai.com/t/chatgpt-4-and-the-illusion-of-learning-a-cautionary-tale-from-a-new-user/1009557
[3] https://rgsa.openaccesspublications.org/rgsa/article/download/8162/4017/29639
[4] https://www.oneusefulthing.org/p/post-apocalyptic-education
[5] https://huskiecommons.lib.niu.edu/cgi/viewcontent.cgi?article=8882&context=allgraduate-thesesdissertations
[6] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[7] https://www.researchgate.net/publication/382619812_Perceptions_of_pre-service_teachers_on_the_use_of_ChatGPT_during_their_training_and_the_future_implications_for_their_future_role_in_high_schools_A_research_in_progress
[8] https://www.ciocoverage.com/edtech-and-ai-in-education-the-illusion-of-progress/

Other Possible negative effects of ChatGPT usage include:

1. Cognitive thinning: ChatGPT usage may lead to a reduction in students' cognitive efforts and critical thinking skills. Students who rely heavily on ChatGPT for academic tasks may experience a decline in their ability to think critically and solve problems independently[1][2].
[[ChatGPT encourages a decline in cognitive development.]]

2. Memory loss: Research has found a concerning link between ChatGPT usage and memory loss in students. As students rely on the AI to quickly answer questions, they may reduce their own cognitive efforts, resulting in poor memory retention[2][4].
[[Surprisingly, there are lots of studies on this. Memory loss in adults is linked to dementia. What about kids?]]

3. Procrastination: Students who use ChatGPT frequently are more likely to procrastinate on their academic tasks. This may be due to the ease with which they can obtain information or complete assignments using the AI tool[2][6].
[[Undermines the development of time management skills.]]

4. Declining academic performance: Studies have shown a correlation between ChatGPT usage and a drop in GPA. This suggests that while the tool may make schoolwork easier in the short term, it may negatively impact overall academic performance[2][4]. [[I don't get this. Failing students now have a proverbial army they can muster at a moment's notice -- unless (below) they are too poor to access the technology?]]

5. Reduced cognitive load: While ChatGPT can ease students' cognitive load during research tasks, this comes at the expense of developing stronger arguments and critical thinking skills. Students using ChatGPT tend to produce weaker arguments compared to those using traditional research methods like Google[8]. 

These effects suggest a "thinning out of developmental cognitive strands" in the classroom, as students may become overly reliant on AI tools at the expense of developing crucial cognitive and academic skills. This trend could potentially lead to a less intellectually robust learning environment if not properly managed.

Citations:
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[2] https://futurism.com/the-byte/chatgpt-memory-loss-procrastination
[3] https://www.nea.org/nea-today/all-news-articles/chatgpt-enters-classroom-teachers-weigh-pros-and-cons
[4] https://evolutionnews.org/2024/03/chatgpt-linked-with-memory-loss-poor-academic-performance/
[5] https://blog.acceleratelearning.com/ai-in-education-the-impact-of-chatgpt
[6] https://www.psypost.org/chatgpt-linked-to-declining-academic-performance-and-memory-loss-in-new-study/
[7] https://gchumanrights.org/gc-preparedness/preparedness-science-technology/article-detail/the-chatgpt-effect-in-higher-education.html
[8] https://www.psypost.org/study-finds-chatgpt-eases-students-cognitive-load-but-at-the-expense-of-critical-thinking/

Since the widespread adoption of AI in education, several effects on both teachers and students have worsened in regard to the online delivery of instruction and assessment:

1. Academic dishonesty: The use of ChatGPT for cheating has become more prevalent and sophisticated. Over 25% of teachers have caught students using ChatGPT to cheat, leading to increased time spent on checking for AI plagiarism[5].

2. Decline in critical thinking: Students' overreliance on ChatGPT has led to a more significant decrease in critical thinking and problem-solving skills. This effect is particularly concerning in subjects requiring complex analysis, such as literature, history, and philosophy[1][2].

3. Reduced creativity: The ease of generating content with ChatGPT has further diminished students' ability to express original ideas and perspectives, leading to more formulaic and less creative work[1][5].

4. Widening equity gaps: The introduction of AI tools has exacerbated existing inequalities, as not all students have equal access to or can afford AI tools like ChatGPT, creating an unfair advantage for those who do[1].

5. Increased cognitive offloading: Studies have shown a more pronounced decline in memory retention and independent learning as students increasingly rely on AI for information retrieval and task completion[1][2].

6. Compromised learning outcomes: The overuse of ChatGPT has led to a more significant negative impact on educational support and reduced social interaction between students and faculty, affecting the overall learning experience[2].

7. Challenges in assessment: Teachers are facing greater difficulties in accurately assessing students' knowledge and understanding due to the prevalence of AI-generated work[1][5].

8. Increased workload for teachers: Educators are spending more time revamping lesson plans to be "AI-proof" and developing new strategies to maintain academic integrity[5]. [[To say nothing about time wasted trying to catch cheaters and making the accusations stick? Why bother ....]]

9. Erosion of research skills: Students' ability to conduct independent research and critically evaluate sources has further deteriorated due to the convenience of AI-generated responses[2][4].

10. Perpetuation of biases: The increased use of ChatGPT has led to more instances of unintentional biases being reinforced in educational settings, as the AI may generate responses that reflect prejudices or imbalances present in its training data[2].

Regarding similar effects of online instructional delivery, even BEFORE the adoption of AI:
Many of the effects attributed to ChatGPT in education were already observed in online learning environments before the widespread adoption of AI, albeit to varying degrees:

1. Reduced human interaction: Online education has long grappled with the challenge of maintaining meaningful teacher-student and student-student interactions. This issue predates AI adoption, with concerns about social isolation and decreased engagement being prevalent in early online learning research[3][6].

2. Changes in assessment methods: The shift to online learning necessitated changes in assessment strategies to combat academic dishonesty, a challenge that existed before AI but has been exacerbated by it[1].

3. Time-saving for teachers: Learning Management Systems (LMS) and other educational technologies have been assisting teachers with tasks like grading and content delivery for years, though not to the extent that AI now promises[2].

4. Equity issues: The digital divide and unequal access to technology have been persistent challenges in online education, mirroring current concerns about AI accessibility[1][7].

5. Privacy concerns: Data privacy and security issues have been central to online education discussions long before AI entered the picture[1][4].

6. Potential for personalized learning: Adaptive learning systems, precursors to AI-driven personalization, have been a feature of online education for some time[5].

7. Risk of overreliance on technology: This concern has been present since the early days of educational technology integration, though AI intensifies it[1].

8. Impact on critical thinking skills: There have been ongoing debates about how online learning affects the development of critical thinking skills, a concern now amplified with AI[1][5].

9. Transformation of teaching roles: Online education has been gradually shifting the role of teachers from lecturers to facilitators, a trend that AI is accelerating[2][7].

10. Need for teacher training: The transition to online teaching has always required professional development, similar to the current need for AI-related training[8].

While many of these effects were present in online education before AI, the introduction of AI has generally intensified and accelerated these trends. The key difference lies in the scale, speed, and sophistication that AI brings to these existing challenges and opportunities in online education.

Citations:
[1] https://www.educationnext.org/a-i-in-education-leap-into-new-era-machine-intelligence-carries-risks-challenges-promises/
[2] https://www.waldenu.edu/programs/education/resource/five-pros-and-cons-of-ai-in-the-education-sector
[3] https://education.illinois.edu/about/news-events/news/article/2024/10/24/ai-in-schools--pros-and-cons
[4] https://www.ed.gov/sites/ed/files/documents/ai-report/ai-report.pdf
[5] https://pmc.ncbi.nlm.nih.gov/articles/PMC9513244/
[6] https://openedx.org/blog/what-is-the-impact-of-ai-in-the-online-learning-space/
[7] https://www.onlineeducation.com/features/ai-in-distance-learning
[8] https://er.educause.edu/articles/2024/1/will-our-educational-system-keep-pace-with-ai-a-students-perspective-on-ai-and-learning

ChatGPT has introduced several negative changes to the role of teachers in the classroom:

1. Increased workload: Teachers are spending more time checking for AI plagiarism and revamping lesson plans to be "AI-proof"[4]. This additional burden takes away from time that could be spent on more meaningful educational activities.

2. Diminished teacher-student interaction: The use of ChatGPT for educational support can reduce opportunities for face-to-face interactions between students and faculty, potentially affecting the learning experience and social aspects of education[1].

3. Challenges in assessment: Teachers face difficulties in accurately assessing students' knowledge and understanding when AI tools are used to complete assignments[2]. This complicates the evaluation process and may lead to inaccurate assessments of student progress.

4. Shift in pedagogical focus: Educators are forced to adapt their teaching methods to counteract the negative effects of AI tools, potentially moving away from traditional teaching approaches that may still hold value[4].

5. Ethical dilemmas: Teachers must navigate new ethical challenges related to AI use in education, including issues of academic integrity and equitable access to technology[2][3].

6. Potential overreliance on AI: There is a risk that teachers might start relying on AI tools before developing in-depth knowledge about teaching and classroom management, potentially compromising the quality of education[7].

7. Reduced emphasis on critical thinking: As AI tools can provide quick answers, teachers may find it challenging to encourage students to engage in deep, critical thinking processes[1][2].

8. Perpetuation of biases: Teachers must be vigilant about the potential biases in AI-generated content, which could inadvertently reinforce stereotypes or prejudices in the classroom[1][3].

These changes highlight the complex challenges teachers face in integrating AI tools like ChatGPT into education while maintaining the quality and integrity of the learning experience.

Citations:
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC10670526/
[2] https://www.techbusinessnews.com.au/blog/chatgpt-may-lead-to-the-downfall-of-eduction-and-critical-thinking/
[3] https://citejournal.org/volume-23/issue-1-23/editorial/editorial-chatgpt-challenges-opportunities-and-implications-for-teacher-education/
[4] https://www.nea.org/nea-today/all-news-articles/chatgpt-enters-classroom-teachers-weigh-pros-and-cons
[5] https://www.pewresearch.org/short-reads/2024/05/15/a-quarter-of-u-s-teachers-say-ai-tools-do-more-harm-than-good-in-k-12-education/
[6] https://blogs.worldbank.org/en/education/how-use-chatgpt-support-teachers-good-bad-and-ugly
[7] https://www.tc.columbia.edu/articles/2023/january/navigating-the-risks-and-rewards-of-chatgpt/