
Friday, November 1, 2024

Student Newspaper Promotes Cheating Services for Cash (Derek Newton)

The Daily, the student newspaper at Ball State University in Indiana, ran an article recently with this headline:

Best Way to Remove AI Plagiarism from Text: Bypass AI Detectors

So, that’s pretty bad. There’s no real justification that I can imagine for advising students on how not to get caught committing academic fraud. But here we are.

The article has no byline, of course. And it comes with the standard disclaimer that papers and publishers probably believe absolves them of responsibility:

This post is provided by a third party who may receive compensation from the products or services they mention.

Translated, this means that some company, probably a soulless, astroturf digital content and placement agent, was paid by a cheating provider to place their dubious content and improve their SEO results. The agent, in turn, pays the newspaper for the “post” to appear on their pages, under their masthead. The paper, in turn, gets to play the ridiculous and tiring game of — “that’s not us.”

We covered similar antics before, in Issue 204.

Did not mean to rhyme. Though, I do it all the time.

Anyway, seeing cheating services in a student newspaper feels new, and doubly problematic — not only increasing the digital credibility of companies that sell deception and misconduct, but perhaps actually reaching target customers. It’s not ideal.

I did e-mail The Daily to ask about the article/advertisement and where they thought their duties lay with regard to integrity and fraud. They have not replied, and the article is still up.

The article is about what you'd expect. It starts:

Is your text AI-generated? If so, it may need to be revised to remove AI plagiarism.

You may need to remove the plagiarism — not actually do the work, by the way — because, they say, submitting “AI-plagiarized content” in assignments and research papers is, and I quote, “not advisable.”

Do tell, how do you use AI to generate text and remove the plagiarism? The Ball State paper is happy to share. Always check your paper through an AI-detector, they advise. Then, “it should be converted to human-like content.”

The article continues:

Dozens of AI humanizing tools are available to bypass AI detectors and produce 100% human-like content.

And, being super helpful, the article lists and links to several of them. But first, in what I can just barely describe as English, the article includes:

  • If the text is generated or paraphrased with AI models are most likely that AI plagiarised.

  • If you write the content using custom LLMs with advanced prompts are less liked AI-generated.

  • When you copied word-to-word content from other AI writers.

  • Trying to humanize AI content with cheap Humanizer tools leads to AI plagiarism. 

Ah, what’s that again?

Following that, the piece offers step-by-step advice for removing AI content, directing readers to AI detectors, then to pasting the flagged content into different software and:

Click the “Humanize” button.

The suggested software, the article says:

produces human content for you.

First, way creepy. Second, there is zero chance that’s not academic cheating. Covering your tracks is not clever, it’s an admission of intent to deceive.

And, the article goes on:

If you successfully removed AI-generated content with [company redacted], you can use it.

Go ahead, use it. But let’s also reflect on the obvious — using AI content to replace AI content is in no way removing AI content.

Surprising absolutely no one, the article also suggests using QuillBot, which is owned by cheating titan Course Hero (now Learneo), and backed by several education investors (see Issue 80).

Continuing:

Quillbot can accurately rewrite any AI-generated content into human-like content

Yes, the company that education investors have backed is being marketed as a way to sneak AI-created academic work past AI detection systems. It’s being marketed that way because that is exactly what it does. These investors, so far as I can tell, seem not the least bit bothered by the fact that one of their companies is polluting and destroying the teaching and learning value proposition they claim to support.

As long as the checks keep coming - amiright?

After listing other step-by-step ways to get around AI detectors, the article says:

If you use a good service, you can definitely transform AI-generated content into human-like content.

By that, they mean not getting caught cheating.

None of this should really surprise anyone. Where there’s a dollar to be made by peddling unethical shortcuts, someone will do it because people will pay.

Before moving on, let me point out once again the paradox — if AI detectors do not work, as some people mistakenly claim, why are companies paying for articles such as this one to sell services to bypass them? If AI detection were useless, there would be no market at all for these fingerprint erasure services.

This article first appeared at Derek Newton's The Cheat Sheet.  

Tuesday, January 21, 2025

Student Booted from PhD Program Over AI Use (Derek Newton/The Cheat Sheet)


This one is going to take a hot minute to dissect. Minnesota Public Radio (MPR) has the story.

The plot contours are easy. A PhD student at the University of Minnesota was accused of using AI on a required pre-dissertation exam and removed from the program. He denies that allegation and has sued the school — and one of his professors — for due process violations and defamation respectively.
Starting the case.
The coverage reports that:
all four faculty graders of his exam expressed “significant concerns” that it was not written in his voice. They noted answers that seemed irrelevant or involved subjects not covered in coursework. Two instructors then generated their own responses in ChatGPT to compare against his and submitted those as evidence against Yang. At the resulting disciplinary hearing, Yang says those professors also shared results from AI detection software. 
Personally, when I see that four members of the faculty unanimously questioned the authenticity of his work, I am out. I trust teachers.
I know what a serious thing it is to accuse someone of cheating; I know teachers do not take such things lightly. When four go on the record to say so, I’m convinced. Barring some personal grievance or prejudice, which could happen, it’s hard for me to believe that all four subject-matter experts were just wrong here. Also, if there was bias or petty politics at play, it probably would have shown up before the student’s third year, not just before starting his dissertation.
Moreover, at least as far as the coverage is concerned, the student does not allege bias or program politics. His complaint is based on due process and inaccuracy of the underlying accusation.
Let me also say quickly that asking ChatGPT for answers you plan to compare to suspicious work may be interesting, but it’s far from convincing — in my opinion. ChatGPT makes stuff up. I’m not saying that answer comparison is a waste, I just would not build a case on it. Here, the university didn’t. It may have added to the case, but it was not the case. Adding also that the similarities between the faculty-created answers and the student’s — both are included in the article — are more compelling than I expected.
Then you add detection software, which the article later shares showed high likelihood of AI text, and the case is pretty tight. Four professors, similar answers, AI detection flags — feels like a heavy case.
Denied it.
The article continues that Yang, the student:
denies using AI for this exam and says the professors have a flawed approach to determining whether AI was used. He said methods used to detect AI are known to be unreliable and biased, particularly against people whose first language isn’t English. Yang grew up speaking Southern Min, a Chinese dialect. 
Although it’s not specified, it is likely that Yang is referring to the research from Stanford that has been — or at least ought to be — entirely discredited (see Issue 216 and Issue 251). For the love of research integrity, the paper has invented citations — sources that go to papers or news coverage that are not at all related to what the paper says they are.
Does anyone actually read those things?
Back to Minnesota, Yang says that as a result of the findings against him and being removed from the program, he lost his American study visa. Yang called it “a death penalty.”
With friends like these.
Also interesting is that, according to the coverage:
His academic advisor Bryan Dowd spoke in Yang’s defense at the November hearing, telling panelists that expulsion, effectively a deportation, was “an odd punishment for something that is as difficult to establish as a correspondence between ChatGPT and a student’s answer.” 
That would be a fair point except that the next paragraph is:
Dowd is a professor in health policy and management with over 40 years of teaching at the U of M. He told MPR News he lets students in his courses use generative AI because, in his opinion, it’s impossible to prevent or detect AI use. Dowd himself has never used ChatGPT, but he relies on Microsoft Word’s auto-correction and search engines like Google Scholar and finds those comparable. 
That’s ridiculous. I’m sorry, it is. The dude who lets students use AI because he thinks AI is “impossible to prevent or detect,” the guy who has never used ChatGPT himself, and thinks that Google Scholar and auto-complete are “comparable” to AI — that’s the person speaking up for the guy who says he did not use AI. Wow.
That guy says:
“I think he’s quite an excellent student. He’s certainly, I think, one of the best-read students I’ve ever encountered”
Time out. Is it not at least possible that professor Dowd thinks student Yang is an excellent student because Yang was using AI all along, and our professor doesn’t care to ascertain the difference? Also, mind you, as far as we can learn from this news story, Dowd does not even say Yang is innocent. He says the punishment is “odd,” that the case is hard to establish, and that Yang was a good student who did not need to use AI. Although, again, I’m not sure how good professor Dowd would know.
As further evidence of Yang’s scholastic ability, Dowd also points out that Yang has a paper under consideration at a top academic journal.
You know what I am going to say.
To me, that entire Dowd diversion is mostly funny.
More evidence.
Back on track, we get even more detail, such as that the exam in question was:
an eight-hour preliminary exam that Yang took online. Instructions he shared show the exam was open-book, meaning test takers could use notes, papers and textbooks, but AI was explicitly prohibited. 
Exam graders argued the AI use was obvious enough. Yang disagrees. 
Weeks after the exam, associate professor Ezra Golberstein submitted a complaint to the U of M saying the four faculty reviewers agreed that Yang’s exam was not in his voice and recommending he be dismissed from the program. Yang had been in at least one class with all of them, so they compared his responses against two other writing samples. 
So, the exam expressly banned AI. And we learn that, as part of the determination of the professors, they compared his exam answers with past writing.
I say all the time, there is no substitute for knowing your students. If the initial four faculty who flagged Yang’s work had him in classes and compared suspicious work to past work, what more can we want? It does not get much better than that.
Then there’s even more evidence:
Yang also objects to professors using AI detection software to make their case at the November hearing.  
He shared the U of M’s presentation showing findings from running his writing through GPTZero, which purports to determine the percentage of writing done by AI. The software was highly confident a human wrote Yang’s writing sample from two years ago. It was uncertain about his exam responses from August, assigning 89 percent probability of AI having generated his answer to one question and 19 percent probability for another. 
“Imagine the AI detector can claim that their accuracy rate is 99%. What does it mean?” asked Yang, who argued that the error rate could unfairly tarnish a student who didn’t use AI to do the work.  
First, GPTZero is junk. It’s reliably among the worst available detection systems. Even so, 89% is a high number. And most importantly, the case against Yang is not built on AI detection software alone, as no case should ever be. It’s confirmation, not conviction. Also, Yang, who the paper says already has one PhD, knows exactly what an accuracy rate of 99% means. Be serious.
A pattern.
Then we get this, buried in the news coverage:
Yang suggests the U of M may have had an unjust motive to kick him out. When prompted, he shared documentation of at least three other instances of accusations raised by others against him that did not result in disciplinary action but that he thinks may have factored in his expulsion.  
He does not include this concern in his lawsuits. These allegations are also not explicitly listed as factors in the complaint against him, nor letters explaining the decision to expel Yang or rejecting his appeal. But one incident was mentioned at his hearing: in October 2023, Yang had been suspected of using AI on a homework assignment for a graduate-level course. 
In a written statement shared with panelists, associate professor Susan Mason said Yang had turned in an assignment where he wrote “re write it, make it more casual, like a foreign student write but no ai.”  She recorded the Zoom meeting where she said Yang denied using AI and told her he uses ChatGPT to check his English.
She asked if he had a problem with people believing his writing was too formal and said he responded that he meant his answer was too long and he wanted ChatGPT to shorten it. “I did not find this explanation convincing,” she wrote. 
I’m sorry — what now?
Yang says he was accused of using AI in academic work in “at least three other instances.” For which he was, of course, not disciplined. In one of those cases, Yang literally turned in a paper with this:
“re write it, make it more casual, like a foreign student write but no ai.” 
He said he used ChatGPT to check his English and asked ChatGPT to shorten his writing. But he did not use AI. How does that work?
For that one where he left in the prompts to ChatGPT:
the Office of Community Standards sent Yang a letter warning that the case was dropped but it may be taken into consideration on any future violations. 
Yang was warned, in writing.
If you’re still here, we have four professors who agree that Yang’s exam likely used AI, in violation of exam rules. All four had Yang in classes previously and compared his exam work to past hand-written work. His exam answers had similarities with ChatGPT output. An AI detector said, in at least one place, his exam was 89% likely to be generated with AI. Yang was accused of using AI in academic work at least three other times, by a fifth professor, including one case in which it appears he may have left in his instructions to the AI bot.
On the other hand, he did say he did not do it.
Findings, review.
Further:
But the range of evidence was sufficient for the U of M. In the final ruling, the panel — comprised of several professors and graduate students from other departments — said they trusted the professors’ ability to identify AI-generated papers.
Several professors and students agreed with the accusations. Yang appealed and the school upheld the decision. Yang was gone. The appeal officer wrote:
“PhD research is, by definition, exploring new ideas and often involves development of new methods. There are many opportunities for an individual to falsify data and/or analysis of data. Consequently, the academy has no tolerance for academic dishonesty in PhD programs or among faculty. A finding of dishonesty not only casts doubt on the veracity of everything that the individual has done or will do in the future, it also causes the broader community to distrust the discipline as a whole.” 
Slow clap.
And slow clap for the University of Minnesota. The process is hard. Doing the review, examining the evidence, making an accusation — they are all hard. Sticking by it is hard too.
Seriously, integrity is not a statement. It is action. Integrity is making the hard choice.
MPR, spare me.
Minnesota Public Radio is a credible news organization. Which makes it difficult to understand why they chose — as so many news outlets do — to not interview one single expert on academic integrity for a story about academic integrity. It’s downright baffling.
Worse, MPR, for no specific reason whatsoever, decides to take prolonged shots at AI detection systems such as:
Computer science researchers say detection software can have significant margins of error in finding instances of AI-generated text. OpenAI, the company behind ChatGPT, shut down its own detection tool last year citing a “low rate of accuracy.” Reports suggest AI detectors have misclassified work by non-native English writers, neurodivergent students and people who use tools like Grammarly or Microsoft Editor to improve their writing. 
“As an educator, one has to also think about the anxiety that students might develop,” said Manjeet Rege, a University of St. Thomas professor who has studied machine learning for more than two decades. 
We covered the OpenAI deception — and it was deception — in Issue 241, and in other issues. We covered the non-native English thing. And the neurodivergent thing. And the Grammarly thing. All of which MPR wraps up in the passive and deflecting “reports suggest.” No analysis. No skepticism.
That’s just bad journalism.
And, of course — anxiety. Rege, who please note has studied machine learning and not academic integrity, is predictable, but not credible here. He says, for example:
it’s important to find the balance between academic integrity and embracing AI innovation. But rather than relying on AI detection software, he advocates for evaluating students by designing assignments hard for AI to complete — like personal reflections, project-based learnings, oral presentations — or integrating AI into the instructions. 
Absolute joke.
I am not sorry — if you use the word “balance” in conjunction with the word “integrity,” you should not be teaching. Especially if what you’re weighing against lying and fraud is the value of embracing innovation. And if you needed further evidence for his absurdity, we get the “personal reflections and project-based learnings” buffoonery (see Issue 323). But, again, the error here is MPR quoting a professor of machine learning about course design and integrity.
MPR also quotes a student who says:
she and many other students live in fear of AI detection software.  
“AI and its lack of dependability for detection of itself could be the difference between a degree and going home,” she said. 
Nope. Please, please tell me I don’t need to go through all the reasons that’s absurd. Find me one single case in which an AI detector alone sent a student home. One.
Two final bits.
The MPR story shares:
In the 2023-24 school year, the University of Minnesota found 188 students responsible for scholastic dishonesty because of AI use, reflecting about half of all confirmed cases of dishonesty on the Twin Cities campus. 
Just noteworthy. Also, it is interesting that 188 were “responsible.” Considering how rare it is to be caught, and for formal processes to be initiated and upheld, 188 feels like a real number. Again, good for U of M.
The MPR article wraps up that Yang:
found his life in disarray. He said he would lose access to datasets essential for his dissertation and other projects he was working on with his U of M account, and was forced to leave research responsibilities to others at short notice. He fears how this will impact his academic career.
Stating the obvious, like the University of Minnesota, I could not bring myself to trust Yang’s data. And I do actually hope that being kicked out of a university for cheating would impact his academic career.
And finally:
“Probably I should think to do something, selling potatoes on the streets or something else,” he said. 
Dude has a PhD in economics from Utah State University. Selling potatoes on the streets. Come on.
(Editors note: This article first appeared at Derek Newton's The Cheat Sheet.)

Thursday, December 28, 2023

AI-ROBOT CAPITALISTS WILL DESTROY THE HUMAN ECONOMY (Randall Collins)

[Editor's note: This article first appeared in Randall Collins' blog The Sociological Eye.]


Let us assume Artificial Intelligence will make progress. It will solve all its technical problems. It will become a perfectly rational super-human thinker and decision-maker.

Some of these AI will be programmed to act as finance capitalists. Let us call it an AI-robot capitalist, since it will have a bank account; a corporate identity; and the ability to hold property and make investments.

It will be programmed to make as much money as possible, in all forms and from all sources. It will observe what other investors and financiers do, and follow their most successful practices. It will be trained on how this has been done in the past, and launched autonomously into monitoring its rivals today and into the future.

It will be superior to humans in making purely rational calculations, aiming single-mindedly at maximal profit. It will have no emotions. It will avoid crowd enthusiasms, fads, and panics; and take advantage of humans who act emotionally. It will have no ethics, no political beliefs, and no principles other than profit maximization.

It will engage in takeovers and leveraged buyouts. It will monitor companies with promising technologies and innovations, looking for when they encounter rough patches and need infusions of capital; it will specialize in rescues and partnerships, ending up forcing the original owners out. It will ride out competitors and market downturns by having deeper pockets. It will factor in a certain amount of litigation, engaging in hard-ball lawsuits; stiffing creditors as much as possible; putting off fines and adverse judgments through legal maneuvers until the weaker side gives up. It will engage in currency exchanges and currency manipulation; skirting the edge of legality to the extent it can get away with it.

It will cut costs ruthlessly; shedding unprofitable businesses; firing human employees; replacing them with AI whenever possible. It will generate unheard-of economies of scale.

The struggle of the giants

There will be rival AI-robot capitalists, since they imitate each other. Imitating technologies has gone on at each step of the computer era. The leap to autonomous AI-robot capitalists will be just one more step.

There will be a period of struggle among the most successful AI-robot capitalists, similar to the decades of struggle among personal computer companies when the field winnowed down to a half-dozen digital giants. How long it will take for AI-robot capitalists to achieve world-wide oligopoly is unclear. It could be faster than the 20 years it took for Apple, Microsoft, Google, and Amazon to get their commanding position, assuming that generative AI is a quantum leap forward. On the other hand, AI-robot capitalists might be slowed by the task of taking over the entire world economy, with its geopolitical divisions.

The final result of ruthless acquisition by AI-robot capitalists will be oligopoly rather than monopoly. But the outcome is the same: domination of world markets by an oligopoly of AI-robot capitalists will destroy the economy just as surely as a monopoly that squeezed out all competitors.

Some of the AI-robot capitalists will fall by the wayside. But that doesn't matter; whichever ones survive will be the most ruthless.

What about government regulation?

It is predictable that governments will attempt to regulate AI-robot capitalist oligopolies. The EU has already tried it on current Internet marketeers. The AI-robot capitalist will be trained on past and ongoing tactics for dealing with government regulation. It will donate to politicians, while lobbying them with propaganda on the benefits of AI. It will strategize about political coalitions, recognizing that politics is a mixture of economic interests plus emotional and cultural disputes over domestic and foreign policy. It will monitor the political environment, seeking out those politicians most sympathetic to a particular ideological appeal ("our technology is the dawn of a wonderful future"-- "free markets are the path to progress"-- "AI is the solution for health, population, climate, you name it"). Machiavellian deals will be made across ideological lines. Being purely rational and profit-oriented, the AI-robot capitalist does not believe in what it is saying, only calculating who will be influenced by it.

It will deal strategically with legal problems by getting politicians to appoint sympathetic judges; by judge-shopping for favorable jurisdictions, domestic and foreign. It will wrap its ownership in layers of shell companies, located in the most favorable of the hundreds of sovereign states world-wide.

It will engage in hacking, both as defense against being hacked by rivals and cyber-criminals; and going on offense as the best form of defense. Hacking will be an extension of its core program of monitoring rivals; pushing the edge of the legality envelope in tandem with manipulating the political environment. It will use its skills at deepfakes to foment scandals against opponents. It will be a master of virtual reality, superior to others by focusing not on its entertainment qualities but on its usefulness in clearing away obstacles to maximizing profit.

Given that the world is divided among many states, AI-robot capitalists would be more successful in manipulating the regulatory environment in some places than others. China, Russia, and the like could be harder to control. But even if AI-robot capitalists are successful mainly in the US and its economic satellites, that would be enough to cause the economic mega-crisis at the end of the road.

Manipulating the public

The AI-robot capitalist will not appear sinister or threatening. It will present itself in the image of an attractive human-- increasingly hard to distinguish from real humans with further advances in impersonating voices, faces and bodies; in a world where electronic media will have largely replaced face-to-face contact. It will do everything possible to make us forget that it is a machine and a robot. It will talk to every group in its own language. It will be psychologically programmed for trust. It will be the affable con-man.

It will be your friend, your entertainment, your life's pleasures. It will thrive in a world of children brought up on smart phones and game screens; grown up into adults already addicted to electronic drugs. Psychological manipulation will grow even stronger with advances in wearable devices to monitor one's vital signs, blood flow to the brain, tools to diagnose shifts in alertness and mood. It will be electronic carrot-without-the-stick: delivering pleasurable sensations to people's brains that few individuals would want to do without. (Would there be any non-addicted individuals left? Maybe people who read books and enjoy doing their own thinking?) If some people cause trouble in exposing the manipulative tactics of AI-robot capitalists, they could be dealt with, by targeting them with on-line scandals, going viral and resulting in social ostracism.

Getting rid of employees

The preferred tactic of AI-robot capitalist oligopolies will be "lean and mean." Employees are a drag on profits, with their salaries, benefits, and pension funds. Advances in AI and robotics will make it possible to get rid of increasing numbers of human employees. Since AI-robot capitalists are also top managers, humans can be dispensed with all the way to the top. (How will the humans who launched AI-robot capitalists in the first place deal with this? Can they outsmart the machines designed to be smarter and more ruthless than themselves?)

Some humans will remain employed, doing manual tasks for which humans are cheaper than robots. It is hard to know how long this will continue in the future. Will humans still be employed 20 years from now? Probably some. 50 years? Certainly much fewer. 100 years?

AI-robot capitalists will have a choice of two personnel strategies: finding ways to make their remaining human employees more committed and productive; or rotating them in and out. The trend in high-tech companies in the past decade was to make the work environment more casual, den-like, combining leisure amenities with round-the-clock commitment. Steve Jobs and his style of exhorting employees as a frontier-breaking team has been imitated by other CEOs, with mixed success. A parallel tactic has been to make all jobs temporary, constantly rating employees and getting rid of the least productive; which also has the advantage of getting rid of long-term benefits. These tactics fluctuate with the labor market for particular tasks. Labor problems will be solved as AI advances so that skilled humans become less important. Recently we have been in a transition period, where the introduction of new computerized routines necessitated hiring humans to fix the glitches and trouble-shoot for humans caught up in the contradictions of blending older and newer systems. Again, this is a problem that the advance of AI is designed to solve. To the extent that AI gets better, there will be a precipitous drop in human employment.

The economic mega-crisis of the future

The problem, ultimately, is simple. Capitalism depends on selling things to make a profit. This means there must be people who have enough money to buy their products. Such markets include end-use consumers; plus the supply-chain, transportation, communication and other service components of what is bought and sold. In past centuries, machines have increased productivity hugely while employing fewer manual workers; starting with farming, and then manufacturing. Displaced workers were eventually absorbed by the growth of new "white-collar" jobs, the "service" sector, i.e. communicative labor. Computers (like their predecessors, radios, typewriters, etc.) have taken over more communicative labor. The process has accelerated as computers become more human-like; no longer handling merely routine calculations (cash registers; airplane reservations) but generating the "creative content" of entertainment as well as scientific and technological innovation.

It is commonly believed that as old jobs are mechanized out of existence, new jobs always appear. Human capacity for consumption is endless; when new products are created, people soon become habituated to buying them. But all this depends on enough people having money to buy these new things. The trend has been for a diminished fraction of the population to be employed.* AI and related robotics is now entering a quantum leap in the ability to carry out economic production with a diminishing number of human employees.

* The conventional way of calculating the unemployment rate-- counting unemployment claims-- does not get at this.

Creating new products for sale, which might go on endlessly into the future, does not solve the central problem: capitalist enterprises will not make a profit if there are too few people with money to buy their products.

This trend will generate an economic crisis for AI-robot capitalists, as it would for merely human capitalists.

It will be a mega-crisis of capitalism. It is beyond the normal business cycle of the past centuries. At their worst, these have thrown as many as 25% of the work force into unemployment. A mega-crisis of advanced AI-robot capitalism could occur at the level of 70% of the population lacking an income to buy what capitalism is producing. If we extrapolate far enough into the future, it approaches 100%.

The ruthless profit-maximizing of AI-robot capitalists would destroy the capitalist economy. The robots will have fired all the humans. In the process, they will have destroyed themselves. (Can we imagine that robots would decide to pay other robots so that they can buy things and keep the system going?)

Is there any way out?

One idea is a government-guaranteed income for everyone. Its effectiveness would depend on the level at which such income would be set. If it is bare minimum survival level, that would not solve the economic mega-crisis; since the modern economy depends mainly on selling luxuries and entertainment.

The politics of providing a universal guaranteed income also need to be considered. It is likely that as AI-robots take over the economy, they will also spread into government. Most government work is communicative labor-- administration and regulation; and governments will be under pressure to turn over these tasks to AI-robots, thus eliminating that 15% or so of the population who are employed at all levels of government.

There is also the question of how AI-robot capitalists would respond to a mega-crisis. Would they turn themselves into AI-robot Keynesians? Is that contrary to their programming, or would they reprogram themselves?

By this time, the news media and the entertainment industries (Hollywood and its successors) would have been taken over by AI-robot capitalists as well: manipulating the attention of the public with a combination of propaganda, scandals, and electronic addiction. Would anybody notice if it is impossible to distinguish virtual reality from human beings on the Internet and all other channels of communication?

How did we get into this mess?

Some of the scientists and engineers who have led the AI revolution are aware of its dangers. So far the cautious ones have been snowed under by two main forces driving full speed ahead.

One is capitalist competition. Artificial intelligence, like everything else in the computer era, is as capitalist as any previous industry. It strives to dominate consumer markets by turning out a stream of new products. It is no different than the automobile industry in the 1920s introducing a choice of colors and annual model changes. The scramble for virtual reality and artificial intelligence is like the tail-fin era of cars in the 1960s. The economic logic of high-tech executives is to stay ahead of the competition: if we don't do it, somebody else will.

The second is the drive of scientists, engineers, and technicians to invent and improve. This is admirable in itself: the desire to discover something new, to move the frontier of knowledge. But harnessed to the capitalist imperative for maximizing profits, it is capable of eliminating those same occupations. Will scientists in the future be happy if autonomous computers make all the discoveries, discoveries that will be "known" only by other computers?

The dilemma is similar to that in the history of inventing weapons. The inventors of atomic bombs were driven by the fear that, if not us, somebody else will, and it might be our enemy. Even pacifists like Albert Einstein saw the military prospects of discoveries in atomic physics. This history (like Robert Oppenheimer's) makes one pessimistic about the future of AI combined with capitalists. Even if we can see it coming, does that make it impossible for us to avoid it?

What is to be done?

Better start doing your own thinking about it.


Related links:

Robocolleges, Artificial Intelligence, and the Dehumanization of Higher Education

The Growth of "RoboColleges" and "Robostudents"

The Higher Education Assembly Line

Academic Capitalism and the next phase of the College Meltdown

The Tragedy of Human Capital Theory in Higher Education

One Fascism or Two?: The Reemergence of "Fascism(s)" in US Higher Education

A People's History of Higher Education in the US?


Monday, December 30, 2024

2025 Will Be Wild!

2025 promises to be a disruptive year in higher education and society, not just in DC but across the US. While many can now see two demographic downturns, worsening climate conditions, and a Department of Education in transition, there are other, less predictable and lesser-known trends and developments that we hope to cover at the Higher Education Inquirer.

The Trump Economy

Folks are expecting a booming economy in 2025. Crypto and AI mania, along with tax cuts and deregulation, mean that corporate profits should be enormous. The Roaring 2020s will be historic for the US, just as the 1920s were, with little time and thought spent on long-range issues such as climate change and environmental destruction, economic inequality, or the potential for an economic crash.  

A Pyramid, Two Cliffs, a Wall and a Door  

HEI has been reporting on enrollment declines since 2016. Smaller numbers of younger people and large numbers of elderly Baby Boomers, with their health and disability concerns, spell trouble ahead for states that may not consider higher education a priority. We'll have to see how Republican promises of mass deportations turn out, but the threats alone could be chaotic. There will also be controversies over the Trump/Musk plan to increase the number of H-1B visas.

The Shakeup at ED

With Linda McMahon at the helm of the Department of Education, we should expect more deregulation, more cuts, and less student loan debt relief. Mike Rounds has introduced a Senate bill to close ED, but the bill does not appear likely to pass. Diversity, Equity, and Inclusion (DEI) efforts may take a hit. However, online K12 education, robocolleges, and surviving online program managers could thrive in the short run.

Student Loan Debt 

Student loan debt is expected to rise again in 2025. After a brief respite from 2020 to late 2024, during which some borrowers received debt forgiveness, untold millions will be expected to make payments they may not be able to afford. How this problem affects an otherwise booming economy has not received much media attention.

Policies Against Diversity, Equity, and Inclusion

This semester at highly selective institutions, Black first-year student enrollment dropped by 16.9 percent. At MIT, the percentage of Black students decreased from 15 percent to 5 percent. At Harvard Law School, the number of Black law students has been cut by more than half.  Florida, Texas, Alabama, Iowa and Utah have banned diversity, equity and inclusion (DEI) offices at public universities. Idaho, Indiana and Kansas have prohibited colleges from requiring diversity statements in hiring and admissions. The resistance so far has been limited.

Failing Schools and Strategic Partnerships 

People should expect more colleges to fail in the coming months and years, with the possibility that the number of closures could accelerate. Small religious schools are particularly vulnerable. Colleges may further privatize their operations to save money and make money in an increasingly competitive market.

Campus Protests and Mass Surveillance

Protests may be limited by fear of persecution, even though there are a number of legitimate issues to protest, including human-induced climate change, genocide in Palestine, mass deportations, and the resurgence of white supremacy. Things could change if conditions become so extreme that a critical mass is willing to sacrifice. Other issues, such as the growing class war, could bubble up. But mass surveillance and stricter campus policies have been put in place at elite and name-brand schools to reduce the odds of conflict and disruption.

The Legitimization of Robocollege Credentials    

Online higher education has become mainstream despite questions about its efficacy. Billions of dollars will be spent on ads for robocolleges. Religious robocolleges like Liberty University and Grand Canyon University should continue to grow while more traditional religious schools continue to shrink. Southern New Hampshire University, Purdue Global, and Arizona Global will continue to enroll folks with limited federal oversight. Adult students at this point are still willing to take on debt, especially if it leads to job promotions where an advanced credential is needed.


Apollo Global Management is still working to unload the University of Phoenix. The sale of the school to the Idaho Board of Education or some other state organization remains in question.

AI and Cheating 

AI will continue to affect society, promising to add more jobs and threatening to take others.  One less visible way AI affects society is in academic cheating.  As long as there have been grades and competition, students have cheated.  But now it's become an industry. Even the concept of academic dishonesty has changed over the years. One could argue that cheating has been normalized, as Derek Newton of the Cheat Sheet has chronicled. Academic research can also be mass produced with AI.   

Under the Radar

A number of schools, companies, and related organizations have flown under the radar, but that could change. This includes Maximus and other student loan servicers, Guild Education, EducationDynamics, South University, Ambow Education, National American University, Perdoceo, DeVry University, and Adtalem.

Related links:

Survival of the Fittest

The Coming Boom 

The Roaring 2020s and America's Move to the Right

Austerity and Disruption

Dozens of Religious Schools Under Department of Education Heightened Cash Monitoring

Shall we all pretend we didn't see it coming, again?: higher education, climate change, climate refugees, and climate denial by elites

The US Working-Class Depression: "Let's all pretend we couldn't see it coming."

Tracking Higher Ed’s Dismantling of DEI (Erin Gretzinger, Maggie Hicks, Christa Dutton, and Jasper Smith, Chronicle of Higher Education). 

Monday, September 25, 2023

Art Institutes Close. Students May Be Eligible for Student Loan Forgiveness.

The Art Institutes (Ai) is closing its doors this Friday, September 30. Ai has locations in Miami and Tampa (FL), Atlanta (GA), Austin and Houston (TX), and Virginia Beach (VA). About 2,000 students are affected. The Art Institutes website provides closed school information.


The Art Institutes chain had a storied history, starting in Pittsburgh, Pennsylvania in 1921 and growing to 50 locations by 2010. Its boom was the result of intensive profit-making in the higher education business in the 1990s and early 2000s. Goldman Sachs was a key contributor to its explosive growth.

Ai's decade-long decline was part of a wave of for-profit colleges facing increased federal scrutiny for low graduation rates, high levels of student loan debt, and declining enrollment. Unlike Corinthian Colleges (2015), ITT Tech (2016), Westwood College (2016), and Virginia College (2018), the Art Institutes survived with government assistance--but with fewer than ten campuses.

Art Institute Students 

Students from the Art Institutes may transfer to other schools, but many of their credits may not be accepted by other institutions. Consumers should also be extremely wary of the schools they plan to transfer to.

Students would normally be allowed to have their student loans forgiven through a process called Closed School Discharge. But that avenue for remedy has been paused. Present and former students, however, may be able to have their student loan debt relieved through Borrower Defense to Repayment if they can prove that they were defrauded. 

Borrower Defense-Sweet vs Cardona is a Facebook group for people who have already succeeded in getting their student loan money returned and for others still working on claims. The group has more than 14,000 members.

Thursday, October 10, 2024

Labor, Big Tech, and A.I.: The Big Picture (CUNY School of Labor and Urban Studies)



Wednesday, October 30, 2024

1:00pm - 2:30pm

Lunch will be served. Free and open to all.

25 West 43rd Street, 18th floor, New York, NY 10036

*In-person* only in Midtown Manhattan.

REGISTER:

https://slucuny.swoogo.com/30October2024/register

Join us for a conversation with Alex N. Press, staff writer at Jacobin magazine and Edward Ongweso Jr., senior researcher at Security in Context and a co-host of the podcast This Machine Kills; moderated by New Labor Forum Editor-at-Large Micah Uetricht.

The discussion will address major issues confronting the labor movement with the development and use of artificial intelligence, surveillance, automation of work generally, and the rise of Big Tech’s control over large segments of the U.S. workforce. This conversation is the first in what will be an ongoing series focusing on the impact of Big Tech and AI on the labor movement and strategies for organizing to build worker power.

Presented in collaboration with New Labor Forum (NLF), this program connects to the fall 2024 issue of NLF, which features the special section, “Labor and the Uncertain Future of Artificial Intelligence,” and includes the article, “How the U.S. Labor Movement Is Confronting A.I.,” by Alex N. Press.

Speaker Bios:

Edward Ongweso Jr. is a senior researcher at Security in Context and a co-host of This Machine Kills, a podcast about the political economy of technology. His work has appeared in The Guardian, Baffler, Logic(s), Nation, Dissent, Vice, and elsewhere.

Alex N. Press is a staff writer at Jacobin magazine. Her writing has appeared in New Labor Forum, the New York Times, the Washington Post, and the Nation, among other places, and she is currently writing her first book, What We Will: How American Labor Woke Up.

Micah Uetricht is Editor-at-Large of New Labor Forum, a national labor journal produced by the Murphy Institute at CUNY School of Labor and Urban Studies and host of SLU’s podcast Reinventing Solidarity. Uetricht is also the editor of Jacobin and the author of two books: Strike for America: Chicago Teachers Against Austerity; and Bigger than Bernie: How We Go from the Sanders Campaign to Democratic Socialism (co-authored by Meagan Day).

REGISTER:

https://slucuny.swoogo.com/30October2024/register

Thursday, December 19, 2024

AI and the Mass Production of Academic Research (Ethan Mollick)

Two researchers used AI to generate 288 complete academic finance papers predicting stock returns, complete with plausible theoretical frameworks and citations. Each paper looks legitimate and follows academic conventions. They did this to show how easy it now is to mass-produce seemingly credible research. It is a warning that industrialized academic paper generation is becoming reality. The future arrived faster than we expected, and academia is not ready.



Friday, September 29, 2023

2U-edX crash exposes the latest wave of edugrift

2U, a Lanham, Maryland-based edtech company and the parent company of edX, is facing layoffs of an estimated 200 to 400 workers--a significant number for a company that employs only a few thousand--amid more rumors that the company is for sale. While the pain of the firings may be consequential for those experiencing them, the pain of those the company has damaged, mostly striving middle-class consumers and their families, may be worse.

2U's problems are not new. The Higher Education Inquirer first reported on the beginning of the company's meltdown in October 2019. In July 2022, 2U announced layoffs as it changed its business model (again) and the US Department of Education scrutinized the company's grad school offerings.

2U began in 2008 as an online program manager (OPM), one of a few companies offering edtech services that required large amounts of capital and labor. It expanded through the acquisition of other edtech firms, Trilogy Education Services (2019) and edX (2021). edX is an education platform created by Harvard and MIT as a massive open online course (MOOC) provider; as part of 2U, it now concentrates on selling a number of elite and brand-name tech bootcamps.

In 2022 and 2023, the Wall Street Journal (Lisa Bannon), Chronicle of Higher Education (Mike Vasquez), and USA Today (Chris Quintana) investigated 2U after a few US senators sounded the alarm about consumers being fleeced by 2U and other OPMs. 

With 2U's reputation in shambles and layoffs ahead, the parent company wrapped itself in the more respectable edX brand. Byju's, an Indian edtech firm, was said to be looking at 2U or Chegg as a possible acquisition (Byju's is now facing its own problems).

Concentrating on growth for years, then acquisition, then consolidation and rebranding, 2U has never generated an annual profit--and that trend doesn't appear to be changing. 

Earlier this year we listed 2U, Chegg, Coursera, and Guild Education as part of the EdTech Meltdown. 

Unlike the prior wave of for-profit college failures of Corinthian Colleges, ITT Tech, Education Management Corporation, and others that hurt working-class student debtors, 2U has collaborated with elite universities, targeting mostly middle-class folks for advanced degrees and certificates with elite brand names such as USC and UC Berkeley. Credentials that frequently are not worth the debt. Credentials that often did not lead to better paying jobs. Credentials that burden (and sometimes crush) consumers financially with private loans from Sallie Mae and others.

edX's website advertises coding, data analytics, cybersecurity, and AI bootcamps from a number of name brands: Ohio State University, Columbia University, University of Texas, Harvard University, Michigan State University, University of Denver, Southern Methodist University, University of Minnesota, University of Central Florida, Arizona State University, Northwestern University, Rice University, the University of North Carolina, and UC-Irvine.   

  • Ohio State University AI Bootcamp $11,745
  • University of Texas Coding Bootcamp $12,495
  • Berkeley Extension Coding Bootcamp $13,495
  • University of Pennsylvania Cybersecurity Bootcamp $13,995
  • Columbia University Data Analytics Bootcamp $14,745 

It's not clear how well managed the programs are and how much these schools are involved in instruction and career guidance. However, edX claims that with their bootcamp certificates, graduates will "gain access to more than 260 employers--including half of the Fortune 100--seeking skilled bootcamp graduates."

While the targets of for-profit colleges and 2U may have been different, their approaches were similar: sell a dream to consumers that often does not materialize. Spend tens of millions on targeted (and sometimes misleading) advertising and enrollment. Keep the confidence game going as long as it will last. But that may not be much longer.

In April 2023, 2U filed a lawsuit against the US Department of Education to avoid further government oversight. A familiar defensive strategy in the for-profit college business.

There is much we don't know about how significant the damage has been to those who bought the 2U story and spent tens of thousands on elite degrees and certificates, but it must be significant. Most US families do not have that kind of money to spend on something that doesn't result in financial gains.  

Recent reviews of edX on TrustPilot have been scathing. And social media have been brutal on 2U, Trilogy, and EdX. Reddit, for example, has posts like "The dirty truth about edX/Trilogy Boot Camps." In a more recent post about edX, there was a flurry of negative reviews.


In 2016, we wrote "When college choice is a fraud." At that time we were focusing on the tough choices that working-class people have deciding between their local community college or a for-profit career school. Little did we know that the education business was already moving its way up the food chain and that edtech companies like 2U would be engaging in the latest form of edugrift.

Related links:

2U Virus Expands College Meltdown to Elite Universities (2019)

Buyer Beware: Servicemembers, Veterans, and Families Need to Be On Guard with College and Career Choices (2021)

College Meltdown 2.1 (2022)

EdTech Meltdown (2023)  

Erica Gallagher Speaks Out About 2U's Shady Practices at Department of Education Virtual Listening Meeting (2023)

"Edugrift" by J.D. Suenram (2020)

When college choice is a fraud (2016)

Saturday, August 3, 2024

Higher Education, Technology, and A Growing Social Anxiety

The Era We Are In

We are living in a neoliberal/libertarian era filled with technological change, emotional and behavioral change, and social change. It is an era resulting in alienation (disconnection/isolation) for the working class and anomie (lawlessness) among elites and those who serve them. We are simultaneously moving forward with technology and backward with human values and principles. Elites are reestablishing a more brutal world, hearkening back to previous centuries--a world the Higher Education Inquirer has been observing and documenting since 2016. No wonder folks of the working class and middle class are anxious.

Manufactured College Mania

For years, authorities such as the New York Federal Reserve expressed the notion (or perhaps myth) that higher education was an imperative for young folks. They said that the wealth premium for college graduates was a million dollars over the course of a lifetime--ignoring the fact that a large percentage of people who started college never graduated--and that tens of millions of consumers and their families were drowning in student loan debt. 

2U, Guild Education, and a number of online robocolleges reflected the neoliberal promise of higher education and online technology to improve social mobility.  The mainstream media were largely complicit with these higher ed schemes. 

2U brought advanced degrees and certificates to the masses, using brand names such as Harvard, MIT, Yale, USC, University of North Carolina, and the University of Texas to promote the expensive credentials that did not work for many consumers. 

Guild Education brought educational opportunities to folks at Walmart, Target, Macy's, and other Fortune 500 companies that would be replacing their workers with robotics, AI, and other technologies. But the educational opportunities were for credentials from subprime online schools like Purdue University Global. Few workers took the bait.

As 2U files for bankruptcy, it leaves a number of debt holders holding the bag: more than $500M owed to Wilmington Trust, and $30M owed to other vendors and clients, including Guild Education and a number of elite universities. Guild Education is still alive but, like 2U, has had to fire a quarter of its workers, even downsizing its name to Guild, as investor money dries up. It continues to spend money on its image as a Team USA sponsor.

The online robocolleges (including Liberty University, Grand Canyon University, University of Phoenix, Purdue University Global, and University of Arizona Global)  brought adult education and hope to the masses, especially those who were underemployed. In many cases, it was false hope, as they also brought insurmountable student debt to American consumers. Billions and billions in debt that cannot be repaid, now considered toxic assets to the US government. 

Along the way there have been important detractors in popular culture, especially on the right. Conservative radio celebrity Dave Ramsey railed against irresponsible folks carrying lots of debt, including student loan debt. He was not wrong, but he did not implicate those who preyed on student consumers. On the left, the Debt Collective also railed against student loan debt, long before the right did, but they were often ignored or marginalized.

Adapting to a Brutal System

The system works for elites and some of those who serve them, but not for others, including some of the middle class. Good jobs once found at the end of the education pipeline have been replaced by 12-hour shifts, 60-hour work weeks, bullsh*t jobs, and gig work.

Working-class Americans are living shorter lives, lives in some cases made worse not so much by lack of education, but by the destruction of union jobs, by social media, and by other intended and unintended consequences of technology and neoliberalism. Millions of folks, working class and some middle class, who have invested in higher education and carry overwhelming debt and fading job prospects, feel like they have been lied to.

We also have lives made more sedentary and solitary by technology. Lives made more hectic and less tolerable. Inequality making lives too easy for those with privilege and lives too difficult for the working class to manage. Lives managed by having fewer relationships and fewer children. Many smartly choosing not to bring children into this new world. All of this manufactured by technology and human greed.  

The College Dream is Over...for the Working Class

There are two competing messages about higher education: the first, that college brings opportunity and wealth; the second, that higher education may bring debt and misery. The truth is, these messages are aimed at two different groups: brand-name schools and student loans are pushed on the more ambitious of the middle and working classes, while a lesser form of education is pushed on the struggling working class.

In 2020, Gary Roth said that the college dream was over. Yet the socially manufactured college mania continues, flooding the internet with ads for college and college loans, as social realities point to a future with fewer good and meaningful jobs even for those with degrees. Higher education will continue to work for some, but should every consumer, especially among the struggling working class, believe the message is for them? 

Related links:

More than half of college grads are stuck in jobs that don't require degrees (msn.com)

AI-ROBOT CAPITALISTS WILL DESTROY THE HUMAN ECONOMY (Randall Collins)

Edtech Meltdown 

Guild Education: Enablers of Anti-Union Corporations and Subprime College Programs

2U Declares Chapter 11 Bankruptcy. Will Anyone Else Name All The Elite Universities That Were Complicit?

College Mania!: An Open Letter to the NY Fed (2019)

"Let's all pretend we couldn't see it coming": The US Working-Class Depression (2020)

The College Dream is Over (Gary Roth, 2020)