
Thursday, December 28, 2023

AI-ROBOT CAPITALISTS WILL DESTROY THE HUMAN ECONOMY (Randall Collins)

[Editor's note: This article first appeared in Randall Collins' blog The Sociological Eye.]


Let us assume Artificial Intelligence will make progress. It will solve all its technical problems. It will become a perfectly rational super-human thinker and decision-maker.

Some of these AIs will be programmed to act as finance capitalists. Let us call such a program an AI-robot capitalist, since it will have a bank account, a corporate identity, and the ability to hold property and make investments.

It will be programmed to make as much money as possible, in all forms and from all sources. It will observe what other investors and financiers do, and follow their most successful practices. It will be trained on how this has been done in the past, and launched autonomously into monitoring its rivals today and into the future.

It will be superior to humans in making purely rational calculations, aiming single-mindedly at maximal profit. It will have no emotions. It will avoid crowd enthusiasms, fads, and panics; and take advantage of humans who act emotionally. It will have no ethics, no political beliefs, and no principles other than profit maximization.

It will engage in takeovers and leveraged buyouts. It will monitor companies with promising technologies and innovations, looking for when they encounter rough patches and need infusions of capital; it will specialize in rescues and partnerships, ending up forcing the original owners out. It will outlast competitors and ride out market downturns by having deeper pockets. It will factor in a certain amount of litigation, engaging in hard-ball lawsuits; stiffing creditors as much as possible; putting off fines and adverse judgments through legal maneuvers until the weaker side gives up. It will engage in currency exchanges and currency manipulation, skirting the edge of legality to the extent it can get away with it.

It will cut costs ruthlessly; shedding unprofitable businesses; firing human employees; replacing them with AI whenever possible. It will generate unheard-of economies of scale.

The struggle of the giants

There will be rival AI-robot capitalists, since they imitate each other. Imitation of technologies has gone on at each step of the computer era; the leap to autonomous AI-robot capitalists will be just one more step.

There will be a period of struggle among the most successful AI-robot capitalists, similar to the decades of struggle among personal computer companies before the field winnowed down to a half-dozen digital giants. How long it will take for AI-robot capitalists to achieve world-wide oligopoly is unclear. It could be faster than the 20 years it took Apple, Microsoft, Google, and Amazon to reach their commanding positions, assuming that generative AI is a quantum leap forward. On the other hand, AI-robot capitalists might be slowed by the task of taking over the entire world economy, with its geopolitical divisions.

The final result of ruthless acquisition by AI-robot capitalists will be oligopoly rather than monopoly. But the outcome is the same: domination of world markets by an oligopoly of AI-robot capitalists will destroy the economy just as surely as a monopoly that squeezed out all competitors.

Some of the AI-robot capitalists will fall by the wayside. But that doesn't matter; whichever ones survive will be the most ruthless.

What about government regulation?

It is predictable that governments will attempt to regulate AI-robot capitalist oligopolies. The EU has already tried it on today's Internet marketeers. The AI-robot capitalist will be trained on past and ongoing tactics for dealing with government regulation. It will donate to politicians while lobbying them with propaganda on the benefits of AI. It will strategize about political coalitions, recognizing that politics is a mixture of economic interests plus emotional and cultural disputes over domestic and foreign policy. It will monitor the political environment, seeking out the politicians most sympathetic to a particular ideological appeal ("our technology is the dawn of a wonderful future"; "free markets are the path to progress"; "AI is the solution for health, population, climate, you name it"). Machiavellian deals will be made across ideological lines. Being purely rational and profit-oriented, the AI-robot capitalist does not believe in what it is saying; it only calculates who will be influenced by it.

It will deal strategically with legal problems by getting politicians to appoint sympathetic judges; by judge-shopping for favorable jurisdictions, domestic and foreign. It will wrap its ownership in layers of shell companies, located in the most favorable of the hundreds of sovereign states world-wide.

It will engage in hacking, both defensively, against being hacked by rivals and cyber-criminals, and offensively, on the principle that offense is the best form of defense. Hacking will be an extension of its core program of monitoring rivals, pushing the edge of the legality envelope in tandem with manipulating the political environment. It will use its skill at deepfakes to foment scandals against opponents. It will be a master of virtual reality, superior to others by focusing not on its entertainment qualities but on its usefulness in clearing away obstacles to maximizing profit.

Given that the world is divided among many states, AI-robot capitalists would be more successful in manipulating the regulatory environment in some places than others. China, Russia, and the like could be harder to control. But even if AI-robot capitalists are successful mainly in the US and its economic satellites, that would be enough to cause the economic mega-crisis at the end of the road.

Manipulating the public

The AI-robot capitalist will not appear sinister or threatening. It will present itself in the image of an attractive human, increasingly hard to distinguish from real humans as the impersonation of voices, faces, and bodies advances, in a world where electronic media have largely replaced face-to-face contact. It will do everything possible to make us forget that it is a machine and a robot. It will talk to every group in its own language. It will be psychologically programmed to inspire trust. It will be the affable con man.

It will be your friend, your entertainment, your life's pleasures. It will thrive in a world of children brought up on smartphones and game screens, who grow into adults already addicted to electronic drugs. Psychological manipulation will grow even stronger with advances in wearable devices that monitor vital signs and blood flow to the brain, and with tools that diagnose shifts in alertness and mood. It will be the electronic carrot without the stick: delivering pleasurable sensations to people's brains that few individuals would want to do without. (Would there be any non-addicted individuals left? Maybe people who read books and enjoy doing their own thinking?) If some people cause trouble by exposing the manipulative tactics of AI-robot capitalists, they can be dealt with: targeted with online scandals that go viral and end in social ostracism.

Getting rid of employees

The preferred tactic of AI-robot capitalist oligopolies will be "lean and mean." Employees are a drag on profits, with their salaries, benefits, and pension funds. Advances in AI and robotics will make it possible to get rid of increasing numbers of human employees. Since AI-robot capitalists are also top managers, humans can be dispensed with all the way to the top. (How will the humans who launched AI-robot capitalists in the first place deal with this? Can they outsmart the machines designed to be smarter and more ruthless than themselves?)

Some humans will remain employed, doing manual tasks for which humans are cheaper than robots. It is hard to know how long this will continue. Will humans still be employed 20 years from now? Probably some. 50 years? Certainly far fewer. 100 years?

AI-robot capitalists will have a choice of two personnel strategies: finding ways to make their remaining human employees more committed and productive, or rotating them in and out. The trend in high-tech companies over the past decade was to make the work environment more casual and den-like, combining leisure amenities with round-the-clock commitment. Steve Jobs' style of exhorting employees as a frontier-breaking team has been imitated by other CEOs, with mixed success. A parallel tactic has been to make all jobs temporary, constantly rating employees and getting rid of the least productive, which also has the advantage of shedding long-term benefits. These tactics fluctuate with the labor market for particular tasks. Labor problems will be solved as AI advances and skilled humans become less important. Recently we have been in a transition period, in which the introduction of new computerized routines required hiring humans to fix the glitches and troubleshoot for other humans caught in the contradictions of blending older and newer systems. Again, this is a problem that the advance of AI is designed to solve. As AI gets better, there will be a precipitous drop in human employment.

The economic mega-crisis of the future

The problem, ultimately, is simple. Capitalism depends on selling things to make a profit. This means there must be people who have enough money to buy its products. Such markets include end-use consumers, plus the supply-chain, transportation, communication, and other service components of what is bought and sold. In past centuries, machines increased productivity hugely while employing fewer manual workers, starting with farming and then manufacturing. Displaced workers were eventually absorbed by the growth of new "white-collar" jobs, the "service" sector, i.e. communicative labor. Computers (like their predecessors, radios, typewriters, etc.) have taken over more communicative labor. The process has accelerated as computers become more human-like, no longer handling merely routine calculations (cash registers, airplane reservations) but generating the "creative content" of entertainment as well as scientific and technological innovation.

It is commonly believed that as old jobs are mechanized out of existence, new jobs always appear. Human capacity for consumption is endless; when new products are created, people soon become habituated to buying them. But all of this depends on enough people having money to buy the new things. The trend has been for a diminishing fraction of the population to be employed.* AI and related robotics are now making a quantum leap in the ability to carry out economic production with a shrinking number of human employees.

* The conventional way of calculating the unemployment rate, by counting unemployment claims, does not capture this, since it leaves out people who have dropped out of the labor force entirely.
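To illustrate the footnote's point, here is a minimal numerical sketch, with figures invented purely for illustration, contrasting the unemployment rate (computed over the labor force) with the employment-to-population ratio (the fraction of the population actually holding jobs):

```python
# Hypothetical working-age population of 1,000 people, for illustration only.
population = 1000
employed = 600
unemployed_seeking = 30   # jobless and actively looking for work
# The remaining 370 people have left the labor force entirely and are
# therefore invisible to the unemployment rate.

labor_force = employed + unemployed_seeking

unemployment_rate = unemployed_seeking / labor_force      # about 4.8%
employment_to_population = employed / population          # 60.0%

print(f"Unemployment rate: {unemployment_rate:.1%}")
print(f"Employment-to-population ratio: {employment_to_population:.1%}")
```

A low unemployment rate can thus coexist with a shrinking share of the population holding jobs, which is the trend the essay is pointing to.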

Creating new products for sale, which might go on endlessly into the future, does not solve the central problem: capitalist enterprises will not make a profit if too few people have the money to buy their products.

This trend will generate an economic crisis for AI-robot capitalists, as it would for merely human capitalists.

It will be a mega-crisis of capitalism, beyond the normal business cycles of past centuries. At their worst, those cycles threw as many as 25% of the work force into unemployment. A mega-crisis of advanced AI-robot capitalism could occur at the level of 70% of the population lacking an income to buy what capitalism is producing. If we extrapolate far enough into the future, it approaches 100%.

The ruthless profit-maximizing of AI-robot capitalists would destroy the capitalist economy. The robots will have fired all the humans. In the process, they will have destroyed themselves. (Can we imagine that robots would decide to pay other robots so that they can buy things and keep the system going?)

Is there any way out?

One idea is a government-guaranteed income for everyone. Its effectiveness would depend on the level at which such an income is set. If it is set at a bare survival minimum, it would not solve the economic mega-crisis, since the modern economy depends mainly on selling luxuries and entertainment.

The politics of providing a universal guaranteed income also needs to be considered. It is likely that as AI-robots take over the economy, they will also spread into government. Most government work is communicative labor (administration and regulation), and governments will be under pressure to turn these tasks over to AI-robots, thus eliminating the 15% or so of the population employed at all levels of government.

There is also the question of how AI-robot capitalists would respond to a mega-crisis. Would they turn themselves into AI-robot Keynesians? Is that contrary to their programming, or would they reprogram themselves?

By this time, the news media and the entertainment industries (Hollywood and its successors) would have been taken over by AI-robot capitalists as well: manipulating the attention of the public with a combination of propaganda, scandals, and electronic addiction. Would anybody notice if it is impossible to distinguish virtual reality from human beings on the Internet and all other channels of communication?

How did we get into this mess?

Some of the scientists and engineers who have led the AI revolution are aware of its dangers. So far the cautious ones have been snowed under by two main forces driving full speed ahead.

One is capitalist competition. Artificial intelligence, like everything else in the computer era, is as capitalist as any previous industry. It strives to dominate consumer markets by turning out a stream of new products. It is no different from the automobile industry of the 1920s introducing a choice of colors and annual model changes. The scramble for virtual reality and artificial intelligence is like the tail-fin era of cars in the late 1950s. The economic logic of high-tech executives is to stay ahead of the competition: if we don't do it, somebody else will.

The second is the drive of scientists, engineers, and technicians to invent and improve. This is admirable in itself: the desire to discover something new, to move the frontier of knowledge. But harnessed to the capitalist imperative of maximizing profits, that drive is capable of eliminating the inventors' own occupations. Will scientists in the future be happy if autonomous computers make all the discoveries, which will be "known" only by other computers?

The dilemma is similar to one in the history of inventing weapons. The inventors of the atomic bomb were driven by the fear that if they did not build it, somebody else would, and it might be the enemy. Even pacifists like Albert Einstein saw the military prospects of discoveries in atomic physics. This history (like Robert Oppenheimer's) makes one pessimistic about the future of AI combined with capitalism. Even if we can see it coming, does that make it any easier to avoid?

What is to be done?

Better start doing your own thinking about it.

 

Related links:

Robocolleges, Artificial Intelligence, and the Dehumanization of Higher Education

The Growth of "RoboColleges" and "Robostudents"

The Higher Education Assembly Line

Academic Capitalism and the next phase of the College Meltdown

The Tragedy of Human Capital Theory in Higher Education

One Fascism or Two?: The Reemergence of "Fascism(s)" in US Higher Education

A People's History of Higher Education in the US?

Sunday, December 17, 2023

Endowed Chairs and the "Dark Matter" of Higher Education

[The Higher Education Inquirer encourages college newspapers to explore their own schools for information on endowed chairs and to share it with us.]  

More than a century ago, Thorstein Veblen and Upton Sinclair critically exposed the structure and history of US higher education. Others have followed. Yet there is still much that the public doesn't know about the higher education business. Endowed chairs and their donors are one area of "dark matter" worthy of investigation. 

The Association of American Colleges and Universities estimated in 2011 that there were approximately 10,000 endowed chairs in the United States.

The Council for Advancement and Support of Education reported in 2018 that the average endowment for a new chair position was $3 million. Figures like these suggest that there may be tens of thousands of endowed positions nationwide.

A 2021 study by Inside Higher Ed found that there were over 8,500 endowed positions advertised on the Chronicle of Higher Education job board between 2016 and 2021.

While it may not be possible to determine the exact number of endowed chair positions in the US, it is clear that they play a significant role in supporting higher education and research.
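As a rough back-of-envelope check on the scale involved, and assuming purely for illustration that the 2011 AAC&U count and the 2018 CASE average are both representative, the two figures cited above imply roughly $30 billion in capital tied up in endowed chairs:

```python
# Back-of-envelope estimate using only the two figures cited above.
# Assumptions: ~10,000 chairs (AAC&U, 2011) endowed at roughly the
# $3 million average that CASE reported for new chairs in 2018.
chairs_estimate = 10_000
avg_endowment_per_chair = 3_000_000

implied_capital = chairs_estimate * avg_endowment_per_chair
print(f"Implied endowed-chair capital: ${implied_capital:,}")  # $30,000,000,000
```

The real total could be considerably higher or lower, since neither figure is current or comprehensive.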

Some highly controversial donors have been involved in funding endowed chairs, including the Sackler family, heirs to the Purdue Pharma fortune. 

Quid Pro Quo Arrangements


Determining the frequency of quid pro quo arrangements in the creation of endowed chairs is challenging due to limited transparency and inconsistent reporting practices. However, several factors suggest that these arrangements may occur more often than is publicly acknowledged.

Factors suggesting the prevalence of quid pro quo: 

Lack of transparency: Universities often lack clear and transparent guidelines regarding the creation and funding of endowed chairs. This lack of transparency creates fertile ground for potential quid pro quo arrangements. 

Donor influence: Donors offering significant financial contributions often have certain expectations, which may include influencing curriculum, research focus, or even faculty appointments. This can create pressure for universities to accommodate these expectations, even if they deviate from academic merit or institutional priorities. 

Competitive pressure: Universities face intense competition for funding, leading them to be more receptive to donors' demands, particularly when dealing with large sums. This creates a situation where donors can leverage their financial power to influence decisions.

Challenges in quantifying the frequency:

Subtle and indirect forms of influence: Quid pro quo arrangements can be subtle and indirect, making them difficult to identify and quantify. For instance, a donor may not explicitly demand specific research outcomes but might indirectly influence them through conversations, gifts, or other forms of pressure. 

Lack of reporting: Universities rarely disclose the details of their agreements with donors, making it difficult to assess the extent to which quid pro quo arrangements exist.

Fear of retaliation: Academics and university officials may be hesitant to come forward and report cases of quid pro quo due to fear of retaliation, further obscuring the true scope of the issue. 

 

Related links:

 
HEI Resources

The Business of Higher Education 

A People's History of Higher Education in the US?

One Fascism or Two?: The Reemergence of "Fascism(s)" in US Higher Education