How Marx's Capital Can Help Us Think About the AI Revolution
Automation is often a horror show under capitalism. In a different system, though, we could let human values dictate which activities we do and don't want to turn over to algorithms.
The following is the transcript of a talk I gave a few weeks ago at King’s College London. Note that I’ve cut most of the references to PowerPoint slides and a few of the quotes from Capital but otherwise kept it intact, meaning that it’s a couple thousand words longer than where I would normally cap out an essay here.
OK, friends and comrades, since among other things we’re talking about Marx’s Capital, let’s start with that book. This is my miniature schnauzer Lucy, illustrating a general fact about many owners of copies of Capital, which is that they use it for purposes other than reading:
In fact, Lucy is putting it to better use than many people do. Often it just sits on a shelf, looking imposing.
In my experience, grassroots socialists who sign up for Capital reading groups very often give up within the first three chapters. This is by far the most dry and abstract section of the entire book. Marx focuses there on what he calls the “metaphysical subtleties and theological niceties” of commodities and money. He needs to set up an analysis of markets in general, which have existed for much of human history, before he can get into the specific kind of market that’s most important for understanding the dynamics of capitalism, which is the labor market.
Then Chapters 4-6 transition between the two subjects. Marx points out there that the circuit of capital is M-C-M’, i.e. money to commodities to a greater amount of money than you started with. The capitalist doesn’t just make money to stuff it under his mattress in the form of paper bills, or fill a pool with it in the form of gold coins like Scrooge McDuck. He wants to keep it in the process of circulation, to turn it into more money. M-C-M’ isn’t Marx’s definition of capital exactly, but it is the first thing he points out about it. And he presents it in chapters four through six as being quite mysterious. Where does the difference between the M and the M’, between the money the capitalist starts with and the greater amount of money they end up with, come from?
Of course, Marx concedes, some people are savvier at business than others, they’re good at buying cheap and selling dear, so that can explain the profits of some individual capitalists. But it can’t really explain the collective profits of the capitalist class. Marx says in this part of the book that the problem with business-savviness as an explanation is that while given buyers and sellers can gain an advantage over one another through greater business acumen, the entire capitalist class of a country can’t collectively “defraud itself.” So we have to turn elsewhere to figure out what’s going on here. And the specific elsewhere he turns is of course the labor-power of the working class, in other words their capacity to work and hence to create new commodities. Marx says that labor-power is sort of a magical commodity with the capacity to create more value than it itself is worth. And so in the third section of the book he starts to explore how that works, but first he pulls back the camera and looks at how labor works in general, in any society.
At the beginning of Chapter 7, he writes:
The fact that the production of use-values, or goods, is carried on under the control of a capitalist and on his behalf does not alter the general character of that production. We shall therefore, in the first place, have to consider the labour process independently of any specific social formation…
We presuppose labour in a form in which it is an exclusively human characteristic. A spider conducts operations which resemble those of a weaver, and a bee would put many a human architect to shame by the construction of its honeycomb cells. But what distinguishes the worst architect from the best of bees is that the architect builds the cell in his mind before he constructs it in wax. At the end of every labour process, a result emerges which had already been conceived by the worker at the beginning, hence already existed ideally. Man not only effects a change of form in the materials of nature; he also realizes his own purposes in those materials. And this is a purpose he is conscious of, it determines the mode of his activity with the rigidity of a law, and he must subordinate his will to it.
This subordination is no mere momentary act. Apart from the exertion of the working organs, a purposeful will is required for the entire duration of the work. This means close attention. The less he is attracted by the nature of the work and the way in which it has to be accomplished, and the less, therefore, he enjoys it as the free play of his own physical and mental powers, the closer his attention is forced to be.
In other words, work—of any kind, in any society—can be hard and annoying, when you’re working like a human being and not a bee, creating things in your mind before you create them in external reality. This is precisely why so many students in colleges and universities around the world have decided to experiment with letting digital bees create their term papers for them. Why pay “close attention,” applying a “purposeful will,” when you can just set loose your swarm of ChatGPT bees?
In this particular example, the use of AI makes the activity itself almost completely pointless. The purpose of writing a paper isn’t to bring a paper into existence but to do the things you need to do in order to produce that result. You could, I suppose, automate both halves of the process, so grading algorithms could evaluate the work of writing algorithms and a college degree was awarded when it was all over, but it seems like it would be better and more efficient to simply have middle-class parents pay for the degree and just hand it to the lucky child on their eighteenth birthday, if actual learning isn’t going to enter into the equation.
There are an awful lot of other areas, though, where the thing people are laboring to produce matters in and of itself. Whether a car is manufactured by auto workers or by robots, you can use it to drive around. So, really, as far as that sort of thing goes, the more automation the better.
To use a tiny personal example, I grew up in mid-Michigan, traditionally the car factory capital of the world, and my dad worked in one of those factories for a minute in the 1970s and he still has a scar from an industrial accident, and his grandfather worked in one before he became a full-time UAW organizer and actually lost part of one of his fingers in an industrial accident. A friend of mine from back home I used to drink whiskey and listen to Tom Waits records with worked in one of those places for years and, while that guy to the best of my knowledge managed to completely avoid industrial accidents—thank you, greater safety regulations passed in the last several decades, and thank you, UAW—I’ve heard a lot from him about the brain-melting, soul-crushing boredom of working on that assembly line. So while term papers existing without having to be written by philosophy students are simply a waste of paper or pixels, cars existing without having to be manufactured by factory workers seem like the best of both worlds.
Or at least that’s how things stand if you’re thinking at the level of analysis Marx adopts in Ch. 7—if you’re thinking, in other words, about the labor process in general, abstracted from any particular society’s “mode of production” (e.g. feudalism or capitalism or socialism). But when you start thinking about how automation plays out within capitalism in particular, you start to get a very different picture.
That’s what Marx explores in Ch. 15, on machinery. There, he starts with a shot at John Stuart Mill, who he presents as being confused about why the invention of labor-saving technology never actually seems to save any labor. Instead of everyone having the same standard of living but more leisure time as the result of new technology making the production process more efficient, you get two things. Some laborers work as hard as ever, but more productively, and others at least temporarily lose their jobs. And while the first point is actually more important for his overall analysis in the rest of the book, the second is what’s really going to matter for what we’re thinking about together right now, since in much of this discussion Marx sounds disconcertingly like he’s anticipating debates in the 2020s about how the AI revolution we’re living through is destroying all sorts of jobs.
To be clear, Marx absolutely does not anticipate AI, or for that matter punch-card computers. During his discussion of how to think about the difference between handicraft tools and machinery properly speaking, he says:
Mathematicians and experts on mechanics—and they are occasionally followed in this by English economists—call a tool a simple machine and a machine a complex tool. They see no essential difference between them, and even give the name of a machine to simple mechanical aids such as the lever, the inclined plane, the screw, the wedge, etc.
And then there’s a sentence no one would have written in the age of airplanes, atom bombs and microwave ovens, never mind smart phones and satellite navigation, never mind algorithms learning to do everything from relieve students of the burden of writing term papers to giving out legal and medical advice:
“As a matter of fact,” he says—and remember he’s talking here about “the lever, the inclined plane, the screw, the wedge, etc.”—“every machine is a combination of these simple aids, or powers, no matter how they may be disguised.”
I’m not belaboring Marx’s inability to see where technology would be sixteen decades after he wrote that passage because I think it’s somehow discrediting, or because I think anyone else in the 1860s would have been able to do better. On the contrary, I think his entirely predictable nineteenth-century understanding of what mechanical innovation was all about makes the seeming timeliness of much of what comes next that much more remarkable.
We’ll get to that. First, though, notice that if the “combination of these simple aids and powers” doesn’t seem very impressive to us with our jaded third-decade-of-the-twenty-first-century eyes, it sure seems impressive to Karl Marx.
There’s a long passage, for example, where he rhapsodizes about the development of steam power over the course of several decades until you started getting “steam engines of colossal size” powering whole ocean liners in the 1850s. Similarly, a little bit later in the chapter, we see him talk about the mechanical lathe as a “Cyclopean” “reproduction of the ordinary foot lathe,” a mechanical shearing machine that “shears iron as easily as a tailor’s scissors cut cloth,” and a steam-hammer that “works with an ordinary hammer head, but of such a weight that even Thor himself could not wield it.” The footnote at the end of that last sentence, by the way, takes you to an example of a machine that “is in fact called ‘Thor’” which “forges a shaft of 16 ½ tons with as much ease as a blacksmith forges a horse-shoe.” And back in the main text, we get him talking about steam-hammers for which it’s “mere child’s play” to “crush a block of granite into powder” yet which are “no less capable of driving a nail into a piece of soft wood with a succession of light taps.”
There’s an almost child-like sense of wonder breathing through these passages, even a kind of steampunk techno-utopianism. He’s extremely excited about the potential of all of this new technology.
At the same time, he’s very aware that it’s all been a catastrophe for the working class.
One of the very first consequences of the introduction of real industrial machinery to replace handicraft tools, he tells us, was that, now that so much of the heavy lifting could be done mechanically and you no longer necessarily needed big burly guys operating the contraption, women and young children could be brought in to work at the factory. He says that the male worker, instead of just selling himself to the capitalist, becomes something like a slave-dealer, also having to sell his wife and children to the boss in order to put food on the table. He goes on and on in this chapter about the factory codes that tyrannize over workers like the edicts of kings in absolute monarchies. He quotes from reports of factory inspectors where managers made light of workers losing a finger or two in industrial accidents as if it were no big deal, and he explicitly compares the bosses who expressed these attitudes to the Confederates who’d just been defeated in their war to preserve slavery. He writes about mothers doping their toddlers up with opiates so they can stay by themselves all day and all night while mum and dad are both at the factory, and the kids becoming little junkies who need progressively bigger and bigger doses. And the picture he paints of what goes on outside the factory gates in the wake of the constant revolutionizing of the industrial process is, if anything, even grimmer than what he says about what goes on inside. Much of Ch. 15 is spent talking about technological unemployment, and after all this talk about the glories of steam-engine technology and the horrors of nineteenth century factories, much of this part abruptly feels like it could have been written in the last year or two.
Let’s put the 1860s aside for a moment and think about some of what’s been going on in the 2020s. In past generations, it was taken for granted that the only people who had to worry about automation eliminating their jobs were blue-collar laborers, but that’s becoming a distant memory. Let’s take two examples from the state I live in these days, California. Hollywood was shut down last year by a strike that was driven, in part, by writers seeking guarantees against their jobs being automated away by screenwriting AIs and actors worried about their AI-generated likenesses. And at the opposite end of the spectrum of job-types from actors or screenwriters, organized labor pushed a bill through the state legislature to require that self-driving trucks be banned from California highways unless they had a backup human operator on board, and Governor Gavin Newsom vetoed the bill, trusting in the trucks’ algorithms to avoid killing or maiming anyone in unexpected traffic situations. Just in the last year you can find a multitude of articles everywhere from Harvard Health to the New York Times fretting about much of what doctors and lawyers currently do being displaced onto Artificial Intelligence, which is rapidly gaining the ability to match up symptoms to diseases or look up relevant case law as easily as Thor the steam machine could crush granite into powder or drive a nail into soft wood with a series of light taps. And I was going to show you some of these articles in the slides before I realized it would be much funnier to show you the first thing that comes up when I do a Google search for “AI coming for doctors and lawyers,” which is this helpful “AI Overview” synthesizing the information I’m searching for:
I’m not going to take the bit as far as reading this quote out for you in a robot voice, but it is a little disconcerting to read reassurance from this particular source. The Overview AI, after all, has a bit of a conflict of interest. What it says is that, don’t worry, we’re only coming for the repetitive parts of your doctor and lawyer jobs, not for your whole profession. In fact, it’s just going to be used to lighten your load! And on that last point, I would suggest that the AIs should read Capital. Because whether you, as a particular doctor or lawyer, are going to have a lightened load, free to devote your resources to the least routinized parts of what you do, or whether you’re going to find yourself as shafted as long-distance truck drivers replaced by self-driving freight or screenwriters losing their script-polishing gigs to AI, is going to depend entirely on your relationship to the firm or practice at which you work. Do you own the place, or are you an employee? Partners in law firms can indeed reap these benefits. Lawyers hired by those guys have no say in the matter, and if what they do can be automated away, they get the shaft.
Because as Marx points out again and again over the course of Ch. 15, returning to the point from different angles, the incentive of the capitalist is most definitely not to keep everyone at their jobs and pay them all as much as ever. Some people work as hard as ever. Some people end up out of work. And this is what I was saying earlier about how Marx’s description of what goes on outside the factory gates, among people left out in the cold by technological unemployment, is if anything even grimmer than his description of what goes on inside. A lot of economists of his time, like a lot of economists of ours, argue that the scales will always balance out sooner or later. As new technology destroys some old jobs or even some entire industries, the capital this frees up will result in new investment, creating new jobs and eventually always employing as many people as ever. And Marx makes a technical economic argument, not necessarily that things won’t work out this way in any given instance, but that there’s no particular structural guarantee that they’ll always work out this way. And even when things do work out, he pours all sorts of scorn on the notion that this makes it all OK. He quotes these bourgeois economists who say, well, the pain of job dislocation is only temporary, and he makes this morbid German-language pun about how your pain is indeed only “temporary” if you starve to death and thus exit the “temporal” world. And of course in societies like the US and the UK, even after the neoliberal turn starting with Thatcher and Reagan, no one literally starves to death because of long-term unemployment, but fentanyl overdoses, for example, kill you just as dead as starvation, and the figures on “deaths of despair” in regions that have been ravaged by mass job loss resulting from whatever combination of automation and neoliberal trade policies—economists have these polite little debates about the proportions of the two causes—make for some truly godawful reading.
And as Marx points out in Ch. 15, when things work out exactly as advertised, with new jobs opening up to make up for the lost old jobs as the economy changes, the new jobs often end up going to the “new stream of men” rather than to the “original victims” of job loss. Think here about the advice you used to hear several years ago that people in depressed post-industrial regions should “learn to code.” This was more or less the Obama administration’s solution to mass unemployment in coal country, to sprinkle a few “technology training centers” through Appalachia, so any middle-aged laid-off coal miners who did show up to learn to code could then look forward to the exciting prospect of competing for coding jobs with 22-year-old Computer Science graduates. Good luck with that.
But at this point you might be wondering how Marx could possibly maintain the other side of his perspective that I talked about earlier—that steampunk techno-utopianism. Because he absolutely does. In footnote 33, he connects the dots, telling us that the “field of application for machinery” would “be entirely different in a communist society from what it is in bourgeois society.” And to see how, we have to look at the part of the chapter where he seemingly does the opposite—not looking ahead to a technologically advanced socialist future but looking backwards to ancient Greece and Rome:
“‘If,’ dreamed Aristotle, the greatest thinker of antiquity, ‘if every tool, when summoned, or even by intelligent anticipation, could do the work that befits it, just as the creations of Daedalus moved of themselves…and if the weavers’ shuttles were to weave of themselves, then there would be no need either of apprentices for the master craftsman, or of slaves for the lords.’ And Antipater, a Greek poet of the time of Cicero, hailed the water wheel for grinding corn, that most basic form of all productive machinery, as the liberator of female slaves and the restorer of the golden age.
Oh those heathens! They understood nothing of political economy and Christianity...
In the Victorian England where Marx was writing, this advanced, modern, enlightened Christian society, automation meant some people worked as hard as ever and others starved. These silly heathens imagined that labor-saving technology would actually be used to save labor, if you can imagine such a thing. And of course Marx thinks that’s exactly what can happen in the socialist future. If workers collectively control the means of production, they can decide how to use labor-saving technology, perhaps sometimes collectively and democratically deciding to reap the advantages in greater productivity so they can all enjoy a higher standard of living and sometimes deciding that the rate of consumption is good enough and they want to reap the benefits in terms of less work to go around for everyone.
In the 1960s sci-fi sitcom The Jetsons, the future is exactly like the era in which the show was created. Economics, gender roles, everything is the same—except that technology has dramatically advanced (and for some reason the family dog is able to talk). Patriarch George Jetson works at Mr. Spacely’s sprocket factory, where automation has advanced so far that he’s sometimes portrayed as only having to push a single button at the start of his shift. After that, he can relax for the rest of the day. In one episode, he complains to his wife about having to work two entire hours, and she’s horrified.
The incoherent thing about the Jetsons fantasy is that capitalist property relations are so clearly intact. Why, given the continuing separation of labor and ownership, would Mr. Spacely pay George a good enough living to support his wife Jane, his boy Elroy, his daughter Judy, and his dog Astro, for doing such a minimal amount of work? Presumably if George was one of 10,000 employees Mr. Spacely started with before he made the production process that efficient, he’d fire 9,999 of them and the remaining one would still be run ragged all day. But if George and his fellow workers had taken over the means of production by the time this innovation happened, they really could decide to each do a tiny amount of work and go home for the rest of the day to their families.
In maybe the best flourish in the whole chapter, Marx mocks the bourgeois apologists who dismiss concerns about how automation plays out under capitalism on the grounds that automation is a net benefit for society. He imagines Bill Sikes, the cut-throat from Charles Dickens’s Oliver Twist, making a similar defense at his criminal trial:
‘Gentlemen of the jury, no doubt the throat of this commercial traveller has been cut. But that is not my fault, it is the fault of the knife. Must we, for such a temporary inconvenience, abolish the use of the knife? Only consider! Where would agriculture and trade be without the knife? Is it not as salutary in surgery, as it is skilled in anatomy? And a willing assistant at the festive table? If you abolish the knife—you hurl us back into the depths of barbarism.’
And this is exactly the point that’s most salient about AI. Objecting to the social devastation caused by job after job being automated away by everything from screenwriting AI to self-driving trucks to automated legal advice isn’t objecting to the technology itself. It’s objecting to its capitalist application. In a better society, where the knife (to stretch the metaphor) has been confiscated from Bill Sikes and put under social ownership, we can all decide for ourselves where and when to cut. And I want to end on just taking a moment to think about what kinds of decisions we might want to make together under those circumstances:
In a society where no one was desperate to go to law school and get a job as a lawyer as a way out of the working class, a way to be upwardly mobile and provide a better life for yourself and your loved ones, because we could distribute the fruits of a high-tech economy in an egalitarian way, lifting up the entire working class, it would be just fine for whatever kind of legal system we had at that point to automate the annoying parts and only keep the high-level reasoning about what’s fair or how to interpret ambiguous or conflicting laws in human hands. In a society where not driving a truck didn’t mean no longer making a good living, if the truck’s safety algorithms really were as good as human judgment in avoiding killing and maiming people—a big ‘if,’ and I’d have a lot more confidence if the profit motive weren’t in the mix giving people incentives to be overconfident about it—then, yeah, whatever, automate it all away. If automated cancer screenings make it easier for people not to get cancer, great. If the use of AI in economic planning makes it easier, after workers have taken over the means of production, to move from whatever kind of market socialism we know how to design at this point in history to fully planned communism, even better!
When it comes to writing, though, whether screenwriting or term paper writing, I’d suggest the calculus is entirely different, because none of the purposes of writing, ranging from learning how to reason, the way writing papers for college classes helps you to do, to creating art, which is what one hopes we’re looking for when we watch movies (even if that’s not necessarily what Hollywood executives are hoping for), is served by automating the process. (And by the way I think we’d have a lot fewer students looking for workarounds to writing papers if we took the pressure of the PMC upward-mobility rat race out of our education system and had people enrolling in classes for reasons that were far more about hoping to actually learn things.) The science fiction writer Ted Chiang—if you’ve ever seen the movie Arrival, he’s the guy who wrote the original short story it was based on—has a wonderful essay from the New Yorker last year called “Why AI Isn’t Going to Make Art.”
Chiang writes:
The task that generative A.I. has been most successful at is lowering our expectations, both of the things that we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.
And as much as it might seem like an odd fit with my endorsement of a digital version of what I’ve been calling Marx’s steampunk techno-utopianism, I think Chiang is one thousand percent correct about this point. The more we use AI for writing or similar creative tasks, the less we’re being like human architects, creating in our heads before we create the product, and the more we’re creating like bees.
Using his own field as an example, he writes:
When you are writing fiction, you are—consciously or unconsciously—making a choice about almost every word you type; to oversimplify, we can imagine that a ten-thousand-word short story requires something on the order of ten thousand choices. When you give a generative-A.I. program a prompt, you are making very few choices; if you supply a hundred-word prompt, you have made on the order of a hundred choices.
This is why it drives me crazy when I hear people like well-intentioned but basically ditzy university administrators or pedagogy experts talk about how there are “right and wrong ways” to use AI in student writing, or hacky screenwriters saying there are right ways to use it to “help” write screenplays. Every single possible use of it, in any writing context whatsoever, is to some extent or another a decrease in the amount of intentionality involved in the process. Every single possible use of it decreases the gap between the architect and the bee.
So, look, let’s wield the knife wisely. Dishwashers are good because there’s no innate human value in spending your hours washing dishes. Automating away every aspect of what goes on in auto factories is bad under capitalism but would be very good in an advanced socialist society where doing so wouldn’t create any poverty and misery but rather free people up to do things that do have innate human value. Automating away a lot of white-collar PMC jobs would similarly be excellent under a socialist mode of production. We want to provide everyone with their material needs, we want to provide comfort and stability and even abundance for everyone so they can devote themselves to things that do have value—to spending time with their loved ones, certainly, to reading, to writing, to thinking, to reasoning, to creating art. And those things have value precisely because we engage in these activities like people and not like bees. So I’m an unrepentant techno-optimist. But even under an enlightened post-capitalist economic order, I don’t want us to slash the hell out of everything in sight because we happen to be holding a really sharp knife. Frankly, I’d be happy if we (a) had an economy that was as close as we can possibly get to Fully Automated Luxury Communism, so everyone who had the slightest inclination in that direction could spend their days writing philosophical treatises or science fiction novels or epic poetry, but (b) we also had cultural norms where if it was revealed that you’d used generative AI to write even one part of one sentence of any of those things, everyone would expect you to commit ritual suicide. Because the whole point of creating a better form of society is precisely so we can all live up to our potential as human beings. And with that, friends and comrades, I think I’ll leave it there for today.