The movie Moneyball is about how the internal process of storytelling used to pick baseball players for a team was replaced by a more mechanistic process of analyzing players by their mechanics and building an optimized team on pure statistics. The irony, of course, is that Moneyball is itself a story, a story about Billy Beane and the (sometimes fictionalized) conflicts he had to go through in order to convince players, fans, and the money people that he was on the right track. The Oakland A's still hold the record for the longest winning streak.
We are a storytelling species. I think it's literally written into our DNA. We tell stories to help make sense of the world, put it in order, and satisfy our sense of justice. Stories help us teach ourselves and others how the world works, or how the world should work. There's even the possibility that our consciousness is rich enough, subtle enough, deep enough that it actually relies on narrative to maintain a coherent sense of self.
The practice of moneyball, the use of cold statistics to build and optimize sports teams, removed the storytelling from inside the game, where the men who make the money live. And if they no longer use post-hoc stories to explain the viability of the team, or to justify the expense of hiring certain players, they still tell each other stories about their own wisdom, skill, and intelligence in using the system, and so maintain a sense of their own life-stream that has coherence from beginning to end.
I've been interviewing for work, and a lot (and I mean a lot) of the jobs are in logistics optimization. Basically, while the Internet has disintermediated some things, getting products from the farm to your kitchen table, or turning crude oil into the shoe you're currently wearing, in a way that's highly customizable, fast, less energy intensive, and environmentally friendly, still takes a lot of work. The question is, "How do we get rid of warehouses?"
The math for this is called a system of constraints. A system of constraints says, "Well, we have x, y, and z. They work with efficiencies a, b, and c, and we want outcomes i, j, and k. What is the optimal combination of x, y, and z to get that outcome?" Clever mathematicians can take all of these and turn them into a matrix of linear equations, you know, a ⋅ x = i, with no exponents: no squared, no cubed, none of that. Each one, on a graph, describes a straight line. Even if you have thousands of these (and yes, these problems do end up with thousands of equations), you can find a collection of solutions that satisfies the desired outcome given the input material.
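To make that concrete, here's a minimal sketch of such a problem, assuming Python with SciPy's linprog solver; the warehouses, stores, costs, and quantities below are invented purely for illustration:

```python
# A toy Kantorovich-style system of constraints: two warehouses supplying
# two stores, minimizing total shipping cost. Every number here is made up.
from scipy.optimize import linprog

# Decision variables: x = [w1->s1, w1->s2, w2->s1, w2->s2] units shipped.
cost = [4, 6, 5, 3]           # shipping cost per unit on each route

# Supply constraints (inequalities): each warehouse ships no more than it holds.
A_ub = [[1, 1, 0, 0],         # warehouse 1's total shipments
        [0, 0, 1, 1]]         # warehouse 2's total shipments
b_ub = [70, 50]               # units available at each warehouse

# Demand constraints (equalities): each store receives exactly what it needs.
A_eq = [[1, 0, 1, 0],         # store 1's total receipts
        [0, 1, 0, 1]]         # store 2's total receipts
b_eq = [60, 40]               # units required at each store

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=[(0, None)] * 4)   # shipments can't be negative

print(result.x)    # optimal units to ship on each route
print(result.fun)  # the minimal total cost
```

Four variables and four constraints is a toy; the real logistics problems behind those job postings have the same shape, just with matrices running to millions of rows and columns.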
The math for this was worked out in 1939 by a guy named Leonid Kantorovich. Kantorovich was Russian, yes, and when he proved that it could work in theory, the entire Soviet economy was eventually turned over to his ideas. And they tried. Without computers, they tried to do exactly what Kantorovich proposed.
They failed. America always outpaced the USSR in development. America didn't rely on solving systems of constraints to manage economic distribution; it used market signals and let people allocate their funds as they saw fit. America ran the system of constraints on the only computer big enough to solve it: the collective brains of the American people. Not only that, but the American solution kept optimizing itself, because we kept coming up with new things to put into it. Nobody knew humans would love music 24/7 until we invented radio. Nobody really knew just how insatiable our need for storytelling was until Netflix came along. The USSR wasn't ready for Rock'n'Roll, or for modern computing infrastructure. The Soviet system wasn't actually designed to learn, to renew, to adapt. It could only solve the constraints it understood.
At one of my interviews, while waiting for the technical lead to show up and interview me, I sat with the HR person in the lobby and pointed at the collection of books they had on data science. One of them was a textbook on Kantorovich's theory and the developments in it, many of them by brilliant Russian mathematicians trying to address the shortcomings in the system. (For details, I strongly recommend Francis Spufford's book Red Plenty, a fictionalized series of stories that starts with Kantorovich's discovery and uses illuminating anecdotes to show how poorly this very cyberpunk desire meshed with a people still recovering from the Leninist overthrow of the Czar. A great essay on this is Cosma Shalizi's In Soviet Union, Optimization Problem Solves You.) I thumbed through the book and commented to the HR person, "The funny thing is, all of these books are old. There's nothing new here. What's new is that we finally have enough CPU and memory with which to do it." A software developer who was in the same room looked over at us and said, "I like this guy already."
A corporation is, like the Soviet Union, a command economy. Units within the corporation receive resources "from each according to their ability, to each according to their need." And, like the Soviet Union, if you couldn't do the "from" part well enough, your unit was quickly terminated. At least in a corporation, you're only fired, not shot or starved out. This trend of using more and more "big algebra" to solve these systems of constraints, deciding how to allocate resources and optimize outcomes, is what's really going on when executives use the term "Artificial Intelligence."
The problem is that corporations depend on market signals to tell them what to optimize for. But large-scale AI systems have millions of inputs, and it's sometimes notoriously difficult to figure out why a particular recommendation comes out of them.
The other problem is that while some of the market signals are generated by individuals, other market signals are now being generated by AIs. Which means that the output of one AI is the input of another, and vice-versa.
When we, you and I, are put into the machine and made a part of The Google Way, you and I become "fungible linearized assets" to be "allocated" as needed by the corporation. Our purpose becomes short-term, optimized based on our years of experience and previous activity, matrixed with everyone else who kinda sorta looks like us, but based on qualities that each of us, individually, may not be aware of.
We are a storytelling species. We tell ourselves stories that make our lives seem comprehensible. So who decides what "optimized" means? And when our lives are "optimized" in such a way that we cannot, with our seven-plus-or-minus-two minds, understand why the seven-million-input AI decided something "in the best interests of the optimization," what happens to our sense of self?
I love to make fun of Rod Dreher, but let's give the man his due: for all his faults, and they are faults manifest and legion, his Benedict Option is basically a story about holding the line against a future world where optimization and communication exceed human bandwidth. That's not a bad world to want. There are days when even I understand how badly we may want it.
I do want a world that's comprehensible. The more of it we turn over to AIs, the less comprehensible it becomes. The only reason we understand corporations at all is that they're slow; slow enough that we can, in retrospect, trace how they arrived at the terrible conclusions they so often reach. That capacity for retrospection is fading fast.
What do we do then?