Jul. 17th, 2019

Yesterday, I had an interview at a company that specializes in niche logistics. Niche logistics is a fancy way of saying "There are special handling needs for the material." This could be something as fundamental as food: it spoils, so you have time limitations, or as complicated as nuclear materials, or simply something so big it needs special permitting to travel, but you still need to get it there "just in time."

My first interviewer was late, so I sat down in a sort of "casual sitting nook for the devs" that, you know, the developers never actually sit in. There was a bookshelf. I was assiduously avoiding looking at my phone so as not to be completely rude. The books were all on the fundamentals of artificial intelligence and data science, and one of them was the classic Applied Linear Algebra. I picked it up, started to thumb through it, and remembered something. I put down the book, pulled out my Nook, pulled up chapter one of Francis Spufford's Red Plenty, and read it. Spufford's book starts with the discoverer of linear programming having his great insight, and the entirety of the book is a refutation both of his belief in what linear programming was capable of achieving and of the Soviet Union's belief that it was applicable to human beings. Having learned his name, I quickly read his Wikipedia page.

The HR guy came around. "Still waiting?" he asked. I said that I was, my first interviewer hadn't shown up yet. He made a comment about how I'd found something to read. I gestured toward the bookshelf and said, "It's really interesting that these are all old books. There's nothing new about AI and data science. It's just that we have enough CPUs and memory to do it all with."

One of the data science engineers, sitting in the open-bay workspace, looked over and said, "I like this guy already." Which set the tone for the day. Soon thereafter, the guy who was supposed to conduct my first interview showed up.

Later, lunch was in-house at the company, and I sat with a couple of engineers, including the data science guy. Having gotten the gist of what they do, I asked him how the company avoids the Kantorovich paradox: AI systems can only tell you what to do with the resources you have, not the resources you want. The USSR's bid to cybernetize its economic base failed for a lot of reasons, but one big reason, aside from the fact that there were no computers fast enough to actually run Kantorovich's math as written, was that even if they could have run it, it couldn't adapt to new resources, to new human desires created by the availability of those resources, or to new human innovations that created whole new categories of resource. (No one knew human beings wanted an unending stream of musical entertainment until radio was invented, after all.)

We had a lively conversation about how "the algorithm" (Kantorovich's basic theory and all its consequential descendants) is what we actually run, but "The Algorithm," with the capital letters that most C-suites put on it when going on CNBC or Bloomberg, is more of a business process, with salespeople, inventories of on-the-ground resources like trucks and warehouses, and the ever-changing market landscape of human desires, requiring a team of developers to constantly update both the data sets and the strings of constraints used to analyze them into a short-term outcome.

All that came from gluing together a few vague things I knew, and yet it worked okay. It wasn't entirely bullshit, and I freely admitted that I knew far less about the actual underlying math than about the economic analysis that Spufford provided and that I could dredge up from my own college major.

(Title explained here, because Dreamwidth isn't very helpful about YouTube videos. Especially trenchant because I, like Comicus, am still looking for work.)
The movie Moneyball is about how the internal process of storytelling used to pick baseball players for a team was replaced by a more mechanistic process: analyzing players' measurable performance and creating an optimized team based on pure statistics. The irony, of course, is that Moneyball itself is a story about Billy Beane and the (sometimes fictionalized) conflicts he had to go through in order to convince players, fans, and the money people that he was on the right track. The Oakland A's still hold the American League record for most consecutive wins.

We are a storytelling species. I think it's literally written into our DNA. We tell stories to help make sense of the world, put it in order, and satisfy our sense of justice. Stories help us teach ourselves and others how the world works, or how the world should work. There's even the possibility that our consciousness is rich enough, subtle enough, deep enough that it actually relies on narrative to maintain a coherent sense of self.

The practice of moneyball, the use of cold statistics to build and optimize sports teams, removed the storytelling from inside the game, where the men who make money live. And if they no longer use post-hoc stories to explain the viability of the team, or to justify the expense of hiring certain players, they still tell each other stories about their own wisdom, skill, and intelligence in using the system, and so maintain a sense of their own life-stream that has coherence from beginning to end.

I've been interviewing for work, and a lot (and I mean a lot) of the jobs are in logistics optimization. Basically, while the Internet has disintermediated some things, the process of getting products from the farm to your kitchen table, or turning crude oil into the shoe you're currently wearing, in a way that's highly customizable, fast, less energy-intensive, and environmentally friendly, still involves a lot of work. The question is, "How do we get rid of warehouses?"

The math for this is called a system of constraints, and the technique for solving it is linear programming. A system of constraints says: "Well, we have x, y, and z. They work with efficiencies a, b, and c, and we want outcomes i, j, and k. What is the optimal combination of x, y, and z to get that outcome?" Clever mathematicians can take all of these and turn them into a matrix of linear equations, you know, a ⋅ x = i, with no exponents: no squares, no cubes, none of that. Each one, on a graph, describes a straight line. Even if you have millions of these (and yes, real problems end up with thousands, even millions, of these equations), you can find a collection of solutions that satisfies the desired outcome given the input material.
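To make that concrete, here's a minimal sketch using SciPy's linprog solver. The products, resources, and every number in it are invented for illustration; the point is just the shape of the problem: a linear objective, a matrix of linear constraints, and a solver that finds the optimum.

```python
# A toy system of constraints solved as a linear program with SciPy.
# All quantities here are invented for illustration.
from scipy.optimize import linprog

# Two products, x and y, earning profits of 3 and 5 per unit.
# linprog minimizes, so we negate the coefficients to maximize.
c = [-3, -5]

# Constraints, expressed as A_ub @ [x, y] <= b_ub:
#   1x + 2y <= 14   (machine-hours available)
#  -3x + 1y <=  0   (y's output can't outrun x's, 3:1)
#   1x - 1y <=  2   (warehouse balance between the two)
A_ub = [[ 1,  2],
        [-3,  1],
        [ 1, -1]]
b_ub = [14, 0, 2]

# Neither product can be made in negative quantities.
result = linprog(c, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None), (0, None)])

print(result.x)     # optimal production: [6. 4.]
print(-result.fun)  # maximized profit: 38.0
```

A real logistics problem swaps those three rows for thousands of constraints over trucks, warehouses, and delivery windows, but the machinery is exactly the same.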

The math for this was discovered in 1939 by a guy named Leonid Kantorovich. Kantorovich was Russian, yes, and when he showed that it could work in theory, Soviet planners set out to apply it to the entire Soviet economy. And they tried. Without computers, they tried to do exactly what Kantorovich proposed.

They failed. America always outpaced the USSR in development. America relied not on solving systems of constraints to manage economic distribution; it used market signals, and let people allocate their funds as they saw fit. America solved the system of constraints on the only computer big enough to solve it: the collective brains of the American people. Not only that, but the American solution continued to be self-optimizing, because we kept coming up with new things to put into it. Nobody knew humans would love music 24/7 until we invented radio. Nobody really knew just how insatiable our need for storytelling was until Netflix came along. The USSR wasn't ready for rock'n'roll, or for modern computing infrastructure. The Soviet system wasn't actually designed to learn, to renew, to adapt. It could only solve the constraints it understood.

At one of my interviews, while waiting for the technical lead to show up and interview me, I sat with the HR person in the lobby and pointed at the collection of books they had on data science. One of them was a textbook on Kantorovich's theory and the developments in it, many of them by brilliant Russian mathematicians trying to solve the shortcomings in the system. (For details, I strongly recommend Francis Spufford's book Red Plenty, a fictionalized series of stories that starts with Kantorovich's discovery and uses illuminating anecdotes to show how poorly this very cyberpunk desire meshed with a people still recovering from the Leninist overthrow of the Czar. A great essay on this is Cosma Shalizi's "In Soviet Union, Optimization Problem Solves You.") I thumbed through the book and commented to the HR guy, "The funny thing is, all of these books are old. There's nothing new here. What's new is that we finally have enough CPU and memory with which to do it." A software developer who was in the same room looked over at us and said, "I like this guy already."

A corporation is, like the Soviet Union, a command economy. Units within the corporation receive resources "from each according to their ability, to each according to their need." And as in the Soviet Union, if you can't do the "from" part well enough, your unit is quickly terminated; at least in a corporation, you're only fired, not shot or starved out. This trend of using more and more "big algebra" to solve these systems of constraints, allocate resources, and optimize outcomes is what's really going on when executives use the term "Artificial Intelligence."

The problem is that corporations depend on market signals to tell them what to optimize for. But large-scale AI systems have millions of inputs, and it's often notoriously difficult to figure out why a particular recommendation comes out of them.

The other problem is that while some of the market signals are generated by individuals, other market signals are now being generated by AIs. Which means that the output of one AI is the input of another, and vice-versa.

When we, you and I, are put into the machine and made a part of The Google Way, you and I become "fungible linearized assets" to be "allocated" as needed by the corporation. Our purpose becomes short-term, optimized based on our years of experience and previous activity, matrixed with everyone else who kinda sorta looks like us, but on the basis of qualities that each of us, individually, may not even be aware of.

We are a storytelling species. We tell ourselves stories that make our lives seem comprehensible. Who decides what "optimized" means? And when our lives are "optimized" in such a way that we cannot, with our seven-plus-or-minus-two minds, understand why a seven-million-input AI decided something "in the best interests of the optimization," what happens to our sense of self?

I love to make fun of Rod Dreher, but let's give the man his due: for all his faults, and they are faults manifest and legion, his Benedict Option is basically a story about holding the line against a future world where optimization and communication exceed human bandwidth. That's not a bad world to want. There are days when even I understand how badly we may want it.

I do want a world that's comprehensible. The more we turn it over to AIs, the less comprehensible it becomes. The only reason we understand corporations is that they're slow: slow enough that we can, in retrospect, reconstruct the terrible conclusions they often reach. That capacity for retrospection is fading fast.

What do we do then?
