Good morning! Below, a few thoughts on the anniversary of Large Language Model madness dominating Silicon Valley discourse. If you’re interested in working with me or have thoughts about the newsletter, get in touch.
Up front, I should admit that this is a case of rationalizing an intuition: There’s something about software like ChatGPT, or rather the way people talk about it, that I find distasteful. Writers, of course, have a very specific antagonism to this kind of software, because we think of writing as a process first, and an output second.1
But in the year since OpenAI’s chatty bot debuted, I’ve become less interested in doomer narratives, and more skeptical about whether the benefits of Large Language Models2 outweigh their costs. Not, like, human extinction costs, but will this software actually generate enough money to pay back the investment required to create and operate it?
A common technology journalist error is skipping steps: A decade ago, I wrote about the economic and environmental impact that self-driving cars would have on the world, predicated on what engineers told me about self-driving as a technical challenge: It was something that would be solved with sufficient time and capital. Now, with the embarrassment of Tesla’s “Full Self Driving” on display, and the two companies that do operate self-driving cars, Cruise and Waymo, suffering political and financial distress, those stories seem a bit premature.
Similarly, a lot of “AI” coverage starts with the assumption that its economic and social impact will be vast, but doesn’t quite figure out the intermediate steps between “ChatGPT exists” and “profit.”
By now, it’s obvious that LLMs haven’t destroyed the white collar economy, even if they are chipping away at the lowest-common-denominator writing jobs. I don’t attribute my own recent departure from staff employment to ChatGPT, although I do appreciate the irony.
Still, I can’t find too many examples of LLMs changing the way people do business. (Prove me wrong and send some in.) You see lots more trash content on the web, and I hear from a lot of folks who say they enjoy using LLMs (typically without paying for them) as a better user experience for search, or to access various kinds of technical information. My own continuing experiments with ChatGPT have yet to blow me away, but I’m assured it makes a good research assistant. Online publishers have so far proven unable to deploy LLMs to create content that people want to read or advertise against. The LLM revolution may well be coming, but we can’t assume so without more reporting on the business models that will bring it to fruition.
Right now, we just don’t know much about the unit economics of deploying LLM user interfaces, only that it is quite a bit more costly than traditional services. And that’s before the industry has solved its intellectual property problem, which likely represents years of uncertainty ahead. However it all shakes out, I’m willing to bet the cost of training data for LLMs will be above zero. And as venture fund A16Z points out, “imposing the cost of actual or potential copyright liability on the creators of AI models will either kill or significantly hamper their development.”
I doubt that; before LLMs, most machine learning was used on proprietary data sets—think satellite companies training software to recognize buildings and vehicles in their observation databases, then selling that service.

The company I’m watching most closely (after Microsoft, I suppose) when it comes to LLMs is Salesforce, because automating corporate bureaucracy is kind of its whole thing. The company is integrating OpenAI’s software into its platform (though it’s not clear how much it’s paying as part of the deal, or how many people are being paid to monitor the software). On the most recent earnings call, Salesforce president Brian Millham had this to say:
We've launched Sales GPT and Slack Sales Elevate internally, and our global support team is live with Service GPT, and we're seeing incredible results. We've streamlined our quoting process with automation, eliminating over 200,000 manual approvals so far this year. And since the introduction in September, our AI-driven chatbot has autonomously resolved thousands of employee-related queries without the need for human involvement.
This is fascinating, but I’m so curious about the denominators for those numbers—and what that increase in productivity means for the bottom line.
We're seeing great success with our products and so our customers, which is clearly reflected in the high-level engagement and participation we're seeing in our events.
In general, businesses don’t use high-level engagement as a way to measure success.
But here’s the thing that interests me most—a concrete example from CEO Marc Benioff:
So, as an example, the Copilot is I'm writing an email. So, now my—I'm saying to the Copilot, hey, now can you rewrite this email for me or some—make this 50% shorter or put it into the words of William Shakespeare. That's all possible and sometimes it's a cool party trick. It's a whole different situation when we say, "I want to write an email to this customer about their contract renewal … And I want to write this email that really references the huge value that they receive from our product and their log-in rates. And I also want to emphasize how the success of all the agreements that we have signed with them have impacted them, and that we're able to provide this rich data to the Copilot and through the prompt and the prompt engineering that is able to deliver tremendous value back to the customer. And this data, this customer value will only be provided by companies who have the data. And we are just very fortunate to be a company with a lot of data. And we're getting a lot more data than we've ever had.
Two things occur to me here: First, this is a meta-example where Benioff wants to reference the huge value of Salesforce’s AI, but says this customer value will “only be provided by companies who have the data.” That suggests that the real returns on LLMs will be from companies that own data, not companies that build models.
The second thing is what we’re talking about here—specifically, as in an automated email writing tool that incorporates relevant data, and generally, as in the value of large electronic databases—is the dream of the oughties. Remember “Big Data” and “data is the new oil” from 2012? Everything old is new again.
P.S. I gave ChatGPT the obligatory chance to weigh in on my questions, but since I’m not a paying subscriber, it could only speak from the perspective of January 2022. For the record,
Those are valid concerns. The sustainability of large language models (LLMs) does depend on finding viable business models. OpenAI has been exploring different approaches, such as licensing the technology to businesses, offering premium subscription plans, and providing specialized solutions for industries like customer support or content creation. It's a tricky balance, but if they can strike the right chord between accessibility and value, there's a good chance of financial sustainability.
Go see “American Fiction”
I must recommend the new film “American Fiction,” which comes out nationwide this Friday. You’ve probably seen rave reviews for the adaptation of Percival Everett’s novel Erasure. The reductive plot summary: A serious black writer makes a commercial breakthrough with an anonymously published novel built on prejudice and racial pandering; hijinks ensue. It’s more than that, though.
The conceit of the satire is that mainstream (white) audiences aren’t interested in stories about black people that diverge from stereotypical narratives about rap, drugs and crime. This reflects not only Everett’s novel, but also the experience of the film’s screenwriter and director, Cord Jefferson. Cord (full disclosure, a friend of mine) began his career as a journalist, often writing about the black experience. He found no shortage of editors asking him to weigh in on racism and inequality, but had a harder time telling stories that portrayed black people with agency and fullness of life.
That’s why the structure of “American Fiction” (and presumably Erasure, which I have yet to read) is such an impressive feat. I hope it isn’t a spoiler to tell you that the satire, busily skewering American media’s racial mores, becomes the ironic backdrop to the engrossing drama at the center of the movie—an upper-class black family’s experiences with the universal difficulties of adulthood, sexuality, elder care and death.3 It’s a rare film that can do two things at once so well.
Thanks for reading! We will return two Tuesdays a month in 2024. Please forward this to anyone who might be interested.
APHORISM TIME: “Plans are nothing, planning is everything,” according to Ike Eisenhower, who famously planned some things. Similarly, “clear thinking becomes clear writing: one can't exist without the other,” per William Zinsser.
You’ll notice I’m saying LLMs, not Artificial Intelligence (AI). That’s because LLMs aren’t AI by any definition that isn’t coming out of the marketing department. And the latest research suggests that LLMs are unlikely to achieve the kind of independent insight envisioned in an actual AI, or, as the goalpost-movers now call it, Artificial General Intelligence (AGI).
I’m struggling to come up with other examples of films that comment on their plot this way, maybe “Adaptation”? In the spirit of this column, I asked ChatGPT and it spit out a bunch of real dumb answers.