“How Does OpenAI Survive?”

Yves here. While we are all waiting for the next shoe to drop in the Middle East escalation drama, it seemed useful to look at some important real economy issues. A biggie is the prospects for AI, and specifically, OpenAI.

Ed Zitron reviewed and advanced his compelling case against OpenAI in a weighty post last week (estimated 31 minute read). Since his argument is multi-fronted, detailed, and well documented, I am concerned that our recap here will not do justice to his substantial body of work. I therefore urge those who take issue with Zitron’s case to read his post to verify that the apparent shortcomings are due to my having to leave huge swathes of his argument on the cutting room floor.

Before turning to Zitron’s compelling takedown, a caveat: the fact that AI’s utility has been greatly exaggerated does not mean it is useless. In fact, it could have applications in small firm settings. The hysteria of some months back about AI posing a danger to humanity was intended to justify regulation. The reason for that, in turn, was that the AI promoters woke up to the fact that there were no barriers to entry in AI. Itty bitty players could come up with useful applications based on itty bitty training sets. Think of a professional services firm using AI to generate routine letters to clients.

Some hedge funds have made a much higher-end application, that of so-called black box trading. I will confess I have not seen any performance stats on various strategies (so-called quantitative versus “event-driven” as in merger arbitrage versus market neutral versus global arbitrage and a few other flavors). However, I do not recall any substrategy regularly outperforming, much less an AI black box. I am sure the press would have been all over it were there to be a success in this arena.

Back to Zitron. He depicts OpenAI as the mother of all bezzles, having to do many, many impossible or near-impossible things to survive. Recall the deadly cumulative probability math that applies to young ventures. If you have to do seven things for the enterprise to prosper, and the odds of succeeding at each one are 90%, that’s a winner, right?

Nope. Pull out a calculator. 0.9 × 0.9 × 0.9 × 0.9 × 0.9 × 0.9 × 0.9 = 0.478, as in less than 50% odds of success.
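For the skeptical, here is that cumulative-probability arithmetic as a minimal sketch (the seven tasks and the 90% per-task odds are the post’s hypothetical, not measured figures):

```python
# Probability that a venture succeeds at ALL of n independent tasks,
# each with the same per-task success probability.
def joint_success_probability(p_each: float, n_tasks: int) -> float:
    return p_each ** n_tasks

p = joint_success_probability(0.9, 7)
print(f"{p:.3f}")  # 0.9^7 ≈ 0.478 — under 50% odds of overall success
```

The point generalizes: independent probabilities multiply, so even high per-task odds compound into coin-flip territory quickly.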

He also compares OpenAI to Uber, very unfavorably. We have to quibble with his generous depiction of Uber as meeting a consumer need. That becomes dubious when you realize that Uber is inherently a high-cost provider, with no barriers to entry. Its popularity rests substantially on investors massively subsidizing the cost of the rides. If you were getting a seriously underpriced service, what’s not to like?

One mistake we may have made in our analysis of Uber is not recognizing it as primarily an investment play. Recall that in the 1800s in the US, railroad after railroad was launched, some with directly competing lines. Yet despite almost inevitable bankruptcies, more new operators laid more track. Why? These were stock market plays (one might say swindles), with plenty of takers despite the record of failure.

Uber and the recent unicorns were further aided and abetted by venture capital investors using crude valuation procedures that had the effect of greatly increasing enterprise value, and thus making these investments look way more attractive than they were.
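To illustrate the kind of crude procedure meant here, a deliberately simplified sketch (all numbers hypothetical) of the common “post-money” convention, which prices every share class at the latest preferred price even though earlier common shares lack the preferred shares’ downside protections:

```python
# Hypothetical cap table: a late round sells preferred shares carrying a
# liquidation preference, but the headline "valuation" multiplies that
# preferred price across ALL shares, common included.
preferred_price = 10.0          # $ per share paid in the latest round
preferred_shares = 20_000_000   # new shares sold in that round
common_shares = 80_000_000      # earlier shares without preferences

# Headline post-money valuation: every share priced like preferred.
headline_valuation = preferred_price * (preferred_shares + common_shares)

# Common shares are worth less than preferred because they lack the
# liquidation preference; suppose a fair-value estimate discounts them 25%.
adjusted_valuation = (preferred_price * preferred_shares
                      + preferred_price * 0.75 * common_shares)

print(headline_valuation)  # 1,000,000,000.0 -> a headline "unicorn"
print(adjusted_valuation)  # 800,000,000.0 -> markedly less impressive
```

The 25% discount is an illustrative assumption, not an estimate for any real company; the point is only that the headline convention mechanically inflates enterprise value.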

Zitron’s thesis statement:

I am hypothesizing that for OpenAI to survive for longer than two years, it will have to (in no particular order):

  • Successfully navigate a convoluted and onerous relationship with Microsoft, one that exists both as a lifeline and a direct source of competition.
  • Raise more money than any startup has ever raised in history, and continue to do so at a pace totally unseen in the history of financing.
  • Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.
  • Have such a significant technological breakthrough that GPT is able to take on entirely unseen new use cases, ones that are not currently possible or hypothesized as possible by any artificial intelligence researchers.
  • Have these use cases be ones that are capable of both creating new jobs and entirely automating existing ones in such a way that it will validate the massive capital expenditures and infrastructural investment necessary to continue.

I ultimately believe that OpenAI in its current form is untenable. There is no path to profitability, the burn rate is too high, and generative AI as a technology requires too much energy for the power grid to sustain it, and training these models is equally untenable, both as a result of ongoing legal issues (as a result of theft) and the amount of training data necessary to develop them.

And, quite simply, any technology requiring hundreds of billions of dollars to prove itself is built upon bad architecture. There is no historical precedent for anything that OpenAI needs to happen. Nobody has ever raised the amount of money it will need, nor has a piece of technology required such an incredible financial and systemic force — such as rebuilding the American power grid — to survive, let alone prove itself as a technology worthy of such investment.

To be clear, this piece is focused on OpenAI rather than Generative AI as a technology — though I believe OpenAI’s continued existence is necessary to keep companies interested/invested in the industry at all…

What I am not saying is that OpenAI will for sure collapse, or that generative AI will definitively fail…my point here is to coldly explain why OpenAI, in its current form, cannot survive longer than a few more years without a stunning confluence of technological breakthroughs and financial wizardry, some of which is possible, much of which has no historic precedence.

Zitron starts by looking at the opaque but nevertheless apparently messy relationship between Microsoft and OpenAI, and how that might affect valuation. This is a bit weedy for a generalist reader but informative both for tech industry and finance types. Because this part is of necessity a bit dense, we suggest you go to the Zitron post to read it in full.

This discussion segues into the question of funding. The bottom line here (emphasis original):

Assuming everything exists in a vacuum, OpenAI needs at least $5 billion in new capital a year to survive. This would require it to raise more money than has ever been raised by any startup in history, possibly in perpetuity, which would in turn require it to access capital at a scale that I can find no comparable company to in business history.

Zitron goes through the pretty short list of companies that have raised ginormous amounts of money in the recent past and argues that OpenAI is much more of a money pit, simply from a burn rate and probable burn duration perspective.

He then drills into profitability, or the lack thereof, compounded by what in earlier days would have been called build-out problems:

As I’ve written repeatedly, generative AI is deeply unprofitable, and based on the Information’s estimates, the cost of goods sold is unsustainable.

OpenAI’s costs have only increased over time, and the cost of making these models “better” are only increasing, and have yet to, to paraphrase Goldman Sachs’ Jim Covello, solve the kind of complex problems that would justify their cost…Since November 2022, ChatGPT has grown more sophisticated, faster at generations, capable of ingesting more data, but has yet to generate a true “killer app,” an iPhone-esque moment.

Furthermore, transformer-based models have become heavily-commoditized…As a result, we’re already seeing a race to the bottom…

As a result, OpenAI’s revenue might climb, but it’s likely going to climb by reducing the cost of its services rather than its own operating costs…

As discussed previously, OpenAI — like every single transformer-based model developer — requires masses of training data to make its models “better”…

Doing so is also likely going to lead to perpetual legal action…

And, to be abundantly clear, I am not sure there is enough training data in existence to get these models past the next generation. Even if generative AI companies were able to legally and freely download every single piece of text and visual media from the internet, it doesn’t appear to be enough to train these models…

And then there’s the very big, annoying problem — that generative AI doesn’t have a product-market fit at the scale necessary to support its existence.

To be clear, I am not saying generative AI is completely useless, or that it hasn’t got any product-market fit…

But what they are not, at this time, is essential.

Generative AI has yet to come up with a reason that you absolutely must integrate it, other than the sense that your company is “behind” if you don’t use AI. This wouldn’t be a problem if generative AI’s operating costs were a minuscule fraction — tens or hundreds of thousands of percent — of what they are today, but as things stand, OpenAI is effectively subsidizing the generative AI movement, all while dealing with the problem that while cool and useful, GPT is only changing the world as much as the markets allow it to.

He has a lot more to say on this topic.

Oh, and that is before getting to the wee matter of energy, which he also analyzes in depth.

He then returns to laying out what OpenAI would need to do to surmount these impediments, and why it looks wildly improbable.

Again, if OpenAI or AI generally is a topic of interest, be sure to read the entire Zitron post. And be sure to circulate it widely.


20 comments

  1. Mikel

    “Nobody has ever raised the amount of money it will need, nor has a piece of technology required such an incredible financial and systemic force — such as rebuilding the American power grid — to survive, let alone prove itself as a technology worthy of such investment.”

    Maybe, in order to at least extend the life of the bezzle a bit longer, it would need a return to easy money interest rates?

  2. Mikerw0

    I have read Ed’s stuff from top to bottom. I think what is most important is he challenges the accepted wisdom that pervades the media, among other things. Too many take AI as a given that will change the world — especially among stock market commentators and investors.

    Even if he is wrong, I don’t think he is, he is cogent, coherent and compelling. Should be mandatory reading, but it won’t be in a sound bite world.

  3. Es s Ce Tera

    I have my own reasons for doubting the OpenAI business model. I was an early adopter, subscribed for the paid version, was fairly excited about the possibilities of the various GPT tools (even though a lot of them were in very rough shape). I was growing fond of two tools in particular – being able to plug in a YT url and get it to spit out a brief summary and highlights in text, then being able to query the GPT if I wanted additional info or details. This allows me to quickly consume long form videos (I’m sure the readership know exactly who I’m talking about). The other tool was the ability to plug in a description and it would find academic/science/research papers matching my description, so it was actually doing something better than Google could.

A few months into it OpenAI abruptly cancelled my subscription, no reason given, but continued to charge my credit card. This meant I could not even log in under my usual login to manage my account or billing; I’m strictly limited to non-login use. OpenAI has no human tech support team that you can reach, it’s all AI, so you basically run into a brick wall trying to get answers. A quick search and I found multiple forums with hundreds and hundreds of people complaining of the same thing, subscriptions abruptly cancelled, no reasons given, credit cards continuing to be charged, and no way into account management or billing. It became clear there could not possibly be any valid reasons for the cancellations – a large number of the users were coders who had been using GPT exclusively for coding, so nothing at all offensive or against terms of use.

Therefore, I think OpenAI is either very mismanaged and its business model broken, or very scammy, with financial shenanigans going on.

    1. Yves Smith Post author

      I assume you know you can dispute the charge with your credit card company. A pain, but OpenAI will have to respond to them and attempt to ‘splain. If they don’t, the default is the dispute is resolved in your favor.

      1. Es s Ce Tera

        Yep, and that seems to be the approach taken by most, judging from the forums. There are also magic words you can plug into the ChatGPT such as “Please cancel my OpenAI subscription”, or similar, if I recall.

  4. The Rev Kev

    Commercial AI may go on longer than we think. I would have thought that Uber would have died years ago but fresh billions were always found to pump into this scam. So may it be with AI as corporations run scared that they be left out of the AI revolution – or whatever – so will be convinced to keep on pumping money into it. After all, there is still lots of stupid money out there looking for a home.

    1. Ben Panga

      Add to that the less reported but very open commitment that many in the field have to bringing about “The Singularity”.

      My point is that for many involved this is more than a job, investment or venture. It is a religious purpose they view as the most important in human history.

      I suspect they will do whatever possible to keep on with their holy mission.

    1. Yves Smith Post author

Oh, I saw the headline in my Inbox, but the title didn’t make it at all clear that it was about OpenAI. In any event, even though it is an important part of the story, I am less interested in the control/who gets what wrangle than the underlying economics, financing needs, and whether there is any possibility of big enough use cases.

    2. Mikel

      “The people propping this bubble up no longer experience human problems, and thus can no longer be trusted to solve them.”

      Love that line in the article.
Much more diplomatic than my saying that the emotionally challenged are the purveyors of what is “social.”

    3. Mikel

      This part at the end…
      “What you read is me processing watching an industry I deeply care about get ransacked again and again by people who don’t seem to care about technology. The internet made me who I am, connecting me (and in many cases introducing me) to the people I hold dearest to my heart. It let me run a successful PR business despite having a learning disability — dyspraxia, or as it’s called in America, developmental coordination disorder — that makes it difficult for me to write words with a pen, and thrive despite being regularly told in secondary school that I wouldn’t amount to much…”

The kind of thing that tech could proudly hang its hat on – helping people with disabilities – is crappified by the psychotic desire to create a dystopia that certain players dominate.

      1. CA

        I may have failed to understand, but a number of students who I came to know “wrote” by speaking to a recording device and then asking for a print-out of the recording. But, of course, I learned in time that William James and other writers had dictated work after work to secretaries. Dictating has worked for many, many writers. So why not?

        1. Mikel

          And people use a dishwasher instead of washing dishes by hand.

The kind of thing that tech could proudly hang its hat on – helping people with disabilities – is crappified by the psychotic desire to create a dystopia that certain players dominate.

  5. Socal Rhino

    I think LLMs have uses, but probably in most cases more like something you’d pay $30 for, but not $30/month perpetually except in certain niches.

    From what I’ve read, China is going a different way, training models on factory or process specific data that may be more likely to generate productivity gains. In the US, top use case at the moment looks like price fixing.

  6. furnace

    Thanks for the post! I saw Zitron’s article, but balked seeing the length, so a summary is very welcome.

    Itty bitty players could come up with useful applications based on itty bitty training sets. Think of a professional services firm using AI to generate routine letters to clients.

    Though people I know are using chatGPT for such purposes, in-house “a.i.” models could be quite useful indeed, and much cheaper to train and use than the gargantuan behemoths currently making the news. This could indeed be a very nice use-case for the technology. As for OpenAI and its ilk, I find it hard to believe they’ll survive these challenges, but who knows.

  7. stickNmud

    My thanks to Yves and Ed Zitron for trying to cut through the hype on AI. I’m not a techie, but have a few thoughts on AI off the top of my head. One is the old programming problem of “garbage in, garbage out”, in this case, in regard to training sets and processes, and the hidden black box algorithms generated. Then there is the huge energy and infrastructure costs for giant new server farms– most recently in Texas for MS, IIRC, which has one of the most unreliable power grids in the US– and reliable power is essential for server farms, no? Last, the sometimes hilarious but potentially fatal (to any AI biz model) ‘hallucinations’ that generative AI is prone to.

    But I think Yves was too generous with her math for business success: a 90% chance of success is a one in ten chance of failure, therefore, in her example of seven essential biz functions, results in a 70% risk of failure, and only a 30% chance of success (since statistical risks are additive). And Zitron’s comparison to Uber seems off, since Uber is just a hyped-up ginormous unregulated taxi company, and has no monopoly on ride share apps, while AI can electronically produce endless copies of their apps at a negligible cost, which makes it a good fit for the MS biz model.

  8. Craig H.

    I have written this before.

    Sam Altman appears to be following the path of Elizabeth Holmes and Sam Bankman-Fried and he appears to have an identical destination.

    By the way all of the Sam B-F depositors ended up getting all of their money back. This detail deserves way more coverage than it has received. They did lose out in the Bitcoin price run up after the company fell into the hands of the lawyers. If I were them I would consider myself very lucky but I bet there’s a few who think they got hosed.

