“How Does OpenAI Survive?”

Yves here. While we are all waiting for the next shoe to drop in the Middle East escalation drama, it seemed useful to look at some important real economy issues. A biggie is the prospects for AI, and specifically, OpenAI.

Ed Zitron reviewed and advanced his compelling case against OpenAI in a weighty post last week (estimated 31 minute read). Since his argument is multi-fronted, detailed, and well documented, I am concerned that our recap here will not do justice to his substantial body of work. I therefore urge those who take issue with Zitron’s case to read his post to verify that any apparent shortcomings are due to my having to leave huge swathes of his argument on the cutting room floor.

Before turning to Zitron’s compelling takedown, a caveat: the fact that AI’s utility has been greatly exaggerated does not mean it is useless. In fact, it could have applications in small firm settings. The hysteria of some months back about AI posing a danger to humanity was ginned up to justify regulation. The reason for that, in turn, was that the AI promoters woke up to the fact that there were no barriers to entry in AI. Itty bitty players could come up with useful applications based on itty bitty training sets. Think of a professional services firm using AI to generate routine letters to clients.

Some hedge funds have pursued a much higher-end application, that of so-called black box trading. I will confess I have not seen any performance stats on the various strategies (so-called quantitative versus “event-driven” as in merger arbitrage, versus market neutral, versus global arbitrage, and a few other flavors). However, I do not recall any substrategy regularly outperforming, much less an AI black box. I am sure the press would have been all over any success in this arena.

Back to Zitron. He depicts OpenAI as the mother of all bezzles, having to do many nearly impossible things to survive. Recall the deadly cumulative probability math that applies to young ventures. If you have to do seven things for the enterprise to prosper, and the odds of succeeding at each one are 90%, that’s a winner, right?

Nope. Pull out a calculator. 0.9 x 0.9 x 0.9 x 0.9 x 0.9 x 0.9 x 0.9 = 0.478, as in less than 50% odds of success.
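For readers who want to check the arithmetic, a minimal sketch in Python (the language is my choice; the math is just repeated multiplication of independent odds):

    # Odds of clearing several independent hurdles,
    # each with a 90% chance of success.
    odds_per_hurdle = 0.9
    hurdles = 7
    print(odds_per_hurdle ** hurdles)  # 0.478..., i.e. less than a coin flip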

He also compares OpenAI to Uber, very unfavorably. We have to quibble with his generous depiction of Uber as meeting a consumer need. That becomes dubious when you realize that Uber is inherently a high-cost provider, with no barriers to entry. Its popularity rests substantially on investors massively subsidizing the cost of the rides. If you were getting a seriously underpriced service, what’s not to like?

One mistake we may have made in our analysis of Uber is not recognizing it as primarily an investment play. Recall that in the 1800s in the US, railroad after railroad was launched, some with directly competing lines. Yet despite almost inevitable bankruptcies, more new operators laid more track. Why? These were stock market plays (one might say swindles), with plenty of takers despite the record of failure.

Uber and the recent unicorns were further aided and abetted by venture capital investors using crude valuation procedures that had the effect of greatly overstating enterprise value, and thus making these investments look far more attractive than they were.

Zitron’s thesis statement:

I am hypothesizing that for OpenAI to survive for longer than two years, it will have to (in no particular order):

  • Successfully navigate a convoluted and onerous relationship with Microsoft, one that exists both as a lifeline and a direct source of competition.
  • Raise more money than any startup has ever raised in history, and continue to do so at a pace totally unseen in the history of financing.
  • Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.
  • Have such a significant technological breakthrough that GPT is able to take on entirely unseen new use cases, ones that are not currently possible or hypothesized as possible by any artificial intelligence researchers.
  • Have these use cases be ones that are capable of both creating new jobs and entirely automating existing ones in such a way that it will validate the massive capital expenditures and infrastructural investment necessary to continue.

I ultimately believe that OpenAI in its current form is untenable. There is no path to profitability, the burn rate is too high, and generative AI as a technology requires too much energy for the power grid to sustain it, and training these models is equally untenable, both as a result of ongoing legal issues (as a result of theft) and the amount of training data necessary to develop them.

And, quite simply, any technology requiring hundreds of billions of dollars to prove itself is built upon bad architecture. There is no historical precedent for anything that OpenAI needs to happen. Nobody has ever raised the amount of money it will need, nor has a piece of technology required such an incredible financial and systemic force — such as rebuilding the American power grid — to survive, let alone prove itself as a technology worthy of such investment.

To be clear, this piece is focused on OpenAI rather than Generative AI as a technology — though I believe OpenAI’s continued existence is necessary to keep companies interested/invested in the industry at all…

What I am not saying is that OpenAI will for sure collapse, or that generative AI will definitively fail…my point here is to coldly explain why OpenAI, in its current form, cannot survive longer than a few more years without a stunning confluence of technological breakthroughs and financial wizardry, some of which is possible, much of which has no historic precedence.

Zitron starts by looking at the opaque but nevertheless apparently messy relationship between Microsoft and OpenAI, and how that might affect valuation. This is a bit weedy for a generalist reader but informative for tech industry and finance types. Because this part is of necessity dense, we suggest you go to the Zitron post to read it in full.

This discussion segues into the question of funding. The bottom line here (emphasis original):

Assuming everything exists in a vacuum, OpenAI needs at least $5 billion in new capital a year to survive. This would require it to raise more money than has ever been raised by any startup in history, possibly in perpetuity, which would in turn require it to access capital at a scale that I can find no comparable company to in business history.

Zitron goes through the pretty short list of companies that have raised ginormous amounts of money in the recent past and argues that OpenAI is much more of a money pit, simply from a burn rate and probable burn duration perspective.

He then drills into profitability, or the lack thereof, compounded by what in earlier days would have been called build-out problems:

As I’ve written repeatedly, generative AI is deeply unprofitable, and based on the Information’s estimates, the cost of goods sold is unsustainable.

OpenAI’s costs have only increased over time, and the cost of making these models “better” are only increasing, and have yet to, to paraphrase Goldman Sachs’ Jim Covello, solve the kind of complex problems that would justify their cost…Since November 2022, ChatGPT has grown more sophisticated, faster at generations, capable of ingesting more data, but has yet to generate a true “killer app,” an iPhone-esque moment.

Furthermore, transformer-based models have become heavily-commoditized…As a result, we’re already seeing a race to the bottom…

As a result, OpenAI’s revenue might climb, but it’s likely going to climb by reducing the cost of its services rather than its own operating costs…

As discussed previously, OpenAI — like every single transformer-based model developer — requires masses of training data to make its models “better”…

Doing so is also likely going to lead to perpetual legal action…

And, to be abundantly clear, I am not sure there is enough training data in existence to get these models past the next generation. Even if generative AI companies were able to legally and freely download every single piece of text and visual media from the internet, it doesn’t appear to be enough to train these models…

And then there’s the very big, annoying problem — that generative AI doesn’t have a product-market fit at the scale necessary to support its existence.

To be clear, I am not saying generative AI is completely useless, or that it hasn’t got any product-market fit…

But what they are not, at this time, is essential.

Generative AI has yet to come up with a reason that you absolutely must integrate it, other than the sense that your company is “behind” if you don’t use AI. This wouldn’t be a problem if generative AI’s operating costs were a minuscule fraction — tens or hundreds of thousands of percent — of what they are today, but as things stand, OpenAI is effectively subsidizing the generative AI movement, all while dealing with the problem that, while cool and useful, GPT is only changing the world as much as the markets allow it to.

He has a lot more to say on this topic.

Oh, and that is before getting to the wee matter of energy, which he also analyzes in depth.

He then returns to laying out what OpenAI would need to do to surmount these impediments, and why it looks wildly improbable.

Again, if OpenAI or AI generally is a topic of interest, be sure to read the entire Zitron post. And be sure to circulate it widely.


57 comments

  1. Mikel

    “Nobody has ever raised the amount of money it will need, nor has a piece of technology required such an incredible financial and systemic force — such as rebuilding the American power grid — to survive, let alone prove itself as a technology worthy of such investment.”

    Maybe, in order to at least extend the life of the bezzle a bit longer, it would need a return to easy money interest rates?

  2. Mikerw0

    I have read Ed’s stuff from top to bottom. I think what is most important is that he challenges the accepted wisdom that pervades the media, among other things. Too many take AI as a given that will change the world — especially among stock market commentators and investors.

    Even if he is wrong — and I don’t think he is — he is cogent, coherent and compelling. Should be mandatory reading, but it won’t be in a sound bite world.

  3. Es s Ce Tera

    I have my own reasons for doubting the OpenAI business model. I was an early adopter, subscribed to the paid version, and was fairly excited about the possibilities of the various GPT tools (even though a lot of them were in very rough shape). I was growing fond of two tools in particular – being able to plug in a YT url and get it to spit out a brief summary and highlights in text, then being able to query the GPT if I wanted additional info or details. This allows me to quickly consume long form videos (I’m sure the readership know exactly who I’m talking about). The other tool was the ability to plug in a description and have it find academic/science/research papers matching my description, so it was actually doing something better than Google could.

    A few months into it OpenAI abruptly cancelled my subscription, no reason given, but continued to charge my credit card. This meant I could not even log in under my usual login to manage my account or billing; I’m strictly limited to non-login. OpenAI has no human tech support team that you can reach, it’s all AI, so you basically run into a brick wall trying to get answers. A quick search and I found multiple forums with hundreds upon hundreds of people complaining of the same thing: subscriptions abruptly cancelled, no reasons given, credit cards continuing to be charged, and no way into account management or billing. It became clear there could not possibly be any valid reasons for the cancellations – a large number of the users were coders who had been using GPT exclusively for coding, so nothing at all offensive or against terms of use.

    Therefore, I think OpenAI is either very mismanaged and its business model broken, or very scammy, with financial shenanigans going on.

    1. Yves Smith Post author

      I assume you know you can dispute the charge with your credit card company. A pain, but OpenAI will have to respond to them and attempt to ‘splain. If they don’t, the default is the dispute is resolved in your favor.

      1. Es s Ce Tera

        Yep, and that seems to be the approach taken by most, judging from the forums. There are also magic words you can plug into the ChatGPT such as “Please cancel my OpenAI subscription”, or similar, if I recall.

    2. Mike

      I use AI to summarize company earnings call transcripts. So far it’s pretty useful and saves time.

    3. Greg Taylor

      I’d wanted to become an early GPT adopter but my application was rejected. They took my phone number and personal information and I never got the $20/month chatGPT access. They claimed they couldn’t serve the demand they were getting. Sounds like that may have been for the best.

  4. The Rev Kev

    Commercial AI may go on longer than we think. I would have thought that Uber would have died years ago, but fresh billions were always found to pump into this scam. So may it be with AI: corporations, running scared of being left out of the AI revolution – or whatever – will be convinced to keep pumping money into it. After all, there is still lots of stupid money out there looking for a home.

    1. Ben Panga

      Add to that the less reported but very open commitment that many in the field have to bringing about “The Singularity”.

      My point is that for many involved this is more than a job, investment or venture. It is a religious purpose they view as the most important in human history.

      I suspect they will do whatever possible to keep on with their holy mission.

      1. Jams O'Donnell

        If by ‘the Singularity’ you are implying some sort of attainment of some kind of ‘consciousness’ by ‘AI’, don’t hold your breath. Computers run on code of some variety – this code ultimately runs on ‘machine code’, which is a succession of ‘1’s and ‘0’s. ‘1’s and ‘0’s cannot attain consciousness. (The same applies to the languages running above machine code – C++, Java or any other variety of programming language – no matter how complex, or how many algorithms they employ, they are just arbitrary symbols, devoid of any meaning outside of their specific, limited, context).

          1. Jams O'Donnell

            Yes – my comment was aimed at ‘them’, not you. Sorry not to have made that clear.

        1. Paul Art

          I think the lip smacking and drooling from the VCs and the Capital class in general is due to their wet fantasies of using AI to reduce labor costs to almost zero. This has been their White Whale for a very, very long time. This rubbish about “consciousness” is bait and hype, which all VCs need for their ventures to succeed. Witness Liz Holmes. They cannot come out and say that the entire plan is to demolish labor – to vanquish it once and for all; ergo they puff pink clouds around the whole thing. As to how insidious the concept is, there are hints in the comments themselves – someone using AI to summarize company earnings call transcripts. It is very like how all of us, no matter how well read, have Uber on our cell phones and use it with various contortions of justification. I wonder if the term ‘consciousness’ refers to a sort of intelligence that comes from the data that the models train on. AI is very good at spotting trends in data which escape human cognizance purely because we are limited computing machines. This can ‘appear’ as some kind of intelligence. However, the data has to be cleaned and prepped and all that, but they don’t proofread it – not terabytes and petabytes of it. This means they cannot really predict the outcome or the answer a model might yield. It is only as good as the data it ingests – ergo, ‘consciousness’. Soon it will be, ‘it is AI to err’.

    2. steppenwolf fetchit

      If pension fund managers are not legally forbidden from investing “their” money in things like AI, crypto, etc., then stupid managers will invest lots of “managee’s” money into such things. That would be a case of stupid managers, not stupid money. The money would be helpless victim money.

    3. .Tom

      Commercial AI is clearly here to stay. The post is about a specific company that’s trying an investor scam something like Theranos times 10 or some multiplier I can’t figure.

      1. c_heale

        Not sure this is true. It doesn’t have a use which can raise a lot of revenue, which given its costs, is what it needs.

        The Metaverse was here to stay too. And that was run by someone who had success with social media. What has Sam Altman done? Nothing that I can see.

    1. Yves Smith Post author

      Oh, I saw the headline in my Inbox, but the title didn’t make at all clear that it was about OpenAI. In any event, even though it is an important part of the story, I am less interested in the control/who gets what wrangle than the underlying economics, financing needs, and whether there is any possibility of big enough use cases.

    2. Mikel

      “The people propping this bubble up no longer experience human problems, and thus can no longer be trusted to solve them.”

      Love that line in the article.
      Much more diplomatic than my saying the emotionally challenged are the purveyors of what is “social.”

    3. Mikel

      This part at the end…
      “What you read is me processing watching an industry I deeply care about get ransacked again and again by people who don’t seem to care about technology. The internet made me who I am, connecting me (and in many cases introducing me) to the people I hold dearest to my heart. It let me run a successful PR business despite having a learning disability — dyspraxia, or as it’s called in America, developmental coordination disorder — that makes it difficult for me to write words with a pen, and thrive despite being regularly told in secondary school that I wouldn’t amount to much…”

      The kind of thing that tech could proudly hang its hat on – helping people with disabilities – is crappified by the psychotic desire to create a dystopia that certain players dominate.

      1. CA

        I may have failed to understand, but a number of students who I came to know “wrote” by speaking to a recording device and then asking for a print-out of the recording. But, of course, I learned in time that William James and other writers had dictated work after work to secretaries. Dictating has worked for many, many writers. So why not?

        1. Mikel

          And people use a dishwasher instead of washing dishes by hand.

          The kind of thing that tech could proudly hang its hat on – helping people with disabilities – is crappified by the psychotic desire to create a dystopia that certain players dominate.

        2. Mikel

          The dictation machines didn’t also censor and surveil.
          The dishwasher…just saved time doing dishes and didn’t want to sell your data.

          1. CA

            “The kind of thing that tech could proudly hang its hat on – helping people with disabilities…”

            I completely agree, completely. Helping people with disabilities with technology, especially AI, is repeatedly emphasized in China. Pay attention to the Special Olympics soon, and notice how Chinese athletes have benefited from technology advances.

          2. CA

            China has just developed and will be able to mass-produce a robotic guide dog at a wonderfully low cost:

            https://www.chinadaily.com.cn/a/202406/01/WS665ac37ca31082fc043ca62a.html

            June 1, 2024

            Chinese university develops six-legged guide robot for blind people

            BEIJING — A research team from China’s Shanghai Jiao Tong University has developed a six-legged guide robot for visually impaired people that is expected to address the country’s shortage of real guide dogs.

            “We believe our robot will function as a ‘pair of eyes’ for visually impaired people,” said Professor Gao Feng from the School of Mechanical Engineering in a press release on the university’s official website on Thursday.

            According to the China Association of Persons with Visual Disabilities, there are about 17.31 million visually impaired people in China. However, due to high breeding costs and long training periods, there are reportedly only over 400 guide dogs in service nationwide, which means only one guide dog is available for every 40,000 visually impaired individuals in China…

  5. Socal Rhino

    I think LLMs have uses, but in most cases probably more like something you’d pay $30 for once, not $30/month in perpetuity, except in certain niches.

    From what I’ve read, China is going a different way, training models on factory or process specific data that may be more likely to generate productivity gains. In the US, top use case at the moment looks like price fixing.

    1. El Slobbo

      Not just China: the company that employs me is training models on process-specific data, mostly to make the back office more efficient.
      As noted at the beginning of the article, there are low barriers to entry. Plenty of open-source LLMs out there too, to provide the basis for natural language.

  6. furnace

    Thanks for the post! I saw Zitron’s article, but balked seeing the length, so a summary is very welcome.

    Itty bitty players could come up with useful applications based on itty bitty training sets. Think of a professional services firm using AI to generate routine letters to clients.

    Though people I know are using chatGPT for such purposes, in-house “a.i.” models could be quite useful indeed, and much cheaper to train and use than the gargantuan behemoths currently making the news. This could indeed be a very nice use-case for the technology. As for OpenAI and its ilk, I find it hard to believe they’ll survive these challenges, but who knows.

  7. stickNmud

    My thanks to Yves and Ed Zitron for trying to cut through the hype on AI. I’m not a techie, but have a few thoughts on AI off the top of my head. One is the old programming problem of “garbage in, garbage out”, in this case in regard to training sets and processes, and the hidden black box algorithms generated. Then there are the huge energy and infrastructure costs for giant new server farms – most recently in Texas for MS, IIRC, which has one of the most unreliable power grids in the US – and reliable power is essential for server farms, no? Last, the sometimes hilarious but potentially fatal (to any AI biz model) ‘hallucinations’ that generative AI is prone to.

    But I think Yves was too generous with her math for business success: a 90% chance of success is a one-in-ten chance of failure, which, in her example of seven essential biz functions, results in a 70% risk of failure and only a 30% chance of success (since statistical risks are additive). And Zitron’s comparison to Uber seems off, since Uber is just a hyped-up ginormous unregulated taxi company, and has no monopoly on ride share apps, while AI can electronically produce endless copies of their apps at a negligible cost, which makes it a good fit for the MS biz model.

    1. steppenwolf fetchit

      Or when the AI starts training itself on priorly-outputted AI output, then it will be vomit in, vomit out. And then vomit squared in, vomit cubed out. And ever upward and onward, Excelsior!

      Barf in, barfeces out. Barfeces in, barfecal feces out. Barfecal feces in, barfecal fecal feces out. And etc.

      The best thing that could happen would be for AI to be allowed to continue on its merry way until it crashes and burns so utterly and totally that it discredits itself.

      Regulating it would merely keep it alive longer, like a cancer that lives forever without growing quite big enough to kill the patient.

  8. Jeremy Hartley

    It is interesting to contrast Midjourney with OpenAI. Midjourney focuses solely on AI image generation and, if what I read is to be believed, they have created a profitable company with $200m in revenue and only 40 employees. They have raised $0 in investments.

    I personally happily pay the monthly subscription for Midjourney and get a lot of value from it because I need a lot of very specific images to express feelings, ideas or opinions.

    Midjourney presumably has much lower (though substantial) training costs because they are optimizing for a very specific task and not general intelligence. And of course all of the images used to train it are stolen…….

  9. Craig H.

    I have written this before.

    Sam Altman appears to be following the path of Elizabeth Holmes and Sam Bankman-Fried and he appears to have an identical destination.

    By the way all of the Sam B-F depositors ended up getting all of their money back. This detail deserves way more coverage than it has received. They did lose out in the Bitcoin price run up after the company fell into the hands of the lawyers. If I were them I would consider myself very lucky but I bet there’s a few who think they got hosed.

    1. Phichibe

      Ironically enough, what’s saved the creditors is that SBF invested in Anthropic, one of OpenAI’s principal competitors. Bezzles collide.

  10. Anon48

    “…the fact that AI’s utility has been greatly exaggerated… The hysteria of some months back about AI posing a danger to humanity was to justify regulation. The reason for that, in turn, was that the AI promoters woke up to the fact that there were no barriers to entry in AI…”

    This makes complete sense to me. I’m not a coder but regularly use ChatGPT (subscription version). I questioned myself about how a program could eventually achieve a conscious state and initiate independent action on its own without further prompting, and therefore create risk. My foundational thought/assumption has been that all software comes down to whether a “1” or a “0” is to be laid down. Which, to me, means that its framework ultimately rests upon a mathematical formula. And all of the programmatic software framework built upon that foundation is also probably based upon either mathematical formulas or logic (to me the essences of math and logic are similar in that they are tools used to synthesize specific axioms to achieve a higher truth). Consequently, I think the real question is: can a combination of mathematical or logic-based formulas or assertions be written in such a complete fashion as to create something new that will be able to think independently and have a sense of self? I, too, am extraordinarily skeptical about that.

    …” Itty bitty players could come up with useful applications based on itty bitty training sets. Think of a professional services firm using AI to generate routine letters to clients.”

    As an itty bitty player in my industry, one that requires a high level of technical knowledge that’s constantly changing, I agree 100 percent that ChatGPT and other AI tools can truly help level the playing field. Obviously, one has to be thoughtful about how to structure the query, and in requesting the most efficient but thorough response format. And make sure you follow up and confirm the interpretation of citations provided with the response…and make sure you’ve covered all the technical and context bases, among other things. BUT it clearly seems to mitigate the advantage held by large firms of having in-house specialists.

  11. Reader Keith

    OpenAI also has a hard time hanging on to their people. While moving between startups every 18-24 months is somewhat common for some folks, it can also mean that employees do not value future equity grants and are looking for greener pastures. Most people will put up with all kinds of nonsense if there is an exit strategy that will net them some serious cash but only if they see that exit.

    https://techcrunch.com/2024/08/05/openai-co-founder-leaves-for-anthropic/

  12. Adam1

    Just last week I was at scout camp with my boys and one of the other dads there was a programmer whose current gig supports a lot of AI periphery apps. We got joking and chatting about the risks most laypeople ignore or are oblivious to, and he mentioned that on many of the newer AI platforms you can turn up or turn down the level of “creativity”. The dial that turns this so-called “creativity” up or down basically adjusts how the model treats outlier data points.

    While I would totally expect that creative people and ideas are outliers… this is classic Stats 101 confusion of correlation with causation. Just imagine an AI tool for finance that was turned up to seek out outlier events! Does it know the difference between buying Apple stock in 1982 versus, say, buying high-yield mortgage-backed securities in 2007?!?! Not all outliers are created equal. You might as well invent a black swan device and sell it as AI for the Wolves of Wall Street (hahaha).
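    If that dial works the way sampling does on most LLM platforms, it is probably the “temperature” knob. A minimal sketch of temperature-scaled sampling, assuming a softmax over next-token scores (my toy illustration, not any platform’s actual code):

        import math
        import random

        def sample_next_token(logits, temperature=1.0):
            # Higher temperature flattens the distribution, so
            # low-probability "outlier" tokens get picked more often.
            scaled = [score / temperature for score in logits]
            peak = max(scaled)  # subtract the max for numerical stability
            weights = [math.exp(s - peak) for s in scaled]
            return random.choices(range(len(logits)), weights=weights, k=1)[0]

        scores = [4.0, 2.0, 0.5]
        print(sample_next_token(scores, temperature=0.2))  # nearly always token 0
        print(sample_next_token(scores, temperature=2.0))  # "outliers" show up often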

  13. chuck roast

    On Monday, defender of all that is good and holy Elon Musk filed a lawsuit against OpenAI and Sam Altman in Federal Court in California. Musk claims that he and other investors were hoodwinked by Altman and OpenAI’s claim to pursue “a humanitarian mission.” The supposed humanitarian benefit went out the window when Altman signed the partnership agreement with Microsoft. Musk claims that the OpenAI principals broke Federal racketeering laws by turning an ostensibly non-profit, humanitarian mission into a commercial deal. Well, Elon would know about racketeering.

  14. James

    Most IT these days is running on open source software – Linux, Kubernetes, Postgres. Open source software has totally eclipsed closed source software.

    OpenAI … as its name suggests … was originally going to release its models as open source but then reneged on that promise. Zuckerberg has stepped in and has been releasing open source models (the Llama family) which give the OpenAI models a run for their money. Open source models are going to win and OpenAI is going to lose.

    The famous “OpenAI has no moat” memo lays it out in detail:
    https://www.semianalysis.com/p/google-we-have-no-moat-and-neither

    1. SocalJimObjects

      “Most IT these days is running on open source software.” If that were true, the CrowdStrike outage would not have been so damaging. Also, unless I am mistaken, Microsoft is a 3 trillion dollar company.

      1. James

        From codemag.com:
        “Today you can see that Microsoft is an open-source company and really does build open-source software with all the essential elements of open source, like accepting pull requests from the public. More Microsoft employees contribute to open source projects on GitHub than any other company in the world. Building open source software via GitHub is the way a lot of programmers at Microsoft make their living now.”

        https://www.codemag.com/Article/2009041/When-Open-Source-Came-to-Microsoft

  15. QuarterBack

    Large Language Models (LLMs) are often imagined to be capable of grandiose things that they cannot now, and perhaps never will, achieve; however, their utility is here and now.
    LLMs are not the be-all-end-all technology to save the planet, but I have to say, from 50 years of working with “cutting edge technology”, the current generation of LLMs is the most significant advancement I have ever seen. This is an inflection point in technology development.

    That said, IMO as a business model, I think OpenAI may be doomed to fail unless they can finagle a multinational monopoly through regulatory capture. God knows they are trying. The fact of the matter is the open source community is nipping at OpenAI’s heels nearly every day. I haven’t watched much Shark Tank, but I do remember questions about the soundness of investing in an endeavor that has little proprietary intellectual capital advantage. Such giant investments and burn rates run the risk of leading the industry for a very brief period only to be lapped by the open source community.

  16. Greg Taylor

    OpenAI may well be toast but the profit potential for generative AI still seems high.

    1. Establish an LLM as a reasonably trustworthy source of truth, critical thought and content creation (code, visuals, fiction,…). People start relying on genAI instead of search and thought (as happened with GPS and land navigation skills).
    2. Those interested in influencing “truth” pay to have their versions embedded in LLM training. Far greater revenue-generating potential than search ads.
    3. Monopolize around internet access choke points (primarily operating systems, perhaps browsers, government censors/regulators) as with Google’s search.
    4. Slowly crapify as the big bucks roll in.

    What am I missing?

    1. .Tom

      Generative AI should be profitable as a service model based on open source software available to all. The owners of any given instance that’s put into production should be liable for its use of the data they used to train it (i.e. they have to license or own it) and for the ways that they use it. Part of the OpenAI business plan is to become too big to fail and/or simultaneously so useful to governments that it gets their protection. FTS.

  17. Paul Art

    When the first TCP/IP stack was written – free of cost, with collaboration across continents, and on UNIX – no one ever thought that one day the US Government would use it to spy on and control everyone. Commercial exploitation was hardly on anyone’s mind at the time. Today the direction of AI has been wrenched into the hands of greedy Corporations under the direct control of the Government. AI could very well end up like a new Palantir being used to ‘predict’ impending crime. Someone commented the other day about Minority Report. Exactly. That is the form it could take. From the ‘Lake of Data’ will emerge the Precogs with their pointing fingers.

    1. Acacia

      Already happening:

      https://www.japan.go.jp/kizuna/2024/06/japans_ai-based_crime_prediction.html

      Created by Singular Perturbations Inc., a startup developing solutions for crime reduction, the system predicts, in detail and with high accuracy, both when and where crimes are likely to occur and meticulously formulates optimal routes for patrol. Simulations in Japan have demonstrated the system to be over 50% more effective than conventional methods in its coverage of locations where crimes occur. Though it uses reams of data—including on past crime occurrence, demographics, geographic data, and weather—that normally would require an enormous amount of time and money to process through conventional computational methods, the company’s proprietary algorithm has succeeded in much faster data processing and lower costs.

      Note the company name. “Singular”… a.k.a. “Singularity”.

      And so it goes.

  18. .Tom

    It’s like Zitron explains to us how a Theranos machine cannot possibly work. Who’s listening? Lots of people are listening, but those who are already in are working hard to hush that down and get the next guys in so they can get out.

    Markets can be hilarious.

  19. GramSci

    I find it odd that nobody, including Zitron, considers the Government and The Church as the target end-customer.

    By the ‘Government’, I mean the Spooks, and by the ‘First Estate’, I of course mean the MSM and the network providers, which shape the Public Truth and monitor public discourse for heresy.

    The young and aspiring J. Edgars can’t trust humans to classify emails and blog posts as “dangerously subversive”, but they will pay anything for a robot trained to their specifications.

    Only the Military and its Church have the bottomless budget to keep OpenAI in business; nobody else has experience wasting money at this scale.

    Sure, before long OpenAI will fold and passive investors will be fleeced. The likes of Microsoft and Nvidia are not likely to get a consumer-facing product out of their contributions to this hare-brained project, but if they want future military contracts, it’s best practice for them to pay this current tribute to the Generals’ and Attorney Generals’ favorite toys.

  20. ChrisPacific

    Thanks for the link. It was an interesting article, reminiscent of Horan’s work on Uber (which it resembles in terms of financial hubris).

    NC readers might take issue with this:

    OpenAI also has a problem with its marketing. Sam Altman has repeatedly misled the media about what “AI might do,” conflating generative AI — which does not “know” things and is not “intelligence” — with the purely-theoretical concept of an autonomous, sentient artificial intelligence.

    That sounds more like a business model than a problem to me. Oversell the technology, attract a corresponding valuation, and make bank. The small problem, as he notes later in the article, is that OpenAI can’t really make an IPO given its corporate structure. So the usual exit strategy – unload the whole mess on credulous public investors to take the fall, and cash out – doesn’t apply in this case. Aside from Microsoft and its heads-I-win-tails-you-lose sweetheart deal, where’s the payoff for other investors?

  21. SocalJimObjects

    On Uber, yes it’s been unprofitable for most of its lifetime, but it’s managed to turn a profit a couple of times over the last few years, including the last quarter, https://www.wsj.com/business/earnings/uber-q2-earnings-report-2024-84cc5171.

    If anything, IMHO Uber has become a barometer for consumer health, because if you have the slack to spend money on overpriced rides and food deliveries, then you have to be doing quite well, no?

    From the FT, https://www.ft.com/content/22b1b73e-c9df-4318-8cf8-cac92c85f4f5,
    “Uber says consumer spending ‘never been stronger’ as profits jump”
    “The Uber consumer has never been stronger,” said Khosrowshahi. “We’re not seeing any softness or trading down across any income cohort.”

    Recession cancelled. Muppets coming through.
