What the OpenAI turmoil says about our future
I have no idea why Sam Altman was fired from OpenAI. Or why he was reinstated. But if you’ve spent any time thinking about AI—or simply considered the subject with vague, under-researched queasiness—then this OpenAI drama is a great opportunity to take stock of the potential impact of AI on your business. Specifically, regarding communications: What aspects can AI handle, and for which aspects do you still need humans (for now)?
If Altman was canned for some conventional reason (e.g., an inappropriate workplace relationship or financial impropriety), then the recent drama is irrelevant to this discussion. On the other hand, there are rumors that the company had achieved some massive breakthrough in AI technology that moved it closer to Artificial General Intelligence (AGI), the point where computers effectively become sentient and a potential threat to humans. Such an advance might have triggered the board, which runs the non-profit part of the company whose “primary fiduciary duty is to humanity,” according to its charter.
Current AI, such as ChatGPT, is a simulacrum of intelligence. It is a most-likely-next-word system that makes no attempt to understand what it is saying. As has often been noted, the output of such a Large Language Model (LLM) AI is indifferent to factual accuracy; it merely reflects the system’s finding that, given all the preceding words and concepts, the following words and concepts are statistically likely to appear next.
Aside from this truth problem, the system’s design virtually guarantees that nothing creative will come of it; it’s taking the average of every sentence on the internet. You know how the Yelp review of every restaurant seems to average out to around 3.4 stars? Same idea.
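For the technically curious, the “most-likely-next-word” idea can be sketched with a toy bigram model. This is a drastic simplification (real LLMs are neural networks trained on enormous corpora, and the corpus and function names below are invented for illustration), but it shows how text can be generated from pure word statistics with no understanding involved:

```python
from collections import Counter, defaultdict

# Toy bigram "most-likely-next-word" model. Real LLMs are far more
# sophisticated, but the core move is the same: pick the word that is
# statistically most likely to follow, with no grasp of meaning.
corpus = "the cat sat on the mat and the cat ran".split()

# Count which word follows each word in the tiny corpus.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def most_likely_next(word: str) -> str:
    # Return the most frequent follower; indifference to truth is built in.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" ("cat" follows "the" twice, "mat" once)
```

Chain calls like this together and you get fluent-sounding text that averages its training data, which is exactly the creativity problem described above.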
To me, this approach feels like an evolutionary dead end on the path to AGI. Success will come instead from imbuing computers with actual reasoning ability, so that they construct sentences based on logic and creativity.
If OpenAI indeed did make substantial progress towards AGI, then God help us. Not just because sentient machines might be around the corner, with all the concomitant risk of sci-fi dystopia, but because the OpenAI board of directors that had specifically been tasked with defending “the best interests of humanity” in that scenario got rolled in under a week.
Here’s how Scott Galloway summarized the drama in a sentence that no LLM-based AI could have come up with: “If this was a battle between capital and (concern for) humanity, capital smothered humanity in its sleep.”
Reassuring! So, since we have somewhere between a few weeks and a few decades before AGI, let’s summarize what an LLM can and can’t do for you right now.
What LLM AI is great at:
- A high-school-level writing assignment. “Write 3 pages about the demise of the Roman Empire,” or more relevant to a business setting, “Write a 400-word introduction for my business plan” (just don’t expect it to be very interesting).
- SEO content. “Write 800 words about Zoom etiquette.” If your business could benefit from a whole website full of search-engine optimized content, AI is your friend. At least, until the web is so full of this pablum that no one reads any of it anymore.
- Repetition. “Produce a hundred variations of this social media ad headline.” Anything where you need to produce many similar versions for different platforms, formats, and goals.
- Summarize and recall. “List the key deal points from this 364-page document”; “What was the signer’s middle name?” I’ve been watching the pre-AI Netflix lawyer show Suits, and the feats of memory that make Mike Ross special could easily be replaced by AI.
What challenges LLM AI:
- The original research problem. I’m writing an article right now for a biotech startup about the problem of iron deficiency in Africa. There isn’t a lot of good information available online; I need to find and speak with relevant experts.
- The self-reflection problem. An AI can’t know what you’re thinking, or draw on your memories (aside from what is accessible digitally).
- The novelty problem. You invented something new. By definition, an LLM has few or no pre-existing documents it can process in order to write meaningfully about it.
- Motivations: What are the needs & motivations of your startup’s target audience? What motivations did you have for creating your startup (which is critical for a powerful founding story)?
- Strategy & negotiation. What is your best beachhead market, what features do they need in an MVP, and how should you pitch them?
- Metaphors & Analogies: Effective communication of your business often depends on apt comparisons between your novel technology and something your audience already understands. Good prose builds on that shared understanding.
So when will AI eat us all? Let’s keep our eyes peeled for information regarding the motivations behind Altman’s original firing.
As an outside observer, I doubt that AGI is imminent. Change has become so ubiquitous that our minds tend to extrapolate to extremes immediately, even when reality moves much more slowly. We like to say that “everything is different,” but we often forget how slow actual change is. Not just in tech terms like self-driving cars, which have been “happening next year” for over a decade, but in the “nothing seems to change” human condition in the face of technology. Don’t forget, we are chattering about manned missions to Mars while religions still fight about which hill is the holiest.
Otto Pohl is a communications consultant who helps startups tell their story better. He works with deep tech, health tech, and climate tech leaders looking to create profound impact with customers, partners, and investors. He has taught entrepreneurial storytelling at USC Annenberg and at accelerators across the country. Learn more at www.corecommunicationsconsulting.com