Generative AI will continue to make tremendous strides. It’s nowhere near the point of eliminating the need for human oversight, however.
Just as we were finishing our Thanksgiving leftovers last year, the American artificial intelligence research laboratory OpenAI released a powerful text-generation tool called ChatGPT.
Rather than define it in my own words, I’ll get all meta and let ChatGPT introduce itself:
ChatGPT is a large language model developed by OpenAI that can generate human-like text. It is trained on a diverse set of internet text, and can be fine-tuned for various language generation tasks such as conversational response generation, language translation, and summarization.
ChatGPT is no parlor trick or vaporware. (In case you’re curious, this is the only time that I used it while writing this.) The sophisticated tool relies upon the latest version of OpenAI’s generative pre-trained transformer, version 3.5.
A full technical explanation of GPTs isn’t necessary here. Suffice it to say that a GPT systematically combs through and ingests ginormous amounts of data. It then applies “a machine-learning technique that teaches computers to do what comes naturally to humans: learn by example.” We call this deep learning. Again, there’s much more to it, but machine learning is a subset of AI, and deep learning is a subset of machine learning. A helpful analogy is Matryoshka, or Russian nesting dolls.
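GPTs are vastly more sophisticated, but the "learn by example" idea can be illustrated with a toy next-word predictor in a few lines of Python. This is a deliberately oversimplified sketch of statistical text generation, not how ChatGPT actually works: it merely counts which word follows which in a tiny sample and then strings likely successors together.

```python
import random
from collections import defaultdict

# A toy "language model": learn word-to-word transitions by example
# from a tiny corpus, then generate text one predicted word at a time.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": record every word that was observed to follow each word.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=6, seed=0):
    """Generate `length` words by repeatedly sampling a plausible next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:  # no observed successor; stop early
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Real GPTs replace these simple word counts with billions of learned parameters and much longer context windows, but the generative loop, predict the next token, append it, repeat, is conceptually similar.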
If you haven’t taken ChatGPT for a test drive yourself yet, I encourage you to do so. You’ll quickly understand why its release broke the internet, as the kids say. Twitter was all atwitter.
ChatGPT packed the requisite wow factor to elicit countless reactions from the cognoscenti. Derek Thompson of The Atlantic named ChatGPT one of his breakthroughs of 2022. In his words, “These uncanny tools, having emerged from our mind, may change our mind about how we work, how we think, and what human creativity really is.”
Cade Metz of the New York Times opined, “The Turing test used to be the gold standard for proving machine intelligence. This generation of bots is racing past it.”
Dharmesh Shah is the founder and CTO of HubSpot, a company that makes customer relationship management and marketing software. As he astutely observed on LinkedIn, “The Internet existed before Netscape. But the browser helped millions of mere mortals connect the dots on what could be done, and dream of what could be.”
It’s a valid comparison. Netscape democratized the internet, and ChatGPT is doing the same for AI. Within five days of its launch, more than one million people had used it.
Is all the hype around ChatGPT really justified, though? Not everyone is sold.
As OpenAI CEO Sam Altman cautioned over Twitter, “It’s a mistake to be relying on ChatGPT for anything important right now.” No argument here, but the operative words in that tweet are right now.
AI expert Gary Marcus echoes that sentiment. Marcus, an NYU professor and the author of several books on the subject, appeared on The Prof G Pod with Scott Galloway in January 2023. He is not nearly as bullish on ChatGPT’s release as Thompson, Shah, and countless others. In his words, ChatGPT’s backbone (version 3.5) is “not so different from a bunch of other systems that came before it,” including:
GPT version 3: The previous iteration that OpenAI launched to a more limited audience in June 2020.
Meta’s Galactica AI: Pulled by the company a whole three days after its November 2022 launch because of its obvious inaccuracies.
Despite its fancy tech, Marcus wisely reminds us to remain cautious. ChatGPT is still problematic; most importantly, it can’t distinguish fact from fiction. Its results sound more authoritative than they are. Much of the hype stemmed from the scale of OpenAI’s 3.5 launch. Version 3 was much more limited.
NOT JUST TEXT
While GPTs aren’t perfect, they can already serve a number of practical business purposes. That is, they can do more than just spit out original, possibly apocryphal text in response to user prompts.
ChatGPT belongs to a larger group of technologies called generative AI. As Kevin Roose of the New York Times put it, it’s “a wonky umbrella term for AI that doesn’t just analyze existing data but creates new text, images, videos, code snippets, and more.”
If you’re a writer, software developer, graphic designer, photographer, artist, or any other type of creative, that last sentence should give you pause.
Yes, AI has been around in different forms for decades. With the launch of ChatGPT, though, AI is no longer some abstract, distant threat. Shit is starting to get real—and fast.
GENERATIVE AI IN THE WORKPLACE
The American historian Melvin Kranzberg once famously opined, “Technology is neither good nor bad; nor is it neutral.”
As we’ll see in this section, nowhere do those words ring truer than with generative AI. Generative AI is a bit of a Rorschach test. It’s not hard to find positives and negatives. Here are some of them.
At a high level, generative AI tools may make some workers more productive—in some cases, much more. While researching this book, I came across an interesting analogy of how we’ll use tools based on generative AI in the coming years.
Professional golfers walk the course but don’t carry their own bags. That job falls to their caddies. These folks do more than just lug heavy bags of clubs, golf balls, snacks, umbrellas, water, and other assorted equipment. They don’t just rake sand traps. Before tournaments begin, caddies meticulously study the course. While rounds are taking place, they make critical recommendations on where golfers should aim, which club to hit, and how to read a putt.
Caddies’ advice can be indispensable. A single stroke is usually the difference between making and missing the cut, among other things. For their efforts, caddies typically earn $2,000 per week plus anywhere from 5 to 10 percent of their bosses’ weekly prize money.
Think of generative AI tools as caddies—at least in the short term. They don’t swing the clubs themselves, and they aren’t infallible. Their tips help pro golfers hit the best shot possible under the circumstances. In other words, even the best caddie guarantees nothing on his own. All things being equal, however, a better caddie results in a better outcome.
To complete the analogy, robots may swing the clubs themselves in the future, and the golfer may become irrelevant. We’re not there yet, though. The arrival of generative AI also means that employers will try to get more bang for their employee buck.
Massive Job Losses?
These newfangled generative AI tools are exciting and more than a little addictive. (The phrase time suck comes to mind.) Some might see them as innocuous. For example, let’s examine image-generation programs. What’s the harm in creating a few goofy images like Bart Simpson holding a book?
In October 2022, I hired a local photographer to take a few headshots. I wanted to spruce up my website and add a more recent pic to the back of my book Low-Code/No-Code. I spent a modest $120 for 50 pictures, one of which adorns the physical book.
Could AI have done the same? Yes.
The AI portrait app Lensa launched in 2018 and performed respectably. Then, in late 2022, it caught lightning in a bottle à la Instagram in 2010: its newly launched magic avatar feature went viral, and the app soared in popularity.
At that time, I downloaded it for my iPhone. I ordered fifty original AI-generated images for a whopping total of $3. After uploading eight existing headshots, Lensa spit out its creations. I didn’t care much for Lensa’s depiction of me in space, but some of its photos were pretty slick.
Will proper photographers go the way of travel agents? What about knowledge workers, like lawyers?
Erik Brynjolfsson is the director of the Stanford Digital Economy Lab and a professor there to boot. As he told David Pogue on CBS Sunday Morning in January 2023:
If done right, it’s not going to be AI replacing lawyers. It’s going to be lawyers working with AI replacing lawyers who don’t work with AI.
Or, while we’re at it, what about entire mega-corporations?
Some industry types have speculated that tools like ChatGPT may soon obviate search engines. Google, as we know it, could become another Ask Jeeves. The juggernaut typically generates about $150 billion yearly for its parent company, Alphabet. That number represents more than 80 percent of Alphabet’s annual revenue. Will people use ChatGPT instead of Google? If they do, they won’t click on ads—at least not on Google’s ads.
The idea that search engines may evaporate seems unlikely. Still, search just became far more interesting than it has been in twenty years.
In early February 2023, Microsoft announced that it had already started integrating OpenAI’s tech into Bing, its also-ran search engine. Early reviews from beta testers were positive. Joanna Stern of the Wall Street Journal wrote that “search will never be the same.” Odds are that Bing will capture a good chunk of its rival’s 84 percent market share.
Google’s head honchos are too smart to sit back as Bing generates buzz. What’s more, the company hasn’t exactly been ignoring AI. As I was wrapping up the manuscript for this book, Google launched Bard, its ChatGPT competitor. Stay tuned.
Certain applications have earned our trust over time. We’re confident that Microsoft Excel correctly analyzes data, calculates averages, and produces charts. Our email messages arrive unaltered.
We still need to use our brains. For example, it’s dangerous to blindly trust Google results without reviewing the source of its recommendations.
And then there’s this crop of powerful yet largely unproven generative AI tools. The algorithms work their magic in black boxes and provide zero transparency into their methods. The word opaque comes to mind. Should we effectively represent their results as our original creations? Are you willing to let them make key business or life decisions? I’m not.
Going Too Fast Too Soon
With any emerging technology, strong incentives exist to exaggerate its capabilities, especially among the ethically challenged.
Case in point: a certain electric-car company. Multiple lawsuits “argue that Tesla’s self-driving software is dangerously overhyped.” Tellingly, Musk initially vetoed his team’s preferred name of Copilot, insisting upon Autopilot. In July 2023, Bloomberg reported that “Musk oversaw the faked 2016 Autopilot video.”
Musk’s penchant for hyperbole and showmanship rivals that of P. T. Barnum, but he’s hardly the only source of embellishment.
Since 1994, CNET has been a popular website that has published thousands of articles about technology and consumer electronics. Its current tagline reads, “Your guide to a better future.”
Evidently, that future doesn’t include fact-checking.
The site has begun quietly using AI to generate original content. As of this writing, CNET Money has penned seventy-three articles, and it’s “already publishing very dumb errors.” (Maybe it should steal a page from the Associated Press’s playbook. The AP has been using robot journalists to write limited financial articles since at least 2015, sans the same level of controversy.)
The dangers of eliminating all human supervision from creative endeavors today are obvious—or at least they should be to responsible business leaders. In this case, CNET loses credibility with discerning readers in an already challenging environment. (How long until advertisers follow?)
Those unable or unwilling to fact-check CNET’s claims proceed with false knowledge that, as George Bernard Shaw famously said, is more dangerous than ignorance. Let’s hope that management at BuzzFeed and other sites heeds Shaw’s warning as they use ChatGPT to churn out quick and cheap content.
Peering into the future of any emerging technology is a roll of the dice. History has repeatedly shown that predictions aren’t guarantees, even from folks in the know.
In the early 1940s, IBM president Thomas J. Watson opined, “I think there is a world market for about five computers.” After the 2007 launch of the iPhone, Microsoft’s then-CEO Steve Ballmer famously said, “There’s no chance that the iPhone is going to get any significant market share. No chance.” Long-defunct AltaVista could have bought Google in 1998 for a mere $1 million.
It’s entirely possible that the AI bulls are misguided. Former US Treasury Secretary Larry Summers’s admittedly tempered comparison of AI to historical game-changers may be wildly inaccurate:
Just as the printing press or electricity was a huge change because it was a general-purpose technology, this could be the most important general-purpose technology since the wheel or fire. And that is something we all are going to be changed by.
I can’t fathom how all the AI advancements and billions invested will result in bupkis. With respect to generative AI, the essential question isn’t if sophisticated tools will transform the workplace. The better questions are when and how.
In a way, generative AI changes nothing. Doing more with less will continue to rule the day. That’s the nature of capitalism.
Generative AI will prove too tempting to resist. Plenty of firms will cut back on hiring. Bosses will expect their employees to wear multiple hats. Why can’t a graphic designer also serve as a copywriter or blogger? Our financial analyst is an Excel wiz. Surely, he can build our website and manage our social media accounts in his spare time, right?
This mindset is misguided. The world’s best golf caddies don’t also do side hustles as tennis coaches. The jack of all trades is the master of none.
Some companies will use generative AI to aggressively supplant people with machines and software. But firms that eliminate the human element will regret it. Generative AI just isn’t there yet. What’s more, morale among existing employees will crater. Finally, more companies will hire fractional legal counsels specifically to handle a burgeoning array of complicated AI issues.
To knowledge workers and creatives, AI has always been an existential threat, albeit a distant and abstract one. ChatGPT made it much more concrete. Practical and legal considerations make going all-in on AI unwise at this time. Fine, but it’s silly to ignore it altogether.