AI winter is a term for a period of reduced funding and interest in the research and development of artificial intelligence systems.
It usually follows a period of overhype and under-delivery on the expected capabilities of AI systems. Does this sound like today’s AI?
Over the past few months, we’ve observed several key generative AI systems failing to meet the promises of investors and Silicon Valley executives – from the recent launch of OpenAI’s GPT-4o model to Google’s AI Overviews to Perplexity’s plagiarism engine and many more.
While such periods are typically temporary, they can impact the industry’s growth.
This article tackles:
A brief history of AI winters and the reasons each one occurred.
Characteristics and lessons learned from past AI winters.
Are we headed for an AI winter now?
The future of AI in search and your role in it.
Brief history of AI winters and the reasons each one occurred
The field of AI has a rich (albeit quite short) history, marked by periods of intense excitement followed by deep disappointment. These periods of decline are what we now call AI winters.
The first one occurred in the 1970s. Early AI projects like machine translation and speech recognition failed to meet the ambitious expectations set for them. Funding for AI research dried up, leading to a slowdown in progress.
Several factors contributed to the first AI winter.
In a nutshell, researchers over-promised the capabilities of what AI could achieve in the short term.
Even now, we don’t fully understand human intelligence, making it hard to replicate in AI.
Another key factor was that the computing power available at the time was insufficient to handle the growing demands of the AI field, which inevitably halted progress in the area.
Some progress was observed in the 1980s with the development of expert systems, which successfully solved specific problems in limited domains. This period of excitement lasted until the late 1980s and early 1990s when another AI winter arrived.
This time, the reasons were more closely tied to the death of one computing technology – the LISP machine, which was displaced by cheaper, more powerful general-purpose computers.
Simultaneously, expert systems failed to meet expectations when prompted with unexpected inputs, leading to errors and erosion of trust.
Another high-profile effort of the era was Japan’s Fifth Generation Computer Systems project.
This was a collaboration between the country’s computing industry and government that aimed to revolutionize AI-oriented operating systems, computing techniques, technologies and hardware. It ultimately failed to meet most of its goals.
Despite research in AI continuing throughout the 1990s, many researchers avoided using the term “AI” to distance themselves from the field’s history of failed promises.
This is quite similar to a trend observed today, with many prominent researchers carefully specifying the area of research they operate in and avoiding the umbrella term.
AI interest grew in the early 2000s due to machine learning and computing advances, but practical integration was slow.
Despite this period being referred to as the “AI spring,” the term “AI” itself remained tarnished by past failures and unmet expectations.
Investors and researchers alike shied away from the term, associating it with overhyped and underperforming systems.
As a result, AI was often rebranded under different names, such as machine learning, informatics or cognitive systems. This allowed researchers to distance themselves from the stigma associated with AI and secure funding for their work.
From 2000 to 2020, IBM’s Watson was a prime example of failed AI integration, following the company’s promise to revolutionize healthcare and diagnostics.
Despite its success on the game show Jeopardy!, the AI super project faced significant challenges when applied to real-world healthcare.
The Oncology Expert Advisor, developed in collaboration with the MD Anderson Cancer Center, struggled to interpret doctors’ notes and apply research findings to individual patient cases.
A similar project at Memorial Sloan Kettering Cancer Center encountered problems due to the use of synthetic data, which introduced bias and failed to account for real-world variations in patient cases and treatment options.
When Watson was implemented in other parts of the world, its recommendations were often irrelevant or incompatible with local healthcare infrastructures and treatment regimens.
Even in the U.S., it was criticized for providing obvious or impractical advice.
Ultimately, Watson’s failure in healthcare highlights the challenges of applying AI to complex, real-world problems and the importance of considering context and data limitations.
Meanwhile, several AI-related trends emerged. These niche technologies gained buzz and funding but quickly faded after failing to live up to the hype.
Think of:
Chatbots.
IoT (internet of things).
Voice-command devices.
Big data.
Blockchain.
Augmented reality.
Autonomous vehicles.
All of these areas of research and development still hold plenty of potential, but investor interest in each peaked and then faded at different points in the past.
Overall, the history of AI is a cautionary tale about the dangers of hype and unrealistic expectations – yet it also demonstrates the resilience of the industry’s mission. Despite the setbacks, AI technologies have kept evolving.
Dig deeper: No, AI won’t change your marketing job: A contrarian perspective
Characteristics and lessons learned from past AI winters
Generative AI is the most recent iteration in the cycle of AI breakthrough, hype, investment and multi-faceted technology integration in many areas of life and business.
Let’s track whether it is currently headed toward an AI winter. But before that, allow me to briefly recap the lessons learned from each past AI winter.
Past AI winters share the following key characteristics:
Hype cycle
AI winters often follow periods of intense hype and inflated expectations.
The gap between these unrealistic expectations and the actual capabilities of AI technology leads to disappointment and disillusionment.
Technical barriers
AI winters frequently coincide with technical limitations.
Whether it’s a lack of computational power, algorithmic challenges or insufficient data, these barriers can significantly impede progress.
Financial drought
As enthusiasm for AI wanes, funding for research and development dries up.
This lack of investment can further stifle innovation and exacerbate the slowdown.
Backlash and skepticism
AI winters often witness a surge in criticism and skepticism from both the scientific community and the public.
This negative sentiment can further dampen the mood and make it difficult to secure funding or support.
Strategic retreat
In response to these challenges, AI researchers often shift their focus to more manageable, less ambitious projects.
This can involve rebranding their work or focusing on specific applications to avoid the negative connotations associated with AI.
Then a niche breakthrough occurs, starting the cycle all over again.
AI winters aren’t just a temporary setback; they can really hurt progress.
Funding dries up, projects get abandoned and talented people leave the field. This means we miss out on potentially life-changing technologies.
Plus, AI winters can make people suspicious of AI, making it harder for even good AI to be accepted.
Since AI is becoming increasingly integrated into national economies, our daily lives and many businesses, a downturn hurts everyone.
It’s like hitting the brakes just as we start making progress toward achieving some of the world’s biggest tech-related goals like AGI (artificial general intelligence).
These cycles also discourage long-term research, leading to a focus on short-term gains.
Despite stalling progress, AI winters offer valuable learning experiences. They remind us to be realistic about AI’s capabilities, focus on foundational research and ensure diverse funding sources.
Collaboration across different sectors is key, as is transparent communication about AI’s potential and limitations – especially to investors and the public.
By embracing these lessons, we can create a sustainable and impactful future for AI that truly benefits society.
Let’s address the big question – are we currently headed toward an AI winter?
Are we headed for an AI winter now?
It appears that progress in AI has slowed after an explosive 2023 – in terms of new technologies released, updates to existing models and the hype around generative AI.
People like Gary Marcus believe that the big leaps forward in AI model performance are becoming less frequent.
The lack of breakthroughs in generative AI and new model developments from the leaders in the space suggests a potential slowdown in progress.
Mentions of AI on investor calls have also decreased, leading more observers to believe that the productivity gains generative AI promised will not extend much beyond what has already been achieved.
Admittedly, that isn’t much. The ROI isn’t great, and many companies struggle to find the productivity returns they expected from their AI investments.
The rapid advancements and excitement around tools like ChatGPT have inflated expectations about their capabilities and potential impact.
Something previously apparent to only a small fraction of the population, mostly AI researchers, is now becoming general knowledge: large language models (LLMs) face major limitations, including hallucinations and a lack of true understanding, which reduce their practical impact.
People are realizing that these technologies, when misused, are already harming the web. AI-generated content has spread across the web, from social media comments to posts, blogs, videos and podcasts.
Authentic human-generated content is becoming scarce. Future AI models will inevitably be trained on synthetic content – contamination that is hard to avoid and that can degrade performance over time, a phenomenon researchers call model collapse.
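To get an intuition for why training on a model’s own output degrades quality, here is a minimal, hypothetical sketch (mine, not from the original discussion): each “model generation” is just a Gaussian fitted to samples drawn from the previous generation’s fitted Gaussian. The diversity (variance) of the data tends to shrink generation after generation – a toy analogue of model collapse.

```python
import random
import statistics

def simulate_model_collapse(generations=500, sample_size=50, seed=0):
    """Repeatedly fit a Gaussian to samples drawn from the previous
    generation's fitted Gaussian. Returns the variance per generation."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the original "human" data distribution
    variances = [sigma ** 2]
    for _ in range(generations):
        # "Train" the next model only on output sampled from the current one.
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(samples)
        sigma = statistics.pvariance(samples, mu) ** 0.5  # max-likelihood fit
        variances.append(sigma ** 2)
    return variances

variances = simulate_model_collapse()
print(f"generation 0 variance:   {variances[0]:.4f}")
print(f"final generation variance: {variances[-1]:.6f}")
```

Two effects compound here: the maximum-likelihood variance estimate is slightly biased downward, and sampling noise accumulates across generations, so the fitted distribution drifts toward ever-narrower output. Real LLM training is vastly more complex, but the feedback-loop mechanism is the same.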
We haven’t even addressed the ease of hacking generative AI, ethical issues in sourcing training data, challenges in protecting user data and many other problems that tech companies often overlook in AI discussions.
Still, some signs point against an impending AI winter in the short term.
AI technology continues to evolve quickly, with open-source models rapidly catching up to closed ones and innovative applications like AI agents emerging.
Furthermore, AI is being integrated into various industries and applications, often seamlessly (sometimes not – looking at you, AI Overviews), demonstrating at least some practical value.
It’s unclear whether these implementations will meet the tests of time.
Ongoing investment in companies like Perplexity shows investors’ confidence in AI’s potential for search, despite skeptics debunking some of the company’s claims and questioning its tactics around intellectual property.
Dig deeper: Google AI Overviews are an evolution, not a revolution
The future of AI in search and your role in it
AI is undoubtedly here to stay. My fellow automation enthusiasts and I are thrilled that everyone is now excited about this technology and exploring it themselves.
It’s important not to let the current excitement raise your expectations too high. The technology still has limits and a long way to go before reaching its full potential.
Beware of tech bros and CEOs promising uncanny ROI or sharing their doomsday prediction of the day (always so, so soon) in which AGI arrives and AI replaces you.
While automation is revolutionizing the workforce, change is gradual.
Progress is being made toward AGI, but reputable AI researchers believe this reality will not come in the immediate future. Numerous obstacles must still be overcome to achieve this.
Understanding any emerging technologies (especially those so widely discussed as AI is at the moment) and how they work is crucial to creating strategies that stand the test of time.
What we might see happening (in search, in particular) is one of two scenarios.
Progress continues
Implementations stand the test of time, and models improve.
For search marketers, this might mean more AI-generated content to outcompete but also improved search systems and AI-detection algorithms, easing this task by amplifying human-written, authentic voices.
Investors win. Big tech wins. Everyone wins.
That is if we solve the challenges related to ethics, security, IP and resource use. But I digress.
Progress stalls
Systems become worse. Think:
No improvement in Google AI Overviews.
Even more spam in web results.
Misinformation.
Entirely poisoned social media feeds, online forums and other digital spaces.
In this scenario, big tech will start bleeding money rapidly. (Some evidence suggests this trend has already begun.)
AI systems are, at the end of the day, expensive to develop, maintain and improve.
Failing to do so, however, will erode investor trust, and companies will eventually be pushed to scale back implementations in the area.
The public failure of several of these technologies to meet expectations will lead to the widespread loss of trust in the potential of generative AI.
In both scenarios, a company’s brand, the authenticity of its people and its approach to consumer relationships will become even more important.
The second scenario will also amplify the consumer desire for authentic non-digital experiences.
My advice to search marketers is to stay aware of the risks of AI and learn how different models work. What are their benefits and limitations? What tasks do they handle well or poorly?
Experiment with tools to boost your productivity. Many models aren’t yet ready for full marketing use, and treating them as such can worsen the issues mentioned in this article.
Dig deeper: How AI will affect the future of search