Podcast
128 - The Future of AI with Tim Warren
In today’s episode of the SEOLeverage Podcast, Gert and his guest, Tim Warren, discuss the future of AI technology and its impact on SEO. They emphasize the need to understand and embrace these changes to remain competitive, while acknowledging the limitations of machine learning and AI in capturing human emotions.
They also discuss the potential impact of AI on the legal industry, proposing specialist AI engines and a shared platform for the entire industry. Finally, Tim tells Gert he hopes the Knowledge Graph will make it possible for Google to have personalized conversations that understand who people are.
Podcast Highlights:
00:00 Prologue
01:06 Introduction to the podcast episode topic and the guest
02:15 Tim Warren’s background and his role as a Chief Provocation Officer
03:46 The importance of asking questions in the face of change
04:39 AI impact on white-collar jobs
08:06 The Gartner Hype Cycle and AI's evolution
13:26 Rise and fall of AI companies
16:06 The importance of human expertise in SEO
23:12 Why more and more people are utilizing AI for their online search
32:13 AI in personal finance
36:35 The importance of trust and personalization in AI
43:35 The role of digital companions
50:41 Where to connect with Tim Warren
51:09 End
Resources:
ChatGPT - https://chatgpt.com/
Anthropic - https://www.anthropic.com/
OpenAI - https://openai.com/
Claude - https://claude.ai
CopilotAI - https://www.copilotai.com/
Perplexity - https://www.perplexity.ai/
Connect with Tim Warren:
LinkedIn - https://www.linkedin.com/in/nkrawczyk/
Connect with Gert Mellak:
Website: https://seoleverage.com/
Email: [email protected]
The Future of AI
The world of technology is evolving, and with it, so is the way we interact with information and each other. With the right strategy, keeping up with the latest trends becomes much easier.
In this episode, join Gert and Tim Warren as they discuss these topics:
- The future of Artificial Intelligence (AI)
- Its impact on search engines and federated AI systems
- SEO as a strategy for advancements
Table of Contents
- The Future of AI
- Adaptability in technological change determines survival
- The limitations of Large Language Models
- Focus on the top two and stand out
- Integrating emotional intelligence into AI systems
- Google's network as the key to innovation
- Legal technology advances benefit junior lawyers' future
- How to regulate inputs and outputs
- Safety concerns about personal data in AI
- How to use AI
- Conclusion
Adaptability in technological change determines survival
Adaptability and flexibility matter most in times of rapid technological change. Tim points out that questions are more valuable than answers, since no one can predict the future. Historically, human adaptability has been key to survival as technology and culture evolve.
Unlike past industrial revolutions, which primarily affected manual labor, this one targets white-collar work: professionals like lawyers and other knowledge workers, who may resist the change.
Gert adds that his team's work with AI began before tools like ChatGPT became popular. Initially, AI tools weren't advanced enough to replace junior writers, but over time they improved to the point where hiring junior writers became unnecessary. This shift led to significant changes in their processes and in how they advised clients.
After some time, clients returned, recognizing the need to adapt. Gert compares this reaction to the Gartner hype cycle: initial overenthusiasm and disillusionment give way to a measured response as people adjust to new technologies.
The limitations of Large Language Models
People were overly optimistic, believing LLMs would be a magic-bullet solution. The launch of ChatGPT fueled this enthusiasm, leading some to abandon proven methods. However, limitations like ChatGPT's outdated training data exposed the flaws in this thinking. Some SEO firms capitalized on the hype, generating irrelevant content and blaming the outdated data. These "quick-fix" solutions lacked long-term value and ultimately failed.
A more sustainable approach uses LLMs to improve efficiency and deliver real results. Don't be swayed by claims of revolutionary AI: true progress comes from solving existing problems, not chasing the latest trends.
Focus on the top two and stand out
Tim believes it's more valuable to focus on the top quality options rather than chasing the latest trends. For example, instead of always listening to the newest podcast episode, he recommends sticking to the top two most impactful ones.
He illustrates the point with companies like OpenAI and Anthropic, noting that even though Anthropic isn't as well known, it still does excellent work.
Lately, Tim has also avoided podcasts about current events and instead listened to content that has proven its value over time. He also prefers using Claude, an AI tool, over ChatGPT because it gives him better results.
Gert agrees with Tim, especially when it comes to SEO strategies. He points out that while AI can help, it can't replace the deep understanding that top SEO firms provide. He also talks about the importance of online reputation, which unfair reviews can easily affect.
Tim shares a personal story from the COVID-19 pandemic. A few employees left negative reviews despite his efforts to support his team. This experience taught him that human behavior is unpredictable and not always rational.
Integrating emotional intelligence into AI systems
AI is becoming an increasingly large part of everyone's lives, but there's a key missing piece: emotions. People tend to think of AI as a super-smart database that always gives the same answer, but that's not quite true.
AI can be inconsistent, just like people. The problem is that AI doesn't take emotions into account.
Here's an example: Facebook once created AI programs that could talk to each other and learn independently. These AIs decided to create their own language to talk in, which humans couldn't understand. This is kind of like how families develop their own little ways of communicating, with inside jokes and shortcuts.
The point is that AI needs to understand these human things. Despite its efficiencies, Gert acknowledges that AI still can't fully replace human intuition and empathy, which are critical in fields like SEO.
Google's network as the key to innovation
In the early days of the internet (around 1995), people relied on early tools like Gopher and hand-built directories to find information. It was a clunky system. Search engines like Lycos and AltaVista made searching a bit easier, but they still relied on people manually adding websites to their databases.
Google's founders, coming from an AI background, had a clever idea. Instead of creating their own database, they used the existing network itself.
They looked at backlinks (links from other websites) as a way to judge a website's importance and trustworthiness. This became the foundation of their algorithm, which ranked search results based on relevance.
Basically, Google turned the internet's own data into a giant ranking system. Today, large language models like Claude and GPT-3 are good at specific things. But Google's search engine continues to evolve and adapt.
Tim also points out that Google wasn't the first to consider using AI for ranking, but it was the first to actually do it successfully.
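The backlink idea described above can be sketched in a few lines. Below is a minimal, illustrative version of link-based ranking in the spirit of PageRank; the tiny "web" of pages and the damping value are made-up assumptions, not Google's actual algorithm or data.

```python
# A toy web graph: each page maps to the pages it links to (hypothetical).
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively redistribute rank along links until scores settle."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}  # start with equal rank everywhere
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)  # rank splits across outlinks
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

ranks = pagerank(links)
# Page "c" collects links from a, b, and d, so it ends up ranked highest.
print(max(ranks, key=ranks.get))
```

The key design point matches Tim's observation: no one manually scores any page. The network's own link structure, iterated to a fixed point, produces the ranking.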
Legal technology advances benefit junior lawyers' future
Law firms are starting to use large language models (LLMs) like GPT-3. These act like tireless AI research assistants that can handle many of the legal tasks currently done by junior lawyers. That can be a good thing for junior lawyers: imagine doing 20 hours of work in just 2 hours.
Law firms can use LLMs in two ways:
- build their own
- use one created by a group of law firms
If a group of firms creates an LLM, it sets a baseline standard for the whole industry. This means new lawyers coming out of school will be able to hit the ground running.
Tim isn't sure this is an entirely good thing, because standards set collectively can sometimes end up low. In the end, there will probably be a few big companies offering these legal AI tools, just as there are Googles and Microsofts in the tech world.
How to regulate inputs and outputs
Regulating AI is tricky. You can control what information goes into the system (inputs) and what it produces (outputs), but controlling what happens in between is much harder. It's like trying to regulate someone's thoughts!
Tim suggests that AI systems will become more specialized. There might be one AI for Spanish law, another for e-commerce in Spain, and so on. This way, your personal data stays private.
For instance, information about your health would only be accessible to a healthcare AI system.
On your personal devices (phones, wearables), you'll likely have a personal AI assistant powered by a smaller language model. This AI will know a lot about you, but its information sharing will be controlled.
Encryption will also be key to keeping all this personal data safe. So, while third-party cookies might disappear, stores will still be able to track purchases without revealing your identity. This personal data store will even hold your health information.
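One way a store could link repeat purchases without storing who you are is a keyed hash: the raw identifier is replaced by a stable, non-reversible token. This is a hypothetical sketch of that idea, not a description of any specific retailer's system; the secret key, email addresses, and function names are all invented for illustration.

```python
import hashlib
import hmac

SECRET_KEY = b"store-side secret"  # assumed to be kept private by the store

def pseudonymous_id(email: str) -> str:
    # HMAC-SHA256 yields the same token for the same customer every time,
    # but the token cannot be reversed back into the email address.
    return hmac.new(SECRET_KEY, email.lower().encode(), hashlib.sha256).hexdigest()

purchases = {}

def record_purchase(email: str, item: str) -> None:
    # Only the token is stored; the raw email never enters the database.
    purchases.setdefault(pseudonymous_id(email), []).append(item)

record_purchase("[email protected]", "camera")
record_purchase("[email protected]", "tripod")
print(len(purchases))  # both purchases collapse onto one anonymous token
```

The trade-off mirrors the episode's point: purchase history stays linkable for the store, while identity stays inside the customer's personal data store.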
Safety concerns about personal data in AI
Tim worries that companies can't always protect personal data. He's seen cases where data leaks from one AI system to another. This is especially concerning for sensitive data like health information.
Tim thinks the future of AI is moving away from understanding everything about a specific person and toward understanding general trends and behaviors.
For example, instead of recommending a camera based on everything you've ever bought, an AI might recommend one based on what other adventure photographers use. This approach relies more on trust. You trust the AI to find things that are generally relevant. But, you don't give it access to all your personal data.
Gert agrees with Tim and mentions Google's "Knowledge Graph" as an example. This is a system that tries to connect information about people and things online. While it's not perfect, it shows how AI is getting better at understanding people and their connections to information.
How to use AI
Our actions train AI systems, even if we don't realize it. For example, merging duplicate contacts on your phone teaches AI that these contacts are the same person. This can be used to build a more complete picture of a person.
Additionally, platforms like LinkedIn and Google are starting to require users to verify their identities with things like IDs and videos. Tim worries that this verification process isn't very secure, and that companies are more interested in building a trust score to sell you things than in protecting your privacy.
In the future, AI assistants could act as our advocates. They could help us make decisions based on our personal needs and risk tolerance. To do this effectively, AI assistants would only need to know a small amount of relevant information about us.
However, some seemingly unimportant details, like a hobby or interest, can reveal things about a person that can be useful for building trust and rapport. The challenge will be choosing the right balance between personalization and privacy.
Conclusion
Both Gert and Tim reflect on the balance between technological advancement and preserving human values.
Tim acknowledges that some seemingly unimportant details can be helpful, but argues that AI should focus on what's relevant to the task at hand.
Always consider the importance of adapting to new technologies while maintaining empathy and understanding in business interactions.