London’s technology ecosystem is underrated. Some of the best technical talent in the world lives there, and the AI boom of the last few years has spawned some incredible companies from London’s tech community.
Last week my team and I made our annual summer trip to London, where we hosted several events and met with founders, engineers, and investors. These events included a dinner with 50 founders of early-stage software startups based in London, and another with 15 senior AI engineering leaders and researchers.
Today’s post synthesizes the key ideas and debates from those conversations, focusing on AI’s impact on technology, business, and society.
Measuring AI’s success in business
When I asked how companies were measuring the success of their AI initiatives, the most common answer was “growth.” More specifically, high-leverage growth: businesses want to scale customer acquisition and revenue while keeping associated costs, like customer support, flat or close to flat.
AI is also being used to accelerate product development, enabling companies to implement more of their product roadmap at a faster pace.
This is one reason I believe that even if AI advancement were to plateau for the next decade, the economy would still see meaningful productivity growth as businesses continue to roll out AI across every department. AI adoption remains early among both large and small businesses, and as it increases, AI will provide a meaningful lift to corporate profit margins.
Generative AI and the future of media
AI-generated short-form video has captured incredible viewership on TikTok and Instagram within weeks of Google launching its Veo 3 video model. Creators using Veo 3 to make “vlogs” of Star Wars Stormtroopers are reaching hundreds of thousands of followers and millions of views.
Is that a fad driven by the novelty of new technology, or is it the future of short-form content creation? There was a lot of debate here. Some I spoke to argued that human authenticity is the key to art and media: creators will dominate entertainment as people like MrBeast are empowered with low-cost AI tools that let their videos feature special effects, animation, voiceover, and AI actors alongside human ones.
Others believe Stormtrooper vlogging is only the beginning. The TikTok of the future will not be a feed of user-generated content, but rather AI-generated content that is personalized, created on the fly, and dynamically optimized for maximum engagement.
How AI will transform user experiences in software
Will we continue interacting with apps through screens filled with buttons and input boxes? Or will AI fundamentally alter the way we interface with software?
Several people I spoke with believe AI will mark the death of software as we know it. In this future, people’s digital experiences will resemble the movie Her, in which the protagonist engages with his AI companion by voice. Forget needing an app store with thousands of applications to download. There will be one application, an AI assistant, that can handle tasks for you both at home and at work.
But what about when you need to visualize something on a screen, like a chart or an image? Ephemeral software interfaces will be generated and disposed of based on the user’s query. For example, a user could ask a future AI-powered financial management application to show next quarter’s financial projections in a chart alongside the last 8 quarters. When the user finished, the chart would simply be discarded rather than saved to some shared storage or file system. Because the underlying AI system holds the company’s financials, the user could ask for the same chart again whenever needed.
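To make the idea concrete, here is a toy sketch of a disposable chart interface in Python. The `fetch_quarterly_figures` helper and its numbers are hypothetical stand-ins for an AI system that already holds the company’s financials; the point is that the chart is rendered on demand and never written to disk.

```python
# Toy sketch of a "disposable" interface: the chart is rendered on demand
# and never persisted. fetch_quarterly_figures is a hypothetical stand-in
# for an AI system that already holds the company's financials.
import matplotlib.pyplot as plt

def fetch_quarterly_figures():
    # Hypothetical data: the last 8 quarters of revenue ($M) plus a
    # projection for next quarter, as if returned by an AI query.
    history = [3.1, 3.4, 3.3, 3.8, 4.0, 4.2, 4.5, 4.7]
    projection = 5.0
    return history, projection

def show_ephemeral_chart():
    history, projection = fetch_quarterly_figures()
    quarters = [f"Q{i}" for i in range(1, len(history) + 2)]
    plt.bar(quarters, history + [projection])
    plt.title("Revenue: last 8 quarters plus next-quarter projection ($M)")
    plt.show()        # rendered for the moment the user needs it
    plt.close("all")  # then disposed of; nothing is saved to disk

show_ephemeral_chart()
```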
There are a lot of good counterarguments to these ideas (user preferences, inertia, hardware development, the practicality of disposable software). Yet even the skeptics agreed that over a long enough time horizon, software is more likely to emphasize voice and chat, creating short-lived user experiences tailored to users’ specific needs in the moment.
The open question is how soon this shift will occur. I think we’ll first need to reach artificial superintelligence.
AGI vs. ASI: defining superintelligence and its timeline
There is an increasingly common view in some AI circles, even if it’s not the dominant narrative that some heads of AI labs are driving in the media: current transformer-based model architectures are unlikely to produce the types of models needed to reach superintelligence.
The term artificial superintelligence (ASI) is increasingly being used in place of AGI (artificial general intelligence). One AI researcher I spoke with defined ASI as AI models that can generate new insight, such as new theories of physics, discoveries in biology, or new AI models themselves.
By contrast, AGI is increasingly viewed as a marketing label and a moving goalpost, as the benchmarks for “intelligence” have shifted with each new model breakthrough. In some ways, we may already have artificial general intelligence in today’s models. Perhaps a more helpful marker, rather than comparing models to human intelligence, would be output-driven measures such as GDP growth or productivity gains.
As for timelines to superintelligence, estimates varied from 3 years to 5 years to unknown, but no one I spoke with questioned whether humanity would eventually attain artificial superintelligence. Only when.
Prompting vs. fine-tuning
With model costs falling and AI infrastructure and middleware becoming more optimized and widely available, companies are changing how they customize AI models for their specific business use cases.
One pattern I found insightful: start with high-quality prompting on a generalist model like Gemini Flash (cheap and fast) to reach 80-90% task accuracy. Then, depending on the nature of the task and the need to get closer to 100% accuracy, developers can fine-tune smaller open-source models (e.g., Llama) on their own data.
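As a minimal sketch of that first step, here is what careful prompting on a cheap generalist model can look like. This uses Google’s google-generativeai Python SDK; the classification task, prompt, and accuracy note are illustrative assumptions, not anyone’s production setup.

```python
# Minimal sketch: high-quality prompting on a cheap, fast generalist model.
# The support-ticket classification task and prompt are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes a key from your environment
model = genai.GenerativeModel("gemini-1.5-flash")

PROMPT_TEMPLATE = """You are a support-ticket classifier.
Label the ticket with exactly one of: billing, bug, feature_request, other.
Respond with the label only.

Ticket: {ticket}
Label:"""

def classify(ticket: str) -> str:
    response = model.generate_content(PROMPT_TEMPLATE.format(ticket=ticket))
    return response.text.strip()

print(classify("I was charged twice for my subscription this month."))
# A tightly structured prompt like this is often enough to land in the
# 80-90% accuracy range on a well-scoped task.
```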
Developers can also use prompting on a large model to capture a representation of the desired output, then train a small model to replicate that behavior for use cases where latency needs to be lower.
In other words, prompt first, fine-tune later.
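A rough sketch of the second half of that pattern, under the same illustrative setup: treat the prompted model as a teacher, log its outputs, and turn them into a fine-tuning dataset for a small open-source model. It reuses the hypothetical `classify` helper from the sketch above, and the record schema is an assumption; the exact format depends on your training framework.

```python
# Sketch of "prompt first, fine-tune later": distill the prompted generalist
# model's outputs into fine-tuning data for a smaller model.
# Reuses the classify() helper from the previous sketch.
import json

def build_finetune_dataset(tickets: list[str], out_path: str = "distill.jsonl") -> None:
    with open(out_path, "w") as f:
        for ticket in tickets:
            label = classify(ticket)  # teacher output from the prompted model
            # Chat-style record; the exact schema depends on the framework
            # you fine-tune with (this shape is a common convention).
            record = {
                "messages": [
                    {"role": "user", "content": f"Classify this ticket: {ticket}"},
                    {"role": "assistant", "content": label},
                ]
            }
            f.write(json.dumps(record) + "\n")

# A small model (e.g., a Llama variant) fine-tuned on distill.jsonl can then
# serve the same task at lower latency and cost than the prompted generalist.
```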
AI’s future is here, what’s next?
The themes from these conversations make clear that AI is already reshaping our world, often in ways more subtle than we realize. These shifts will create amazing opportunities to build and invest in new businesses. If you have any thoughts on these topics, I’d love to hear from you.