As a one-time student of history, I have always found the expansion and bursting of market bubbles a source of interest and perspective. This is not the place for a robust analysis of whether the current hype around AI, triggered by the arrival of Large Language Models, will come to be viewed in time as a bubble. Although there is a strong whiff of speculative mania akin to tulips in 1636, or arguably peak SaaS valuations in the recent Covid window, stronger parallels exist with the infrastructure booms around railways in 1845, the advent of the PC in the 1980s, the fibre-optic boom of the late 1990s, and the introduction of the smartphone in 2007. There is a fundamental technology innovation underway that will drive long-term structural change … potentially at the risk of short-term capital destruction.
As technology investors, our job at Farview is to have a point of view on how to realise the opportunity of AI, playing to the fundamentals rather than the hype cycle. To help us draw the best conclusions, we enlisted the perspectives of those leading the push towards AI within our portfolio companies … bringing together executives and practitioners for a day of presentations, demos and debate. Ringing in our ears as we did so were the words of Satya Nadella: that all software applications are just CRUD with business logic on top, and will be replaced by an entirely new agentic interface … assuming the status quo has rarely been a winning approach in technology, and never more so than at this moment in time.
We are very grateful for the time and contributions on the day from John Williams of Amplience, showcasing the opportunity to embed generative capabilities into the workflows of modern content management systems; Tim Moxon of Unily, demonstrating the ability to use generative coding tools to reimagine large-scale enterprise products on lightning-fast timescales; and Boaz Zehavi & Maldy Agavia of Grantify, outlining an entirely new user journey around grant discovery and delivery through intuitive tooling. In addition, we tested the scope for new operating models at both ends of the scale spectrum, with Vijay Gupta setting out the potential for bootstrapping from concept to fully formed working product with minimal resource and maximum velocity, and Olga Pirog articulating what it takes to ensure AI adoption within large-scale and regulated companies. We closed out the day with a great panel discussion chaired by Rav Dhaliwal, with further contributions from functional experts such as Nick Haye of Finomatic, before our teams spent time in debate and discussion with their functional peers, sharing AI experiences and perspectives.
Apart from thoroughly enjoying ourselves, and the satisfaction of seeing our companies interact over coffee and end-of-day drinks, what did we take away?
- Current generative models perform much better with a narrow focus and rich context. Great outcomes come from chaining very discrete tasks together in logical flows, rather than long-form open-ended queries where the engines tend to over-rotate and sometimes confuse themselves
- Within enterprise applications, the pace of upgrades across the growing universe of LLMs (and Small Language Models, plus the broader non-language AI universe) means that the flexibility to swap models in and out is key for customers, as is aligning specific models with preferred tasks (Claude and Cursor were popular in the room for coding, Nano Banana for image generation, and Playwright + MCP for code QA)
- The non-deterministic nature of current LLMs means some form of human-in-the-loop or QA/QC control point is key … notwithstanding that humans are also non-deterministic and have higher failure rates!
- Today, the productivity lift is most significant for those who already have profound depth of experience and skills. For those with lower skills, there is a risk of creating unsolvable problems at record pace. That dichotomy is part of why individual productivity gains of 10x have so far translated into organisational uplifts of only 10%
- Time-to-prototype is perhaps the most significant advantage when innovating with a clean sheet of paper. The ideation phase is now structurally short, completely changing the classic validation cycles and meaning that feature-driven differentiation has an almost zero half-life in software markets
- Organisational change management is currently the most profound block to AI adoption, and a key driver of the 95% failure rate reported for AI experiments. The key to solving the change management problem is to target those tasks that people (especially sceptics) find least interesting – very often these are exactly the tasks AI excels at, and automating them turns the beneficiaries into the strongest internal advocates
- AI deployment at scale often accelerates and accentuates traditional architectural challenges such as identity management, security, data governance, observability and documentation, further stressing the software development lifecycle
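To make the first takeaway concrete, here is a minimal sketch of "discrete tasks chained in a logical flow" as opposed to one open-ended query. The `call_model` function is a hypothetical stand-in, not any tool demonstrated on the day: in practice it would invoke an LLM API, while here it simply tags its input so the pipeline structure can be run end-to-end.

```python
def call_model(prompt: str, text: str) -> str:
    """Hypothetical stand-in for an LLM call: narrow prompt, rich context."""
    return f"[{prompt}] {text}"


def pipeline(document: str) -> str:
    """Chain three discrete, narrowly scoped tasks rather than asking one
    open-ended question of the whole document."""
    # Step 1: a narrow extraction task, with the full document as context.
    facts = call_model("Extract the key facts", document)
    # Step 2: a narrow summarisation task, over the extracted facts only.
    summary = call_model("Summarise these facts in one sentence", facts)
    # Step 3: a narrow rewriting task, over the summary only.
    return call_model("Rewrite for an executive audience", summary)


result = pipeline("Q3 revenue grew 12% while churn fell to 4%.")
print(result)
```

The point is the shape, not the stub: each step gives the model one focused job and a controlled context, and each intermediate output is a natural place to insert the QA/QC control points discussed below.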
The above are clearly very practical points, and relate only to the current early phases of AI tooling and adoption. One question that remains wide open from our perspective is the scope for AI to transform the interface through which applications are consumed and processes delivered. Classic user interfaces designed to support human-oriented structured workflows are starting to be re-imagined as ones geared around machine-to-machine flows. Which end-point designs will come to define future success is not something we want to speculate on right now, but it is certainly something the best minds in our portfolio businesses are very engaged with.
As investors, our view remains that companies sitting on some combination of private data assets, complex workflow processes and deep subject-matter expertise, with an existing base of customers open to change and innovation, occupy the structurally advantaged position from which to play out the opportunities that AI offers.
Our clear objective remains to find companies that enjoy those characteristics, and to keep learning and experimenting as they push into the new AI frontier, focused on operational and customer value. It is going to be a stimulating and doubtless twisting journey …
NB. Written by hand, in analogue, with all hallucinations and grammatical errors the responsibility of the author.
