ONE: The Mother of All Betas
Microsoft releases Windows Copilot to half a billion users - was that wise?
In his opening remarks at Microsoft's 21 September 2023 public launch of Windows Copilot - a deep integration of an AI chatbot into its Windows 11 operating system - CEO Satya Nadella situated the day's events in the broader history of computing:
You know, I recently had a chance to reread a book, Bootstrapping, which I think came out perhaps in the early 2000s. It's a story of Doug Engelbart and his life, his vision. You know, obviously, everyone knows about his contributions to some of the seminal things that we all enjoy in personal computing, whether it's the mouse or the graphical user interface. But the thing that he writes about, you know, how he chose to sort of get into the computing industry and what motivated his life's work, deeply resonated with me. He felt that both the complexity and the urgency of humanity's problems, both big and small, are growing at such a fast rate that our ability to cope with them was being challenged. And he envisioned, therefore, a new system of computer and human collaboration that could augment human capabilities and give each of us more agency in tackling these challenges. And that's always been the ultimate dream of this entire computing revolution. And now, as we stand here today, we have a couple of new capabilities.
Nadella draws a straight line between Engelbart's multi-decadal research into systems that could effectively augment human intelligence - a defence against a world complexifying so rapidly that it threatened to overwhelm any human capacity to manage it - and Microsoft's own hurried effort to kneecap Google by any means necessary, integrating AI chatbots into nearly every product the company sells.
The irony of multiplying tool complexity by pairing generative AI's strengths with its ambiguities and errors may have been lost on Nadella, but it would likely have made the famously cranky Engelbart crankier still. Across his long and profoundly influential career in computing - with 1968's 'Mother of All Demos' at its centre - Engelbart created a series of tools that increased human agency. Whether generative AI tools do this today - or ever will - remains a hotly debated topic, even as the question of whether these tools will be released to the public seems a settled point. That has already happened.
In fewer than 300 days since the launch of ChatGPT, a tectonic shift has reordered the technology industry, as we pivot away from interactions grounded in decades of research in User Experience Design - research that has both made complex processes easily apprehensible and generated a range of 'dark patterns' that weaponise the psychology of perception. Instead, we are witnessing a headlong rush toward deep integration of a still poorly understood 'transformer' model of computing, one that generates 'completions' to user 'prompts' as its primary interface.
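To make that shift concrete, here is a minimal sketch of the prompt-and-completion interaction, assuming the openai Python package (v1.x) and an API key in the environment; the model name and the prompt itself are purely illustrative.

```python
# A minimal sketch of the prompt/completion interface, assuming the `openai`
# Python package (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The entire 'interface' is a prompt...
prompt = "Summarise the significance of Doug Engelbart's 1968 demo in two sentences."

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

# ...and a completion generated by the transformer.
print(response.choices[0].message.content)
```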
We have embedded a technology throughout our entire technological apparatus before we really understand it. Even the creators of GPT-4 don't understand why it possesses most of its emergent properties, such as 'chain-of-thought' prompting, which allows an AI chatbot to be 'taught' by example. Nor do we understand how to avoid 'prompt subversion' attacks that weaponise the deeply reflexive qualities of language to defeat any of an AI chatbot's 'guardrails', or how to limit a transformer's inherent ability to generate completions that have no corresponding basis in its training data - all-too-anthropomorphically framed as "hallucinations".
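What 'teaching by example' looks like in practice is simply a prompt that includes a worked example, reasoning steps and all, before posing a new question. The exemplar below is invented purely for illustration, and the string could be sent as the user message in the sketch above.

```python
# A few-shot, chain-of-thought style prompt: the model is 'taught' by example.
# The exemplar below is invented purely for illustration.
chain_of_thought_prompt = """\
Q: A train leaves at 9:40 and arrives at 11:05. How long is the journey?
A: From 9:40 to 10:40 is 60 minutes, and from 10:40 to 11:05 is 25 minutes.
   60 + 25 = 85 minutes. The journey takes 1 hour 25 minutes.

Q: A meeting starts at 14:50 and ends at 16:20. How long is the meeting?
A:"""

# Sent as the user message in the previous sketch, the completion will tend to
# imitate the step-by-step reasoning shown in the exemplar, behaviour that
# emerged from training rather than being explicitly programmed.
```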
In short, there’s a lot we don’t know.
While AI chatbots aren't prone to 'crashing' - previously the most influential metric used by technology firms to determine a product's suitability for release - nothing in an AI chatbot is truly reliable. A given prompt may return different completions; there appears to be (again) a poorly understood sensitive dependence on initial conditions within the transformer, which means that AI chatbots can't even be broken consistently, making 'debugging' - whatever that now means - fiendishly difficult.
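A minimal sketch of why this makes debugging so slippery, reusing the assumptions above: send the identical prompt several times and compare the completions, which will typically differ whenever the sampling temperature is above zero.

```python
# Send the identical prompt several times; with non-zero sampling temperature
# the completions will typically differ from run to run.
from openai import OpenAI

client = OpenAI()
prompt = "Name one risk of integrating a chatbot into an operating system."

completions = set()
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4",    # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # lowering to 0 reduces, but does not eliminate, variation
    )
    completions.add(response.choices[0].message.content)

print(f"{len(completions)} distinct completions from 5 identical prompts")
```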
Despite all of that - or perhaps in defiance of it - Microsoft has begun the rollout of Windows Copilot to all of the nearly half a billion Windows PCs. Install the update, reboot, and suddenly an AI chatbot is part of the operating system. Enterprise users will have centrally coordinated IT policies that will likely keep Windows Copilot off organisational PCs until the appropriate policies (when to use it), procedures (how to use it) and protocols (what to do when things go wrong, as they will) have been developed. All of that will take some time.
On the other hand, most home users of Windows 11 will not even know how to remove Windows Copilot from the taskbar - so it will be seen and used. Inevitably, a lot of information will be shared with the AI chatbot. As users explore the capabilities of Copilot, Microsoft will be learning a lot about them - information those users never had any reason to share. This may be the most significant strategic value of Copilot for Microsoft: it allows the company to collect far more highly relevant data on its users than Google could ever glean from a search query. What happens to that data, innocently entered in search of meaningful help, remains an intensely problematic point. Microsoft may end up with a dragon's hoard of user data, but will that help the company - or simply result in another massive data breach?
Too many agendas are served by the rapid rollout of AI chatbots; nothing is going to slow this runaway train. Overnight, Meta announced AI chatbot integrations with Facebook Messenger, Instagram and WhatsApp. Before the end of this year, that firm's three billion monthly users will all have access to 'weapons-grade' artificial intelligence. Is this wise? It doesn't matter. It's happening. In this beta test, we're all automatically enrolled.