This morning, I opened a PDF in Microsoft Edge, to find a new addition to the interface of Microsoft’s browser-based PDF reader. In the top of the reader, smack in the middle, the words ‘Ask Copilot’. Tapping there brought up the interface for Bing Chat, Microsoft’s AI chatbot. I immediately asked Bing Chat to summarise the scientific paper that I’d opened in the PDF viewer. After a few moments to digest the PDF, it provided a detailed (if partial) summary. Nothing about any of that was in any way out of step with the direction of Microsoft’s operating system, browser or strategic intent: all of them are fully directed toward one goal: Copilot ‘all the things’.
In practice this means that nearly every Microsoft touchpoint - not just Windows 11, but the biggest of their cash cows, Microsoft Office 365, Microsoft Edge, and, in all likelihood, Windows 10, will now have deep connections into Microsoft’s AI chatbot. Given the billion-and-a-half users across all those touchpoints, it’s likely that they’re being used regularly in all sorts of situations: to summarise documents, develop planning and strategy, provide verbiage for marketing plans, and so forth. Crikey recently reported Australia’s realtors have been employing ChatGPT to manage customer relationships. Meanwhile, Salesforce rapidly integrates the same capabilities into its own product suite, poised to deliver an AI chatbot to a global audience of salespeople.
We have no real data on how AI chatbots are being used, by whom, and in what circumstances, in service of which tasks. All reports remain anecdotal. Many touchpoints exist - and are multiplying. They are certainly being used by many people every single day. But no one is talking about where and how and when this is happening.
This silence around what I’m calling ‘The Quiet Question’ may be a function of a lack of clear policy: employees using AI chatbots may not know whether they are permitted to use them, and almost certainly have no guidance around procedures or best practices in their use. This means that a significant amount of clandestine or ‘grey’ AI usage has likely begun to permeate organisations of every size. This would not present a problem if AI chatbots were completely safe and completely accurate. Unfortunately, they are neither.
This morning, a post on HN from Embrace The Red details a ‘prompt injection’ attack that could be made via a ‘poisoned’ email delivered to a GMail account that had been connected to and was being monitored by Google Bard. The prompt injection attack itself had been described by researchers back in May of 2023; this post shows a step-by-step implementation. Threats posed by widespread adoption of AI chatbots are not merely theoretical, or ‘over there’. Now that Google Bard reaches at least one billion users, the AI chatbot threat surface looks both very large and - for an attacker - very tempting.
The accuracy of AI chatbots remains problematic, a feature of their design rather than a bug in their operation. This mean they will continue to generate confabulations - completions that feel truthful (‘truthiness’), without actually being factually correct. Now that all the ‘Big Three’ AI chatbots have access to the Web, they can check their own completions against Web resources. However, the Web itself hosts a growing supply of disinformation and misinformation. The Web can also be used for hidden attacks by prompt injection, just like email. This means that relying on the Web as a source of authority will not ensure that AI chatbots will produce truthful completions.
We have growing use of AI chatbots across organisations; a growing threat surface as a result of that use; and rising levels of misinformation as contrafactual completions find their way into organisational outputs. While it would be nearly impossible to call time on the use of AI chatbots - for one thing, we have no idea who needs to hear that stop order - we can at least begin to survey our organisations, developing a sense for where, how and why AI chatbots are being used as part of normal business operations. Doing this will not fix all the problems presented by the rapid adoption of a technology as unreliable as it is powerful, but it will at least give organisations some sense of the depth and urgency of the problems they face.
We have to start somewhere.
It begins with quiet conversations - casually, and broadly across the entire organisation - about the use of AI chatbots. Staff will share what they’ve learned only when they feel they can do so safely. Coming in guns blazing, making policy mandates before a quiet survey of on-the-ground use will only drive that activity underground, where it will remain until something goes very wrong - and it suddenly becomes an organisational issue. “Softly softly” is the best approach. Ask the quiet questions - and learn where and how and why AI is already being used within your organisation.
If you want help asking the quiet questions - to implement AI safely and wisely in your own organisation - please get in touch!