This article is adapted from my Q2 2024 GenAI for Design quarterly newsletter, which went out to subscribers at the start of July, and which you can sign up for below.
AI will…
“AI will…” is not a good way to start any sentence that predicts the future of AI. Even to talk about AI + the individual is to miss most of the wood for the trees: there is also AI + society and AI + organisations.
I am not sure which we should be paying closer attention to.
Individuals using genAI at scale will change how they perceive and interact with the world, and those changes will have cumulative societal effects. This is true of any tool. If you carry a gun or a camera, if you drive a car or go on foot, you see the world and interact with it differently.
In the case of organisations, genAI will help them achieve their goals more effectively. Organisations sit within a system of incentives that prioritise profit-making and the accumulation of power through the externalisation of harms such as societal and environmental exploitation. The rationale is that the perceived benefits of technological advancement are worth an incremental level of externalised harm, and that where harms later come to be seen as unacceptable, regulation will step in and restore order. This is a system that has walked us into a lot of problems. The perceived benefits are, in many cases, illusory, and often they are remedies to problems that the system has itself created.
As genAI becomes ever more embedded in the tools all knowledge workers use day-to-day, it will become impossible to distinguish genAI-enhanced work from anything else done on a computer. At that point it will stop being interesting to ask whether architects are using it, just as it would be uninteresting to ask whether they use spell check or Google search.
Likening intensity of AI use to mechanical power
For this reason, I want to propose a hierarchy that captures the intensity of genAI use. Perhaps we can liken this to mechanical power: just as power is the product of force and speed, we can model the intensity of genAI use as the product of the breadth and the depth of the tasks it takes on.
For instance, genAI could perform a large number of low-cognitive-effort tasks, such as automating simple routines, or a smaller number of more sophisticated tasks. Perhaps this could be measured as work-minutes saved, or as the amount of cognitive effort outsourced. Of course, I’m sure that both outsourcing cognitive effort and substituting time-consuming routine task completion with time-consuming routine task checking (which is needed when the thing doing the routine tasks is probabilistic in nature) entail substantial rebound and spillover effects.
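To make the analogy concrete, here is a minimal sketch of how such a breadth-times-depth measure might be computed. Everything in it is hypothetical: the Task structure, the minutes_saved and checking_minutes figures, and the intensity function are illustrative stand-ins, not a proposed metric.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A unit of work handed to genAI (hypothetical, for illustration)."""
    minutes_saved: float      # depth: cognitive effort outsourced, in work-minutes
    checking_minutes: float   # overhead of verifying probabilistic output

def intensity(tasks: list[Task]) -> float:
    """Breadth x depth, by analogy with power = force x speed.

    Breadth is the number of tasks; depth is the mean net work-minutes
    saved per task once checking overhead is subtracted.
    """
    if not tasks:
        return 0.0
    net_per_task = [t.minutes_saved - t.checking_minutes for t in tasks]
    breadth = len(tasks)
    depth = sum(net_per_task) / breadth  # mean net saving per task
    return breadth * depth               # equals total net minutes saved

# Many shallow tasks vs. a few deep ones can register the same intensity.
shallow = [Task(minutes_saved=5, checking_minutes=2)] * 20   # 20 tasks x 3 min
deep = [Task(minutes_saved=45, checking_minutes=15)] * 2     # 2 tasks x 30 min
print(intensity(shallow), intensity(deep))  # 60.0 60.0
```

The point of the sketch is only that breadth and depth trade off: many shallow automations and a few deep ones can yield the same intensity, while checking overhead eats directly into depth.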
Wellbeing
Equally, to focus on this alone would be to miss something of key importance. We must not take a narrow appraisal of the possible benefits of genAI in our work, but a broad one that considers the wellbeing of workers, the enjoyment of tasks, and a sense of meaning. How does genAI enable people to spend more time together? How does it enable people to spend less time sitting at their computers? If the answer is that it does the opposite, just as ever more efficient email clients, word processors, website builders and database tools have, that alone should encourage us to view it with scepticism. “Go on then,” should be our opening gambit. “Impress us!”
We should equally be unafraid of stating our belief that this technology (or platform, or process, or whatever we view it as) isn’t ready yet and needs longer in the oven. It may be no bad thing to say “Perhaps in five years, but not yet.” We are not big tech companies. We haven’t invested billions. We won’t take a massive bath if this turns out not to provide return on investment yet, or ever.
An important aspect of that big-tech-fuelled hype – which has already swerved into parody (AI-powered toothbrush, anyone?) – is that people are experimenting. It’s a time for trying things out. Rates of tool usage will inevitably outstrip their usefulness for a period, and the market correction won’t happen until everyone who didn’t find the tools useful has given up on them. The flow of capital is the issue here; it has little to do with technological value and much to do with the fact that in certain circles (again, see toothbrush) slapping an “AI” label onto your business makes the venture capital happen. This will pass, and the people who say “AI” in the way they used to say “NFT”, “Web3”, “crypto” or “metaverse”, more or less interchangeably, will find something new with which to sell their self-help courses.
May I interest you in a pyramid scheme?
Echoing Daniel Schmachtenberger, I would therefore highlight the need for a shift in the way society identifies goals and defines progress. It is this framework that rewards the pursuit of profit, which rests on resource exploitation, which in turn rests on a view of the world as a set of boxes of resources to be opened either quickly or slowly, but indefinitely and in ever-increasing numbers.
I’m afraid I don’t know how to suggest a way forward, other than the usual injunction to be reflective and to treat things that have become normalised as though they were unfamiliar.