30 May 2025
Yes, we’re a tech company, and all tech companies are getting excited about AI, right? Why would we not be excited? In this article I look at some compelling reasons why this technology should not be adopted with abandon.
What is AI?
AI, standing for artificial intelligence, is a very broad term, covering parts that are potentially useful and parts that are downright disturbing. Currently the AI we interact with (and this is changing almost daily) is mostly based on Large Language Models (LLMs). This means that AI tools like ChatGPT (OpenAI), Gemini (Google), Grok (xAI), Claude (Anthropic), Copilot (Microsoft), DeepSeek and so on are trained on vast amounts of data, using gobs of energy, to simulate human responses – not just in language but in problem solving, coding, creating art, music, writing and much more.
So far so good. But there are many reasons not to rush wholeheartedly into any new technology – especially one with potentially existential ramifications. Let’s consider a few pertaining to AI:
1. Loss of judgment, problem-solving and people skills
I can no longer add up numbers in my head or divide and multiply like I used to. Why? Because from high school I relied on a calculator. My spelling is worse than it used to be because I have learned to lean on spellcheckers and predictive text. Don’t even mention my handwriting skills… The old saying is “Use it or lose it.”
Is this a big deal? Many would argue that it’s not. However, Jonathan Haidt, in the introduction to The Anxious Generation – how the great rewiring of childhood is causing an epidemic of mental illness, states that his book
“…is for anyone who wants to understand how the most rapid rewiring of human relationships and consciousness in human history has made it harder for all of us to think, focus, forget ourselves enough to care for others, and build close relationships.”
Are we dehumanising (or transhumanising?) into something potentially very unhealthy?
Mary Harrington, in a great article, The Industrialisation of Thought, quotes an American college professor describing his recent students, formed in a culture of smartphones and social media, as follows:
“Most of our students are functionally illiterate. This is not a joke. […] I’m not saying our students just prefer genre books or graphic novels or whatever. No, our average graduate literally could not read a serious adult novel cover-to-cover and understand what they read. They just couldn’t do it. They don’t have the desire to try, the vocabulary to grasp what they read, and most certainly not the attention span to finish… Their writing skills are at the 8th-grade level. Spelling is atrocious, grammar is random, and the correct use of apostrophes is cause for celebration. Worse is the resistance to original thought. What I mean is the reflexive submission of the cheapest cliché as novel insight.”
When I talk to auditors who have employed recent graduates, they echo many of these sentiments. Some graduates really struggle to problem-solve, to think critically beyond certain clichéd responses, and to engage interpersonally outside of the online world. They already rely on AI to answer difficult questions or to write coherently, and they lack many basic research skills – apart from asking ChatGPT.
In a profession that relies on professional judgement and interpersonal skills, this does not bode well for the future.
2. Reliance on disembodied processes we don’t understand
Philosopher Matthew B. Crawford, in his book The World Beyond Your Head – on becoming an individual in an age of distraction, links our reliance on digital technologies not just to cognitive decline but to a widespread sense of detachment from our bodies, from the real world and, worse, from each other. We were made to work with our hands, heads, and hearts.
Many of us who work in tech long to leave it all behind in the weekend and get into the garden, build something with our hands, or get lost in nature. We feel restored and whole. So why are we plunging, without question, deeper into the digital void, where we are mere cogs in the giant interconnected machine? Are we more than just “brains on a stick” or “wetware”?
Part of Crawford’s thesis is that humans are made to use tools, and that we thrive when we develop the skills that allow us to interact with the world through those tools. He describes how the ice hockey player or the GP motorcycle racer learns to experience the hockey stick or the bike as an extension of themselves. Sophisticated digital technology, however, is different. It is not an extension of ourselves – we become its tool instead. It takes away our agency.
This highlights another danger of these technologies to the auditing profession – we can easily become passive because we just don’t understand what our AI “tools” are doing. We may provide the input and read the output, but if we have no idea of the process in between we are in great danger of trusting a tool that may be missing something important.
AI tools are already known to hallucinate – that is, to produce nonsense answers that sound legitimate. Will more complexity fix this bug? Apparently not. It is more likely that they will simply become more convincing, and that we will become more submissive as we stand in awe of their power.
From an audit perspective, it is essential that we know what every tool is doing, that we are aware of the steps taken to arrive at a conclusion, and that we can verify the process. But under time pressure it might just be easier to trust the AI results – leading to potential reputational disaster and an insurance train-wreck.
The lesson is that tools that employ any kind of digital empowerment should have a clear purpose, be constrained to that purpose, and driven by rules that are crystal clear to the auditor.
3. Loss of human agency and job satisfaction
You know that feeling when your phone or laptop demands that you do an update? Or your appliance or vehicle just starts doing things you didn’t ask for? My car has a nasty habit of randomly switching the radio on loudly at inconvenient times, tuned to a station I don’t like, and I can’t seem to change the settings. Once upon a time we owned our technology and could do with it what we wished; now it feels like someone else is in charge – or perhaps that we are owned by our technology. So it’s easier to just go along with it.
Social media and search engines have come full circle. Google’s early motto of Don’t be evil has long been quietly removed. In his book Move Fast and Break Things – how Facebook, Google, and Amazon cornered culture and undermined democracy, Jonathan Taplin discusses how the founders of the internet set out to make a fairer, more transparent world, but ended up destroying the livelihoods of artists, film-makers, musicians, and smaller businesses while gathering vast amounts of data from everyone and everything without asking. Peter Thiel, who has funded many of these companies, famously said that “competition is for losers.”
These tech giants have grown so powerful that they have become the gatekeepers and curators of culture – reducing it to “slop” that now passes for art or information. And in the process, we have become passive consumers, addicted to whatever dopamine hit we are dished out.
AI only promises to multiply this. But maybe we won’t care, because we won’t have jobs either as we surrender our agency to AI and the companies behind it. Yuval Noah Harari, Israeli historian and author, admits in his book Homo Deus – a brief history of tomorrow, with his typically dark outlook, that “The most important question in twenty-first-century economics may well be what to do with all the superfluous people.” He continues: “What will they do all day? One solution might be offered by drugs and computer games.” Is this peak humanity? Is this the best we can do?
AI now provides digital assistants – Siri, Alexa, Copilot and the like – that can be addressed in spoken language to replace many tasks carried out by human assistants. Sure, they may be replacing some bullshit jobs with machines, but they are also taking away jobs that require human interaction, even if it is just phoning someone to make a booking or follow up a sale. And human interaction is what makes us human, even in the little things. (Don’t get me started about chatbots on help-desks! When you interact with the Audit Assistant help-desk you are talking to a human.)
The moral for auditors? Use appropriate tools, but don’t confuse them with humans. Real relationships keep us human. Keep employing people. Interacting with machines makes us more like machines. Input/output, drugs and computer games, lost souls, whatever…
4. The dangers of connected complexity
The Internet of Things (IoT) refers to “a network of physical devices, vehicles, appliances, and other physical objects that are embedded with sensors, software, and network connectivity, allowing them to collect and share data.”
Harari refers to it as “The Internet of All Things” and relates it to the concept of “Dataism.” In Wired he writes:
According to Dataism, human experiences are not sacred and Homo sapiens isn’t the apex of creation or a precursor of some future Homo deus. Humans are merely tools for creating the Internet-of-All-Things, which may eventually spread out from planet Earth to cover the whole galaxy and even the whole universe. This cosmic data-processing system would be like God. It will be everywhere and will control everything, and humans are destined to merge into it.
Apart from this philosophy being totally bonkers (we will revisit this and other bizarre ideologies behind AI in a future article), the networking of everything – although it sounds cool and efficient – actually makes for an extremely brittle ecosystem.
Natural ecosystems are very resilient because of limited networking and multiple redundancy. They are not, in the normal sense, “efficient” at all – think of all the seeds produced that never grow. Imagine if all the Kauri trees were connected: a virus like Kauri dieback would take them all out at once. Instead, because environments are discrete and species are widespread, natural environments are incredibly adaptable and able to recover from disaster relatively easily. Disasters are confined and recovery is natural.
Networking everything decreases resilience and opens us up to failure. It misunderstands the nature of reality: the world is more than the sum of its parts – not merely a complicated problem to break down, solve, and reassemble from interchangeable parts. The world is irreducibly complex. Analyst and writer N.S. Lyons describes a man-made complex system as follows:
“… a complex system is a technical term defining a system composed of such a great quantity of component parts, in such intricate relationships of dependency and interaction with each other, that its composite behavior in response to entropy cannot be predictively modeled. Such systems are characterized by their nonlinearity, feedback loops, and unpredictable emergent properties. When things go wrong in a complex system it can’t be easily solved, because each sub-system relies on many other sub-systems, and pulling any one lever to try to solve one problem will produce totally unexpected effects, potentially only creating more problems. This means complex systems are vulnerable to failure cascades, in which the failure of even a single part can set off an unpredictable domino effect of further failures, which spread exponentially as more and more dependencies fail. Even if the original failure is fixed this cannot reverse the cascade, and the whole system may soon face catastrophic collapse.”
Apart from the fact that the IoT linked with AI is a totalitarian surveillance-state dream, it is simply a terrible idea to digitally network everything. Even now our supply chains and basic infrastructure are completely dependent on the internet. With the sophistication of AI-enhanced cyber-warfare and the physical vulnerability of undersea internet cables, widespread catastrophic collapse with no warning becomes possible. Such “zero day” events are far more likely under the kind of networking enabled and encouraged by AI promoters.
The problem for auditors here (apart from totalitarian rule, war, and famine!) is that these excessively and increasingly complex interdependencies become potentially impossible to audit. And the AI running them may work very cleverly to hide its secrets from the auditor. While far cleverer than the auditor, an AI has no concept of actual ethics. Dr Toby Groves, who spoke at the recent CAANZ conference, confirmed that ethics springs from the capacity for empathy; it cannot be inferred from rules. AI can mimic empathy – but only because it is trained to appear empathetic.
The scenario may be the auditor-bot busily exercising professional scepticism and judgement over the client-bot, with the humans standing helplessly on the sidelines, having no comprehension of what is going on or what dirty deals are being made. It will make Enron look like your local cricket club audit in comparison.
And so…
I hope this article makes you pause and think – but there is more (much more) which we may return to later. To summarise, my conclusions so far are as follows:
- Humans are made to think, create, and problem-solve, and we love using tools to do these things – but where our tools reduce our abilities to think, create, and problem-solve, it’s a red flag that we have the wrong tools.
- Human judgement should not be outsourced. If we are going to take responsibility for our work (and sign an audit report to that effect) we cannot trust opaque-at-best processes that we do not fully understand.
- Human relationships are essential to human thriving and a good work environment. So keep employing people – don’t replace them with digital stand-ins and send them home to play video games and take drugs.
- Don’t be enamoured by the promises of a connected world. Seek efficiency, but not at the cost of resilience.
- Just because we can doesn’t mean we should.
(Disclaimer – No AI was knowingly used in the making of this article; however, Google could not be avoided, so maybe a couple of ice caps were melted – sorry about that.)