Truth, Order and AI


30 January 2025


After 30 years of not really understanding what computers are doing, we are now at risk of also not being able to control what they are doing.


It was 1986. I was 16. The home computing revolution was in its infancy. My friends all had Amstrads or Commodore 64s, screeching away for frustrating minutes with epileptic-fit-inducing graphics, trying to load games from cassette tapes into 32 or 64 KB memories.

I had a Dragon 64. I played games with it, but I also wrote games. I wrote the software to turn it into a word processor. I wrote first in the BASIC language that was preloaded, then in assembler, in hexadecimal, and finally in binary. I understood how that computer worked.

A few years later, my first job was in a software company. In the short period I was there, I was asked to destruction-test a new software product. It crashed almost constantly. And I learned then that almost no-one in that company actually understood the code they were writing. They were already well into the world of taking existing building blocks and bolting them together. Properly understanding what was going on inside that code was already beyond any single human's comprehension. This was in 1992.

Fast forward to 2025 and we are in the foothills of the next great computing revolution, AI. Between Christmas and New Year I read Yuval Noah Harari's latest book Nexus, which tells the story of the critical role of information through history, and then reflects on what we are heading into with AI. He is worried. I think we should all be. Purposefully worried - alive to the risks (as well as the opportunities) and determined to do something about it. Because after more than 30 years of not really understanding what computers are doing, we are now at risk of also not being able to control what they are doing.

Yuval comprehensively shows that having more information than ever before, and more computing power to analyse it, isn't making us smarter in our actions as humans, nor more benign. So why should we assume AI computers will behave any better? Information provides both a route to truth, and the power to impose order, which we need if society is not to be anarchic. But balancing truth and order is not easy. For me, Yuval's most important message is that as a network becomes more powerful, its self-correcting mechanisms (the ability for truth to be divined and to influence the wielding of power) become more vital. Are we paying enough attention to this with AI? Or are we blithely courting the risk, as Yuval contends, that 'People in all countries and walks of life – including dictators – might find themselves subservient to an alien intelligence that can monitor everything we do while we have little idea what it is doing'.
