Everywhere you look there’s a feverish debate going on about the impact (good, bad and catastrophic) of AI. OpenAI’s launch of ChatGPT was heralded as the dawn of a new age; just what that age will look like is being hotly contested by some of the very people responsible for creating it.
For some it’s going to liberate us mere humans from all sorts of drudgery (sound familiar?) and allow each of us more time for… well, I am not sure. For others it’s an existential threat that requires action, now, in terms of regulation and oversight.
Some would suggest that the lack of regulation and oversight has already allowed the mega-corporations of Meta, Microsoft, Alphabet (Google), Apple and Amazon to operate in an extra-national parallel data universe.
And, parallel to this, we have the impact of the data revolution on stock markets around the world. Those of us of a certain age remember when the Internet really came of age in the 1990s, when the iPhone was launched in 2007, and then Industry 4.0, the smart revolution from 2011 onwards: the Internet of Things (IoT), robotics, virtual reality (VR) and artificial intelligence (AI).
Now we have the AI boom
I remember the dotcom boom and bust of the late 1990s and early 2000s, when the hype was around ‘eyeballs’ and market share, not revenue and certainly not profitability, until it was. Remember the rise and fall of OneTel, LookSmart and Spike Networks?
Any company remotely linked to AI is powering up stock markets around the world. Nvidia’s stock hit $US974 in early 2024, up from $US265 in March 2023. This is especially so for companies providing the infrastructure needed to support AI-powered applications, even before it’s clear just how useful generative AI will be for business, government, you and me. It might just be that generative AI is a useful addition to what most of us use every day across the myriad platforms we interact with. How ‘useful’ depends on what industry sector you’re working in and your role in it.
A generative AI use case
For example, I am a business historian. I research and write company histories. Thirty-five years ago I spent a large amount of time in libraries and company archives reading through printed documents. I still sit in company archives (yes, they still exist), but now I spend more time on Google searching for all manner of documents, publications, media releases, books and reports.
I engage with ChatGPT when I want to explore a different perspective on some area of research, given I am often writing about industry sectors I have no previous experience in. At the moment ChatGPT is a useful addition to the tools I currently use, NOT a substitute for them, and I certainly don’t use it to write for me.
I am also fortunate enough to work with many business leaders on their books, and there are two books I highly recommend to anyone with an interest in the different aspects of AI.
The first is Checkmate Humanity: The how and why of Responsible AI by Dr Catriona Wallace, Richard Vidgen and Sam Kirshner. In Checkmate Humanity, nine internationally renowned academics, researchers and an entrepreneur explore the AI terrain, outline the potential dangers of unconstrained AI development, consider how we might respond to the AI challenge, and propose guidelines for organisations and government.
As the blurb on the back notes:
‘AI is a technology and by itself can do nothing. It only achieves agency once it is adopted by organisations as part of their business and decision-making processes’.
To this we can, of course, add governments.
As the IAPP stated in a recent LinkedIn post (and I paraphrase):
AI can’t govern itself. It’s up to companies to govern AI.
The second is Tim Trumper’s recently released book, AI. Game On. How to decide who or what decides. This book tackles one of the questions raised in Checkmate Humanity: how can organisations, and those who lead them, decide how to deploy AI in their operations, and to what level and depth?
AI. Game On attacks the vexing strategic dilemma for directors and executives whose job it is to decide who, or what, decides. What decisions are delegated to AI, and how is this done? Here in Australia we are all aware of how disastrous this can be when thinking, deployment and oversight are inadequate. Of course I am referring to the Robodebt scandal.
AI ‘thinking’ fast not slow
With apologies to Daniel Kahneman for the riff on his best-selling book, Thinking, Fast and Slow.
Generative AI doesn’t think. The large language models that form the foundation of generative AI deal in probabilities, not the hard logic of traditional computing systems. This is their beauty, really. To most of us it appears to be an all-knowing, smart system, even though at times it delivers inaccurate information (very authoritatively) that bears little relation to reality. Part of this is because the base information it’s using has been scraped from the internet. Rubbish in, rubbish out. Bias in, bias out. Lies and mistruths in, more lies out.
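To make ‘dealing in probabilities’ a little more concrete, here is a deliberately toy sketch in Python. The context, vocabulary and probabilities are invented purely for illustration and bear no resemblance to how a real model is built or trained; the point is simply that the next word is sampled from a weighted distribution, so the plausible continuation usually wins, and the implausible one occasionally appears, delivered with exactly the same confidence.

```python
import random

# Toy illustration only: invented context, invented probabilities.
# Real large language models learn distributions over tens of
# thousands of tokens from vast amounts of training data.
next_token_probs = {
    "the internet came of age in the": {
        "1990s": 0.72,   # the most likely continuation
        "2000s": 0.18,
        "1980s": 0.07,
        "future": 0.03,  # unlikely, but never impossible
    }
}

def sample_next_token(context: str) -> str:
    """Pick the next token by sampling from a probability distribution,
    rather than by applying a hard, deterministic rule."""
    probs = next_token_probs[context]
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    context = "the internet came of age in the"
    # Run it a few times: usually "1990s", occasionally something else,
    # always stated with the same apparent confidence.
    for _ in range(5):
        print(context, sample_next_token(context))
```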
As a historian I am the first person to acknowledge that history is written by the victors, and that just because something is written in a book doesn’t make it true. However, I research and interview widely and footnote all my sources so that I can be held accountable for what I write. And I can (and want to) be held accountable for my interpretation of it. How can ChatGPT be held accountable?
While the tech world, investors, VC funds and governments struggle to understand the scale of the AI boom, how long it will last and how much further Nvidia’s valuation can climb, people like you and me in the real world need to wrap our heads around how we can use AI, what we use it for, and what governance parameters each one of us needs to apply to it.
It’s not up to AI to govern itself. It can’t. It’s up to every one of us. And to do this we all need to make an effort to understand it.
The two books I’ve mentioned here are a great start.
#AI #AIgovernance #Nvidia #ChatGPT #selfpublishing #businessbooks #entrepreneurs #bookcoach