This blog post is an ongoing, interactive attempt to figure out what I think about AI, and why. Join in if it helps you.
The version you're reading now is an early draft: I am publishing this using the permanent versions pattern - where I publish a string of versions as I develop my thoughts - because I'm hoping constructive reactions to early drafts will help me answer my questions and so finish the post. Links to the latest and previous versions, if any, are in the footer.
I'm making this effort because (a) AI is really important, and (b) I'm utterly torn about how to think about it. I read a lot, and do more than just read: every time I read something valuable I take notes on it and publish them on my Hub, partly so others can benefit from the resulting library of what I Like, but mostly so I fully integrate what I'm reading into What I Think and Do.
And there are currently 324 resources tagged AI there. Many individual resources have really influenced me, from ChatGPT as muse, not oracle, Ted Chiang's blurry jpeg, Sara Walker's AI Is Life and Emily Bender's stochastic parrots to What Do Large Language Models “Understand”?.
But I've read so much that I'm beginning to identify individual authors who are having a major influence on me. There's only one problem: they don't agree with each other.
On the one hand there are "optimists" (for lack of a better term) like Ethan Mollick, who publishes a stream of content about new developments in AI, and regularly reminds us (correctly) that "the worst AI you'll ever use is the one you're using right now"; and Kyle Shannon, who shows normal people like me using AI in really innovative, useful ways.
Yet I'm also massively influenced by those who convincingly set out why and how we are going in the wrong direction, like Cory Doctorow (10 resources hubbed), who introduced me to the enshittification playbook, and Douglas Rushkoff (4 more), the "Team Human" podcast creator who pointed out that all GPT can really do is revert to the mean. I hesitate to call them pessimists or Cassandras - their logic is pretty sound, and based on decades of watching Big Tech transform technology's promises into profit for a few and ashes for everyone else.
The law of averages may say that lying with your head in the freezer and your feet in the oven is a good way of getting comfortably warm, but I'm finding living in both camps increasingly uncomfortable. On the one hand I should be in Camp Optimist: I was literally using NLP to autoclassify my clients' content in 2011-2012, was arguing for the integration of self-training autoclassification tools into EU Commission knowledge management in 2019 and proved it worked three years later, launched myhub.ai in 2020 and am currently exploring how to integrate the "GPT@EC" pilot into a major EC online platform.
So why was I so negative on AI when I published the results of my first experiments with integrating GPT and myhub.ai earlier this year? Why did I follow that with an apocalyptic LinkedIn carousel about AI destroying society's ability to innovate by mid-century? Why did I not turn to ChatGPT to help me write this blog post? Am I trying to sabotage my career?
I'm beginning to believe that my cognitive biases are in play. I find it very easy to write, for example, which probably explains why I am still so underwhelmed by ChatGPT: its cliche-ridden texts don't offer me much, and are flooding my favourite sites with absolute rubbish. And there's possibly also a little voice in the back of my head reminding me that even if GPT is rubbish today it could be a future threat, given that writing was literally the basis of my early career.
Then there's how I feel when I see people jump on a bandwagon. If you read https://mathewlowry.medium.com/all-aboard-the-bandwagon-67fb899c2bbd you'll see that this is not an intellectual reaction: my gorge rises almost every time I visit LinkedIn and see yet another grifter promising instant results. These are the same people who a few years ago were promising huge returns on NFTs, before that blockchain, before that social media, before that affiliate marketing.
What really enrages me is that those technologies really did promise a lot, and it's partly those loathsome grifters' fault that they didn't deliver. But that doesn't mean I should loathe AI. After all, I still find some value in social media, and recognise blockchain has its uses.
In this version, this section is just a set of notes and half-formed sentences.
Maybe I'm right to be torn. Maybe it's amazing and terrible at the same time.
For example, take that apocalyptic LinkedIn carousel predicting that the widespread adoption of AI over the coming years risks undermining our ability to innovate as a species by mid-century. As the blog post it was based on makes clear, I was channelling Douglas Rushkoff that day, and was pointing to a low-probability, high-stakes future: something that probably won't happen, but would be so bad we must ensure it doesn't. I even pointed out some ways of doing so.
"It's a big claim, but the logic's pretty straightforward: AIs cannot innovate but humans can. However, they must master their craft first, but AI is taking the entry-level jobs where tomorrow's experts get started."
The best critique was from Peter Kaminski who, in the follow-up conversation on the AI coaching forum, predicts a future which is "bright and exciting - we’re poised to see unprecedented advancements across various fields as humans & Ai work together to tackle complex global challenges". He made a few arguments, but one was to point directly to recent research (Can LLMs Generate Novel Research Ideas?) claiming to show that AIs can in fact innovate (in fairness, he also pointed to a good critique).
Because Ethan Mollick had shared the same paper that same day - "We have increasing evidence that today's AIs can, indeed, generate novel ideas that can beat those of expert humans" - in my head I put him in "Pete's Camp for Optimists". It's only now, as I write this post, that I realise how my cognitive biases tripped me up here: when I saw Mollick a few days later in the same ideaspace as my post, warning that "learning in knowledge-based organizations is threatened", I realised that he was in fact one of the origins of the idea I ran with in my post.
So while 90% of the people on LinkedIn posting about AI for likes and follows can probably be put easily into one camp or the other, the smart move is to realise the truth is probably somewhere in between, or in fact in both.
I started this section of this post with the clear idea in my head that Mollick had been contradicting himself. Now that I've actually reread his posts, it's clear that he wasn't: he merely pointed to some research showing that AI can innovate, and then reiterated a point from his book from May that AI is threatening learning in organisations. Both things can be true.
It's interesting that you pointed to the paper showing that LLMs can innovate. I'd in fact seen that paper when Ethan Mollick shared it the same day, and in my notes pointed out that in the comments (not Mollick's own remarks) there was "a real conflation between ideation and innovation, although the paper is clear. Ideation is a very important step, and one that many projects and companies fail at, instead pursuing the first idea they find rather than generating as many ideas as possible before selecting the best." But spewing out 4000 ideas to come up with 200 good ones for humans to review and rank is not innovation; it's ideation.
Three days later Mollick was pointing to signs "of the collapse of the talent pipeline ... Learning in knowledge-based organizations is threatened": https://www.linkedin.com/feed/update/urn:li:activity:7239994675368009728/
Interesting, no? If AIs can innovate in every sense of the word, rather than just ideate, then you can forget companies hiring centaurs - hence Ethan's talent pipeline collapse. If he's right, and you're right, then by mid-century...
I've been optimistic and then disappointed about technology before - the emergence of the blogosphere, for example, and then social media. These technologies offered so much to society, but there's a huge difference between what a technology can do and what actually happens with it. What actually happens is that people use the technology not for social good but for private gain. When that happens, the technology's promise to society goes unfulfilled and, in many ways, a lot of harm gets done. I'm programmed in many ways to believe that's likely to happen with AI. And there's plenty of evidence that it will.
Here's one example. I wrote a post pointing out that if many companies replace their junior people with AI, then in 10 or 20 years they won't have any mid-level or senior people left. The problem is that I don't believe AIs are actually that good at innovation. Humans are, but humans have to master their craft before they can build on it, innovate, and stand on the shoulders of the giants who came before them. And in 20 years there won't be anyone who has mastered the craft, because they're not being hired now - their job is being done by AI. So we're hollowing out our expertise and our ability to innovate as a species. It's a pessimistic scenario.

Some people have reacted to that by pointing to posts by, for example, Ethan Mollick, arguing that AIs actually can innovate. But those studies are more about ideation, which is part of innovation but not the whole of it: it's about generating lots of ideas, not about taking a new idea and developing it into an innovation. That's a whole process, and ideation is only the beginning. And even then, the study people pointed to was one where humans needed to evaluate the ideas in order to find the right one, because - as I would point out - an AI can generate ideas, but it can't really spot the value in an idea.

Not everybody agrees with me; not everybody is pessimistic. What's interesting is that the same poster on LinkedIn, Ethan Mollick, recently posted something else, which pointed out that they're not very good at innovating. So there seems to be a contradiction here, or at least contradictory ideas, and I want to explore that.
This is one of this wiki's pages managed with the permanent versions pattern described in Two wiki authors and a blogger walk into a bar…