Experiment 6 analysis

Analysis of experiment 6 (the ideator GPT), notably Experiment 6 - response 1 and Experiment 6 Idea Weaver response part 2 - images.

Introduction

Goals

Figure out:

Process

I asked it to explore two ideas, one of which led (for the simple hell of it) to a blog post.

How did it do?

Identifying ideas

The GPT is first supposed to "focus on finding contradictions, similar ideas in different contexts, or similar conclusions in varying contexts" amongst the collection of resources provided, the aim being to provoke ideas for further development.
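
If you wanted to reproduce this ideation step outside the custom-GPT interface, a minimal sketch along the following lines should work. It assumes the standard OpenAI Python SDK; the instruction text is a paraphrase of the quote above rather than Idea Weaver's actual configuration, and the model name is only an example.

```python
from openai import OpenAI  # assumes the openai >= 1.x Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Paraphrase of the ideation instruction quoted above - not the real
# Idea Weaver configuration.
IDEATOR_INSTRUCTIONS = (
    "You are given a collection of blog posts and articles. Focus on finding "
    "contradictions, similar ideas in different contexts, or similar "
    "conclusions in varying contexts. For each, suggest a brief blog idea "
    "worth developing further."
)

def find_tensions(resources: list[str], model: str = "gpt-4o") -> str:
    """Ask the model to surface tensions or common threads across the resources."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": IDEATOR_INSTRUCTIONS},
            {"role": "user", "content": "\n\n---\n\n".join(resources)},
        ],
    )
    return response.choices[0].message.content

# Usage: ideas = find_tensions([post_1_text, post_2_text, post_3_text])
```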

It identified 3 ideas, in each case pointing to a tension or common thread between 2 or more posts and suggesting a very brief blog idea I could pursue:

  • "The Evolving Role of Human Expertise in an AI-Dominated World" - I asked it to develop this a little, as it contrasted 2 posts by the same person (Ethan Mollick), but it soon admitted that the "contradiction isn’t stark but subtle, focusing on the degree of autonomy AI currently possesses and the extent to which human expertise is needed", so I moved on to...
  • "Digital Futures: AI Advancements vs. The Human Experience in Social Media", which I developed into the above blog post (more below).
  • "Balancing AI's Potential with its Limitations" - the juxtaposition identified is good, but the proposed blog idea was bland. I will perhaps return to this later to explore the pros, cons and how of integrating emotional stimuli into GPTs.

Developing an idea

Idea 2 compared Idea 1 ("where the focus is on AI and technology advancement") with Douglas Rushkoff's piece "Why I’m Finally Leaving X and Probably All Social Media", resulting in a suggestion to analyse "how the advancement of AI technology contrasts with the human-centric problems in current social media platforms".

I asked it to imagine the future: "please imagine what would happen when AI chatbots join the conversation." It responded with a lot of interesting ideas - briefly:

  • authenticity
  • echo chambers
  • manipulation/misinformation
  • AI's inability to truly understand human emotional nuance
  • the evolution from social platforms to info/entertainment hubs driven by AI-generated content (points 5 & 8)
  • data privacy and ethical use of information
  • how AI chatbots could make social media more inclusive and user-friendly via support & accessibility features (real-time translation, content moderation and personalized assistance)

As I pointed out to it, "Many of these ideas have already happened, except that the algorithms are hidden...", so: "Please imagine an optimistic development, where AI chatbots are developed specifically to improve the quality of online social media interactions...", and then, after its response, "what would be the downsides... could humans 'game the system' to their advantage and the detriment of others?".

In fairness, both requests generated a lot of good ideas, although probably nothing I would not have thought of myself (impossible to test now, but as that would make an interesting experiment I've added it to the Ideas log). The feasibility of the ideas, of course, still needs to be checked. Of particular interest:

  • "positive" developments included AI's explicitly AI manipulating and training humans - eg "Through reinforcement learning, these chatbots could encourage positive behavior"
  • negative developments include censorship (both self- and AI-driven), lowered personal accountability and "reduced critical thinking skills"

The recommendations it gave me read like a well-meaning but never-to-be-implemented think-tank report, but the idea of a community setting up and managing an AI to help with its interactions is still an interesting one to pursue.

Writing the blog post

The blog post it wrote, however, was as bland as you'd expect - hardly surprising given the brevity and lack of character of my request. I might come back to this with other prompts to compare the results.

Creating the image

I always have mixed feelings about wrestling with DALL-E or MidJourney. On the one hand, they make it almost effortless to create a high-quality image.

But each image has something I want to change, and when I ask for those changes... it either ignores them or completely reworks the image, as you can see in Experiment 6 Idea Weaver response part 2 - images.

Conclusion

It's a start. I think future versions of Idea Weaver won't be asked to write the blog post, but could integrate the "give me optimistic and pessimistic scenarios" and "figure out how to maximise the positive and minimise the negative" processes.

But I still think these bots need to be used with caution. There's a very real risk that widespread use would make human thinking seriously less creative - if we get 12 bullet points in 30 seconds and move on, will we ever consider that there may be more interesting ideas in our own minds, if only we exercised them more?