What will the LLM integration into MyHub.ai look like in the future?
For the first 5 experiments I used the two ways of accessing ChatGPT open to me at the time: directly via the ChatGPT website, and using the pilot version of the MyHub.ai "GPT wrapper" set out in pilot MyHub ChatGPT integration. As I was working on the analysis of experiment 5 - ideator, however, OpenAI released the "Create your own GPT" feature, so I created my first GPT and quickly ran experiment 6 - ideator GPT.
Which means I now have at least two ways forward to investigate and compare:

- the API-based AIgent approach, where MyHub talks to ChatGPT via its API;
- the GPT-based approach, where Hub Editors add GPTs hosted on OpenAI to their Hubs.
Both are explored below. Life being what it is, however, I'm pretty certain that I'll have to combine both to allow Editors to explore and innovate, because if MyHub.ai only offered GPT integration, Hub Editors and their visitors/subscribers would be restricted to the tasks those GPTs were created to support.
Either way, the picture emerging looks something like this (image adapted from Social knowledge graphs for collective intelligence):
[image: the emerging MyHub.ai LLM integration architecture]
Before Create your own GPT came along, my original idea was to build on the basic approach set out in pilot MyHub ChatGPT integration: allow Hub Editors to create "AIgents" that talk to ChatGPT via its API, as briefly explained from ~1m40 of this video.
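For illustration, here's a rough sketch of the sort of call an AIgent would make, using OpenAI's standard chat completions endpoint. The `AIgent` shape and function names are placeholders of my own, not the actual implementation:

```ts
// Sketch only: the AIgent shape and runAIgent name are illustrative placeholders.
// An AIgent couples an Editor-authored prompt with a model choice.
interface AIgent {
  name: string;          // e.g. "Summariser"
  systemPrompt: string;  // the Editor's instructions to the model
  model: string;         // e.g. "gpt-4"
}

// Send the user's selection of Hubbed resources to ChatGPT via the API.
// Assumes Node 18+ (built-in fetch) and a valid OpenAI API key.
async function runAIgent(
  agent: AIgent,
  resources: string[],
  apiKey: string
): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: agent.model,
      messages: [
        { role: "system", content: agent.systemPrompt },
        { role: "user", content: resources.join("\n\n---\n\n") },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```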
Logged-in Hub Editors will have:
Behind the scenes, obviously, there are some calculations being made to ensure the token limit is not exceeded - see AIgents behind the scenes.
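The gist of those calculations looks something like the sketch below, with illustrative numbers only (the real limits depend on the model, and a proper tokeniser would replace the crude characters-per-token heuristic):

```ts
// Illustrative budget check: keep adding resources until the next one
// would blow the model's context window. All numbers are assumptions.
const CONTEXT_LIMIT = 8192;   // model's context window, in tokens (assumed)
const RESERVED_OUTPUT = 1024; // tokens kept free for the model's reply

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4); // rough heuristic: ~4 chars per token
}

function fitToBudget(systemPrompt: string, resources: string[]): string[] {
  const budget = CONTEXT_LIMIT - RESERVED_OUTPUT;
  let used = estimateTokens(systemPrompt);
  const selected: string[] = [];
  for (const resource of resources) {
    const cost = estimateTokens(resource);
    if (used + cost > budget) break;
    used += cost;
    selected.push(resource);
  }
  return selected;
}
```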
As I write this (2023-11-11), details of OpenAI's GPT Store are still fuzzy, particularly wrt their claim that creators of popular GPTs will get a share of the revenue. So this is just conjecture for now.
But clearly it opens interesting possibilities, all based on the mechanic set out in the API-based AIgent approach above. Instead of creating AIgents on MyHub which talk to ChatGPT, Hub Editors create "GPT Agents" which send the user's selection of Hubbed resources to a GPT hosted on OpenAI.
Caveat: critical to this idea is that it is the user who pays for the use of the GPT, via their OpenAI subscription. This is still unclear (as is whether GPTs have APIs at all), but if it holds, GPT usage will cost the Editor nothing.
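If GPTs do turn out to have APIs, a MyHub "GPT Agent" might be little more than a stored descriptor pointing at the externally hosted GPT. A purely speculative sketch, with every field name invented:

```ts
// Speculative: assumes GPTs become addressable from outside ChatGPT.
// All field names here are hypothetical, not an actual schema.
interface GptAgent {
  name: string;             // label shown on the Hub, e.g. "Ideator"
  gptUrl: string;           // where the GPT is hosted on OpenAI
  creator: string;          // who published it to the GPT Store
  payer: "user" | "editor"; // who bears the usage cost (see caveat above)
}

// The Hub only assembles the payload; billing would happen on OpenAI's side.
function buildPayload(agent: GptAgent, hubbedResources: string[]): object {
  return {
    target: agent.gptUrl,
    content: hubbedResources.join("\n\n---\n\n"),
  };
}
```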
And that means:
An Editor could monetise their Hub by prioritising their own GPTs in their Hub's interface for all users, but I for one will prioritise making my Hub as useful to my visitors as possible by putting the best GPTs front and centre, not just the ones I have a small profit share in.
Best case scenario: the GPT Store supports a marketplace of MyHub-compatible GPTs which any Hub Editor could add to their Hub, for themselves and their visitors.
A second alternative is that the Editor bears the cost of GPT usage but limits it to Hub Subscribers, who pay to access the GPTs and other "premium services".
After all, a Hub would then provide much more than any blog or Substack, offering Hub subscribers:
The Editor, moreover, has all of the above plus a set of reading, thinking and writing tools arrayed in a content creation pipeline, with each stage supported by dedicated GPTs, as I explained earlier.
The downside of the subscriber model is that subscribers' fees (which are generally flat) will need to cover their GPT usage, which may vary from zero to infinite from one month to the next. And the cost of sending content to a GPT is several times higher than simply sending it to ChatGPT. A system to manage that risk will therefore need to be developed, but the same is true for every GPT wrapper out there.
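One possible shape for such a system, sketched with invented names and an illustrative cap: give each subscriber a monthly token allowance and check it before each request.

```ts
// Sketch of per-subscriber usage capping; names and numbers are illustrative.
interface SubscriberUsage {
  subscriberId: string;
  month: string;      // e.g. "2023-11"
  tokensUsed: number; // running total for the month
}

const MONTHLY_TOKEN_ALLOWANCE = 100_000; // assumed cap, not a real figure

// Refuse any request that would push the subscriber over their allowance.
function canRunRequest(usage: SubscriberUsage, estimatedTokens: number): boolean {
  return usage.tokensUsed + estimatedTokens <= MONTHLY_TOKEN_ALLOWANCE;
}

// Record what a completed request actually spent.
function recordUsage(usage: SubscriberUsage, tokensSpent: number): SubscriberUsage {
  return { ...usage, tokensUsed: usage.tokensUsed + tokensSpent };
}
```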
This is one of this wiki's pages managed with the permanent versions pattern described in Two wiki authors and a blogger walk into a bar…