post-processing

With the "RSS to clean notes" prompt (see the pipeline sketch at the end of this post):

Here are the processed RSS items according to your instructions:

  1. Oops! We Automated Bullshit URL: https://www.cst.cam.ac.uk/blog/afb21/oops-we-automated-bullshit According to:
     - MIT Professor of AI Rodney Brooks, ChatGPT "“just makes up stuff that sounds good”... where “sounds good” is an algorithm to imitate text found on the internet, while “makes up” is the basic randomness of relying on predictive text rather than logic or facts";
     - Geoff Hinton: "the greatest risks is not that chatbots will become super-intelligent, but that they will generate text that is super-persuasive without being intelligent, in the manner of Donald Trump or Boris Johnson";
     - Harry Frankfurt "explains that the bullshitter “does not reject the authority of truth, as the liar does […] He pays no attention to it ... bullshit is a greater enemy of the truth than lies are”";
     - "Graeber’s famous analysis of Bullshit Jobs ... revealed that over 30% of British workers believe their own job contributes nothing of any value to society... writing pointless reports, relaying messages from one person to another, or listening to complaints they can do nothing about... [all] easily done by ChatGPT... we train people for bullshit jobs by training them to generate bullshit."

  2. What kind of bubble is AI? URL: https://doctorow.medium.com/what-kind-of-bubble-is-ai-d02040b5573a "All economic bubbles are hugely destructive, but some ... leave behind (useful) wreckage that can be salvaged":
     - the first dotcom bubble left behind "cheap servers, office furniture and space... a generation of young people ... trained as web makers... technologists from non-technical backgrounds";
     - the crypto bubble: "a smattering of Rust programmers... bad digital art and worse Austrian economics".

     Which type is the AI bubble? We'll have "a legion of technologists who are learning Tensorflow and Pytorch", but while "nominally open source" they're bound to Google and Facebook: "if those environments go away, those programming skills become a lot less useful."

  3. A Quiet Revolution? Mistral AI Releases Sensational New AI Model URL: https://medium.com/tales-of-tomorrow/a-quiet-revolution-mistral-ai-releases-sensational-new-ai-model-c17c663287f0 "startup Mistral AI posted a download link to their latest language model". In many ways it's equivalent to GPT-4: a context size of 32k tokens, and built using the “Mixture Of Experts” approach, combining "several highly specialized language models... 8 experts with 7 billion parameters each: “8x7B”... a training method for AI systems in which, instead of a single model learning everything, a combination of sub-models is used", with a gating network "assigning tasks to the experts" and combining their insights. (A minimal code sketch of this routing idea follows the list.)

  4. Artificial General Bullshit. AI, AGI, and its other hallucinations… URL: https://medium.com/whither-news/artificial-general-bullshit-e480939332f6 At an AI conference, Jeff Jarvis "knew I was in the right place when I heard AGI brought up and quickly dismissed... I call bullshit... large language models might prove to be a parlor trick". The rest of the conference focused on "frameworks for discussion of responsible use of AI".

  5. I Trained ChatGPT on My Notes To Create Content. Here’s What Happened URL: https://medium.com/the-generator/i-trained-chatgpt-on-my-notes-heres-what-happened-1dcb48725f45 Like me, the author has "been taking notes for over a decade ... ended up with thousands of notes that I never revisited. So I decided to train ChatGPT on my 3,743 Obsidian notes", which were created for "one purpose: content creation. So I use an improved version of the Zettelkasten". (A sketch of the retrieval pattern this "training" usually means follows the list.)
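
To make item 3's “Mixture Of Experts” description concrete, here is a minimal, illustrative PyTorch sketch of sparse expert routing: a small gating network scores every expert for each token, only the top-scoring experts run, and their outputs are blended using the gate weights. Everything here (layer sizes, the simple feed-forward experts) is a toy assumption for illustration, not Mistral's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Toy sparse mixture-of-experts layer (illustrative only).

    A gating network scores all experts per token; only the top_k
    experts run on each token, and their outputs are blended by the
    (softmaxed) gate weights.
    """

    def __init__(self, dim=128, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                          nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.gate = nn.Linear(dim, n_experts)  # the "gating network assigning tasks"
        self.top_k = top_k

    def forward(self, x):                      # x: (n_tokens, dim)
        scores = self.gate(x)                  # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalise over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = MoELayer()
print(layer(torch.randn(16, 128)).shape)       # torch.Size([16, 128])
```

The point of the design is that each token only pays for top_k experts, so total capacity grows with the number of experts while per-token compute stays roughly constant.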
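
For item 5, note that despite the phrase "train ChatGPT on my notes", plugins of this kind (the original note below suggests Smart Connections) typically do embedding-based retrieval rather than any actual training: every note is embedded once, and the closest notes are pulled into the chat context whenever you ask something "based on my notes". What follows is a minimal sketch of that pattern under that assumption; the vault path is hypothetical and the real plugin's internals will differ.

```python
from pathlib import Path

import numpy as np
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts):
    """Return unit-length embedding vectors for a batch of texts."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    vecs = np.array([d.embedding for d in resp.data])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

# 1. Index the vault once ("MyVault" is a placeholder path); a real
#    tool would batch and cache these embeddings.
paths = sorted(Path("MyVault").rglob("*.md"))
notes = [p.read_text(encoding="utf-8") for p in paths]
index = embed(notes)

# 2. "Based on my notes": retrieve the k nearest notes by cosine
#    similarity and hand them to the chat model as context.
def ask(question, k=5):
    q = embed([question])[0]
    best = np.argsort(index @ q)[::-1][:k]   # highest similarity first
    context = "\n\n---\n\n".join(notes[i] for i in best)
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only these notes:\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

print(ask("Based on my notes, what have I learned about Zettelkasten?"))
```

The original note's disappointment ("it missed on a bunch of other insights") is the predictable weak spot of this pattern: only the k nearest notes ever reach the model, so insights spread across the rest of the vault stay invisible to it.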

RSS original

<item>
<guid>https://myhub.ai/items/ai-for-institutions</guid>
<pubDate>Wed, 10 Jan 2024 03:56:45 CET</pubDate>
<title>AI for Institutions</title>
<link>https://www.ai4institutions.com/</link>
<image>
<url>https://myhub.ai/images/sWvybYMWa0pClWEA4PzffkEHnEU=/12648/width-251/%5E2024/1/10/c/contentv16503496d7c189d1ddb4e588c048918c1-6270-4d5b-b661-b46_xJUjsOq.png</url>
<link>https://www.ai4institutions.com/</link>
</image>
<description><p data-block-key="citbb">&quot;Could AI itself be used to imagine and build new institutions, so we can better collaborate, govern, and live together?... hope to inspire and engage researchers and practitioners to build AI that improves our societal institutions.&quot;</p><p data-block-key="faoeh">The site includes 8 (so far) “Project Cards that showcase ideas for projects that have come out of our workshops... We are actively looking to support people interested in working in this area&quot;</p><p>Tags: ai, egovernment</p></description>
<category>ai</category>
<category>egovernment</category>
</item>
<item>
<guid>https://myhub.ai/items/oops-we-automated-bullshit-department-of-computer-science-and-technology</guid>
<pubDate>Fri, 05 Jan 2024 10:18:06 CET</pubDate>
<title>Oops! We Automated Bullshit</title>
<link>https://www.cst.cam.ac.uk/blog/afb21/oops-we-automated-bullshit</link>
<image>
<url>https://myhub.ai/images/MVA1cgyNGO69Qlev8kTFMsKhu8w=/12647/width-251/%5E2024/1/5/s/siteswww.cst.cam.ac.ukfilesautomated-bullshit.jpg</url>
<link>https://www.cst.cam.ac.uk/blog/afb21/oops-we-automated-bullshit</link>
</image>
<description><p data-block-key="pbr86">According to:</p><ul><li data-block-key="33aa2">MIT Professor of AI Rodney Brooks, ChatGPT &quot;“just makes up stuff that sounds good&quot;... where “sounds good” is an algorithm to imitate text found on the internet, while “makes up” is the basic randomness of relying on predictive text rather than logic or facts&quot;,</li><li data-block-key="13tlv">Geoff Hinton: &quot;the greatest risks is not that chatbots will become super-intelligent, but that they will generate text that is super-persuasive without being intelligent, in the manner of Donald Trump or Boris Johnson&quot;</li><li data-block-key="f7s89">Harry Frankfurt &quot;explains that the bullshitter “does not reject the authority of truth, as the liar does […] He pays no attention to it ... bullshit is a greater enemy of the truth than lies are”&quot;</li><li data-block-key="49ghp">&quot;Graeber’s famous analysis of <i>Bullshit Jobs</i> ... revealed that over 30% of British workers believe their own job contributes nothing of any value to society... writing pointless reports, relaying messages from one person to another, or listening to complaints they can do nothing about... [all] easily done by ChatGPT... we train people for bullshit jobs by training them to generate bullshit.&quot;</li></ul><p data-block-key="7ts9e">LLMs are trained using &quot;huge archives of bullshit, alongside plenty of actual facts... But there is no algorithm in ChatGPT to check which parts are true. The output is literally bullshit... Just as Twitter encourages bullshitting politicians ... what they have said can be used to train automatic bullshit generators.&quot;</p><p>Tags: ai, bullshit, chatgpt, llm, alan blackwell</p></description>
<category>ai</category>
<category>bullshit</category>
<category>chatgpt</category>
<category>llm</category>
<category>alan blackwell</category>
</item>
<item>
<guid>https://myhub.ai/items/what-kind-of-bubble-is-ai</guid>
<pubDate>Thu, 04 Jan 2024 09:41:59 CET</pubDate>
<title>What kind of bubble is AI?</title>
<link>https://doctorow.medium.com/what-kind-of-bubble-is-ai-d02040b5573a</link>
<image>
<url>https://myhub.ai/images/kWsjU_i0n6t-D20mfbDvqEx9MmE=/12646/width-251/%5E2024/1/4/1/1_CCA0QrXFOMibCke2xOMcmw1.jpeg</url>
<link>https://doctorow.medium.com/what-kind-of-bubble-is-ai-d02040b5573a</link>
</image>
<description><p data-block-key="2bwyo">&quot;All economic bubbles are hugely destructive, but some ... leave behind (useful) wreckage that can be salvaged&quot;:</p><ul><li data-block-key="4gm18">the first dotcom bubble left behind &quot;cheap servers, office furniture and space... a generation of young people ,,, trained as web makers... technologists from non-technical backgrounds&quot;</li><li data-block-key="ctu4k">crypto bubble: &quot;a smattering of Rust programmers... bad digital art and worse Austrian economics&quot;</li></ul><p data-block-key="c01kj">Which type is the AI bubble? We&#x27;ll have &quot;a legion of technologists who are learning Tensorflow and Pytorch&quot;, but while &quot;nominally open source&quot; they&#x27;re bound to Google and Facebook - &quot;if those environments go away, those programming skills become a <i>lot</i> less useful&quot;.</p><p data-block-key="3lrnc">The cost of each AI query includes &quot;a massive primary energy bill... [another] for the chillers, and a titanic wage bill for the specialized technical staff. Once investor subsidies dry up, will real-world ... applications cover these running costs?&quot;</p><p data-block-key="br6rk"><b>Plot those applications on a value vs. risk tolerance 2x2 grid.</b> Most applications driving the high valuations we see are high-value but only if they&#x27;re risk-tolerant. However, they&#x27;re <i>not</i> risk-tolerant.</p><p data-block-key="3eb47">&quot;The profit-generating pitch for high-value AI applications lies in creating “reverse centaurs”: humans who serve as appendages for automation ... But unless [they&#x27;re] intrinsically risk-tolerant, they are poor candidates for automation&quot; - cf:</p><ul><li data-block-key="esav0"><b>self-driving cars</b> aren&#x27;t &quot;about using automation to augment human skill — it’s about replacing humans&quot;, but after Cruise replaced a driver with &quot;1.5 high-waged skilled technicians... one of their cars maimed a woman&quot;</li><li data-block-key="cdbs8"><b>radiologists</b>: &quot;no AI investor thinks their return will come from ... <i>reduce</i> the number of X-rays each radiologist processes ... [but] from replacing them with software whose judgments are cursorily double-checked by a human whose “automation blindness” will turn them into an OK-button-mashing automaton&quot;</li></ul><p data-block-key="1ehno">Unfortunately, &quot;risk-tolerant applications are almost all low-value; while nearly all the high-value applications are risk-intolerant.&quot;</p><p data-block-key="9hees">What will be left? &quot;smaller models... run on commodity hardware... [with] communities ... formed around them&quot;, pushing their limits &quot;far beyond their original manufacturers’ beliefs about their capacity&quot;.</p><p data-block-key="2u3d0">But there&#x27;ll be a lot of wreckage. Big AI models may be &quot;integrated into the processes of the productive economy&quot;, and then <i>disappear.</i> &quot;no one is asking, “What will we do if” — when — “the AI bubble pops and most of this stuff disappears overnight?” &quot;</p><p>Tags: ai, bubble, open source, economics, cory doctorow</p></description>
<category>ai</category>
<category>bubble</category>
<category>open source</category>
<category>economics</category>
<category>cory doctorow</category>
</item>
<item>
<guid>https://myhub.ai/items/a-quiet-revolution-mistral-ai-releases-sensational-new-ai-model-by-tristan-wolff-tales-of-tomorrow-dec-2023-medium</guid>
<pubDate>Wed, 03 Jan 2024 10:15:09 CET</pubDate>
<title>A Quiet Revolution? Mistral AI Releases Sensational New AI Model</title>
<link>https://medium.com/tales-of-tomorrow/a-quiet-revolution-mistral-ai-releases-sensational-new-ai-model-c17c663287f0</link>
<description><p data-block-key="nbs1b">&quot;startup Mistral AI posted a <a href="https://x.com/MistralAI/status/1733150512395038967?s=20">download link</a> to their latest language model&quot;. In many ways it&#x27;s equivalent to GPT4: context size of 32k tokens, and built using the “Mixture Of Experts” model, combining &quot;several highly specialized language models... 8 experts with 7 billion parameters each: “8x7B”... a training method for AI systems in which, instead of a single model learning everything, a combination of sub-models is used&quot;, with a gating network &quot; assigning tasks to the experts&quot; and combining their insights.</p><p data-block-key="bki86">Test it on</p><p data-block-key="8l8s1"><a href="https://poe.com/"><b>Poe.com</b></a> and <a href="https://app.fireworks.ai/models">https://app.fireworks.ai/models</a>.</p><p>Tags: ai, llm, mixture of experts, mistral</p></description>
<category>ai</category>
<category>llm</category>
<category>mixture of experts</category>
<category>mistral</category>
</item>
<item>
<guid>https://myhub.ai/items/artificial-general-bullshit-ai-agi-and-its-other-hallucinations-by-jeff-jarvis-whither-news-nov-2023-medium</guid>
<pubDate>Tue, 02 Jan 2024 04:15:00 CET</pubDate>
<title>Artificial General Bullshit. AI, AGI, and its other hallucinations…</title>
<link>https://medium.com/whither-news/artificial-general-bullshit-e480939332f6</link>
<image>
<url>https://myhub.ai/images/PkeICmi3NYaMhxjnrj6V56Fg-qs=/12638/width-251/%5E2024/1/2/v/v2resizefill64640_-ZuPH74bqUTAcZ8.jpeg</url>
<link>https://medium.com/whither-news/artificial-general-bullshit-e480939332f6</link>
</image>
<description><p data-block-key="9x2ef">At an AI conference, Jeff Jarvis &quot;knew I was in the right place when I heard AGI brought up and quickly dismissed... I call bullshit... large language models might prove to be a parlor trick&quot;. The rest of the conference focused on &quot;frameworks for discussion of responsible use of AI&quot;.</p><p data-block-key="fnv4r">Benefits - for some, AI can:</p><ul><li data-block-key="c2sv9">&quot;raise the floor...</li><li data-block-key="cjt7f">scale ... enabling people and organizations to take on certain tasks much more efficiently)</li><li data-block-key="56vdu">raise the ceiling (performing tasks ... [previously] unattainable&quot;</li></ul><p data-block-key="1j2qn">Problems: &quot;AI would:</p><ul><li data-block-key="2vtm1">bring economic hardship;</li><li data-block-key="8kt8s">enable evil at scale</li><li data-block-key="26vap">for some, result in a loss of purpose or identity&quot;</li></ul><p data-block-key="4iavb">Not that it will be neat: &quot;what is wondrous for one can be dreadful for another&quot;.</p><p data-block-key="7u502">He then tours a few guidelines for responsible AI under development, concluding &quot;Rather than principles ... chiseled on tablets... we need ongoing discussion to react to rapid development and changing impact; to consider unintended consequences ... and to make use of ... research. That is what WEF’s AI Governance Alliance says it will do&quot;.</p><p data-block-key="8h0b5">Drawing parallels with print, &quot;the full effect of a new technology can take generations to be realized... [and] there is no sure way to guarantee safety&quot; because &quot;a machine that is trained to imitate human linguistic behavior is fundamentally unsafe. See: print.&quot;</p><p data-block-key="7th3c">In this it&#x27;s a similar discussion to gun law, or Section 230: who&#x27;s responsible, and hence liable? Holding the AI models&#x27; manufacturers &quot;responsible for everything anyone does with them, as is being discussed in Europe... is unrealistic&quot;, while blaming users &quot;could be unfair&quot;.</p><p data-block-key="9aqe6">Should LLLMs be open-sourced? There are good reasons for doing so, but &quot;others fear bad actors will take open-source models... and detour around guardrails&quot; - but are these guardrails any use anyway? Let&#x27;s at least hope we avoid a &quot;knowledge ecosystem when books, newspapers, and art are locked up by copyright ... [and] machines learn only from the crap that is free... [we need] a commons of mutual benefit and control. &quot;</p><p>Tags: ai, ethics, bullshit, open source, llm, safety, jeff jarvis</p></description>
<category>ai</category>
<category>ethics</category>
<category>bullshit</category>
<category>open source</category>
<category>llm</category>
<category>safety</category>
<category>jeff jarvis</category>
</item>
<item>
<guid>https://myhub.ai/items/i-trained-chatgpt-on-my-notes-to-create-content-heres-what-happened</guid>
<pubDate>Tue, 02 Jan 2024 09:48:06 CET</pubDate>
<title>I Trained ChatGPT on My Notes To Create Content. Here’s What Happened</title>
<link>https://medium.com/the-generator/i-trained-chatgpt-on-my-notes-heres-what-happened-1dcb48725f45</link>
<description><p data-block-key="vs1yr">Like me, the author has &quot;been taking notes for over a decade ...ended up with thousands of notes that I never revisited. So I decided to train ChatGPT on my 3,743 Obsidian notes&quot;, which were created for &quot;one purpose: content creation. So I use an <a href="https://medium.com/swlh/how-the-zettelkasten-radically-changed-my-way-of-creating-content-and-how-it-will-change-yours-baa23904f034">improved version of the Zettelkasten</a>&quot;.</p><p data-block-key="agnt5">Apparently there are just 3 ways to use AI in your notes:</p><ol><li data-block-key="3uuja">... within each note ... features like: completion, summarizing, rewriting, etc.&quot; Like me, the author&#x27;s not a fan, as these features disengage your mind from the content, and you won&#x27;t &quot;“own” and internalize the information&quot;.</li><li data-block-key="e866g">&quot;find relevant notes/link notes together&quot; - the author finds no added value here, but perhaps because he used one particular plugin and has a very specific, &quot;one link per note&quot; rule for content discovery</li><li data-block-key="d29q6">&quot;Use AI to interact with your notes... uploads your notes to OpenAI and trains GPT on it... you can then interact with your notes within Obsidian... [via] a chat window ... if you start the chat with “based on my notes”&quot;. However, &quot;it missed on a bunch of other insights ... I was disappointed&quot;</li></ol><p data-block-key="pha0">PS the plugin tested in (3) was probably Smart Connections:</p><embed embedtype="media" url="https://www.youtube.com/watch?v=xHY3OHUNOrQ"/><p data-block-key="fb7fi"></p><p>Tags: productivity, ai, obsidian, chatgpt, embedding, smart connections</p></description>
<category>productivity</category>
<category>ai</category>
<category>obsidian</category>
<category>chatgpt</category>
<category>embedding</category>
<category>smart connections</category>
</item>
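
Postscript: the "RSS to clean notes" step itself isn't shown above, so as a purely hypothetical sketch, a pipeline like the following could turn the feed into the cleaned items at the top. The prompt text, model name, and file name are all placeholders, not the actual prompt used.

```python
import html
import re
import xml.etree.ElementTree as ET

from openai import OpenAI  # pip install openai

client = OpenAI()

# Placeholder instructions; the real "RSS to clean notes" prompt isn't shown.
PROMPT = ("Turn each RSS item below into a clean note: title, URL, then the "
          "key quotes from its description, with all HTML stripped.")

def items_as_text(rss_path):
    """Flatten the feed's <item> elements into plain text for the prompt."""
    chunks = []
    for item in ET.parse(rss_path).getroot().iter("item"):
        title = item.findtext("title", "")
        link = item.findtext("link", "")
        d = item.find("description")
        desc = "".join(d.itertext()) if d is not None else ""
        desc = re.sub(r"<[^>]+>", " ", html.unescape(desc))  # crude HTML strip
        chunks.append(f"Title: {title}\nURL: {link}\n{desc}")
    return "\n\n".join(chunks)

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": items_as_text("feed.xml")},
    ],
)
print(reply.choices[0].message.content)
```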