C-5-AllNotes

  1. Getting Started | AutoGen Link AutoGen is a framework that enables development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools... It also provides a drop-in replacement for openai.Completion or openai.ChatCompletion as an enhanced inference API.
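     A minimal sketch of a two-agent AutoGen conversation (Python), assuming the pyautogen package and an OpenAI API key are available; the model name, working directory, and task message are illustrative, not taken from the docs page.

     ```python
     # Two-agent AutoGen sketch: an LLM-backed assistant converses with a user proxy
     # that can execute the code the assistant writes. Config values are illustrative.
     from autogen import AssistantAgent, UserProxyAgent

     llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

     # LLM-backed agent that drafts answers (and code) for the task.
     assistant = AssistantAgent("assistant", llm_config=llm_config)

     # Proxy for the human: relays the task and can run returned code locally.
     user_proxy = UserProxyAgent(
         "user_proxy",
         human_input_mode="NEVER",  # set to "ALWAYS" to keep a human in the loop
         code_execution_config={"work_dir": "coding", "use_docker": False},
     )

     # The two agents converse until the task is done or a turn limit is reached.
     user_proxy.initiate_chat(assistant, message="Plot NVDA's stock price for 2023 and save it to plot.png.")
     ```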

  2. How knowledge graphs improve generative AI | InfoWorld Link "Despite the immense potential of LLMs, they come with a range of challenges... hallucinations, the high costs associated with training and scaling, the complexity of addressing and updating them, their inherent inconsistency, the difficulty of conducting audits and providing explanations, predominance of English language content... [they're also] poor at reasoning and need careful prompting for correct answers. All of these issues can be minimized by supporting your new internal corpus-based LLM by a knowledge graph... an information-rich structure that provides a view of entities and how they interrelate." Beyond simply querying a KG for patterns, "you can also compute over the graph using graph algorithms and graph data science... uncovering facts previously obscured [that] lead to valuable insights [while]... embeddings (encompassing both its data and its structure) can be used in machine learning pipelines or as an integration point to LLMs." How to make LLMs and KGs work together?
     - Use an LLM to create a knowledge graph: use LLMs to "process a huge corpus of text data ... to produce a knowledge graph ... [which] can be inspected, QA'd, and curated... [and] is explicit and deterministic about its answers in a way that LLMs are not."
     - Use a knowledge graph to train an LLM: train an LLM "on our existing knowledge graph ... [to] build chatbots that are very skilled with respect to our products and services and that answer without hallucination."
     - Use a knowledge graph on the interaction path with an LLM to enrich queries and responses: "intercept messages going to and from the LLM and enrich them with data from our knowledge graph... [which is] used to enrich the prompt given to the LLM. Similarly, on the way back from the LLM, we can take embeddings and resolve them against the knowledge graph to provide deeper insight." (A sketch of this pattern follows this note.)
     - Use knowledge graphs to create better models: referencing "interesting research from Yejin Choi at the University of Washington ... an LLM is enriched by a secondary, smaller AI ... [a] “critic” [that] looks for reasoning errors in the responses of the LLM, and in doing so creates a knowledge graph for downstream consumption by another training process that creates a “student” model. The student model is smaller and more accurate than the original LLM on many benchmarks because it never learns factual inaccuracies or inconsistent answers to questions."
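     A minimal sketch (Python) of the interaction-path pattern: facts retrieved from a knowledge graph are injected into the prompt before it reaches the LLM. The tiny in-memory triple store, the keyword-match retrieval, and the call_llm() stub are all illustrative assumptions, not from the article.

     ```python
     # Knowledge-graph-enriched prompting: look up related facts, prepend them to
     # the prompt, then send the enriched prompt to the LLM.
     from typing import List, Tuple

     # (subject, predicate, object) triples standing in for a real knowledge graph.
     KG: List[Tuple[str, str, str]] = [
         ("WidgetPro", "is_a", "subscription product"),       # hypothetical product
         ("WidgetPro", "supports", "single sign-on"),
         ("WidgetPro", "superseded_by", "WidgetPro 2"),
     ]

     def kg_facts_for(question: str) -> List[str]:
         """Naive retrieval: return triples whose subject appears in the question."""
         return [f"{s} {p} {o}" for s, p, o in KG if s.lower() in question.lower()]

     def enriched_prompt(question: str) -> str:
         facts = "\n".join(kg_facts_for(question)) or "(no matching facts)"
         return (
             "Answer using ONLY the facts below; say 'unknown' otherwise.\n"
             f"Facts:\n{facts}\n\nQuestion: {question}"
         )

     def call_llm(prompt: str) -> str:
         # Placeholder for a real LLM call.
         return f"[LLM response to: {prompt[:60]}...]"

     print(call_llm(enriched_prompt("Does WidgetPro support single sign-on?")))
     ```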

  3. Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality Link In the Boston Consulting Group AI experiment: "we examine the performance implications of AI on realistic, complex, and knowledge-intensive tasks... subjects were randomly assigned to no AI access; GPT-4 AI access; and GPT-4 AI access with a prompt engineering overview" (or GPT + Overview). The study maps "a “jagged technological frontier” where some tasks are easily done by AI, while others, though seemingly similar in difficulty", are not suitable for AI support. "Professionals who skillfully navigate this frontier gain large productivity benefits ... while AI can actually decrease performance when used for work outside of the frontier."
     Inside the frontier: "The inside-the-frontier experiment focused on creative product innovation and development." Benefits within the frontier: "consultants using AI ... completed 12.2% more tasks on average, and completed tasks 25.1% more quickly... [with] 40% higher quality"; lower-skilled consultants improved by 43% and higher-skilled by 17% "compared to their own scores", and the "GPT + Overview treatment consistently exhibits a more pronounced positive effect compared to GPT Only". Creativity vs regression to the mean: while consultants using AI "produce ideas of higher quality... there is a marked reduction in the variability ... [and so this] might lead to more homogenized outputs."
     Outside the frontier, the risks: "consultants using AI were 19% less likely to produce correct solutions compared to those without AI... with the GPT + Overview group experiencing a more pronounced decrease". However, with respect to recommendation quality, "subjects using AI ... consistently outperformed those not using AI." "This study highlights the importance of validating and interrogating AI ... and of continuing to exert cognitive effort and experts’ judgment... Professionals who had a negative performance ... tended to blindly adopt its output and interrogate it less" (“unengaged interaction with AI”).
     Two successful usage patterns (more in Appendix E):
     - Centaurs ("allowed them to leverage AI ... optimizing their decision-making process while maintaining a significant level of human input and oversight" - Harpa): they "divide and delegate their solution-creation activities to the AI or to themselves... highly attuned to the jaggedness of the frontier ... dividing the tasks into sub-tasks ... best suited for [either] human intervention [or] efficiently managed by AI." Even the sub-tasks they allocate to themselves are supported by AI, but they remain in the loop. Example: "used human knowledge to complete the task of generating a recommendation ... switched to AI to draft a memo".
     - Cyborgs ("deploying it as an intrinsic part of their decision-making process... harnessed AI... as a collaborative partner, continually interacting with the technology to optimize their performance and outcomes"): they completely integrate with the AI and "intertwine their efforts with AI at the very frontier of capabilities... at the subtask level". Common practices (a prompt-level sketch follows this note):
       - Assigning a persona
       - Asking AI to make editorial changes to the outputs AI has produced
       - Teaching through examples: giving an example of a correct answer before asking AI a question
       - Modularizing tasks
       - Validating: asking AI to check its inputs, analysis, and outputs
       - Demanding a logic explanation
       - Exposing contradictions
       - Elaborating
       - Directing a deep dive
       - Adding the user’s own data... to re-do the analysis in iterative cycles
       - Pushing back: disagreeing ... asking AI to reconsider
     Conclusions: "move beyond the dichotomous decision of adopting or not AI ... focus on the knowledge workflow and the tasks within it, and in each of them, evaluate the value of using different configurations and combinations of humans and AI". To tackle "diminished diversity of ideas ... consider employing a variety of AI models, possibly multiple LLMs, or increased human-only involvement ... [this] underscores the significance of maintaining a diverse AI ecosystem". It also depends on the organisation: many will want greater productivity and more uniformly high quality, while "others might value maximum exploration and innovation". Moreover, when everyone is using AI, "outputs generated without AI assistance might stand out ... due to their distinctiveness." More in the [BCG document](https://www.bcg.com/en-us/publications/2021/navigating-jagged-technological-frontier).
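     A minimal prompt-building sketch (Python) that combines a few of the Cyborg practices listed above: assigning a persona, teaching through an example, demanding a logic explanation, and pushing back. The template wording and the consulting scenario are illustrative assumptions, not from the paper.

     ```python
     # Compose several of the listed prompting practices into one reusable template.
     def build_prompt(persona: str, example_q: str, example_a: str, question: str) -> str:
         return "\n".join([
             f"You are {persona}.",                                  # assigning a persona
             "Here is an example of the kind of answer I expect:",   # teaching through examples
             f"Q: {example_q}\nA: {example_a}",
             f"Q: {question}",
             "Explain your reasoning step by step before giving the final answer.",  # demanding logic
         ])

     def push_back(previous_answer: str) -> str:
         # Follow-up turn: disagree and ask the model to reconsider.
         return (
             f"I disagree with this part of your answer: '{previous_answer[:80]}...'. "
             "Please reconsider and point out anything inconsistent in your own reasoning."
         )

     prompt = build_prompt(
         persona="a senior retail strategy consultant",
         example_q="Which segment should we prioritise?",
         example_a="Segment B, because ... (two-line rationale)",
         question="Which of our three channels should we invest in next year?",
     )
     print(prompt)
     print(push_back("Invest in channel A because it has the largest current revenue."))
     ```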

  4. How Knowledge Graphs Improve Generative AI | Datanami Link Large language models (LLMs) like GPT-3 and GPT-4 have shown remarkable capabilities in generating human-like text, but they also have limitations, such as generating plausible but incorrect information (hallucinations). Knowledge graphs (KGs) can complement LLMs by providing structured information and helping with fact-checking and reasoning. KGs offer a way to represent entities and their relationships, and this structured data can be used to enrich LLMs' responses. By using KGs alongside LLMs, organizations can enhance the accuracy and reliability of the information these models generate. KGs can serve as a curated knowledge base that LLMs refer to when answering questions, reducing the chances of generating incorrect information. Additionally, KGs can be used to improve the quality of training data for LLMs, making the models more robust and accurate. Combining KGs with LLMs is a promising way to leverage the strengths of both technologies, addressing the limitations of LLMs and enhancing their performance across applications.
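     A minimal sketch (Python) of the curation workflow implied above: an LLM turns raw text into explicit (subject, predicate, object) triples that can be inspected and QA'd before being loaded into a knowledge graph. The prompt format and the stubbed fake_llm() reply are illustrative assumptions; in practice the stub would be replaced by a real model call.

     ```python
     # LLM-assisted knowledge-graph construction: extract triples from text,
     # parse them, and hand them over for human review/curation.
     from typing import List, Tuple

     TRIPLE_PROMPT = (
         "Extract facts from the text as lines of the form "
         "'subject | predicate | object'.\n\nText: {text}"
     )

     def fake_llm(prompt: str) -> str:
         # Stand-in for a real LLM call; returns the expected line format.
         return "GPT-4 | developed_by | OpenAI\nGPT-4 | is_a | large language model"

     def extract_triples(text: str) -> List[Tuple[str, str, str]]:
         reply = fake_llm(TRIPLE_PROMPT.format(text=text))
         triples = []
         for line in reply.splitlines():
             parts = [p.strip() for p in line.split("|")]
             if len(parts) == 3:          # ignore malformed lines
                 triples.append(tuple(parts))
         return triples

     # The resulting triples are explicit and reviewable before loading into a KG.
     for triple in extract_triples("GPT-4 is a large language model developed by OpenAI."):
         print(triple)
     ```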

  5. HARPA AI | GPT Chrome Automation Copilot Link

"Bring AI to your browser. Chat with websites, PDFs, videos, write emails, SEO articles, tweets, automate workflows, monitor prices & data. Bing AI & Notion AI alternative... can summarize and reply to emails for you, rewrite, rephrase, correct and expand text, read articles, translate and scan web pages for data."

  6. Google Bard “Extensions” Are Here Link

"With Extensions, Bard can find and show you relevant information from the Google tools you use every day... across multiple apps and services. The personal assistant potential looks huge for users of Google services - eg "Give me a summary of my emails today". Moreover, 3rd-party extensions are on the way, starting with Adobe Firefly, Adobe’s family of creative generative AI models, into Bard so you can easily and quickly turn your own creative ideas into high-quality images, which you can then edit further or add to your designs in Adobe Express."

  7. How to Create an AI Generated Video with ChatGPT, Synthesia, and Descript - Wistia Blog Link

"We used AI to make a video without writing a script or picking up a camera, but that looks as though it was filmed...: - [asked] ChatGPT... Write a script in the style of a YouTube video about how to make an apple pie. The video should be under 60 seconds long. The script should feel friendly in nature... Generate your avatar video with Synthesia... AI software that ... convert text to a talking head video, complete with an AI voice. Result: creepy."

  8. DALL-E 3 Is Here And It’s Mind-Blowing Link DALL-E 3 is OpenAI's latest AI image generator:
     - MidJourney-comparable
     - can include text in images
     - works inside ChatGPT: "simply ask it what you want to see" (an API sketch follows this list)
     - you own, and so can sell, the resulting images
     - copyright: artists can opt their images out
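     A minimal sketch of driving DALL-E 3 programmatically through the OpenAI Images API (Python SDK v1.x assumed); inside ChatGPT you would simply describe the image instead. The prompt and size parameters are illustrative.

     ```python
     # Generate an image with DALL-E 3 via the OpenAI Images API.
     from openai import OpenAI

     client = OpenAI()  # reads OPENAI_API_KEY from the environment

     result = client.images.generate(
         model="dall-e-3",
         prompt="A watercolor poster of an apple pie, with the text 'Fresh Daily'",
         size="1024x1024",
         n=1,
     )

     print(result.data[0].url)  # URL of the generated image
     ```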