[{"data":1,"prerenderedAt":127},["ShallowReactive",2],{"podcast-meta":3,"podcast-theme-colors":32,"episode-ai-poisoned-its-own-well-libraries-to-unsuckjs-we-need-more-richard-stallman-and-chatgpt-package":92},{"title":4,"author":5,"description":6,"artwork":7,"categories":8,"feedUrl":10,"type":11,"explicit":12,"link":13,"language":14,"copyright":15,"podcast2":16,"hasPeople":31},"The Changelog: Software Development, Open Source","Changelog Media","Software's best weekly news brief, deep technical interviews & talk show.","https://cdn.changelog.com/static/images/podcasts/podcast-original-f16d0363067166f241d080ee2e2d4a28.png",[9],"Technology","https://changelog.com/podcast/feed","episodic",false,"https://changelog.com/podcast","en-us","All rights reserved",{"persons":17,"funding":27},[18,23],{"name":19,"role":20,"img":21,"href":22},"Adam Stacoviak","host","https://cdn.changelog.com/uploads/avatars/people/Qo/avatar_large.jpg?v=63760280419","https://changelog.com/person/adamstac",{"name":24,"role":20,"img":25,"href":26},"Jerod Santo","https://cdn.changelog.com/uploads/avatars/people/z4/avatar_large.jpeg?v=63760071650","https://changelog.com/person/jerodsanto",[28],{"url":29,"text":30},"https://changelog.com/++","Support our work by joining 
Changelog++",true,{"palette":33,"sourceColor":54,"extractedColors":55},{"light":34,"dark":43},{"primary":35,"primary-foreground":36,"secondary":37,"secondary-foreground":35,"accent":38,"muted":39,"muted-foreground":40,"ring":35,"podcast-vibrant":41,"podcast-muted":42},"#00182f","#ffffff","#eff2f6","#e7ecf0","#f0f2f4","#6f7275","#0375c4","#e2e5e8",{"primary":44,"primary-foreground":45,"secondary":46,"secondary-foreground":47,"accent":48,"muted":49,"muted-foreground":50,"ring":51,"podcast-vibrant":52,"podcast-muted":53},"#5580a9","#09090b","#191b1d","#dcdee0","#1d2022","#1a1b1c","#8d8f91","#c1c4c8","#3694e6","#151618","#a1978d",[56,63,71,79,84],{"hex":54,"red":57,"green":58,"blue":59,"area":60,"saturation":61,"lightness":62},161,151,141,0.13136455555555557,0.09615384615384609,0.592156862745098,{"hex":64,"red":65,"green":66,"blue":67,"area":68,"saturation":69,"lightness":70},"#d2d1d4",210,209,212,0.000134,0.03370786516853954,0.8254901960784313,{"hex":72,"red":73,"green":74,"blue":75,"area":76,"saturation":77,"lightness":78},"#525153",82,81,83,0.003252888888888889,0.012195121951219556,0.32156862745098036,{"hex":36,"red":80,"green":80,"blue":80,"area":81,"saturation":82,"lightness":83},255,0.03285188888888889,0,1,{"hex":85,"red":86,"green":87,"blue":88,"area":89,"saturation":90,"lightness":91},"#101820",16,24,32,0.8323966666666667,0.3333333333333333,0.09411764705882353,{"meta":93,"episode":101,"transcript":124},{"title":4,"author":5,"description":6,"artwork":7,"categories":94,"feedUrl":10,"type":11,"explicit":12,"link":13,"language":14,"copyright":15,"podcast2":95,"hasPeople":31},[9],{"persons":96,"funding":99},[97,98],{"name":19,"role":20,"img":21,"href":22},{"name":24,"role":20,"img":25,"href":26},[100],{"url":29,"text":30},{"guid":102,"title":103,"slug":104,"description":105,"htmlContent":106,"audioUrl":107,"audioType":108,"audioLength":109,"pubDate":110,"duration":111,"artwork":112,"episodeType":113,"explicit":12,"link":114,"podcast2":115},"changelog.com/16/2118","AI
 poisoned its own well, libraries to UnsuckJS, we need more Richard Stallman & ChatGPT package hallucination (News)","ai-poisoned-its-own-well-libraries-to-unsuckjs-we-need-more-richard-stallman-and-chatgpt-package","Tracy Durnell thinks AI has already poisoned its own well, Adam Hill's microsite catalogs everything you need to UnsuckJS, Lionel Dricot thinks we need more Richard Stallman, not less & the Vulcan team proves you can't trust ChatGPT's package recommendations.","\u003Cp>Tracy Durnell thinks AI has already poisoned its own well, Adam Hill’s microsite catalogs everything you need to UnsuckJS, Lionel Dricot thinks we need more Richard Stallman, not less &amp; the Vulcan team proves you can’t trust ChatGPT’s package recommendations.\u003C/p>\n\u003Cp>\u003Ca href=\"https://changelog.com/news/50/email\">View the newsletter\u003C/a>\u003C/p>\u003Cp>\u003Ca href=\"https://changelog.zulipchat.com/#narrow/stream/455469-news\">Join the discussion\u003C/a>\u003C/p>\u003Cp>\u003Ca href=\"https://changelog.com/++\" rel=\"payment\">Changelog++\u003C/a> members support our work, get closer to the metal, and make the ads disappear. Join today!\u003C/p>\u003Cp>Sponsors:\u003C/p>\u003Cp>\u003Cul>\u003Cli>\u003Ca href=\"https://sentry.io/for/code-coverage/\">Sentry\u003C/a> – See the untested code causing errors - or whether it’s partially or fully covered - directly in your stack trace, so you can avoid similar errors from happening in the future. 
Use the code \u003Ccode>CHANGELOG\u003C/code> and get the team plan free for three months.\n\u003C/li>\n\u003C/ul>\u003C/p>\u003Cp>Featuring:\u003C/p>\u003Cul>\u003Cli>Jerod Santo &ndash; \u003Ca href=\"https://jerodsanto.net\" rel=\"external ugc\">Website\u003C/a>, \u003Ca href=\"https://github.com/jerodsanto\" rel=\"external ugc\">GitHub\u003C/a>, \u003Ca href=\"https://www.linkedin.com/in/jerodsanto\" rel=\"external ugc\">LinkedIn\u003C/a>, \u003Ca href=\"https://changelog.social/@jerod\" rel=\"external ugc\">Mastodon\u003C/a>, \u003Ca href=\"https://x.com/jerodsanto\" rel=\"external ugc\">X\u003C/a>\u003C/li>\u003C/ul>\u003C/p>","https://op3.dev/e/https://pscrb.fm/rss/p/https://cdn.changelog.com/uploads/news/50/changelog-news-50.mp3","audio/mpeg",7905232,"Mon, 26 Jun 2023 20:45:00 +0000",483,"https://cdn.changelog.com/uploads/covers/changelog-news-original.png?v=63848365621","full","https://changelog.com/news/50",{"transcript":116,"chapters":119,"persons":122},{"url":117,"type":118},"https://changelog.com/news/50/transcript","text/html",{"url":120,"type":121},"https://changelog.com/news/50/chapters","application/json+chapters",[123],{"name":24,"role":20,"img":25,"href":26},{"content":125,"type":126,"url":117},"\u003C!DOCTYPE html>\n\u003Chtml>\n\u003Chead>\n  \u003Cmeta charset=\"utf-8\">\n  \u003Cmeta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n  \u003Cmeta name=\"robots\" content=\"noindex\">\n  \u003Clink rel=\"canonical\" href=\"https://changelog.com/news/50\"/>\n  \u003Ctitle>Transcript for Changelog News #50\u003C/title>\n\u003C/head>\n\u003Cbody>\n\n\n\n    \u003Ccite>Jerod Santo:\u003C/cite>\n    \u003Cp>What up, nerds?\n\nI&#39;m Jerod and this is Changelog News for the week of Monday, June 26th 2023. Hey that sounds familiar...\u003C/p>\n\n\n    \u003Ccite>Me a year ago:\u003C/cite>\n    \u003Cp>Hello, friends. I&#39;m Jerod and this is Changelog News for the week of Monday, June 27th 2022. 
What the what?\u003C/p>\n\n\n    \u003Ccite>Jerod Santo:\u003C/cite>\n    \u003Cp>That was me one year ago this week. That&#39;s right, Changelog News is a one-year old! Cool Cool Cool.\n\nLet&#39;s get into the news.\u003C/p>\n\n\n    \u003Ccite>Break:\u003C/cite>\n    \u003Cp>\u003C/p>\n\n\n    \u003Ccite>Jerod Santo:\u003C/cite>\n    \u003Cp>Here&#39;s a quick clip of me and Simon Willison talking Stable Diffusion back in September of 2022:\u003C/p>\n\n\n    \u003Ccite>Clip from The Changelog:\u003C/cite>\n    \u003Cp>\u003C/p>\n\n\n    \u003Ccite>Jerod Santo:\u003C/cite>\n    \u003Cp>That&#39;s oh so relevant today because of a new study on AI model collapse that says &quot;We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs.&quot;\n\nTracy Durnell writes that she believes AI has already poisoned its own well. &quot;I suspect tech companies (particularly Microsoft / OpenAI and Google) have miscalculated, and in their fear of being left behind, have released their generative AI models too early and too wide. By doing so, they’ve essentially \u003Cstrong>established a threshold for the maximum improvement of their products due to the threat of model collapse\u003C/strong>. I don’t think the quality that generative AI will be able to reach on a poisoned data supply will be good enough to \u003Ca href=\"https://tracydurnell.com/2023/02/21/the-dream-of-ai-is-the-dream-of-free-labor/\">get rid of all us plebs\u003C/a>&quot;\n\nSince there&#39;s no consistent system for marking up generated content online as computer generated, the toothpaste is already being squeezed from its proverbial bottle. 
Here&#39;s Tracy again: &quot;Because of this approach, 2022 and 2023 will be essentially “lost years” of internet-sourced content, even if they can establish a tagging system going forward — and get people hostile or ambivalent to them to use it.&quot;\n\nIf she&#39;s right, this is a big deal.\u003C/p>\n\n\n    \u003Ccite>Break:\u003C/cite>\n    \u003Cp>\u003C/p>\n\n\n    \u003Ccite>Jerod Santo:\u003C/cite>\n    \u003Cp>UnsuckJS.com is a cool microsite from \u003Ca href=\"https://adamghill.com\">Adam Hill\u003C/a> that catalogs the many (20+) JavaScript libraries that progressively enhance HTML and cost 10KB or less to deliver to your clients. No build tools, no compilers, and no hassle.\u003C/p>\n\n\n    \u003Ccite>The Diamond Dogs:\u003C/cite>\n    \u003Cp>\u003Ca href=\"https://www.youtube.com/watch?v=4HTYrh214k4\">On perfection\u003C/a>\u003C/p>\n\n\n    \u003Ccite>Jerod Santo:\u003C/cite>\n    \u003Cp>I&#39;d love to see this resource go beyond the basic information and table format it currently has. But still, I&#39;m a big proponent of this &quot;less JS&quot; movement and there are some high-quality libraries featured here (and some I&#39;d never heard of!). Having them all in one place is a win.\u003C/p>\n\n\n    \u003Ccite>Break:\u003C/cite>\n    \u003Cp>\u003C/p>\n\n\n    \u003Ccite>Jerod Santo:\u003C/cite>\n    \u003Cp>We need more of Richard Stallman, not less. That&#39;s the title of a recent article by Ploum (a.k.a. Lionel Dricot). After a big fat disclaimer differentiating the man&#39;s philosophy from the man himself, he writes: &quot;RMS was right since the very beginning. Every warning, every prophecy realised. And, worst of all, he had the solution since the start. The problem is not RMS or FSF. The problem is us. The problem is that we didn’t listen.&quot;\n\nThe core of Stallman&#39;s beliefs was the four freedoms of software. The right to use the software at your discretion. The right to study the software. The right to modify the software. 
And the right to share the software, including the modified version.\n\nThese four freedoms were formalized as copyleft, but according to Ploum, RMS&#39;s theory had a weakness in that copyleft itself wasn&#39;t part of the four freedoms it secured. This allowed other non-copyleft licenses to come along and secure all four. There&#39;s too much said to quote it all on the show, so read the piece, which includes Ploum&#39;s suggested amendment (one obligation) to RMS&#39;s four freedoms of free software.\n\nThen let me know what you think in the comments. Was RMS right? Did we just not listen? Would Ploum&#39;s amendment fix things? I&#39;d love to hear your thoughts on the matter.\u003C/p>\n\n\n    \u003Ccite>Break:\u003C/cite>\n    \u003Cp>\u003C/p>\n\n\n    \u003Ccite>Jerod Santo:\u003C/cite>\n    \u003Cp>It&#39;s time for some Sponsored News!\n\nJust because you don&#39;t record a problem doesn&#39;t mean it didn&#39;t happen.\n\nStay ahead of latency issues and trace every slow transaction to a poor-performing API call or database query. Sentry is the only developer-first application monitoring platform that shows you what’s slow, down to the line of code. But don&#39;t take their word for it. Matthew Egan (Engineering Team Lead at DiviPay) has this to say about it: &quot;Unlike past tools we’ve used, Sentry provides the complete picture. No more combing through logs — Sentry makes it incredibly easy to find issues in our code to deliver a much smoother payment experience and a better overall customer experience.&quot;\n\nCheck the link in the show notes and get a demo today. Why not, right?\u003C/p>\n\n\n    \u003Ccite>Break:\u003C/cite>\n    \u003Cp>\u003C/p>\n\n\n    \u003Ccite>Jerod Santo:\u003C/cite>\n    \u003Cp>Can you trust ChatGPT’s package recommendations? Maybe not so much. The team at Vulcan has published a new security threat vector they&#39;re calling AI package hallucination. 
It relies on the fact that ChatGPT (et al.) sometimes answers questions with hallucinated sources, links, blogs and statistics. It&#39;ll even generate questionable fixes to CVEs and offer links to libraries that don’t actually exist!\n\n&quot;When the attacker finds a recommendation for an unpublished package, they can publish their own malicious package in its place. The next time a user asks a similar question they may receive a recommendation from ChatGPT to use the now-existing malicious package. We recreated this scenario in the proof of concept below using ChatGPT 3.5.&quot;\n\nBe careful out there...\u003C/p>\n\n\n    \u003Ccite>Break:\u003C/cite>\n    \u003Cp>\u003C/p>\n\n\n    \u003Ccite>Jerod Santo:\u003C/cite>\n    \u003Cp>That is the news for now!\n\nOn Wednesday I&#39;m talking yak shaves, system architecture, -10x devs &amp; more with Taylor Troesh. And on Friday Kelsey Hightower joins Adam and me on Changelog &amp; Friends!\n\nHave a great week, share Changelog with your peers who might dig it &amp; I&#39;ll talk to you again real soon.\u003C/p>\n\n\u003C/body>\n\u003C/html>\n","text/html; charset=utf-8",1771793554650]