<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:psc="http://podlove.org/simple-chapters" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><title><![CDATA[World Brain: No Experts]]></title><description><![CDATA[<p>Taking the HG Wells essay collection as a departure point, this podcast asks the following questions of a range of people:</p><p></p><ul><li>What has been awakened? Rough beast? Benevolent angel? Boring super-appliance?</li><li>Could we be less wrong about AI than those considered to be experts?</li><li>Why "No Experts"? Is it possible that there really are any experts on this subject?</li><li>Could a few relatively smart outsiders be less wrong about AI - what it is, what changes it's going to make to our lives - than the glory-drunk founders and their whorish enablers, or the terrifying doomsayers?</li><li>Could we be any less wrong, for that matter, than Wells, who in 1938 brought out a collection of essays that imagined that a global encyclopedia would help bring about a permanent state of world peace?</li><li>Finally, can an unstructured discussion between humans meaningfully enrich our understanding of the boundaries between us and what we've created?</li></ul>]]></description><link>https://riverside.fm/dashboard/studios/world-brain-no-experts/podcast</link><generator>Riverside.fm (https://riverside.com)</generator><lastBuildDate>Tue, 21 Apr 2026 07:35:32 GMT</lastBuildDate><atom:link href="https://api.riverside.fm/hosting/kRez0nER.rss" rel="self" type="application/rss+xml"/><author><![CDATA[Matt Brandabur, Yuri Marder]]></author><pubDate>Thu, 20 Nov 2025 06:29:23 GMT</pubDate><copyright><![CDATA[2025 Matt Brandabur, Yuri 
Marder]]></copyright><language><![CDATA[en]]></language><ttl>60</ttl><category><![CDATA[Technology]]></category><category><![CDATA[Philosophy]]></category><itunes:author>Matt Brandabur, Yuri Marder</itunes:author><itunes:summary>&lt;p&gt;Taking the HG Wells essay collection as a departure point, this podcast asks the following questions of a range of people:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;What has been awakened? Rough beast? Benevolent angel? Boring super-appliance?&lt;/li&gt;&lt;li&gt;Could we be less wrong about AI than those considered to be experts?&lt;/li&gt;&lt;li&gt;Why &quot;No Experts&quot;? Is it possible that there really are any experts on this subject?&lt;/li&gt;&lt;li&gt;Could a few relatively smart outsiders be less wrong about AI - what it is, what changes it&apos;s going to make to our lives - than the glory-drunk founders and their whorish enablers, or the terrifying doomsayers?&lt;/li&gt;&lt;li&gt;Could we be any less wrong, for that matter, than Wells, who in 1938 brought out a collection of essays that imagined that a global encyclopedia would help bring about a permanent state of world peace?&lt;/li&gt;&lt;li&gt;Finally, can an unstructured discussion between humans meaningfully enrich our understanding of the boundaries between us and what we&apos;ve created?&lt;/li&gt;&lt;/ul&gt;</itunes:summary><itunes:type>episodic</itunes:type><itunes:owner><itunes:name>Matt Brandabur, Yuri Marder</itunes:name><itunes:email>matthewbrandabur@gmail.com</itunes:email></itunes:owner><itunes:explicit>yes</itunes:explicit><itunes:category text="Technology"/><itunes:category text="Society &amp; Culture"><itunes:category text="Philosophy"/></itunes:category><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/246cf488-d2e5-47d5-af40-24854ff44870/logos/f3a7ef7d-a50a-4c37-a2b3-4511addd53c4.jpeg"/><item><title><![CDATA[Ep 6: AI - Fears, Dreams, Experts with Kemi Olugemo and Barbara Salami from 
KAINDLY.AI]]></title><description><![CDATA[<p>In what is easily our friendliest interview so far, Yuri and Matt engage with industry leaders Kemi Olugemo and Barbara Salami to consider how artificial intelligence is redefining the role of the expert, what it's like to try bridging socioeconomic and cultural divides, and how hopes and fears commingle as the pace and scope of change grow more violent every day.</p><p></p><p>They also discuss <a rel="noopener noreferrer nofollow" href="http://KAINDLY.AI" target="_blank">KAINDLY.AI</a>, a newly founded organization dedicated to helping organizations design for equitable access and build collective capability, shared confidence, and the kind of trust that sustains transformation.</p><p></p><p>Notes and episodes: <a rel="noopener noreferrer nofollow" href="http://worldbrainnoexperts.substack.com" target="_blank">worldbrainnoexperts.substack.com</a></p><p></p>]]></description><guid isPermaLink="false">b548b45a-e1c1-488d-93b0-f56009bb5443</guid><dc:creator><![CDATA[Matt Brandabur, Yuri Marder]]></dc:creator><pubDate>Sat, 07 Mar 2026 23:51:07 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/517cbf23b46b1466e1fc67b4ba02154a52111ca59b31204ee722459c278e5ec1/eyJlcGlzb2RlSWQiOiJiNTQ4YjQ1YS1lMWMxLTQ4OGQtOTNiMC1mNTYwMDliYjU0NDMiLCJwb2RjYXN0SWQiOiIyNDZjZjQ4OC1kMmU1LTQ3ZDUtYWY0MC0yNDg1NGZmNDQ4NzAiLCJhY2NvdW50SWQiOiI2NmI5MDgxMGEwMjVkNzRkNWUyZTllMGQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjlhY2EwYTcxYzQxYjdkNWM5NTA3OTY5L3dvcmxkLWJyYWluLW5vLWV4cGVydHMtY29tcG9zZXItMjAyNi0zLTdfXzIzLTMtMTkubXAzIn0=.mp3" length="119317254" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.rs-prod.riverside.fm/media/podcasts/246cf488-d2e5-47d5-af40-24854ff44870/episodes/b548b45a-e1c1-488d-93b0-f56009bb5443/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;In what is easily our friendliest interview so far, Yuri and Matt engage with industry leaders Kemi Olugemo and Barbara Salami to consider 
how artificial intelligence is redefining the role of the expert, what it&apos;s like to try bridging socioeconomic and cultural divides, and how hopes and fears commingle as the pace and scope of change grow more violent every day.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;They also discuss &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;http://KAINDLY.AI&quot; target=&quot;_blank&quot;&gt;KAINDLY.AI&lt;/a&gt;, a newly founded organization dedicated to helping organizations design for equitable access and build collective capability, shared confidence, and the kind of trust that sustains transformation.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Notes and episodes: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;http://worldbrainnoexperts.substack.com&quot; target=&quot;_blank&quot;&gt;worldbrainnoexperts.substack.com&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>01:22:52</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/246cf488-d2e5-47d5-af40-24854ff44870/logos/f3a7ef7d-a50a-4c37-a2b3-4511addd53c4.jpeg"/><itunes:season>1</itunes:season><itunes:episode>6</itunes:episode><itunes:title>Ep 6: AI - Fears, Dreams, Experts with Kemi Olugemo and Barbara Salami from KAINDLY.AI</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Ep 5: Beyond the Book - AI and Technical Documentation with Tom Johnson and Floyd Jones]]></title><description><![CDATA[<p>Three tech writers and a photographer walk into a bar. The writers are mostly not writing for human readers directly anymore. How is this funny?</p><p></p><p>Tom Johnson, a technical writer at Google, publishes thoughts on trends in his work at <a rel="noopener noreferrer nofollow" href="http://idratherbewriting.com" target="_blank">idratherbewriting.com</a>. 
Tom sat down with Yuri, Matt, and another tech writer, Floyd Jones, to talk about how AI has altered the work Tech Writers do.</p><p></p><p>Notes and episodes: <a rel="noopener noreferrer nofollow" href="http://worldbrainnoexperts.substack.com/" target="_blank">worldbrainnoexperts.substack.com</a></p>]]></description><guid isPermaLink="false">d95e95a8-9e2b-4732-949d-3fd2066f8c1b</guid><dc:creator><![CDATA[Matt Brandabur, Yuri Marder]]></dc:creator><pubDate>Sat, 14 Feb 2026 22:31:04 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/74a947986788ea99f07e991bc8e85f39cb0c092a638c42c3180fbe69430e534c/eyJlcGlzb2RlSWQiOiJkOTVlOTVhOC05ZTJiLTQ3MzItOTQ5ZC0zZmQyMDY2ZjhjMWIiLCJwb2RjYXN0SWQiOiIyNDZjZjQ4OC1kMmU1LTQ3ZDUtYWY0MC0yNDg1NGZmNDQ4NzAiLCJhY2NvdW50SWQiOiI2NmI5MDgxMGEwMjVkNzRkNWUyZTllMGQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk5MGYyNGUwYTlkM2Q2YzlmMmQ3YzBlL3dvcmxkLWJyYWluLW5vLWV4cGVydHMtY29tcG9zZXItMjAyNi0yLTE0X18yMy04LTE0Lm1wMyJ9.mp3" length="46855959" type="audio/mpeg"/><itunes:summary>&lt;p&gt;Three tech writers and a photographer walk into a bar. The writers are mostly not writing for human readers directly anymore. How is this funny?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Tom Johnson, a technical writer at Google, publishes thoughts on trends in his work at &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;http://idratherbewriting.com&quot; target=&quot;_blank&quot;&gt;idratherbewriting.com&lt;/a&gt;. 
Tom sat down with Yuri, Matt, and another tech writer, Floyd Jones, to talk about how AI has altered the work Tech Writers do.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Notes and episodes: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;http://worldbrainnoexperts.substack.com/&quot; target=&quot;_blank&quot;&gt;worldbrainnoexperts.substack.com&lt;/a&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>01:37:37</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/246cf488-d2e5-47d5-af40-24854ff44870/logos/f3a7ef7d-a50a-4c37-a2b3-4511addd53c4.jpeg"/><itunes:season>1</itunes:season><itunes:episode>5</itunes:episode><itunes:title>Ep 5: Beyond the Book - AI and Technical Documentation with Tom Johnson and Floyd Jones</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Ep 4: AI Will Write All the Code, Ready or Not (with Chris Fregly)]]></title><description><![CDATA[<p>In this episode of World Brain: No Experts, Matt and Yuri interview technologist and author Chris Fregly about the accelerating integration of AI into software development and large-scale computing systems.</p><p></p><p>Drawing on experience at companies like Netflix, AWS, and Databricks, Fregly argues that AI-assisted coding is no longer optional but inevitable, asserting that teams should move toward fully AI-generated code rather than cautious hybrid approaches. He describes a workflow in which multiple models review and critique each other’s output, emphasizing evaluation systems (“evals”) over traditional unit tests and encouraging comfort with ambiguity and non-determinism. 
The conversation explores tensions between productivity gains and maintainability concerns, particularly around claims that AI-generated code introduces inconsistency or “slop.” Fregly concedes the point but explains how disciplined (but exhausting) prompt design, evaluation harnesses, and system-level instrumentation can mitigate these risks.</p><p></p><p>Will super-intelligent agents soon exist? "I hope so," says Chris.</p><p></p><p>Show Notes for Episode 4</p><p><a rel="noopener noreferrer nofollow" href="https://open.substack.com/pub/worldbrainnoexperts/p/show-notes-for-episode-4?r=9fpi3&amp;utm_campaign=post&amp;utm_medium=web" target="_blank">https://open.substack.com/pub/worldbrainnoexperts/p/show-notes-for-episode-4?r=9fpi3&amp;utm_campaign=post&amp;utm_medium=web</a></p><p></p><p>Subscribe to find out about new episodes:</p><p><a rel="noopener noreferrer nofollow" href="https://worldbrainnoexperts.substack.com/" target="_blank">https://worldbrainnoexperts.substack.com/</a></p>]]></description><guid isPermaLink="false">82f9f4da-d1b7-4d3b-9c68-bd019575e6a6</guid><dc:creator><![CDATA[Matt Brandabur, Yuri Marder]]></dc:creator><pubDate>Mon, 09 Feb 2026 07:17:11 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/6f883f28dda9db6335f8ea93060d60650da4afa64613f2a1fa16e59208ab347d/eyJlcGlzb2RlSWQiOiI4MmY5ZjRkYS1kMWI3LTRkM2ItOWM2OC1iZDAxOTU3NWU2YTYiLCJwb2RjYXN0SWQiOiIyNDZjZjQ4OC1kMmU1LTQ3ZDUtYWY0MC0yNDg1NGZmNDQ4NzAiLCJhY2NvdW50SWQiOiI2NmI5MDgxMGEwMjVkNzRkNWUyZTllMGQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk4OTg2OTA4MjlkMDFmYzEwZTllN2RkL3dvcmxkLWJyYWluLW5vLWV4cGVydHMtY29tcG9zZXItMjAyNi0yLTlfXzgtMi00MC5tcDMifQ==.mp3" length="52535606" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode of World Brain: No Experts, Matt and Yuri interview technologist and author Chris Fregly about the accelerating integration of AI into software development and large-scale computing systems.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Drawing on experience at companies like 
Netflix, AWS, and Databricks, Fregly argues that AI-assisted coding is no longer optional but inevitable, asserting that teams should move toward fully AI-generated code rather than cautious hybrid approaches. He describes a workflow in which multiple models review and critique each other’s output, emphasizing evaluation systems (“evals”) over traditional unit tests and encouraging comfort with ambiguity and non-determinism. The conversation explores tensions between productivity gains and maintainability concerns, particularly around claims that AI-generated code introduces inconsistency or “slop.” Fregly concedes the point but explains how disciplined (but exhausting) prompt design, evaluation harnesses, and system-level instrumentation can mitigate these risks.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Will super-intelligent agents soon exist? &quot;I hope so,&quot; says Chris.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Show Notes for Episode 4&lt;/p&gt;&lt;p&gt;&lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://open.substack.com/pub/worldbrainnoexperts/p/show-notes-for-episode-4?r=9fpi3&amp;amp;utm_campaign=post&amp;amp;utm_medium=web&quot; target=&quot;_blank&quot;&gt;https://open.substack.com/pub/worldbrainnoexperts/p/show-notes-for-episode-4?r=9fpi3&amp;amp;utm_campaign=post&amp;amp;utm_medium=web&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Subscribe to find out about new episodes:&lt;/p&gt;&lt;p&gt;&lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://worldbrainnoexperts.substack.com/&quot; target=&quot;_blank&quot;&gt;https://worldbrainnoexperts.substack.com/&lt;/a&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>01:49:27</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/246cf488-d2e5-47d5-af40-24854ff44870/logos/f3a7ef7d-a50a-4c37-a2b3-4511addd53c4.jpeg"/><itunes:season>1</itunes:season><itunes:episode>4</itunes:episode><itunes:title>Ep 4: AI Will Write All 
the Code, Ready or Not (with Chris Fregly)</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Ep 3: Can AI Understand Meaning? (with Jobst Landgrebe)]]></title><description><![CDATA[<p>Jobst Landgrebe, co-author (with Barry Smith) of <i>Why Machines Will Never Rule the World: Artificial Intelligence Without Fear</i>, joins Matt and Yuri for a wide-ranging argument about what AI can and cannot do. Landgrebe claims that minds and living systems are complex systems shaped by history and irreversibility, and that LLMs can imitate language without understanding meaning in open contexts. Yuri pushes back with an “approximation” critique—planes don’t fly like birds, yet they outperform birds—asking why AI couldn’t surpass humans in many domains without “real” understanding. The conversation moves from philosophy and neuroscience to economics, scaling narratives, and the political risks of AI-enabled surveillance and propaganda.</p><p><a rel="noopener noreferrer nofollow" href="https://worldbrainnoexperts.substack.com/p/show-notes-for-episode-3?r=9fpi3" target="_blank">Show notes</a></p><p></p><p><a rel="noopener noreferrer nofollow" href="worldbrainnoexperts.substack.com" target="_blank">Substack</a></p>]]></description><guid isPermaLink="false">5572cffd-6525-452c-a63c-dbca092180b6</guid><dc:creator><![CDATA[Matt Brandabur, Yuri Marder]]></dc:creator><pubDate>Sat, 17 Jan 2026 22:14:29 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/8ff28379afd92331e9df1cf7829f0bb013d8b713b8344d583d3e05cbf632bd03/eyJlcGlzb2RlSWQiOiI1NTcyY2ZmZC02NTI1LTQ1MmMtYTYzYy1kYmNhMDkyMTgwYjYiLCJwb2RjYXN0SWQiOiIyNDZjZjQ4OC1kMmU1LTQ3ZDUtYWY0MC0yNDg1NGZmNDQ4NzAiLCJhY2NvdW50SWQiOiI2NmI5MDgxMGEwMjVkNzRkNWUyZTllMGQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk2YzAxYzMzOTcyNjcxOWI1NTZmM2VkL3dvcmxkLWJyYWluLW5vLWV4cGVydHMtY29tcG9zZXItMjAyNi0xLTE3X18yMi00MC0xOS5tcDMifQ==.mp3" length="86633827" type="audio/mpeg"/><itunes:summary>&lt;p&gt;Jobst Landgrebe, 
co-author (with Barry Smith) of &lt;i&gt;Why Machines Will Never Rule the World: Artificial Intelligence Without Fear&lt;/i&gt;, joins Matt and Yuri for a wide-ranging argument about what AI can and cannot do. Landgrebe claims that minds and living systems are complex systems shaped by history and irreversibility, and that LLMs can imitate language without understanding meaning in open contexts. Yuri pushes back with an “approximation” critique—planes don’t fly like birds, yet they outperform birds—asking why AI couldn’t surpass humans in many domains without “real” understanding. The conversation moves from philosophy and neuroscience to economics, scaling narratives, and the political risks of AI-enabled surveillance and propaganda.&lt;/p&gt;&lt;p&gt;&lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://worldbrainnoexperts.substack.com/p/show-notes-for-episode-3?r=9fpi3&quot; target=&quot;_blank&quot;&gt;Show notes&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;worldbrainnoexperts.substack.com&quot; target=&quot;_blank&quot;&gt;Substack&lt;/a&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>01:53:38</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/246cf488-d2e5-47d5-af40-24854ff44870/logos/f3a7ef7d-a50a-4c37-a2b3-4511addd53c4.jpeg"/><itunes:season>1</itunes:season><itunes:episode>3</itunes:episode><itunes:title>Ep 3: Can AI Understand Meaning? (with Jobst Landgrebe)</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Ep 1:  Are There Any AI Experts? (Wells, Consciousness, and the End of Shared Truth)]]></title><description><![CDATA[<p>In the first episode of <i>World Brain: No Experts</i>, Matt Brandabur and Yuri Marder use H. G. Wells’ 1938 <i>World Brain</i> essays as a starting point for an argument about artificial intelligence, expertise, and human identity. 
Yuri sees AI as a transformative force on the scale of language itself—while Matt insists that machines cannot reproduce the lived quality of biological consciousness. The conversation ranges across creativity, shared truth, and what it means to understand anything at all.</p><p><a rel="noopener noreferrer nofollow" href="https://worldbrainnoexperts.substack.com/p/show-notes-for-episode-1?r=9fpi3" target="_blank">Show notes</a></p><p><a rel="noopener noreferrer nofollow" href="worldbrainnoexperts.substack.com" target="_blank">Substack</a></p><p></p><p></p>]]></description><guid isPermaLink="false">a8e5f82b-45c0-4a20-8645-3c5ce025b94b</guid><dc:creator><![CDATA[Matt Brandabur, Yuri Marder]]></dc:creator><pubDate>Wed, 17 Dec 2025 20:58:05 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/5552523ba15b5d32ffe2083b9b33a47095c04e6637db22cc58a6b12662f2ebcd/eyJlcGlzb2RlSWQiOiJhOGU1ZjgyYi00NWMwLTRhMjAtODY0NS0zYzVjZTAyNWI5NGIiLCJwb2RjYXN0SWQiOiIyNDZjZjQ4OC1kMmU1LTQ3ZDUtYWY0MC0yNDg1NGZmNDQ4NzAiLCJhY2NvdW50SWQiOiI2NmI5MDgxMGEwMjVkNzRkNWUyZTllMGQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk0MjcxMjg4ZjU5ZDAwY2ZmNzJiNzAxL3dvcmxkLWJyYWluLW5vLWV4cGVydHMtY29tcG9zZXItMjAyNS0xMi0xN19fMTAtMC0yNC5tcDMifQ==.mp3" length="51266218" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In the first episode of &lt;i&gt;World Brain: No Experts&lt;/i&gt;, Matt Brandabur and Yuri Marder use H. G. Wells’ 1938 &lt;i&gt;World Brain&lt;/i&gt; essays as a starting point for an argument about artificial intelligence, expertise, and human identity. Yuri sees AI as a transformative force on the scale of language itself—while Matt insists that machines cannot reproduce the lived quality of biological consciousness. 
The conversation ranges across creativity, shared truth, and what it means to understand anything at all.&lt;/p&gt;&lt;p&gt;&lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://worldbrainnoexperts.substack.com/p/show-notes-for-episode-1?r=9fpi3&quot; target=&quot;_blank&quot;&gt;Show notes&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;worldbrainnoexperts.substack.com&quot; target=&quot;_blank&quot;&gt;Substack&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>01:15:23</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/246cf488-d2e5-47d5-af40-24854ff44870/logos/f3a7ef7d-a50a-4c37-a2b3-4511addd53c4.jpeg"/><itunes:season>1</itunes:season><itunes:episode>1</itunes:episode><itunes:title>Ep 1:  Are There Any AI Experts? (Wells, Consciousness, and the End of Shared Truth)</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Ep 2: AI as a Rorschach Test (Tool, Mirror, or Mass Delusion?)]]></title><description><![CDATA[<p>In this episode, Matt and Yuri treat AI less as a single “thing” and more as a cultural mirror—an unsettling Rorschach test that reveals what humans and institutions fear, desire, and project. 
Matt argues that the current frenzy resembles a mass hallucination driven by investor fantasies of automation, while Yuri counters with a grounded case: in his daily work, AI has become an indispensable collaborator that removes bottlenecks and accelerates real output.</p><p></p><p><a rel="noopener noreferrer nofollow" href="https://worldbrainnoexperts.substack.com/p/show-notes-for-episode-2?r=9fpi3" target="_blank">Show notes</a></p><p><a rel="noopener noreferrer nofollow" href="worldbrainnoexperts.substack.com" target="_blank">Substack</a></p><p></p>]]></description><guid isPermaLink="false">2b87fb30-8709-4c95-abb1-e660818b4246</guid><dc:creator><![CDATA[Matt Brandabur, Yuri Marder]]></dc:creator><pubDate>Mon, 08 Dec 2025 07:15:36 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/f5978e10a9f66f0a012b5030ec6b73b8cae7f1f919a0559291b4c8b37382baed/eyJlcGlzb2RlSWQiOiIyYjg3ZmIzMC04NzA5LTRjOTUtYWJiMS1lNjYwODE4YjQyNDYiLCJwb2RjYXN0SWQiOiIyNDZjZjQ4OC1kMmU1LTQ3ZDUtYWY0MC0yNDg1NGZmNDQ4NzAiLCJhY2NvdW50SWQiOiI2NmI5MDgxMGEwMjVkNzRkNWUyZTllMGQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjkzNjc3NmE3YTc5NmU3OWVhZmYxOWY5L3dvcmxkLWJyYWluLW5vLWV4cGVydHMtY29tcG9zZXItMjAyNS0xMi04X183LTU5LTU0Lm1wMyJ9.mp3" length="35718856" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode, Matt and Yuri treat AI less as a single “thing” and more as a cultural mirror—an unsettling Rorschach test that reveals what humans and institutions fear, desire, and project. 
Matt argues that the current frenzy resembles a mass hallucination driven by investor fantasies of automation, while Yuri counters with a grounded case: in his daily work, AI has become an indispensable collaborator that removes bottlenecks and accelerates real output.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://worldbrainnoexperts.substack.com/p/show-notes-for-episode-2?r=9fpi3&quot; target=&quot;_blank&quot;&gt;Show notes&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;worldbrainnoexperts.substack.com&quot; target=&quot;_blank&quot;&gt;Substack&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:55:46</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/246cf488-d2e5-47d5-af40-24854ff44870/logos/f3a7ef7d-a50a-4c37-a2b3-4511addd53c4.jpeg"/><itunes:season>1</itunes:season><itunes:episode>2</itunes:episode><itunes:title>Ep 2: AI as a Rorschach Test (Tool, Mirror, or Mass Delusion?)</itunes:title><itunes:episodeType>full</itunes:episodeType></item></channel></rss>