<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:psc="http://podlove.org/simple-chapters" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><title><![CDATA[No Effing AIdea!]]></title><description><![CDATA[No Effing AIdea! is where enterprise leaders get the real story on AI adoption. Forget the hype and vendor theatre — this is the messy middle, where boards want moonshots, compliance says no, and customers push back on brilliance.

Hosts Srini Annamaraju and David Royle bring 30+ years of earned scars in digital and AI transformation. Every fortnight, they cut through the noise with:

The Cold Open — a provocative stat or story you should be paying attention to (AI ethics, job displacement, enterprise fraud, you name it)

The Reality Check — a fast, unfiltered rundown of the last two weeks in enterprise AI, decoded for what it really means

The Deep Dive — one paradox in focus, like midsize firms “too big for hacks, too lean for moonshots,” or boards demanding both ROI and revolution at once

The Paradox Box — candid Q&A with execs, founders, and investors wrestling with the contradictions of AI in the enterprise

It’s pragmatic, funny, and sometimes brutal. Finally, a podcast that talks about AI transformation like adults who’ve actually been there.]]></description><link>https://riverside.com</link><generator>Riverside.fm (https://riverside.com)</generator><lastBuildDate>Thu, 07 May 2026 10:04:46 GMT</lastBuildDate><atom:link href="https://api.riverside.com/hosting/XylEDoAY.rss" rel="self" type="application/rss+xml"/><author><![CDATA[Srini and David]]></author><pubDate>Thu, 11 Sep 2025 17:39:08 GMT</pubDate><copyright><![CDATA[2025 Srini and David]]></copyright><language><![CDATA[en]]></language><ttl>60</ttl><category><![CDATA[Business]]></category><category><![CDATA[Technology]]></category><itunes:author>Srini and David</itunes:author><itunes:summary>No Effing AIdea! is where enterprise leaders get the real story on AI adoption. Forget the hype and vendor theatre — this is the messy middle, where boards want moonshots, compliance says no, and customers push back on brilliance.

Hosts Srini Annamaraju and David Royle bring 30+ years of earned scars in digital and AI transformation. Every fortnight, they cut through the noise with:

The Cold Open — a provocative stat or story you should be paying attention to (AI ethics, job displacement, enterprise fraud, you name it)

The Reality Check — a fast, unfiltered rundown of the last two weeks in enterprise AI, decoded for what it really means

The Deep Dive — one paradox in focus, like midsize firms “too big for hacks, too lean for moonshots,” or boards demanding both ROI and revolution at once

The Paradox Box — candid Q&amp;A with execs, founders, and investors wrestling with the contradictions of AI in the enterprise

It’s pragmatic, funny, and sometimes brutal. Finally, a podcast that talks about AI transformation like adults who’ve actually been there.</itunes:summary><itunes:type>episodic</itunes:type><itunes:owner><itunes:name>Srini and David</itunes:name><itunes:email>srini.annamaraju@gmail.com</itunes:email></itunes:owner><itunes:explicit>no</itunes:explicit><itunes:category text="Business"/><itunes:category text="Technology"/><itunes:image href="https://hosting-media.riverside.com/media/podcasts/59262203-5254-4406-b6a4-785187a81892/logos/4eb9df3f-f142-4991-acd1-66c585202c54.png"/><item><title><![CDATA[Ep #8: Enterprise AI Field Notes: Agents at Work, Live Demo, Guardrails + Responsible Innovation]]></title><description><![CDATA[<p>Hosts: <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/sriniuk/" target="_blank">Srini Annamaraju</a> &amp; <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/davidroyle/" target="_blank">David Royle</a>.</p><p>Guest: <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/raviramchandran/" target="_blank">Ravi Ramchandran.</a></p><p></p><p>Welcome to episode 8.</p><p></p><p>AI agents are getting easier to build. That’s the exciting bit. The risky bit is that organisations can now create weak, badly governed automations before leadership has worked out what “good” actually looks like.</p><p></p><p>In this episode, Ravi joins Srini and David to pull the conversation out of buzzword-land and into real work. He walks through a practical example of building an agent that turns meeting transcripts into status reports, then digs into what matters underneath: prompt discipline, guardrails, safe experimentation, risk metrics, and why handing people tools without changing operating practice is asking for trouble.</p><p></p><p>The conversation moves from macro AI noise to enterprise reality. How should leaders think about the 70-20-10 split of routine, experimental, and visionary work? 
Where does human friction still belong? And how do you encourage innovation without creating a quiet flood of low-quality AI output across the firm?</p><p></p><p><b>What we cover</b></p><ul><li>Macro AI reality check - Why the sensible middle matters more than the hype-or-panic cycle.</li><li>Productivity is starting to show up - Early signs of measurable uplift are emerging, even if the landing is still messy.</li><li>The 70-20-10 work model - How to cut routine work and create more room for experimentation and higher-order thinking.</li><li>Innovation becomes everybody’s job - The barrier to building has dropped so far that innovation can’t stay in a corporate side room.</li><li>A live agent example - Ravi demonstrates how meeting transcripts can be turned into weekly status reporting.</li><li>Why prompts are not enough - One decent output is not the same as a repeatable capability.</li><li>Risk metrics for the AI era - Traditional productivity measures are no longer enough.</li><li>A seven-day build plan - Ravi shares a practical way to identify, scope, and build useful agents.</li></ul><p></p><p><b>Chapters</b></p><ol><li>AI noise vs real enterprise adoption</li><li>Why productivity gains are starting to matter</li><li>The 70-20-10 model for redesigning work</li><li>Innovation becomes everybody’s business</li><li>Live demo: agent for weekly status reports</li><li>Prompting, grounding, and hallucination risk</li><li>Guardrails, policy, and engineering practice</li><li>Risk metrics and trust in production</li><li>A seven-day framework for useful agents</li></ol><p></p><p><b>Top-5 Takeaways</b></p><ul><li>Tools alone do not transform organisations</li><li>Agents need boundaries, not vibes</li><li>AI risk is now operational risk</li><li>Safe experimentation needs leadership air cover</li><li>Frameworks beat random enthusiasm<p></p></li></ul><p><b>Who it’s for</b></p><p>Enterprise Leaders in all functions interested in AI adoption.  
</p><p></p><p><b>Help Spread the Word </b></p><p>Enjoyed the episode? Follow us!</p><p></p><p><b>Template Takeaways</b></p><p>Ravi has kindly shared these two templates he walked us through for general open access. Please feel free to download them from this Google Drive folder. </p><p></p><p><a rel="noopener noreferrer nofollow" href="https://drive.google.com/drive/folders/1yKGryaEQ4lM8hLSf1il3jrqbZj4XgHrt?usp=sharing" target="_blank">https://drive.google.com/drive/folders/1yKGryaEQ4lM8hLSf1il3jrqbZj4XgHrt?usp=sharing</a> </p>]]></description><guid isPermaLink="false">9fa13de1-20c7-43a0-80a2-2d2d46f774fd</guid><dc:creator><![CDATA[Srini and David]]></dc:creator><pubDate>Sun, 08 Mar 2026 11:04:49 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/c15074b995616841ab8efef6e456b4d0b654d64560502b29fa9e2aa11304220e/eyJlcGlzb2RlSWQiOiI5ZmExM2RlMS0yMGM3LTQzYTAtODBhMi0yZDJkNDZmNzc0ZmQiLCJwb2RjYXN0SWQiOiI1OTI2MjIwMy01MjU0LTQ0MDYtYjZhNC03ODUxODdhODE4OTIiLCJhY2NvdW50SWQiOiI2ODVkMDdhODdjODcwMjIwZmFiNjkxNTgiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjlhZDU3ZDFiM2FhYzhiYmJlMDkwODVhL3NyaW5pdmFzcy1zdHVkaW8tUXNxVWItY29tcG9zZXItMjAyNi0zLThfXzEyLTQtNDkubXAzIn0=.mp3" length="79472161" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.riverside.com/media/podcasts/59262203-5254-4406-b6a4-785187a81892/episodes/9fa13de1-20c7-43a0-80a2-2d2d46f774fd/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;Hosts: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/sriniuk/&quot; target=&quot;_blank&quot;&gt;Srini Annamaraju&lt;/a&gt; &amp;amp; &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/davidroyle/&quot; target=&quot;_blank&quot;&gt;David Royle&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;Guest: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/raviramchandran/&quot; target=&quot;_blank&quot;&gt;Ravi 
Ramchandran.&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Welcome to episode 8.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;AI agents are getting easier to build. That’s the exciting bit. The risky bit is that organisations can now create weak, badly governed automations before leadership has worked out what “good” actually looks like.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In this episode, Ravi joins Srini and David to pull the conversation out of buzzword-land and into real work. He walks through a practical example of building an agent that turns meeting transcripts into status reports, then digs into what matters underneath: prompt discipline, guardrails, safe experimentation, risk metrics, and why handing people tools without changing operating practice is asking for trouble.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The conversation moves from macro AI noise to enterprise reality. How should leaders think about the 70-20-10 split of routine, experimental, and visionary work? Where does human friction still belong? 
And how do you encourage innovation without creating a quiet flood of low-quality AI output across the firm?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;What we cover&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Macro AI reality check - Why the sensible middle matters more than the hype-or-panic cycle.&lt;/li&gt;&lt;li&gt;Productivity is starting to show up - Early signs of measurable uplift are emerging, even if the landing is still messy.&lt;/li&gt;&lt;li&gt;The 70-20-10 work model - How to cut routine work and create more room for experimentation and higher-order thinking.&lt;/li&gt;&lt;li&gt;Innovation becomes everybody’s job - The barrier to building has dropped so far that innovation can’t stay in a corporate side room.&lt;/li&gt;&lt;li&gt;A live agent example - Ravi demonstrates how meeting transcripts can be turned into weekly status reporting.&lt;/li&gt;&lt;li&gt;Why prompts are not enough - One decent output is not the same as a repeatable capability.&lt;/li&gt;&lt;li&gt;Risk metrics for the AI era - Traditional productivity measures are no longer enough.&lt;/li&gt;&lt;li&gt;A seven-day build plan - Ravi shares a practical way to identify, scope, and build useful agents.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Chapters&lt;/b&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;AI noise vs real enterprise adoption&lt;/li&gt;&lt;li&gt;Why productivity gains are starting to matter&lt;/li&gt;&lt;li&gt;The 70-20-10 model for redesigning work&lt;/li&gt;&lt;li&gt;Innovation becomes everybody’s business&lt;/li&gt;&lt;li&gt;Live demo: agent for weekly status reports&lt;/li&gt;&lt;li&gt;Prompting, grounding, and hallucination risk&lt;/li&gt;&lt;li&gt;Guardrails, policy, and engineering practice&lt;/li&gt;&lt;li&gt;Risk metrics and trust in production&lt;/li&gt;&lt;li&gt;A seven-day framework for useful agents&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Top-5 Takeaways&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Tools alone do not transform 
organisations&lt;/li&gt;&lt;li&gt;Agents need boundaries, not vibes&lt;/li&gt;&lt;li&gt;AI risk is now operational risk&lt;/li&gt;&lt;li&gt;Safe experimentation needs leadership air cover&lt;/li&gt;&lt;li&gt;Frameworks beat random enthusiasm&lt;p&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;Who it’s for&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Enterprise Leaders in all functions interested in AI adoption.  &lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Help Spread the Word &lt;/b&gt;&lt;/p&gt;&lt;p&gt;Enjoyed the episode? Follow us!&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Template Takeaways&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Ravi has kindly shared these two templates he walked us through for general open access. Please feel free to download them from this Google Drive folder. &lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://drive.google.com/drive/folders/1yKGryaEQ4lM8hLSf1il3jrqbZj4XgHrt?usp=sharing&quot; target=&quot;_blank&quot;&gt;https://drive.google.com/drive/folders/1yKGryaEQ4lM8hLSf1il3jrqbZj4XgHrt?usp=sharing&lt;/a&gt; &lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:55:11</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/podcasts/59262203-5254-4406-b6a4-785187a81892/logos/4eb9df3f-f142-4991-acd1-66c585202c54.png"/><itunes:season>1</itunes:season><itunes:episode>8</itunes:episode><itunes:title>Ep #8: Enterprise AI Field Notes: Agents at Work, Live Demo, Guardrails + Responsible Innovation</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Ep #7: Enterprise AI Field Notes: Live cohort for Evals Careers,  AI Trust, Governance + Spicy News!]]></title><description><![CDATA[<p>Hosts: <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/sriniuk/" target="_blank">Srini Annamaraju</a> &amp; <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/davidroyle/" target="_blank">David 
Royle</a></p><p></p><p><b>“Evals are the weak link in enterprise AI adoption.”</b></p><p></p><p><i>And we say it like it is in our Maven cohort Lightning Lesson. Enrol here or see the recording - or join the waitlist for the paid 4-part course (tba): </i></p><p></p><p><a rel="noopener noreferrer nofollow" href="https://shorturl.at/lA9ig" target="_blank">https://shorturl.at/lA9ig</a></p><p></p><p><b>This episode is a proper grilling on AI Evals</b>: what they are, why boards should care, and why “ship it now, eval it later” is how you end up with a quiet disaster. </p><p></p><p>We also do a quick sweep on vendors going more “enterprise-native” (less benchmark theatre, more workflow reality).</p><p></p><p><b>What we cover</b></p><ul><li>Enterprise AI news: vendors shifting from benchmarks to enterprise workflows</li><li>OpenAI’s Enterprise report highlights </li><li>UiPath as the “plug-in hybrid” of automation: deterministic RPA meets GenAI via connectors (and why that blend might win)</li><li>What evals actually are: accuracy, citations, groundedness, hallucinations</li><li>Vendor reality: some push AI first and worry about evals later, others oversell eval tooling. Error analysis still matters</li><li>Evals as the connective tissue between value, risk, and operations. 
Proactive, not post-mortem-after-the-horses-bolted</li><li>The EDSO “four hats” operating model (Echo, Delta, Sigma, Omega) and why boards need the Omega translation layer</li><li>Maturity and scaling: small firms can fuse hats, even one-person pods for bounded scopes</li><li>Agentic future: “checker agents”, Delta agents writing eval harnesses, humans steering fleets of agents</li><li>Why SMEs lag, and how eval expectations will percolate through supply chains </li></ul><p></p><p><b>Chapters</b></p><ul><li>00:02 Intro: Episode 7, cold UK afternoon, messy middle of enterprise AI</li><li>00:56 AI news: enterprise context is the new battleground</li><li>02:45 OpenAI Enterprise report headlines</li><li>10:16 UiPath, hybrid automation, and the “plug-in hybrid” analogy</li><li>12:53 The grilling starts: what are evals?</li><li>17:02 Is AI risk being exaggerated to sell governance tools?</li><li>19:45 Evals as connective tissue, and why proactive matters</li><li>21:55 The EDSO roles and what “good” looks like</li><li>25:21 Maturity levels and how smaller firms scope it</li><li>26:58 Checker agents and agentic operating models</li><li>28:58 Business case problem: cost vs avoided disaster</li><li>32:14 Evals in SMEs and supply-chain pressure</li><li>33:26 Close: “survived the grilling”</li></ul><p></p><p><b>Takeaways</b></p><ul><li>Evals are not paperwork. They’re how you keep the value chain connected to operations without risk blowing up later.</li><li>Don’t let vendors sell you “tooling-as-a-substitute-for-thinking.” You still need human error analysis and clear accountability.</li><li>Treat EDSO as hats, not headcount. 
Start bounded, prove value, then scale.</li><li>Evals is becoming a career lane (think “AI eval controller” the way finance has controllers).</li><li>The agentic world will add “checker agents” and automated harness-writing, but humans still steer the system.</li></ul><p></p><p><b>Who it’s for</b></p><p>CIOs, CDOs, CAIOs, Heads of Risk, and anyone trying to ship enterprise AI without quietly lighting their control environment on fire. Also, anyone building a real career edge around AI trust and operational quality.</p>]]></description><guid isPermaLink="false">5e695d3e-5eeb-4cb7-bd4d-185dfef795a4</guid><dc:creator><![CDATA[Srini and David]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:43:23 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/fef9ddbabea830d1762e621cb24fcd69cd6e3286d6f8606994ce242e7ac25ab8/eyJlcGlzb2RlSWQiOiI1ZTY5NWQzZS01ZWViLTRjYjctYmQ0ZC0xODVkZmVmNzk1YTQiLCJwb2RjYXN0SWQiOiI1OTI2MjIwMy01MjU0LTQ0MDYtYjZhNC03ODUxODdhODE4OTIiLCJhY2NvdW50SWQiOiI2ODVkMDdhODdjODcwMjIwZmFiNjkxNTgiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk0MDAyNmNiYmU2YmYzNzU2NTY0Y2VkL3NyaW5pdmFzcy1zdHVkaW8tUXNxVWItY29tcG9zZXItMjAyNS0xMi0xNV9fMTMtNDMtMjMubXAzIn0=.mp3" length="25705645" type="audio/mpeg"/><itunes:summary>&lt;p&gt;Hosts: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/sriniuk/&quot; target=&quot;_blank&quot;&gt;Srini Annamaraju&lt;/a&gt; &amp;amp; &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/davidroyle/&quot; target=&quot;_blank&quot;&gt;David Royle&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;“Evals are the weak link in enterprise AI adoption.”&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;And we say it like it is in our Maven cohort Lightning Lesson. 
Enrol here or see the recording - or join the waitlist for the paid 4-part course (tba): &lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://shorturl.at/lA9ig&quot; target=&quot;_blank&quot;&gt;https://shorturl.at/lA9ig&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;This episode is a proper grilling on AI Evals&lt;/b&gt;: what they are, why boards should care, and why “ship it now, eval it later” is how you end up with a quiet disaster. &lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We also do a quick sweep on vendors going more “enterprise-native” (less benchmark theatre, more workflow reality).&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;What we cover&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Enterprise AI news: vendors shifting from benchmarks to enterprise workflows&lt;/li&gt;&lt;li&gt;OpenAI’s Enterprise report highlights &lt;/li&gt;&lt;li&gt;UiPath as the “plug-in hybrid” of automation: deterministic RPA meets GenAI via connectors (and why that blend might win)&lt;/li&gt;&lt;li&gt;What evals actually are: accuracy, citations, groundedness, hallucinations&lt;/li&gt;&lt;li&gt;Vendor reality: some push AI first and worry about evals later, others oversell eval tooling. Error analysis still matters&lt;/li&gt;&lt;li&gt;Evals as the connective tissue between value, risk, and operations. 
Proactive, not post-mortem-after-the-horses-bolted&lt;/li&gt;&lt;li&gt;The EDSO “four hats” operating model (Echo, Delta, Sigma, Omega) and why boards need the Omega translation layer&lt;/li&gt;&lt;li&gt;Maturity and scaling: small firms can fuse hats, even one-person pods for bounded scopes&lt;/li&gt;&lt;li&gt;Agentic future: “checker agents”, Delta agents writing eval harnesses, humans steering fleets of agents&lt;/li&gt;&lt;li&gt;Why SMEs lag, and how eval expectations will percolate through supply chains &lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Chapters&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;00:02 Intro: Episode 7, cold UK afternoon, messy middle of enterprise AI&lt;/li&gt;&lt;li&gt;00:56 AI news: enterprise context is the new battleground&lt;/li&gt;&lt;li&gt;02:45 OpenAI Enterprise report headlines&lt;/li&gt;&lt;li&gt;10:16 UiPath, hybrid automation, and the “plug-in hybrid” analogy&lt;/li&gt;&lt;li&gt;12:53 The grilling starts: what are evals?&lt;/li&gt;&lt;li&gt;17:02 Is AI risk being exaggerated to sell governance tools?&lt;/li&gt;&lt;li&gt;19:45 Evals as connective tissue, and why proactive matters&lt;/li&gt;&lt;li&gt;21:55 The EDSO roles and what “good” looks like&lt;/li&gt;&lt;li&gt;25:21 Maturity levels and how smaller firms scope it&lt;/li&gt;&lt;li&gt;26:58 Checker agents and agentic operating models&lt;/li&gt;&lt;li&gt;28:58 Business case problem: cost vs avoided disaster&lt;/li&gt;&lt;li&gt;32:14 Evals in SMEs and supply-chain pressure&lt;/li&gt;&lt;li&gt;33:26 Close: “survived the grilling”&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Takeaways&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Evals are not paperwork. They’re how you keep the value chain connected to operations without risk blowing up later.&lt;/li&gt;&lt;li&gt;Don’t let vendors sell you “tooling-as-a-substitute-for-thinking.” You still need human error analysis and clear accountability.&lt;/li&gt;&lt;li&gt;Treat EDSO as hats, not headcount. 
Start bounded, prove value, then scale.&lt;/li&gt;&lt;li&gt;Evals is becoming a career lane (think “AI eval controller” the way finance has controllers).&lt;/li&gt;&lt;li&gt;The agentic world will add “checker agents” and automated harness-writing, but humans still steer the system.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Who it’s for&lt;/b&gt;&lt;/p&gt;&lt;p&gt;CIOs, CDOs, CAIOs, Heads of Risk, and anyone trying to ship enterprise AI without quietly lighting their control environment on fire. Also, anyone building a real career edge around AI trust and operational quality.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:37:37</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/podcasts/59262203-5254-4406-b6a4-785187a81892/logos/4eb9df3f-f142-4991-acd1-66c585202c54.png"/><itunes:season>1</itunes:season><itunes:episode>7</itunes:episode><itunes:title>Ep #7: Enterprise AI Field Notes: Live cohort for Evals Careers,  AI Trust, Governance + Spicy News!</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Ep #6: Enterprise AI Field Notes: Shadow AI, Fwd Deployed Eval Engrs, AI Drift, Board Governance ]]></title><description><![CDATA[<p>Hosts: <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/sriniuk/" target="_blank">Srini Annamaraju</a> &amp; <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/davidroyle/" target="_blank">David Royle</a>.</p><p></p><p>“The AI bubble is the wrong fear.”</p><p></p><p>The real threat sits inside your own walls: shadow AI you don’t see, boards that confuse risk aversion with risk management, and leaders trying to govern a technology they don’t actually understand.</p><p></p><p>We unpack why mid-market boards are exposed, how shadow AI reveals the truth about how your org really works, and what an actually realistic 12-month AI plan looks like.</p><p>And yes—why people, not models, are 
now the biggest AI risk vector.</p><p></p><p>The conversation revolves around a recent paper that David authored; <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/posts/davidroyle_guiding-ai-strategy-the-boards-imperative-activity-7391871577631608832-_3IA?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAFdNaABIvmIsUfUKUvr8ukvjoIDAxHQRVc" target="_blank">the post with the details is here</a>. </p><p></p><p><b>What we cover</b></p><ol><li>Bubble noise vs fundamentals - Valuations swing wildly, but enterprise AI maturity rises daily. We explain why the bubble noise has nothing to do with the technology reshaping your org.</li><li>Shadow AI as diagnosis - It’s not a tooling problem but a symptom of mismatched expectations. </li><li>Boards: from passive listeners to owners - Why literacy is step zero, and why chairs need to move fast. </li><li>Risk aversion trap - The boards that “get it” flip from “should we?” to “how quickly, safely, and visibly can we?” </li><li>90-day governance playbook - Inventory → Validate → Govern. </li><li>Top-down vs bottom-up AI - How grassroots use cases and board-led operating models collide. </li><li>12-month reality check - You won’t be AI-first in a year. But you can be an AI-literate, AI-safe, AI-enabled organisation in 12 months. </li><li>Explainability anxiety - Why boards demand transparency from AI they never asked of spreadsheets or humans. </li><li>The uncomfortable truth - The biggest AI risk isn’t the model. It’s your people. 
</li><li>Evals preview - Why audits, trust contracts, drift checks, and forward-deployed evaluators will soon be board-level concerns.</li></ol><p></p><p><b>Chapters</b></p><ul><li>AI bubble vs enterprise fundamentals</li><li>Shadow AI as a symptom</li><li>Boards falling behind</li><li>Risk aversion vs risk management</li><li>90-day governance plan</li><li>A realistic 12-month AI horizon</li><li>The real AI risk: people</li><li>Intro to enterprise evals</li></ul><p></p><p><b>Takeaways</b></p><ul><li>Shadow AI is a mirror - reveals gaps in culture, process, and leadership direction, not tooling.</li><li>Boards must lead, not observe - Active literacy and ownership are key.</li><li>Governance is the stabiliser. Inventories, validations, guardrails, and oversight reduce drift &amp; exposure.</li><li>Explainability is contextual. Set boundaries, not magic expectations.</li><li>People are the attack surface. Don't miss non-malicious misuse.</li><li>12 months = foundations. Literacy, safety, and one high-value use case per function. That’s the win.</li></ul><p></p><p><b>Who it’s for</b></p><p>Board members, CEOs, COOs, CIOs, CROs, and mid-market operators needing a grounded, real-world view of AI risk, governance, and organisational maturity. </p><p></p><p><b>Help Spread the Word - </b>Enjoyed the episode? Follow the show, leave a review, and share with a colleague grappling with shadow AI, governance gaps, or board-level AI decisions.  Want to join as a guest or sponsor a future episode? 
Get in touch!</p>]]></description><guid isPermaLink="false">5357881c-9758-41c2-b6bd-79fa1f01e891</guid><dc:creator><![CDATA[Srini and David]]></dc:creator><pubDate>Tue, 25 Nov 2025 06:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/838a96cfecaae0fcba35ea08da8402ecdbd2c84cec3323c0a71fde159b534955/eyJlcGlzb2RlSWQiOiI1MzU3ODgxYy05NzU4LTQxYzItYjZiZC03OWZhMWYwMWU4OTEiLCJwb2RjYXN0SWQiOiI1OTI2MjIwMy01MjU0LTQ0MDYtYjZhNC03ODUxODdhODE4OTIiLCJhY2NvdW50SWQiOiI2ODVkMDdhODdjODcwMjIwZmFiNjkxNTgiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjkyNGUyYWEyYzk3ZjFhMDljMjhiY2JhL3NyaW5pdmFzcy1zdHVkaW8tUXNxVWItY29tcG9zZXItMjAyNS0xMS0yNF9fMjMtNTYtNDIubXAzIn0=.mp3" length="26259270" type="audio/mpeg"/><itunes:summary>&lt;p&gt;Hosts: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/sriniuk/&quot; target=&quot;_blank&quot;&gt;Srini Annamaraju&lt;/a&gt; &amp;amp; &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/davidroyle/&quot; target=&quot;_blank&quot;&gt;David Royle&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;“The AI bubble is the wrong fear.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The real threat sits inside your own walls: shadow AI you don’t see, boards that confuse risk aversion with risk management, and leaders trying to govern a technology they don’t actually understand.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We unpack why mid-market boards are exposed, how shadow AI reveals the truth about how your org really works, and what an actually realistic 12-month AI plan looks like.&lt;/p&gt;&lt;p&gt;And yes—why people, not models, are now the biggest AI risk vector.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The conversation revolves around a recent paper that David authored, a link to &lt;a rel=&quot;noopener noreferrer nofollow&quot; 
href=&quot;https://www.linkedin.com/posts/davidroyle_guiding-ai-strategy-the-boards-imperative-activity-7391871577631608832-_3IA?utm_source=share&amp;amp;utm_medium=member_desktop&amp;amp;rcm=ACoAAAFdNaABIvmIsUfUKUvr8ukvjoIDAxHQRVc&quot; target=&quot;_blank&quot;&gt;the post with the details is here&lt;/a&gt;. &lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;What we cover&lt;/b&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;Bubble noise vs fundamentals - Valuations swing wildly, but enterprise AI maturity rises daily. We explain why the bubble noise has nothing to do with the technology reshaping your org.&lt;/li&gt;&lt;li&gt;Shadow AI as diagnosis - It’s not a tooling problem but a symptom of mismatched expectations. &lt;/li&gt;&lt;li&gt;Boards: from passive listeners to owners - Why literacy is step zero, and why chairs need to move fast. &lt;/li&gt;&lt;li&gt;Risk aversion trap - The boards that “get it” flip from “should we?” to “how quickly, safely, and visibly can we?” &lt;/li&gt;&lt;li&gt;90-day governance playbook - Inventory → Validate → Govern. &lt;/li&gt;&lt;li&gt;Top-down vs bottom-up AI - How grassroots use cases and board-led operating models collide. &lt;/li&gt;&lt;li&gt;12-month reality check - You won’t be AI-first in a year. But you can be an AI-literate, AI-safe, AI-enabled organisation in 12 months. &lt;/li&gt;&lt;li&gt;Explainability anxiety - Why boards demand transparency from AI they never asked of spreadsheets or humans. &lt;/li&gt;&lt;li&gt;The uncomfortable truth - The biggest AI risk isn’t the model. It’s your people. 
&lt;/li&gt;&lt;li&gt;Evals preview - Why audits, trust contracts, drift checks, and forward-deployed evaluators will soon be board-level concerns.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Chapters&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;AI bubble vs enterprise fundamentals&lt;/li&gt;&lt;li&gt;Shadow AI as a symptom&lt;/li&gt;&lt;li&gt;Boards falling behind&lt;/li&gt;&lt;li&gt;Risk aversion vs risk management&lt;/li&gt;&lt;li&gt;90-day governance plan&lt;/li&gt;&lt;li&gt;A realistic 12-month AI horizon&lt;/li&gt;&lt;li&gt;The real AI risk: people&lt;/li&gt;&lt;li&gt;Intro to enterprise evals&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Takeaways&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Shadow AI is a mirror - reveals gaps in culture, process, and leadership direction, not tooling.&lt;/li&gt;&lt;li&gt;Boards must lead, not observe - Active literacy and ownership are key.&lt;/li&gt;&lt;li&gt;Governance is the stabiliser. Inventories, validations, guardrails, and oversight reduce drift &amp;amp; exposure.&lt;/li&gt;&lt;li&gt;Explainability is contextual. Set boundaries, not magic expectations.&lt;/li&gt;&lt;li&gt;People are the attack surface. Don&apos;t miss non-malicious misuse.&lt;/li&gt;&lt;li&gt;12 months = foundations. Literacy, safety, and one high-value use case per function. That’s the win.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Who it’s for&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Board members, CEOs, COOs, CIOs, CROs, and mid-market operators needing a grounded, real-world view of AI risk, governance, and organisational maturity. &lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Help Spread the Word - &lt;/b&gt;Enjoyed the episode? Follow the show, leave a review, and share with a colleague grappling with shadow AI, governance gaps, or board-level AI decisions.  Want to join as a guest or sponsor a future episode? 
Get in touch!&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:36:51</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/podcasts/59262203-5254-4406-b6a4-785187a81892/logos/4eb9df3f-f142-4991-acd1-66c585202c54.png"/><itunes:season>1</itunes:season><itunes:episode>6</itunes:episode><itunes:title>Ep #6: Enterprise AI Field Notes: Shadow AI, Fwd Deployed Eval Engrs, AI Drift, Board Governance </itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Ep #5: Enterprise AI Field Notes: AI Job Shifts, Micro-Creds, Brave New Orgs, AI in SMEs, AgentOps]]></title><description><![CDATA[<p><b>Hosts:</b> <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/sriniuk/" target="_blank">Srini Annamaraju</a> &amp; <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/davidroyle/" target="_blank">David Royle</a></p><p></p><p>“AI kills jobs” is the wrong headline. The real story is structural: org pyramids flatten into diamonds, managers run fleets of agents, SMEs unlock backlogs without hiring sprees, and skills go modular with micro-credentials. </p><p></p><p>We break down what changes now—and how to lead it without face-planting.</p><p></p><p><b>What we cover</b></p><ul><li><b>Jobs vs. roles:</b> Why the entry-level layer thins, the manager layer thickens, and how to redesign spans of control when agents do the doing.</li><li><b>Agents on a spectrum:</b> Start with human-in-the-loop, graduate to AgentOps. Where to set autonomy today, what to monitor, and how to keep audits, drift checks, and safety rails sane.</li><li><b>Backlog &gt; headcount:</b> Use AI to attack the work you never had people for—deterministic, high-volume tasks that finally move the needle.</li><li><b>Operational resilience:</b> Outages and dependency chains aren’t hypotheticals. 
We outline layered BCP/DR for an agentic stack so one failure doesn’t cascade.</li><li><b>Early-career paradox:</b> Apprenticeships still matter—how to select, coach, and rotate juniors in a world with fewer traditional entry roles.</li><li><b>Skills that rise:</b> Cognitive prompting, judgment, people leadership—and why short, role-tied micro-credentials beat semester-long generalities.</li><li><b>SME timing &amp; tactics:</b> Where mid-market buyers actually are on the curve, what to build vs. buy, and how to avoid “pilot purgatory.”</li></ul><p></p><p><b>Chapters</b></p><ol><li>Jobs headline vs ground truth</li><li>From pyramid to diamond orgs</li><li>Agents, autonomy, and HITL → AgentOps</li><li>Managing hybrid teams (humans + agents)</li><li>Resilience playbook for outages and dependencies</li><li>Early-career design: apprenticeships, reverse mentoring</li><li>Micro-credentials and fast upskilling</li><li>What SMEs should do this quarter</li></ol><p></p><p><b>Takeaways</b></p><ul><li>Jobs aren’t vanishing; roles are morphing. Plan for fewer juniors, more AI-enabled managers, explicit oversight of agent fleets.</li><li>Governance is the unlock. Treat agents like teammates with performance records, audits, and clear escalation paths.</li><li>Resilience is strategy. Design for failure before agents touch critical workflows.</li><li>Upskill in sprints. Tie micro-credentials to roles, not buzzwords.</li></ul><p></p><p><b>Who it’s for</b><br />Operators, CTOs/CIOs, and line leaders who need practical steps to re-shape teams, govern agentic workflows, and build real resilience—especially in SMEs.</p><p></p><p><b>Help Spread the Word:</b><br />Enjoyed the episode? Follow the show, leave a quick review, and share with a colleague wrestling with agent governance or workforce design. Interested in joining as a guest or sponsoring a future episode? 
Get in touch.</p>]]></description><guid isPermaLink="false">3b0fce91-a118-4bd3-991f-d7ff32bab8c9</guid><dc:creator><![CDATA[Srini and David]]></dc:creator><pubDate>Tue, 04 Nov 2025 12:53:09 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/c39c4f932fae09385124f6e1f34253ebd6ce791e12901a8e176d3ae79ef83ef2/eyJlcGlzb2RlSWQiOiIzYjBmY2U5MS1hMTE4LTRiZDMtOTkxZi1kN2ZmMzJiYWI4YzkiLCJwb2RjYXN0SWQiOiI1OTI2MjIwMy01MjU0LTQ0MDYtYjZhNC03ODUxODdhODE4OTIiLCJhY2NvdW50SWQiOiI2ODVkMDdhODdjODcwMjIwZmFiNjkxNTgiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjkwOWY3MzVlZTlhZTdlY2E2ZjkyM2Y5L3NyaW5pdmFzcy1zdHVkaW8tUXNxVWItY29tcG9zZXItMjAyNS0xMS00X18xMy01My05Lm1wMyJ9.mp3" length="17597562" type="audio/mpeg"/><itunes:summary>&lt;p&gt;&lt;b&gt;Hosts:&lt;/b&gt; &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/sriniuk/&quot; target=&quot;_blank&quot;&gt;Srini Annamaraju&lt;/a&gt; &amp;amp; &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/davidroyle/&quot; target=&quot;_blank&quot;&gt;David Royle&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;“AI kills jobs” is the wrong headline. The real story is structural: org pyramids flatten into diamonds, managers run fleets of agents, SMEs unlock backlogs without hiring sprees, and skills go modular with micro-credentials. &lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We break down what changes now—and how to lead it without face-planting.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;What we cover&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;Jobs vs. roles:&lt;/b&gt; Why the entry-level layer thins, the manager layer thickens, and how to redesign spans of control when agents do the doing.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Agents on a spectrum:&lt;/b&gt; Start with human-in-the-loop, graduate to AgentOps. 
Where to set autonomy today, what to monitor, and how to keep audits, drift checks, and safety rails sane.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Backlog &amp;gt; headcount:&lt;/b&gt; Use AI to attack the work you never had people for—deterministic, high-volume tasks that finally move the needle.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Operational resilience:&lt;/b&gt; Outages and dependency chains aren’t hypotheticals. We outline layered BCP/DR for an agentic stack so one failure doesn’t cascade.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Early-career paradox:&lt;/b&gt; Apprenticeships still matter—how to select, coach, and rotate juniors in a world with fewer traditional entry roles.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Skills that rise:&lt;/b&gt; Cognitive prompting, judgment, people leadership—and why short, role-tied micro-credentials beat semester-long generalities.&lt;/li&gt;&lt;li&gt;&lt;b&gt;SME timing &amp;amp; tactics:&lt;/b&gt; Where mid-market buyers actually are on the curve, what to build vs. buy, and how to avoid “pilot purgatory.”&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Chapters&lt;/b&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;Jobs headline vs ground truth&lt;/li&gt;&lt;li&gt;From pyramid to diamond orgs&lt;/li&gt;&lt;li&gt;Agents, autonomy, and HITL → AgentOps&lt;/li&gt;&lt;li&gt;Managing hybrid teams (humans + agents)&lt;/li&gt;&lt;li&gt;Resilience playbook for outages and dependencies&lt;/li&gt;&lt;li&gt;Early-career design: apprenticeships, reverse mentoring&lt;/li&gt;&lt;li&gt;Micro-credentials and fast upskilling&lt;/li&gt;&lt;li&gt;What SMEs should do this quarter&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Takeaways&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Jobs aren’t vanishing; roles are morphing. Plan for fewer juniors, more AI-enabled managers, explicit oversight of agent fleets.&lt;/li&gt;&lt;li&gt;Governance is the unlock. Treat agents like teammates with performance records, audits, and clear escalation paths.&lt;/li&gt;&lt;li&gt;Resilience is strategy. 
Design for failure before agents touch critical workflows.&lt;/li&gt;&lt;li&gt;Upskill in sprints. Tie micro-credentials to roles, not buzzwords.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Who it’s for&lt;/b&gt;&lt;br /&gt;Operators, CTOs/CIOs, and line leaders who need practical steps to re-shape teams, govern agentic workflows, and build real resilience—especially in SMEs.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Help Spread the Word:&lt;/b&gt;&lt;br /&gt;Enjoyed the episode? Follow the show, leave a quick review, and share with a colleague wrestling with agent governance or workforce design. Interested in joining as a guest or sponsoring a future episode? Get in touch.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:36:40</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/podcasts/59262203-5254-4406-b6a4-785187a81892/logos/4eb9df3f-f142-4991-acd1-66c585202c54.png"/><itunes:season>1</itunes:season><itunes:episode>5</itunes:episode><itunes:title>Ep #5: Enterprise AI Field Notes: AI Job Shifts, Micro-Creds, Brave New Orgs, AI in SMEs, AgentOps</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Ep #4: Enterprise AI Field Notes: The Big Scaffold, Pilot to Prod, ROI and TCO, Tradeoffs & Payoffs]]></title><description><![CDATA[<p>In this new <i>Enterprise AI Field Notes</i> deep dive, <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/sriniuk/" target="_blank">Srini Annamaraju</a> (aka 'the tech guy') and <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/davidroyle/" target="_blank">David Royle</a> (who's 'the business guy') take the story past design into delivery — from the Target Operating Model (TOM) to the everyday reality of running AI inside the enterprise.</p><p></p><p>Through the lens of a real bank's AI copilot rollout (name changed to "<b>Albion Bank</b>"), they map how real transformation happens
inside the <b>Enterprise AI Honeycomb</b> — a connected system of data, models, patterns, platforms, and guardrails that must all work in harmony.</p><p></p><p>💡 <b>What we cover:</b></p><ul><li>Why do most AI “pilots” stall before production — and how do you stop the fade?</li><li>How do data decisions shape every downstream fork in the journey?</li><li>What do “brains, behaviour, and nervous system” really mean in AI design?</li><li>How do you build hybrid platforms that stay compliant <i>and</i> fast?</li><li>What does it take to shift from ad-hoc prompting to disciplined LLMOps?</li><li>Why are security, governance, and economics the body’s immune system and heartbeat?</li></ul><p></p><p>Srini breaks down the <b>technical scaffold</b> — how the ten cells of the honeycomb connect to deliver measurable ROI.<br /></p><p>David probes rigorously from the business side — questioning trade-offs, accountability, and real-world friction.</p><p></p><p>Together they turn AI from a keynote fantasy into a <b>hard-nosed operating reality</b>. 
</p>]]></description><guid isPermaLink="false">6834740e-5544-44f4-895f-91f92c858837</guid><dc:creator><![CDATA[Srini and David]]></dc:creator><pubDate>Fri, 17 Oct 2025 10:08:30 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/d637db310d2a2036531e17362c8475ce2ef27090c68e4c0a1469b3436ca90f1b/eyJlcGlzb2RlSWQiOiI2ODM0NzQwZS01NTQ0LTQ0ZjQtODk1Zi05MWY5MmM4NTg4MzciLCJwb2RjYXN0SWQiOiI1OTI2MjIwMy01MjU0LTQ0MDYtYjZhNC03ODUxODdhODE4OTIiLCJhY2NvdW50SWQiOiI2ODVkMDdhODdjODcwMjIwZmFiNjkxNTgiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjhmMjE1OWVlMDI1MWRhYjZiY2I5MjE1L3NyaW5pdmFzcy1zdHVkaW8tUXNxVWItY29tcG9zZXItMjAyNS0xMC0xN19fMTItOC0zMC5tcDMifQ==.mp3" length="24535475" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this new &lt;i&gt;Enterprise AI Field Notes&lt;/i&gt; deep dive, &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/sriniuk/&quot; target=&quot;_blank&quot;&gt;Srini Annamaraju&lt;/a&gt; (aka, &apos;the tech guy&apos;) and &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/davidroyle/&quot; target=&quot;_blank&quot;&gt;David Royle&lt;/a&gt; (who&apos;s &apos;the business guy&apos;) take the story past design into delivery — from the Target Operating Model (TOM) to the everyday reality of running AI inside the enterprise.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Through the lens of a real Bank&apos;s AI copilot rollout, name changed to &quot;&lt;b&gt;Albion Bank&quot;, &lt;/b&gt;they map how real transformation happens inside the &lt;b&gt;Enterprise AI Honeycomb&lt;/b&gt; — a connected system of data, models, patterns, platforms, and guardrails that must all work in harmony.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;💡 &lt;b&gt;What we cover:&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Why do most AI “pilots” stall before production — and how to stop the fade?&lt;/li&gt;&lt;li&gt;How data decisions shape every downstream fork in the journey?&lt;/li&gt;&lt;li&gt;What “brains, behaviour, and nervous 
system” really mean in AI design?&lt;/li&gt;&lt;li&gt;How to build hybrid platforms that stay compliant &lt;i&gt;and&lt;/i&gt; fast?&lt;/li&gt;&lt;li&gt;What does it take to shift from ad-hoc prompting to disciplined LLMOps?&lt;/li&gt;&lt;li&gt;Why security, governance, and economics are the body’s immune system and heartbeat?&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Srini breaks down the &lt;b&gt;technical scaffold&lt;/b&gt; — how the ten cells of the honeycomb connect to deliver measurable ROI.&lt;br /&gt;&lt;/p&gt;&lt;p&gt;David probes rigorously from the business side — questioning trade-offs, accountability, and real-world friction.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Together they turn AI from a keynote fantasy into a &lt;b&gt;hard-nosed operating reality&lt;/b&gt;. &lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:51:07</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/podcasts/59262203-5254-4406-b6a4-785187a81892/logos/4eb9df3f-f142-4991-acd1-66c585202c54.png"/><itunes:title>Ep #4: Enterprise AI Field Notes: The Big Scaffold, Pilot to Prod, ROI and TCO, Tradeoffs &amp; Payoffs</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Ep #3: Enterprise AI Field Notes: T.O.M., 'Hot' AI Roles, Agents v. Humans, Rapid-fire Q&A ]]></title><description><![CDATA[<p>AI dazzles in demos but stalls in the enterprise. In this deep dive with a new section on rapid-fire questions by <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/sriniuk/" target="_blank"><b>Srini Annamaraju, </b></a>the resident tech strategy expert on the <b><i>No Effing AIdea </i></b>podcast, <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/davidroyle/" target="_blank"><b>David Royle</b></a> , reveals why — and how a smarter <i>Target Operating Model (T.O.M.)</i> can finally bridge the gap between ambition and adoption. 
</p><p></p><p>This episode connects TOM blueprints to the people dimension: the rise of <i>AI workflow architects</i>, the tension between <i>agents and humans</i>, and the messy middle where governance meets innovation.</p><p></p><p>💡 What we cover:</p><p></p><ul><li>Why enterprises confuse <i>AI projects</i> for <i>AI infrastructure</i></li><li>How to design a T.O.M. that balances <i>guardrails and greenlights</i></li><li>The rise of new <i>‘hot’ AI roles</i> — and which ones will fade fast</li><li>What happens when <i>test kitchens</i> meet <i>board-level control</i></li><li>Why scaling AI isn’t a tech problem — it’s an <i>operating model</i> one</li></ul><p></p><p>David brings creative clarity and a rare mix of strategic and operational design chops. Srini brings decades of enterprise GTM and technology experience.  </p><p></p><p>Together, they unpack what it really takes to make AI <i>run the enterprise </i>— not just impress it.</p><p></p>]]></description><guid isPermaLink="false">70b1b545-e17f-40ec-9759-8cb60a3f1c6a</guid><dc:creator><![CDATA[Srini and David]]></dc:creator><pubDate>Fri, 10 Oct 2025 10:36:27 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/eeaf3dabf8868be7f6453f55187416169d7f52077728fb92a18181a251743bfe/eyJlcGlzb2RlSWQiOiI3MGIxYjU0NS1lMTdmLTQwZWMtOTc1OS04Y2I2MGEzZjFjNmEiLCJwb2RjYXN0SWQiOiI1OTI2MjIwMy01MjU0LTQ0MDYtYjZhNC03ODUxODdhODE4OTIiLCJhY2NvdW50SWQiOiI2ODVkMDdhODdjODcwMjIwZmFiNjkxNTgiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjhlOGUxYWIxZmY3OWI3YThiYzg2ODdjL3NyaW5pdmFzcy1zdHVkaW8tUXNxVWItY29tcG9zZXItMjAyNS0xMC0xMF9fMTItMzYtMjcubXAzIn0=.mp3" length="12913911" type="audio/mpeg"/><itunes:summary>&lt;p&gt;AI dazzles in demos but stalls in the enterprise. 
In this deep dive with a new section on rapid-fire questions by &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/sriniuk/&quot; target=&quot;_blank&quot;&gt;&lt;b&gt;Srini Annamaraju&lt;/b&gt;&lt;/a&gt;, the resident tech strategy expert on the &lt;b&gt;&lt;i&gt;No Effing AIdea &lt;/i&gt;&lt;/b&gt;podcast, &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/davidroyle/&quot; target=&quot;_blank&quot;&gt;&lt;b&gt;David Royle&lt;/b&gt;&lt;/a&gt; reveals why — and how a smarter &lt;i&gt;Target Operating Model (T.O.M.)&lt;/i&gt; can finally bridge the gap between ambition and adoption. &lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This episode connects TOM blueprints to the people dimension: the rise of &lt;i&gt;AI workflow architects&lt;/i&gt;, the tension between &lt;i&gt;agents and humans&lt;/i&gt;, and the messy middle where governance meets innovation.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;💡 What we cover:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Why enterprises confuse &lt;i&gt;AI projects&lt;/i&gt; for &lt;i&gt;AI infrastructure&lt;/i&gt;&lt;/li&gt;&lt;li&gt;How to design a T.O.M. that balances &lt;i&gt;guardrails and greenlights&lt;/i&gt;&lt;/li&gt;&lt;li&gt;The rise of new &lt;i&gt;‘hot’ AI roles&lt;/i&gt; — and which ones will fade fast&lt;/li&gt;&lt;li&gt;What happens when &lt;i&gt;test kitchens&lt;/i&gt; meet &lt;i&gt;board-level control&lt;/i&gt;&lt;/li&gt;&lt;li&gt;Why scaling AI isn’t a tech problem — it’s an &lt;i&gt;operating model&lt;/i&gt; one&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;David brings creative clarity and a rare mix of strategic and operational design chops. Srini brings decades of enterprise GTM and technology experience.  
&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Together, they unpack what it really takes to make AI &lt;i&gt;run the enterprise &lt;/i&gt;— not just impress it.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:26:54</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/podcasts/59262203-5254-4406-b6a4-785187a81892/logos/4eb9df3f-f142-4991-acd1-66c585202c54.png"/><itunes:season>1</itunes:season><itunes:episode>3</itunes:episode><itunes:title>Ep #3: Enterprise AI Field Notes: T.O.M., &apos;Hot&apos; AI Roles, Agents v. Humans, Rapid-fire Q&amp;A </itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Ep #2: Enterprise AI Field Notes: Traps, Hard Truths, Hallucinations, Shadow AI, and the 3 AI Rooms]]></title><description><![CDATA[<h3>Episode 2 – Enterprise AI Field Notes: Traps, Hard Truths, Hallucinations, Shadow AI, and the 3 AI Rooms</h3><p>Hosts <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/sriniuk/" target="_blank">Srini Annamaraju</a> and <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/davidroyle/" target="_blank">David Royle</a> are back for episode two – and yes, the feedback is in. Some said we were a bit <i>too serious</i>  last time. We’ll try not to become a Sunday love songs show, but we’re working on upping the “gags per minute.”</p><p></p><p>This week’s conversation covers:</p><p></p><ul><li><b>Listener reach &amp; feedback:</b> Almost 100 plays already, with listeners tuning in from the UK, US, India, and even Slovenia. The appetite is real for discussions that go beyond hype and get into the messy middle of enterprise AI.</li><li><b>Event notes from Big Data London:</b> A buzzing show, but still very tech-heavy. We debate whether AI conversations are stuck in the IT lane, and why that’s a problem when the real impact is business-wide.</li><li><b>NBER study on ChatGPT usage:</b> 700M weekly users. 
Surprisingly, 70% of usage is personal rather than work. Heavy skew toward under-26s. We unpack what that means for adoption inside enterprises.</li><li><b>US tech investment in the UK:</b> Nvidia and OpenAI committing eye-watering sums (hundreds of billions over time). A rare bit of good economic news for the UK, with national implications for jobs, productivity, and independence from US/China dominance.</li><li><b>Enterprise field news:</b><ul><li>Citi experimenting with agentic AI for wealth advisors, using Claude and Gemini inside secure workspaces.</li><li>FT analysis showing CEOs hype AI on earnings calls, but get risk-heavy and muted in regulatory filings.</li><li>JLR cyberattack fallout: £3.5B revenue hit, no cyber insurance in place. Knock-on effects on suppliers and supply chain.</li></ul></li><li><b>The “three rooms” where AI decisions get made:</b><ul><li><b>C-suite</b> (value, governance, risk)</li><li><b>Technical teams</b> (architecture, data quality, safe design)</li><li><b>Operations</b> (ongoing management, compliance, usage quality)</li></ul></li><li><b>Traps to avoid:</b><ul><li><i>The Whac-a-Mole Trap</i> – hallucinations never disappear, they just reduce.</li><li><i>The Origami Trap</i> – clever prompts aren’t a moat; without guardrails, they fold fast.</li><li><i>The IT-Only Trap</i> – AI left to technologists will fail; business P&amp;L owners need to lead.</li><li><i>The Corporate DNA Trap</i> – over-automating risks erasing what makes your org unique.</li></ul></li><li><b>Shadow AI is real:</b> Even if companies ban AI tools, staff use them on personal devices. Risks around leakage and compliance multiply.</li></ul><p></p><p>We close with a look ahead:</p><ul><li>How frontier model labs (OpenAI, Cohere, Mistral, etc.) are approaching enterprise go-to-market.</li><li>Real use cases from our own client work – what’s working, what’s not.</li></ul><hr /><p><b>Next steps for listeners:</b><br />Got topics you’d like us to cover? 
Message us on LinkedIn. The more specific, the better – we’ll dig in and bring field notes to the next episode.</p><hr />]]></description><guid isPermaLink="false">0fdb3d4e-49c4-43c2-a97b-d6c9e20526bc</guid><dc:creator><![CDATA[Srini and David]]></dc:creator><pubDate>Sun, 28 Sep 2025 16:40:14 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/6dc6a09801ce1b92a9aeea87bca2e3b03f996b6fe7146775b03d65681e4e4b3d/eyJlcGlzb2RlSWQiOiIwZmRiM2Q0ZS00OWM0LTQzYzItYTk3Yi1kNmM5ZTIwNTI2YmMiLCJwb2RjYXN0SWQiOiI1OTI2MjIwMy01MjU0LTQ0MDYtYjZhNC03ODUxODdhODE4OTIiLCJhY2NvdW50SWQiOiI2ODVkMDdhODdjODcwMjIwZmFiNjkxNTgiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjhkOTY0ZWY2OTQyOTE0YzYzMTg1OTU2L3NyaW5pdmFzcy1zdHVkaW8tUXNxVWItY29tcG9zZXItMjAyNS05LTI4X18xOC00MC0xNS5tcDMifQ==.mp3" length="15024605" type="audio/mpeg"/><itunes:summary>&lt;h3&gt;Episode 2 – Enterprise AI Field Notes: Traps, Hard Truths, Hallucinations, Shadow AI, and the 3 AI Rooms&lt;/h3&gt;&lt;p&gt;Hosts &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/sriniuk/&quot; target=&quot;_blank&quot;&gt;Srini Annamaraju&lt;/a&gt; and &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/davidroyle/&quot; target=&quot;_blank&quot;&gt;David Royle&lt;/a&gt; are back for episode two – and yes, the feedback is in. Some said we were a bit &lt;i&gt;too serious&lt;/i&gt;  last time. We’ll try not to become a Sunday love songs show, but we’re working on upping the “gags per minute.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This week’s conversation covers:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;Listener reach &amp;amp; feedback:&lt;/b&gt; Almost 100 plays already, with listeners tuning in from the UK, US, India, and even Slovenia. The appetite is real for discussions that go beyond hype and get into the messy middle of enterprise AI.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Event notes from Big Data London:&lt;/b&gt; A buzzing show, but still very tech-heavy. 
We debate whether AI conversations are stuck in the IT lane, and why that’s a problem when the real impact is business-wide.&lt;/li&gt;&lt;li&gt;&lt;b&gt;NBER study on ChatGPT usage:&lt;/b&gt; 700M weekly users. Surprisingly, 70% of usage is personal rather than work. Heavy skew toward under-26s. We unpack what that means for adoption inside enterprises.&lt;/li&gt;&lt;li&gt;&lt;b&gt;US tech investment in the UK:&lt;/b&gt; Nvidia and OpenAI committing eye-watering sums (hundreds of billions over time). A rare bit of good economic news for the UK, with national implications for jobs, productivity, and independence from US/China dominance.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Enterprise field news:&lt;/b&gt;&lt;ul&gt;&lt;li&gt;Citi experimenting with agentic AI for wealth advisors, using Claude and Gemini inside secure workspaces.&lt;/li&gt;&lt;li&gt;FT analysis showing CEOs hype AI on earnings calls, but get risk-heavy and muted in regulatory filings.&lt;/li&gt;&lt;li&gt;JLR cyberattack fallout: £3.5B revenue hit, no cyber insurance in place. 
Knock-on effects on suppliers and supply chain.&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;The “three rooms” where AI decisions get made:&lt;/b&gt;&lt;ul&gt;&lt;li&gt;&lt;b&gt;C-suite&lt;/b&gt; (value, governance, risk)&lt;/li&gt;&lt;li&gt;&lt;b&gt;Technical teams&lt;/b&gt; (architecture, data quality, safe design)&lt;/li&gt;&lt;li&gt;&lt;b&gt;Operations&lt;/b&gt; (ongoing management, compliance, usage quality)&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Traps to avoid:&lt;/b&gt;&lt;ul&gt;&lt;li&gt;&lt;i&gt;The Whac-a-Mole Trap&lt;/i&gt; – hallucinations never disappear, they just reduce.&lt;/li&gt;&lt;li&gt;&lt;i&gt;The Origami Trap&lt;/i&gt; – clever prompts aren’t a moat; without guardrails, they fold fast.&lt;/li&gt;&lt;li&gt;&lt;i&gt;The IT-Only Trap&lt;/i&gt; – AI left to technologists will fail; business P&amp;amp;L owners need to lead.&lt;/li&gt;&lt;li&gt;&lt;i&gt;The Corporate DNA Trap&lt;/i&gt; – over-automating risks erasing what makes your org unique.&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;&lt;b&gt;Shadow AI is real:&lt;/b&gt; Even if companies ban AI tools, staff use them on personal devices. Risks around leakage and compliance multiply.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We close with a look ahead:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;How frontier model labs (OpenAI, Cohere, Mistral, etc.) are approaching enterprise go-to-market.&lt;/li&gt;&lt;li&gt;Real use cases from our own client work – what’s working, what’s not.&lt;/li&gt;&lt;/ul&gt;&lt;hr /&gt;&lt;p&gt;&lt;b&gt;Next steps for listeners:&lt;/b&gt;&lt;br /&gt;Got topics you’d like us to cover? Message us on LinkedIn. 
The more specific, the better – we’ll dig in and bring field notes to the next episode.&lt;/p&gt;&lt;hr /&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:31:18</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/podcasts/59262203-5254-4406-b6a4-785187a81892/logos/4eb9df3f-f142-4991-acd1-66c585202c54.png"/><itunes:season>1</itunes:season><itunes:episode>2</itunes:episode><itunes:title>Ep #2: Enterprise AI Field Notes: Traps, Hard Truths, Hallucinations, Shadow AI, and the 3 AI Rooms</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Ep #1: Enterprise AI Field Notes: AI’s Messy Middle: Jobs, Hype, and Hard Choices]]></title><description><![CDATA[<p><b>Episode 1 — No Effing AIdea! - </b>AI ethics, job disruption, GenAI “failures,” the AI bubble, India SMB adoption, coding risks, consultants falling behind, biotech breakthroughs, and AI in education — all collide in our first episode of <i>No Effing AIdea!</i></p><p></p><p>Welcome to the first episode. We are <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/davidroyle/" target="_blank">David Royle</a> and <a rel="noopener noreferrer nofollow" href="https://www.linkedin.com/in/sriniuk/" target="_blank">Srini Annamaraju.</a></p><p></p><p>We set the tone with a stark cold open: new Stanford data shows a <b>13% drop in jobs for 22–25-year-olds in AI-vulnerable roles</b> since ChatGPT launched. 
Then, in our <b>Reality Check</b>, we unpack the last two weeks of enterprise AI news with a fresh lens:</p><p></p><ul><li>MIT’s “95% failure” GenAI claim — and why that’s too simple.</li><li>Why the so-called AI bubble might actually be good for business.</li><li>Reliance &amp; Meta’s $100M JV bringing enterprise AI to India’s SMBs.</li><li>AI coding tools: 30% faster, but 2x more vulnerabilities.</li><li>Big consultants left behind by in-house AI adoption.</li><li>Stanford’s autonomous AI lab slashing drug discovery timelines.</li><li>Khan Academy’s Khanmigo AI tutor bringing hope to 180M learners worldwide.</li></ul><p></p><p>Our <b>Deep Dive</b> asks: why does AI suddenly get its own moral panic when cloud, ERP, and digital never did? We explore what “ethics” really means for enterprises today:</p><p></p><ul><li>How ethics shows up on the P&amp;L — as fines, lawsuits, and PR disasters.</li><li>Where ethics must live in the AI stack to avoid “governance theatre.”</li><li>The trade-offs leaders underestimate — speed vs. trust, open vs. proprietary.</li><li>A pragmatic three-step ethical readiness checklist for 2025.</li></ul><p></p><p>Finally, in <b>The Paradox Box</b>, we tackle three dilemmas from the field:</p><p></p><ul><li>Boards demanding ROI and revolution at the same time.</li><li>Compliance vs. engineers in the race for velocity.</li><li>Customers rebelling against brilliance.</li></ul><p></p><p>Listen in for pragmatic tactics, not theatre — and a candid take on why AI ethics isn’t an afterthought. 
It’s the seatbelt that lets enterprises drive faster.</p>]]></description><guid isPermaLink="false">602fa56f-711b-4a21-b5e7-ff931560b959</guid><dc:creator><![CDATA[Srini and David]]></dc:creator><pubDate>Thu, 11 Sep 2025 17:50:24 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/f6098e53ee6496d644f91a66531fd0f04b0a7282d30c2a72043b5dcb304837d4/eyJlcGlzb2RlSWQiOiI2MDJmYTU2Zi03MTFiLTRhMjEtYjVlNy1mZjkzMTU2MGI5NTkiLCJwb2RjYXN0SWQiOiI1OTI2MjIwMy01MjU0LTQ0MDYtYjZhNC03ODUxODdhODE4OTIiLCJhY2NvdW50SWQiOiI2ODVkMDdhODdjODcwMjIwZmFiNjkxNTgiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjhjMzBiZTAzOWM0MzZhMmU1NzY1NmZlL3NyaW5pdmFzcy1zdHVkaW8tUXNxVWItY29tcG9zZXItMjAyNS05LTExX18xOS01MC0yNC5tcDMifQ==.mp3" length="31235779" type="audio/mpeg"/><itunes:summary>&lt;p&gt;&lt;b&gt;Episode 1 — No Effing AIdea! - &lt;/b&gt;AI ethics, job disruption, GenAI “failures,” the AI bubble, India SMB adoption, coding risks, consultants falling behind, biotech breakthroughs, and AI in education — all collide in our first episode of &lt;i&gt;No Effing AIdea!&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Welcome to the first episode. We are &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/davidroyle/&quot; target=&quot;_blank&quot;&gt;David Royle&lt;/a&gt; and &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.linkedin.com/in/sriniuk/&quot; target=&quot;_blank&quot;&gt;Srini Annamaraju.&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We set the tone with a stark cold open: new Stanford data shows a &lt;b&gt;13% drop in jobs for 22–25-year-olds in AI-vulnerable roles&lt;/b&gt; since ChatGPT launched. 
Then, in our &lt;b&gt;Reality Check&lt;/b&gt;, we unpack the last two weeks of enterprise AI news with a fresh lens:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;MIT’s “95% failure” GenAI claim — and why that’s too simple.&lt;/li&gt;&lt;li&gt;Why the so-called AI bubble might actually be good for business.&lt;/li&gt;&lt;li&gt;Reliance &amp;amp; Meta’s $100M JV bringing enterprise AI to India’s SMBs.&lt;/li&gt;&lt;li&gt;AI coding tools: 30% faster, but 2x more vulnerabilities.&lt;/li&gt;&lt;li&gt;Big consultants left behind by in-house AI adoption.&lt;/li&gt;&lt;li&gt;Stanford’s autonomous AI lab slashing drug discovery timelines.&lt;/li&gt;&lt;li&gt;Khan Academy’s Khanmigo AI tutor bringing hope to 180M learners worldwide.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Our &lt;b&gt;Deep Dive&lt;/b&gt; asks: why does AI suddenly get its own moral panic when cloud, ERP, and digital never did? We explore what “ethics” really means for enterprises today:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;How ethics shows up on the P&amp;amp;L — as fines, lawsuits, and PR disasters.&lt;/li&gt;&lt;li&gt;Where ethics must live in the AI stack to avoid “governance theatre.”&lt;/li&gt;&lt;li&gt;The trade-offs leaders underestimate — speed vs. trust, open vs. proprietary.&lt;/li&gt;&lt;li&gt;A pragmatic three-step ethical readiness checklist for 2025.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Finally, in &lt;b&gt;The Paradox Box&lt;/b&gt;, we tackle three dilemmas from the field:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Boards demanding ROI and revolution at the same time.&lt;/li&gt;&lt;li&gt;Compliance vs. engineers in the race for velocity.&lt;/li&gt;&lt;li&gt;Customers rebelling against brilliance.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Listen in for pragmatic tactics, not theatre — and a candid take on why AI ethics isn’t an afterthought. 
It’s the seatbelt that lets enterprises drive faster.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>01:05:04</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/podcasts/59262203-5254-4406-b6a4-785187a81892/logos/4eb9df3f-f142-4991-acd1-66c585202c54.png"/><itunes:season>1</itunes:season><itunes:episode>1</itunes:episode><itunes:title>Ep #1: Enterprise AI Field Notes: AI’s Messy Middle: Jobs, Hype, and Hard Choices</itunes:title><itunes:episodeType>full</itunes:episodeType></item></channel></rss>