<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:psc="http://podlove.org/simple-chapters" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><title><![CDATA[AI Security Update]]></title><description><![CDATA[<p>A podcast covering the latest in AI and security with host Dr. Anmol Agarwal</p><p></p><p>Disclaimer: All views and opinions expressed in this podcast are solely individual opinions of the host and guest(s) featured and do not represent those of any current or former employer, client, partner, or organization. Nothing discussed should be considered official guidance, policy, or professional advice.</p>]]></description><link>https://riverside.com</link><generator>Riverside.fm (https://riverside.com)</generator><lastBuildDate>Sat, 11 Apr 2026 20:52:23 GMT</lastBuildDate><atom:link href="https://api.riverside.fm/hosting/Yy4bWYo6.rss" rel="self" type="application/rss+xml"/><author><![CDATA[Dr. Anmol Agarwal]]></author><pubDate>Sun, 11 Jan 2026 04:59:34 GMT</pubDate><copyright><![CDATA[2026 Dr. Anmol Agarwal]]></copyright><language><![CDATA[en]]></language><ttl>60</ttl><category><![CDATA[Technology]]></category><category><![CDATA[Education]]></category><itunes:author>Dr. Anmol Agarwal</itunes:author><itunes:summary>&lt;p&gt;A podcast covering the latest in AI and security with host Dr. Anmol Agarwal&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Disclaimer: All views and opinions expressed in this podcast are solely individual opinions of the host and guest(s) featured and do not represent those of any current or former employer, client, partner, or organization. Nothing discussed should be considered official guidance, policy, or professional advice.&lt;/p&gt;</itunes:summary><itunes:type>episodic</itunes:type><itunes:owner><itunes:name>Dr. 
Anmol Agarwal</itunes:name><itunes:email>anmolspeaker@gmail.com</itunes:email></itunes:owner><itunes:explicit>no</itunes:explicit><itunes:category text="Technology"/><itunes:category text="Education"/><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><item><title><![CDATA[The Hidden Dangers of Open Source AI with Yesenia Yser]]></title><description><![CDATA[<p>Open source AI is powerful, but it’s also quietly introducing risks most people never see coming.</p><p>In this episode, we talk with cybersecurity leader Yesenia Yser. Drawing from her work across the open source ecosystem and organizations like the Linux Foundation and Open Source Security Foundation, she breaks down why open source AI models can introduce hidden vulnerabilities. Through her nonprofit, The Lioness Instincts, Yesenia is redefining what security means and teaching women how to protect themselves both physically and digitally, blending cybersecurity with real-world self-defense. Together, we unpack copyright risks, algorithmic bias, and how to protect yourself from AI-driven scams online and in real life.</p>]]></description><guid isPermaLink="false">8ac6ded5-6717-4f83-a1db-d2ca481afc16</guid><dc:creator><![CDATA[Dr. 
Anmol Agarwal]]></dc:creator><pubDate>Sat, 11 Apr 2026 12:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/193e47e57a67bdca5f8c77514fc147e2b9da0269b673444bc21e70d9a090e8e1/eyJlcGlzb2RlSWQiOiI4YWM2ZGVkNS02NzE3LTRmODMtYTFkYi1kMmNhNDgxYWZjMTYiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjljM2RlYjQ4ZjAwZDU1OGZhOTc1NzcwL2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0zLTI1X18xNC0xMC0xMS5tcDMifQ==.mp3" length="66357228" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/episodes/8ac6ded5-6717-4f83-a1db-d2ca481afc16/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;Open source AI is powerful, but it’s also quietly introducing risks most people never see coming.&lt;/p&gt;&lt;p&gt;In this episode, we talk with cybersecurity leader Yesenia Yser. Drawing from her work across the open source ecosystem and organizations like the Linux Foundation and Open Source Security Foundation, she breaks down why open source AI models can introduce hidden vulnerabilities. Through her nonprofit, The Lioness Instincts, Yesenia is redefining what security means and teaching women how to protect themselves both physically and digitally, blending cybersecurity with real-world self-defense. 
Together, we unpack copyright risks, algorithmic bias, and how to protect yourself from AI-driven scams online and in real life.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:46:05</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:title>The Hidden Dangers of Open Source AI with Yesenia Yser</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[OpenClaw & the Future of AI Security: Cloud, AppSec & Governance with Angie Saccone]]></title><description><![CDATA[<p>What does OpenClaw reveal about the future of AI security?</p><p>In this episode, we’re joined by Angela Saccone, Cybersecurity Professional, AI Security Enthusiast, and Podcaster, to explore how AI is reshaping the security landscape across core domains. We break down key concepts in virtual machines, cloud security, and application security, and how these environments are evolving in an AI-driven world.</p><p>We also discuss incident response in the context of AI-powered threats and the growing importance of governance.</p><p>Using OpenClaw as a real-world anchor, this conversation highlights emerging risks, practical security considerations, and how both practitioners and newcomers can better understand and navigate AI security today.</p><p></p>]]></description><guid isPermaLink="false">f19167d3-b43d-45e3-990f-941841cab8f6</guid><dc:creator><![CDATA[Dr. 
Anmol Agarwal]]></dc:creator><pubDate>Sat, 04 Apr 2026 12:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/53635d62f82ef0725c98c312174850d48cd4b5a3c8fcb0d8ed93c35ee8821bf2/eyJlcGlzb2RlSWQiOiJmMTkxNjdkMy1iNDNkLTQ1ZTMtOTkwZi05NDE4NDFjYWI4ZjYiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjliYjVlZjhmYjE5YzE0ZGM2NDM3NGQwL2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0zLTE5X18zLTI3LTQubXAzIn0=.mp3" length="61352376" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/episodes/f19167d3-b43d-45e3-990f-941841cab8f6/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;What does OpenClaw reveal about the future of AI security?&lt;/p&gt;&lt;p&gt;In this episode, we’re joined by Angela Saccone, Cybersecurity Professional, AI Security Enthusiast, and Podcaster, to explore how AI is reshaping the security landscape across core domains. 
We break down key concepts in virtual machines, cloud security, and application security, and how these environments are evolving in an AI-driven world.&lt;/p&gt;&lt;p&gt;We also discuss incident response in the context of AI-powered threats and the growing importance of governance.&lt;/p&gt;&lt;p&gt;Using OpenClaw as a real-world anchor, this conversation highlights emerging risks, practical security considerations, and how both practitioners and newcomers can better understand and navigate AI security today.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:42:36</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:title>OpenClaw &amp; the Future of AI Security: Cloud, AppSec &amp; Governance with Angie Saccone</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Your Next Employee Isn’t Human: Securing Agentic AI with Dd Budiharto]]></title><description><![CDATA[<p>What happens when AI stops being a tool and starts acting like an employee?</p><p>In this episode, Dd Budiharto, who works as a Chief Security Advisor at Microsoft, dives into the real security challenge most organizations aren’t ready for: agentic AI with identities and access. 
We break down how “shadow AI employees” are already creeping into enterprises, and what it actually means to secure AI inside enterprise platforms.</p><p>The takeaway is simple but urgent: if you’re not managing AI like part of your workforce, you don’t have control, you have exposure.</p><p>Disclaimer: All opinions expressed in this episode are the individual opinions of the host and guest featured and do not reflect those of any organization.</p><p></p><p>Resources:</p><p>Microsoft Resources:</p><p>- <a rel="noopener noreferrer nofollow" href="https://learn.microsoft.com/en-us/security/security-for-ai/" target="_blank">https://learn.microsoft.com/en-us/security/security-for-ai/</a></p><p>- <a rel="noopener noreferrer nofollow" href="https://www.microsoft.com/en-us/security/blog/2026/03/19/new-tools-and-guidance-announcing-zero-trust-for-ai/" target="_blank">https://www.microsoft.com/en-us/security/blog/2026/03/19/new-tools-and-guidance-announcing-zero-trust-for-ai/</a></p><p>- <a rel="noopener noreferrer nofollow" href="https://learn.microsoft.com/en-us/copilot/microsoft-365/copilot-control-system/security-governance" target="_blank">https://learn.microsoft.com/en-us/copilot/microsoft-365/copilot-control-system/security-governance</a></p><p>- <a rel="noopener noreferrer nofollow" href="https://learn.microsoft.com/en-us/copilot/security/responsible-ai-overview-security-copilot" target="_blank">https://learn.microsoft.com/en-us/copilot/security/responsible-ai-overview-security-copilot</a></p><p>Other resources:</p><p>- <a rel="noopener noreferrer nofollow" href="https://www.nist.gov/itl/ai-risk-management-framework" target="_blank">https://www.nist.gov/itl/ai-risk-management-framework</a></p><p>- <a rel="noopener noreferrer nofollow" href="https://www.cisa.gov/resources-tools/resources/ai-data-security-best-practices-securing-data-used-train-operate-ai-systems" 
target="_blank">https://www.cisa.gov/resources-tools/resources/ai-data-security-best-practices-securing-data-used-train-operate-ai-systems</a></p><p></p>]]></description><guid isPermaLink="false">e58c2acf-aefa-4cff-859c-dc164a6a7a53</guid><dc:creator><![CDATA[Dr. Anmol Agarwal]]></dc:creator><pubDate>Sat, 28 Mar 2026 12:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/4f1ccc69fa20117eeeeb9c38e25dc0fbadd9b9fb18f4b3bde828402c5d7871ec/eyJlcGlzb2RlSWQiOiJlNThjMmFjZi1hZWZhLTRjZmYtODU5Yy1kYzE2NGE2YTdhNTMiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjliZjE1ZmJkYWJhMDAxZjFlMDY0NWMyL2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0zLTIxX18yMy00LTQzLm1wMyJ9.mp3" length="23788712" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/episodes/e58c2acf-aefa-4cff-859c-dc164a6a7a53/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;What happens when AI stops being a tool and starts acting like an employee?&lt;/p&gt;&lt;p&gt;In this episode, Dd Budiharto, who works as a Chief Security Advisor at Microsoft, dives into the real security challenge most organizations aren’t ready for: agentic AI with identities and access. 
We break down how “shadow AI employees” are already creeping into enterprises, and what it actually means to secure AI inside enterprise platforms.&lt;/p&gt;&lt;p&gt;The takeaway is simple but urgent: if you’re not managing AI like part of your workforce, you don’t have control, you have exposure.&lt;/p&gt;&lt;p&gt;Disclaimer: All opinions expressed in this episode are the individual opinions of the host and guest featured and do not reflect those of any organization.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Resources:&lt;/p&gt;&lt;p&gt;Microsoft Resources:&lt;/p&gt;&lt;p&gt;- &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://learn.microsoft.com/en-us/security/security-for-ai/&quot; target=&quot;_blank&quot;&gt;https://learn.microsoft.com/en-us/security/security-for-ai/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;- &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.microsoft.com/en-us/security/blog/2026/03/19/new-tools-and-guidance-announcing-zero-trust-for-ai/&quot; target=&quot;_blank&quot;&gt;https://www.microsoft.com/en-us/security/blog/2026/03/19/new-tools-and-guidance-announcing-zero-trust-for-ai/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;- &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://learn.microsoft.com/en-us/copilot/microsoft-365/copilot-control-system/security-governance&quot; target=&quot;_blank&quot;&gt;https://learn.microsoft.com/en-us/copilot/microsoft-365/copilot-control-system/security-governance&lt;/a&gt;&lt;/p&gt;&lt;p&gt;- &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://learn.microsoft.com/en-us/copilot/security/responsible-ai-overview-security-copilot&quot; target=&quot;_blank&quot;&gt;https://learn.microsoft.com/en-us/copilot/security/responsible-ai-overview-security-copilot&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Other resources:&lt;/p&gt;&lt;p&gt;- &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.nist.gov/itl/ai-risk-management-framework&quot; 
target=&quot;_blank&quot;&gt;https://www.nist.gov/itl/ai-risk-management-framework&lt;/a&gt;&lt;/p&gt;&lt;p&gt;- &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.cisa.gov/resources-tools/resources/ai-data-security-best-practices-securing-data-used-train-operate-ai-systems&quot; target=&quot;_blank&quot;&gt;https://www.cisa.gov/resources-tools/resources/ai-data-security-best-practices-securing-data-used-train-operate-ai-systems&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:16:31</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:title>Your Next Employee Isn’t Human: Securing Agentic AI with Dd Budiharto</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[AI Is Only as Good as Its Data with Heather Case-Hall]]></title><description><![CDATA[<p>In this episode, Heather Case-Hall, Senior Solutions Security Architect at Myriad360, breaks down why AI is only as good as the data behind it and why completely trusting it can create real risk. From the importance of logging and asset visibility to why you shouldn’t rely on AI when someone you love ends up in a hospital, this conversation explores the growing gap between AI "over-confidence" and reality.</p><p></p>]]></description><guid isPermaLink="false">03079cff-c4dd-4f00-ba25-1fb0eb532fb2</guid><dc:creator><![CDATA[Dr. 
Anmol Agarwal]]></dc:creator><pubDate>Sat, 21 Mar 2026 13:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/8fe9477850caaf7dfda7cadf43cb067f09c77fdc587bbb900ec1a93a134ffea1/eyJlcGlzb2RlSWQiOiIwMzA3OWNmZi1jNGRkLTRmMDAtYmEyNS0xZmIwZWI1MzJmYjIiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjlhYTUxODMyYjcxYWNmM2RlMGM0MDM3L2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0zLTZfXzUtMS02Lm1wMyJ9.mp3" length="69022345" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/episodes/03079cff-c4dd-4f00-ba25-1fb0eb532fb2/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;In this episode, Heather Case-Hall, Senior Solutions Security Architect at Myriad360 breaks down why AI is only as good as the data behind it and why completely trusting it can create real risk. From the importance of logging and asset visibility to why you shouldn’t rely on AI when someone you love ends up in a hospital, this conversation explores the growing gap between AI &quot;over-confidence&quot; and reality&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:47:56</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:title>AI Is Only as Good as Its Data with Heather Case-Hall</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[AI in Healthcare: Breakthrough or Security Risk? 
With Omar Sangurima]]></title><description><![CDATA[<p>AI is transforming healthcare, and the future looks promising.</p><p>In this episode, Omar Sangurima,<br />Head of Cyber Program Management &amp; Cyber Third-Party Risk at Memorial Sloan Kettering Cancer Center<b> </b>and Anmol Agarwal explore how AI is helping unlock insights in healthcare and improving patient outcomes. They discuss why thoughtful AI regulation is essential, the balance between innovation and privacy, and even how global events like the FIFA World Cup reveal AI’s growing role in society.</p><p>Join us for a forward-looking conversation on the opportunities, ethical considerations, and exciting future of AI in healthcare.</p><p></p>]]></description><guid isPermaLink="false">507c3e00-a0b6-407c-b67d-199302b3b58b</guid><dc:creator><![CDATA[Dr. Anmol Agarwal]]></dc:creator><pubDate>Sat, 14 Mar 2026 13:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/d90d5042cecb00738b615d1dab468d3b9b94b064ddaf4800e32f2003373efbd5/eyJlcGlzb2RlSWQiOiI1MDdjM2UwMC1hMGI2LTQwN2MtYjY3ZC0xOTkzMDJiM2I1OGIiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjlhNDkyZDc0MGJmZmZhMWU3NjA4OGE1L2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0zLTFfXzIwLTI2LTE1Lm1wMyJ9.mp3" length="90588412" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/episodes/507c3e00-a0b6-407c-b67d-199302b3b58b/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;AI is transforming healthcare, and the future looks promising.&lt;/p&gt;&lt;p&gt;In this episode, Omar Sangurima,&lt;br /&gt;Head of Cyber Program Management &amp;amp; Cyber Third-Party Risk at Memorial Sloan Kettering Cancer Center&lt;b&gt; &lt;/b&gt;and Anmol Agarwal explore how AI is helping unlock insights in healthcare and improving patient outcomes. 
They discuss why thoughtful AI regulation is essential, the balance between innovation and privacy, and even how global events like the FIFA World Cup reveal AI’s growing role in society.&lt;/p&gt;&lt;p&gt;Join us for a forward-looking conversation on the opportunities, ethical considerations, and exciting future of AI in healthcare.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>01:02:54</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:title>AI in Healthcare: Breakthrough or Security Risk? With Omar Sangurima</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Power Grid at Risk and AI Governance Insights with Dr. Andrea Ruotolo]]></title><description><![CDATA[<p>AI is helping manage power grids, and attackers are running tests. In this episode, Anmol Agarwal talks with Andrea Ruotolo about the real-world stakes of AI security and governance in the energy sector. From lessons learned in cyber incidents like the Polish power grid attack to strategies for continuous monitoring, operationalizing policies, and bridging the gap between AI governance and practice, this conversation shows why cross-functional collaboration and responsible AI are critical.</p>]]></description><guid isPermaLink="false">3710f2b9-aab9-40f1-83fd-78400ff42cd2</guid><dc:creator><![CDATA[Dr. 
Anmol Agarwal]]></dc:creator><pubDate>Sat, 07 Mar 2026 13:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/57988bfd71119061417a987218d1fc5fa180bd5c6c3344810fddb8075c669c60/eyJlcGlzb2RlSWQiOiIzNzEwZjJiOS1hYWI5LTQwZjEtODNmZC03ODQwMGZmNDJjZDIiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk5NzFiNDYwY2UzNjFiMzU4MmIxMzczL2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0yLTE5X18xNS0xNi0zOC5tcDMifQ==.mp3" length="35959475" type="audio/mpeg"/><itunes:summary>&lt;p&gt;AI is helping manage power grids, and attackers are running tests. In this episode, Anmol Agarwal talks with Andrea Ruotolo about the real-world stakes of AI security and governance in the energy sector. From lessons learned in cyber incidents like the Polish power grid attack to strategies for continuous monitoring, operationalizing policies, and bridging the gap between AI governance and practice, this conversation shows why cross-functional collaboration and responsible AI are critical.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:24:58</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:season>1</itunes:season><itunes:episode>9</itunes:episode><itunes:title>Power Grid at Risk and AI Governance Insights with Dr. Andrea Ruotolo</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Blockchain, Model Drift, and Digital Trust With Jennifer Raiford]]></title><description><![CDATA[<p>In this episode, Anmol Agarwal sits down with cybersecurity executive Jennifer Raiford to decode the silent threat of model drift, the rising role of blockchain in digital trust, and why securing tomorrow’s AI systems requires a new playbook. 
From deepfakes to identity verification, this conversation reframes AI security as a strategic imperative in a world where trust is the ultimate currency.</p>]]></description><guid isPermaLink="false">b6a836ac-e9a3-45a8-a87f-06e499fd9232</guid><dc:creator><![CDATA[Dr. Anmol Agarwal]]></dc:creator><pubDate>Sat, 07 Mar 2026 13:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/b5f5188e9789882ba6084b5f45ea0078b2150dbea7d469cb1eb5c48b419a0523/eyJlcGlzb2RlSWQiOiJiNmE4MzZhYy1lOWEzLTQ1YTgtYTg3Zi0wNmU0OTlmZDkyMzIiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk5OTRlNzk5NjIzNTE5MjI2ZWIxZTExL2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0yLTIxX183LTE5LTM3Lm1wMyJ9.mp3" length="26788614" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode, Anmol Agarwal sits down with cybersecurity executive Jennifer Raiford to decode the silent threat of model drift, the rising role of blockchain in digital trust, and why securing tomorrow’s AI systems requires a new playbook. 
From deepfakes to identity verification, this conversation reframes AI security as a strategic imperative in a world where trust is the ultimate currency.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:18:36</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:season>1</itunes:season><itunes:episode>10</itunes:episode><itunes:title>Blockchain, Model Drift, and Digital Trust With Jennifer Raiford</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Data Centers and Community with Dirce Eduardo Hernandez]]></title><description><![CDATA[<p>In this episode of <i>AI Security Update</i>, Anmol Agarwal speaks with cybersecurity expert Dirce Eduardo Hernandez about the surge in AI-driven data centers and the massive computational backbone required to power today’s intelligent systems. They explore how organizations are adapting their infrastructure strategies to meet AI’s growing demands and what that means for security teams.</p><p>The conversation also shifts to the human side of AI in cybersecurity: using AI to prepare conference talks, staying relevant in public speaking, and learning from industry leaders like Caleb Sima and Phillip Wylie. 
Hernandez highlights the importance of networking within the cybersecurity community and how collaboration often becomes the strongest defense in an AI-driven threat landscape.</p><p>They close by discussing data privacy and the global impact of regulations like the General Data Protection Regulation (GDPR), emphasizing why privacy awareness must evolve alongside AI innovation.</p><p>This episode blends infrastructure, insight, and community revealing that securing AI is as much about people and principles as it is about technology.</p>]]></description><guid isPermaLink="false">4ccf8973-9f3f-49b4-be75-9d2f0730b24c</guid><dc:creator><![CDATA[Dr. Anmol Agarwal]]></dc:creator><pubDate>Sun, 01 Mar 2026 13:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/875424d30dbbadbd93d681d1a35c1eec47a4482d179f457a135383dfe68ca089/eyJlcGlzb2RlSWQiOiI0Y2NmODk3My05ZjNmLTQ5YjQtYmU3NS05ZDJmMDczMGIyNGMiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk5MWQ1NjJjMzlmMGM0ZTg1MWE0NzcxL2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0yLTE1X18xNS0xNy02Lm1wMyJ9.mp3" length="57685411" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode of &lt;i&gt;AI Security Update&lt;/i&gt;, Anmol Agarwal speaks with cybersecurity expert Dirce Eduardo Hernandez about the surge in AI-driven data centers and the massive computational backbone required to power today’s intelligent systems. They explore how organizations are adapting their infrastructure strategies to meet AI’s growing demands and what that means for security teams.&lt;/p&gt;&lt;p&gt;The conversation also shifts to the human side of AI in cybersecurity: using AI to prepare conference talks, staying relevant in public speaking, and learning from industry leaders like Caleb Sima and Phillip Wylie. 
Hernandez highlights the importance of networking within the cybersecurity community and how collaboration often becomes the strongest defense in an AI-driven threat landscape.&lt;/p&gt;&lt;p&gt;They close by discussing data privacy and the global impact of regulations like the General Data Protection Regulation (GDPR), emphasizing why privacy awareness must evolve alongside AI innovation.&lt;/p&gt;&lt;p&gt;This episode blends infrastructure, insight, and community, revealing that securing AI is as much about people and principles as it is about technology.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:40:04</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:season>1</itunes:season><itunes:episode>8</itunes:episode><itunes:title>Data Centers and Community with Dirce Eduardo Hernandez</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[🎬 From Film Sets to Firewalls: A Conversation with Aby Rao]]></title><description><![CDATA[<p>A cybersecurity veteran who’s also a filmmaker? 🎬🔐</p><p>In this episode, Anmol Agarwal talks with Aby Rao — 20 years in cybersecurity and a passion for storytelling — about the unexpected overlap between filmmaking and AI security.</p><p>They unpack how AI is reshaping creativity, where automation can dilute originality, and why insider threats and accountability matter more than ever in an AI-driven world.</p><p>Because whether you’re directing a film or deploying AI, what you build and how you control it defines the outcome.</p>]]></description><guid isPermaLink="false">08c026b8-4e69-488a-af15-50c10cc333c4</guid><dc:creator><![CDATA[Dr. 
Anmol Agarwal]]></dc:creator><pubDate>Sat, 28 Feb 2026 13:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/45b86ab6928274208df319894eece162a8fdf757441f7e36d03f7e0eb5bc67ea/eyJlcGlzb2RlSWQiOiIwOGMwMjZiOC00ZTY5LTQ4OGEtYWYxNS01MGMxMGNjMzMzYzQiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk5MTBmMDFjZDg5Nzc3MzY5NDYxNmM3L2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0yLTE1X18xLTEwLTQxLm1wMyJ9.mp3" length="46703325" type="audio/mpeg"/><itunes:summary>&lt;p&gt;A cybersecurity veteran who’s also a filmmaker? 🎬🔐&lt;/p&gt;&lt;p&gt;In this episode, Anmol Agarwal talks with Aby Rao — 20 years in cybersecurity and a passion for storytelling — about the unexpected overlap between filmmaking and AI security.&lt;/p&gt;&lt;p&gt;They unpack how AI is reshaping creativity, where automation can dilute originality, and why insider threats and accountability matter more than ever in an AI-driven world.&lt;/p&gt;&lt;p&gt;Because whether you’re directing a film or deploying AI, what you build and how you control it defines the outcome.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:32:26</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:season>1</itunes:season><itunes:episode>7</itunes:episode><itunes:title>🎬 From Film Sets to Firewalls: A Conversation with Aby Rao</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[AI, Star Trek, and Cybersecurity Reality With Kevin White]]></title><description><![CDATA[<p>AI can amplify cybersecurity, but only if it’s used wisely. 
In this episode of AI Security Update, host Anmol Agarwal talks with Kevin White, Solutions Engineer at Cloudflare about why Zero Trust is essential in an AI-driven world.</p><p>Using Star Trek as a playful yet insightful analogy, Kevin explains how AI can be a powerful tool like a calculator for security, but also how risks like prompt injection and data poisoning can cause real damage if left unchecked. They dive into practical ways to apply Zero Trust principles and context to make AI safer and more effective.</p><p>Whether you’re defending networks or exploring AI’s role in security, this episode offers actionable insights for using AI responsibly without losing control.</p>]]></description><guid isPermaLink="false">787d99cf-793a-4a6f-af27-3c1305e9ce46</guid><dc:creator><![CDATA[Dr. Anmol Agarwal]]></dc:creator><pubDate>Sat, 21 Feb 2026 13:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/d08fff1ac3a2b4fb68f689b745383730143f9b58cd05e825104d3dc32d0ab4a6/eyJlcGlzb2RlSWQiOiI3ODdkOTljZi03OTNhLTRhNmYtYWYyNy0zYzEzMDVlOWNlNDYiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk3NTFlMDE4OTJhOTkzNmQzM2VjNDBkL2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0xLTI0X18yMC0zMS0xMy5tcDMifQ==.mp3" length="17670541" type="audio/mpeg"/><itunes:summary>&lt;p&gt;AI can amplify cybersecurity, but only if it’s used wisely. In this episode of AI Security Update, host Anmol Agarwal talks with Kevin White, Solutions Engineer at Cloudflare about why Zero Trust is essential in an AI-driven world.&lt;/p&gt;&lt;p&gt;Using Star Trek as a playful yet insightful analogy, Kevin explains how AI can be a powerful tool like a calculator for security, but also how risks like prompt injection and data poisoning can cause real damage if left unchecked. 
They dive into practical ways to apply Zero Trust principles and context to make AI safer and more effective.&lt;/p&gt;&lt;p&gt;Whether you’re defending networks or exploring AI’s role in security, this episode offers actionable insights for using AI responsibly without losing control.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:28:28</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:season>1</itunes:season><itunes:episode>6</itunes:episode><itunes:title>AI, Star Trek, and Cybersecurity Reality With Kevin White</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[When AI Defends and Decides Who Gets Hired With Larci Robertson]]></title><description><![CDATA[<p>In this episode of AI Security Update, host Dr. Anmol Agarwal is joined by Larci Robertson, a cybersecurity veteran with over two decades of experience across military, government, and corporate environments. From her early work as a Signals Intelligence Analyst in the U.S. Navy and cyber threat intelligence at Navy Cyber Defense Operations Command, to her roles in enterprise security and community leadership, Larci brings a grounded, real-world perspective on AI in security.</p><p>Larci shares practical strategies for incident response and tabletop exercises, explaining how teams can move beyond check-the-box planning and actually prepare for real incidents. 
The discussion also dives into today’s cybersecurity job market, where AI is screening candidates while job seekers increasingly rely on AI themselves—creating an “AI vs AI” dynamic that’s changing how careers are built and evaluated.</p><p>They also explore why community engagement, information sharing, and human judgment remain critical as AI becomes more embedded in security operations.</p>]]></description><guid isPermaLink="false">d716d481-4715-4fbd-9731-006131a9e7b4</guid><dc:creator><![CDATA[Dr. Anmol Agarwal]]></dc:creator><pubDate>Sat, 14 Feb 2026 13:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/5f7a2e6d08b5044eed64912ea0dca8701de8e9f5000824977b76b7cdf7dee3cd/eyJlcGlzb2RlSWQiOiJkNzE2ZDQ4MS00NzE1LTRmYmQtOTczMS0wMDYxMzFhOWU3YjQiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk3NTNjYTJlMWMxMzBkMWIxNzAzZTdiL2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0xLTI0X18yMi00MS01NC5tcDMifQ==.mp3" length="19918388" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode of AI Security Update, host Dr. Anmol Agarwal is joined by Larci Robertson, a cybersecurity veteran with over two decades of experience across military, government, and corporate environments. From her early work as a Signals Intelligence Analyst in the U.S. Navy and cyber threat intelligence at Navy Cyber Defense Operations Command, to her roles in enterprise security and community leadership, Larci brings a grounded, real-world perspective on AI in security.&lt;/p&gt;&lt;p&gt;Larci shares practical strategies for incident response and tabletop exercises, explaining how teams can move beyond check-the-box planning and actually prepare for real incidents. 
The discussion also dives into today’s cybersecurity job market, where AI is screening candidates while job seekers increasingly rely on AI themselves—creating an “AI vs AI” dynamic that’s changing how careers are built and evaluated.&lt;/p&gt;&lt;p&gt;They also explore why community engagement, information sharing, and human judgment remain critical as AI becomes more embedded in security operations.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:28:29</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:season>1</itunes:season><itunes:episode>5</itunes:episode><itunes:title>When AI Defends and Decides Who Gets Hired With Larci Robertson</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[The AI Skynet Paradox with Nathan Chung]]></title><description><![CDATA[<p>In this episode of AI Security Update, Dr. Anmol Agarwal is joined by Nathan Chung, a cybersecurity leader with over 20 years of experience and a global neurodiversity advocate. The conversation dives into why traditional cybersecurity frameworks are struggling to keep up with rapidly evolving AI technologies. They also unpack the explosion of AI-enabled consumer products showcased at the Consumer Electronics Show (CES), raising critical questions about the security implications of embedding AI into everyday devices. From deepfakes and AI-powered cyberattacks to the need for responsible AI governance and continuous monitoring, the episode highlights why critical thinking and ethical AI use matter more than ever. Along the way, the discussion draws on pop culture references like WALL-E and Dune to illustrate both the promise and peril of AI, while acknowledging its potential to help address major societal challenges such as homelessness and hunger. 
</p>]]></description><guid isPermaLink="false">09efdd85-7d8d-4ee1-8d34-9a2422be3872</guid><dc:creator><![CDATA[Dr. Anmol Agarwal]]></dc:creator><pubDate>Sat, 24 Jan 2026 13:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/3363b0f992d5d62633d17f107dadda7e35c97e286f17eccc302acc983ce04309/eyJlcGlzb2RlSWQiOiIwOWVmZGQ4NS03ZDhkLTRlZTEtOGQzNC05YTI0MjJiZTM4NzIiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk3MDY5ZGE3YmQxNmVkOGI5ZjAyOTQ2L2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0xLTIxX182LTUzLTMwLm1wMyJ9.mp3" length="19408136" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode of AI Security Update, Dr. Anmol Agarwal is joined by Nathan Chung, a cybersecurity leader with over 20 years of experience and a global neurodiversity advocate. The conversation dives into why traditional cybersecurity frameworks are struggling to keep up with rapidly evolving AI technologies. They also unpack the explosion of AI-enabled consumer products showcased at the Consumer Electronics Show (CES), raising critical questions about the security implications of embedding AI into everyday devices. From deepfakes and AI-powered cyberattacks to the need for responsible AI governance and continuous monitoring, the episode highlights why critical thinking and ethical AI use matter more than ever. Along the way, the discussion draws on pop culture references like WALL-E and Dune to illustrate both the promise and peril of AI, while acknowledging its potential to help address major societal challenges such as homelessness and hunger. 
&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:31:21</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:season>1</itunes:season><itunes:episode>2</itunes:episode><itunes:title>The AI Skynet Paradox with Nathan Chung</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[The Security Risks No One Sees with Shannon Noonan]]></title><description><![CDATA[<p>As artificial intelligence becomes embedded in everyday business operations, organizations face growing AI security and compliance risks. In this episode of AI Security Update, host Dr. Anmol Agarwal sits down with Shannon Noonan, founder of HiNoon Consulting, to discuss the rise of shadow AI, when employees use unauthorized AI tools that can expose sensitive data and create serious cybersecurity gaps.</p><p>The conversation covers practical AI security best practices, the importance of employee education and monitoring, and how tools like the Software Bill of Materials (SBOM) and the emerging AI Bill of Materials help organizations understand and manage AI risk. 
Shannon also explores AI security challenges in regulated industries such as healthcare and why responsible AI governance is critical as adoption accelerates.</p><p>This episode is essential listening for CISOs, security leaders, compliance professionals, and anyone navigating AI governance, data protection, and cybersecurity in the age of AI.</p>]]></description><guid isPermaLink="false">17509e2c-16a5-444b-bf3c-3b15e73a73b6</guid><dc:creator><![CDATA[Dr. Anmol Agarwal]]></dc:creator><pubDate>Sat, 07 Feb 2026 13:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/ec1318d87029e1b9c8d2e976746f42ececc6dbe293bd289c3facc71cc87bdf9d/eyJlcGlzb2RlSWQiOiIxNzUwOWUyYy0xNmE1LTQ0NGItYmYzYy0zYjE1ZTczYTczYjYiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk2Yzc0ZDczMzZmYzBlNGNkOWQ1ZmY2L2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0xLTE4X182LTUxLTE5Lm1wMyJ9.mp3" length="20605390" type="audio/mpeg"/><itunes:summary>&lt;p&gt;As artificial intelligence becomes embedded in everyday business operations, organizations face growing AI security and compliance risks. In this episode of AI Security Update, host Dr. Anmol Agarwal sits down with Shannon Noonan, founder of HiNoon Consulting, to discuss the rise of shadow AI, when employees use unauthorized AI tools that can expose sensitive data and create serious cybersecurity gaps.&lt;/p&gt;&lt;p&gt;The conversation covers practical AI security best practices, the importance of employee education and monitoring, and how tools like the Software Bill of Materials (SBOM) and the emerging AI Bill of Materials help organizations understand and manage AI risk. 
Shannon also explores AI security challenges in regulated industries such as healthcare and why responsible AI governance is critical as adoption accelerates.&lt;/p&gt;&lt;p&gt;This episode is essential listening for CISOs, security leaders, compliance professionals, and anyone navigating AI governance, data protection, and cybersecurity in the age of AI.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:31:28</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:season>1</itunes:season><itunes:episode>4</itunes:episode><itunes:title>The Security Risks No One Sees with Shannon Noonan</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[AI Won’t Replace You. Here’s Why. With Santina White]]></title><description><![CDATA[<p>In this episode of AI Security Update, Dr. Anmol Agarwal sits down with Santina White to explore the evolving intersection of artificial intelligence and cybersecurity. Santina shares her career journey from the U.S. Air Force to becoming a data analyst at the U.S. Department of Homeland Security, highlighting how transferable skills play a critical role in building a successful cybersecurity career.</p><p>The conversation dives into how AI is being used in vulnerability testing and ethical hacking, reinforcing the idea that AI is a powerful tool to augment human expertise, not replace it. They also discuss the importance of securing critical infrastructure, along with key AI security challenges such as data privacy, trust, and the need for continuous human oversight.</p><p>Beyond technology, the episode touches on career pathways in cybersecurity, the value of mentorship, and the collaborative, supportive nature of the tech community. 
It’s an insightful look at how people, skills, and AI come together to shape the future of AI Security.</p>]]></description><guid isPermaLink="false">eff31623-6077-433a-b8ec-bfbc62d4d5d6</guid><dc:creator><![CDATA[Dr. Anmol Agarwal]]></dc:creator><pubDate>Sat, 31 Jan 2026 13:00:00 GMT</pubDate><enclosure url="https://api.riverside.fm/hosting-analytics/media/f1ba68af1b1448a66151de736266db55329b85f99b354fb798d49cbfafb991d0/eyJlcGlzb2RlSWQiOiJlZmYzMTYyMy02MDc3LTQzM2EtYjhlYy1iZmJjNjJkNGQ1ZDYiLCJwb2RjYXN0SWQiOiJlZDNlMjk2Yy03NDFmLTQ5MjEtOTg4Yy1kYmVjM2Q4NDc5M2MiLCJhY2NvdW50SWQiOiI2OTUzMDVmZjk4OTNjNmFkOWUwNjU0MDQiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk2OTc3YmJjZGIzYTkzOGUzODFlNjkyL2FubW9scy1zdHVkaW8tRWd5cmwtY29tcG9zZXItMjAyNi0xLTE2X18wLTI2LTUxLm1wMyJ9.mp3" length="26889758" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode of AI Security Update, Dr. Anmol Agarwal sits down with Santina White to explore the evolving intersection of artificial intelligence and cybersecurity. Santina shares her career journey from the U.S. Air Force to becoming a data analyst at the U.S. Department of Homeland Security, highlighting how transferable skills play a critical role in building a successful cybersecurity career.&lt;/p&gt;&lt;p&gt;The conversation dives into how AI is being used in vulnerability testing and ethical hacking, reinforcing the idea that AI is a powerful tool to augment human expertise, not replace it. They also discuss the importance of securing critical infrastructure, along with key AI security challenges such as data privacy, trust, and the need for continuous human oversight.&lt;/p&gt;&lt;p&gt;Beyond technology, the episode touches on career pathways in cybersecurity, the value of mentorship, and the collaborative, supportive nature of the tech community. 
It’s an insightful look at how people, skills, and AI come together to shape the future of AI Security.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:40:36</itunes:duration><itunes:image href="https://hosting-media.rs-prod.riverside.fm/media/podcasts/ed3e296c-741f-4921-988c-dbec3d84793c/logos/09b98283-ffec-46f1-9853-8fa8f01eca0c.png"/><itunes:season>1</itunes:season><itunes:episode>3</itunes:episode><itunes:title>AI Won’t Replace You. Here’s Why. With Santina White</itunes:title><itunes:episodeType>full</itunes:episodeType></item></channel></rss>