<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:psc="http://podlove.org/simple-chapters" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><title><![CDATA[Deploy Securely]]></title><description><![CDATA[<p>Manage risk at the junction of artificial intelligence and software security.</p>]]></description><link>https://www.buzzsprout.com/2266002</link><generator>Riverside.fm (https://riverside.com)</generator><lastBuildDate>Sun, 10 May 2026 01:38:36 GMT</lastBuildDate><atom:link href="https://api.riverside.com/hosting/gNmVqubs.rss" rel="self" type="application/rss+xml"/><author><![CDATA[StackAware]]></author><pubDate>Tue, 03 Feb 2026 13:36:30 GMT</pubDate><copyright><![CDATA[2026 StackAware]]></copyright><language><![CDATA[en]]></language><ttl>60</ttl><category><![CDATA[Technology]]></category><itunes:author>StackAware</itunes:author><itunes:summary>&lt;p&gt;Manage risk at the junction of artificial intelligence and software security.&lt;/p&gt;</itunes:summary><itunes:type>episodic</itunes:type><itunes:owner><itunes:name>StackAware</itunes:name><itunes:email>walter@stackaware.com</itunes:email></itunes:owner><itunes:explicit>no</itunes:explicit><itunes:category text="Technology"/><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><item><title><![CDATA[Aware AI - April 2026]]></title><description><![CDATA[<p>Cameron and I talked about:<br /><br />- How large does a company have to be to recommend ISO 42001?<br />- How do you avoid constant turnover of your AI inventory (you don't)?<br />- What minimum documentation is needed per AI use case? (or before bringing on an AI system or vendor?) 
<br />- What’s a realistic training and awareness program about AI for non-technical staff?<br />- Are the following over- / under- / properly-hyped?<br />-- Running AI locally<br />-- AI Governance Tools/Platforms<br />-- Model eval benchmarks (SWE‑bench, red‑team scores, "safety" leaderboards) <br />-- “Innovation‑first,” light‑touch approach to AI Governance<br />-- Claude Mythos</p>]]></description><guid isPermaLink="false">2b02dba4-fa35-406d-9c2c-e0819bc59b66</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Wed, 29 Apr 2026 15:29:14 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/f44ab8a618574d966cbb164bb698178ec5bc7ee3ca41fd8b4d751dba36f40d4e/eyJlcGlzb2RlSWQiOiIyYjAyZGJhNC1mYTM1LTQwNmQtOWMyYy1lMDgxOWJjNTliNjYiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjlmMjIzNWZmZjI1NDhiZmQyOTU2Yjc3L2F3YXJlLWFpLWJyaWVmLWNvbXBvc2VyLTIwMjYtNC0yOV9fMTctMjctMjcubXAzIn0=.mp3" length="57507049" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.riverside.com/media/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/episodes/2b02dba4-fa35-406d-9c2c-e0819bc59b66/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;Cameron and I talked about:&lt;br /&gt;&lt;br /&gt;- How large does a company have to be to recommend ISO 42001?&lt;br /&gt;- How do you avoid constant turnover of your AI inventory (you don&apos;t)?&lt;br /&gt;- What minimum documentation is needed per AI use case? (or before bringing on an AI system or vendor?) 
&lt;br /&gt;- What’s a realistic training and awareness program about AI for non-technical staff?&lt;br /&gt;- Are the following over- / under- / properly-hyped?&lt;br /&gt;-- Running AI locally&lt;br /&gt;-- AI Governance Tools/Platforms&lt;br /&gt;-- Model eval benchmarks (SWE‑bench, red‑team scores, &quot;safety&quot; leaderboards) &lt;br /&gt;-- “Innovation‑first,” light‑touch approach to AI Governance&lt;br /&gt;-- Claude Mythos&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:29:57</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Aware AI - April 2026</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[ISO 42001 deep dive]]></title><description><![CDATA[<p>I talked with David Forman, CEO of Mastermind. The company is the only pure-play ISO certification body in the U.S. and was the first worldwide to issue an ISO 42001 certificate. 
We discussed:</p><p></p><p>- Misconceptions (and reality) about the standard</p><p>- How to scope the AIMS (and audit)</p><p>- Traveling the world as a founder</p><p>- Surveillance pricing (side quest)</p>]]></description><guid isPermaLink="false">bd0d2a18-66fa-4ad6-9260-6ec6c737ff73</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Sat, 21 Mar 2026 12:45:18 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/af1b567811b90ed2aa2d6f81b74e59806d3a70b37621e5f7918861caa338ab6a/eyJlcGlzb2RlSWQiOiJiZDBkMmExOC02NmZhLTRhZDYtOTI2MC02ZWM2YzczN2ZmNzMiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjliZTkyMmIyZGRlZWJhNGRjYjM3N2MxL2RlcGxveS1zZWN1cmVseS1wb2RjYXN0LWNvbXBvc2VyLTIwMjYtMy0yMV9fMTMtNDItMTgubXAzIn0=.mp3" length="69239790" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.riverside.com/media/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/episodes/bd0d2a18-66fa-4ad6-9260-6ec6c737ff73/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;I talked with David Forman, CEO of Mastermind. The company is the only pure-play ISO certification body in the U.S. and was the first worldwide to issue an ISO 42001 certificate. 
We discussed:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;- Misconceptions (and reality) about the standard&lt;/p&gt;&lt;p&gt;- How to scope the AIMS (and audit)&lt;/p&gt;&lt;p&gt;- Traveling the world as a founder&lt;/p&gt;&lt;p&gt;- Surveillance pricing (side quest)&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:48:05</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>ISO 42001 deep dive</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Audit-ready AI]]></title><description><![CDATA[<p>I spoke with Danny Manimbo, Managing Principal at Schellman, who leads the firm’s AI governance and ISO assurance services. Danny and I talked about:</p><p></p><ul><li>What firms miss when preparing for ISO 42001 audits.</li><li>How they continually improve and what metrics they track.</li><li>The role that ultra-marathon running played in Danny’s personal and professional life.</li></ul>]]></description><guid isPermaLink="false">b9e5e89b-2054-4356-84f9-9b1fbe9ee6f5</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Mon, 09 Mar 2026 20:26:17 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/c4c0c3baa95a9c1b2bd2add5ae17759a1a9bddc1d88c009a6c6afc4ac3c74793/eyJlcGlzb2RlSWQiOiJiOWU1ZTg5Yi0yMDU0LTQzNTYtODRmOS05YjFmYmU5ZWU2ZjUiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjlhZjJiZjJlNzU4ZWIzN2VkODIzMmE3L2RlcGxveS1zZWN1cmVseS1wb2RjYXN0LWNvbXBvc2VyLTIwMjYtMy05X18yMS0yMi0xMC5tcDMifQ==.mp3" length="73988851" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.riverside.com/media/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/episodes/b9e5e89b-2054-4356-84f9-9b1fbe9ee6f5/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;I spoke with 
Danny Manimbo, Managing Principal at Schellman, who leads the firm’s AI governance and ISO assurance services. Danny and I talked about:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;What firms miss when preparing for ISO 42001 audits.&lt;/li&gt;&lt;li&gt;How they continually improve and what metrics they track.&lt;/li&gt;&lt;li&gt;The role that ultra-marathon running played in Danny’s personal and professional life.&lt;/li&gt;&lt;/ul&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:51:23</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Audit-ready AI</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA["High-risk" AI, vibe coding, and regulatory gymnastics]]></title><description><![CDATA[<p>Cameron Gaudet and I chatted on the <b>Aware AI Brief</b> about<b>:</b></p><p></p><ul><li>How do you determine "high-risk"? (Additionally WHO determines a "high-risk" AI system?)</li></ul><ul><li>What does "AI Bias" actually mean? 
What is being evaluated and measured?</li><li>Is vibe coding "good"?</li></ul><p></p><p>Here is the reference database we discussed: <a rel="noopener noreferrer nofollow" href="https://reference.stackaware.com/" target="_blank">https://reference.stackaware.com/</a></p>]]></description><guid isPermaLink="false">07935419-b114-4243-8e63-9e6ca366d9c2</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Fri, 06 Mar 2026 19:58:07 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/48f7312a95be03dd1f4f01e774fe217cf3a034ec06110194c6e0696afd2a5274/eyJlcGlzb2RlSWQiOiIwNzkzNTQxOS1iMTE0LTQyNDMtOGU2My05ZTZjYTM2NmQ5YzIiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjlhYjMxMDhmYjgxMTE2MmZiNjU3MWRkL2F3YXJlLWFpLWJyaWVmLWNvbXBvc2VyLTIwMjYtMy02X18yMC01NC00Ny5tcDMifQ==.mp3" length="36297292" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.riverside.com/media/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/episodes/07935419-b114-4243-8e63-9e6ca366d9c2/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;Cameron Gaudet and I chatted on the &lt;b&gt;Aware AI Brief&lt;/b&gt; about&lt;b&gt;:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;How do you determine &quot;high-risk&quot;? (Additionally WHO determines a &quot;high-risk&quot; AI system?)&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;What does &quot;AI Bias&quot; actually mean? 
What is being evaluated and measured?&lt;/li&gt;&lt;li&gt;Is vibe coding &quot;good&quot;?&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Here is the reference database we discussed: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://reference.stackaware.com/&quot; target=&quot;_blank&quot;&gt;https://reference.stackaware.com/&lt;/a&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:25:12</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>&quot;High-risk&quot; AI, vibe coding, and regulatory gymnastics</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Anthropic, AI;DR, and (more) slop - Steve and Walter talk AI, March 2026]]></title><description><![CDATA[<p>To kick off March 2026, Steve and I talked:<br /><br />- Anthropic's big moves when it comes to:<br />-- Claude Code Security: <a rel="noopener noreferrer nofollow" href="https://www.anthropic.com/news/claude-code-security" target="_blank">https://www.anthropic.com/news/claude-code-security</a><br />-- Facing off against the Department of War: <a rel="noopener noreferrer nofollow" href="https://www.anthropic.com/news/statement-comments-secretary-war" target="_blank">https://www.anthropic.com/news/statement-comments-secretary-war</a><br /><br />- The emergence of "AI;DR" as a phrase: <a rel="noopener noreferrer nofollow" href="https://futurism.com/artificial-intelligence/aidr-meaning" target="_blank">https://futurism.com/artificial-intelligence/aidr-meaning</a><br /><br />- Citrini Research's doomsday report: <a rel="noopener noreferrer nofollow" href="https://www.citriniresearch.com/p/2028gic" target="_blank">https://www.citriniresearch.com/p/2028gic</a><br /><br />- More AI-generated slop: <a rel="noopener noreferrer nofollow"
href="https://futurism.com/artificial-intelligence/ai-film-pulled-from-amc-theaters" target="_blank">https://futurism.com/artificial-intelligence/ai-film-pulled-from-amc-theaters</a></p>]]></description><guid isPermaLink="false">e0af53d6-2282-472e-9c02-d7dad2f00f3d</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Tue, 03 Mar 2026 22:10:06 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/83b9c0f4bf0a8dd2e61669efef30c91a083ae2038fd5ccb369af284782c0b48e/eyJlcGlzb2RlSWQiOiJlMGFmNTNkNi0yMjgyLTQ3MmUtOWMwMi1kN2RhZDJmMDBmM2QiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjlhNzUzYWFmYTZjMTZhNGIwNDNhMjMzL3N0ZXZlLWFuZC13YWx0ZXItdGFsay1haS1jb21wb3Nlci0yMDI2LTMtM19fMjItMzMtMzAubXAzIn0=.mp3" length="68605955" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.riverside.com/media/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/episodes/e0af53d6-2282-472e-9c02-d7dad2f00f3d/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;To kick off March 2026, Steve and I talked:&lt;br /&gt;&lt;br /&gt;- Anthropic&apos;s big moves when it comes to:&lt;br /&gt;-- ​Claude Code Security​: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.anthropic.com/news/claude-code-security&quot; target=&quot;_blank&quot;&gt;https://www.anthropic.com/news/claude-code-security&lt;/a&gt;&lt;br /&gt;-- Facing ​off​ against the Department of War: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.anthropic.com/news/statement-comments-secretary-war&quot; target=&quot;_blank&quot;&gt;https://www.anthropic.com/news/statement-comments-secretary-war&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;- The emergence of &quot;​AI;DR​&quot; as a phrase: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://futurism.com/artificial-intelligence/aidr-meaning&quot; 
target=&quot;_blank&quot;&gt;https://futurism.com/artificial-intelligence/aidr-meaning&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;- Citrini Research&apos;s doomsday ​report​: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.citriniresearch.com/p/2028gic&quot; target=&quot;_blank&quot;&gt;https://www.citriniresearch.com/p/2028gic&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;- More AI-generated ​slop: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://futurism.com/artificial-intelligence/ai-film-pulled-from-amc-theaters&quot; target=&quot;_blank&quot;&gt;https://futurism.com/artificial-intelligence/ai-film-pulled-from-amc-theaters&lt;/a&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:47:39</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Anthropic, AI;DR, and (more) slop - Steve and Walter talk AI, March 2026</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[The State of AI Red Teaming in 2026]]></title><description><![CDATA[<p>I spoke with Kujtim Kryeziu, Co-Founder and CEO of Sentry Security, about:<br /><br />- How companies can tackle the biggest risks in their AI applications<br />- What he sees as the biggest blind spots in heavily-regulated spaces<br />- The future of AI red-teaming and the role for human experts<br /><br />Here's the AI Risk Readiness Kit we discussed: <a rel="noopener noreferrer nofollow" href="https://kit.stackaware.com/" target="_blank">https://kit.stackaware.com/</a><br /><br />And here's the Sentry Security Research Blog: <a rel="noopener noreferrer nofollow" href="https://blog.sentry.security/" target="_blank">https://blog.sentry.security/</a></p>]]></description><guid isPermaLink="false">706c2854-d046-420e-8fe2-8843522c7f0b</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Wed, 11 Feb 2026 
20:29:42 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/078aa2199dccda54c6585f0884e2e2b8beedb4d2edf569b5cee27be3426c06a7/eyJlcGlzb2RlSWQiOiI3MDZjMjg1NC1kMDQ2LTQyMGUtOGZlMi04ODQzNTIyYzdmMGIiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk4Y2RkOTE4YjVlOTU0MjhjOGJkZmQ0L2RlcGxveS1zZWN1cmVseS1wb2RjYXN0LWNvbXBvc2VyLTIwMjYtMi0xMV9fMjAtNTAtNDEubXAzIn0=.mp3" length="32081756" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I spoke with Kujtim Kryeziu, Co-Founder and CEO of Sentry Security, about:&lt;br /&gt;&lt;br /&gt;- How companies can tackle the biggest risks in their AI applications&lt;br /&gt;- What he sees as the biggest blind spots in heavily-regulated spaces&lt;br /&gt;- The future of AI red-teaming and the role for human experts&lt;br /&gt;&lt;br /&gt;Here&apos;s the AI Risk Readiness Kit we discussed: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://kit.stackaware.com/&quot; target=&quot;_blank&quot;&gt;https://kit.stackaware.com/&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;And here&apos;s the Sentry Security Research Blog: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://blog.sentry.security/&quot; target=&quot;_blank&quot;&gt;https://blog.sentry.security/&lt;/a&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:22:17</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>The State of AI Red Teaming in 2026</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[AI governance at enterprise scale]]></title><description><![CDATA[<p>I spoke with Oliver Patel, Head of Enterprise AI Governance at AstraZeneca, about how to run AI governance at global scale.<br /><br />We will cover:<br /><br />- How he builds AI 
governance that works across teams, tools, and regions <br />- The EU AI Act, in plain language, and what enterprises must do next <br />- Practical controls for AI risk, from model intake to ongoing monitoring <br />- How compliance, legal, and business leaders can share one playbook <br />- What he's learning while writing the book "Fundamentals of AI Governance"</p>]]></description><guid isPermaLink="false">0b7d38eb-44cd-455a-8d79-19232f308656</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Thu, 05 Feb 2026 20:01:23 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/8fcbb0ad7c3844c7a8703f923ff42d550445c9198ede76a84db9fcfcd7f5380f/eyJlcGlzb2RlSWQiOiIwYjdkMzhlYi00NGNkLTQ1NWEtOGQ3OS0xOTIzMmYzMDg2NTYiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk4NGY2NjdiYzM2ODgzNTg1NzEwNDc0L2RlcGxveS1zZWN1cmVseS1wb2RjYXN0LWNvbXBvc2VyLTIwMjYtMi01X18yMC01OC0zMS5tcDMifQ==.mp3" length="55595094" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I spoke with Oliver Patel, Head of Enterprise AI Governance at AstraZeneca, about how to run AI governance at global scale.&lt;br /&gt;&lt;br /&gt;We will cover:&lt;br /&gt;&lt;br /&gt;- How he builds AI governance that works across teams, tools, and regions &lt;br /&gt;- The EU AI Act, in plain language, and what enterprises must do next &lt;br /&gt;- Practical controls for AI risk, from model intake to ongoing monitoring &lt;br /&gt;- How compliance, legal, and business leaders can share one playbook &lt;br /&gt;- What he&apos;s learning while writing the book &quot;Fundamentals of AI Governance&quot;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:38:36</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>AI governance at enterprise 
scale</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Steve and Walter talk AI - February 2026]]></title><description><![CDATA[<p>This month Steve and I talked about:<br /><br />- Clawdbot/Moltbot/OpenClaw security considerations: <a rel="noopener noreferrer nofollow" href="https://openclaw.ai/" target="_blank">https://openclaw.ai/</a><br /><br />- Rent-a-Human: <a rel="noopener noreferrer nofollow" href="https://rentahuman.ai" target="_blank">https://rentahuman.ai</a><br /><br />- Nvidia's imperiled $100B investment in OpenAI: <a rel="noopener noreferrer nofollow" href="https://www.wsj.com/tech/ai/the-100-billion-megadeal-between-openai-and-nvidia-is-on-ice-aa3025e3" target="_blank">https://www.wsj.com/tech/ai/the-100-billion-megadeal-between-openai-and-nvidia-is-on-ice-aa3025e3</a><br /><br />- Svedka's AI Super Bowl Ad: <a rel="noopener noreferrer nofollow" href="https://www.mashed.com/2091485/svedka-super-bowl-ad-2026-ai-hellscape/" target="_blank">https://www.mashed.com/2091485/svedka-super-bowl-ad-2026-ai-hellscape/</a></p>]]></description><guid isPermaLink="false">876507ce-f356-4b4f-950a-8dcf44ad3657</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Tue, 03 Feb 2026 21:28:11 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/62932ef04951896295e8fd7c9709567930672fb0820e6948f1c7b07860ffb829/eyJlcGlzb2RlSWQiOiI4NzY1MDdjZS1mMzU2LTRiNGYtOTUwYS04ZGNmNDRhZDM2NTciLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk4MjYzMTNhZjUzNjczOGU0NDgzMDhkL3N0ZXZlLWFuZC13YWx0ZXItdGFsay1haS1jb21wb3Nlci0yMDI2LTItM19fMjItNS0yMy5tcDMifQ==.mp3" length="69851785" type="audio/mpeg"/><itunes:summary>&lt;p&gt;This month Steve and I talked about:&lt;br /&gt;&lt;br /&gt;- Clawdbot/Moltbot/OpenClaw security considerations: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://openclaw.ai/&quot; 
target=&quot;_blank&quot;&gt;https://openclaw.ai/&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;- Rent-a-Human: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://rentahuman.ai&quot; target=&quot;_blank&quot;&gt;https://rentahuman.ai&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;- Nvidia&apos;s imperiled $100B investment in OpenAI: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.wsj.com/tech/ai/the-100-billion-megadeal-between-openai-and-nvidia-is-on-ice-aa3025e3&quot; target=&quot;_blank&quot;&gt;https://www.wsj.com/tech/ai/the-100-billion-megadeal-between-openai-and-nvidia-is-on-ice-aa3025e3&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;- Svedka&apos;s AI Super Bowl Ad: &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.mashed.com/2091485/svedka-super-bowl-ad-2026-ai-hellscape/&quot; target=&quot;_blank&quot;&gt;https://www.mashed.com/2091485/svedka-super-bowl-ad-2026-ai-hellscape/&lt;/a&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:48:30</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Steve and Walter talk AI - February 2026</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Accelerating AI governance at Embold Health]]></title><description><![CDATA[<p>No sector is more in need of effective, well-governed AI than healthcare.</p><p>The United States <a href="https://ourworldindata.org/grapher/life-expectancy-vs-health-expenditure" rel="noopener noreferrer nofollow">spends</a> vastly more per person than any other nation, yet is in the middle of the pack when it comes to life expectancy.</p><p>That’s why I was so excited to work with <a href="https://emboldhealth.com/" rel="noopener noreferrer nofollow">Embold Health</a> to measure and manage their AI-related cybersecurity, compliance, and privacy risk.</p><p>Recently I had the 
pleasure of speaking with their Chief Security and Privacy Officer, Steve Dufour, and Vice President of Engineering, Mark Blackham on the Deploy Securely podcast.</p><p>We went in depth on how they:</p><ul><li>Deliver value with AI</li><li>Protect patient data and their intellectual property</li><li>Are thinking about the future of AI (governance) in healthcare</li></ul><p>Need your own AI risk assessment and governance program build-out?<br /><br />Book a call at https://contact.stackaware.com.<br /><br />*** Show notes ***<br /><br />At 20:58, Steve refers to the Society for Information Management (https://www.simnet.org/home).<br /><br />At 34:10, Walter refers to an article about intellectual property risk management and AI (https://blog.stackaware.com/p/intellectual-property-artificial-intelligence).</p>]]></description><guid isPermaLink="false">Buzzsprout-15381066</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Mon, 08 Jul 2024 18:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/3f919c93e3885515f0e13aa019c9cd623fd0af75272965d4a2cf18c6ffcc5c7d/eyJlcGlzb2RlSWQiOiJkNmU0ZTA3Yi1iNzhjLTRjNGItYjYxZC0xZTU2YTBmNjlmNzgiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvZDZlNGUwN2ItYjc4Yy00YzRiLWI2MWQtMWU1NmEwZjY5Zjc4LzE1MzgxMDY2LWFjY2VsZXJhdGluZy1haS1nb3Zlcm5hbmNlLWF0LWVtYm9sZC1oZWFsdGgubXAzIn0=.mp3" length="28498780" type="audio/mpeg"/><itunes:summary>&lt;p&gt;No sector is more in need of effective, well-governed AI than healthcare.&lt;/p&gt;&lt;p&gt;The United States &lt;a href=&quot;https://ourworldindata.org/grapher/life-expectancy-vs-health-expenditure&quot; rel=&quot;noopener noreferrer nofollow&quot;&gt;spends&lt;/a&gt; vastly more per person than any other nation, yet is in the middle of the pack when it comes to life 
expectancy.&lt;/p&gt;&lt;p&gt;That’s why I was so excited to work with &lt;a href=&quot;https://emboldhealth.com/&quot; rel=&quot;noopener noreferrer nofollow&quot;&gt;Embold Health&lt;/a&gt; to measure and manage their AI-related cybersecurity, compliance, and privacy risk.&lt;/p&gt;&lt;p&gt;Recently I had the pleasure of speaking with their Chief Security and Privacy Officer, Steve Dufour, and Vice President of Engineering, Mark Blackham on the Deploy Securely podcast.&lt;/p&gt;&lt;p&gt;We went in depth on how they:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Deliver value with AI&lt;/li&gt;&lt;li&gt;Protect patient data and their intellectual property&lt;/li&gt;&lt;li&gt;Are thinking about the future of AI (governance) in healthcare&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Need your own AI risk assessment and governance program build-out?&lt;br /&gt;&lt;br /&gt;Book a call at https://contact.stackaware.com.&lt;br /&gt;&lt;br /&gt;*** Show notes ***&lt;br /&gt;&lt;br /&gt;At 20:58, Steve refers to the Society for Information Management (https://www.simnet.org/home).&lt;br /&gt;&lt;br /&gt;At 34:10, Walter refers to an article about intellectual property risk management and AI (https://blog.stackaware.com/p/intellectual-property-artificial-intelligence).&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:39:30</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Accelerating AI governance at Embold Health</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Code Llama: 5-minute risk analysis]]></title><description><![CDATA[<p>Someone asked me what the unintended training and data retention risk with Meta's code Llama is.<br /><br />My answer:<br /><br />the same as every other model you host and operate on your own.<br /><br />And, all other things being equal, it's lower than that of anything 
operating -as-a-Service (-aaS) like ChatGPT or Claude.<br /><br />Check out this video for a deeper dive.<br /><br />Or read the full post on Deploy Securely: https://blog.stackaware.com/p/code-llama-self-hosted-model-unintended-training<br /><br />Want more AI security resources? Check out: https://products.stackaware.com/</p>]]></description><guid isPermaLink="false">Buzzsprout-14142560</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Wed, 13 Dec 2023 18:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/96a3e69bcc88392ae08cc4a919d22c23435fe3fecdfc8af52d79abbfed12614c/eyJlcGlzb2RlSWQiOiIzYzAwYmU4YS1kZDg5LTQ2ZWItOTNlYi02MzY2NzcwMDk4N2QiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvM2MwMGJlOGEtZGQ4OS00NmViLTkzZWItNjM2Njc3MDA5ODdkLzE0MTQyNTYwLWNvZGUtbGxhbWEtNS1taW51dGUtcmlzay1hbmFseXNpcy5tcDMifQ==.mp3" length="3452939" type="audio/mpeg"/><itunes:summary>&lt;p&gt;Someone asked me what the unintended training and data retention risk with Meta&apos;s code Llama is.&lt;br /&gt;&lt;br /&gt;My answer:&lt;br /&gt;&lt;br /&gt;the same as every other model you host and operate on your own.&lt;br /&gt;&lt;br /&gt;And, all other things being equal, it&apos;s lower than that of anything operating -as-a-Service (-aaS) like ChatGPT or Claude.&lt;br /&gt;&lt;br /&gt;Check out this video for a deeper dive.&lt;br /&gt;&lt;br /&gt;Or read the full post on Deploy Securely: https://blog.stackaware.com/p/code-llama-self-hosted-model-unintended-training&lt;br /&gt;&lt;br /&gt;Want more AI security resources? 
Check out: https://products.stackaware.com/&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:04:43</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Code Llama: 5-minute risk analysis</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[AI Action Plan, "tool-squatting" attacks, jobless college grads, and insurance for AI]]></title><description><![CDATA[<p>Federal AI action plan: <br />https://www.ai.gov/action-plan<br /><br />Tool-squatting attack paper: https://arxiv.org/pdf/2504.19951<br /><br />Burning Glass Institute report: <br />https://static1.squarespace.com/static/6197797102be715f55c0e0a1/t/6889055d25352c5b3f28c202/1753810269213/No+Country+for+Young+Grads+V_Final7.29.25+%281%29.pdf<br /><br />AIUC: https://aiuc.com</p>]]></description><guid isPermaLink="false">Buzzsprout-17624864</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Wed, 06 Aug 2025 15:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/5736fe15bc9b4eaf189a9ff286a2514d9abbb868a18698363db3769e4366ba75/eyJlcGlzb2RlSWQiOiIyZjExMDU0Yy1mNGNjLTQ2ZmItOTFiNS01MzRlYmZkYjQyYmYiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvMmYxMTA1NGMtZjRjYy00NmZiLTkxYjUtNTM0ZWJmZGI0MmJmLzE3NjI0ODY0LWFpLWFjdGlvbi1wbGFuLXRvb2wtc3F1YXR0aW5nLWF0dGFja3Mtam9ibGVzcy1jb2xsZWdlLWdyYWRzLWFuZC1pbnN1cmFuY2UtZm9yLWFpLm1wMyJ9.mp3" length="26863248" type="audio/mpeg"/><itunes:summary>&lt;p&gt;Federal AI action plan: &lt;br /&gt;https://www.ai.gov/action-plan&lt;br /&gt;&lt;br /&gt;Tool-squatting attack paper: https://arxiv.org/pdf/2504.19951&lt;br /&gt;&lt;br /&gt;Burning Glass Institute report: &lt;br 
/&gt;https://static1.squarespace.com/static/6197797102be715f55c0e0a1/t/6889055d25352c5b3f28c202/1753810269213/No+Country+for+Young+Grads+V_Final7.29.25+%281%29.pdf&lt;br /&gt;&lt;br /&gt;AIUC: https://aiuc.com&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:37:14</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>AI Action Plan, &quot;tool-squatting&quot; attacks, jobless college grads, and insurance for AI</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Governing AI in a Fortune 500 (or 25!) healthcare firm]]></title><description><![CDATA[<p>I was excited to host Rick Doten, a powerhouse in cybersecurity, to discuss:</p><ul><li>Key insights from his time as CISO at healthcare giant Centene</li><li>The ethical nuances of AI governance in the space</li><li>His experiences advising venture capital firms and cybersecurity startups</li></ul>]]></description><guid isPermaLink="false">Buzzsprout-17893788</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Tue, 23 Sep 2025 19:00:00 GMT</pubDate><enclosure 
url="https://api.riverside.com/hosting-analytics/media/4fef4b520c9f13ed0b3bf95eed25a2454239441794c1e7891188e1c45c76fa95/eyJlcGlzb2RlSWQiOiI5N2I4ZDNmZi0yN2FjLTQ0MjgtOWJlMi02N2QwZjczZTYzNjkiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvOTdiOGQzZmYtMjdhYy00NDI4LTliZTItNjdkMGY3M2U2MzY5LzE3ODkzNzg4LWdvdmVybmluZy1haS1pbi1hLWZvcnR1bmUtNTAwLW9yLTI1LWhlYWx0aGNhcmUtZmlybS5tcDMifQ==.mp3" length="23811562" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I was excited to host Rick Doten, a powerhouse in cybersecurity, to discuss:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Key insights from his time as CISO at healthcare giant Centene&lt;/li&gt;&lt;li&gt;The ethical nuances of AI governance in the space&lt;/li&gt;&lt;li&gt;His experiences advising venture capital firms and cybersecurity startups&lt;/li&gt;&lt;/ul&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:33:00</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Governing AI in a Fortune 500 (or 25!) 
healthcare firm</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[AI regulation ban, NVIDIA chip exports, and brain rot in LLMs]]></title><description><![CDATA[<p>Steve Dufour and I had our monthly AI-related discussion, touching on:</p><ul><li>The Trump Administration EO <a href="https://www.reuters.com/world/trump-says-he-will-sign-executive-order-this-week-ai-approval-process-2025-12-08/" rel="noopener noreferrer nofollow">​preempting​</a> state AI regulation</li><li>NVIDIA's H200 chip <a href="https://www.semafor.com/article/12/09/2025/trump-says-nvidia-can-sell-h200-ai-chips-to-china" rel="noopener noreferrer nofollow">​export​</a></li><li>"<a href="https://arxiv.org/pdf/2510.13928" rel="noopener noreferrer nofollow">​Brain Rot​</a>" in Large Language Models</li><li>Crowdsourced data poisoning [this <a href="https://www.reddit.com/r/TrueFactzOnly/" rel="noopener noreferrer nofollow">​site​</a> is fake news]</li><li>Apple's "<a href="https://www.bloomberg.com/news/articles/2025-12-09/apple-stock-surges-as-ai-weary-mood-grips-wall-street" rel="noopener noreferrer nofollow">​Slow​</a>" AI Strategy as a positive</li></ul>]]></description><guid isPermaLink="false">Buzzsprout-18329951</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Wed, 10 Dec 2025 09:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/3fef745fabd06cfbb780b59f8cb63b81cd1105d4e78c1e44656439b3f922a9bb/eyJlcGlzb2RlSWQiOiJjNTY2YzZmNy0yOWMzLTQ1OWYtYjhhYi01NDVhMTFmOWIxZmIiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvYzU2NmM2ZjctMjljMy00NTlmLWI4YWItNTQ1YTExZjliMWZiLzE4MzI5OTUxLWFpLXJlZ3VsYXRpb24tYmFuLW52aWRpYS1jaGlwLWV4cG9ydHMtYW5kLWJyYWluLXJvdC1pbi1sbG1zLm1wMyJ9.mp3" length="30629535" type="audio/mpeg"/><itunes:summary>&lt;p&gt;Steve Dufour and I 
had our monthly AI-related discussion, touching on:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The Trump Administration EO &lt;a href=&quot;https://www.reuters.com/world/trump-says-he-will-sign-executive-order-this-week-ai-approval-process-2025-12-08/&quot; rel=&quot;noopener noreferrer nofollow&quot;&gt;​preempting​&lt;/a&gt; state AI regulation&lt;/li&gt;&lt;li&gt;NVIDIA&apos;s H200 chip &lt;a href=&quot;https://www.semafor.com/article/12/09/2025/trump-says-nvidia-can-sell-h200-ai-chips-to-china&quot; rel=&quot;noopener noreferrer nofollow&quot;&gt;​export​&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&quot;&lt;a href=&quot;https://arxiv.org/pdf/2510.13928&quot; rel=&quot;noopener noreferrer nofollow&quot;&gt;​Brain Rot​&lt;/a&gt;&quot; in Large Language Models&lt;/li&gt;&lt;li&gt;Crowdsourced data poisoning [this &lt;a href=&quot;https://www.reddit.com/r/TrueFactzOnly/&quot; rel=&quot;noopener noreferrer nofollow&quot;&gt;​site​&lt;/a&gt; is fake news]&lt;/li&gt;&lt;li&gt;Apple&apos;s &quot;&lt;a href=&quot;https://www.bloomberg.com/news/articles/2025-12-09/apple-stock-surges-as-ai-weary-mood-grips-wall-street&quot; rel=&quot;noopener noreferrer nofollow&quot;&gt;​Slow​&lt;/a&gt;&quot; AI Strategy as a positive&lt;/li&gt;&lt;/ul&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:42:28</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>AI regulation ban, NVIDIA chip exports, and brain rot in LLMs</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Getting patients to better doctors, faster with generative AI]]></title><description><![CDATA[<p>The basics of healthcare can often be a nightmare:<br /><br />- Finding the right doctor<br />- Setting up an appointment<br />- Getting simple questions answered<br /><br />While these things might seem like an inconvenience, on the grand scale they cost a lot - 
of money, and unfortunately, lives.<br /><br />That’s why the Embold Virtual Assistant (EVA) is such a breakthrough.<br /><br />A generative AI-powered chatbot with access to up-to-date doctor listings and performance ratings, it’s literally a lifesaver.<br /><br />StackAware was honored to conduct a pre-deployment AI risk assessment and penetration test for EVA on behalf of our client Embold Health.<br /><br />Following up on our previous discussion, I sat down again with Steve Dufour and Mark Blackham to discuss the product’s development and rollout.<br /><br />We chatted about:<br /><br />- EVA’s performance metrics<br />- Cybersecurity, compliance, and privacy issues<br />- The future of AI governance and product development in healthcare<br /><br />Bonus: Steve and I also presented on this work at HITRUST’s Collaborate Conference. Here is our deck: https://docs.google.com/presentation/d/1EedOula8X81WxzVkQim1amiDZWMM8Lh0<br /><br />Need your own AI risk assessment and governance program build-out?<br /><br />Book a call at contact.stackaware.com.</p>]]></description><guid isPermaLink="false">Buzzsprout-16111059</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Fri, 15 Nov 2024 08:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/d0ba3b79677639fec43dca728446d5713658b346d9102c25a51ef9995d14ca8f/eyJlcGlzb2RlSWQiOiJiMWVjMjAxMi1iMzMxLTRlNmYtYmZjMS1iNzE4OTBhNmRiOTEiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvYjFlYzIwMTItYjMzMS00ZTZmLWJmYzEtYjcxODkwYTZkYjkxLzE2MTExMDU5LWdldHRpbmctcGF0aWVudHMtdG8tYmV0dGVyLWRvY3RvcnMtZmFzdGVyLXdpdGgtZ2VuZXJhdGl2ZS1haS5tcDMifQ==.mp3" length="27737399" type="audio/mpeg"/><itunes:summary>&lt;p&gt;The basics of healthcare can often be a nightmare:&lt;br /&gt;&lt;br /&gt;- Finding the right doctor&lt;br /&gt;- Setting 
up an appointment&lt;br /&gt;- Getting simple questions answered&lt;br /&gt;&lt;br /&gt;While these things might seem like an inconvenience, on the grand scale they cost a lot - of money, and unfortunately, lives.&lt;br /&gt;&lt;br /&gt;That’s why the Embold Virtual Assistant (EVA) is such a breakthrough.&lt;br /&gt;&lt;br /&gt;A generative AI-powered chatbot with access to up-to-date doctor listings and performance ratings, it’s literally a lifesaver.&lt;br /&gt;&lt;br /&gt;StackAware was honored to conduct a pre-deployment AI risk assessment and penetration test for EVA on behalf of our client Embold Health.&lt;br /&gt;&lt;br /&gt;Following up on our previous discussion, I sat down again with Steve Dufour and Mark Blackham to discuss the product’s development and rollout.&lt;br /&gt;&lt;br /&gt;We chatted about:&lt;br /&gt;&lt;br /&gt;- EVA’s performance metrics&lt;br /&gt;- Cybersecurity, compliance, and privacy issues&lt;br /&gt;- The future of AI governance and product development in healthcare&lt;br /&gt;&lt;br /&gt;Bonus: Steve and I also presented on this work at HITRUST’s Collaborate Conference. 
Here is our deck: https://docs.google.com/presentation/d/1EedOula8X81WxzVkQim1amiDZWMM8Lh0&lt;br /&gt;&lt;br /&gt;Need your own AI risk assessment and governance program build-out?&lt;br /&gt;&lt;br /&gt;Book a call at contact.stackaware.com.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:38:27</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Getting patients to better doctors, faster with generative AI</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Artificial Intelligence Risk Scoring System (AIRSS) - Part 2]]></title><description><![CDATA[<p>What does "security" even mean with AI?<br /><br />You'll need to define things like:<br /><br />BUSINESS REQUIREMENTS<br /><br />- What type of output is expected?<br />- What format should it be?<br />- What is the use case?<br /><br />SECURITY REQUIREMENTS<br /><br />- Who is allowed to see which outputs?<br />- Under which conditions?<br /><br />Having these things spelled out is a hard requirement before you can start talking about the risk of a given AI model.<br /><br />Continuing the build-out of the Artificial Intelligence Risk Scoring System (AIRSS), I tackle these issues - and more - in the latest issue of Deploy Securely.<br /><br />Check out the written post as well: https://blog.stackaware.com/p/artificial-intelligence-risk-scoring-system-p2<br /><br />Here is the pURL for the model I mentioned: pkg:generic/gpt-3.5-turbo@0613?ft=80Z1hDhg</p>]]></description><guid isPermaLink="false">Buzzsprout-13927539</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Mon, 13 Nov 2023 12:00:00 GMT</pubDate><enclosure 
url="https://api.riverside.com/hosting-analytics/media/788a2f88ae48aeada32e47c8e90ecb8f51754e92ae3d42fba53264fa7f1a6573/eyJlcGlzb2RlSWQiOiI0Yzc0NTNhNi1iOWQ2LTRmMTMtODE1Mi03YjlmZTU0YTY2YjYiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvNGM3NDUzYTYtYjlkNi00ZjEzLTgxNTItN2I5ZmU1NGE2NmI2LzEzOTI3NTM5LWFydGlmaWNpYWwtaW50ZWxsaWdlbmNlLXJpc2stc2NvcmluZy1zeXN0ZW0tYWlyc3MtcGFydC0yLm1wMyJ9.mp3" length="7816019" type="audio/mpeg"/><itunes:summary>&lt;p&gt;What does &quot;security&quot; even mean with AI?&lt;br /&gt;&lt;br /&gt;You&apos;ll need to define things like:&lt;br /&gt;&lt;br /&gt;BUSINESS REQUIREMENTS&lt;br /&gt;&lt;br /&gt;- What type of output is expected?&lt;br /&gt;- What format should it be?&lt;br /&gt;- What is the use case?&lt;br /&gt;&lt;br /&gt;SECURITY REQUIREMENTS&lt;br /&gt;&lt;br /&gt;- Who is allowed to see which outputs?&lt;br /&gt;- Under which conditions?&lt;br /&gt;&lt;br /&gt;Having these things spelled out is a hard requirement before you can start talking about the risk of a given AI model.&lt;br /&gt;&lt;br /&gt;Continuing the build-out of the Artificial Intelligence Risk Scoring System (AIRSS), I tackle these issues - and more - in the latest issue of Deploy Securely.&lt;br /&gt;&lt;br /&gt;Check out the written post as well: https://blog.stackaware.com/p/artificial-intelligence-risk-scoring-system-p2&lt;br /&gt;&lt;br /&gt;Here is the pURL for the model I mentioned: pkg:generic/gpt-3.5-turbo@0613?ft=80Z1hDhg&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:10:46</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Artificial Intelligence Risk Scoring System (AIRSS) - Part 
2</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Aware AI Brief - January 2026]]></title><description><![CDATA[<p>In this inaugural episode of the Aware AI Brief, Cameron talks with Walter Haydock, founder of StackAware, about the practical aspects of AI governance.<br /><br />This series aims to bridge the gap between theoretical AI governance discussions and actionable, real-world practices.<br /><br />They discuss the importance of understanding a client's business objectives before implementing AI governance, identifying key stakeholders, and managing risks around intellectual property and shadow AI.<br /><br />00:00 Introduction and Greetings<br />00:59 Introducing the Aware AI Brief Series<br />02:02 Starting AI Governance: First Steps<br />02:33 Understanding Client Needs and Objectives<br />06:02 Identifying Key Stakeholders<br />08:38 Intellectual Property and AI<br />15:51 Managing Shadow AI<br />22:23 Conclusion and Call for Questions<br /><br />Blog post about IP risk with AI: https://blog.stackaware.com/p/intellectual-property-risk-compliance-indemnification-copyright-artificial-intelligence-governance</p>]]></description><guid isPermaLink="false">Buzzsprout-18596026</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Fri, 30 Jan 2026 14:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/b63be6709201d9b81fc836cef637cd6874ee50024e4b35678f2259e7e175de03/eyJlcGlzb2RlSWQiOiJiZDZkNTFhOS03ZmEyLTQyMDYtYTgyMi1mMjIzNWU0N2E2MDMiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvYmQ2ZDUxYTktN2ZhMi00MjA2LWE4MjItZjIyMzVlNDdhNjAzLzE4NTk2MDI2LWF3YXJlLWFpLWJyaWVmLWphbnVhcnktMjAyNi5tcDMifQ==.mp3" length="17043081" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this inaugural episode of the Aware AI Brief, Cameron 
talks with Walter Haydock, founder of StackAware, about the practical aspects of AI governance.&lt;br /&gt;&lt;br /&gt;This series aims to bridge the gap between theoretical AI governance discussions and actionable, real-world practices.&lt;br /&gt;&lt;br /&gt;They discuss the importance of understanding a client&apos;s business objectives before implementing AI governance, identifying key stakeholders, and managing risks around intellectual property and shadow AI.&lt;br /&gt;&lt;br /&gt;00:00 Introduction and Greetings&lt;br /&gt;00:59 Introducing the Aware AI Brief Series&lt;br /&gt;02:02 Starting AI Governance: First Steps&lt;br /&gt;02:33 Understanding Client Needs and Objectives&lt;br /&gt;06:02 Identifying Key Stakeholders&lt;br /&gt;08:38 Intellectual Property and AI&lt;br /&gt;15:51 Managing Shadow AI&lt;br /&gt;22:23 Conclusion and Call for Questions&lt;br /&gt;&lt;br /&gt;Blog post about IP risk with AI: https://blog.stackaware.com/p/intellectual-property-risk-compliance-indemnification-copyright-artificial-intelligence-governance&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:23:36</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Aware AI Brief - January 2026</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[The state of AI assurance in 2024]]></title><description><![CDATA[<p>I was thrilled to have a leading voice on AI governance and assurance on the Deploy Securely podcast: Patrick Sullivan.<br /><br />Patrick is the Vice President of Strategy and Innovation at A-LIGN, a cybersecurity assurance firm. 
He’s an expert on the intersection of AI and compliance, regularly sharing expert insights about ISO 42001, the EU AI Act, and their interplay with existing regulations and best practices.<br /><br />We chatted about what he's seen from his customer base when it comes to AI-related:<br /><br />- Cybersecurity<br />- Compliance<br />- Privacy<br /><br />Check out the full episode!</p>]]></description><guid isPermaLink="false">Buzzsprout-15743399</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Thu, 12 Sep 2024 19:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/efc5687440afa7ab783c7087219b24133c0f3cec95b5636c639a7dec5b7cf6c8/eyJlcGlzb2RlSWQiOiIzZDhjNTBiYS1lMTcxLTQ2YmItYWY3ZS1kNGNiMTM3MmQ5ZmQiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvM2Q4YzUwYmEtZTE3MS00NmJiLWFmN2UtZDRjYjEzNzJkOWZkLzE1NzQzMzk5LXRoZS1zdGF0ZS1vZi1haS1hc3N1cmFuY2UtaW4tMjAyNC5tcDMifQ==.mp3" length="25806058" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I was thrilled to have a leading voice on AI governance and assurance on the Deploy Securely podcast: Patrick Sullivan.&lt;br /&gt;&lt;br /&gt;Patrick is the Vice President of Strategy and Innovation at A-LIGN, a cybersecurity assurance firm. 
He’s an expert on the intersection of AI and compliance, regularly sharing expert insights about ISO 42001, the EU AI Act, and their interplay with existing regulations and best practices.&lt;br /&gt;&lt;br /&gt;We chatted about what he&apos;s seen from his customer base when it comes to AI-related:&lt;br /&gt;&lt;br /&gt;- Cybersecurity&lt;br /&gt;- Compliance&lt;br /&gt;- Privacy&lt;br /&gt;&lt;br /&gt;Check out the full episode!&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:35:46</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>The state of AI assurance in 2024</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[AI governance for Health Information Exchanges]]></title><description><![CDATA[<p>I recently spoke with Bezawit Sumner, CISO of CRISP Shared Services about:</p><ul><li>How to address stakeholder concerns when rolling out AI in sensitive spaces like healthcare</li><li>Issues related to scoping and definitions being non-trivial for AI governance</li><li>The evolving AI regulatory landscape and what companies can do to adapt</li></ul>]]></description><guid isPermaLink="false">Buzzsprout-18133769</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Tue, 04 Nov 2025 20:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/b03f759c8c1613ddc6184290f82720c365f79a77ce148908af50b801de0da5d7/eyJlcGlzb2RlSWQiOiI1MWVkMDQwMS00Yzk4LTQ1YjAtYTI1Zi0xNzY5NmFjYzVkZmQiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvNTFlZDA0MDEtNGM5OC00NWIwLWEyNWYtMTc2OTZhY2M1ZGZkLzE4MTMzNzY5LWFpLWdvdmVybmFuY2UtZm9yLWhlYWx0aC1pbmZvcm1hdGlvbi1leGNoYW5nZXMubXAzIn0=.mp3" 
length="22268022" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I recently spoke with Bezawit Sumner, CISO of CRISP Shared Services about:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;How to address stakeholder concerns when rolling out AI in sensitive spaces like healthcare&lt;/li&gt;&lt;li&gt;Issues related to scoping and definitions being non-trivial for AI governance&lt;/li&gt;&lt;li&gt;The evolving AI regulatory landscape and what companies can do to adapt&lt;/li&gt;&lt;/ul&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:30:51</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>AI governance for Health Information Exchanges</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Sensitive Data Generation]]></title><description><![CDATA[<p>I’m worried about data leakage from LLMs, but probably not why you think.<br /><br />While unintended training is a real risk that can’t be ignored, something else is going to be a much more serious problem: sensitive data generation (SDG).<br /><br />A recent paper (https://arxiv.org/pdf/2310.07298v1.pdf) shows how LLMs can infer huge amounts of personal information from seemingly innocuous comments on Reddit.<br /><br />And this phenomenon will have huge impacts for:<br /><br />- Material nonpublic information<br />- Executive moves<br />- Trade secrets<br /><br />and the ability to keep them confidential.<br /><br />Check out the full post in Deploy Securely for a breakdown: https://blog.stackaware.com/p/sensitive-data-generation</p>]]></description><guid isPermaLink="false">Buzzsprout-13928890</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Mon, 27 Nov 2023 12:00:00 GMT</pubDate><enclosure 
url="https://api.riverside.com/hosting-analytics/media/66f164f4cb5e5d532cf6997c22bf06d2ab5cc6520bd032a5b18ac0d5c64f0d06/eyJlcGlzb2RlSWQiOiI0OTM3NzEzOC0xYmRmLTQ0YmQtODg3MS1hMTQyODM1YmJlOGUiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvNDkzNzcxMzgtMWJkZi00NGJkLTg4NzEtYTE0MjgzNWJiZThlLzEzOTI4ODkwLXNlbnNpdGl2ZS1kYXRhLWdlbmVyYXRpb24ubXAzIn0=.mp3" length="4863320" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I’m worried about data leakage from LLMs, but probably not why you think.&lt;br /&gt;&lt;br /&gt;While unintended training is a real risk that can’t be ignored, something else is going to be a much more serious problem: sensitive data generation (SDG).&lt;br /&gt;&lt;br /&gt;A recent paper (https://arxiv.org/pdf/2310.07298v1.pdf) shows how LLMs can infer huge amounts of personal information from seemingly innocuous comments on Reddit.&lt;br /&gt;&lt;br /&gt;And this phenomenon will have huge impacts for:&lt;br /&gt;&lt;br /&gt;- Material nonpublic information&lt;br /&gt;- Executive moves&lt;br /&gt;- Trade secrets&lt;br /&gt;&lt;br /&gt;and the ability to keep them confidential.&lt;br /&gt;&lt;br /&gt;Check out the full post in Deploy Securely for a breakdown: https://blog.stackaware.com/p/sensitive-data-generation&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:06:40</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Sensitive Data Generation</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Insuring AI in an agentic future]]></title><description><![CDATA[<p>I spoke with Emil Bender Lassen, Standard Lead at the Artificial Intelligence Underwriting Company.<br /><br />We talked about:<br /><br 
/>- What AIUC-1 requires from AI agents</p><p>- How the standard drives insurance rates</p><p>- Technical tips on preventing technical detail release and avoiding IP risk</p><p>- The future of AIUC-1 and how it complements ISO 42001, NIST AI RMF, and other frameworks</p>]]></description><guid isPermaLink="false">Buzzsprout-18542654</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Tue, 20 Jan 2026 20:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/edb2d66bcaa61e8a2835b54ffa7a95af9c1825b0307dd96be3dedc16d6d3624d/eyJlcGlzb2RlSWQiOiJhMzM3NTdiMy00NmYxLTQ2NzUtODE4OC03ODc5NDk4ODE2NWYiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvYTMzNzU3YjMtNDZmMS00Njc1LTgxODgtNzg3OTQ5ODgxNjVmLzE4NTQyNjU0LWluc3VyaW5nLWFpLWluLWFuLWFnZW50aWMtZnV0dXJlLm1wMyJ9.mp3" length="29451078" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I spoke with Emil Bender Lassen, Standard Lead at the Artificial Intelligence Underwriting Company.&lt;br /&gt;&lt;br /&gt;We talked about:&lt;br /&gt;&lt;br /&gt;- What AIUC-1 requires from AI agents&lt;/p&gt;&lt;p&gt;- How the standard drives insurance rates&lt;/p&gt;&lt;p&gt;- Technical tips on preventing technical detail release and avoiding IP risk&lt;/p&gt;&lt;p&gt;- The future of AIUC-1 and how it complements ISO 42001, NIST AI RMF, and other frameworks&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:40:50</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Insuring AI in an agentic future</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Real-world AI governance, from an auditor perspective]]></title><description><![CDATA[<p>I had the chance 
to speak with Patrick Sullivan. Patrick is the Vice President of Strategy and Innovation at A-LIGN. He brings over 25 years of expertise in cybersecurity, compliance, and risk management to the healthcare and life sciences sectors.<br /><br />We talked about:<br /><br />- How companies are complying with a web of AI regulation<br />- Best practices for AI agent security and accountability<br />- Genomic data security and AI</p>]]></description><guid isPermaLink="false">Buzzsprout-18509130</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Thu, 15 Jan 2026 11:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/0ee5b22e29930d0d36303e3e5b43f426ceb22f439123de9f9cbeb2684e2d23b4/eyJlcGlzb2RlSWQiOiI0ODQxMjU3Ni05NzFiLTRiMmYtOTk5Ny00YjkyOTliYjM4M2MiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvNDg0MTI1NzYtOTcxYi00YjJmLTk5OTctNGI5Mjk5YmIzODNjLzE4NTA5MTMwLXJlYWwtd29ybGQtYWktZ292ZXJuYW5jZS1mcm9tLWFuLWF1ZGl0b3ItcGVyc3BlY3RpdmUubXAzIn0=.mp3" length="22140701" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I had the chance to speak with Patrick Sullivan. Patrick is the Vice President of Strategy and Innovation at A-LIGN. 
He brings over 25 years of expertise in cybersecurity, compliance, and risk management to the healthcare and life sciences sectors.&lt;br /&gt;&lt;br /&gt;We talked about:&lt;br /&gt;&lt;br /&gt;- How companies are complying with a web of AI regulation&lt;br /&gt;- Best practices for AI agent security and accountability&lt;br /&gt;- Genomic data security and AI&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:30:40</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Real-world AI governance, from an auditor perspective</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Tackling AI governance with federal data]]></title><description><![CDATA[<p>On this episode of the Deploy Securely podcast, I spoke with Kenny Scott, Founder and CEO of Paramify.<br /><br />Paramify gets companies ready for the U.S. government's Federal Risk and Authorization Management Program (FedRAMP). 
And in this conversation, we talked about:<br /><br />- Paramify "walking the walk" by getting FedRAMP High authorized<br />- How AI is impacting FedRAMP authorizations<br />- The future of AI regulation</p>]]></description><guid isPermaLink="false">Buzzsprout-15823562</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Thu, 26 Sep 2024 20:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/4e8cf75ba649d4e3225eb2d792ac1ef871b035eec36d42bc4bdc7ebb88e6babc/eyJlcGlzb2RlSWQiOiI4MWFiYzhlZi01MTExLTQyNzItYTQyZS1kYjEyYTIwMTViZGUiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvODFhYmM4ZWYtNTExMS00MjcyLWE0MmUtZGIxMmEyMDE1YmRlLzE1ODIzNTYyLXRhY2tsaW5nLWFpLWdvdmVybmFuY2Utd2l0aC1mZWRlcmFsLWRhdGEubXAzIn0=.mp3" length="26152143" type="audio/mpeg"/><itunes:summary>&lt;p&gt;On this episode of the Deploy Securely podcast, I spoke with Kenny Scott, Founder and CEO of Paramify.&lt;br /&gt;&lt;br /&gt;Paramify gets companies ready for the U.S. government&apos;s Federal Risk and Authorization Management Program (FedRAMP). 
And in this conversation, we talked about:&lt;br /&gt;&lt;br /&gt;- Paramify &quot;walking the walk&quot; by getting FedRAMP High authorized&lt;br /&gt;- How AI is impacting FedRAMP authorizations&lt;br /&gt;- The future of AI regulation&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:36:15</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Tackling AI governance with federal data</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[3 AI governance frameworks]]></title><description><![CDATA[<p>Drive sales, improve customer trust, and avoid regulatory penalties with the NIST AI RMF, EU AI Act, and ISO 42001.<br /><br />Check out the full post on the Deploy Securely blog: https://blog.stackaware.com/p/eu-ai-act-nist-rmf-iso-42001-picking-frameworks</p>]]></description><guid isPermaLink="false">Buzzsprout-15405849</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Fri, 12 Jul 2024 21:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/72f61c5bb49459ae43d35d274aefd758c9817fde4c0491c73677117e70d795d6/eyJlcGlzb2RlSWQiOiI0MTdkYzFhZi04Mzk3LTQxMmYtOGY3OS1lNGU4YTQ5NmM4MWIiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvNDE3ZGMxYWYtODM5Ny00MTJmLThmNzktZTRlOGE0OTZjODFiLzE1NDA1ODQ5LTMtYWktZ292ZXJuYW5jZS1mcmFtZXdvcmtzLm1wMyJ9.mp3" length="3449408" type="audio/mpeg"/><itunes:summary>&lt;p&gt;Drive sales, improve customer trust, and avoid regulatory penalties with the NIST AI RMF, EU AI Act, and ISO 42001.&lt;br /&gt;&lt;br /&gt;Check out the full post on the Deploy Securely blog: 
https://blog.stackaware.com/p/eu-ai-act-nist-rmf-iso-42001-picking-frameworks&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:04:43</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>3 AI governance frameworks</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[The top 3 AI security concerns in healthcare]]></title><description><![CDATA[-]]></description><guid isPermaLink="false">Buzzsprout-15349506</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Tue, 02 Jul 2024 17:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/86ff0d7690f04584665f98060213f0eeee24d38fdce3bc6d1058bd13957ab7e1/eyJlcGlzb2RlSWQiOiJmMmFkNDExNS01YjE2LTQ5YjYtOTdjYi02Yjc0ZWI5OWE4M2QiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvZjJhZDQxMTUtNWIxNi00OWI2LTk3Y2ItNmI3NGViOTlhODNkLzE1MzQ5NTA2LXRoZS10b3AtMy1haS1zZWN1cml0eS1jb25jZXJucy1pbi1oZWFsdGhjYXJlLm1wMyJ9.mp3" length="2718713" type="audio/mpeg"/><itunes:summary>-</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:03:42</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>The top 3 AI security concerns in healthcare</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Big Beautiful AI Moratorium fails, ISO 42005, and automating yourself out of a job]]></title><description><![CDATA[<p>Walter kicks off a recurring series with Steve Dufour, talking about:<br /><br />- Trump's "Big Beautiful Bill" moving through the Senate and how a key AI-related 
provision was just removed.<br />- Some key court decisions related to generative AI training on copyrighted material<br />- ISO/IEC 42005:2025, which gives guidance on AI impact assessments<br />- Ways to (avoid) automating yourself out of a job</p>]]></description><guid isPermaLink="false">Buzzsprout-17435918</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Wed, 02 Jul 2025 10:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/93fc0e4049456b69a545635f6bbcff08c2952c96e4eae88be2f78241fbc1613b/eyJlcGlzb2RlSWQiOiI5ZjZkYWU2MC1lMDhiLTRiZDktOGEzZi1lNjViMmFiZWVmZDYiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvOWY2ZGFlNjAtZTA4Yi00YmQ5LThhM2YtZTY1YjJhYmVlZmQ2LzE3NDM1OTE4LWJpZy1iZWF1dGlmdWwtYWktbW9yYXRvcml1bS1mYWlscy1pc28tNDIwMDUtYW5kLWF1dG9tYXRpbmcteW91cnNlbGYtb3V0LW9mLWEtam9iLm1wMyJ9.mp3" length="24493763" type="audio/mpeg"/><itunes:summary>&lt;p&gt;Walter kicks off a recurring series with Steve Dufour, talking about:&lt;br /&gt;&lt;br /&gt;- Trump&apos;s &quot;Big Beautiful Bill&quot; moving through the Senate and how a key AI-related provision was just removed.&lt;br /&gt;- Some key court decisions related to generative AI training on copyrighted material&lt;br /&gt;- ISO/IEC 42005:2025, which gives guidance on AI impact assessments&lt;br /&gt;- Ways to (avoid) automating yourself out of a job&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:50:55</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Big Beautiful AI Moratorium fails, ISO 42005, and automating yourself out of a job</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[AI hardware 
killswitches, SB-205 troubles, ChatGPT connectors, and more]]></title><description><![CDATA[<p>NVIDIA blog on killswitches: <br />https://blogs.nvidia.com/blog/no-backdoors-no-kill-switches-no-spyware/<br /><br />Colorado Legislative AI Task Force Report: https://leg.colorado.gov/sites/default/files/images/report_and_recommendations-accessible_1_0.pdf<br /><br />SB-205 opposition: https://gazette.com/government/colorado-mayors-oppose-ai-regulation-law/article_0abe652f-a60a-583e-a138-e73fa45e9a03.html<br /><br />AI Stethoscope: https://www.imperial.ac.uk/news/249316/ai-stethoscope-rolled-100-gp-clinics/<br /><br />ChatGPT Connectors: https://help.openai.com/en/articles/11487775-connectors-in-chatgpt</p>]]></description><guid isPermaLink="false">Buzzsprout-17775939</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Tue, 02 Sep 2025 21:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/625dc0664651068084a57072e34c6e9aee1ca81bcdb0019dccb3517527dcc420/eyJlcGlzb2RlSWQiOiI2N2U1N2U4YS1lMjUyLTQ0MGUtOWMyNS04Y2I1MDI0OWNhMDMiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvNjdlNTdlOGEtZTI1Mi00NDBlLTljMjUtOGNiNTAyNDljYTAzLzE3Nzc1OTM5LWFpLWhhcmR3YXJlLWtpbGxzd2l0Y2hlcy1zYi0yMDUtdHJvdWJsZXMtY2hhdGdwdC1jb25uZWN0b3JzLWFuZC1tb3JlLm1wMyJ9.mp3" length="31248030" type="audio/mpeg"/><itunes:summary>&lt;p&gt;NVIDIA blog on killswitches: &lt;br /&gt;https://blogs.nvidia.com/blog/no-backdoors-no-kill-switches-no-spyware/&lt;br /&gt;&lt;br /&gt;Colorado Legislative AI Task Force Report: https://leg.colorado.gov/sites/default/files/images/report_and_recommendations-accessible_1_0.pdf&lt;br /&gt;&lt;br /&gt;SB-205 opposition: https://gazette.com/government/colorado-mayors-oppose-ai-regulation-law/article_0abe652f-a60a-583e-a138-e73fa45e9a03.html&lt;br /&gt;&lt;br /&gt;AI 
Stethoscope: https://www.imperial.ac.uk/news/249316/ai-stethoscope-rolled-100-gp-clinics/&lt;br /&gt;&lt;br /&gt;ChatGPT Connectors: https://help.openai.com/en/articles/11487775-connectors-in-chatgpt&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:43:19</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>AI hardware killswitches, SB-205 troubles, ChatGPT connectors, and more</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Securely harnessing AI in financial services]]></title><description><![CDATA[<p>I spoke with Matt Adams, Head of Security Enablement at Citi, about:<br /><br />- The EU AI Act and other laws and regulations impacting AI governance and security<br />- What financial services organizations can do to secure their AI deployments<br />- Some of the biggest myths and misconceptions when it comes to AI governance</p>]]></description><guid isPermaLink="false">Buzzsprout-15702520</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Thu, 05 Sep 2024 18:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/34afd34bb017d0685e985a43504aa6a6ce8136ce23e4c37678397fa026aa629a/eyJlcGlzb2RlSWQiOiJhOWMzY2VhYy02Nzk5LTQ3N2YtYjJlNi1iNmUyODlhNzI3MDkiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvYTljM2NlYWMtNjc5OS00NzdmLWIyZTYtYjZlMjg5YTcyNzA5LzE1NzAyNTIwLXNlY3VyZWx5LWhhcm5lc3NpbmctYWktaW4tZmluYW5jaWFsLXNlcnZpY2VzLm1wMyJ9.mp3" length="19349446" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I spoke with Matt Adams, Head of Security Enablement at Citi, about:&lt;br /&gt;&lt;br /&gt;- The EU AI Act and other laws and regulations impacting AI governance and 
security&lt;br /&gt;- What financial services organizations can do to secure their AI deployments&lt;br /&gt;- Some of the biggest myths and misconceptions when it comes to AI governance&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:40:12</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Securely harnessing AI in financial services</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Conveyor deploys AI securely (for security)]]></title><description><![CDATA[<p>While using AI securely is a key concern (especially for companies like StackAware), on the flipside, AI has been supercharging security and compliance teams.<br /><br />Especially when tackling mundane tasks like security questionnaires, AI can accelerate sales and build trust.<br /><br />I chatted with Chas Ballew, CEO of Conveyor, about:<br /><br />- How AI can help with customer security reviews<br />- What sort of controls Conveyor has in place<br />- What Chas thinks the future will look like<br />- The regulatory landscape for AI<br /><br />Here are some resources Chas mentions in the show:<br /><br />Deepmind Solving International Mathematical Olympiad problems<br />https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/<br /><br />Prof. Geoffrey Hinton - "Will digital intelligence replace biological intelligence?" 
<br />https://www.youtube.com/watch?v=N1TEjTeQeg0<br /><br />Jim Keller on Lex Fridman<br />https://www.youtube.com/watch?v=G4hL5Om4IJ4</p>]]></description><guid isPermaLink="false">Buzzsprout-15477665</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Thu, 25 Jul 2024 23:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/2ff9f690ac847af18f4fce9fd6f2fbb74a4d25c9d999ff1401339bcfb45bfc91/eyJlcGlzb2RlSWQiOiI5OWViNzVlMC0zYmM1LTRiMjMtYjQ5YS02NTlhMmI5NDc1YmYiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvOTllYjc1ZTAtM2JjNS00YjIzLWI0OWEtNjU5YTJiOTQ3NWJmLzE1NDc3NjY1LWhvdy1jb252ZXlvci1kZXBsb3lzLWFpLXNlY3VyZWx5LWZvci1zZWN1cml0eS5tcDMifQ==.mp3" length="27218579" type="audio/mpeg"/><itunes:summary>&lt;p&gt;While using AI securely is a key concern (especially for companies like StackAware), on the flipside, AI has been supercharging security and compliance teams.&lt;br /&gt;&lt;br /&gt;Especially when tackling mundane tasks like security questionnaires, AI can accelerate sales and build trust.&lt;br /&gt;&lt;br /&gt;I chatted with Chas Ballew, CEO of Conveyor, about:&lt;br /&gt;&lt;br /&gt;- How AI can help with customer security reviews&lt;br /&gt;- What sort of controls Conveyor has in place&lt;br /&gt;- What Chas thinks the future will look like&lt;br /&gt;- The regulatory landscape for AI&lt;br /&gt;&lt;br /&gt;Here are some resources Chas mentions in the show:&lt;br /&gt;&lt;br /&gt;Deepmind Solving International Mathematical Olympiad problems&lt;br /&gt;https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/&lt;br /&gt;&lt;br /&gt;Prof. 
Geoffrey Hinton - &quot;Will digital intelligence replace biological intelligence?&quot; &lt;br /&gt;https://www.youtube.com/watch?v=N1TEjTeQeg0&lt;br /&gt;&lt;br /&gt;Jim Keller on Lex Fridman&lt;br /&gt;https://www.youtube.com/watch?v=G4hL5Om4IJ4&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:37:43</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>How Conveyor deploys AI securely (for security)</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[4th party AI processing and retention risk]]></title><description><![CDATA[<p>So you have your AI policy in place and are carefully controlling access to new apps as they launch, but then...<br /><br />...you realize your already-approved tools are themselves starting to leverage 4th party AI vendors.<br /><br />Welcome to the modern digital economy.<br /><br />Things are complex and getting even more so.<br /><br />That's why you need to incorporate 4th party risk into your security policies, procedures, and overall AI governance program.<br /><br />Check out the full post with the Asana and Databricks examples I mentioned: https://blog.stackaware.com/p/ai-supply-chain-processing-retention-risk</p>]]></description><guid isPermaLink="false">Buzzsprout-13929030</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Mon, 04 Dec 2023 12:00:00 GMT</pubDate><enclosure 
url="https://api.riverside.com/hosting-analytics/media/45929f3b0e445abc89f9bec41f999b23ef517573ae4aba8c47f44598a18d280f/eyJlcGlzb2RlSWQiOiJmY2Y5ZWM2NC1kZDQyLTRkYzgtYmY1NC1lYzA1YjJhMzM0OGQiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvZmNmOWVjNjQtZGQ0Mi00ZGM4LWJmNTQtZWMwNWIyYTMzNDhkLzEzOTI5MDMwLTR0aC1wYXJ0eS1haS1wcm9jZXNzaW5nLWFuZC1yZXRlbnRpb24tcmlzay5tcDMifQ==.mp3" length="4655585" type="audio/mpeg"/><itunes:summary>&lt;p&gt;So you have your AI policy in place and are carefully controlling access to new apps as they launch, but then...&lt;br /&gt;&lt;br /&gt;...you realize your already-approved tools are themselves starting to leverage 4th party AI vendors.&lt;br /&gt;&lt;br /&gt;Welcome to the modern digital economy.&lt;br /&gt;&lt;br /&gt;Things are complex and getting even more so.&lt;br /&gt;&lt;br /&gt;That&apos;s why you need to incorporate 4th party risk into your security policies, procedures, and overall AI governance program.&lt;br /&gt;&lt;br /&gt;Check out the full post with the Asana and Databricks examples I mentioned: https://blog.stackaware.com/p/ai-supply-chain-processing-retention-risk&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:06:23</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>4th party AI processing and retention risk</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Artificial Intelligence Risk Scoring System (AIRSS) - Part 1]]></title><description><![CDATA[<p>AI cyber risk management needs a new paradigm.<br /><br />Logging CVEs and using CVSS just does not make sense for AI models, and won't cut it going forward.<br /><br />That's why I launched the 
Artificial Intelligence Risk Scoring System (AIRSS).<br /><br />A quantitative approach to measuring cybersecurity risk from artificial intelligence systems, I am building it in public to help refine and improve the approach.<br /><br />Check out the first post in a series where I lay out my methodology: https://blog.stackaware.com/p/artificial-intelligence-risk-scoring-system-p1</p>]]></description><guid isPermaLink="false">Buzzsprout-13927233</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Tue, 07 Nov 2023 18:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/92027473d6f206367f319c68ed979106ad7804951f1be3e2dee7f4533328388b/eyJlcGlzb2RlSWQiOiJmYjQyZTc5Yi0yMzMxLTQ3M2QtOWE3OC03N2JmYjUyZGI5ZTEiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvZmI0MmU3OWItMjMzMS00NzNkLTlhNzgtNzdiZmI1MmRiOWUxLzEzOTI3MjMzLWFydGlmaWNpYWwtaW50ZWxsaWdlbmNlLXJpc2stc2NvcmluZy1zeXN0ZW0tYWlyc3MtcGFydC0xLm1wMyJ9.mp3" length="10373241" type="audio/mpeg"/><itunes:summary>&lt;p&gt;AI cyber risk management needs a new paradigm.&lt;br /&gt;&lt;br /&gt;Logging CVEs and using CVSS just does not make sense for AI models, and won&apos;t cut it going forward.&lt;br /&gt;&lt;br /&gt;That&apos;s why I launched the Artificial Intelligence Risk Scoring System (AIRSS).&lt;br /&gt;&lt;br /&gt;A quantitative approach to measuring cybersecurity risk from artificial intelligence systems, I am building it in public to help refine and improve the approach.&lt;br /&gt;&lt;br /&gt;Check out the first post in a series where I lay out my methodology: https://blog.stackaware.com/p/artificial-intelligence-risk-scoring-system-p1&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:14:19</itunes:duration><itunes:image 
href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Artificial Intelligence Risk Scoring System (AIRSS) - Part 1</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How should we track AI vulnerabilities?]]></title><description><![CDATA[<p>The Cybersecurity and Infrastructure Security Agency (CISA) released a post earlier this year saying the AI engineering community should use something like the existing CVE system for tracking vulnerabilities in AI models.<br /><br />Unfortunately, this is a pretty bad recommendation.<br /><br />That's because:<br /><br />- CVEs already create a lot of noise<br />- AI systems are non-deterministic<br />- So things would just get worse<br /><br />In this episode, I dive into these issues and discuss the way ahead.<br /><br />Check out the full blog post: https://blog.stackaware.com/p/how-should-we-identify-ai-vulnerabilities</p>]]></description><guid isPermaLink="false">Buzzsprout-13875525</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Mon, 30 Oct 2023 20:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/90570edc5caa69601d5cf6ff43f975acef2c0396c5095a8ee0413ad117a1821a/eyJlcGlzb2RlSWQiOiJiOTc2YmQ0Mi1iMzI2LTQxZWItODZiZi1hNjQwM2YwNmZhMDYiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvYjk3NmJkNDItYjMyNi00MWViLTg2YmYtYTY0MDNmMDZmYTA2LzEzODc1NTI1LWhvdy1zaG91bGQtd2UtdHJhY2stYWktdnVsbmVyYWJpbGl0aWVzLm1wMyJ9.mp3" length="5332925" type="audio/mpeg"/><itunes:summary>&lt;p&gt;The Cybersecurity and Infrastructure Security Agency (CISA) released a post earlier this year saying the AI engineering community should use something like the existing CVE system for tracking vulnerabilities in AI 
models.&lt;br /&gt;&lt;br /&gt;Unfortunately, this is a pretty bad recommendation.&lt;br /&gt;&lt;br /&gt;That&apos;s because:&lt;br /&gt;&lt;br /&gt;- CVEs already create a lot of noise&lt;br /&gt;- AI systems are non-deterministic&lt;br /&gt;- So things would just get worse&lt;br /&gt;&lt;br /&gt;In this episode, I dive into these issues and discuss the way ahead.&lt;br /&gt;&lt;br /&gt;Check out the full blog post: https://blog.stackaware.com/p/how-should-we-identify-ai-vulnerabilities&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:07:19</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>How should we track AI vulnerabilities?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Generative AI and Unintended Training]]></title><description><![CDATA[<p>🔐 Think self-hosting your AI models is more secure?<br /><br />It might be...or not!<br /><br />In this video, we dig into the topic of AI model security and introduce the concept of "unintended training."<br /><br />▶️ Key Highlights:<br /><br />- The myth that self-hosting AI models is necessarily better for security<br />- Decision factors when choosing between SaaS vs. IaaS<br />- Defining "Unintentional Training" and its implications<br /><br />Read more about unintended training and AI Security: <br />https://blog.stackaware.com/p/unintended-training<br /><br />And for a deep dive on the security benefits of SaaS, check out this post:<br />https://blog.stackaware.com/p/declaring-a-truce-on-saas-security<br /><br />Hit that subscribe button for more cutting-edge AI security insights! 
✅</p>]]></description><guid isPermaLink="false">Buzzsprout-13828165</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Mon, 23 Oct 2023 12:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/45e76cabfc4e0ad483dd9e982fa353285be02017fa63f29271e6b93be20eaefa/eyJlcGlzb2RlSWQiOiIyMTJjNDgyYy05YTAzLTQxNTUtYjNmYy0wYjlhMWFmMDc5ZjMiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvMjEyYzQ4MmMtOWEwMy00MTU1LWIzZmMtMGI5YTFhZjA3OWYzLzEzODI4MTY1LWdlbmVyYXRpdmUtYWktYW5kLXVuaW50ZW5kZWQtdHJhaW5pbmcubXAzIn0=.mp3" length="5566777" type="audio/mpeg"/><itunes:summary>&lt;p&gt;🔐 Think self-hosting your AI models is more secure?&lt;br /&gt;&lt;br /&gt;It might be...or not!&lt;br /&gt;&lt;br /&gt;In this video, we dig into the topic of AI model security and introduce the concept of &quot;unintended training.&quot;&lt;br /&gt;&lt;br /&gt;▶️ Key Highlights:&lt;br /&gt;&lt;br /&gt;- The myth that self-hosting AI models is necessarily better for security&lt;br /&gt;- Decision factors when choosing between SaaS vs. IaaS&lt;br /&gt;- Defining &quot;Unintentional Training&quot; and its implications&lt;br /&gt;&lt;br /&gt;Read more about unintended training and AI Security: &lt;br /&gt;https://blog.stackaware.com/p/unintended-training&lt;br /&gt;&lt;br /&gt;And for a deep dive on the security benefits of SaaS, check out this post:&lt;br /&gt;https://blog.stackaware.com/p/declaring-a-truce-on-saas-security&lt;br /&gt;&lt;br /&gt;Hit that subscribe button for more cutting-edge AI security insights! 
✅&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:07:39</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Generative AI and Unintended Training</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Who should make cyber risk management decisions?]]></title><description><![CDATA[<p>It's a tougher challenge than many security folks talk about.<br /><br />Who should have the final say about whether to accept, mitigate, transfer, or avoid risk?<br /><br />- Cybersecurity?<br />- Compliance?<br />- Legal?<br /><br />The answer:<br /><br />None of them.<br /><br />Check out this episode of Deploy Securely to learn who should.<br /><br />Or read the original blog post here: https://blog.stackaware.com/p/who-should-make-cyber-risk-management</p>]]></description><guid isPermaLink="false">Buzzsprout-13831724</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Mon, 23 Oct 2023 10:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/44574d9ecb4eba0ce187d454258da9b83ef21a0d9b5c11c85c5a73f3cf8a96a1/eyJlcGlzb2RlSWQiOiIwM2I3NDg2ZS0xODU5LTQ2Y2UtYTQyMy0zNTdlODYxM2QzYjkiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvMDNiNzQ4NmUtMTg1OS00NmNlLWE0MjMtMzU3ZTg2MTNkM2I5LzEzODMxNzI0LXdoby1zaG91bGQtbWFrZS1jeWJlci1yaXNrLW1hbmFnZW1lbnQtZGVjaXNpb25zLm1wMyJ9.mp3" length="10448450" type="audio/mpeg"/><itunes:summary>&lt;p&gt;It&apos;s a tougher challenge than many security folks talk about.&lt;br /&gt;&lt;br /&gt;Who should have the final say about whether to accept, mitigate, transfer, or avoid risk?&lt;br /&gt;&lt;br /&gt;- Cybersecurity?&lt;br /&gt;- Compliance?&lt;br /&gt;- 
Legal?&lt;br /&gt;&lt;br /&gt;The answer:&lt;br /&gt;&lt;br /&gt;None of them.&lt;br /&gt;&lt;br /&gt;Check out this episode of Deploy Securely to learn who should.&lt;br /&gt;&lt;br /&gt;Or read the original blog post here: https://blog.stackaware.com/p/who-should-make-cyber-risk-management&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:14:26</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Who should make cyber risk management decisions?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Who should get ISO 42001 certified?]]></title><description><![CDATA[<p>1) Early-stage AI startups often grapple with customer security reviews, making certifications like SOC 2 or ISO 27001 essential. However, ISO 42001 might be more suitable for AI-focused companies due to its comprehensive coverage.<br /><br />2) Larger corporations using AI to manage sensitive data face scrutiny and criticism. These companies can validate their AI practices through ISO 42001, offering a certified risk management system that reassures stakeholders<br /><br />3) In heavily-regulated sectors like healthcare and finance, adopting and certifying AI technologies is complex. ISO 42001 helps these enterprises manage risks and maintain credibility by adhering to industry standards.<br /><br />Check out the full post on the Deploy Securely blog: https://blog.stackaware.com/p/iso-42001-ai-management-system-company-types<br /><br />Want more AI security resources? 
Check out https://products.stackaware.com/</p>]]></description><guid isPermaLink="false">Buzzsprout-15345360</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Tue, 02 Jul 2024 00:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/5b813e0d001fa8badc950c382ca4bc998c221211f60d4f6652d8ca74b51a4e63/eyJlcGlzb2RlSWQiOiI2OWM4NzYxNi1mNWQ2LTQ0ZDAtYWJlMC00NGYxZmY1ZWUxNjYiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvNjljODc2MTYtZjVkNi00NGQwLWFiZTAtNDRmMWZmNWVlMTY2LzE1MzQ1MzYwLXdoby1zaG91bGQtZ2V0LWlzby00MjAwMS1jZXJ0aWZpZWQubXAzIn0=.mp3" length="2718695" type="audio/mpeg"/><itunes:summary>&lt;p&gt;1) Early-stage AI startups often grapple with customer security reviews, making certifications like SOC 2 or ISO 27001 essential. However, ISO 42001 might be more suitable for AI-focused companies due to its comprehensive coverage.&lt;br /&gt;&lt;br /&gt;2) Larger corporations using AI to manage sensitive data face scrutiny and criticism. These companies can validate their AI practices through ISO 42001, offering a certified risk management system that reassures stakeholders&lt;br /&gt;&lt;br /&gt;3) In heavily-regulated sectors like healthcare and finance, adopting and certifying AI technologies is complex. ISO 42001 helps these enterprises manage risks and maintain credibility by adhering to industry standards.&lt;br /&gt;&lt;br /&gt;Check out the full post on the Deploy Securely blog: https://blog.stackaware.com/p/iso-42001-ai-management-system-company-types&lt;br /&gt;&lt;br /&gt;Want more AI security resources? 
Check out https://products.stackaware.com/&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:03:42</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Who should get ISO 42001 certified?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Compliance and AI - 3 quick observations]]></title><description><![CDATA[<p>Here are the top 3 things I'm seeing:<br /><br />1️⃣ Auditors don’t (yet) have strong opinions on how to deploy AI securely<br /><br />2️⃣ Enforcement is here, just not evenly distributed.<br /><br />3️⃣ Integrating AI-specific requirements with existing security, privacy, and compliance ones isn’t going to be easy<br /><br />Want to see a full post? Check out the Deploy Securely blog: https://blog.stackaware.com/p/ai-governance-compliance-auditors-enforcement</p>]]></description><guid isPermaLink="false">Buzzsprout-14905276</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Wed, 17 Apr 2024 10:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/934321a153a7024bca91d26d5f7a4628648b67a8e176d82c39b86aeb037df5e1/eyJlcGlzb2RlSWQiOiI2NTc4ODU5NC05NzFlLTQ5MjAtYjM3MC01MTFhNWQ3ZTg0NTgiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvNjU3ODg1OTQtOTcxZS00OTIwLWIzNzAtNTExYTVkN2U4NDU4LzE0OTA1Mjc2LWNvbXBsaWFuY2UtYW5kLWFpLTMtcXVpY2stb2JzZXJ2YXRpb25zLm1wMyJ9.mp3" length="3511570" type="audio/mpeg"/><itunes:summary>&lt;p&gt;Here are the top 3 things I&apos;m seeing:&lt;br /&gt;&lt;br /&gt;1️⃣ Auditors don’t (yet) have strong opinions on how to deploy AI securely&lt;br /&gt;&lt;br /&gt;2️⃣ Enforcement is here, just not evenly distributed.&lt;br /&gt;&lt;br 
/&gt;3️⃣ Integrating AI-specific requirements with existing security, privacy, and compliance ones isn’t going to be easy&lt;br /&gt;&lt;br /&gt;Want to see a full post? Check out the Deploy Securely blog: https://blog.stackaware.com/p/ai-governance-compliance-auditors-enforcement&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:04:48</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Compliance and AI - 3 quick observations</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Waymo outage, Manus, AI-generated police reports]]></title><description><![CDATA[<p>To kick off the new year, Steve Dufour and I chatted about:</p><ul><li>Steve's expertise in copying and pasting hundreds of lines from the Health Insurance Portability and Accountability Act (HIPAA), because ChatGPT couldn't parse it.</li><li>The late December 2025 <a href="https://www.usatoday.com/story/tech/news/2025/12/24/waymo-power-outage-san-francisco-cars-update/87905535007/" rel="noopener noreferrer nofollow">​Waymo outage​</a>.</li><li>Cops Forced to <a href="https://futurism.com/artificial-intelligence/ai-police-report-frog" rel="noopener noreferrer nofollow">​Explain​</a> Why AI Generated Police Report Claimed Officer Transformed Into Frog.</li><li>Manus AI's <a href="https://www.wsj.com/tech/ai/meta-buys-ai-startup-manus-adding-millions-of-paying-users-f1dc7ef8" rel="noopener noreferrer nofollow">​acquisition​</a>.</li><li>AI hampering <a href="https://fortune.com/article/does-ai-increase-workplace-productivity-experiment-software-developers-task-took-longer/" rel="noopener noreferrer nofollow">​productivity​</a> of software developers, despite expectations it would boost efficiency.</li><li><a 
href="https://www.reddit.com/r/ArtificialInteligence/comments/1py55r7/ai_generated_content_is_changing_our_language_and" rel="noopener noreferrer nofollow">​Impacts​</a> of AI on language, media, and culture.</li><li>Our 2026 predictions for AI</li></ul>]]></description><guid isPermaLink="false">Buzzsprout-18474572</guid><dc:creator><![CDATA[StackAware]]></dc:creator><pubDate>Thu, 08 Jan 2026 13:00:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/3060b88576692555310916b3428578d15b1fc4e6059fc5f72b99d6b829ac74ba/eyJlcGlzb2RlSWQiOiJlYjgwMjQ3My1hNjBjLTRkZDktYWZkNy05ZTVmYTgwYzRkZTgiLCJwb2RjYXN0SWQiOiI4MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QiLCJhY2NvdW50SWQiOiI2NjYyMWQwZmM4M2RlNjU5NjRiNWUwODMiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy84MTA4ZGZmMS00NDY3LTRhNTItYmU2Yi1lYWEzOWEwZmQwM2QvZXBpc29kZXMvZWI4MDI0NzMtYTYwYy00ZGQ5LWFmZDctOWU1ZmE4MGM0ZGU4LzE4NDc0NTcyLXdheW1vLW91dGFnZS1tYW51cy1haS1nZW5lcmF0ZWQtcG9saWNlLXJlcG9ydHMubXAzIn0=.mp3" length="33107104" type="audio/mpeg"/><itunes:summary>&lt;p&gt;To kick off the new year, Steve Dufour and I chatted about:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Steve&apos;s expertise in copying and pasting hundreds of lines from the Health Insurance Portability and Accountability Act (HIPAA), because ChatGPT couldn&apos;t parse it.&lt;/li&gt;&lt;li&gt;The late December 2025 &lt;a href=&quot;https://www.usatoday.com/story/tech/news/2025/12/24/waymo-power-outage-san-francisco-cars-update/87905535007/&quot; rel=&quot;noopener noreferrer nofollow&quot;&gt;​Waymo outage​&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;Cops Forced to &lt;a href=&quot;https://futurism.com/artificial-intelligence/ai-police-report-frog&quot; rel=&quot;noopener noreferrer nofollow&quot;&gt;​Explain​&lt;/a&gt; Why AI Generated Police Report Claimed Officer Transformed Into Frog.&lt;/li&gt;&lt;li&gt;Manus AI&apos;s &lt;a href=&quot;https://www.wsj.com/tech/ai/meta-buys-ai-startup-manus-adding-millions-of-paying-users-f1dc7ef8&quot; rel=&quot;noopener 
noreferrer nofollow&quot;&gt;​acquisition​&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;AI hampering &lt;a href=&quot;https://fortune.com/article/does-ai-increase-workplace-productivity-experiment-software-developers-task-took-longer/&quot; rel=&quot;noopener noreferrer nofollow&quot;&gt;​productivity​&lt;/a&gt; of software developers, despite expectations it would boost efficiency.&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://www.reddit.com/r/ArtificialInteligence/comments/1py55r7/ai_generated_content_is_changing_our_language_and&quot; rel=&quot;noopener noreferrer nofollow&quot;&gt;​Impacts​&lt;/a&gt; of AI on language, media, and culture.&lt;/li&gt;&lt;li&gt;Our 2026 predictions for AI&lt;/li&gt;&lt;/ul&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:45:54</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/8108dff1-4467-4a52-be6b-eaa39a0fd03d/uhn7x1yaj8c59nv415mnahn1jblv.jpg"/><itunes:title>Waymo outage, Manus, AI-generated police reports</itunes:title><itunes:episodeType>full</itunes:episodeType></item></channel></rss>