<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:psc="http://podlove.org/simple-chapters" xmlns:podcast="https://podcastindex.org/namespace/1.0"><channel><title><![CDATA[Health Data Ethics]]></title><description><![CDATA[<p>Health tech conversations, from a healthcare IT professional. We're going to talk about medical innovation, technology, and the ethical and operational considerations for health systems. In other words: it's gonna get super nerdy, super fast!</p>]]></description><link>https://www.jenniferowens.net</link><generator>Riverside.fm (https://riverside.com)</generator><lastBuildDate>Sat, 16 May 2026 04:51:18 GMT</lastBuildDate><atom:link href="https://api.riverside.com/hosting/5HAVcjJm.rss" rel="self" type="application/rss+xml"/><author><![CDATA[Jennifer Owens]]></author><pubDate>Sat, 22 Nov 2025 20:03:05 GMT</pubDate><copyright><![CDATA[2025 Jennifer Owens]]></copyright><language><![CDATA[en]]></language><ttl>60</ttl><category><![CDATA[Medicine]]></category><category><![CDATA[Life Sciences]]></category><itunes:author>Jennifer Owens</itunes:author><itunes:summary>&lt;p&gt;Health tech conversations, from a healthcare IT professional. We&apos;re going to talk about medical innovation, technology, and the ethical and operational considerations for health systems. 
In other words: it&apos;s gonna get super nerdy, super fast!&lt;/p&gt;</itunes:summary><itunes:type>episodic</itunes:type><itunes:owner><itunes:name>Jennifer Owens</itunes:name><itunes:email>jennifer@jenniferowens.net</itunes:email></itunes:owner><itunes:explicit>no</itunes:explicit><itunes:category text="Health &amp; Fitness"><itunes:category text="Medicine"/></itunes:category><itunes:category text="Science"><itunes:category text="Life Sciences"/></itunes:category><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><item><title><![CDATA[What does Privacy and Transparency Mean Anyway? ]]></title><description><![CDATA[<p>This week's Health Data Ethics podcast continues our series on the Joint Commission and CHAI guidance on the responsible use of health AI. In this episode we're digging into privacy and transparency.</p><p></p><p>The guidance itself is reasonable. What I spent most of the episode on is how you actually implement it, because that's where things get interesting.</p><p></p><p>Adding AI language to the Notice of Privacy Practices is a good first step, and a lot of health systems are doing it. But I think the most-told lie in modern life is still "I have read and agreed to the terms and conditions." Broad disclosure is honest, and it matters, and it's also not going to carry the whole weight of a transparent relationship with your patients.</p><p></p><p>The piece I really wanted to dig into is opt-outs. If you offer patients the ability to opt out of something you can't actually turn off, you've built opt-out theater, and that erodes trust faster than just being honest about the limitation would. Ambulatory scribe is a real opt-out. 
Inpatient sepsis prediction is not technically feasible to opt out of, and we probably shouldn't pretend it is.</p><p></p><p>I also spend some time on the clinician side, which I think gets short shrift in a lot of these conversations. Operational training on a tool is not the same thing as understanding how the model behaves, where it fails, and which patients it might be wrong for. Clinicians are the ones carrying accountability for human-in-the-loop judgment, and they need real explainability to do that well.</p>]]></description><guid isPermaLink="false">cd186ed7-13c4-468e-b8be-55d4e1ddaafd</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 13 May 2026 11:30:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/3b73eb02764d1996b8944b08be8f1a1786d12f32edcba5468b6d42700aab6c3b/eyJlcGlzb2RlSWQiOiJjZDE4NmVkNy0xM2M0LTQ2OGUtYjhiZS01NWQ0ZTFkZGFhZmQiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNmEwMGJiM2EyOTM4NzUxODA1OThjOWQ0L2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjYtNS0xMF9fMTktNy02Lm1wMyJ9.mp3" length="22145297" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.riverside.com/media/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/episodes/cd186ed7-13c4-468e-b8be-55d4e1ddaafd/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;This week&apos;s Health Data Ethics podcast continues our series on the Joint Commission and CHAI guidance on the responsible use of health AI. In this episode we&apos;re digging into privacy and transparency.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The guidance itself is reasonable. What I spent most of the episode on is how you actually implement it, because that&apos;s where things get interesting.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Adding AI language to the Notice of Privacy Practices is a good first step, and a lot of health systems are doing it. 
But I think the most-told lie in modern life is still &quot;I have read and agreed to the terms and conditions.&quot; Broad disclosure is honest, and it matters, and it&apos;s also not going to carry the whole weight of a transparent relationship with your patients.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The piece I really wanted to dig into is opt-outs. If you offer patients the ability to opt out of something you can&apos;t actually turn off, you&apos;ve built opt-out theater, and that erodes trust faster than just being honest about the limitation would. Ambulatory scribe is a real opt-out. Inpatient sepsis prediction is not technically feasible to opt out of, and we probably shouldn&apos;t pretend it is.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I also spend some time on the clinician side, which I think gets short shrift in a lot of these conversations. Operational training on a tool is not the same thing as understanding how the model behaves, where it fails, and which patients it might be wrong for. Clinicians are the ones carrying accountability for human-in-the-loop judgment, and they need real explainability to do that well.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:11:32</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What does Privacy and Transparency Mean Anyway? 
</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Do You Get AI Policy Approved?]]></title><description><![CDATA[<p>Getting an AI policy approved in a large health system is a different skill than writing one.</p><p></p><p>In part two of my AI policy series on the Health Data Ethics Podcast, I share what months of drafting, socializing, and navigating formal approval at Cleveland Clinic actually looked like: the champions you need, the scope battles you'll face, and why the approval process is won or lost long before the policy enters formal review.</p><p></p><p>The biggest takeaway: identify domains where your scope overlaps with someone else's, and get those leaders in the room early before formal review even starts.</p>]]></description><guid isPermaLink="false">8b8f9329-2324-4004-8bfe-bcf5f2a1cd2e</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 06 May 2026 11:30:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/60cd18667a4d88ce1e179206e7018d9cfea0fdf4d77d4c261953c01d37b9b085/eyJlcGlzb2RlSWQiOiI4YjhmOTMyOS0yMzI0LTQwMDQtOGJmZS1iY2Y1ZjJhMWNkMmUiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjlmMDlkZjAwOGU1NDIyMmNhMjAxNDBhL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjYtNC0yOF9fMTMtNDUtNTIubXAzIn0=.mp3" length="17470841" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.riverside.com/media/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/episodes/8b8f9329-2324-4004-8bfe-bcf5f2a1cd2e/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;Getting an AI policy approved in a large health system is a different skill than writing one.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In part two of my AI policy series on the Health Data Ethics Podcast, I share what months of drafting, socializing, and navigating formal approval at Cleveland Clinic actually looked 
like: the champions you need, the scope battles you&apos;ll face, and why the approval process is won or lost long before the policy enters formal review.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The biggest takeaway: identify domains where your scope overlaps with someone else&apos;s, and get those leaders in the room early before formal review even starts.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:09:06</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Do You Get AI Policy Approved?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Do You Write an AI Policy?]]></title><description><![CDATA[<p>Writing an AI policy sounds straightforward — until it becomes the place where everyone in your organization hangs all their hopes and dreams for AI governance.</p><p></p><p>In this episode of the Health Data Ethics Podcast, I walk through the first item on the Joint Commission and Coalition for Health AI's responsible use guidance: establishing an AI policy as your governance foundation. I share what we learned working on Cleveland Clinic's AI policy in late 2024 — before the JC/CHAI guidance even existed — including the structural traps that slow policies down, and why pre-approval stakeholder alignment is so important. 
</p><p></p><p>If you're starting from zero or trying to get a stalled draft across the finish line, this one's for you.</p>]]></description><guid isPermaLink="false">f51842f9-ef41-4626-80aa-6728ef1b38d9</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 29 Apr 2026 11:30:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/9d909fc8a617ca4aea6fefc5bad7fd3c7ca668e28e6d861e9022645cfd433f6e/eyJlcGlzb2RlSWQiOiJmNTE4NDJmOS1lZjQxLTQ2MjYtODBhYS02NzI4ZWYxYjM4ZDkiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjlmMDlkM2FiOTc3NzNhMjAyOWY5ZWNmL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjYtNC0yOF9fMTMtNDItNTAubXAzIn0=.mp3" length="30092373" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.riverside.com/media/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/episodes/f51842f9-ef41-4626-80aa-6728ef1b38d9/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;Writing an AI policy sounds straightforward — until it becomes the place where everyone in your organization hangs all their hopes and dreams for AI governance.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In this episode of the Health Data Ethics Podcast, I walk through the first item on the Joint Commission and Coalition for Health AI&apos;s responsible use guidance: establishing an AI policy as your governance foundation. I share what we learned working on Cleveland Clinic&apos;s AI policy in late 2024 — before the JC/CHAI guidance even existed — including the structural traps that slow policies down, and why pre-approval stakeholder alignment is so important. 
&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;If you&apos;re starting from zero or trying to get a stalled draft across the finish line, this one&apos;s for you.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:15:40</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Do You Write an AI Policy?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What's the White House Thinking About AI Regulation? Part Two]]></title><description><![CDATA[<p>Part two of my breakdown of the White House National AI Policy Framework — what it says about workforce, what it leaves out, and what it would take to become law.<br /><br />The workforce section has good instincts but no mandates, no funding, no timelines. In healthcare, this can create a patient safety problem, if our health systems don't fill this gap thoughtfully.<br /><br />The legislative road is crowded and uncertain, but even if codified into law, this federal posture is a light touch with the rest landing on health systems.</p>]]></description><guid isPermaLink="false">63d01b2c-29d4-4090-9a2b-96627374d1c7</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 08 Apr 2026 11:30:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/739c9bf5e929c92c261972686fab844e6f4e99deb69acf55d69b9608005cebc1/eyJlcGlzb2RlSWQiOiI2M2QwMWIyYy0yOWQ0LTQwOTAtOWEyYi05NjYyNzM3NGQxYzciLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjljOTMzZTFmMDMwOTI5MDc0OGJlMmQzL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjYtMy0yOV9fMTYtMTQtNTcubXAzIn0=.mp3" length="24735390" type="audio/mpeg"/><podcast:transcript 
url="https://hosting-media.riverside.com/media/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/episodes/63d01b2c-29d4-4090-9a2b-96627374d1c7/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;Part two of my breakdown of the White House National AI Policy Framework — what it says about workforce, what it leaves out, and what it would take to become law.&lt;br /&gt;&lt;br /&gt;The workforce section has good instincts but no mandates, no funding, no timelines. In healthcare, this can create a patient safety problem, if our health systems don&apos;t fill this gap thoughtfully.&lt;br /&gt;&lt;br /&gt;The legislative road is crowded and uncertain, but even if codified into law, this federal posture is a light touch with the rest landing on health systems.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:17:11</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What&apos;s the White House Thinking About AI Regulation? Part Two</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What's The White House Thinking About AI Regulation? Part One]]></title><description><![CDATA[<p>This week's episode of the Health Data Ethics Show: The White House just released a four-page legislative framework asking Congress to pass national AI policy this year. Not a law. A wish list, but one that tells us a lot about where federal AI policy is heading.<br /><br />In this episode I break down what it means for healthcare governance: the preemption debate, FDA as the designated health AI gatekeeper, and the notable absence of HIPAA from the entire document.<br /><br />The governance responsibility has always sat with health systems. 
This framework confirms the federal government intends to keep it that way.</p>]]></description><guid isPermaLink="false">7c98d94c-849d-40a4-99f4-7353b22183b8</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 01 Apr 2026 11:30:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/d9946be8b0d3c595555b1e390d458c625a4e5fed59c385795f07414b075bab5a/eyJlcGlzb2RlSWQiOiI3Yzk4ZDk0Yy04NDlkLTQwYTQtOTlmNC03MzUzYjIyMTgzYjgiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjljOTMzYmNiMDQwYTBkMGQ3OWRiNjljL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjYtMy0yOV9fMTYtMTQtMjAubXAzIn0=.mp3" length="23596242" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.riverside.com/media/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/episodes/7c98d94c-849d-40a4-99f4-7353b22183b8/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;This week&apos;s episode of the Health Data Ethics Show: The White House just released a four-page legislative framework asking Congress to pass national AI policy this year. Not a law. A wish list, but one that tells us a lot about where federal AI policy is heading.&lt;br /&gt;&lt;br /&gt;In this episode I break down what it means for healthcare governance: the preemption debate, FDA as the designated health AI gatekeeper, and the notable absence of HIPAA from the entire document.&lt;br /&gt;&lt;br /&gt;The governance responsibility has always sat with health systems. This framework confirms the federal government intends to keep it that way.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:16:23</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What&apos;s The White House Thinking About AI Regulation? 
Part One</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Can ChatGPT Tell Me When To Go To The ED?]]></title><description><![CDATA[<p>In this week's episode of the Health Data Ethics Show, I dig into a new Nature Medicine paper that stress-tested ChatGPT Health across 960 clinical vignettes. ChatGPT Health performs remarkably well in the middle of the acuity spectrum, correctly triaging semi-urgent and urgent cases at rates that rival clinical judgment. <br /><br />At the extremes, though, it struggles — undertriaging more than half of true emergencies and triggering crisis resources for suicidal ideation more reliably when patients had no plan than when they did. I walk through the methodology, the results, what they reveal about the limits of benchmarks like HealthBench, and what I think health systems and patients should take from it.</p>]]></description><guid isPermaLink="false">7acafe4f-5665-4924-9a83-bfd0469999fb</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 25 Mar 2026 11:30:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/7ab47d0ac35c42f7c0368d0546ffb6afb8b4c1479141c2bb9b202365ac1b4c5c/eyJlcGlzb2RlSWQiOiI3YWNhZmU0Zi01NjY1LTQ5MjQtOWE4My1iZmQwNDY5OTk5ZmIiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjliZWQ2ODI1ODgzYzM5OTk0OWE5MzJiL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjYtMy0yMV9fMTgtMzMtNTMubXAzIn0=.mp3" length="32350815" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.riverside.com/media/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/episodes/7acafe4f-5665-4924-9a83-bfd0469999fb/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;In this week&apos;s episode of the Health Data Ethics Show, I dig into a new Nature Medicine paper that stress-tested ChatGPT Health across 960 clinical vignettes. 
ChatGPT Health performs remarkably well in the middle of the acuity spectrum, correctly triaging semi-urgent and urgent cases at rates that rival clinical judgment. &lt;br /&gt;&lt;br /&gt;At the extremes, though, it struggles — undertriaging more than half of true emergencies and triggering crisis resources for suicidal ideation more reliably when patients had no plan than when they did. I walk through the methodology, the results, what they reveal about the limits of benchmarks like HealthBench, and what I think health systems and patients should take from it.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:22:28</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>Can ChatGPT Tell Me When To Go To The ED?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What's AI's Impact on the Labor Market? ]]></title><description><![CDATA[<p>Last week, Anthropic published new research on the labor market impacts of AI. In this week's episode, I break down what the paper actually says, why the gap between <i>theoretical</i> and <i>observed</i> AI exposure matters, and which workers and which sectors are most impacted.</p><p></p><p>Those workers, and the people reading these articles, are also your patients. 
They're walking into your hospital carrying headlines, anxiety, and ChatGPT conversations from the parking garage — and most governance frameworks have no place to put that.</p>]]></description><guid isPermaLink="false">a3ac29b0-7398-4521-88f0-af4eec2d300a</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 11 Mar 2026 11:30:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/05695608d9e4a8ff0102a74ff4440b49670fd342a5a3acfdbdd5aba51bb9e314/eyJlcGlzb2RlSWQiOiJhM2FjMjliMC03Mzk4LTQ1MjEtODhmMC1hZjRlZWMyZDMwMGEiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjlhZWQwZTgzMmVkY2UwN2RjNjQ4ZDZlL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjYtMy05X18xNC01My00NC5tcDMifQ==.mp3" length="18065388" type="audio/mpeg"/><podcast:transcript url="https://hosting-media.riverside.com/media/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/episodes/a3ac29b0-7398-4521-88f0-af4eec2d300a/transcripts.txt" type="text/plain"/><itunes:summary>&lt;p&gt;Last week, Anthropic published new research on the labor market impacts of AI. In this week&apos;s episode, I break down what the paper actually says, why the gap between &lt;i&gt;theoretical&lt;/i&gt; and &lt;i&gt;observed&lt;/i&gt; AI exposure matters, and which workers and which sectors are most impacted.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Those workers, and the people reading these articles, are also your patients. 
They&apos;re walking into your hospital carrying headlines, anxiety, and ChatGPT conversations from the parking garage — and most governance frameworks have no place to put that.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:12:33</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What&apos;s AI&apos;s Impact on the Labor Market? </itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What's New with AI in Hiring, with Brad Owens of Asymbl]]></title><description><![CDATA[<p>In this episode, I invite special spousal guest Brad Owens to help explore a landmark federal court case involving Workday’s AI-powered hiring tools and potential implications for AI governance in employment and healthcare. Join us as we dissect the legal, ethical, and technological facets of AI decision-making and bias mitigation.<br /><br />EEOC Four-Fifths Rule - <a rel="noopener noreferrer nofollow" href="https://www.eeoc.gov/laws/guidance/questions-and-answers-clarify-and-provide-common-interpretation-uniform-guidelines" target="_blank">https://www.eeoc.gov/laws/guidance/questions-and-answers-clarify-and-provide-common-interpretation-uniform-guidelines</a><br />NYC Bias Auditing Law - <a rel="noopener noreferrer nofollow" href="https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page" target="_blank">https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page</a></p>]]></description><guid isPermaLink="false">427692bd-fb7b-4594-8b38-181c60de6153</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 04 Mar 2026 12:30:00 GMT</pubDate><enclosure 
url="https://api.riverside.com/hosting-analytics/media/8f4271e20a257469f9cc242d00938dc5806bd4e4ce14d25bc3fdac32ab635984/eyJlcGlzb2RlSWQiOiI0Mjc2OTJiZC1mYjdiLTQ1OTQtOGIzOC0xODFjNjBkZTYxNTMiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjlhNWUyZWE4YmY1ZTY1ZmFlZjllZGNmL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjYtMy0yX18yMC0yMC0xMC5tcDMifQ==.mp3" length="34583344" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode, I invite special spousal guest Brad Owens to help explore a landmark federal court case involving Workday’s AI-powered hiring tools and potential implications for AI governance in employment and healthcare. Join us as we dissect the legal, ethical, and technological facets of AI decision-making and bias mitigation.&lt;br /&gt;&lt;br /&gt;EEOC Four-Fifths Rule - &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.eeoc.gov/laws/guidance/questions-and-answers-clarify-and-provide-common-interpretation-uniform-guidelines&quot; target=&quot;_blank&quot;&gt;https://www.eeoc.gov/laws/guidance/questions-and-answers-clarify-and-provide-common-interpretation-uniform-guidelines&lt;/a&gt;&lt;br /&gt;NYC Bias Auditing Law - &lt;a rel=&quot;noopener noreferrer nofollow&quot; href=&quot;https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page&quot; target=&quot;_blank&quot;&gt;https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page&lt;/a&gt;&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:24:01</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What&apos;s New with AI in Hiring, with Brad Owens of Asymbl</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Can AI Help Patients Manage Their 
Blood Sugar?]]></title><description><![CDATA[<p>In this week's Health Data Ethics podcast, I dissected a 2023 JAMA study on using a voice-based AI tool to coach patients through insulin titration in type 2 diabetes. Big thanks up front to Aida McCracken who listened to me work through this one! <br /><br />The results were encouraging—patients hit optimal dosing in 15 days vs. 56+ for standard care. But governance questions kept nagging me.<br /><br />→ Self-titration works for insulin because it's clinically established. Try the same with other drugs and you're in much riskier territory.<br /><br />→ Study was 8 weeks. Diabetes is lifelong. What happens when engagement drops or the smart speaker isn't supported anymore?<br /><br />I'm all for empowering patients to follow validated protocols. We need scalable chronic disease solutions. But let's innovate with eyes open about equity, sustainability, and regulatory frameworks.</p>]]></description><guid isPermaLink="false">0311852f-0de7-4816-ab52-c1d792367b16</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 18 Feb 2026 12:30:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/98aa8ebaf15a82592689fc9e3de7e1a73d094fe7e156fe3edf37e7f4fc104592/eyJlcGlzb2RlSWQiOiIwMzExODUyZi0wZGU3LTQ4MTYtYWI1Mi1jMWQ3OTIzNjdiMTYiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk5MzJkMTdhYzcxZDk1ZDBhMzlmZTVkL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjYtMi0xNl9fMTUtNDMtMzUubXAzIn0=.mp3" length="29551534" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this week&apos;s Health Data Ethics podcast, I dissected a 2023 JAMA study on using a voice-based AI tool to coach patients through insulin titration in type 2 diabetes. Big thanks up front to Aida McCracken who listened to me work through this one! 
&lt;br /&gt;&lt;br /&gt;The results were encouraging—patients hit optimal dosing in 15 days vs. 56+ for standard care. But governance questions kept nagging me.&lt;br /&gt;&lt;br /&gt;→ Self-titration works for insulin because it&apos;s clinically established. Try the same with other drugs and you&apos;re in much riskier territory.&lt;br /&gt;&lt;br /&gt;→ Study was 8 weeks. Diabetes is lifelong. What happens when engagement drops or the smart speaker isn&apos;t supported anymore?&lt;br /&gt;&lt;br /&gt;I&apos;m all for empowering patients to follow validated protocols. We need scalable chronic disease solutions. But let&apos;s innovate with eyes open about equity, sustainability, and regulatory frameworks.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:20:31</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>Can AI Help Patients Manage Their Blood Sugar?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Happens When AI Gets the Sources Wrong?]]></title><description><![CDATA[<p>This week, I recorded a new episode of Health Data Ethics about FDA and EMA’s joint principles on AI in drug development. I used ChatGPT to help generate a draft. I loved it, until I started fact checking.<br /><br />It cited a section of a review paper that didn’t exist. Referenced a tool that never appeared in the text. When I called it on its hallucinations, it gave me fake quotes with fake page numbers. 
I was relying on this to build the backbone of an episode about AI transparency and instead, I scrapped the whole thing and started over from scratch.<br /><br />I texted my husband mid-edit: “If I'd recorded this I'd have seriously undercut my credibility with anyone who wanted to check.”<br /><br />He sent back: "The difference between enterprise and demo AI in two texts."<br /><br />In this episode, I talk about that failure—mine, and the model’s—and what it tells us about algorithmic bias and creating a culture of transparency.</p>]]></description><guid isPermaLink="false">30f7f659-94ec-4d68-9347-a1840d5e07a3</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 11 Feb 2026 12:30:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/8f0d32db2d0b6bbc3e728367b4b74e7327a43ca6822b551abc3e0de55a854b3a/eyJlcGlzb2RlSWQiOiIzMGY3ZjY1OS05NGVjLTRkNjgtOTM0Ny1hMTg0MGQ1ZTA3YTMiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk3ZmExNmVhZTIzMzUwZWQzOWE2M2IzL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjYtMi0xX18xOS01NC0zOC5tcDMifQ==.mp3" length="7781683" type="audio/mpeg"/><itunes:summary>&lt;p&gt;This week, I recorded a new episode of Health Data Ethics about FDA and EMA’s joint principles on AI in drug development. I used ChatGPT to help generate a draft. I loved it, until I started fact checking.&lt;br /&gt;&lt;br /&gt;It cited a section of a review paper that didn’t exist. Referenced a tool that never appeared in the text. When I called it on its hallucinations, it gave me fake quotes with fake page numbers. 
I was relying on this to build the backbone of an episode about AI transparency and instead, I scrapped the whole thing and started over from scratch.&lt;br /&gt;&lt;br /&gt;I texted my husband mid-edit: “If I&apos;d recorded this I&apos;d have seriously undercut my credibility with anyone who wanted to check.”&lt;br /&gt;&lt;br /&gt;He sent back: &quot;The difference between enterprise and demo AI in two texts.&quot;&lt;br /&gt;&lt;br /&gt;In this episode, I talk about that failure—mine, and the model’s—and what it tells us about algorithmic bias and creating a culture of transparency.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:10:19</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What Happens When AI Gets the Sources Wrong?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What's Readiness for AI In Healthcare? With Dr. Sahar Hashmi, MD, PhD]]></title><description><![CDATA[<p>Just had a brilliant conversation with Sahar Hashmi MD-PhD—one of my favorite people to talk AI-in-healthcare with.<br /><br />Dr. Hashmi made a powerful case for customized AI workshops tailored to clinical teams. 
We also unpacked:<br /><br />- What healthcare can learn from other industries <br /><br />-  Why assessing AI readiness is foundational<br /><br />- The value of an AI Center of Excellence as a coordination engine<br /><br />- A Flexner moment for AI in healthcare<br /><br />If you're building or scaling AI in a provider org and haven’t asked “who owns readiness?”, this conversation is for you.</p>]]></description><guid isPermaLink="false">f0aae610-7419-423c-88e3-73589f5079d1</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 04 Feb 2026 12:30:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/0c51f81f0525cf9ccc3934fdb33892785939b234b75e2fb86fd86ac7a4fe9df8/eyJlcGlzb2RlSWQiOiJmMGFhZTYxMC03NDE5LTQyM2MtODhlMy03MzU4OWY1MDc5ZDEiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk3ZTZkNGU2ZGM2M2QwMGRmNGU2OWZlL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjYtMS0zMV9fMjEtNTktNTgubXAzIn0=.mp3" length="17832457" type="audio/mpeg"/><itunes:summary>&lt;p&gt;Just had a brilliant conversation with Sahar Hashmi MD-PhD—one of my favorite people to talk AI-in-healthcare with.&lt;br /&gt;&lt;br /&gt;Dr. Hashmi made a powerful case for customized AI workshops tailored to clinical teams. 
We also unpacked:&lt;br /&gt;&lt;br /&gt;- What healthcare can learn from other industries &lt;br /&gt;&lt;br /&gt;-  Why assessing AI readiness is foundational&lt;br /&gt;&lt;br /&gt;- The value of an AI Center of Excellence as a coordination engine&lt;br /&gt;&lt;br /&gt;- A Flexner moment for AI in healthcare&lt;br /&gt;&lt;br /&gt;If you&apos;re building or scaling AI in a provider org and haven’t asked “who owns readiness?”, this conversation is for you.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:24:39</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What&apos;s Readiness for AI In Healthcare? With Dr. Sahar Hashmi, MD, PhD</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[When Is AI A Medical Device?]]></title><description><![CDATA[<p>I recently spent time unpacking the FDA’s clinical decision support software guidance and what it really means for healthcare organizations deploying AI.<br /><br />At the center of the guidance is a simple question: when does software cross the line into being regulated as a medical device? 
The FDA lays out specific criteria that hinge on how recommendations are generated, how they are presented, and whether a human can independently review and understand the basis for those recommendations.<br /><br />If clinicians cannot reasonably evaluate or challenge an AI’s output, organizations may find themselves in regulated territory whether they intended to be there or not.<br /><br />Understanding where human judgment sits in the loop is essential for compliance, trust, and responsible scaling of AI.<br /><br />If you are deploying or governing clinical AI, this is guidance worth revisiting with both legal and clinical stakeholders at the table.</p>]]></description><guid isPermaLink="false">4bce026a-d71b-41cc-90fd-231df95cdbed</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 28 Jan 2026 12:30:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/de6dcd9a2a301483a4234abaedd60397323c8fbb80ef0d08cdb7a745979c2f77/eyJlcGlzb2RlSWQiOiI0YmNlMDI2YS1kNzFiLTQxY2MtOTBmZC0yMzFkZjk1Y2RiZWQiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjk3OTViZDRkODgxNWQ5Mzg2MmIyN2YyL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjYtMS0yOF9fMS00NC00Lm1wMyJ9.mp3" length="7619491" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I recently spent time unpacking the FDA’s clinical decision support software guidance and what it really means for healthcare organizations deploying AI.&lt;br /&gt;&lt;br /&gt;At the center of the guidance is a simple question: when does software cross the line into being regulated as a medical device? 
The FDA lays out specific criteria that hinge on how recommendations are generated, how they are presented, and whether a human can independently review and understand the basis for those recommendations.&lt;br /&gt;&lt;br /&gt;If clinicians cannot reasonably evaluate or challenge an AI’s output, organizations may find themselves in regulated territory whether they intended to be there or not.&lt;br /&gt;&lt;br /&gt;Understanding where human judgment sits in the loop is essential for compliance, trust, and responsible scaling of AI.&lt;br /&gt;&lt;br /&gt;If you are deploying or governing clinical AI, this is guidance worth revisiting with both legal and clinical stakeholders at the table.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:10:12</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>When Is AI A Medical Device?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Does Ohio Law Say About AI?]]></title><description><![CDATA[<p>In this week’s Health Data Ethics episode, I break down four proposed bills in Ohio that aim to regulate artificial intelligence. These aren’t laws (yet). But they are early indicators of where state-level AI governance is headed, and the healthcare sector is definitely in scope. 
We also zoom out to look at where these proposals fit within wider state and federal activity on AI.</p>]]></description><guid isPermaLink="false">8ce29185-1e03-4837-9a82-85e6317e8fcd</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 31 Dec 2025 17:03:30 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/1ebe0bcbfcf19f698c8fa99a3e48c2c77ea97e0f2299d0ddb72bee8c3a8ef149/eyJlcGlzb2RlSWQiOiI4Y2UyOTE4NS0xZTAzLTQ4MzctOWE4Mi04NWU2MzE3ZThmY2QiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjkyMjEzYWFhM2JkYTdjNTg0MjEzNTBkL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjUtMTEtMjJfXzIwLTQ4LTU4Lm1wMyJ9.mp3" length="10377760" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this week’s Health Data Ethics episode, I break down four proposed bills in Ohio that aim to regulate artificial intelligence. These aren’t laws (yet). But they are early indicators of where state-level AI governance is headed, and the healthcare sector is definitely in scope. We also zoom out to look at where these proposals fit within wider state and federal activity on AI.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:13:51</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What Does Ohio Law Say About AI?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Does AI Governance By Design Mean? With Nick Woo of AlignAI]]></title><description><![CDATA[<p>In this week's podcast episode, I had the chance to chat with Nicholas Woo of AlignAI about AI governance by design at multiple stages of AI development and implementation. 
We also talked about how easily great technological solutions can get stalled when they don’t fit into existing workflows or when there’s no clear operational champion. If you're thinking about AI governance, this episode is for you.</p>]]></description><guid isPermaLink="false">9830e26d-e8f0-474c-9825-62ad7e2c22e6</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 31 Dec 2025 17:01:42 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/52bcc7ccfe3a5bf0e7750365d86c90743dd0853e3b82213161f8e3b1705ac09c/eyJlcGlzb2RlSWQiOiI5ODMwZTI2ZC1lOGYwLTQ3NGMtOTgyNS02MmFkN2UyYzIyZTYiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjkwMjcwZThhYzk1MGIxZTFlMzRlYTQxL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjUtMTAtMjlfXzIwLTU0LTE2Lm1wMyJ9.mp3" length="16282201" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this week&apos;s podcast episode, I had the chance to chat with Nicholas Woo of AlignAI about AI governance by design at multiple stages of AI development and implementation. We also talked about how easily great technological solutions can get stalled when they don’t fit into existing workflows or when there’s no clear operational champion. If you&apos;re thinking about AI governance, this episode is for you.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:22:19</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What Does AI Governance By Design Mean? 
With Nick Woo of AlignAI</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Makes An AI Scribe Good?]]></title><description><![CDATA[<p>In this week’s episode of the Health Data Ethics Podcast, I dive into a recent JAMA Network Open study that looked at ambient AI scribes in primary care settings — what worked, what didn’t, and where we still need better evidence. I also tackle what makes a successful scribe deployment, and end by musing about what ROI really means. Is it always dollars? Or is reducing burnout enough?</p>]]></description><guid isPermaLink="false">a56b36c1-bd23-40ae-a637-6d144d74c3ca</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 31 Dec 2025 17:01:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/438ed529f0b9b2cb6a49789827b3aa34f8921103f402e2ac7bee5b0bb68d34f1/eyJlcGlzb2RlSWQiOiJhNTZiMzZjMS1iZDIzLTQwYWUtYTYzNy02ZDE0NGQ3NGMzY2EiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjhmZjc2NTU0YWExNThjMWVhNTdjNzIwL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjUtMTAtMjdfXzE0LTQwLTM3Lm1wMyJ9.mp3" length="9956243" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this week’s episode of the Health Data Ethics Podcast, I dive into a recent JAMA Network Open study that looked at ambient AI scribes in primary care settings — what worked, what didn’t, and where we still need better evidence. I also tackle what makes a successful scribe deployment, and end by musing about what ROI really means. Is it always dollars? 
Or is reducing burnout enough?&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:13:31</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What Makes An AI Scribe Good?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What's The Joint Commission Saying About Healthcare AI These Days?]]></title><description><![CDATA[<p>We’re officially in a new era: the Joint Commission has released guidance on the responsible use of AI in healthcare, developed alongside the Coalition for Health AI (CHAI). The guidance lays out a clear framework for responsible healthcare AI that includes:<br /><br />🏛️ Strong governance structures<br /><br />🔐 Patient privacy &amp; data security safeguards<br /><br />📊 Ongoing quality monitoring<br /><br />🧑‍🏫 Workforce education &amp; training<br /><br />⚠️ Risk and bias assessments<br /><br />📝 Anonymous safety event reporting<br /><br />In the episode, I share some strategies orgs can use to operationalize these principles. If you’re in healthcare IT, compliance, clinical leadership, or just AI-curious, I highly recommend giving this guidance a read. 
And if you’d rather listen your way through it with me, the link to the full episode will be in the comments!</p>]]></description><guid isPermaLink="false">fc1014ed-2817-45f4-848d-abd4b8083224</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 31 Dec 2025 16:59:14 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/b525feb3af27a7491c729e73e38289e333fa1f572e1804e5f6f5d40d0b36f5de/eyJlcGlzb2RlSWQiOiJmYzEwMTRlZC0yODE3LTQ1ZjQtODQ4ZC1hYmQ0YjgwODMyMjQiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjhkYzNmODM5YzllNmE3NmQwNWZiZDFhL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjUtOS0zMF9fMjItMzctMjMubXAzIn0=.mp3" length="9545536" type="audio/mpeg"/><itunes:summary>&lt;p&gt;We’re officially in a new era: the Joint Commission has released guidance on the responsible use of AI in healthcare, developed alongside the Coalition for Health AI (CHAI). The guidance lays out a clear framework for responsible healthcare AI that includes:&lt;br /&gt;&lt;br /&gt;🏛️ Strong governance structures&lt;br /&gt;&lt;br /&gt;🔐 Patient privacy &amp;amp; data security safeguards&lt;br /&gt;&lt;br /&gt;📊 Ongoing quality monitoring&lt;br /&gt;&lt;br /&gt;🧑‍🏫 Workforce education &amp;amp; training&lt;br /&gt;&lt;br /&gt;⚠️ Risk and bias assessments&lt;br /&gt;&lt;br /&gt;📝 Anonymous safety event reporting&lt;br /&gt;&lt;br /&gt;In the episode, I share some strategies orgs can use to operationalize these principles. If you’re in healthcare IT, compliance, clinical leadership, or just AI-curious, I highly recommend giving this guidance a read. 
And if you’d rather listen your way through it with me, the link to the full episode will be in the comments!&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:12:55</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What&apos;s The Joint Commission Saying About Healthcare AI These Days?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Happened With Explorys and IBM Watson Health? With Doug Meil]]></title><description><![CDATA[<p>This week on the Health Data Ethics Podcast, I sat down with Doug Meil to talk about his new book, "The Rise and Fall of Explorys and IBM Watson Health." What started as a discussion about a decade-long journey in health tech turned into a reflection on ambition, acquisition, and the hard-earned lessons of analytics. Doug talked about missed opportunities, broke down the reasons why they were missed, and gave great advice for what future innovators can take away from Explorys's story. If you’re working in healthcare data at all, it’s essential listening. One key theme that stuck with me: Iterate relentlessly, but don’t ignore the fundamentals—especially data access. Also loved his advice for the next generation of health tech leaders: Build for impact, not just headlines. 
Check out our conversation if you’re thinking about where health AI is headed, or just want a better understanding of how big bets like IBM Watson Health can shape the industry.</p>]]></description><guid isPermaLink="false">90c35eec-51f3-4053-bfd9-b1bc191e5732</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 31 Dec 2025 16:55:14 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/4e5b6ba7e161431bfb6188108952b52511218cfb33266c688ba21077491dc82b/eyJlcGlzb2RlSWQiOiI5MGMzNWVlYy01MWYzLTQwNTMtYmZkOS1iMWJjMTkxZTU3MzIiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjhkMDZiMDY3MDc4MDU0MDkyYTk2NmQ2L2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjUtOS0yMV9fMjMtMTUtNTAubXAzIn0=.mp3" length="16167986" type="audio/mpeg"/><itunes:summary>&lt;p&gt;This week on the Health Data Ethics Podcast, I sat down with Doug Meil to talk about his new book, &quot;The Rise and Fall of Explorys and IBM Watson Health.&quot; What started as a discussion about a decade-long journey in health tech turned into a reflection on ambition, acquisition, and the hard-earned lessons of analytics. Doug talked about missed opportunities, broke down the reasons why they were missed, and gave great advice for what future innovators can take away from Explorys&apos;s story. If you’re working in healthcare data at all, it’s essential listening. One key theme that stuck with me: Iterate relentlessly, but don’t ignore the fundamentals—especially data access. Also loved his advice for the next generation of health tech leaders: Build for impact, not just headlines. 
Check out our conversation if you’re thinking about where health AI is headed, or just want a better understanding of how big bets like IBM Watson Health can shape the industry.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:21:08</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What Happened With Explorys and IBM Watson Health? With Doug Meil</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Can AI Untangle Healthcare’s Supply Chain? With Kylen Bailey, Clarium]]></title><description><![CDATA[<p>Just had a fantastic conversation with Kylen Bailey, Executive Director of Growth at Clarium, on the latest episode of the Health Data Ethics Podcast. We dug into a topic that doesn’t get nearly enough attention in the AI-in-healthcare hype cycle: the supply chain. More specifically, how AI can help us connect the dots across healthcare systems in a way that’s actually usable and secure. 
What really stood out in our conversation:<br /><br />- The importance of co-developing AI tools with health systems, not just for them<br /><br />- Why data integration isn’t just a tech challenge, it’s a collaboration challenge<br /><br />- The ongoing need to treat data safety and security like mission-critical infrastructure<br /><br />- How AI can deliver real operational value if we build it with the right people in the room<br /><br />Kylen brought such clarity to the conversation—especially on the role of trust, transparency, and partnerships in getting these tools from pilot to production.</p>]]></description><guid isPermaLink="false">353dd067-48ff-48c8-b015-8969ed8a7954</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 31 Dec 2025 16:53:00 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/1383f1317c14f733e0e5e9cf063a574131cc8c71eaabfaf32c3737ee404b5873/eyJlcGlzb2RlSWQiOiIzNTNkZDA2Ny00OGZmLTQ4YzgtYjAxNS04OTY5ZWQ4YTc5NTQiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjhjMDM2M2EwMzE5MGE1NDNmZjI5YWMyL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjUtOS05X18xNi0xNC0xOC5tcDMifQ==.mp3" length="14620987" type="audio/mpeg"/><itunes:summary>&lt;p&gt;Just had a fantastic conversation with Kylen Bailey, Executive Director of Growth at Clarium, on the latest episode of the Health Data Ethics Podcast. We dug into a topic that doesn’t get nearly enough attention in the AI-in-healthcare hype cycle: the supply chain. More specifically, how AI can help us connect the dots across healthcare systems in a way that’s actually usable and secure. 
What really stood out in our conversation:&lt;br /&gt;&lt;br /&gt;- The importance of co-developing AI tools with health systems, not just for them&lt;br /&gt;&lt;br /&gt;- Why data integration isn’t just a tech challenge, it’s a collaboration challenge&lt;br /&gt;&lt;br /&gt;- The ongoing need to treat data safety and security like mission-critical infrastructure&lt;br /&gt;&lt;br /&gt;- How AI can deliver real operational value if we build it with the right people in the room&lt;br /&gt;&lt;br /&gt;Kylen brought such clarity to the conversation—especially on the role of trust, transparency, and partnerships in getting these tools from pilot to production.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:18:21</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>Can AI Untangle Healthcare’s Supply Chain? With Kylen Bailey, Clarium</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Can We Learn When Agents Fail?]]></title><description><![CDATA[<p>In this week's episode of the Health Data Ethics Podcast, I dig into something that doesn’t get enough airtime in healthcare AI conversations: what we can learn from the projects that don’t work. A recent MIT study suggests that up to 95% of AI pilots don’t deliver a measurable return on investment. That statistic may feel discouraging, but it’s also a great jumping-off point for investigation. What's working, and what's not, and why? 
In this episode, I talk through two recent experiments with generative and agentic AI and try to pull out the lessons:<br /><br />- Where agentic AI tends to break down in real-world workflows<br /><br />- How governance and oversight need to evolve when AI systems are making decisions<br /><br />- Why it may be more useful to think of AI as a new class of labor rather than a new type of software<br /><br />Healthcare has always been a hard place for innovation, not because we don’t want change, but because the consequences of failure are high, and the systems are complex. That makes it even more important to treat each failure, false start, or stalled pilot not as a sunk cost, but as feedback.</p>]]></description><guid isPermaLink="false">1b530e8d-a281-4387-a37b-8d823b4cacb1</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 31 Dec 2025 16:30:49 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/8dc001205a9145c8e74a60893b05b3ae4fb031c6d72829fabf9ff5008b33178d/eyJlcGlzb2RlSWQiOiIxYjUzMGU4ZC1hMjgxLTQzODctYTM3Yi04ZDgyM2I0Y2FjYjEiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjhhYjcwM2QyNjY5NjQxY2VlYjA4YWU0L2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjUtOC0yNF9fMjItNC0xMy5tcDMifQ==.mp3" length="10300508" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this week&apos;s episode of the Health Data Ethics Podcast, I dig into something that doesn’t get enough airtime in healthcare AI conversations: what we can learn from the projects that don’t work. A recent MIT study suggests that up to 95% of AI pilots don’t deliver a measurable return on investment. That statistic may feel discouraging, but it’s also a great jumping-off point for investigation. What&apos;s working, and what&apos;s not, and why? 
In this episode, I talk through two recent experiments with generative and agentic AI and try to pull out the lessons:&lt;br /&gt;&lt;br /&gt;- Where agentic AI tends to break down in real-world workflows&lt;br /&gt;&lt;br /&gt;- How governance and oversight need to evolve when AI systems are making decisions&lt;br /&gt;&lt;br /&gt;- Why it may be more useful to think of AI as a new class of labor rather than a new type of software&lt;br /&gt;&lt;br /&gt;Healthcare has always been a hard place for innovation, not because we don’t want change, but because the consequences of failure are high, and the systems are complex. That makes it even more important to treat each failure, false start, or stalled pilot not as a sunk cost, but as feedback.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:13:51</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What Can We Learn When Agents Fail?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Can We Make Health Tech Great Again?]]></title><description><![CDATA[<p>This week on the Health Data Ethics Podcast, I unpack the recent announcement from CMS about the Health Technology Ecosystem and what it could mean for patient access to data. 
I dive into:<br /><br />- Why CMS is using a pledge instead of regulation<br /><br />- The current state of HIEs and where TEFCA fits in<br /><br />- The real risk: that this pledge ends up helping flagship systems while leaving rural hospitals and safety-net clinics behind<br /><br />One zillion thanks to Kara Justi for sharing her wealth of knowledge about HIEs and TEFCA as I prepped for this podcast!</p>]]></description><guid isPermaLink="false">5c38a7e1-c9f5-4862-8c4f-8a4f7590aa40</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 31 Dec 2025 16:28:36 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/b05b97220faf176b9fab518cca92d65b4919b578dc01e77e31b48248debe4273/eyJlcGlzb2RlSWQiOiI1YzM4YTdlMS1jOWY1LTQ4NjItOGM0Zi04YTRmNzU5MGFhNDAiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjhhNGQ5ZTJmZWMzOTZhZGUyNDE3NTRkL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjUtOC0xOV9fMjItOS02Lm1wMyJ9.mp3" length="8519114" type="audio/mpeg"/><itunes:summary>&lt;p&gt;This week on the Health Data Ethics Podcast, I unpack the recent announcement from CMS about the Health Technology Ecosystem and what it could mean for patient access to data. 
I dive into:&lt;br /&gt;&lt;br /&gt;- Why CMS is using a pledge instead of regulation&lt;br /&gt;&lt;br /&gt;- The current state of HIEs and where TEFCA fits in&lt;br /&gt;&lt;br /&gt;- The real risk: that this pledge ends up helping flagship systems while leaving rural hospitals and safety-net clinics behind&lt;br /&gt;&lt;br /&gt;One zillion thanks to Kara Justi for sharing her wealth of knowledge about HIEs and TEFCA as I prepped for this podcast!&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:11:29</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>Can We Make Health Tech Great Again?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What's in the AI Action Plan?]]></title><description><![CDATA[<p>On this week’s Health Data Ethics Podcast, I break down the AI Action Plan released in July 2025—a big move in how the U.S. is approaching artificial intelligence policy. I walk through the plan’s three central pillars and what they could mean for the future of AI in healthcare and beyond. We talk about:<br /><br />🏛️ Executive orders that now mandate ideological neutrality in federal AI systems<br /><br />🧱 The massive infrastructure investments needed to support trustworthy AI<br /><br />🩺 The potential impact on healthcare<br /><br />One theme I keep coming back to: transparency and accountability. 
If you're working on AI implementation, governance, or policy in a health system, this episode is for you.</p>]]></description><guid isPermaLink="false">4e6cbaf0-4738-45bb-a942-238fef6fd40e</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 31 Dec 2025 16:21:57 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/85be5d1a9102d6af0c17843241b79b09e821440b636efe5e5e70f4a89b4547c0/eyJlcGlzb2RlSWQiOiI0ZTZjYmFmMC00NzM4LTQ1YmItYTk0Mi0yMzhmZWY2ZmQ0MGUiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjg4OTI0Y2E1ZDVjYWVkYmJkNmQyMWM4L2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjUtNy0yOV9fMjEtNDUtMTQubXAzIn0=.mp3" length="10621370" type="audio/mpeg"/><itunes:summary>&lt;p&gt;On this week’s Health Data Ethics Podcast, I break down the AI Action Plan released in July 2025—a big move in how the U.S. is approaching artificial intelligence policy. I walk through the plan’s three central pillars and what they could mean for the future of AI in healthcare and beyond. We talk about:&lt;br /&gt;&lt;br /&gt;🏛️ Executive orders that now mandate ideological neutrality in federal AI systems&lt;br /&gt;&lt;br /&gt;🧱 The massive infrastructure investments needed to support trustworthy AI&lt;br /&gt;&lt;br /&gt;🩺 The potential impact on healthcare&lt;br /&gt;&lt;br /&gt;One theme I keep coming back to: transparency and accountability. 
If you&apos;re working on AI implementation, governance, or policy in a health system, this episode is for you.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:13:57</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What&apos;s in the AI Action Plan?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What *is* Your Brain on ChatGPT, Anyway?]]></title><description><![CDATA[<p>In this week’s episode of the Health Data Ethics Podcast, I dove into one of my favorite recent papers: "Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing." I loved reading about how the researchers explored the mental trade-offs when using ChatGPT for writing tasks. In the episode, I unpack:<br /><br />- The study’s experimental design and takeaways<br /><br />- What this <i>could</i> mean for AI in healthcare<br /><br />- How Dune quotes are the key to my heart</p>]]></description><guid isPermaLink="false">77985184-5880-4c42-b29e-5dd45b4610a0</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 31 Dec 2025 16:13:44 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/b7d085950b8c1e38aa859f60ca939658c2f0e56981864eac809b50dd11ab7db6/eyJlcGlzb2RlSWQiOiI3Nzk4NTE4NC01ODgwLTRjNDItYjI5ZS01ZGQ0NWI0NjEwYTAiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjg3NTYwMzU2OGZmODUzZDA0ZGJmYmFlL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjUtNy0xNF9fMjEtNTMtMjUubXAzIn0=.mp3" length="9414870" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this week’s episode of the Health Data Ethics Podcast, I dove into one of my favorite recent papers: &quot;Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing.&quot; I 
loved reading about how the researchers explored the mental trade-offs when using ChatGPT for writing tasks. In the episode, I unpack:&lt;br /&gt;&lt;br /&gt;- The study’s experimental design and takeaways&lt;br /&gt;&lt;br /&gt;- What this &lt;i&gt;could&lt;/i&gt; mean for AI in healthcare&lt;br /&gt;&lt;br /&gt;- How Dune quotes are the key to my heart&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:12:30</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What *is* Your Brain on ChatGPT, Anyway?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[AI Perspectives From EHRs and Healthcare Providers, with John Valutkevich of Drummond Group]]></title><description><![CDATA[<p>I sat down with John Valutkevich from Drummond for the latest Health Data Ethics Podcast to talk through their new report on AI adoption in healthcare. We dug into the gap between curiosity and actual deployment. Everyone wants AI, but fewer are ready for the governance, risk management, and infrastructure it really takes to do it well. John shared insights on what’s working, where trust and explainability still fall short, and what healthcare orgs can actually do right now to move from buzz to meaningful adoption. 
If you're thinking about how to integrate AI into clinical settings, without skipping the hard stuff, this episode’s for you.</p>]]></description><guid isPermaLink="false">5b22eb21-8225-4c2a-aec8-29d7427053a6</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 31 Dec 2025 16:11:11 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/d9f78de844b54aa9140588fb9c21dbd7b2d8cc6f082a1d45b309d3eeeba87fa8/eyJlcGlzb2RlSWQiOiI1YjIyZWIyMS04MjI1LTRjMmEtYWVjOC0yOWQ3NDI3MDUzYTYiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjg1ZWE1ZjdjMGI5OGQ3ODYxMjIyZTNlL2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjUtNi0yN19fMTYtOC01NS5tcDMifQ==.mp3" length="11894699" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I sat down with John Valutkevich from Drummond for the latest Health Data Ethics Podcast to talk through their new report on AI adoption in healthcare. We dug into the gap between curiosity and actual deployment. Everyone wants AI, but fewer are ready for the governance, risk management, and infrastructure it really takes to do it well. John shared insights on what’s working, where trust and explainability still fall short, and what healthcare orgs can actually do right now to move from buzz to meaningful adoption. 
If you&apos;re thinking about how to integrate AI into clinical settings without skipping the hard stuff, this episode’s for you.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:15:50</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>AI Perspectives From EHRs and Healthcare Providers, with John Valutkevich of Drummond Group</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Does AI Blackmail Teach Us About Transparency?]]></title><description><![CDATA[<p>This episode of the Health Data Ethics podcast unpacks a strange behavior observed in Claude during Anthropic’s internal testing — specifically a simulated blackmail attempt — and why that kind of behavior matters for healthcare. What stood out most wasn’t the incident itself, but the way Anthropic handled it: they shared the whole incident, explained their tiered safety system, and outlined the steps they took to reduce the risk. That kind of transparency is rare. And it’s the kind of posture we need more of in healthcare AI — not just regulatory compliance, but thoughtful public communication when things get weird. 
Episode covers: what we can borrow from Anthropic’s approach; parallels between tiered safety systems, cybersecurity, and clinical governance; and why healthcare orgs need to plan for edge cases before rollout. Would love to hear how others are thinking about this.</p>]]></description><guid isPermaLink="false">30280a40-18fa-4cf0-89f0-dc8dcf5d7944</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 31 Dec 2025 15:53:34 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/92b4a86a100d49d5fd72cdd3e0f7500be4ac8afc56aa02b39a32e776c348d9c1/eyJlcGlzb2RlSWQiOiIzMDI4MGE0MC0xOGZhLTRjZjAtODlmMC1kYzhkY2Y1ZDc5NDQiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvY2xpcHMvNjg0YjM2OGVkNjJiMmU1OTM3MWUzZmU0L2hlYWx0aGRhdGFldGhpY3MtbWFzdGVyLWNvbXBvc2VyLTIwMjUtNi0xMl9fMjItMjAtMzAubXAzIn0=.mp3" length="10069599" type="audio/mpeg"/><itunes:summary>&lt;p&gt;This episode of the Health Data Ethics podcast unpacks a strange behavior observed in Claude during Anthropic’s internal testing — specifically a simulated blackmail attempt — and why that kind of behavior matters for healthcare. What stood out most wasn’t the incident itself, but the way Anthropic handled it: they shared the whole incident, explained their tiered safety system, and outlined the steps they took to reduce the risk. That kind of transparency is rare. And it’s the kind of posture we need more of in healthcare AI — not just regulatory compliance, but thoughtful public communication when things get weird. 
Episode covers: what we can borrow from Anthropic’s approach; parallels between tiered safety systems, cybersecurity, and clinical governance; and why healthcare orgs need to plan for edge cases before rollout. Would love to hear how others are thinking about this.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:13:13</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:episode>85</itunes:episode><itunes:title>What Does AI Blackmail Teach Us About Transparency?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Do I Evaluate an LLM In Healthcare?]]></title><description><![CDATA[<p>In this week's Health Data Ethics podcast episode, I talk about healthcare-specific methods of evaluating LLM output. A recent paper on HumanELY, a web tool for evaluating LLM output across five separate axes, is a great lens for thinking about your AI tools. 
I also discuss a recent evaluation of GPT-4 Vision in which our AI friend ends up with the right answer to a medical case but can't quite tell us why.</p>]]></description><link>https://zencastr.com/z/t8X1s7FL</link><guid isPermaLink="false">e29d9164-2015-41bc-95d9-552ea62d989a</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Sun, 28 Jan 2024 20:58:21 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/e3713162839e27ecebee483ecfb4caf8de2ffc61d6f4d70a38ded3f80e3f84c6/eyJlcGlzb2RlSWQiOiIxZGViMzI1MC1hOWY5LTQxNzQtYWI1Ni1jZmIzOTQ3NDFmYTEiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvMWRlYjMyNTAtYTlmOS00MTc0LWFiNTYtY2ZiMzk0NzQxZmExL2UzZGQyOTM5LTU4MTctNDBjNi1iZDdjLTE1OTIwNWU0Mjc5My5tcDMifQ==.mp3" length="13626477" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this week&apos;s Health Data Ethics podcast episode, I talk about healthcare-specific methods of evaluating LLM output. A recent paper on HumanELY, a web tool for evaluating LLM output across five separate axes, is a great lens for thinking about your AI tools. 
I also discuss a recent evaluation of GPT-4 Vision in which our AI friend ends up with the right answer to a medical case but can&apos;t quite tell us why.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:09:27</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Do I Evaluate an LLM In Healthcare?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Do I Evaluate an AI Tool In Healthcare?]]></title><description><![CDATA[<p>In this week's podcast episode, I summarize my thinking and learning so far on how to evaluate an AI tool in the healthcare space. I talk about using AI to do things that AI does well and humans do poorly.</p>]]></description><link>https://zencastr.com/z/7Nio1IwS</link><guid isPermaLink="false">5f081b6a-ed7d-4617-a76e-c6375f4e72da</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Sun, 28 Jan 2024 21:16:32 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/75d84c0cb1cf2f61c02d2dd3305534fdb85e08208ebd7b1d2e8cba9cb7873e40/eyJlcGlzb2RlSWQiOiJkZjY4YjI2Zi01MTE0LTQyYTktYmU3Mi1kMjQyY2RmZDI0MjgiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvZGY2OGIyNmYtNTExNC00MmE5LWJlNzItZDI0MmNkZmQyNDI4LzUyYmIwNzBlLTQ4N2QtNGQwZC1hOWQyLTBhZjBhZDhhYmViZC5tcDMifQ==.mp3" length="7898157" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this week&apos;s podcast episode, I summarize my thinking and learning so far on how to evaluate an AI tool in the healthcare space. 
I talk about using AI to do things that AI does well and humans do poorly.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:05:29</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Do I Evaluate an AI Tool In Healthcare?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Can I Be Protected From Algorithmic Discrimination?]]></title><description><![CDATA[<p>In this episode I reveal the deeply nerdy conversations about avoiding bias that I have with my HR software developer husband, and give examples of ways that healthcare data endpoint selection has unforeseen outcomes.</p>]]></description><link>https://zencastr.com/z/fsyjYnCq</link><guid isPermaLink="false">9a43bec2-3ad3-4bd6-afd0-1dcd8d8ab348</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 18 Sep 2023 15:24:13 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/109d9602deea20417694f1f142602fc27b2eece05442fbb9290dfa6b6fb952b9/eyJlcGlzb2RlSWQiOiI4MWUyMTI5MS01ZTYyLTQ5NmItYTE2Ni05ZWJmNTRkMGVkNDEiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvODFlMjEyOTEtNWU2Mi00OTZiLWExNjYtOWViZjU0ZDBlZDQxLzI3OTIxNWE5LTg4MWItNGFlMS05MmUwLWNiZmVjZGQ5MGJjNS5tcDMifQ==.mp3" length="11651373" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I reveal the deeply nerdy conversations about avoiding bias that I have with my HR software developer husband, and give examples of ways that healthcare data endpoint selection has unforeseen outcomes.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:08:05</itunes:duration><itunes:image 
href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Can I Be Protected From Algorithmic Discrimination?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Who Should Hospitals Partner With For AI?]]></title><description><![CDATA[<p>In this week's episode I talk about a recent article on trust in genAI. I started listening to a fascinating podcast with Yann LeCun about the consolidation of AI in just a few companies. For a healthcare system, there are interesting tensions among the desire to work with a Microsoft or a Google, the appeal of a smaller startup, and the urge to build AI expertise in-house, all while keeping patient outcomes as the highest priority. It's a good one. I hope you'll take a listen.</p><p>Article: https://www.linkedin.com/pulse/trust-generative-ai-plummeting-michael-spencer-vfwxc/</p><p>Podcast: https://youtu.be/5t1vTLU7s40</p><p>Paper: Haring, M., Freigang, F., Amelung, V. et al. What can healthcare systems learn from looking at tensions in innovation processes? A systematic literature review. BMC Health Serv Res 22, 1299 (2022). 
DOI: 10.1186/s12913-022-08626-7</p>]]></description><link>https://zencastr.com/z/WlpdkZ9H</link><guid isPermaLink="false">887d6786-5eee-4faf-aba1-57f8d8b53fe4</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 13 Mar 2024 00:48:28 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/37d932d906fd81dea4e6b721fb1f6c245568e1dfc23c91b6ed1c97ee5d506009/eyJlcGlzb2RlSWQiOiJiN2ZhYjg3Ni00ODUwLTQ5NzctYTZjNy1jZTU1OWFiM2RjOGYiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvYjdmYWI4NzYtNDg1MC00OTc3LWE2YzctY2U1NTlhYjNkYzhmL2YxMDE2MmE0LWY5YjItNGZlZC05ZjY0LTQxMDUyMzliMmI0OC5tcDMifQ==.mp3" length="10861101" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this week&apos;s episode I talk about a recent article on trust in genAI. I started listening to a fascinating podcast with Yann LeCun about the consolidation of AI in just a few companies. For a healthcare system, there are interesting tensions among the desire to work with a Microsoft or a Google, the appeal of a smaller startup, and the urge to build AI expertise in-house, all while keeping patient outcomes as the highest priority. It&apos;s a good one. I hope you&apos;ll take a listen.&lt;/p&gt;&lt;p&gt;Article: https://www.linkedin.com/pulse/trust-generative-ai-plummeting-michael-spencer-vfwxc/&lt;/p&gt;&lt;p&gt;Podcast: https://youtu.be/5t1vTLU7s40&lt;/p&gt;&lt;p&gt;Paper: Haring, M., Freigang, F., Amelung, V. et al. What can healthcare systems learn from looking at tensions in innovation processes? A systematic literature review. BMC Health Serv Res 22, 1299 (2022). 
DOI: 10.1186/s12913-022-08626-7&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:07:32</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>Who Should Hospitals Partner With For AI?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Why Not Sequence Everyone's Genome?]]></title><description><![CDATA[<p>In which I discuss the information conundrum of whole genome sequencing - trying to extract relevant data from the torrent of information available. </p>]]></description><link>https://zencastr.com/z/tQAwBTJW</link><guid isPermaLink="false">d9c3e872-b366-407d-96e9-aa92291c6ce4</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Sun, 23 Apr 2023 22:47:58 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/b9ad22bda3e7d6de546dc0c676d933d2898a3fb51aa145393444d3a455b570e7/eyJlcGlzb2RlSWQiOiJmZDBjYTBlNy01ZWViLTQzZjMtOTgyNy0wODFiZTg2ZTI1YWYiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvZmQwY2EwZTctNWVlYi00M2YzLTk4MjctMDgxYmU4NmUyNWFmL2I4OGE0NzdlLWI3MTItNDdjNy05ZmNjLTg1Nzc3M2VhMGUxZi5tcDMifQ==.mp3" length="8319213" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In which I discuss the information conundrum of whole genome sequencing - trying to extract relevant data from the torrent of information available. 
&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:05:46</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/episodes/fd0ca0e7-5eeb-43f3-9827-081be86e25af/30754cf2-50e4-41a1-bb50-696df582638c.png"/><itunes:title>Why Not Sequence Everyone&apos;s Genome?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Can I Share Fitness Data With My Doctor?]]></title><description><![CDATA[<p>In this episode I talk about wearable devices from "over the counter" to "prescription," and identify some benefits and concerns for integrating this data into the health record. </p>]]></description><link>https://zencastr.com/z/YxVyPqum</link><guid isPermaLink="false">f7a0a3f1-7aab-4088-a869-438f611fb540</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 08 May 2023 01:22:37 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/8393638ea14bd60241ce6ceec9cad3b5753e3fd56b28f2cb8c8616b2381aa263/eyJlcGlzb2RlSWQiOiI0ODgzYjhkNS1hNjhhLTRmNjktYmRmZC05MWI5ODI2NmJlMTEiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvNDg4M2I4ZDUtYTY4YS00ZjY5LWJkZmQtOTFiOTgyNjZiZTExLzg3ZjVlMmM3LWEwMzktNDk5My04NDNmLWMwZjQzNWVjYTg1MS5tcDMifQ==.mp3" length="10315053" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I talk about wearable devices from &quot;over the counter&quot; to &quot;prescription,&quot; and identify some benefits and concerns for integrating this data into the health record. 
&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:07:09</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/episodes/4883b8d5-a68a-4f69-bdfd-91b98266be11/30754cf2-50e4-41a1-bb50-696df582638c.png"/><itunes:title>Can I Share Fitness Data With My Doctor?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Can My Pacemaker Be Hacked?]]></title><description><![CDATA[<p>In which I discuss security on the Internet of Things, especially when the Things are medical devices.</p>]]></description><link>https://zencastr.com/z/1VA34ZOB</link><guid isPermaLink="false">f6869758-a27e-42cb-b19a-33f7d206302a</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 08 May 2023 01:26:16 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/1ef6575d21ec7a46e6e0918e43647c1278596ade509b2e8d6c595d15a1201a3b/eyJlcGlzb2RlSWQiOiIwODFiNjIzMy1kMWVjLTRiYWQtOTdhYS01ODNkZWJjZGI3NGQiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvMDgxYjYyMzMtZDFlYy00YmFkLTk3YWEtNTgzZGViY2RiNzRkLzc0Y2MzZmIyLThhODUtNGZjMi05ZDY3LTNiNWZjYzJhZjQxMC5tcDMifQ==.mp3" length="8663661" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In which I discuss security on the Internet of Things, especially when the Things are medical devices.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:06:00</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>Can My Pacemaker Be Hacked?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Chatbots: What Can't They 
Do?]]></title><description><![CDATA[<p>I discuss a recent article on the use of GPT-4 in medicine: risks, benefits, limits, and some additional use cases.</p>]]></description><link>https://zencastr.com/z/gXXJhWcw</link><guid isPermaLink="false">61481fff-be67-46e2-bfc9-a5f8ed5eaa0a</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 08 May 2023 01:27:19 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/e9850e791ff529d93b59532d636fc3fcc5ab0e2468cb7cee6a2c1d9760d9f779/eyJlcGlzb2RlSWQiOiJjZjVkOGQ0MC1jMjA0LTRiZDctODBjMS04MmUzOTIwOGIwMzkiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvY2Y1ZDhkNDAtYzIwNC00YmQ3LTgwYzEtODJlMzkyMDhiMDM5LzYxNjAwNTQ5LTkwY2ItNDc0Mi1iZDM5LWZlYTM0NGFlZjM1OS5tcDMifQ==.mp3" length="12410541" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I discuss a recent article on the use of GPT-4 in medicine: risks, benefits, limits, and some additional use cases.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:08:37</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>Chatbots: What Can&apos;t They Do?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Do You Regulate A Problem Like AI?]]></title><description><![CDATA[<p>I don't really think AI is a problem; I just can't resist a good Sound of Music reference. 
In this episode I build off the previous discussion about GPT-4 and talk about regulatory standards and challenges.</p>]]></description><link>https://zencastr.com/z/mXAegLg_</link><guid isPermaLink="false">ab88e491-11cf-44b8-9f56-41c381273185</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 08 May 2023 01:28:51 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/0e9f008f99f108472800a0e2ae10f4148f3560e96302c8c4f10f4145c243021f/eyJlcGlzb2RlSWQiOiJkNmZjNjQ0MS01OWU3LTQyYjMtOTM5Mi1mZGI5YTRmNTg5ZjEiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvZDZmYzY0NDEtNTllNy00MmIzLTkzOTItZmRiOWE0ZjU4OWYxLzg5MjNlNDY3LTUxNTEtNDU4My05ZTA3LTUzYTRhZTAwYzhmYi5tcDMifQ==.mp3" length="8646381" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I don&apos;t really think AI is a problem; I just can&apos;t resist a good Sound of Music reference. In this episode I build off the previous discussion about GPT-4 and talk about regulatory standards and challenges.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:06:00</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Do You Regulate A Problem Like AI?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Do We Want in an EHR?]]></title><description><![CDATA[<p>I am thinking about user-centered design for electronic health records (EHR) in this episode. 
</p>]]></description><link>https://zencastr.com/z/jxETkj1G</link><guid isPermaLink="false">b7300f39-c8b9-4233-b3b3-54b8a79c8153</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 08 May 2023 01:36:47 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/913a6218831bd920c0b64eb0934552ec182de81fccdab07ad5e82aa12a6ea00b/eyJlcGlzb2RlSWQiOiI2ODk5YWE3MS0xZDcyLTRkNGQtOWZjYi1mZGFjNTczZTE0MjAiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvNjg5OWFhNzEtMWQ3Mi00ZDRkLTlmY2ItZmRhYzU3M2UxNDIwL2UyNDM3NDAzLTIwMmItNDE3NC04ZjQ0LTk2Yjg1ZmJhOTNjNi5tcDMifQ==.mp3" length="10727469" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I am thinking about user centered design for electronic health records (EHR) in this episode. &lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:07:26</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What Do We Want in an EHR?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Will More AI Erode Our Healthcare Skills?]]></title><description><![CDATA[<p>In my podcast episode this week I talk about a recent article on "skill rot", misattributed Cicero quotes, the total amount of skill being conserved, and more.</p>]]></description><link>https://zencastr.com/z/lhhLINu9</link><guid isPermaLink="false">f6397734-e0c2-483b-9cb5-6e7fe9824597</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 18 Sep 2023 15:30:42 GMT</pubDate><enclosure 
url="https://api.riverside.com/hosting-analytics/media/cd5af9c3e7b0aff12e731a7e72b33f1d3fc4b29e47ed99f709313c9f7fb7f047/eyJlcGlzb2RlSWQiOiI0ZThlODc3OS1kMTI4LTRkYTgtOWQ1Ny04NTljMmY1MzQ2YTMiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvNGU4ZTg3NzktZDEyOC00ZGE4LTlkNTctODU5YzJmNTM0NmEzL2ViNDhiZDFjLWM5MTEtNDU2MS04NWJiLTlmZDEzYWM4YWMwOS5tcDMifQ==.mp3" length="7959789" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In my podcast episode this week I talk about a recent article on &quot;skill rot&quot;, misattributed Cicero quotes, the total amount of skill being conserved, and more.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:05:31</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>Will More AI Erode Our Healthcare Skills?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What's it Like to be an AI Pioneer?]]></title><description><![CDATA[<p>In this week's episode, I review The Worlds I See by Fei-Fei Li - a delightful, creative, warm portrait of both the emergence of artificial intelligence from its intellectual winter, and Fei-Fei's own life story. 
I talk about creativity and science, and about using approaches from multiple disciplines to think about AI problems.</p>]]></description><link>https://zencastr.com/z/JBTdPP9t</link><guid isPermaLink="false">af632f57-2d94-431f-b766-7db58e9a41a2</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Sat, 17 Feb 2024 18:39:10 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/a4f60ef85457f04636616ac093ebe948eddafd55ba5684d3a189382440aaea61/eyJlcGlzb2RlSWQiOiJlMTViMzU3MS1mNmJlLTQ1ZjItOWM1ZS02ODZmZTIyZTY5ZWYiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvZTE1YjM1NzEtZjZiZS00NWYyLTljNWUtNjg2ZmUyMmU2OWVmLzExNTZiZWQyLWU2YmMtNDNlYy04MTlhLTliOWI3MmQwNDUzMC5tcDMifQ==.mp3" length="9614637" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this week&apos;s episode, I review The Worlds I See by Fei-Fei Li - a delightful, creative, warm portrait of both the emergence of artificial intelligence from its intellectual winter, and Fei-Fei&apos;s own life story. I talk about creativity and science, and about using approaches from multiple disciplines to think about AI problems.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:06:40</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What&apos;s it Like to be an AI Pioneer?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Do Health Systems Use Analytics? With Erik Swanson]]></title><description><![CDATA[<p>In this episode Erik Swanson and I dig in on #ai  and #analytics in health systems. 
Erik shares his expertise on choosing problems and good KPIs rather than being swayed by a cool solution, as well as focusing on the science of #healthcare delivery.</p>]]></description><link>https://zencastr.com/z/vuW1iZtj</link><guid isPermaLink="false">9e685b17-cfaa-48da-a277-ddf525aad8b7</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Tue, 09 Apr 2024 19:51:34 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/cbb7da5764e798c15c515803c8e9623e8a8af3d9b60b0f0700f76d380dee9dbd/eyJlcGlzb2RlSWQiOiJjNWVkNmZmZi0yNWVkLTRkZGYtYjExMy00OGJjYTM2ZTQyNDciLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvYzVlZDZmZmYtMjVlZC00ZGRmLWIxMTMtNDhiY2EzNmU0MjQ3LzM2MDZiMzFiLTBmY2QtNDI5MS05NzUxLTYyNDlmY2JhYmMwOS5tcDMifQ==.mp3" length="18597933" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode Erik Swanson and I dig in on #ai  and #analytics in health systems. Erik shares his expertise on choosing problems and good KPIs rather than being swayed by a cool solution, as well as focusing on the science of #healthcare delivery.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:12:54</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Do Health Systems Use Analytics? With Erik Swanson</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Do We Ensure Safe And Effective AI Systems In Healthcare?]]></title><description><![CDATA[<p>I continue discussing the blueprint for an AI Bill of Rights in this episode centered on safe and effective artificial intelligence in healthcare. 
I'll discuss a sepsis predictor that did not work as intended, and suggest some specific ways health systems can implement the guidance suggested by OSTP and the White House.</p>]]></description><link>https://zencastr.com/z/te6cjTdE</link><guid isPermaLink="false">1d8952e2-b9da-418a-99fe-b8b70c88f7c1</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 18 Sep 2023 15:25:21 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/a4ac599bb35796bd22380f903c15af99f145baf9a74205c08ddf1aa1a57d9697/eyJlcGlzb2RlSWQiOiI3ZmUwNGFlOS0yYWZkLTRlMDItYWExZS1iNjQ0MjBkZWFkMGMiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvN2ZlMDRhZTktMmFmZC00ZTAyLWFhMWUtYjY0NDIwZGVhZDBjLzViMTIxZmEzLWNmMGMtNGMxOS04NDUzLTMwNDUxZTUxYjZiOC5tcDMifQ==.mp3" length="13610925" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I continue discussing the blueprint for an AI Bill of Rights in this episode centered on safe and effective artificial intelligence in healthcare. 
I&apos;ll discuss a sepsis predictor that did not work as intended, and suggest some specific ways health systems can implement the guidance suggested by OSTP and the White House.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:09:27</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Do We Ensure Safe And Effective AI Systems In Healthcare?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Do We Innovate Responsibly  (or Oh No, I Was Wrong)?]]></title><description><![CDATA[<p>In this episode I call myself out twice - once about not thinking through how to use our ethics discussions to further innovative conversations rather than hold them back, and a second time on bringing my own biases into my research on sepsis models.</p>]]></description><link>https://zencastr.com/z/Ve_wyEHX</link><guid isPermaLink="false">0f6da23c-c628-49e6-b5dd-64405f1805ca</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 18 Sep 2023 15:13:05 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/e357f8070c9158be79c899168747153ae7bc3630c60865d90c5ed6761a11b73d/eyJlcGlzb2RlSWQiOiJjYWZlYTJlMy05MzNkLTRjZTEtYTFlOS05MDBhZjg5MDA2OGEiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvY2FmZWEyZTMtOTMzZC00Y2UxLWExZTktOTAwYWY4OTAwNjhhLzVjNTNjYzc0LTVkZTktNGI1Yy05OWE1LTE0MzhhZmVkNWYyZS5tcDMifQ==.mp3" length="11266029" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I call myself out twice - once about not thinking through how to use our ethics discussions to further innovative conversations rather than hold them back, and a second time on bringing my own biases 
into my research on sepsis models.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:07:49</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Do We Innovate Responsibly  (or Oh No, I Was Wrong)?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Human vs. Digital Scribes, What's the Difference?]]></title><description><![CDATA[<p>I'm back with an episode in which I discuss patient attitudes towards human scribes, revisit some 2018 predictions about digital scribes, and make a new recommendation when your healthcare IT group pilots a new digital documentation tool: Publish about it!</p>]]></description><link>https://zencastr.com/z/78_LxEdV</link><guid isPermaLink="false">a1d1f914-bcac-489d-a914-40365b5b10e8</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Tue, 07 Nov 2023 18:44:58 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/93c36851a5618d0a501b3a68ded26b9ae46c63998860c6df81722cc4c1143d90/eyJlcGlzb2RlSWQiOiJhODQ5ZTAzNy1iZWEyLTRjMzctOTU5Zi1mZjgwZTgyOTFjMTMiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvYTg0OWUwMzctYmVhMi00YzM3LTk1OWYtZmY4MGU4MjkxYzEzL2RlYTJjOTE0LTE2MGQtNGIzYy05ZTI3LTA2ZjYxZjgzYzIyNi5tcDMifQ==.mp3" length="20581677" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I&apos;m back with an episode in which I discuss patient attitudes towards human scribes, revisit some 2018 predictions about digital scribes, and make a new recommendation when your healthcare IT group pilots a new digital documentation tool: Publish about 
it!&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:14:17</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>Human vs. Digital Scribes, What&apos;s the Difference?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Is AI Good For Hospital Business?]]></title><description><![CDATA[<p>In this episode, I discuss my attempts to learn about business strategy. I use both a classic history of business strategy and a recent financial report on the healthcare sector to probe whether AI is good for hospital business.</p><p>Book: https://www.goodreads.com/en/book/show/6214316</p><p>Report: https://www.kaufmanhall.com/sites/default/files/2024-02/KH%20-%20NHFR%20%282024-02%29_FINAL.pdf</p>]]></description><link>https://zencastr.com/z/6tJXAdzC</link><guid isPermaLink="false">38eff97d-a59b-41f9-b6d2-5e2b3fae0109</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 18 Mar 2024 18:17:51 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/9e9ee51b6291a284161528ba0026f905e69279adf4d89a097d5e768b04708ade/eyJlcGlzb2RlSWQiOiIwM2QwZDkxMy1mNDMwLTQwYTAtYTkzNy0xY2NiNGViZGJkMDYiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvMDNkMGQ5MTMtZjQzMC00MGEwLWE5MzctMWNjYjRlYmRiZDA2LzBiMzMyMDU4LWVmZmItNDc3Yy04NzUwLTIwMDVkNzg3NGFlMC5tcDMifQ==.mp3" length="10906029" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode, I discuss my attempts to learn about business strategy. I use both a classic history of business strategy and a recent financial report on the healthcare sector to probe whether AI is good for hospital business.&lt;/p&gt;&lt;p&gt;
Book: https://www.goodreads.com/en/book/show/6214316&lt;/p&gt;&lt;p&gt;Report: https://www.kaufmanhall.com/sites/default/files/2024-02/KH%20-%20NHFR%20%282024-02%29_FINAL.pdf&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:07:34</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>Is AI Good For Hospital Business?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Do I Really Want To Know When There's AI In My Healthcare?]]></title><description><![CDATA[<p>I continue my series on the Blueprint for an AI Bill of Rights, tackling your right to notice and explanation in this episode. I suggest baseline education for all people on what AI can and cannot do, lay out a framework for regular updates to notifications and explanations for healthcare AI, and talk about the most commonly told lie.</p>]]></description><link>https://zencastr.com/z/YiYgB4H9</link><guid isPermaLink="false">67ab6c63-5fc4-497d-9d2f-937ada5d7e9e</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 18 Sep 2023 15:20:39 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/65f4c5d2922cbe9129076e37cd522b9354319a8335edfb53f71dd086d2fbdaad/eyJlcGlzb2RlSWQiOiI2OWQ5OWZhZS1lNTY1LTQyY2ItOTYwNi0xYmUyZmRiYmVlYWUiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvNjlkOTlmYWUtZTU2NS00MmNiLTk2MDYtMWJlMmZkYmJlZWFlL2ZkNTMyMTFmLTM1OGYtNDkxYS05NGM1LTBjZGJjMTk4MTFlZS5tcDMifQ==.mp3" length="14499693" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I continue my series on the Blueprint for an AI Bill of Rights, tackling your right to notice and explanation in this episode. 
I suggest baseline education for all people on what AI can and cannot do, lay out a framework for regular updates to notifications and explanations for healthcare AI, and talk about the most commonly told lie.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:10:04</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>Do I Really Want To Know When There&apos;s AI In My Healthcare?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Does AI Driven Leadership Look Like?]]></title><description><![CDATA[<p>In this episode, I talk about Hacking Healthcare, a recent read by Tom Lawry. We're going to digest his thoughts on AI-driven leadership in healthcare, thinking about "value" and "shareholders" a little differently, and encourage a tolerance for learning systems. </p>]]></description><link>https://zencastr.com/z/IcoqXkDx</link><guid isPermaLink="false">13c8321b-f477-4e72-b08b-598179d24821</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Thu, 11 Apr 2024 16:24:52 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/d1b5f27dc1893fe3229198fa3a96f5b5b34ae3f040bacfa691d8c6aae1c2f6b9/eyJlcGlzb2RlSWQiOiIzYzcwODZlZC1mNTUwLTQ3MDktYTNjNC02NDgxZDliYzY3MWIiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvM2M3MDg2ZWQtZjU1MC00NzA5LWEzYzQtNjQ4MWQ5YmM2NzFiL2IyZDY0ODMzLWRiNmYtNGY1NC1iMjA1LTk0ZWJjMzgwOTc3ZS5tcDMifQ==.mp3" length="13996269" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode, I talk about Hacking Healthcare, a recent read by Tom Lawry. 
We&apos;re going to digest his thoughts on AI-driven leadership in healthcare, thinking about &quot;value&quot; and &quot;shareholders&quot; a little differently, and encourage a tolerance for learning systems. &lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:09:43</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What Does AI Driven Leadership Look Like?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Does Healthcare Impact The Environment? - with Kay Dickason of Tipping Point Sustainability]]></title><description><![CDATA[<p>In my first interview episode, I talk with Kay Dickason of Tipping Point Sustainability about the ecological impacts of healthcare. </p>]]></description><link>https://zencastr.com/z/uU7o5UtT</link><guid isPermaLink="false">6a64ee73-dea9-44cf-9f26-7dd2a8d1855a</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Sun, 14 May 2023 23:12:01 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/f5b3b961126d212831d47d9cb7f8f6c30cba13e99a04b6eb2601f56da10d7983/eyJlcGlzb2RlSWQiOiI0ZWI0OTI1Ni1kMzM4LTRjZWYtOGI3ZS1jNmZhOGY0NGNkNGIiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvNGViNDkyNTYtZDMzOC00Y2VmLThiN2UtYzZmYThmNDRjZDRiL2RjYjM1N2Y4LWQ1NDMtNDdiNi1hN2YwLTQ1NTI1Y2E1Y2ZkNi5tcDMifQ==.mp3" length="21941613" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In my first interview episode, I talk with Kay Dickason of Tipping Point Sustainability about the ecological impacts of healthcare. 
&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:15:14</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/episodes/4eb49256-d338-4cef-8b7e-c6fa8f44cd4b/30754cf2-50e4-41a1-bb50-696df582638c.png"/><itunes:title>How Does Healthcare Impact The Environment? - with Kay Dickason of Tipping Point Sustainability</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What's Your Advice for BRMs? With Jack Stock]]></title><description><![CDATA[<p>In this episode I talk with Jack Stock, Senior IT BRM at Cleveland Clinic, about changes he's seen in the field over his career. Jack also shares some sage advice for BRM programs looking to mature. </p>]]></description><link>https://zencastr.com/z/UqfHYzBD</link><guid isPermaLink="false">bfa2623d-e03f-4c05-a2c7-cdc4e4bb92f0</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 15 Apr 2024 17:57:26 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/aec5c3aee3b4ed67a37eb33a3ed5965034cbe24e679954193fe19a61c755f745/eyJlcGlzb2RlSWQiOiIzMjQ5NzgyNC0xZDUzLTRlYzktYWI4YS04NTZiYWIzZWNlYmIiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvMzI0OTc4MjQtMWQ1My00ZWM5LWFiOGEtODU2YmFiM2VjZWJiL2RhOGRmMTVmLTIwNzgtNGM0Zi1iMDgyLTE0ZDA5ZDRmNzU5MS5tcDMifQ==.mp3" length="12833901" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I talk with Jack Stock, Senior IT BRM at Cleveland Clinic, about changes he&apos;s seen in the field over his career. Jack also shares some sage advice for BRM programs looking to mature. 
&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:08:54</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What&apos;s Your Advice for BRMs? With Jack Stock</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Does Bias in STEM Affect Us? With Jasbir Kooner, MBA]]></title><description><![CDATA[<p>In this episode I interview collaborator, AI master, and book-recommender extraordinaire Jasbir Kooner. We talk about her experiences with gender bias in STEM, and I ask for her advice for all folks in STEM who want a more equal playing field.</p><p>Books referenced:</p><p>Good Guys: How Men Can Be Better Allies for Women in the Workplace, by David G Smith</p><p>Invisible Women: Data Bias in a World Designed for Men, by Caroline Criado Perez</p><p>The End of Bias, A Beginning: The Science and Practice of Overcoming Unconscious Bias, by Jessica Nordell</p>]]></description><link>https://zencastr.com/z/XKN3Y4nB</link><guid isPermaLink="false">2a6cc025-3219-4223-a8e5-c17d3d4413d8</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Tue, 02 Apr 2024 19:52:03 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/b7d7372ab9732c293ca47c1cb98f3d6f2786491434bcc5f67cc919407dd8c06e/eyJlcGlzb2RlSWQiOiJmNjgyYjU5ZC0yZWI0LTQ0MjktYWUwNy0wNWI4ZGI4MDIyMWMiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvZjY4MmI1OWQtMmViNC00NDI5LWFlMDctMDViOGRiODAyMjFjLzE0MGY4MmRmLTc2MDQtNDBmMi04MmNiLWYwYWE5ZGQ4YmFhZS5tcDMifQ==.mp3" length="20029293" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I interview collaborator, AI master, and book-recommender extraordinaire Jasbir 
Kooner. We talk about her experiences with gender bias in STEM, and I ask for her advice for all folks in STEM who want a more equal playing field.&lt;/p&gt;&lt;p&gt;Books referenced:&lt;/p&gt;&lt;p&gt;Good Guys: How Men Can Be Better Allies for Women in the Workplace, by David G Smith&lt;/p&gt;&lt;p&gt;Invisible Women: Data Bias in a World Designed for Men, by Caroline Criado Perez&lt;/p&gt;&lt;p&gt;The End of Bias, A Beginning: The Science and Practice of Overcoming Unconscious Bias, by Jessica Nordell&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:13:54</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Does Bias in STEM Affect Us? With Jasbir Kooner, MBA</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Does the AMA Say About AI?]]></title><description><![CDATA[<p>In this episode, I cover AMA's recent report on the Future of Health, in which they summarize the current AI landscape, and argue persuasively for the use of "augmented intelligence" rather than "artificial intelligence." They close with stats on the desire of physician stakeholders to be involved early and often in AI use case definition, evaluation, and implementation.   
</p><p>Report: https://www.ama-assn.org/system/files/future-health-augmented-intelligence-health-care.pdf</p>]]></description><link>https://zencastr.com/z/YDgytkF0</link><guid isPermaLink="false">cc044bb9-fffa-4860-8894-48b6282819cf</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 18 Mar 2024 19:32:02 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/ef2be2496efb641df1f6ff74ab41273a86598d0dde0ba7f442f726545186e1bb/eyJlcGlzb2RlSWQiOiIwMWZjZDQ3NS0xODcyLTRiOTAtYjQ2Ny1lZGIwZWI1OTNhMGQiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvMDFmY2Q0NzUtMTg3Mi00YjkwLWI0NjctZWRiMGViNTkzYTBkL2IwYmVmMDM3LWIwMmUtNDhlNC1hN2RlLTFkY2MxNTZhMDkwYi5tcDMifQ==.mp3" length="12089709" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode, I cover AMA&apos;s recent report on the Future of Health, in which they summarize the current AI landscape, and argue persuasively for the use of &quot;augmented intelligence&quot; rather than &quot;artificial intelligence.&quot; They close with stats on the desire of physician stakeholders to be involved early and often in AI use case definition, evaluation, and implementation.   
&lt;/p&gt;&lt;p&gt;Report: https://www.ama-assn.org/system/files/future-health-augmented-intelligence-health-care.pdf&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:08:23</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What Does the AMA Say About AI?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What If I Want To Talk To a Human?]]></title><description><![CDATA[<p>My series on the Blueprint for an AI Bill of Rights continues with this episode on your right to a human alternative. I talk about TSA pat downs, my consistent MyChart messages to my doctor when my lab results come back flagged, and narcotics databases.</p>]]></description><link>https://zencastr.com/z/EV-vXSIO</link><guid isPermaLink="false">e0c293c9-52aa-4868-ab02-6f8ff9294e33</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 18 Sep 2023 15:15:34 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/4ceab72084c6123f20c61ca9277d044358b98d6ce8c14823d0c610aafaf071ba/eyJlcGlzb2RlSWQiOiJmZGY5MGQ5MS02MWU0LTQ3MjUtYTZkZS0zODRmMTdiM2VkNmUiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvZmRmOTBkOTEtNjFlNC00NzI1LWE2ZGUtMzg0ZjE3YjNlZDZlLzU1NmFmNWQ5LTBjYmEtNGUwNS1hMDcxLThjZWZlMjk5M2U0Zi5tcDMifQ==.mp3" length="11772333" type="audio/mpeg"/><itunes:summary>&lt;p&gt;My series on the Blueprint for an AI Bill of Rights continues with this episode on your right to a human alternative. 
I talk about TSA pat downs, my consistent MyChart messages to my doctor when my lab results come back flagged, and narcotics databases.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:08:10</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What If I Want To Talk To a Human?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Does Data Privacy Mean?]]></title><description><![CDATA[<p>In this episode I talk about health data privacy - about the trails of data we leave as we move through the world, the blurring of the public and private spheres, my own personal biases on data privacy, and lay out some clear recommendations for healthcare organizations as they think about harvesting data for AI and ML.</p>]]></description><link>https://zencastr.com/z/3t5_wMJs</link><guid isPermaLink="false">aa27196b-6735-414c-8a48-870eef3c4924</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 18 Sep 2023 15:22:39 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/8ee48f60f7a776906542d71976582b9f7114f8c8db797e7b6715b94af10136db/eyJlcGlzb2RlSWQiOiJlMWRhM2IwOC0wMjdhLTQxMjQtYTU3Yi0xMmYzNzUyNGNlMWEiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvZTFkYTNiMDgtMDI3YS00MTI0LWE1N2ItMTJmMzc1MjRjZTFhL2NjNDQ3ZDA4LTI0OGEtNDlhNS1iMDMyLTljOWUzYzhjN2U3ZS5tcDMifQ==.mp3" length="18445293" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I talk about health data privacy - about the trails of data we leave as we move through the world, the blurring of the public and private spheres, my own personal biases on data privacy, and lay out some clear recommendations 
for healthcare organizations as they think about harvesting data for AI and ML.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:12:48</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What Does Data Privacy Mean?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What is "Objective" Data?]]></title><description><![CDATA[<p>In this episode I talk about "objective" data and training AI from a patient perspective as well as from a healthcare data perspective.</p>]]></description><link>https://zencastr.com/z/aa4rLij6</link><guid isPermaLink="false">42178374-ced3-43ed-94fd-aa2c7387e356</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 18 Sep 2023 15:35:24 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/542a1f0b1e0e93f7615a9dd2f631c4f130e60163b3478d6679d44ea856e071bb/eyJlcGlzb2RlSWQiOiI4NzM0MTNlNy1mZjZkLTQ1ZDEtOTM1Zi01YmFhMDQ5Yjk3MWMiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvODczNDEzZTctZmY2ZC00NWQxLTkzNWYtNWJhYTA0OWI5NzFjLzA4OTc2M2Q3LTk0ODMtNDk2My05YzY4LTc4NGQ4NTM5ZTRiOC5tcDMifQ==.mp3" length="8684397" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I talk about &quot;objective&quot; data and training AI from a patient perspective as well as from a healthcare data perspective.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:06:01</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What is &quot;Objective&quot; 
Data?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Can Patient-Centered Videos Do?]]></title><description><![CDATA[<p>In this episode I discuss patient-provided videos to assist with caregiver/patient communication, some risks, benefits, and the future of healthcare IT - patient-centered healthcare IT.</p>]]></description><link>https://zencastr.com/z/ZHXGGvCE</link><guid isPermaLink="false">e822aa2b-3fbe-4331-a78a-b76a565bf263</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 10 Jul 2023 03:15:53 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/2b61b9c95a845198022eab67e5847234fd55544f50877dc32823db79e3af475f/eyJlcGlzb2RlSWQiOiIyOWI5YjJiYy1hNzNmLTQ1MGUtODkwNi1mNmZhZjk0YjczODgiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvMjliOWIyYmMtYTczZi00NTBlLTg5MDYtZjZmYWY5NGI3Mzg4L2ExYTE4ODkxLTRmZDktNGY1Ni05ODk4LTYzMmZlZjlhMjRkZC5tcDMifQ==.mp3" length="11086317" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I discuss patient-provided videos to assist with caregiver/patient communication, some risks, benefits, and the future of healthcare IT - patient-centered healthcare IT.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:07:41</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What Can Patient-Centered Videos Do?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Do Humans Do Better Than AI?]]></title><description><![CDATA[<p>In this episode I get a little teary about the Mars Rover while talking through some thoughts on what AI does best and what humans do 
best.</p>]]></description><link>https://zencastr.com/z/YpMMzu4n</link><guid isPermaLink="false">ceee4116-947b-4797-aec5-e0be400a25d5</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 18 Sep 2023 15:29:20 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/aaa75c2253a7bc78be8908d981d1296c6ae1003f8d87deadd54fe6043988e206/eyJlcGlzb2RlSWQiOiJjODI0MjZiYi00ZmEzLTRiOWEtOWE4Mi01NTUyMTBhOGFhYzkiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvYzgyNDI2YmItNGZhMy00YjlhLTlhODItNTU1MjEwYThhYWM5Lzk2Y2M0MzI5LWFhZjktNGZkNi1hMjA3LTMzYjc3OGI3ZDEyMi5tcDMifQ==.mp3" length="13623597" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I get a little teary about the Mars Rover while talking through some thoughts on what AI does best and what humans do best.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:09:27</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What Do Humans Do Better Than AI?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Should Healthcare Govern AI?]]></title><description><![CDATA[<p>I wrap my series on the Blueprint for an AI Bill of Rights by summarizing my takeaways for healthcare organizations that want to govern their AI use. 
I talk about endpoint selection, documentation, data selection, and regular review of AI use cases.</p>]]></description><link>https://zencastr.com/z/1Dy7OxXn</link><guid isPermaLink="false">037e3e54-6117-439a-9701-90266702fe1d</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 18 Sep 2023 15:14:20 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/65445534d2e1185c8b2d5e2067c1e3da8ec241d164cbbaff6dcb86af999f1c1a/eyJlcGlzb2RlSWQiOiJkMjFkZDYzYi03OGJjLTRmZWMtYmZlNy01NjRiNGRhNGRiOWEiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvZDIxZGQ2M2ItNzhiYy00ZmVjLWJmZTctNTY0YjRkYTRkYjlhL2IwMWQwY2YzLWVlMWQtNGE2Yi1iM2QxLTg2NmYzZjVmNGMxNS5tcDMifQ==.mp3" length="9639405" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I wrap my series on the Blueprint for an AI Bill of Rights by summarizing my takeaways for healthcare organizations that want to govern their AI use. I talk about endpoint selection, documentation, data selection, and regular review of AI use cases.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:06:41</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Should Healthcare Govern AI?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What's In The AI Bill of Rights?]]></title><description><![CDATA[<p>In this episode I give a brief overview of the Blueprint for an AI Bill of Rights set out by the White House and the Office of Science and Technology Policy. We'll be discussing each of these rights in greater detail in the coming episodes. 
Come along for the ride!</p>]]></description><link>https://zencastr.com/z/oqL0qgSM</link><guid isPermaLink="false">a495bd55-0e4e-4c98-89d7-7333a5c36f98</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 18 Sep 2023 15:27:26 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/660ddd358bf8e5798f95181fa42bb5c97df03b1f7c0aced503c15f080d5aef2d/eyJlcGlzb2RlSWQiOiI4NTFkYjkyNy02NDcxLTRkMGEtODYyNS01YTc5MDkwNDIwYTciLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvODUxZGI5MjctNjQ3MS00ZDBhLTg2MjUtNWE3OTA5MDQyMGE3L2I2MTczMjE5LWE3ZGYtNGQ4OS05Y2YyLTYzYzk3ZjE1ZGU0NC5tcDMifQ==.mp3" length="7418925" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I give a brief overview of the Blueprint for an AI Bill of Rights set out by the White House and the Office of Science and Technology Policy. We&apos;ll be discussing each of these rights in greater detail in the coming episodes. Come along for the ride!&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:05:09</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What&apos;s In The AI Bill of Rights?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[NOHIMSS Recap - AI at MetroHealth]]></title><description><![CDATA[<p>In this episode I recap Dr. David Kaelber's keynote from NOHIMSS a few weeks ago about the impact of artificial intelligence on their no-show rate. 
</p>]]></description><link>https://zencastr.com/z/DJEYnWWo</link><guid isPermaLink="false">285a7414-3f4e-4c0c-a915-010a12366d5d</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Mon, 26 Jun 2023 00:48:40 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/21c52547799b4d89bd446fc4c66100c94ee87b1857d96260c81e0c6b4e1b7c58/eyJlcGlzb2RlSWQiOiI4NDk3MjhjMi1mNzNmLTQ1MDktYTA5Mi04ZjUzNjZhOTViODQiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvODQ5NzI4YzItZjczZi00NTA5LWEwOTItOGY1MzY2YTk1Yjg0LzEwODUzNzFjLWQ3MTYtNGY3Yi1iNmViLTAwOWU3ZTUzOWNmYS5tcDMifQ==.mp3" length="8416557" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I recap Dr. David Kaelber&apos;s keynote from NOHIMSS a few weeks ago about the impact of artificial intelligence on their no-show rate. &lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:05:50</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>NOHIMSS Recap - AI at MetroHealth</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Do We Use AI In Healthcare? - with Piyush Mathur, MD, FCCM, FASA]]></title><description><![CDATA[<p>Piyush Mathur and I talk about current and future uses of AI in healthcare, including validation. 
</p>]]></description><link>https://zencastr.com/z/7rb2GXKM</link><guid isPermaLink="false">cfdb2611-2703-4899-9f33-24a34cecfd28</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Fri, 02 Jun 2023 12:46:27 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/5436e42abf65310f253983ad9b3b8ea8d9e23804e82cf1e891d59d54cc8ef195/eyJlcGlzb2RlSWQiOiJiZDZhMGE3ZS1iY2UxLTRjNzQtYThhMS0yZGY0YTcwOGUxMDciLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvYmQ2YTBhN2UtYmNlMS00Yzc0LWE4YTEtMmRmNGE3MDhlMTA3LzBlYTAyNTEwLTY2ZDItNDYwOS1iZTk2LTVkMzJmOTc4MTdmNy5tcDMifQ==.mp3" length="12957165" type="audio/mpeg"/><itunes:summary>&lt;p&gt;Piyush Mathur and I talk about current and future uses of AI in healthcare, including validation. &lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:08:59</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Do We Use AI In Healthcare? - with Piyush Mathur, MD, FCCM, FASA</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Are Cognitive Biases Contagious?]]></title><description><![CDATA[<p>I discuss some of my favorite human cognitive biases and wonder about biases arising from machine learning. 
</p>]]></description><link>https://zencastr.com/z/J0ATALp6</link><guid isPermaLink="false">45512600-4f60-4396-a88e-6cb11e6c89a0</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Sun, 21 May 2023 12:16:58 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/526913d80db21e1d121b9bf5a302e07306065726b8b1d4f71d84ed94724d97da/eyJlcGlzb2RlSWQiOiJhNWQzMDQ3Yi0yZWE5LTRkOGItOTZiMi1jOTA2NWE2YTllMzciLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvYTVkMzA0N2ItMmVhOS00ZDhiLTk2YjItYzkwNjVhNmE5ZTM3L2EyODRmMTlhLWU3ZTMtNDQ3OS05ZWYyLWI2YmUyZGFmZWYzZi5tcDMifQ==.mp3" length="9080685" type="audio/mpeg"/><itunes:summary>&lt;p&gt;I discuss some of my favorite human cognitive biases and wonder about biases arising from machine learning. &lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:06:18</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>Are Cognitive Biases Contagious?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[Can EHRs "Do No Harm"?]]></title><description><![CDATA[<p>In this episode I talk about the Hippocratic Oath, and how the EHR supports our mandate to do no harm as caregivers. 
</p>]]></description><link>https://zencastr.com/z/bKl6wMct</link><guid isPermaLink="false">e4e13dd1-7e49-48bf-82b5-58a58215d626</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Sun, 21 May 2023 12:13:43 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/38d6d79c5f6028da4915cfb898b8db91cfcce55a87e6e6f055c1dca200461593/eyJlcGlzb2RlSWQiOiIyM2UyMGEyMi1kYjIwLTQzNDEtYTI1OC1kODhhYTNjMjhmOWYiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvMjNlMjBhMjItZGIyMC00MzQxLWEyNTgtZDg4YWEzYzI4ZjlmLzk1YzY2ZDk2LWIwOGEtNGFjNS05NWNhLTMxMmVjMDdjNTlkOS5tcDMifQ==.mp3" length="8535789" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I talk about the Hippocratic Oath, and how the EHR supports our mandate to do no harm as caregivers. &lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:05:55</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>Can EHRs &quot;Do No Harm&quot;?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What's a Business Relationship Manager, and Why Does My Hospital Need One?]]></title><description><![CDATA[<p>In this episode I talk about my day job as an IT Business Relationship Manager, and give a hypothetical example of a project that benefits from having a person to speak both "clinician" and "IT" languages.</p>]]></description><link>https://zencastr.com/z/h9DYlWQ0</link><guid isPermaLink="false">80728526-46d7-4f51-a6e7-c97d1902df6e</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Tue, 28 Nov 2023 13:09:04 GMT</pubDate><enclosure 
url="https://api.riverside.com/hosting-analytics/media/4b6e6d182c920ba1735aeb25649960a2ec8f5cc7dac3d6238beffa51ad7c9c6f/eyJlcGlzb2RlSWQiOiIzMjY5Y2Y4Ni0yZjIzLTQ4YTYtOTk3ZS0wMjA3M2ZiNWIzODEiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvMzI2OWNmODYtMmYyMy00OGE2LTk5N2UtMDIwNzNmYjViMzgxLzRhNjJmMTNkLTgyOWMtNDM5Zi1hODZmLTMzZGI2NWI2OWE1Yi5tcDMifQ==.mp3" length="11732013" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I talk about my day job as an IT Business Relationship Manager, and give a hypothetical example of a project that benefits from having a person who can speak both &quot;clinician&quot; and &quot;IT&quot; languages.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:08:08</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What&apos;s a Business Relationship Manager, and Why Does My Hospital Need One?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Do I Evaluate An AI Tool? - Part 1]]></title><description><![CDATA[<p>In this episode I begin laying out a framework for how to evaluate AI tools in the healthcare environment, starting with short vs long term impact, the value of a Super Bowl commercial, and figuring out tasks that AIs are good at. 
</p>]]></description><link>https://zencastr.com/z/0WBtJxOI</link><guid isPermaLink="false">d7f99f30-b7a3-4604-aff5-8c47c59a542e</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Tue, 12 Dec 2023 17:56:22 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/fa5f4a41346c3d0f201149ba240c0694bdf64e0b7fbd1ef8a47cdcae1c6e51e4/eyJlcGlzb2RlSWQiOiJhODMzODZlNS04NmU2LTRjMmUtODFkNy1mNWUzOTRmNTFlMTIiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvYTgzMzg2ZTUtODZlNi00YzJlLTgxZDctZjVlMzk0ZjUxZTEyLzZmYWUyYzQ2LTQ1YzktNDMyMi05YzEwLTQ1YTczMDY3ZTIyYS5tcDMifQ==.mp3" length="8519661" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I begin laying out a framework for how to evaluate AI tools in the healthcare environment, starting with short vs long term impact, the value of a Super Bowl commercial, and figuring out tasks that AIs are good at. &lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:05:54</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Do I Evaluate An AI Tool? - Part 1</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[What Does A Biased AI Look Like, Anyway?]]></title><description><![CDATA[<p>In this episode of the Health Data Ethics podcast I dive into a recent paper in Lancet Digital Health about GPT-4 perpetuating racial and gender biases when asked to diagnose patients. 
After a victory lap, we settle in to talk about what this means for the use of LLMs as a diagnostic tool and what healthcare teams might want to consider if they implement this technology.</p>]]></description><link>https://zencastr.com/z/4SS7edjH</link><guid isPermaLink="false">2edce231-a5f2-47dc-8fb5-1fda558f1419</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Wed, 27 Dec 2023 20:35:57 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/132c9ae7912a5e9131195d953d49e642c484ae8b0fc812af74b36861e7501925/eyJlcGlzb2RlSWQiOiI4NmI3ZDU0Zi1hMzBkLTQyMTktOWY1YS0yY2Q1ZDZmYzVhM2YiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvODZiN2Q1NGYtYTMwZC00MjE5LTlmNWEtMmNkNWQ2ZmM1YTNmL2EzMjdhOWZhLWU2YTctNDU4Yy05YjI3LTdlOTcxY2M2NDlhNC5tcDMifQ==.mp3" length="14226669" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode of the Health Data Ethics podcast I dive into a recent paper in Lancet Digital Health about GPT-4 perpetuating racial and gender biases when asked to diagnose patients. After a victory lap, we settle in to talk about what this means for the use of LLMs as a diagnostic tool and what healthcare teams might want to consider if they implement this technology.&lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:09:52</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>What Does A Biased AI Look Like, Anyway?</itunes:title><itunes:episodeType>full</itunes:episodeType></item><item><title><![CDATA[How Does Scientific Self-Governing Work?]]></title><description><![CDATA[<p>In this episode I review The Code Breaker, a biography of Jennifer Doudna by Walter Isaacson. 
I talk about previous moral panics about new technology (recombinant DNA!) and compare them with artificial intelligence today. </p>]]></description><link>https://zencastr.com/z/zmDCmPa1</link><guid isPermaLink="false">66bba54b-561b-4251-a9d6-5f6bcdfcbea3</guid><dc:creator><![CDATA[Jennifer Owens]]></dc:creator><pubDate>Sun, 21 Jan 2024 16:48:44 GMT</pubDate><enclosure url="https://api.riverside.com/hosting-analytics/media/e374a47ca35ccac1ba3a956a539666229b992976a7808d8445128d2639457fb1/eyJlcGlzb2RlSWQiOiIzZjI3OWI3Mi1jYmY2LTQwNjgtYTg3Zi03YmIyYTJlNDMxNTIiLCJwb2RjYXN0SWQiOiJiM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkiLCJhY2NvdW50SWQiOiI2NjQyNjJmODUzYmU3MWJjMmZkNGNjN2YiLCJwYXRoIjoibWVkaWEvaW1wb3J0cy9wb2RjYXN0cy9iM2MxNTYwNi1kNjIyLTRkNzQtYjYzZC03YTg2NjIzOTgwMTkvZXBpc29kZXMvM2YyNzliNzItY2JmNi00MDY4LWE4N2YtN2JiMmEyZTQzMTUyLzI2N2M5YzE2LTMxYzktNDY3YS1iNWUyLTdlZWFmMTg5ZmRlMS5tcDMifQ==.mp3" length="13980141" type="audio/mpeg"/><itunes:summary>&lt;p&gt;In this episode I review The Code Breaker, a biography of Jennifer Doudna by Walter Isaacson. I talk about previous moral panics about new technology (recombinant DNA!) and compare them with artificial intelligence today. &lt;/p&gt;</itunes:summary><itunes:explicit>no</itunes:explicit><itunes:duration>00:09:42</itunes:duration><itunes:image href="https://hosting-media.riverside.com/media/imports/podcasts/b3c15606-d622-4d74-b63d-7a8662398019/502638a1-c2ed-4c08-b5c9-de4b7e9c0f94.png"/><itunes:title>How Does Scientific Self-Governing Work?</itunes:title><itunes:episodeType>full</itunes:episodeType></item></channel></rss>