Episode 260
September 4, 2025

Why AI Progress Can Feel So Uncomfortable

We’ve celebrated the promise of AI: the speed, the scale, the potential. But with every new advance comes a growing unease. In this episode, we explore the cognitive dissonance consumers and healthcare leaders are feeling as AI tools outpace ethics, regulation, and even our own understanding. Deepfakes, patient privacy concerns, and the emotional toll of synthetic content are all adding to the discomfort and raising urgent questions. We also dive into bold brand reactions to Taylor Swift’s engagement and a heartfelt farewell to co-host Desirée Duncan.

Balancing AI’s Promise and Perils

We’ve been bullish on the promise of AI, and for good reason. The scale, the speed, and the potential to transform healthcare marketing are real. But if history tells us anything, it’s that every revolutionary advance carries unintended consequences. And the AI revolution may be the most disruptive one yet.

From Printing Press to Deepfakes

Throughout history — from the Renaissance to the Information Age — progress has always come with a shadow side. The printing press brought literacy to the masses, but also sparked social upheaval and censorship. The internet unlocked global connection, but opened the floodgates to privacy erosion and misinformation. AI is on a similar path, promising exponential gains while revealing cracks as it scales. And those cracks are already showing.

Mental health professionals are now naming a new phenomenon: AI-induced psychosis. Unnatural emotional bonds with AI companions are emerging. Cognitive decline is being linked to over-reliance on generative tools. And lawsuits are being filed by families who say AI use contributed to tragic outcomes. While OpenAI claims fewer than 1% of users have unhealthy relationships with ChatGPT, 1% of 700 million users is still 7 million people.

AI and the Environment: The Hidden Cost

AI doesn’t only live in the cloud. It lives in data centers: massive, energy-intensive hubs that rely on water and fossil fuels to operate. And as more states offer tax incentives and regulatory flexibility, those data centers often end up in lower-income regions in the southern U.S., where communities are more vulnerable to environmental harm. BPD’s VP of Health Equity and Inclusion, Desirée Duncan, draws the parallel clearly: “Follow the money. The most important color to these corporations is green, and not the environmental kind.” Some of these decisions, she notes, are repeating a historic pattern of exploitation and inequity, this time wrapped in the banner of innovation.

The Risk to Brands (and Humanity)

Perhaps more pressing is the societal risk. As AI accelerates, so do issues of truth decay, civil unrest, and cultural division. Can healthcare organizations afford to wait until the next crisis to act? Our answer is no. Now is the time to scenario plan. To educate. To build responsible, cross-functional AI strategies. To challenge the assumption that progress is always good, or at least always good for everyone. It starts with asking better questions. Embracing AI doesn’t mean ignoring its dark side. It means staying informed, staying grounded, and acting with intention. As Stephanie put it, “The point isn’t to stop progress. It’s to be smarter this time.”

Read our latest blog, The Einstein Divide, where we explore the two paths marketing leaders are taking with AI: iteration versus imagination, and why only one will unlock real transformation. And join us at the 2026 Joe Public Retreat: The AI Dream, the premier gathering for health system marketers looking to build what has never been done before.

Read the transcript here

Previous Episodes

Episode 259

Can CMOs Lead the Future of AI?

Episode 258

How Financial Pressures Are Reshaping the CMO Role

Episode 257

Let’s Talk Tech with Andy Chang

Stay in Touch

Stay in the loop on the latest from BPD: our newsletter, The No Normal Rewind, brings you highlights from The No Normal Show plus strategic insights to help CMOs prepare for the future of health system marketing.