What We Get Wrong About People (and AI)

This month brought some really interesting conversations about how we shape behavior, protect creative work, and deal with the increasingly weird intersection of technology and daily life. From marketing psychology to AI in image licensing to GM's truly confusing decision to ditch CarPlay, I’ve been collecting thoughts and moments worth sharing.

Let’s jump in.

Behavioral Science with Nancy Harhut

Face of a Fan Boy

For months I'd been plotting how to get Nancy to share her ideas with the marketing team I work with. Victory!

I got to meet someone I've admired from the instant I heard her speak. I first saw Nancy Harhut at SXSW 2025, and this month we were lucky to host her for a session with our Palo Alto Networks marketing teams.

Nancy is a master of using behavioral science in marketing. Her talk, “Three Dangerous Mistakes Marketers Make with Today's Consumers,” was packed with insights. Most marketers lean way too hard on logic and features, but our audiences are making decisions based on emotion, social proof, and mental shortcuts. Nancy breaks it all down in a way that's not just smart but super practical.

She also has a great book if you want to dig deeper: Using Behavioral Science in Marketing. If you work in marketing, sales, or anywhere persuasion matters, her work is worth your time.

More Behavioral Science: The Nudge Podcast

I’ve been collecting snippets from the Nudge podcast for a while now, and October was a goldmine. This show is great if you're into behavioral economics, choice architecture, and designing systems that work with how people actually behave.

Several episodes this month stuck with me, and the show as a whole is full of small, actionable ideas.

My Thoughts on AI: Speaking at DMLA

Speaking on “How to Keep Image Workflows and Systems Safe in the Age of AI”

I had the chance to speak at the Digital Media Licensing Association conference about how to keep image workflows safe as AI tools become more common in creative pipelines. It was a thoughtful, sometimes tough conversation about copyright, attribution, and what’s coming next.

Here’s what I shared:

Image-centric systems are a massive target for attackers. Three things make them vulnerable:

  • A ton of valuable IP (which can be stolen, ransomed, or used to train someone else’s model)

  • A strong reputation to protect (which can be hit by deepfakes, poisoned uploads, or fake leaks)

  • And infrastructure like GPUs and cloud storage, which attackers love to hijack

There are really two blind spots that I hope every industry adopting AI is accounting for sooner rather than later.

Blind spot #1: Once an AI system learns something, it's hard to make it “forget” that data. That's why we need an AI firewall, a system that catches and rewrites oversharing answers… I'll let you guess which one I think you should use.
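To make the idea concrete, here's a minimal sketch of the output-filtering half of that concept. Everything in it, the `firewall` function, the regex patterns, the sample strings, is made up for illustration; a real AI firewall inspects prompts and responses with trained classifiers and policy engines, not three regexes.

```python
import re

# Hypothetical patterns, purely for illustration.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def firewall(model_answer: str) -> str:
    """Catch and rewrite an oversharing answer before the user sees it."""
    cleaned = model_answer
    for label, pattern in SENSITIVE_PATTERNS.items():
        cleaned = pattern.sub(f"[REDACTED {label.upper()}]", cleaned)
    return cleaned

print(firewall("The archive admin is jane@agency.example and her key is sk-Abc123Def456Ghi789."))
# -> The archive admin is [REDACTED EMAIL] and her key is [REDACTED API KEY].
```

The point isn't the regexes; it's the placement. The rewrite happens between the model and the user, so even an answer the model “shouldn't” have given never leaves the building intact.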

Blind spot #2: The AI supply chain is mostly untracked. Most teams don’t know where their models came from, what data was used, or if anything has been tampered with. That’s a huge risk.
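One low-tech way to start closing that gap: record a fingerprint of every model and dataset artifact when you bring it in-house, then recheck before deployment. Here's a minimal sketch, assuming a simple JSON manifest; the manifest format and the function names are invented for this example, not any standard.

```python
import hashlib
import json
from pathlib import Path

# Manifest format (invented for this sketch):
# {"artifacts": [{"path": "models/captioner.onnx", "sha256": "ab12..."}]}

def sha256_of(path: Path) -> str:
    """Fingerprint a model or dataset file so tampering is detectable."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path: Path) -> bool:
    """Recheck every artifact against the hash recorded at ingest time."""
    manifest = json.loads(manifest_path.read_text())
    clean = True
    for entry in manifest["artifacts"]:
        if sha256_of(Path(entry["path"])) != entry["sha256"]:
            print(f"Possible tampering: {entry['path']}")
            clean = False
    return clean
```

It won't tell you whether the original training data was trustworthy, but it does give you a paper trail: you know what you deployed and whether it's still the thing you vetted.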

And the biggest problem: we trust these systems too much. We treat them like they're magic and forget that bad actors know how to manipulate them.

So what can you do? Start with three things:

  1. Data governance: Know what's sensitive, and control who gets to see it (there's a quick sketch after this list)

  2. AI use policies: Make it clear what’s allowed, and who’s responsible

  3. Vet your vendors: If they touch your data, they need to meet your standards
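For the first item, the core mechanic fits in a few lines: tag every asset once, give every role a clearance, and enforce the comparison everywhere. The tags, roles, and filenames below are hypothetical; this is a sketch of the pattern, not a product.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    RESTRICTED = 2

# Hypothetical catalog entries and roles, for illustration only.
ASSET_TAGS = {
    "hero_shot_final.tif": Sensitivity.PUBLIC,
    "unreleased_campaign.psd": Sensitivity.RESTRICTED,
}
ROLE_CLEARANCE = {
    "contractor": Sensitivity.PUBLIC,
    "art_director": Sensitivity.RESTRICTED,
}

def can_view(role: str, asset: str) -> bool:
    """Unknown assets default to RESTRICTED: fail closed, not open."""
    clearance = ROLE_CLEARANCE.get(role, Sensitivity.PUBLIC)
    return clearance >= ASSET_TAGS.get(asset, Sensitivity.RESTRICTED)

assert can_view("art_director", "unreleased_campaign.psd")
assert not can_view("contractor", "unreleased_campaign.psd")
```

The design choice that matters is the default: anything untagged is treated as restricted, so a forgotten label becomes an inconvenience instead of a leak.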

Big thanks to Sarah Lefebvre, Auturo Pérez Amores, and William Liani for making that panel a smart, honest conversation.

A final thought…

GM Ditches CarPlay (Why?)

I’ve always enjoyed The Macalope’s takes, and this one nailed it. GM has decided to phase out CarPlay in favor of its own infotainment system. This is... baffling.

In a world where seamless tech integration is expected, GM is betting that customers will give up a system they already like for one nobody asked for. It’s a move that screams “control over usability” and feels like the kind of decision that backfires hard.

Sometimes the best behavioral science lessons come from watching what not to do.

Wrapping Up

This month reminded me that whether we’re building campaigns, protecting images, or designing car dashboards, the same rule applies: respect the human on the other side. Pay attention to what they want. Don’t force them into systems that serve you more than them.
