CTS//circle.responsibleComputing is hosting the last brown bag talk of the semester, "Digital Laziness: Why the Real AI Risk Is Human Surrender", with Dr. Patrizia Ecker, a researcher, author, and founder of the AI Literacy Alliance.

Please join us for this event online.

About the speaker

Dr. Patrizia Ecker is a researcher, author, and founder of the AI Literacy Alliance. With a PhD in psychology and computer science, she specializes in the intersection of cognitive bias, digital media, and human-centered technology. Beyond her academic work, she advises corporations and leading consultancies on digital transformation and responsible AI adoption. Her recent book, The Digital Reinforcement of Bias and Belief, explores how online environments shape our thinking and decision-making. Through the AI Literacy Alliance, she also empowers young people to think critically and thrive in an AI-driven world.

Seminar details

  • Digital Laziness: Why the Real AI Risk Is Human Surrender
  • Dr. Patrizia Ecker
  • 26 June 2025, 12:00–13:00
  • Location: Zoom


Abstract:
We talk about responsible technology. But what if the real problem isn’t the technology itself — but how comfortably we let it think for us?

In this talk, Dr. Patrizia Ecker explores the subtle but growing threat of digital laziness: a condition in which we outsource not just tasks but thinking, judgment, and even curiosity to algorithms. Drawing from her book The Digital Reinforcement of Bias and Belief, she shows how AI systems silently shape our convictions — and how this undermines not just individual autonomy, but democratic resilience.

Combining academic frameworks from her research with practical lessons from the AI Literacy Alliance, Dr. Ecker outlines how young people (and adults) are losing the ability to think critically, formulate complex opinions, and engage with doubt — skills essential for any responsible society.

This session asks five uncomfortable questions about our cognitive habits and explores how we can design educational and technological systems that protect what makes us human. Because the real danger isn’t machine intelligence. It’s human surrender.

Key questions:
  • When does AI help us think — and when does it stop us from thinking?
  • What do we risk losing when we prioritize convenience over cognition?


Organizer

CTS//circle.responsibleComputing