We are proud to announce that Dr. Kevin M. Blasiak has been awarded a highly competitive research grant under the 1000 Ideas Programme of the Austrian Science Fund (FWF). He will lead this pioneering work at CTSi//circle.responsibleComputing.

The FWF’s 1000 Ideas Programme supports bold, high-risk, high-reward basic research that aims to question existing paradigms and open up radically new lines of inquiry. Projects funded under this scheme are selected for their originality, transformative potential, and willingness to explore uncharted scientific territory.

Dr. Blasiak’s project, “Designing AI to Ethically Influence Human Cognition: Inoculation Against Harmful Digital Persuasion”, tackles one of the most pressing challenges in today’s information landscape: how to resist manipulation in an era of generative AI, propaganda, and immersive digital environments.

While most current interventions focus on detecting and removing harmful content, this project asks a different question: Can AI be used not just to filter information, but to help people think critically and resist manipulation?

Inspired by psychological inoculation theory, the research reimagines AI as a kind of “mental vaccine”—an intelligent system that interacts with users through ethically guided dialogue and helps build cognitive resilience. This bold vision moves beyond the idea of AI as a neutral assistant and instead sees it as an active agent for positive social influence.

The project combines behavioral science, human-computer interaction, and AI safety research. It explores not only how such systems can be built, but also how ethical values such as transparency, empathy, and user autonomy can be embedded directly into their design.

The chatbot developed in the project will be tested in controlled experiments to evaluate whether ethically persuasive AI can help users resist disinformation, hate speech, and extremist narratives—especially in immersive or emotionally charged digital environments.

By shifting the conversation from reactive content moderation to proactive cognitive empowerment, this project lays the groundwork for a new class of digital interventions. It also contributes to public debates about the limits and responsibilities of persuasive AI in democratic societies.

Project phases include:
– Empirical research through interviews with experts in violence prevention, psychology, and platform governance
– Conceptual development of ethical trade-offs and design principles using design thinking
– Technical implementation and experimental testing of an AI chatbot trained on counterspeech and prebunking content

For more details, see the official FWF project listing:
FWF Research Radar – Project TAI1208725