Several countries are introducing or considering new restrictions on social media use among minors, alongside increasing demands for transparency and stronger user protection. At the same time, major platforms such as Meta, Google, TikTok and Snapchat are facing mounting scrutiny over concerns about addiction and their impact on mental health.
This signals the emergence of a new era, in which not only user behaviour is evolving, but also the broader dynamics shaping both online and offline environments. The relationship between users, platforms and institutions is being redefined. In this context, Artificial Intelligence plays a pivotal role: it fuels the spread of both information and disinformation, while also enabling new methods to detect and counter the latter, raising important questions around responsibility and regulation.
These issues are discussed with Filippo Menczer, Luddy Distinguished Professor and Director of the Observatory on Social Media at Indiana University, and Silvia Giordano, Full Professor at the Information Systems and Networking Institute (ISIN) of the Department of Innovative Technologies. The topic is further explored during the event AI & Manipulation: Good vs Evil, taking place on 21 April 2026 at the Department of Innovative Technologies of SUPSI.
What does “online manipulation” mean today, and why has it become such an urgent issue?
Manipulation has always existed, but online social interactions make it easier to carry out and harder to detect. On social media, it is often unclear whether you are interacting with a bot or a human, or whether the information you are reading is true or false. This makes us particularly vulnerable to attacks targeting our finances, our health, or our democratic institutions. Manipulation aims to influence what we think, feel and do in digital environments, and consequently in our everyday lives. In other words, it is not only information that is being manipulated, but the entire context in which we interpret it.
How is generative Artificial Intelligence changing the production and dissemination of content?
Our vulnerability to online manipulation is further amplified by the Artificial Intelligence tools available today. In particular, generative AI and large language models make it cheap and easy to launch fast, large-scale attacks. For example, they enable the creation of false yet credible content such as news, images or videos, as well as networks of fake users designed to amplify certain narratives or suppress others.
This can create the illusion that a particular opinion, individual or conspiracy theory is widely shared and supported, when in fact it is not. The human brain tends to perceive narratives as more credible when they appear to have broad popular support.
What are the most effective techniques for detecting coordinated operations and manipulated content?
AI-generated content, such as deepfake videos and images of public figures, is becoming increasingly difficult to identify, both for users and for detection algorithms. Although digital watermarking technologies exist to trace the origin of content and prevent tampering, platforms have been slow to adopt them at scale.
By contrast, coordinated campaigns can be identified more reliably through statistical analysis of user behaviour. While it is normal for multiple users to share popular content, repeated and highly synchronized actions, such as posting identical material from obscure sources at the same time or following identical interaction patterns, may indicate coordinated activity. When the probability of such patterns occurring by chance is extremely low, they are likely the result of organized efforts rather than independent users.
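To make the idea concrete, the following minimal Python sketch illustrates this kind of synchronization analysis. It is a toy illustration, not the Observatory's actual method: it counts how often pairs of accounts share the same URL within a short time window and flags pairs whose near-lockstep behaviour recurs. The sample data, window size and threshold are all hypothetical.

```python
from collections import defaultdict
from itertools import combinations

# Toy posts: (user, url, unix_timestamp). In practice these would come
# from a platform API or a research dataset.
posts = [
    ("u1", "http://obscure.example/a", 1000),
    ("u2", "http://obscure.example/a", 1003),
    ("u3", "http://obscure.example/a", 1005),
    ("u1", "http://obscure.example/b", 2000),
    ("u2", "http://obscure.example/b", 2002),
    ("u4", "http://popular.example/x", 2500),
]

WINDOW = 10      # seconds: shares closer than this count as "synchronized"
MIN_EVENTS = 2   # assumed threshold for flagging a pair of accounts

# Group shares by URL.
by_url = defaultdict(list)
for user, url, ts in posts:
    by_url[url].append((user, ts))

# Count, for each pair of users, how often they shared the same URL
# within WINDOW seconds of each other.
sync_counts = defaultdict(int)
for url, shares in by_url.items():
    for (u1, t1), (u2, t2) in combinations(shares, 2):
        if u1 != u2 and abs(t1 - t2) <= WINDOW:
            sync_counts[tuple(sorted((u1, u2)))] += 1

# Pairs that repeatedly act in near-lockstep are candidates for
# coordinated behaviour. A real analysis would compare these counts
# against a null model of independent sharing to estimate how unlikely
# the observed synchronization is by chance.
suspicious = {pair: n for pair, n in sync_counts.items() if n >= MIN_EVENTS}
print(suspicious)  # {('u1', 'u2'): 2}
```

The essential design choice is the one described above: occasional co-sharing of popular content is expected, so only repeated, tightly synchronized actions, weighed against what chance would predict, are treated as evidence of coordination.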
How effective are AI-based solutions in countering manipulation, and what are their limitations?
Artificial Intelligence is increasingly used to detect and mitigate manipulation, but its effectiveness comes with inherent limitations. The same technologies that enable detection can also be used to generate misleading content or impersonate individuals at scale. Moreover, even applications designed for positive purposes raise concerns. Unsupervised or prolonged interaction with chatbots may, in some cases, contribute to dependency, reinforce extreme views, or encourage harmful behaviour.
At the same time, language models can support media literacy by helping users identify false or misleading information, but they can also be exploited to promote disinformation. A further structural limitation lies in the reliability of these systems. AI models can produce inaccurate or misleading outputs without signalling uncertainty, which may lead users to place undue trust in their responses.
Looking ahead, which skills will be essential to navigate an increasingly complex digital ecosystem?
“Literacy” is the key concept: digital literacy, information literacy, and literacy in the use of AI. Developing a form of “measured scepticism” will be essential to reduce vulnerability without losing trust in reliable sources. It will be important to verify original sources, to question appearances, and to resist the pull of engagement metrics. Understanding how recommendation algorithms and generative models work, at least at a basic level, will be necessary to interpret what we see on social media and to critically assess the relationship between what a chatbot tells us and reality.
Finally, there is a need to become more aware of the impact of our own online actions, such as sharing, liking and commenting, which contribute to shaping the digital ecosystem. Education will play a crucial role not only in learning how to use and develop these technologies, but also in managing them responsibly.