Highlights
The use of AI to detect misinformation in the Metaverse
20 December 2021
Andrea Emilio Rizzoli, Director of the Dalle Molle Institute for Artificial Intelligence USI-SUPSI, was recently interviewed by Bloomberg to discuss the limits of the Metaverse and the use of AI to detect offensive and misleading content.

The Metaverse is a nascent technology conceived as an immersive digital world in which to establish in-person-like interactions.

Given the technology's infancy, as Bloomberg reports, industry observers are warning that the nightmarish content moderation challenges that already plague social media could be even worse in these new worlds powered by virtual and augmented reality.
Offensive or misleading content is indeed easy to find in the Metaverse, and it is still difficult to control and prevent its spread.

«The degree to which the metaverse remains a safe space will depend partially on how companies train their AI systems to moderate the platforms» said Andrea Emilio Rizzoli, Director of the Dalle Molle Institute for Artificial Intelligence USI-SUPSI. «AI can be trained to detect and take down hate speech and misinformation, and systems can also inadvertently amplify it».

To learn more about the Metaverse and its open challenges, read the full article: Misinformation Has Already Made Its Way to the Metaverse. Virtual worlds will be even harder to police than social media, by Jillian Deutsch, Naomi Nix and Sarah Kopit.

Contacts
st.wwwsupsi@supsi.ch