Ali Al Housseini
The sound of the future: exploring the Musical Metaverse
SUPSI Image Focus
Ali Al Housseini is a PhD candidate at the Institute of Information Systems and Networking (ISIN) within the Department of Innovative Technologies, working in the area of Trustworthy and Secure Information Networks and Society. His research focuses on the Musical Metaverse, exploring how networks and Artificial Intelligence can enable immersive, fluid, and synchronized musical experiences, where meaning and interaction become central elements.
Where do you come from, what did you study, and what are you currently working on?
I come from Lebanon, and my academic path later brought me to Italy and then to Switzerland. Moving between different countries and academic environments has shaped the way I see research: not only as a technical activity, but also as a way of connecting ideas, cultures, and applications.
Today, I am a PhD student at the Institute of Information Systems and Networking at SUPSI. My research is part of the MUSMET project, which explores the Musical Metaverse.
Music is one of the most human forms of communication. It is made of sound, but what we experience is much more than sound: rhythm, emotion, timing, memory, movement, and connection with others. This makes music a fascinating case for future digital environments, because a musical experience is not only something we hear; it is something we feel.
In simple terms, I work on how future networks can support immersive musical experiences powered by artificial intelligence. Imagine a virtual concert where musicians, audiences, AI tools, audio, video, and interaction all have to work together in real time. My research studies how to manage and orchestrate these AI-native services so that the experience feels fluid, synchronized, and alive.
This is also where semantic communication becomes important. Traditional networks mainly ask: "How can we transmit data correctly and efficiently?" In the Musical Metaverse, the question becomes deeper: "What information truly matters for the experience?" A tiny delay in rhythm may be more damaging than a small visual imperfection; a missing interaction may matter more than a perfectly transmitted background detail. My research sits at this intersection: orchestration tells us how to coordinate complex AI services, while semantic communication helps us decide what must be preserved to keep the musical experience meaningful.
What or who inspired you to pursue a career in research? And why this particular field?
I have always been attracted by systems that are invisible but essential. Communication networks are a perfect example. Most people do not think about them when everything works well. But the moment there is delay, disconnection, or poor quality, the invisible system suddenly becomes very visible.
What inspired me to pursue research is the possibility of working on questions that are still open. In networking, we are moving beyond the classical goal of simply sending more data faster. Future digital experiences will require networks that are more intelligent, more adaptive, and more aware of the applications they support.
Music makes this problem especially interesting. In a normal video call, a small delay may be annoying. In a musical performance, it can destroy the experience. Music is unforgiving because timing, synchronization, and interaction are part of its meaning.
This is why I find the Musical Metaverse such a powerful research context. It is not just about building virtual concerts. It asks a larger scientific question: can future networks understand what is important for a human experience, instead of treating all data as equally important?
That question naturally connects to semantic communication. Instead of focusing only on bits, packets, and throughput, semantic communication asks what information carries meaning for the task, the user, or the experience. In music, this question becomes both technically difficult and intellectually beautiful.
What has been the biggest challenge in your research journey so far?
The biggest challenge is turning a visionary idea into something scientifically precise.
Words like “metaverse,” “AI-native,” “immersive experience,” and “semantic communication” can sound attractive, but research cannot stop at attractive words. We need models, metrics, algorithms, simulations, and experiments. We need to ask: what exactly should be optimized? Latency? Resource consumption? Synchronization? Quality of experience? Semantic relevance? The answer is rarely one single metric. It is usually a difficult balance.
In the Musical Metaverse, this challenge becomes sharper because the technical system and the human experience are deeply connected. A network problem may become a musical problem. A delay is not just a delay; it may break rhythm. A wrong service placement is not just inefficient; it may reduce interaction. A loss of information is not just a packet-level issue; it may change what the user perceives.
The hardest part is therefore defining what "meaningful communication" means in such a context. What should the system protect first? What can be adapted? What can be compressed or dropped without damaging the experience? These are not trivial questions, because music is not only data: it is structure, timing, expression, and shared presence.
For me, this is the most interesting challenge: connecting low-level network decisions with high-level human experience.
What is it like to work at SUPSI and within the Department of Innovative Technologies?
Working at SUPSI and within the Department of Innovative Technologies means working in an environment where research is close to real applications. This is very important for my topic, because the Musical Metaverse cannot be studied only as a theoretical idea. It requires engineering, experimentation, interdisciplinary thinking, and a strong connection with future use cases.
At ISIN, I work in a context where communication networks, software systems, artificial intelligence, and applied research naturally come together. For a PhD student, this is valuable because it pushes you to think beyond the paper. A model should not only be elegant; it should help explain, simulate, or improve a real system.
I also appreciate that the topic itself is interdisciplinary. The Musical Metaverse sits between networking, AI, immersive systems, human experience, and creative applications. This makes the research challenging, but also rich. You cannot solve the problem by looking from one angle only.
SUPSI’s applied research environment is well suited for this type of work because it encourages both scientific rigor and practical relevance.