NWO grants for AI research on privacy-preserving cancer studies and virtual harassment

Two projects by Utrecht University researchers are receiving a grant from the National Growth Fund program AiNed. This funding will propel promising, innovative, and bold initiatives in the field of artificial intelligence, addressing pressing needs in healthcare and virtual reality.
Julian Frommel, Assistant Professor in the Interaction/Multimedia group of the Faculty of Science, will conduct research on harassment in virtual reality environments. His colleague Wilson dos Santos Silva, newly appointed at the same faculty, will focus on improving AI models that can learn from patient data held at different hospitals without breaching privacy.
Robust AI models for cancer research
Dos Santos Silva joined Utrecht University just recently, as Assistant Professor of Explainable AI for Life in the group of Professor Sanne Abeln. "It's wonderful to obtain my first Dutch grant within six months of starting my Assistant Professor position," says Dos Santos Silva, who also works at the Netherlands Cancer Institute.
With the grant of 80,000 euros, he will focus on making AI models used in cancer research in the Netherlands more robust. Currently, this type of research relies on collecting medical data, such as MRI scans, from different hospitals, which is then used to train AI algorithms at a central location.
"The problem is that, although the data is pseudonymized, you can still derive information about the patient from the biometric characteristics in the images," says Dos Santos Silva. This is why the focus is shifting to decentralized learning methods, where AI models are trained locally at each hospital. Hospitals then share only the information about the models, not the data itself.
One challenge that arises is that the data varies greatly from hospital to hospital, and therefore so do the local models contributing to the aggregated final model. This leads to uncertainty in the model's predictions.
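To make the idea concrete, here is a minimal, hypothetical sketch of this kind of decentralized (federated) averaging. It is not the project's actual code: the hospital names, the local training step, and the model size are all placeholders.

```python
# Minimal sketch of decentralized learning (not the project's actual code).
# Each hospital trains on its own scans and shares only model parameters;
# the raw patient data never leaves the hospital.
import numpy as np

def train_locally(global_params: np.ndarray) -> np.ndarray:
    """Placeholder for one round of local training inside a hospital."""
    # In reality this would run gradient descent on the hospital's own data.
    return global_params + 0.1 * np.random.randn(*global_params.shape)

def aggregate(local_params: list[np.ndarray]) -> np.ndarray:
    """Naive aggregation: every hospital's model gets equal weight."""
    return np.mean(local_params, axis=0)

global_params = np.zeros(10)                             # shared starting model
hospitals = ["hospital_A", "hospital_B", "hospital_C"]   # hypothetical sites
local_params = [train_locally(global_params) for _ in hospitals]
global_params = aggregate(local_params)                  # only parameters exchanged
```

In this sketch the averaging step treats every hospital identically, which is exactly where the uncertainty described above comes from.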
Uncertainty in predictions
Dos Santos Silva: "For example, if an AI model is trained exclusively on data from the Netherlands Cancer Institute, it will be primarily exposed to severe cancer cases and will not have data on healthy and mild cases. However, during model aggregation, this model, which is uncertain about less severe cases, is given the same weight as a model trained on a more comprehensive dataset with a full spectrum of patients, which has a more accurate representation of the overall disease distribution."
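One simple way to picture a remedy, purely as an illustration and not necessarily the approach this project will take, is to weight each hospital's contribution, for instance by the size or breadth of its local dataset, rather than averaging all local models equally. All names and numbers below are invented.

```python
# Illustrative sketch only: weight each hospital's model by how many patients
# its local data covered, instead of giving all local models equal influence.
import numpy as np

local_params = {
    "specialised_cancer_centre": np.array([0.9, 0.8]),  # saw mostly severe cases
    "general_hospital": np.array([0.2, 0.3]),           # saw the full spectrum
}
n_patients = {"specialised_cancer_centre": 400, "general_hospital": 4000}

total = sum(n_patients.values())
weights = {site: n / total for site, n in n_patients.items()}

# Weighted average: the broader dataset contributes proportionally more.
global_params = sum(weights[site] * params for site, params in local_params.items())
print(weights)
print(global_params)
```

Weighting by patient count is only a crude stand-in for the general idea of taking into account what each local model has, and has not, seen.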
Dos Santos Silva's research aims to make AI algorithms more robust so that they can take into account the different types of data they have seen, ultimately enhancing cancer care by supporting more accurate clinical decisions.
Harassment in the virtual world
Julian Frommel also receives 80,000 euros for his AI research. Frommel and his colleagues are delving into the world of social extended reality (XR). These immersive virtual environments are becoming increasingly popular, for example in gaming, but also for meetings and social interactions. However, users are also increasingly confronted with intimidation. Frommel: "People can be confronted with insults, discrimination, or threats, but also with the invasion of someone's personal space or virtual groping. This is a new phenomenon and very specific to these virtual environments."
Currently, there are human 'moderators' to whom such behavior can be reported. These moderators assess whether the complaint is justified and can impose penalties, such as removal from the platform. "We want our AI models to learn to understand social interactions between people. If we know how that works, the AI models can help these moderators by detecting potentially inappropriate behavior and prioritizing what to review when there are many complaints."
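As a purely hypothetical illustration of the prioritization idea, a moderator's queue could be triaged by sorting reports on a model-estimated risk score. The scores, report texts, and scoring model are invented here; the project's actual models and features are not described in this article.

```python
# Hypothetical sketch of report triage for human moderators (not project code).
from dataclasses import dataclass

@dataclass
class Report:
    report_id: int
    description: str
    risk_score: float  # e.g. a model-estimated probability of harassment

def prioritise(reports: list[Report]) -> list[Report]:
    """Put the reports a model flags as most likely harmful first."""
    return sorted(reports, key=lambda r: r.risk_score, reverse=True)

queue = [
    Report(1, "personal space repeatedly invaded", 0.92),
    Report(2, "disagreement about game rules", 0.08),
    Report(3, "targeted insults in voice chat", 0.77),
]
for report in prioritise(queue):
    print(report.report_id, report.risk_score, report.description)
```

The point of the ranking is only that a small moderation team can start with the reports most likely to need action.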
The ultimate goal of the project is to limit this kind of dangerous behavior. "This is enormously important because such behavior is also harmful in the virtual world. Users can experience anxiety, low self-esteem and distress. Virtual environments must feel safe and inclusive for everyone."
Frommel's research will be conducted in the AI & Media Lab, part of the Utrecht AI Labs. In these labs, researchers from different disciplines, along with experts from public and private organizations, governments and other knowledge institutions, are developing new knowledge and applications in the field of artificial intelligence.
About the NWO grant
The NGF AiNed XS Europa grant from NWO is part of the national AI research agenda AIREA-NL. A strong AI knowledge and innovation base is of great importance to the Netherlands. An important aspect of this is that Dutch researchers are well connected worldwide, and especially within Europe. All ten projects awarded under this grant are therefore collaborations between at least two European partner organizations.
Julian Frommel's project is a collaboration with Technical University Darmstadt (Germany). Wilson dos Santos Silva will work together with INESC TEC, a private non-profit research association in Portugal.