I am Bea
Hello! My name is Bea and I am currently working as a postdoctoral researcher at the University of Gothenburg | Chalmers. I will soon start looking for new positions and I was wondering whether you could help me with that...
My career so far:
Postdoc in Software Engineering (mostly focusing on Generative AI, Digital Twins, and automotive safety assessment)
PhD in Machine Learning for Crowd Simulation (model tuning, dynamic policies)
Master in Intelligent Interactive Systems (thesis on cooperative-competitive multi-agent systems)
Those are not my only research interests, of course... I love researching topics more related to HCI (e.g., AI trustworthiness)!
Some thoughts on Artificial Intelligence
In recent years, the rise of AI (cough language models cough) has been substantial. However, out of the many applications of AI (including language models), the most interesting to me are those with a physical body that can interact with the environment, and people (willing or not, users or bystanders).
Why are they so interesting? Well, these AI systems might be taken out of their operational design domain, the subset of scenarios in which they are supposed to be thoroughly tested. And that is asking for trouble. For instance, a robot designed to deliver pineapple pizza might struggle in a country whose road signs differ from those it saw during training. Similarly, an embodied conversational agent might appear rude in some cultures (or to some minorities).
But even if AIs were perfectly robust, a number of other requirements would still be needed to make them trustworthy, such as transparency, explainability, accountability, and more. Many of these super important requirements are, as of today, impossible to describe, formalise, measure, and/or verify.
And as if all that were not enough, these AIs need to coexist with people doing people-things: no matter how advanced AIs become, people are amazingly weird. People might not know much about AI yet still use it and over-trust it, leading to AI misuse, which can be dangerous. On the other hand, some people might be completely unwilling to use AI... but, given the rise of large language models and the expansion of autonomous driving features (among many other, less popular, things), it is likely that people will coexist with AIs whether they like it or not.
It is for these reasons that I strongly believe we must aim at ensuring not only robust but also transparent and accountable AI systems, with a particular focus on explainability. And explanations should be adapted to each and every AI user, so that embodied AI is safe, trustworthy, and... overall nice to be around.