I am Bea

Hello! My name is Bea and I am currently working as a postdoctoral researcher at the University of Gothenburg | Chalmers. I will soon start looking for new positions and I was wondering whether you could help me with that... 

My career so far:

Those are not my only research interests, of course... I love researching topics more closely related to HCI (e.g., AI trustworthiness)!!

Some thoughts on AI

In recent years, the rise of AI (cough language models cough) has been substantial. However, out of the many applications of AI (including language models), the most interesting to me are those with a physical body that can interact with the environment and with people (willing or not, users or bystanders).

Why are they so interesting? Well, these AI systems might be taken out of their operational design domain, the set of scenarios in which they are designed to operate and supposed to be thoroughly tested. And that is asking for trouble. For instance, a robot designed to deliver pineapple pizza might struggle in a country where the signposts are different from those it saw during training (I struggle myself!!). Similarly, an embodied conversational agent might appear rude in some cultures (or to some minorities).

But even if AIs were perfectly robust, a number of other requirements would still be needed to make them trustworthy, such as transparency, explainability, and accountability. Many of these super important requirements are, as of today, impossible to describe, formalise, measure, and/or verify.

And as if all that were not enough, these AIs need to coexist with people doing people things. No matter how advanced AIs are, people are amazing and amazingly weird. People might not know much about AI but still use it and over-trust it, leading to AI misuse, which can be dangerous. On the other hand, some people might be completely unwilling to use AI... but, given the rise of large language models and the expansion of autonomous driving features (among many other, less popular, things), it is likely that people will coexist with artificial intelligences whether they like it or not.

It is for these reasons that I strongly believe we must aim at ensuring not only robust, but also transparent and accountable AI systems, with a particular focus on explainability. And explanations should be adapted to each and every AI user to ensure that embodied AI is safe, trustworthy and... overall nice to be around.