FCAI Teams

FCAI Teams are groups of postdocs, doctoral students and supervising PIs working together towards a common goal. FCAI Teams boost the crucially important collaboration between the Research Programs: they develop new AI and demonstrate its power in the Highlight applications by solving AI-assisted decision-making, design and modeling problems. Each FCAI Team contributes one important piece to this big picture. Each piece is scientifically important in its own right, and the work of the other teams provides new ideas, complementary methods, and convincing case studies.

List of FCAI Teams

Methods Teams

Virtual Laboratory Teams

Methods Teams

Amortized inference

The Amortized inference FCAI team aims to develop methods that use machine learning and surrogate models to speed up, amortize and improve optimization and probabilistic inference. The team will work on novel approaches that leverage recent advances in deep learning (such as normalizing flows and neural processes) while preserving the desirable statistical properties and sample-efficiency of Gaussian process-based methods. The novel methods will be applied to challenging inference problems in fields such as AI, biology and neuroscience, in coordination with other FCAI Teams. The team combines FCAI’s expertise in different fields: statistical inference & probabilistic programming (Research Program 1, Highlight A); simulator-based inference and Gaussian process-based approaches (Research Program 2); and deep learning (Research Program 3).
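
The core idea can be made concrete with a toy example. The sketch below (our illustration, not the team's code) amortizes posterior inference for a simple Gaussian model: a small network is trained on simulated (parameter, data) pairs so that, after training, approximate posterior inference for any new dataset is a single forward pass, with no per-dataset MCMC or optimization.

```python
# Minimal sketch of amortized (neural) posterior inference, assuming a
# conjugate-Gaussian toy model; illustrative only.
import torch

torch.manual_seed(0)

def simulate(n):
    """Draw parameters from the prior and simulate datasets of 10 points."""
    theta = torch.randn(n, 1)           # prior: theta ~ N(0, 1)
    x = theta + torch.randn(n, 10)      # likelihood: x_i ~ N(theta, 1)
    # Summary statistics of each simulated dataset.
    return theta, torch.stack([x.mean(1), x.std(1)], dim=1)

net = torch.nn.Sequential(              # maps summaries -> posterior params
    torch.nn.Linear(2, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 2),             # outputs (mean, log_std)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    theta, s = simulate(256)
    mean, log_std = net(s).chunk(2, dim=1)
    # Negative Gaussian log-likelihood of the true theta under the predicted
    # posterior: minimizing this over simulations amortizes the inference.
    loss = (log_std + 0.5 * ((theta - mean) / log_std.exp()) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Amortization pays off here: for a new observed dataset, the approximate
# posterior comes from one forward pass, with no re-fitting.
x_obs = 0.7 + torch.randn(1, 10)
with torch.no_grad():
    s_obs = torch.stack([x_obs.mean(1), x_obs.std(1)], dim=1)
    mean, log_std = net(s_obs).chunk(2, dim=1)
print(f"posterior approx: N({mean.item():.2f}, {log_std.exp().item():.2f}^2)")
```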

References:

  • Huang D, Haussmann M, Remes U, John ST, Clarté G, Luck KS, Kaski S, Acerbi L. Practical Equivariances via Relational Conditional Neural Processes. arXiv preprint arXiv:2306.10915. https://arxiv.org/abs/2306.10915

Keywords: Amortized inference, simulator-based inference, Gaussian processes, normalizing flows, deep learning.

PIs in the team: Luigi Acerbi, Jukka Corander, Samuel Kaski

Other team members: Ayush Bharti, Hany Abdulsamad, Kevin Luck, Ti John, Ulpu Remes, Grégoire Clarté, Manuel Haussmann, Daolang Huang

Artificial agents with Theory Of Mind

Numerous advances in AI have allowed it to surpass humans in many complex optimization and reasoning tasks. Despite these advances, it still fails to generalize to new contexts, or to contexts where there is no clear objective, which limits its applicability in real-world applications. For AI systems to succeed in such applications, they must collaborate with their users by actively anticipating and reasoning about humans. However, this requires realistic computational models of the human which support, in particular, inferring tacit and changing goals, eliciting the knowledge of the human, and understanding how the human interprets the actions of the AI. In cognitive science, this capability is also referred to as theory of mind.

In the Artificial agents with Theory Of Mind team at FCAI, we focus on principles of AI agents that are able to interact with, adapt to, and anticipate us by learning and using models of humans. Our main goal is to develop inference techniques that are adapted to complex user models, and to deploy them in realistic collaborative settings.
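
As a minimal illustration of goal inference, one ingredient of a machine theory of mind, the following sketch (ours, with made-up numbers) models the human as noisily rational about a hidden goal and updates a belief over candidate goals with Bayes' rule as actions are observed.

```python
# Toy Bayesian goal inference: the AI watches a human's moves on a line,
# assumes noisy rationality, and infers which goal the human is heading to.
import numpy as np

np.random.seed(0)
goals = np.array([0, 5, 9])                 # candidate goal positions
belief = np.ones(len(goals)) / len(goals)   # uniform prior over goals
beta = 2.0                                  # softmax rationality of the human

def action_probs(pos, goal):
    """P(action | position, goal) for a noisily rational human; actions -1/+1."""
    utils = np.array([-abs(pos - 1 - goal), -abs(pos + 1 - goal)])
    e = np.exp(beta * (utils - utils.max()))
    return e / e.sum()

# Observed trajectory of a human who is actually heading to goal 9.
pos, true_goal = 4, 9
for _ in range(4):
    act = np.random.choice([-1, 1], p=action_probs(pos, true_goal))
    # Bayes update: belief(g) is proportional to belief(g) * P(action | pos, g).
    lik = np.array([action_probs(pos, g)[(act + 1) // 2] for g in goals])
    belief = belief * lik
    belief /= belief.sum()
    pos += act

print(dict(zip(goals.tolist(), belief.round(3))))  # belief should now favor 9
```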

Keywords: Human-AI collaboration, reinforcement learning, user modeling, human-in-the-loop, theory of mind, multi-agent learning, Bayesian inference, partially observable Markov decision process, likelihood-free inference

PIs in the team: Samuel Kaski

Other team members: Sammie Katt, Mahsa Asadi, Sebastiaan De Peuter, Alex Hämäläinen, Ali Khoshvishkaie, Alex Aushev, Elena Shaw

Computational rationality

What if AI could better understand people’s needs, beliefs, capabilities, and situations? Computational rationality is an emerging integrative theory of intelligence in humans and machines with applications in human-computer interaction, cooperative AI, and robotics [1,2,3]. The theory assumes that observable human behavior is generated by cognitive mechanisms that are adapted to the structure not only of the environment but also of the mind itself. Implementations use, among other techniques, deep reinforcement learning to approximate the optimal policy under assumptions about the cognitive architecture and its bounds. Cooperative AI systems can utilize such models to infer the causes behind observable behavior [4] and to plan actions and interventions in settings like semiautonomous vehicles, game-level testing, and AI-assisted design [5]. These models also shed light on the causes behind human behavior in everyday environments [6]. FCAI researchers are at the forefront in developing computational rationality as a generative model of human behavior in interactive tasks, as well as suitable inference mechanisms.
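
To make the recipe concrete, here is a deliberately small sketch (our illustration, not FCAI code): fix a cognitive bound, here noisy perception of the state, and let reinforcement learning find the policy that is optimal given that bound. The learned policy, rather than hand-coded rules, then serves as the generative model of behavior.

```python
# Toy computational-rationality model: tabular Q-learning on percepts that
# are corrupted by perceptual noise (the cognitive bound).
import numpy as np

rng = np.random.default_rng(0)
N = 10            # states 0..9, reward at state 9
SIGMA = 1.5       # perceptual noise: the agent's cognitive bound

def observe(s):
    """The agent never sees the true state, only a noisy percept of it."""
    return int(np.clip(round(s + rng.normal(0, SIGMA)), 0, N - 1))

Q = np.zeros((N, 2))   # Q-values indexed by *percept*; actions {left, right}
for episode in range(3000):
    s = int(rng.integers(0, N))
    for t in range(20):
        o = observe(s)
        a = int(rng.integers(0, 2)) if rng.random() < 0.1 else int(Q[o].argmax())
        s2 = int(np.clip(s + (1 if a == 1 else -1), 0, N - 1))
        r = 1.0 if s2 == N - 1 else 0.0
        o2 = observe(s2)
        # Q-learning update on percepts: the policy adapts to the noise bound.
        Q[o, a] += 0.1 * (r + 0.95 * Q[o2].max() - Q[o, a])
        s = s2
        if r > 0:
            break

print(Q.argmax(1))  # policy per percept; larger SIGMA yields sloppier behavior
```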

References:

[1] S. Gershman et al. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science 2015.
[2] A. Oulasvirta, A. Howes, J. Jokinen. Computational rationality as a theory of interactions. Proc. CHI’22.
[3] A. Howes, J. Jokinen, A. Oulasvirta. Towards machines that understand people. AI Magazine, to appear.
[4] J. Jokinen et al. Parameter Inference for Computational Cognitive Models with Approximate Bayesian Computation. Proc. CHI’17, ACM Press.
[5] S. De Peuter, A. Oulasvirta, S. Kaski. Towards AI assistants that let designers design.
[6] C. Gebhardt et al. Hierarchical Reinforcement Learning Explains Task Interleaving Behavior. Computational Brain & Behavior 2021.

Keywords: Computational behavior, computational rationality, user modeling, computational cognitive science, reinforcement learning

PIs in the team: Antti Oulasvirta (Aalto University), Andrew Howes (University of Exeter), Perttu Hämäläinen (Aalto University), Jussi Jokinen (University of Jyväskylä), Samuel Kaski (Aalto University), Christian Guckelsberger (Aalto University), Jukka Corander (University of Helsinki)

Other team members: Suyog Chandramouli, Danqing Shi, Hee-Seung Moon, Sebastiaan De Peuter, Aini Putkonen, Yue Jiang

Deployment as fundamental machine learning challenge

Machine learning (ML) is now used widely in science and engineering, in prediction, emulation, and optimization tasks. Contrary to what we would like to think, it often does not work well in practice. Why? Because the conditions in which models are deployed frequently differ radically from the conditions in which they were developed or trained. This has been conceptualized as distribution shift or the sim-to-real gap, and it significantly hampers the performance of ML models and methods. Distribution shifts can occur for a number of reasons, such as prior probability shift, sample selection bias, performative predictions, and concept drift, to name a few. In this team, we tackle the challenge of distribution shifts due to unobserved confounders. Solving this challenge is imperative for the widespread deployment of ML, and it can be argued to be the main show-stopper preventing machine learning from being seriously useful for the real problems we face in science, companies and society. We will develop new principles and methods for tackling it.
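
The following small numpy example (ours, for concreteness) shows the mechanism: a hidden variable drives both the observed feature and the outcome during training, its distribution shifts at deployment, and the fitted model's error grows even though nothing about the model changed.

```python
# Toy unobserved-confounder demo: the x -> y relationship learned at training
# time breaks when the confounder's distribution shifts at deployment.
import numpy as np

rng = np.random.default_rng(0)

def sample(n, u_mean):
    u = rng.normal(u_mean, 1, n)       # unobserved confounder
    x = u + rng.normal(0, 0.5, n)      # observed feature, driven by u
    y = 2 * u + rng.normal(0, 0.5, n)  # outcome, also driven by u (not by x!)
    return x, y

# Train: confounder centered at 0. The regression of y on x looks excellent.
x_tr, y_tr = sample(5000, u_mean=0.0)
w, b = np.polyfit(x_tr, y_tr, 1)

# Deploy: the confounder's distribution shifts. Same model, much larger error.
for name, u_mean in [("train-like", 0.0), ("deployed", 3.0)]:
    x, y = sample(5000, u_mean)
    mse = np.mean((y - (w * x + b)) ** 2)
    print(f"{name:>10s} MSE: {mse:.2f}")
```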

Keywords: Distribution shift, confounders, robust ML, human-in-the-loop, experimental design

PIs in the team: Samuel Kaski, Vikas Garg

Other team members: Ayush Bharti, Julien Martinelli, Armi Tiihonen, Rafal Karczewski, Tianyu Cui (Imperial College London), Sabina Sloman (University of Manchester)

Foundation models for Language & reinforcement learning

The team aims to leverage natural language as a modality in sequential decision-making and reinforcement learning problems. Foundation models, such as large language models, have been shown to capture information from wide corpora, which can serve as rich prior knowledge for goal-driven agents. In particular, we are interested in investigating how such pretrained models could be used to elicit structured information, including causal relationships, knowledge graphs or predictive models, and in using these as decision-making support in downstream tasks.

Conversely, reinforcement learning approaches have the potential to improve language understanding and generation in natural language processing domains. For instance, reward maximization has proven to be a successful strategy for training conversational agents through reinforcement learning from human preferences. By identifying the limitations of current supervised and reinforcement-learning-from-human-feedback approaches to large language model training, we aim to advance capabilities such as multi-step planning, structured prediction, active learning and uncertainty quantification.
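
As a concrete illustration of the preference-learning step behind such training (a generic Bradley-Terry reward model, not any particular system's implementation), the sketch below fits a linear reward function from simulated pairwise comparisons; the learned reward would then serve as the maximization target for a policy.

```python
# Toy Bradley-Terry reward model: given pairs where a human preferred
# response A over B, fit reward weights so that r(A) > r(B).
import numpy as np

rng = np.random.default_rng(0)
d = 8                                  # feature dimension of a response
w_true = rng.normal(size=d)            # hidden "true" human preference

# Simulated comparisons: features of the preferred and rejected response.
A = rng.normal(size=(500, d))
B = rng.normal(size=(500, d))
prefer_A = (A @ w_true > B @ w_true)
winners = np.where(prefer_A[:, None], A, B)
losers = np.where(prefer_A[:, None], B, A)

# Gradient ascent on the Bradley-Terry log-likelihood:
# P(winner beat loser) = sigmoid(r(winner) - r(loser)), with r(x) = w @ x.
w = np.zeros(d)
for _ in range(500):
    margin = (winners - losers) @ w
    p = 1 / (1 + np.exp(-margin))
    w += 0.1 * ((1 - p)[:, None] * (winners - losers)).mean(0)

agree = np.mean(((winners - losers) @ w) > 0)
print(f"reward model agrees with the human on {agree:.0%} of pairs")
```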

Keywords: Natural language processing, large language models, foundation models, causal representation learning, world models, reinforcement learning

PIs in the team: Pekka Marttinen, Samuel Kaski

Other team members: Minttu Alakuijala, Nicola Dainese, Alexander Nikitin, Hans Moen, Ya Gao

Long-term decision-making and transfer between tasks

A long-standing problem in artificial intelligence is how to build autonomous agents that operate in environments where actions may have long-term consequences. The problem becomes more challenging when the agent has to deal with sparse rewards and also adapt to new contexts. In the presence of sparse rewards, the agent has to disentangle, cluster, and predict independent parts of the environment model in a self-supervised way. To adapt to new contexts, the agent can use hierarchical planning and/or transfer learning to reuse its knowledge and skills in unseen situations.

The Long-term decision-making and transfer between tasks team at FCAI consists of several doctoral students, postdoctoral researchers, and principal investigators whose research interests lie within planning, reinforcement learning, representation learning, domain adaptation, and computer vision. The team’s goal is to design decision-making agents that can solve tasks with sparse feedback and also adapt to new contexts.
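
A toy example (ours; there is no learning here, only the decomposition) of why hierarchy helps with sparse feedback: on a long corridor with reward only at the far end, a high-level policy that proposes nearby subgoals turns one hard sparse-reward problem into a sequence of short, densely rewarded ones that a simple goal-conditioned controller can solve.

```python
# Illustration of hierarchical decomposition for a sparse-reward task.
LENGTH = 40           # corridor states 0..40, reward only at state 40
SUBGOAL_STEP = 10     # the high level proposes subgoals this far ahead

def low_level(pos, subgoal, max_steps=15):
    """Trivial goal-conditioned controller: walk greedily toward the subgoal.
    Reaching the subgoal is a dense, immediate success signal."""
    for _ in range(max_steps):
        if pos == subgoal:
            return pos, True
        pos += 1 if subgoal > pos else -1
    return pos, False

# High level: a chain of subgoals instead of 40 blind exploratory steps.
pos = 0
while pos < LENGTH:
    subgoal = min(pos + SUBGOAL_STEP, LENGTH)
    pos, reached = low_level(pos, subgoal)
    print(f"subgoal {subgoal:2d} reached: {reached}, now at {pos}")
print("sparse final reward collected:", pos == LENGTH)
```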

Keywords: Reinforcement learning, hierarchical planning, hierarchical reinforcement learning, domain adaptation, transfer learning, Markov decision process

PIs in the team: Joni Pajarinen, Juho Kannala

Other team members: Aidan Scannell, Shoaib Azam, Marcus Klasson, Kalle Kujanpää, Rongzhen Zhao

Physics-informed Sustainable AI

We are committed to shaping the future of AI by designing decision-making systems that are both data-efficient and environmentally sustainable. Our focus is on physics-informed AI methods, which use less data and computing power while reducing AI’s carbon footprint. Our goal is to build AI models that are not just smart but also responsible: capable of performing effectively with limited data and aligning with global sustainability standards.
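
As one concrete instance of this approach, the sketch below (our illustration, a minimal physics-informed neural network) fits a network to the ODE du/dt = -u by penalizing the residual of the physics itself, so that essentially no labeled data is needed; this is the data-efficiency argument above in miniature.

```python
# Minimal physics-informed sketch: learn u(t) satisfying du/dt = -u, u(0) = 1,
# from the physics residual alone, without labeled (t, u) training pairs.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    t = 5.0 * torch.rand(64, 1)        # collocation points in [0, 5]
    t.requires_grad_(True)
    u = net(t)
    # du/dt via autograd: the "data" is the differential equation itself.
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    physics = ((du + u) ** 2).mean()                         # residual of du/dt = -u
    boundary = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # u(0) = 1
    loss = physics + boundary
    opt.zero_grad(); loss.backward(); opt.step()

t_test = torch.tensor([[1.0], [2.0]])
print(net(t_test).detach().squeeze())   # should approximate exp(-t)
print(torch.exp(-t_test).squeeze())
```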

Keywords: Physics-informed Machine Learning, Sustainable AI

PIs in the team: Arto Klami, Laura Ruotsalainen, Simo Särkkä, Kai Puolamäki

Other team members: Hany Abdulsamad, Denys Iablonskyi, Muhammad Iqbal, Sahel Mohammad Iqbal, Outi Savolainen, Collin Leiber

Virtual Laboratory Teams

Drug design

Developing new drugs is a central problem in modern science. It usually comprises long, complex, and costly steps from the initial target identification to approval by regulatory agencies. Only a fraction of the initial candidate drugs reaches the final stages. In our team, we aim to reimagine the drug design pipeline by developing novel interactive artificial intelligence (AI) tools. Our focus is on leveraging expert knowledge and first principles to inform machine learning models for efficient and trustworthy molecular generation and simulation. Current investigations in our team revolve around methods in geometric deep learning, reinforcement learning, and deep generative models. More specifically, we have been interested in: i) principled ways to ensure the generation of valid molecules with good latent-space interpolation properties; ii) equivariant generative models of 3D molecules; and iii) reward elicitation with humans-in-the-loop for de novo drug design.

Keywords: Geometric deep learning, AI-assisted drug design, Explainable AI, Physics-informed deep learning, 3D molecular generation

PIs in the team: Vikas Garg, Samuel Kaski, Markus Heinonen

Other team members: Amauri Souza, Julien Martinelli, Sakshi Varshney, Mahsa Asadi, Anirudh Jain, Mohammad Moein, Arslan Masood, Yogesh Verma, Sebastiaan De Peuter, Yasmine Nahal, Najwa Laabid, Rafał Karczewski, Marshal Sinaga

Fusion and Plasma Physics Research

The mission of our team at FCAI is to unite experts in magnetic confinement fusion (MCF) energy from FinnFusion with our FCAI experts in advanced machine learning methods. Our goal is twofold: to speed up the development of fusion energy, a potential clean energy source, and to enhance cutting-edge machine learning technologies through this complex field of application. Fusion energy research involves costly experiments, limited and uncertain data, and complex computer models, which makes it hard to draw clear conclusions or predict future outcomes accurately. We plan to use innovative modeling, experimental design and data analysis techniques, such as Bayesian optimization, simulator-based inference and amortized inference, to improve the way we validate large-scale models and infer operational states. Our machine learning models will be improved by incorporating knowledge from physics and domain experts. We aim to significantly accelerate fusion research by merging different kinds of models and experimental data. Our virtual laboratory interacts with various FCAI research programs and teams, creating a rich environment for applying machine learning in meaningful ways.
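
The Bayesian optimization loop mentioned above can be sketched compactly (illustrative only; the objective below is a cheap stand-in for an expensive fusion simulation, and the library choice is ours): fit a surrogate to the runs so far, pick the next run by expected improvement, repeat.

```python
# Toy Bayesian optimization loop with a Gaussian-process surrogate.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
expensive = lambda x: -(x - 2.3) ** 2   # stand-in for a costly simulator run

X = rng.uniform(0, 5, (3, 1))           # a few initial "experiments"
y = expensive(X).ravel()

for it in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    grid = np.linspace(0, 5, 200)[:, None]
    mu, sd = gp.predict(grid, return_std=True)
    # Expected improvement: balance exploiting good predictions against
    # exploring uncertain regions, so each costly run is maximally informative.
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)
    x_next = grid[ei.argmax()]
    X = np.vstack([X, [x_next]])
    y = np.append(y, expensive(x_next))

print(f"best input found: {X[y.argmax()][0]:.2f} (true optimum 2.30)")
```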

Keywords: Fusion energy, simulator-based inference, amortized inference, Bayesian optimization, adaptive experimental design, multi-fidelity

PIs in the team: Luigi Acerbi, Aaro Järvinen, Jukka K. Nurminen, Simo Särkkä

Other team members: Ayush Bharti, Adam Kit, Daniel Jordan, Amanda Bruncrona, Petrus Mikkola, Ivan Zaitsev, Anna Niemelä

Machine Learning for health records and genetics

We believe that AI has great potential to revolutionise the healthcare industry and provide preventive diagnostics that help people live better lives and reduce the burden on the healthcare system. With the vast amount of data available in Finland on the healthcare events of the population, we will be able to train models that predict each individual’s healthcare needs well in advance and assess each person’s risk profile. With this aim, we work with the Finregistry dataset to develop novel machine learning methods for disease risk prediction from electronic health records. We have identified three problem statements that the team will be working on:

  1. Efficient embeddings for EHR data: Longitudinal EHR data is too complex to be used directly with ML models, but it contains the trajectory of health events of each individual person. We want to develop efficient embeddings of each patient’s healthcare trajectory so that it is easy to compare different patients with the same trait and to identify subgroups/clusters (see the sketch after this list). Further, these embeddings will be used by downstream ML models for task-specific training.

  2. Predictive power of familial/geographical relationships: With the family tree information available in Finregistry, we want to use graph neural networks to model the healthcare risk for entire families/regions.

  3. Interpretability of models: Once ML models are trained for a particular task, we will be able to apply or develop interpretability methods on top of them, which can provide interesting insights into the relationships between variables for predicting the outcome. These ML-based interpretability methods could also help accelerate the discovery of biomedical explanations for specific healthcare events.
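
For problem 1, the deliberately simple sketch below (our illustration; the diagnosis codes are made up) represents each patient as a bag of event codes and compresses it to a low-dimensional embedding, so that patients with similar trajectories end up close together; real EHR embeddings would use sequence models rather than plain counts.

```python
# Toy patient-trajectory embedding: count event codes, compress with SVD,
# compare patients by cosine similarity of their embeddings.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Each string is one patient's trajectory of (hypothetical) event codes.
patients = [
    "I10 E11 E11 N18",   # hypertension + diabetes + kidney disease
    "I10 E11 N18 N18",   # similar cardiometabolic trajectory
    "J45 J30 J45",       # asthma / allergic trajectory
]
counts = CountVectorizer().fit_transform(patients)
emb = TruncatedSVD(n_components=2, random_state=0).fit_transform(counts)

# Downstream models consume `emb`; here we just check that the two
# cardiometabolic patients are more similar to each other than to the third.
print(cosine_similarity(emb).round(2))
```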

Keywords: Human-in-the-loop AI, deep learning, Bayesian inference

PIs in the team: Pekka Marttinen, Andrea Ganna, Samuel Kaski

Other team members: Hans Moen, Tuomo Hartonen, Zhiyu Yang, Vishnu Raj, Andrius Vabalas, Essi Viippola, Matteo Ferro

Sustainable mobility and autonomous systems

The integration of artificial intelligence into various sectors, including transportation and autonomous vehicles, offers transformative benefits such as enhanced safety and efficiency. However, these advancements often come at a high environmental cost due to the computational complexity of machine learning algorithms. Our research group aims to address these dual challenges in a holistic manner. On one hand, we are focused on designing data-driven models to better understand and simulate human driving behavior, thereby improving the decision-making algorithms for autonomous vehicles. These algorithms consider complex interactions with surrounding objects and traffic flow, aiming to optimize safety and efficiency in transportation systems.

On the other hand, we are also committed to environmental sustainability by developing a Python-based framework/API that allows researchers to evaluate the energy consumption and computational complexity of their reinforcement learning models. This tool will provide metrics on power consumption for training, inference, and actions, as well as the required time for each, enabling a balanced evaluation of model accuracy against environmental impact. By doing so, we aim to guide the development of more energy-efficient algorithms that can perform agile tasks, such as controlling drones or autonomous vehicles, without compromising on performance.
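
The framework is still under development; as a rough sketch of the intended kind of functionality (all names below are hypothetical, and the power figure is a user-supplied assumption rather than a hardware measurement), one could time each phase of an RL workload and convert wall-clock time into an energy estimate:

```python
# Hypothetical sketch of an energy-accounting helper for RL experiments.
import time
from contextlib import contextmanager

ASSUMED_POWER_W = 65.0   # assumed average device power draw (watts), not measured
report = {}

@contextmanager
def energy_meter(phase):
    """Record wall-clock time for a phase and estimate energy in joules."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        report[phase] = {"seconds": elapsed,
                         "joules_est": elapsed * ASSUMED_POWER_W}

with energy_meter("training"):
    sum(i * i for i in range(10_000_000))   # stand-in for training a policy

with energy_meter("inference"):
    sum(i * i for i in range(100_000))      # stand-in for selecting actions

for phase, m in report.items():
    print(f"{phase:>9s}: {m['seconds']:.3f} s, ~{m['joules_est']:.1f} J (assumed)")
```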

In summary, our group is at the intersection of advancing AI-driven transportation solutions and promoting computational sustainability. We aim to push the boundaries in autonomous vehicle algorithms while equipping the research community with the tools needed to make responsible, eco-friendly choices in algorithm design.

Keywords: Reinforcement learning, Markov decision process, multi-objective RL, multi-agent RL, transfer learning

PIs in the team: Joni Pajarinen, Ville Kyrki, Laura Ruotsalainen, Tomasz Kucner

Other team members: Shoaib Azam, Gokhan Alcan, Daulet Baimukashev, Shibei Zhu, Kalle Kujanpää, Farzeen Munir

Synthetic psychologist

Theories in psychology are increasingly expressed as computational cognitive models that simulate human behavior. Such behavioral models are also becoming the basis for novel applications in areas such as human-computer interaction, human-centric AI, computational psychiatry, and user modeling. As models account for more aspects of human behavior, they increase in complexity. The Synthetic psychologist virtual laboratory broadly aims to develop and apply methods that assist a researcher in dealing with complex and intractable cognitive models, for instance by developing optimal experiment design methods to help with model selection and parameter inference, or by using likelihood-free methods with cognitive models. This virtual lab will also encourage avenues of research relevant to cognitive modelling and AI assistance, which can be pursued in collaboration with other FCAI teams and virtual laboratories. We are looking for excellent candidates who are excited by cognitive models, Bayesian methods, probabilistic machine learning, and open-source software environments, in no order of preference.
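
As a bare-bones example of the likelihood-free ingredient, the sketch below (our toy model, not the lab's) performs rejection ABC on a simple response-time simulator: sample parameters from the prior, simulate behavior, and keep the draws whose simulated summary statistics land close to the observed ones.

```python
# Toy rejection ABC for a cognitive model whose likelihood we treat as
# intractable: only forward simulation is used for inference.
import numpy as np

rng = np.random.default_rng(0)

def simulate_rt(speed, n=200):
    """Toy response-time model: higher 'speed' gives faster responses."""
    return 0.2 + rng.lognormal(mean=-speed, sigma=0.4, size=n)

observed = simulate_rt(speed=1.0)          # pretend this is human data
obs_summary = np.array([observed.mean(), observed.std()])

# Rejection ABC: sample from the prior, simulate, keep parameters whose
# simulated summary statistics land close to the observed ones.
draws = rng.uniform(0.0, 2.0, 5000)        # prior over the speed parameter
kept = []
for theta in draws:
    sim = simulate_rt(theta)
    dist = np.linalg.norm([sim.mean(), sim.std()] - obs_summary)
    if dist < 0.03:
        kept.append(theta)

kept = np.array(kept)
print(f"posterior over speed: {kept.mean():.2f} +/- {kept.std():.2f} "
      f"({kept.size} of {draws.size} draws accepted)")
```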

Keywords: Cognitive Science, simulator models, optimal experiment design, likelihood-free inference

PIs in the team: Andrew Howes (Birmingham), Antti Oulasvirta, Samuel Kaski, Luigi Acerbi