Second Multimodal AI Community Forum

Online | Wednesday, 11 March 2026 | 13:00–18:00 (GMT)

The Second Multimodal AI Community Forum brings together researchers, practitioners, and students working on or interested in multimodal AI across disciplines and sectors.

Multimodal AI integrates diverse data types such as vision, language, audio, time series, sensor, spatial, tabular, and graph data. We adopt a broad and deployment-centric perspective, embracing multiple levels of multimodality, including data types, subtypes, views, and fidelities.

This forum aims to foster collaboration, share innovations, and address cross-cutting challenges aligned with Tomorrow’s Engineering Research Challenges (TERCs).

We look forward to welcoming you online.

Keynote Speaker


Hoifung Poon

General Manager at Microsoft Research

Full Programme

Wednesday, 11 March 2026

Time (GMT)    Event
13:00 - 13:20    Haiping Lu - Introduction to UKOMAIN and OMAIB
13:20 - 13:50    OMAIB Round 1 - Funded projects
       Munib Mesinovic - CRITICAL-MM: Cross-modal Reasoning with Integrated Clinical Assessment for Large Multimodal Models
       Lu Gan - Towards Carbon-Neutral Living: An Open Dataset for Smart Social Housing
       Sam Wild - SONAIR - Sim2real Operational beNchmark for AI Robotics
       Stephen McGough - Neural Architecture Search Benchmark for Multimodal Self-driving Car’s Test Data
13:50 - 14:00    Charles Anjah - OMAIB management
14:00 - 14:20    Haiping Lu - Interest Group and OMAIB Round 2 Launch
Break
14:40 - 15:40    Community talks - Session 1
       Chen Chen - Multi-modal AI for Cardiac Care
       Maria Luisa Davila Garcia - Analysis of Quality of Life in Patients after Abdominal Hernia Surgery: A Multimodal Approach
       Rasheleh Kafieh - Multimodal AI in Ophthalmology
       Pengeng Hu - AI for 3D Human Body Scanning and Measurement
       Mehtab Ansari - Deep Learning Architectures for Multi-Disease Detection in Medical Imaging
       Budi Setiawan - Are we ready for Artificial Intelligence: Pilot Project in Medical Laboratory Technologist in Indonesia
       Gerardo Aragon-Camarasa - Multimodal Anomaly Detection in Additive Manufacturing
       Haolin Wang - Benchmarking Bandgap Prediction in Semiconductors under Experimental and Realistic Evaluation Settings
       Divya Sitani - Unravelling Immune Signatures of Whole-cell vs Acellular Pertussis Vaccine Priming using Tabular Machine Learning and Multimodal Feature Fusion
       Deysi Anaya - Integrating Behavioural Signals and AI for Improved Mental Health Assessment
       Yuhan Wang - Explainable Multimodal AI for Reliable Clinical Decision Support in Medical Imaging
       Xiaolei Xu - Home Monitoring of Sleep: A Multimodal Approach
       Linus Ericsson - Multimodal AI for Radiotherapy
       Chidozie Managwu - Bridging Vision and Language: Multimodal Frameworks for Real-Time Engineering and Health Diagnostics
       Ganzorig Chuluunbat - Developing a Geospatial Foundation Model for Global Soil Organic Carbon Estimation
       Ye Ha Kim - Deep Learning Models for IoT-Data Harvesting and Ethical Decision-Making in Circular Design Processes
Break
16:00 - 16:40    Keynote Presentation by Hoifung Poon (General Manager at Microsoft Research)
    Title: Towards Virtual Patient: AI for Accelerating Medical Discovery
Abstract: Today, medical discovery advances one clinical trial at a time, each taking years to execute and often costing $100 million or more. As we enter the era of precision health, in which we recognize that “one size doesn’t fit all” and thus try to tailor treatments to each individual, continuing with today’s discovery processes is clearly not sustainable. The confluence of technological advances and social policies has led to rapid digitization of multimodal, longitudinal patient journeys, such as electronic health records (EHRs), imaging, and multiomics. Our overarching research agenda lies in advancing multimodal generative AI to learn the language of patients and create a virtual patient world model as a digital twin for forecasting disease progression and treatment response. This enables us to synthesize population-scale real-world evidence from hundreds of millions of patients and accelerate medical discovery through AI-powered virtual clinical trials, in deep partnership with real-world stakeholders such as large health systems and life sciences companies.
16:40 - 17:10    Roundtable discussions with the keynote speaker and the community
Break
17:20 - 17:50    Community talks - Session 2
       Jianqiao Long - Computation and Communication Cooperation for Molecular Network
       Xiang Li - Beyond the Lab: Translating Multimodal AI into Scalable Innovation
       Qiwen Guan - RMSSC: A Robust Multimodal Framework for Sleep Stage Classification with Noisy Labels and Missing Modalities
       Wenxing Ji - EEGGraphNet: A Multi-Class Seizure Classification Model with Adaptive Channel Aggregation via Graph Attention Neural Network
       Jiajie Luo - MoDE: A Dynamic Expert Aggregation Framework for Sleep Stage Classification with Contextual Factors
       Jichun Li - Learning Disentangled Subject-Invariant Representations for EEG Sleep Staging via Spectral-Spatial-Sequential Feature Fusion
       Vrinda Gotr - AI in Vaccines
       Misbah Rafique - From Simulation to Discovery: Multimodal AI for Detecting Jellyfish Galaxies
17:50 - 18:00    Final Q&A and closing remarks

Contact Us