First Multimodal AI Research Sprint

Welcome to “The First Multimodal AI Research Sprint: Beyond Vision & Language.” This gathering is where the convergence of diverse data types and cutting-edge machine learning models is not just discussed but actively pursued. Our focus is the untapped potential of multimodal AI beyond the established domains of vision and language: uncovering new challenges and opportunities in data and problem-solving that have yet to be fully explored. While our primary aim is to venture into these new areas, we also value and welcome the rich insights of the vision and language communities, whose knowledge provides a solid foundation for our exploration and innovation.


This event is more than a meeting of minds; it is the start of a long-term collaboration aimed at making a lasting impact on the field of multimodal AI. Your expertise, insights, and collaborative spirit will carry us toward significant shared academic milestones.

Final Programme

Wednesday, 22nd November 2023

Time    Activity    Room
10:00 - 10:15    Welcome & Introduction - Haiping Lu & Xianyuan Liu    Enigma
10:15 - 11:30    Pitch session    Enigma
11:30 - 11:45    Coffee break    Enigma
11:45 - 12:15    Breakout session 1
    Healthcare and medicine    Enigma
    Engineering    David Blackwell
    Social science and humanities    Mae Jemison
    Science    Jack Good
    Finance and economics    Cipher
    Environment and sustainability    Florence Nightingale
12:15 - 12:45    Breakout reflection and consolidation    Enigma
12:45 - 13:30    Lunch (provided)    Enigma
13:30 - 15:30    Breakout session 2
    Healthcare and medicine    Enigma
    Engineering    David Blackwell
    Social science and humanities    Margaret Hamilton
    Science    Jack Good
    Finance and economics    Enigma
    Environment and sustainability    Enigma
15:30 - 16:00    Consolidation and next steps    Enigma

Pitches

Name    Title
Peter Charlton    Using multimodal AI to diagnose atrial fibrillation from smart wearables
Yuhan Wang    Explainable Alzheimer Disease Early Detection Framework Based on Multi-Modal Clinical Data
Mohammod Suvon    Multimodal Cardiothoracic Disease Prediction
Chris Tomlinson    graphICM: graph and semantic representation learning for critical illness aetiology
Avish Vijayaraghavan    Interpretable Multi-Modal Learning for Clinical Multi-Omics
Luigi Moretti    Can Multimodal AI be effectively implemented to help treat Anxiety Disorders?
Lucas Farndale    Super Vision Without Supervision: Self-Supervised Multimodal Privileged Information Integration for Enhanced Biomedical Imaging
Jinge Wu    Facilitating factual checking on radiology reports using multimodal benchmark datasets
Greg Slabaugh    Multimodal AI for Multi’omics Data Integration in Healthcare
Chen Chen    Towards Responsible AI in Healthcare: Enhancing Generalizability, Robustness, Explainability, and Fairness with Multi-modality Data
Marta Varela    Physics-Informed Neural Networks
Oya Celiktutan    Multimodal Behavioural AI for Human-Robot Interaction
Nitisha Jain    Semantic Interpretations of Multimodal Embeddings towards Explainable AI
Roger Moore    Vocal interactivity in a multimodal context: pragmatic, synchronic and energetic constraints
Ruizhe Li    Hearing Lips in Noise: Fusing Acoustic and Visual Data for Noise-Robust Speech Recognition
Cyndie Demeocq    Data annotation and curation for multimodal methods in online crime detection systems
Valentin Danchev    Data Governance and Responsible Sharing of Multimodal Data for AI Research
Lucia Cipolina-Kun    Diffusion models for the restoration of cultural heritage
Pin Ni    Financial multi-modal fusion and learning
Arunav Das    Multimodal Knowledge Graph based Question Answering system
Thijs van der Plas    Biodiversity monitoring
Alejandro Coca-Castro    Environmental Data Modalities: Challenges and Opportunities

Directions


By rail

The Institute is located adjacent to St Pancras International Rail Station, within a five-minute walk of King's Cross Rail Station and a ten-minute walk of Euston Rail Station.

By underground

The nearest underground station is King's Cross St Pancras Station (a five-minute walk), which is on the Circle, Hammersmith & City, Metropolitan, Northern (Bank branch), Piccadilly, and Victoria lines.

By bus

There are many nearby services, including 10, 30, 59, 63, 73, and 91.

Finding the Turing from within the British Library

Enter the British Library via the main entrance on Euston Road.

To access the first floor via stairs, take the main staircase to the left of the ticket and membership desk. Once at the large central bookcase display, turn right and head towards the sign for The Alan Turing Institute.

To access the first floor via lift, go right after entering the British Library, past the ticket and membership desk and fountain, and turn left at the bookshop. Continue until reaching Lift 6. Take the lift to the upper ground floor (UG). Once on the upper ground floor, exit the lift and take one of the three lifts diagonally across to the first floor. On the first floor, exit the lift, turn left, and follow the sign for The Alan Turing Institute.

The Alan Turing Institute entrance has illuminated signage over the doorway, behind the Enigma machine display. Please report to reception on arrival.

Acknowledgement

This event is brought to you by the Turing Interest Group on Meta-Learning for Multimodal Data (you are welcome to sign up) and the Multimodal AI Community (you are welcome to subscribe to our Google Group), supported by the Centre for Machine Intelligence at the University of Sheffield.