MultimodalAI'23 photos are now available to view and download.
Join the Multimodal AI Community mailing list by subscribing to our Google Group.
Multimodal AI combines multiple types of data (e.g. image, text, audio) via machine learning models and algorithms to achieve better performance. It is key to AI research and to applications in healthcare, net zero, finance, robotics, and manufacturing. Multimodal AI in these areas is challenging due to the inherent complexity of data integration and the limited availability of labelled data. Meanwhile, unimodal AI, which handles a single type of data input, is maturing at an accelerating pace, creating vast opportunities for tackling multimodal AI challenges.
MultimodalAI’23 brings together researchers and practitioners from AI, data science, and various scientific and application domains to discuss problems and challenges, share experiences and solutions, explore collaborations and future directions, and build networks and a vibrant community on multimodal AI. We have three keynote speakers covering academic research, industrial research, and industrial applications: Professor Mirella Lapata (University of Edinburgh, UKRI Turing AI World-Leading Researcher Fellow), Dr Yutian Chen (Google DeepMind, AlphaGo Developer), and Dr Chew-Yean Yam (Microsoft, Principal Data and Applied Scientist).
We offer participants opportunities to give 3-minute pitches and present posters, with four prizes (£150 each) in total for the best pitches and best posters. You may submit proposals for a pitch and/or poster when you register. We will confirm accepted pitches and posters in the week ending June 17th.
Should you require assistance with accessibility for this event, have any other special requirements, or wish to discuss your needs with the organising team, please contact us. We will do our best to meet your requirements so that you can fully participate in this event.
Join this interdisciplinary event to help create a diverse community that shapes and builds the future of multimodal AI research and development.
You are welcome to share the workshop flyer (PDF) with your network.
This workshop is jointly organised by the University of Sheffield and the University of Oxford under Turing Network Funding from the Alan Turing Institute, with support from the University of Sheffield's Centre for Machine Intelligence and the Alan Turing Institute's Interest Group on Meta-learning for Multimodal Data (you are welcome to sign up and join).
Disclaimer: This event is supported by The Alan Turing Institute. The Turing is not involved in the agenda or content planning.