Welcome to our MICCAI 2024 tutorial website for MedShapeNet. Here you can find essential information about the program, speakers, organizers, and useful links:
We invite you to join our tutorial. If you have any questions, please contact us.
Before the deep learning era, statistical shape models (SSMs) were widely used in medical imaging. MedShapeNet builds upon this foundation, aiming to bridge computer vision methods to medical problems and clinical applications, inspired by benchmarks like ShapeNet and Princeton ModelNet.
MedShapeNet boasts a collection of over 100,000 medical shapes, covering bones, organs, vessels, muscles, and surgical instruments. These shapes can be searched, viewed in 3D, and downloaded individually using our shape search engine. It’s important to note that MedShapeNet is intended for research and educational purposes only. MedShapeNet’s usefulness has been demonstrated by its incorporation into research papers.
The tutorial spans 4.5 hours and focuses on 3D shape analysis in medical imaging. It includes a hands-on session utilizing MedShapeNetCore. Topics covered include motivation, shape acquisition, processing pipelines, and selected use cases.
Our organizing team includes esteemed members such as Dr. Zongwei Zhou, Dr. Jiancheng Yang, and Dr. Beatriz Paniagua. For more detailed information, please refer to the program section.
Dr. Jiancheng Yang, a researcher at EPFL, collaborates with Prof. Pascal Fua on AI for health, medical image analysis, and 3D vision. With over 50 publications in esteemed journals and conferences, including MICCAI and NeurIPS, Dr. Yang’s research is highly regarded. His success in AI competitions and leadership in organizing MICCAI challenges demonstrate his versatility and expertise.
His talk gave an introduction to 3D shapes and an overview of classical as well as modern learning-based shape analysis methods.
M.Sc. Gijs Luijten, a PhD candidate on the FWF enFaced 2.0 project, specializes in AR applications for maxillofacial surgery. He gained experience in 3D scanning, printing, and augmented reality at Radboudumc Nijmegen, as well as in Unity and Unreal Engine development, evident in his contributions to surgical scene datasets, HL2 applications, and the organization of an ISBI challenge. Additionally, he is venturing into machine learning integration for medical shapes and data in extended reality applications.
His talks were about the initiation, current status, and future outlook of MedShapeNet, a large database of 3D medical shapes. Furthermore, a tutorial was given on how to use the showcases of MedShapeNet 2.0.
For more information, see GitHub Samples/Showcases.
Dr. Zongwei Zhou, a postdoctoral researcher at Johns Hopkins University, is recognized for his groundbreaking work in reducing annotation efforts for computer-aided detection and diagnosis. His accolades, including the AMIA Doctoral Dissertation Award and the MICCAI Young Scientist Award, underscore his contributions to the field. Dr. Zhou’s recognition by Stanford University further highlights his impact and expertise.
His talk delved into the creation of the AbdomenAtlas dataset within MedShapeNet and its use cases in healthcare -> see the talk here
Dr. Yucheng Tang is a research scientist at Nvidia’s medical and healthcare division, focusing on efficient healthcare AI, medical image computing, and translational research. He earned his Ph.D. at Vanderbilt University under Prof. Bennett A. Landman, specializing in medical image analysis and computational biomedicine. He has contributed to major initiatives such as MONAI, advancing AI platforms for radiology, pathology, and beyond. Previously, he worked at Nvidia (CA) and Siemens Healthineers, and taught at Vanderbilt as an independent instructor.
His talk delved into MONAI and MedShapeNet, demonstrating and explaining the tools available to researchers within MONAI -> see the talk here
M.Sc. Jana Frageman is a mathematician by training, having completed her master’s degree at the University of Duisburg-Essen. In her PhD research at IKIM, she is working on analyzing the latent space representations of supervised and unsupervised generative models for radiological data. She has organized prestigious events in Essen, such as ETIM, and co-organized the MICCAI workshop Medical Applications with Disentanglements (MAD).
Her talk covered the MedShapeNet API, e.g., how to retrieve shape data from its web interface and how to use the shape data to develop data-driven machine learning models for medical applications: MedShapeNet API
Professor Xiaojun Chen, a Full Professor at Shanghai Jiao Tong University (SJTU), China, is a leading authority in biomedical engineering and computer-assisted surgery. With over 200 peer-reviewed articles and 20 patents to his name, his research spans crucial areas such as biomedical image analysis, AI in biomedical physics, and medical robotics. Notably, he has received prestigious awards including the National Science & Technology Progress Award of China (2019) and the “France Talent Innovation (FTI) Program” Award. His international recognition is evident through visiting professorships at Harvard Medical School, CNRS in France, and others.
His talk was about developing intuition and the ability to integrate the concept of shape into machine learning and medical image analysis tasks; his students showcased their research and how this concept was used within it.
The tutorial focuses on 3D shape analysis in medical imaging and comprises two main sessions:
Learning Objectives
- Gain an overview of classical and modern learning-based shape analysis methods.
- Learn to utilize MedShapeNet for shape data retrieval from its web interface and develop data-driven machine learning models for medical applications.
- Explore the Python API of MedShapeNet, including its usage in machine learning frameworks like MONAI and TensorFlow (a minimal sketch follows this list).
- Develop intuition and ability to integrate shape concepts into machine learning and medical image analysis tasks.
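To give a flavor of the hands-on session, below is a minimal sketch of feeding voxelized shapes (e.g., organ masks stored as NumPy arrays) into a MONAI data pipeline. The file name and the "mask" key are placeholders for illustration, not the official MedShapeNetCore layout.

```python
# Minimal sketch: wrapping voxelized shape masks in a MONAI dataset.
# The .npz file name and the "mask" key are placeholders, not the
# official MedShapeNetCore layout.
import numpy as np
from monai.data import DataLoader, Dataset
from monai.transforms import Compose, ToTensord

archive = np.load("medshapenetcore_example.npz", allow_pickle=True)  # placeholder file
masks = archive["mask"]  # assumed key: array of shape (N, D, H, W)

# MONAI transforms expect a list of dictionaries; add a channel axis per mask.
data_dicts = [{"mask": m[np.newaxis].astype(np.float32)} for m in masks]

dataset = Dataset(data=data_dicts, transform=Compose([ToTensord(keys="mask")]))
loader = DataLoader(dataset, batch_size=4, shuffle=True)

for batch in loader:
    print(batch["mask"].shape)  # e.g. torch.Size([4, 1, D, H, W])
    break
```

From here, the batches can be passed to any MONAI or PyTorch model as in a standard training loop.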
MedShapeNet 2.0 API is the continuation of MedShapeNet and includes all functionality of MedShapeNet 1.0 and MedShapeNetCore.
MedShapeNet 2.0 strives towards an AWS-compliant S3 storage on COSCINE, which will be funded by NRW and will host the data for a minimum of ten years.
We hope to demonstrate the improved MedShapeNet next year and enable researchers not only to contribute datasets, but also showcases, code, and functionality.
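Once the shapes are hosted on the S3-compatible COSCINE storage, downloading a single file could look roughly like the sketch below. The endpoint, credentials, bucket name, and object key are purely illustrative placeholders, since the final hosting details are not yet fixed.

```python
# Illustrative sketch: fetching one shape file from S3-compatible storage.
# Endpoint, credentials, bucket, and object key are placeholders only.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://coscine-s3.example.org",  # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",            # placeholder credentials
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.download_file(
    Bucket="medshapenet",                 # placeholder bucket name
    Key="shapes/liver/example_0001.stl",  # placeholder object key
    Filename="example_0001.stl",          # local target path
)
print("Downloaded example_0001.stl")
```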
FYI: MedShapeNetCore is a subset of MedShapeNet, containing lightweight 3D anatomical shapes in mask, point cloud, and mesh formats.
The shape data are stored as NumPy arrays in nested dictionaries in .npz format on Zenodo.
This API provides means of downloading, accessing, and processing the shape data via Python, which integrates MedShapeNetCore seamlessly into Python-based machine learning workflows.
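A minimal sketch of inspecting such an .npz archive with plain NumPy is shown below; the file name is a placeholder and the actual key layout may differ, so list the archive's entries first.

```python
# Sketch: inspecting a MedShapeNetCore-style .npz archive with NumPy.
# The file name is a placeholder; the real key layout may differ.
import numpy as np

archive = np.load("medshapenetcore_example.npz", allow_pickle=True)
print("Top-level entries:", archive.files)

# Entries may hold nested dictionaries stored as 0-d object arrays,
# e.g. one per representation: mask, point cloud, mesh.
entry = archive[archive.files[0]]
if entry.dtype == object:
    nested = entry.item()  # unwrap the pickled dictionary
    print("Nested keys:", list(nested.keys()))
else:
    print("Array shape:", entry.shape)
```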
Master’s students interested in an internship are encouraged to email us.