
DeepMICROIA
Deep learning for TOMCAT imaging

Abstract
The beamline for TOmographic Microscopy and Coherent rAdiology experimenTs (TOMCAT) at the Paul Scherrer Institut (PSI) enables synchrotron radiation X-ray tomographic microscopy (SRXTM). This technique captures the three-dimensional structure of animal tissue from organs including, but not limited to, the heart, brain, lungs, and bones, at high speed and high resolution in a non-destructive manner. Such three-dimensional imaging can help clinicians, pathologists, and biomedical researchers obtain a deeper understanding of tissue and its functioning without the cumbersome conventional process of fixing, drying, staining, paraffin embedding, and slicing, which in the end only yields two-dimensional images.
PSI, ETH Zurich, and the Swiss Data Science Center (SDSC) worked jointly on automatically analysing such three-dimensional images to ease the burden on clinicians and pathologists of distinguishing healthy tissue from diseased tissue. Beyond easing that burden, three-dimensional imaging and its automatic analysis can reveal new insights into the functioning of organs and their constitution. Together, the accelerated analysis of tissue and a better understanding of the organs will benefit patients suffering from maladies of vital organs.
The focus of DeepMICROIA is on heart tissue, in particular that of rodents. The goal is to assess the health of the tissue by estimating the amount and nature of collagen fibres. This calls for pixel-wise segmentation of the three-dimensional image volumes. Since performing such segmentation manually is tedious and error-prone, the project develops automatic methods for it.
Presentation
People
Scientists


Radhakrishna has a PhD in Computer Science from EPFL, Switzerland, an MSc in Computer Science from NUS, Singapore, and a BEng in Electrical Engineering from JEC, India. Over his 16 years of working experience, he has worked in industry and academia and has founded three start-ups. He has published over 20 refereed papers, which have received over 9000 citations, and is a co-inventor on 4 patents. He has served as a reviewer for several conferences and journals of repute and as an area chair for ECCV 2016. His main interests are Computer Vision, Image Processing, and Machine Learning.
Description
Problem:
Improving the state of the art in micro-CT image analysis for studying tissue microstructure and disease-related alterations in the heart. This required segmenting hypertensive heart tissue into collagen, cells, and background. The specific goals were to:
- Create and improve segmentation and quantification models
- Improve robustness to artefacts
- Reduce need for annotations
Impact:
Micron-scale CT images are essential for studying tissue microstructure and its alterations. Better automatic and semi-automatic tools would enable high-throughput and more accurate analysis, performed far faster than with the manual approaches that are currently the only widely used option.
Progress
SDSC contributed three solutions to the problem of finding collagen fibres in the heart tissue image volumes: a training-free image processing solution, a trained deep network solution, and a deep network solution that requires minimal labelling. The video below shows what a segmented volume looks like (collagen fibres in red, cells in yellow, and empty space as background).
The first approach was based on an image processing technique: a Difference of Gaussians filter to identify potential fibres, followed by hysteresis-based region growing. This method could separate both collagen fibres and background regions with the help of 8 manually tunable parameters. The code was ported from C to Python for ease of use, and a user interface was developed in C++ so that the parameters could be tuned easily. The advantage of this method was that no annotations were required and the processing could be done on a regular laptop in a few seconds. The disadvantage was that the parameters had to be tuned manually for each case, and certain low-contrast regions of the volumes were hard to segment with this method.
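As an illustration, a minimal sketch of this kind of pipeline using scikit-image is shown below. The sigma and threshold values are placeholders standing in for the project's 8 hand-tuned parameters, not the actual settings used.

```python
import numpy as np
from skimage.filters import difference_of_gaussians, apply_hysteresis_threshold

def segment_fibres(volume, low_sigma=1.0, high_sigma=4.0, t_low=0.05, t_high=0.15):
    """Band-pass filter the volume, then grow regions from strong responses."""
    # Difference of Gaussians acts as a band-pass filter that highlights
    # structures whose scale lies between the two sigmas, e.g. thin fibres.
    response = difference_of_gaussians(volume.astype(np.float32), low_sigma, high_sigma)
    # Hysteresis thresholding keeps weak responses only where they are
    # connected to a response above the high threshold (region growing).
    return apply_hysteresis_threshold(response, t_low, t_high)
```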
To address this, a second method was developed using the deep learning architecture called UNet. It required training data, which was laboriously collected over a few stacks. Compared to the image processing method, the segmentation was deemed by the experts to be of better quality. On the flip side, the method required annotated training data and the use of GPUs.
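The project's exact network configuration is not described here; the sketch below shows, in PyTorch, a deliberately shallow two-level U-Net of this family, predicting per-pixel logits for the three classes (background, cells, collagen). The name TinyUNet and its layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    # Two 3x3 convolutions with ReLU: the basic U-Net building unit.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net for 3-class segmentation (background, cells, collagen)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.enc1 = block(1, 32)
        self.enc2 = block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)  # 32 skip channels + 32 upsampled channels
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)              # full-resolution features
        e2 = self.enc2(self.pool(e1))  # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)           # per-pixel class logits
```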
One issue faced throughout the project was that the annotations themselves were of poor quality. Getting good annotations was very laborious and time-consuming, despite using Ilastik as a tool to make it easier. So a third method was developed that trains the UNet on a single diagonal slice, cutting the training data requirements by two orders of magnitude. Thanks to the isotropic nature of the image volumes, a diagonal slice contains structures similar to those in horizontal or vertical slices. This made it possible to label just one slice carefully instead of nearly 400 slices imprecisely.
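A minimal sketch of extracting such a diagonal slice with SciPy is given below, assuming a cubic volume and a 45-degree plane through it; the project's actual slice orientation is not specified in this description.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def diagonal_slice(volume):
    """Resample the 45-degree plane x == z through a cubic volume."""
    n = volume.shape[0]              # assume an isotropic (n, n, n) volume
    m = int(n * np.sqrt(2))          # diagonal length, for unit voxel spacing
    u = np.linspace(0, n - 1, m)     # positions swept along the diagonal
    y = np.arange(n)
    uu, yy = np.meshgrid(u, y, indexing="ij")
    coords = np.stack([uu, yy, uu])  # (z, y, x) index arrays with z = x = u
    # Trilinear interpolation samples the volume on the diagonal plane.
    return map_coordinates(volume, coords, order=1)
```

Because the volume is isotropic, the statistics of structures on this plane match those of axis-aligned slices, which is what lets one carefully annotated diagonal slice stand in for hundreds of horizontal ones.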
Gallery

Annexe
Additional resources
Bibliography
Publications