DeepMICROIA

Deep learning for TOMCAT imaging

Started
January 1, 2018
Status
Completed

Abstract

The beamline for TOmographic Microscopy and Coherent rAdiology experimenTs (TOMCAT) at the Paul Scherrer Institut (PSI) enables synchrotron radiation X-ray tomographic microscopy (SRXTM). This technique captures the three-dimensional structure of animal tissue from organs such as the heart, brain, lungs, and bones at high speed and high resolution in a non-destructive manner. Such three-dimensional imaging can help clinicians, pathologists, and biomedical researchers gain a deeper understanding of tissue and its functioning without the cumbersome conventional process of tissue fixing, drying, staining, paraffin blocking, and slicing, which in the end yields only two-dimensional images.

PSI, ETH Zurich, and the Swiss Data Science Center (SDSC) worked jointly on automatically analysing such three-dimensional images to ease the burden on clinicians and pathologists of distinguishing healthy tissue from diseased tissue. Beyond easing that burden, three-dimensional imaging and its automatic analysis can reveal new insights into the functioning and constitution of organs. Together, accelerated tissue analysis and a better understanding of the organs will benefit patients suffering from diseases of vital organs.

The focus of DeepMICROIA is on heart tissue, in particular that of rodents. The goal is to assess the health of the tissue by estimating the amount and nature of collagen fibres. This calls for a pixel-wise segmentation of the three-dimensional image volumes. Since performing such segmentation manually is tedious and error-prone, the focus of the project is to develop automatic methods for it.

People

Scientists

SDSC Team:
PI | Partners

X-Ray Tomography Group:

  • Dr. Anne Bonnin

More info

Description

Problem:

Improving the state of the art in micro-CT image analysis for studying tissue microstructure and disease-related alterations in the heart. This required segmenting hypertensive heart tissue into collagen, cells, and background. The specific goals were to:

  1. Create and improve segmentation and quantification models
  2. Improve robustness to artefacts
  3. Reduce need for annotations

Impact:

Micron-scale CT images are essential for studying tissue microstructure and its alterations. Better automatic and semi-automatic tools enable high-throughput and more accurate analysis, far faster than the manual approaches that are currently the only widely used option.

Progress

SDSC contributed three solutions to the problem of finding collagen fibres in heart-tissue image volumes: a training-free image processing solution, a trained deep network solution, and a deep network solution that required minimal labelling. The video below shows what a segmented volume looks like (collagen fibres in red, cells in yellow, and empty space as background).

The first approach was an image processing technique: a Difference of Gaussians filter to identify potential fibres, followed by hysteresis-based region growing. The method can separate both collagen fibres and background regions using 8 manually tuneable parameters. The code was ported from C to Python for ease of use, and a user interface was developed in C++ to make parameter tuning easier. The advantage of this method is that no annotations are required and a volume can be processed on a regular laptop in a few seconds. The disadvantage is that the parameters must be tuned manually for each case, and certain low-contrast regions of the volumes are hard to segment.
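The Difference of Gaussians plus hysteresis pipeline can be sketched as follows. This is a minimal illustration, not the project's actual code; the function name and the threshold and sigma values are placeholders standing in for the 8 tuneable parameters.

```python
import numpy as np
from scipy import ndimage

def dog_hysteresis_segment(volume, sigma_small=1.0, sigma_large=2.0,
                           low=0.02, high=0.05):
    """Band-pass the volume with a Difference of Gaussians, then grow
    regions from confident (high-threshold) voxels through connected
    weaker (low-threshold) voxels, i.e. hysteresis thresholding."""
    vol = volume.astype(float)
    # Difference of Gaussians: fine-scale blur minus coarse-scale blur
    dog = (ndimage.gaussian_filter(vol, sigma_small)
           - ndimage.gaussian_filter(vol, sigma_large))
    weak = dog > low      # everything that might be fibre
    strong = dog > high   # voxels we are confident about
    # Keep only weak connected components that contain a strong voxel
    labels, _ = ndimage.label(weak)
    keep = np.unique(labels[strong])
    return np.isin(labels, keep[keep > 0])
```

In a real run, each of the sigmas and thresholds would be adjusted per volume through the tuning interface, which is exactly the manual step the later deep learning methods remove.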

A second method was therefore developed, based on the deep learning architecture UNet. It required training data, which was laboriously collected over a few stacks. Compared to the image processing method, the experts judged the segmentation to be of better quality. On the flip side, the method requires annotated training data and GPUs.
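For readers unfamiliar with the architecture, a UNet pairs a downsampling encoder with an upsampling decoder and concatenates matching-resolution feature maps across the two (skip connections). The sketch below is a toy two-level version with illustrative channel sizes, not the network used in the project; the class name and sizes are assumptions.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy 2-level UNet for 3-class segmentation
    (collagen / cells / background)."""
    def __init__(self, in_ch=1, n_classes=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 8, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(16, 8, 2, stride=2)
        # the decoder sees upsampled features concatenated with the skip
        self.dec = nn.Sequential(nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, n_classes, 1))

    def forward(self, x):
        s = self.enc1(x)             # skip connection at full resolution
        b = self.enc2(self.pool(s))  # bottleneck at half resolution
        u = self.up(b)               # upsample back to full resolution
        return self.dec(torch.cat([u, s], dim=1))
```

The output has one channel per class at the input resolution, so a per-pixel argmax yields the segmentation; this per-pixel dense prediction is what the image processing method approximates with hand-tuned thresholds.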

One issue faced throughout the project was the poor quality of the annotations themselves. Obtaining good annotations was laborious and time-consuming despite the use of Ilastik as a tool to ease the process. So a third method was developed that trains the UNet on a single diagonal slice, cutting the training data requirements by two orders of magnitude. Thanks to the isotropic nature of the image volumes, a diagonal slice contains structure similar to that of a horizontal or vertical slice. This made it possible to label just one slice carefully instead of nearly 400 slices erroneously.
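Extracting such a diagonal slice is straightforward with fancy indexing. The sketch below takes the 45-degree plane z = x through a cubic volume; because the voxels are isotropic, this plane samples tissue at the same physical scale as an axis-aligned slice. The function name and the choice of plane are illustrative, not taken from the project code.

```python
import numpy as np

def diagonal_slice(volume):
    """Return the 45-degree slice along the plane z = x of a cubic
    volume, i.e. slice[i, j] = volume[i, j, i]."""
    n = volume.shape[0]
    rows = np.arange(n)[:, None]           # shape (n, 1)
    cols = np.arange(volume.shape[1])[None, :]  # shape (1, n)
    return volume[rows, cols, rows]
```

Training on this one carefully labelled 2D slice then reuses the same UNet pipeline, with the annotation effort reduced from hundreds of slices to one.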

Gallery

Annexe

Bibliography

Publications

Related Pages

More projects

ML4FCC

In Progress
Machine Learning for the Future Circular Collider Design
Big Science Data

CLIMIS4AVAL

In Progress
Real-time cleansing of snow and weather data for operational avalanche forecasting
Energy, Climate & Environment

SEMIRAMIS

Completed
AI-augmented architectural design
Energy, Climate & Environment

4D-Brains

In Progress
Extracting activity from large 4D whole-brain image datasets
Biomedical Data Science

News

Latest news

Climate-smart agriculture in sub-Saharan Africa: optimizing nitrogen fertilization with data science
November 6, 2023

Food insecurity in sub-Saharan Africa is widespread, with crop yields much lower than in many developed regions. The project aims to use laser spectroscopy to measure fluxes and isotopic composition of N2O from maize and potato crops subjected to a range of fertilization levels.
Street2Vec | Self-supervised learning unveils change in urban housing from street-level images
October 31, 2023

It is difficult to effectively monitor and track progress in urban housing. We attempt to overcome these limitations by utilizing self-supervised learning with over 15 million street-level images taken between 2008 and 2021 to measure change in London.
DLBIRHOUI | Deep Learning Based Image Reconstruction for Hybrid Optoacoustic and Ultrasound Imaging
February 28, 2023

Optoacoustic imaging is a new, real-time feedback and non-invasive imaging tool with increasing application in clinical and pre-clinical settings. The DLBIRHOUI project tackles some of the major challenges in optoacoustic imaging to facilitate faster adoption of this technology for clinical use.
