Case ID: M19-189LC

Published: 2020-03-06 12:15:16

Last Updated: 2024-12-18 12:19:41


Inventor(s)

Zongwei Zhou
Md Mahfuzur Rahman Siddiquee
Nima Tajbakhsh
Jianming Liang

Technology categories

Computing & Information Technology, Imaging, Life Science (All LS Techs), Medical Imaging

Licensing Contacts

Jovan Heusser
Director of Licensing and Business Development
[email protected]

UNet++: A Novel Architecture for Medical Imaging Segmentation

Fully convolutional networks (FCNs) and variants of U-Net are the state-of-the-art models for medical image segmentation. However, these models have two limitations: (1) their optimal depth is a priori unknown, requiring an extensive architecture search or an inefficient ensemble of models, and (2) their skip connections impose a restrictive fusion scheme, forcing aggregation only at the same-scale feature maps of the encoder and decoder sub-networks.

 

Researchers at Arizona State University have developed a new neural architecture, UNet++, for semantic and instance segmentation. UNet++ alleviates the problem of unknown network depth with an efficient ensemble of U-Nets of varying depths, redesigns the skip connections to aggregate features of varying semantic scales at the decoder sub-networks, and devises a pruning scheme to accelerate inference. The architecture has been extensively evaluated on six medical image segmentation datasets covering multiple imaging modalities, where it outperforms the baseline models in semantic segmentation and enhances segmentation quality for objects of varying sizes, an improvement over fixed-depth models that perform well only on objects of certain sizes.
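The nested skip pathways can be illustrated with a short sketch. The following is a minimal, illustrative PyTorch-style reconstruction of a three-level UNet++, not the inventors' released implementation; the class and variable names (TinyUNetPlusPlus, ConvBlock, x0_1, head1, head2) are chosen here for clarity. Each decoder node concatenates the same-scale feature maps of all nodes that precede it along its skip pathway with an upsampled feature map from the level below, so the network embeds U-Nets of every depth and exposes one output head per embedded U-Net.

import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    # Two 3x3 convolutions with ReLU, as in a standard U-Net stage.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class TinyUNetPlusPlus(nn.Module):
    # Three-level UNet++ showing the nested, dense skip pathways
    # (an illustrative sketch, not the reference implementation).
    def __init__(self, in_ch=1, num_classes=1, chs=(32, 64, 128)):
        super().__init__()
        c0, c1, c2 = chs
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

        # Encoder backbone: X^{0,0}, X^{1,0}, X^{2,0}
        self.x0_0 = ConvBlock(in_ch, c0)
        self.x1_0 = ConvBlock(c0, c1)
        self.x2_0 = ConvBlock(c1, c2)

        # Nested decoder nodes: each fuses all same-scale predecessors
        # plus an upsampled feature map from the level below.
        self.x0_1 = ConvBlock(c0 + c1, c0)        # [X^{0,0}, up(X^{1,0})]
        self.x1_1 = ConvBlock(c1 + c2, c1)        # [X^{1,0}, up(X^{2,0})]
        self.x0_2 = ConvBlock(c0 + c0 + c1, c0)   # [X^{0,0}, X^{0,1}, up(X^{1,1})]

        # One segmentation head per embedded U-Net (used for deep supervision).
        self.head1 = nn.Conv2d(c0, num_classes, kernel_size=1)
        self.head2 = nn.Conv2d(c0, num_classes, kernel_size=1)

    def forward(self, x):
        x0_0 = self.x0_0(x)
        x1_0 = self.x1_0(self.pool(x0_0))
        x2_0 = self.x2_0(self.pool(x1_0))

        x0_1 = self.x0_1(torch.cat([x0_0, self.up(x1_0)], dim=1))
        x1_1 = self.x1_1(torch.cat([x1_0, self.up(x2_0)], dim=1))
        x0_2 = self.x0_2(torch.cat([x0_0, x0_1, self.up(x1_1)], dim=1))

        # Outputs of the depth-1 U-Net and of the full depth-2 U-Net.
        return self.head1(x0_1), self.head2(x0_2)


if __name__ == "__main__":
    model = TinyUNetPlusPlus()
    out1, out2 = model(torch.randn(1, 1, 64, 64))
    print(out1.shape, out2.shape)  # both (1, 1, 64, 64)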

 

This novel UNet++ architecture, with its redesigned skip connections, extended decoders, and deep supervision, is a significant improvement over the classical U-Net architecture and enables higher levels of performance for semantic and instance segmentation in medical imaging applications.
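Deep supervision attaches a loss to every output head, so all embedded U-Nets are trained at once on a shared representation. The sketch below continues the illustrative model above; plain binary cross-entropy is used here for brevity, whereas the publication combines a cross-entropy term with a Dice term.

import torch.nn.functional as F


def deep_supervision_loss(outputs, target):
    # Average one loss term per segmentation head, so every embedded
    # U-Net receives a gradient signal from the shared encoder features.
    # `outputs` is the tuple of logits returned by the model above;
    # `target` is a float mask of the same shape as each output.
    losses = [F.binary_cross_entropy_with_logits(o, target) for o in outputs]
    return sum(losses) / len(losses)


# Illustrative training step:
#   logits = model(images)                      # tuple: one output per head
#   loss = deep_supervision_loss(logits, masks)
#   loss.backward()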

 

Potential Applications

•       Medical image segmentation (CT, MRI, electron microscopy, etc.)

o       Computer-aided diagnoses (cancers, nodules/polyps, anatomical defects, other diseases)

o       Automatic measurement/segmentation of tissues and organs

o       Cell counting and segmentation

o       Simulations based on determined boundaries

o       Contouring during treatment planning

 

Benefits and Advantages

•       Training UNet++ with deep supervision results in all constituent U-Nets being trained simultaneously while benefiting from a shared image representation

•       Redesigned skip connections

o       Highly flexible feature fusion scheme – aggregation is no longer restricted to same-scale feature maps from the encoder and decoder

•       Enhances segmentation quality of varying-size objects

•       UNet++ with deep supervision achieves an average IoU gain of 3.9 and 3.4 points over U-Net and wide U-Net, respectively

•       UNet++ is not sensitive to the choice of network depth because it embeds U-Nets of varying depths in its architecture

•       Improves overall segmentation performance and also enables model pruning at inference time (see the sketch after this list)

o       Pruned UNet++ models achieve significant speedup with only modest performance degradation
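Because each embedded U-Net has its own supervised output head, the deeper decoder nodes can simply be skipped at inference and a shallower head used instead, trading a modest amount of accuracy for speed. The sketch below continues the illustrative TinyUNetPlusPlus model above; the depth argument and the early return are assumptions for illustration, not the inventors' pruning interface.

import torch


@torch.no_grad()
def pruned_forward(model, x, depth=1):
    # Run only the sub-network needed for the chosen output head.
    # With depth=1, nodes X^{2,0}, X^{1,1} and X^{0,2} are never
    # computed; that skipped work is the source of the speedup.
    x0_0 = model.x0_0(x)
    x1_0 = model.x1_0(model.pool(x0_0))
    x0_1 = model.x0_1(torch.cat([x0_0, model.up(x1_0)], dim=1))
    if depth == 1:
        return model.head1(x0_1)

    # depth == 2: the full, unpruned path.
    x2_0 = model.x2_0(model.pool(x1_0))
    x1_1 = model.x1_1(torch.cat([x1_0, model.up(x2_0)], dim=1))
    x0_2 = model.x0_2(torch.cat([x0_0, x0_1, model.up(x1_1)], dim=1))
    return model.head2(x0_2)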

 

For more information about this opportunity, please see

Zhou et al – ArXiv.org – 2019

Zhou et al – Poster

Zhou et al – GitHub

For more information about the inventor(s) and their research, please see

Dr. Liang’s departmental webpage