Speaker
Description
Recent advances in deep learning have revolutionized the way microscopy images of cells are processed. Deep learning network architectures have a large number of parameters; thus, to reach high accuracy, they require massive amounts of annotated data. A common way of improving accuracy is to artificially enlarge the training set with different augmentation techniques. A less common way relies on test-time augmentation (TTA), in which transformed versions of the image are generated at prediction time and the resulting predictions are merged. In the current paper, we describe how the test-time augmentation prediction method can be incorporated into the two major segmentation approaches used in single-cell analysis of microscopy images, namely semantic segmentation with U-Net and instance segmentation with Mask R-CNN. Our findings show that even simple test-time augmentations, such as rotation or flipping, combined with proper merging methods, result in a significant improvement of prediction accuracy. We utilized images of tissue and cell cultures from the Data Science Bowl (DSB) 2018 nuclei segmentation competition and other sources. Additionally, by boosting the highest-scoring DSB method with TTA, we could further improve it, and our method reached the best score achieved at the DSB to date.
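To make the TTA idea concrete, the sketch below illustrates the general scheme described above: predict on rotated and flipped copies of an image, map each prediction back to the original frame, and merge the results. This is a minimal illustration, not the authors' implementation; the function names (`tta_predict`, `model_predict`) and the mean/max merging choices are assumptions for the example.

```python
import numpy as np

def tta_predict(model_predict, image, merge="mean"):
    """Test-time augmentation for semantic segmentation:
    predict on flipped/rotated copies of the image, undo each
    transform on the output, then merge the probability maps."""
    # Forward transforms paired with their inverses (applied to the prediction).
    transforms = [
        (lambda x: x,              lambda y: y),               # identity
        (lambda x: np.fliplr(x),   lambda y: np.fliplr(y)),    # horizontal flip
        (lambda x: np.flipud(x),   lambda y: np.flipud(y)),    # vertical flip
        (lambda x: np.rot90(x, 1), lambda y: np.rot90(y, -1)), # 90-degree rotation
        (lambda x: np.rot90(x, 2), lambda y: np.rot90(y, -2)), # 180-degree rotation
        (lambda x: np.rot90(x, 3), lambda y: np.rot90(y, -3)), # 270-degree rotation
    ]
    preds = []
    for fwd, inv in transforms:
        pred = model_predict(fwd(image))  # probability map, same H x W as the input
        preds.append(inv(pred))           # map the prediction back to the original frame
    stacked = np.stack(preds, axis=0)
    # Merge the aligned predictions; averaging and taking the maximum are two simple choices.
    return stacked.mean(axis=0) if merge == "mean" else stacked.max(axis=0)

# Usage with a dummy predictor standing in for a trained U-Net's forward pass.
dummy_predict = lambda img: img  # hypothetical stand-in model
image = np.random.rand(256, 256)
mask_prob = tta_predict(dummy_predict, image)
```

For instance segmentation (e.g., Mask R-CNN), the merging step is more involved, since per-object masks from each augmented view must be matched before they can be fused; the scheme above only covers the semantic-segmentation case.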