Because it can provide complementary information about a target (tumor, organ, or tissue), multi-modal imaging is widely used in medical imaging, and multi-modal segmentation fuses these information sources to improve segmentation quality. Deep learning-based approaches have recently achieved state-of-the-art results in image classification, segmentation, object detection, and tracking, and multi-modal medical image segmentation has likewise attracted the interest of deep learning researchers because of these models' capacity for self-learning and generalization over large amounts of data. In this paper, we present an overview of deep learning-based methods for multi-modal medical image segmentation. We first discuss multi-modal medical image segmentation and the general principles of deep learning. We then compare the outcomes of various fusion strategies and deep learning network architectures. Early fusion is frequently used because it is straightforward and shifts the design effort to the subsequent segmentation network architecture. Late fusion, by contrast, places greater emphasis on the fusion strategy itself in order to learn the intricate relationships between the modalities; with a sufficiently effective fusion strategy, late fusion can generally yield more accurate results than early fusion. We also discuss some typical challenges in medical image segmentation. Finally, we offer a summary and some perspectives on future research.
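To make the early/late distinction concrete, the following is a minimal PyTorch sketch, not drawn from any of the surveyed works: the class names, layer choices, and two-modality setup are illustrative assumptions. Early fusion concatenates the raw modalities at the input of a single network, while late fusion gives each modality its own encoder and fuses the learned features afterwards.

```python
import torch
import torch.nn as nn

class EarlyFusionSeg(nn.Module):
    """Early fusion (illustrative): modalities are concatenated at the
    input, so all the design effort goes into the downstream network."""
    def __init__(self, n_modalities, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_modalities, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, modalities):           # list of (B, 1, H, W) tensors
        x = torch.cat(modalities, dim=1)     # fuse at the input level
        return self.net(x)

class LateFusionSeg(nn.Module):
    """Late fusion (illustrative): each modality gets its own encoder;
    the fusion step over learned features carries more of the weight."""
    def __init__(self, n_modalities, n_classes):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
            for _ in range(n_modalities)
        )
        self.head = nn.Conv2d(16 * n_modalities, n_classes, 1)

    def forward(self, modalities):
        feats = [enc(m) for enc, m in zip(self.encoders, modalities)]
        return self.head(torch.cat(feats, dim=1))  # fuse learned features

# Usage: two modalities (e.g., T1 and T2 MRI), two output classes.
mods = [torch.randn(1, 1, 64, 64) for _ in range(2)]
print(EarlyFusionSeg(2, 2)(mods).shape)   # torch.Size([1, 2, 64, 64])
print(LateFusionSeg(2, 2)(mods).shape)    # torch.Size([1, 2, 64, 64])
```

The simple concatenation in the late-fusion head is only a placeholder; the surveyed fusion strategies differ precisely in how this step is designed (e.g., learned weighting or attention over per-modality features).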
Received: 2022-08-01; Published: 2022-08-18