MULAN: Multitask Universal Lesion Analysis Network for Joint Lesion Detection, Tagging, and Segmentation

Title: MULAN: Multitask Universal Lesion Analysis Network for Joint Lesion Detection, Tagging, and Segmentation
Publication Type: Conference Proceedings
Year of Conference: 2019
Authors: Yan K, Tang Y, Peng Y, Sandfort V, Bagheri M, Lu Z, Summers RM
Conference Name: Medical Image Computing and Computer Assisted Intervention – MICCAI
Volume: 11769
Pagination: 194-202
Date Published: 10/2019
Publisher: Springer, Cham
ISBN Number: 978-3-030-32226-7
Abstract

When reading medical images such as a computed tomography (CT) scan, radiologists generally search across the image to find lesions, characterize and measure them, and then describe them in the radiological report. To automate this process, we propose a multitask universal lesion analysis network (MULAN) for joint detection, tagging, and segmentation of lesions in a variety of body parts, which greatly extends existing work on single-task lesion analysis of specific body parts. MULAN is based on an improved Mask R-CNN framework with three head branches and a 3D feature fusion strategy. It achieves state-of-the-art accuracy in the detection and tagging tasks on the DeepLesion dataset, which contains 32K lesions across the whole body. We also analyze the relationship between the three tasks and show that tag predictions can improve detection accuracy via a score refinement layer.
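The score refinement idea mentioned in the abstract can be illustrated with a minimal sketch: a raw detection confidence is combined with the per-tag probabilities from the tagging branch through a learned linear layer. All names, weights, and the exact combination rule below are illustrative assumptions, not the paper's actual implementation (which learns the refinement end-to-end within the network).

```python
from math import exp

def refine_score(det_score, tag_probs, tag_weights, bias=0.0):
    """Hypothetical score refinement: combine a detection score with
    tag predictions via a weighted sum, then squash to (0, 1).

    det_score   -- raw confidence from the detection branch
    tag_probs   -- per-tag probabilities from the tagging branch
    tag_weights -- assumed learned weights (one per tag)
    """
    logit = det_score + sum(w * p for w, p in zip(tag_weights, tag_probs)) + bias
    return 1.0 / (1.0 + exp(-logit))

# A tag strongly associated with true lesions (positive weight) raises the
# refined score; an implausible tag (negative weight) lowers it.
refined = refine_score(0.5, tag_probs=[0.9, 0.1], tag_weights=[1.0, -1.0])
```

In this toy setting, the refined score for a detection whose predicted tags agree with the positive-weight tag ends up higher than the raw 0.5, mirroring the abstract's observation that tag predictions can improve detection accuracy.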

DOI: 10.1007/978-3-030-32226-7_22