Abstract
Most existing deep-learning-based methods for retinal vessel segmentation neglect two important aspects of retinal vessels: the orientation information of individual vessels and the contextual information of the whole fundus region. In this paper, we propose a robust orientation and context entangled network (OCE-Net) that extracts complex orientation and context information from blood vessels. To achieve complex orientation-aware convolution, we propose a dynamic complex orientation-aware convolution (DCOA Conv) that extracts vessels with multiple entangled orientations, improving vessel continuity. To capture global context while emphasizing important local details, we propose a global and local fusion module (GLFM) that models the long-range dependencies of vessels and, at the same time, attends sufficiently to local thin vessels. A novel orientation and context entangled non-local (OCE-NL) module is further proposed to entangle the orientation and context information. In addition, an unbalanced attention refining module (UARM) is proposed to handle the highly unbalanced pixel counts of the background, thick vessels, and thin vessels. Extensive experiments were performed on several commonly used datasets (DRIVE, STARE, and CHASEDB1) and on more challenging datasets (AV-WIDE, UoA-DR, RFMiD, and UK Biobank). Ablation studies show that the proposed modules help maintain the continuity of thin vessels, and comparative experiments show that OCE-Net achieves strong retinal vessel segmentation performance, demonstrating the effectiveness of the proposed framework.
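To make the orientation-aware idea behind DCOA Conv concrete, the sketch below combines responses of several oriented filters using dynamically predicted softmax weights. This is a minimal illustration only: the hand-crafted line kernels, the `gate_logits` input, and all function names are assumptions for exposition; in the actual DCOA Conv, both the orientation-aware kernels and the gating are learned end to end.

```python
import numpy as np

def oriented_kernels(size=7, n_orient=4):
    """Build simple line-shaped kernels at several orientations.

    Illustrative stand-ins for learned orientation-aware filters;
    the real DCOA Conv kernels are learned, not hand-made.
    """
    half = size // 2
    kernels = []
    for k in range(n_orient):
        theta = np.pi * k / n_orient  # 0, 45, 90, 135 degrees
        kern = np.zeros((size, size))
        for t in range(-half, half + 1):
            r = half + int(round(t * np.sin(theta)))
            c = half + int(round(t * np.cos(theta)))
            kern[r, c] = 1.0
        kernels.append(kern / kern.sum())
    return kernels

def conv2d_valid(img, kern):
    """Naive valid-mode 2-D correlation (for clarity, not speed)."""
    kh, kw = kern.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def dynamic_orientation_response(img, gate_logits):
    """Fuse per-orientation responses with softmax gates.

    In a real dynamic convolution, gate_logits would be predicted
    per input by a small network; here it is simply an argument.
    """
    kerns = oriented_kernels(n_orient=len(gate_logits))
    w = np.exp(gate_logits - np.max(gate_logits))
    w = w / w.sum()
    responses = [conv2d_valid(img, k) for k in kerns]
    return sum(wi * r for wi, r in zip(w, responses))
```

For example, on an image containing a vertical vessel-like line, a gate vector that favors the 90-degree kernel yields a response that peaks along that line, illustrating how orientation-specific filtering can preserve the continuity of a vessel running in one direction.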