I. Introduction
Object detection is a fundamental visual analysis task that aims to recognize and localize objects [1]. Early research mainly focused on horizontal object detection in natural scenes. In aerial images, however, instances are densely distributed in arbitrary orientations, so horizontal bounding boxes (HBBs) become misaligned with the objects they enclose. An HBB around an arbitrarily oriented object contains a large amount of irrelevant background, which seriously degrades detection performance; moreover, an HBB cannot accurately and uniquely localize an oriented object. Oriented object detection therefore emerged, and in recent years it has drawn increasing attention due to the demands of diverse scenes, including aerial images [2], [3] and scene texts [4], [5], among others [6], [7].

Existing oriented object detectors usually follow the general object detection paradigm and represent oriented bounding boxes (OBBs) by adding a rotation angle to the HBB, so the five-parameter representation (x, y, w, h, θ) is widely used. Despite satisfactory results, angle-regression-based oriented detection faces new issues. Unlike the horizontal case, the Intersection over Union (IoU) of two OBBs is non-differentiable for learning [8]. Owing to the periodicity of angle (PoA) and the exchangeability of edges (EoE), angle-regression-based detectors often suffer from the boundary discontinuity and square-like problems [9]: PoA and EoE can push the ideal prediction outside the defined angle range, and the resulting sharp increase in loss at the boundary makes regression inconsistent between boundary and non-boundary cases, easily destabilizing training. These issues greatly limit the performance of oriented detectors.
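The boundary discontinuity caused by PoA can be illustrated numerically. The sketch below assumes the long-edge five-parameter definition with the angle restricted to [-90°, 90°): two boxes that differ by only 2° of physical rotation sit on opposite sides of the angle boundary, so a naive L1 regression loss on the angle term is enormous even though the geometric error is tiny. The function names are illustrative, not from any particular detector.

```python
def angle_l1(theta_pred, theta_gt):
    # Naive parameter-space L1 distance on the angle term,
    # as used by plain angle-regression losses.
    return abs(theta_pred - theta_gt)

def wrapped_angle_diff(theta_pred, theta_gt, period=180.0):
    # True angular difference, accounting for the 180-degree
    # periodicity of an oriented box (PoA).
    d = (theta_pred - theta_gt) % period
    return min(d, period - d)

# Ground truth at 89 degrees; the "same" box rotated 2 degrees
# further wraps around the boundary and is represented as -89.
gt_theta = 89.0
pred_theta = -89.0

print(angle_l1(pred_theta, gt_theta))          # 178.0 (loss spike)
print(wrapped_angle_diff(pred_theta, gt_theta))  # 2.0 (actual error)
```

The mismatch between the two values (178 versus 2) is exactly the sharp loss increase at the boundary described above: the regression target is geometrically close but numerically distant in parameter space.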