This paper presents a solution for the efficient retrieval of medical images. Depending on the user, the same image can be described from different views: an image can be characterized by its low-level properties, such as texture or color; by contextual data, such as acquisition date or author; or by its semantic content, such as real-world objects and their relations. Our approach provides a multispaced description model capable of integrating these different facets (or views) of a medical image. Visual retrieval solutions are recommended and are the most appropriate for non-computer-science users. However, current visual languages suffer from several problems, notably ambiguities introduced by the user and/or the system, and imprecision at different levels of image description. In this paper, we present our solution and demonstrate how ambiguities can be resolved and how the spatial content of medical images can be described precisely. An implementation called the Medical Image Management System (MIMS) has been realized as a proof of concept, and a set of tests has been deployed to validate the prototype.
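To make the multi-faceted description idea concrete, the following is a minimal sketch (not the paper's actual model) of a record that keeps each facet in its own space; all class, field, and method names here are hypothetical illustrations.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ImageDescription:
    """Hypothetical multi-view description of one medical image.

    Each facet lives in its own description space, so a query can
    target any combination of low-level, contextual, and semantic views.
    """
    image_id: str
    low_level: Dict[str, float] = field(default_factory=dict)   # e.g. texture or color statistics
    contextual: Dict[str, str] = field(default_factory=dict)    # e.g. acquisition date, author
    semantic: List[str] = field(default_factory=list)           # e.g. named real-world objects

    def matches(self, facet: str, key: str) -> bool:
        """Return True if `key` appears in the chosen facet's space."""
        space = getattr(self, facet)
        return key in space

desc = ImageDescription(
    image_id="mri-001",
    low_level={"mean_gray": 0.42},
    contextual={"author": "Dr. A", "date": "2004-05-01"},
    semantic=["left ventricle", "aorta"],
)
print(desc.matches("semantic", "aorta"))       # matches on the semantic view
print(desc.matches("contextual", "author"))    # matches on the contextual view
```

A retrieval engine built on such a model could then combine per-facet matches into a single ranking, which is the kind of integration the multispaced model described above aims to support.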