Introduction
Images contain many elements that carry important details and information. One of these is the pixel, of which a digital image consists. An image's pixels are exploited to find the edges of objects and regions inside digital images: they help discover the borders between regions through variations in their intensities, colors, and/or values. Digital images are involved in a wide variety of applications and have been efficiently and extensively exploited by research studies in different fields. For example, Digital Image Processing (DIP) has been used to deliver real-time responses for security purposes thanks to high-resolution images, exploiting pixel intensities and variations for a wide diversity of purposes, some of which are extensively and technically described and reviewed in [1]–[14].
Digital images usually include many items to which different image processes are applied. Each item or process carries a different degree of importance depending on the feature(s) to be obtained from processing it. One important consideration regarding pixels is how to accurately process each pixel with respect to its 4 or 8 neighbors. In this article, an image's pixels and their connected neighbors are treated in order to accurately extract the edges of objects and regions inside digital images. That is, pixel contrast and pixel intensity variations in binarized images are exploited by focusing on 1 and 0 values.
Edge detection is one of a number of correlated image processes. It has been adopted by many digital image processes, and many related fields and image-based applications have effectively and efficiently exploited it due to its importance and features. An image's edges carry a great deal of information and detail. These details enable further processes to be applied once edges have been detected and extracted; the more accurately the edges are extracted, the more accurate and robust those further processes are. Examples of applications that may rely mainly on an image's edges include medical imaging [15], [16], machine vision [17], motion detection for tracking purposes [18], smart vehicle technologies [19]–[23], safety applications [11], [24], and so on. For these applications, different types of edge detection techniques have been used, for example, vertical edges [25], text analysis [18], and many others.
Several issues in these methods still need to be addressed in order to increase performance quality. For example, computation time is important for several applications. Additionally, code complexity is a considerable factor in keeping a proposed method as simple as possible. Accurately detected edges contribute much to the edge detection process and to further image processes. This research work aims for a short computation time while processing an image and an accurate edge detection process. Code complexity is usually related to computation time in most cases and scenarios; this is evaluated in the Results and Discussion section.
A Pixel Intensity-based Contrast Algorithm (PICA) for Image Edges Extraction (IEE) is presented and discussed in detail in this paper. PICA aims to extract edges from complex-background images using a simple and accurate edge detector.
Another important issue related to edge detection is that some techniques use global thresholding or do not efficiently consider regional and neighboring pixels. Such an issue affects further image processes, and the whole performance as well, when most of the foreground and background have not been carefully considered. Therefore, in this research study, to detect and extract edges efficiently, it is proposed to binarize images using adaptive thresholding [26], [27].
Current applied mask-based techniques e.g.,
The paper is organized as follows: Section II provides the literature review. Section III describes in detail the proposed Pixel Intensity-based Contrast Algorithm (PICA) for the Image Edges Extraction (IEE) process. Section IV discusses the experimental setup and preparation. This is followed by the results and discussion in Section V. Finally, the conclusion is given in Section VI.
Literature Review
Edge extraction (EE) is a key process for many related applications and digital image processes in many fields and areas. Thus, EE has been widely and extensively exploited to contribute to the image processes needed by a given image-related application. EE yields many image features from which significant information can be derived and fed to the respective application. A brief review of applications utilizing EE follows.
The work presented in [29], [30] used a histogram equalization process in order to represent and recognize a series of speech. Histogram equalization has also been exploited by other techniques and applications, such as mapping and geographic information systems, to extract certain regions [31]. In robotics, DIP has been very widely exploited [32] to implement crucial image-based tasks [31], [33]–[37], e.g., to perform 3D-based motion detection and multi-frame image recognition [38], to implement a vision-based tracking process for underwater vehicles [39], to enable an image sketching procedure from which a selected robot can benefit [32], or to enable a moving robot to autonomously detect shadows of region-of-interest (ROI) objects [40]. DIP has also been used in machine translation to measure the distance between an object and a machine for a better focus calculation [41] and to propose a spectral-spatial classification method [42]. One attractive social-network application is discussed in [43], which proposes a method for retrieving images from websites. In [44], a method is proposed to analyze images being transferred between social networks, where the analysis considers the level of pixel resolution in terms of color values and intensities for security purposes.
One interesting research field is image-based social networks. With the help of image processes, such networks can offer a varied band of useful services and implement important tasks for numerous applications. In [45], an approach that exploits textual and visual details in parallel to obtain the relation between tagged images is discussed.
One of the most important fields in which DIP is extensively exploited is medical imaging [46]–[52]. Different applications in the literature have been reviewed. In [53], for example, a method is studied to detect whether a fusion-based process can be applied to multimodal medical images or not. A region-based segmentation process [54] was applied so that certain objects could be highlighted in medical images. Similarly, a detection method was designed in [55] to passively extract features from medical images, taking into consideration pixel intensities, region contrast, and inclined images. It tries to detect such tampering by comparison with the original digital images taken from the source after they have been subjectively evaluated. Sometimes, medical imaging applications require an additional supportive technique; that is, they often need to be combined with neural network techniques in order to carry out complicated processes. For example, in [56], several image processes were applied to medical images, including segmentation, feature extraction, and classification. A trained neural network technique is then applied to the digital images for further processing.
Recently, the topic of interactive image-based applications that require several successive processes has become of much interest. Many methods and approaches have been proposed across a wide range of research areas to provide a semi-optimal level of performance, taking into account accuracy, reliability, and robustness; e.g., an interactive image-based approach is discussed in [57]. In that work, an image-based segmentation process is applied to an image's pixels in order to enhance segmentation accuracy using an interactively updated game-theoretic optimization approach. It contributes to solving graph constraints caused by multi-layer combination and also reduces the computation time caused by interactive layers of pixels in multi-frame images.
Another example is described in [58]. It adopts a game-theoretic approach applied at the pixel level for an image noise removal algorithm. The interesting point of this approach is that it mimics the role of a game player in relation to the player's partners; the approach reviewed in [58] follows the same scenario with pixel-domain neighbors and joint pixels in order to efficiently remove noise from neighborhoods, with errors kept as low as possible, to obtain a highly accurate noise-free image. A bio-medical image processing application presented in [59] replaced the vision-based tracking approach with game theory: an image-guided game-theory application is proposed in order to track changes occurring during surgery. This application also utilizes game theory to acquire images by inferring information collected from several images, where the image acquisition process is aimed at producing accurate decisions. Similarly to the two research works reviewed in [58], [59], the image denoising process was efficiently exploited to enhance the application of game theory in [60]. That work utilized both the clustering of pixel regions discussed in [58] and the reuse of each pixel as a game player mentioned in [59]. This enhanced region-based segmentation and its boundary detection; thus, it reduced noise by considering sets of pixel neighbors when the work proposed in [60] was compared to median-filter-based algorithms in terms of visual quality, as stated. Numerous research works that process images pixel-by-pixel to reduce noise under dynamical-behavior-based game theory have been reviewed in the literature, e.g., [61], [62]. Each implemented algorithm treated pixels as players in order to detect boundaries and edges or to segment criteria-based regions.
Image-based gaming applications are extensive due to their features; for example, they can detect a location more accurately than other location detection algorithms. In [63], this concept was used with pixel-based manipulation in order to localize a set of unequal objects inside an image for an augmented reality game.
The applications mentioned above include many image processes, such as edge detection, region extraction, contrast equalization, and pixel-by-pixel manipulation, used to perform several image-based tasks. Among them, edge extraction is of key importance for subsequent image processes. Therefore, the edge extraction process can affect the whole performance of those applications in terms of computation time, accuracy, and detection rate. Edge extraction needs to be simple and accurate so that it can contribute much to many image processing applications. In this paper, PICA is proposed, in which a simple mask is designed to perform the IEE process. The proposed PICA for IEE aims to achieve fast processing with low code complexity and computation time.
The Proposed Pixel Intensity-Based Contrast Algorithm (PICA) for Image Edges Extraction (IEE)
The proposed PICA flowchart contains four main processes as shown in Figure 1.
Figure 1 shows four processes. In the first process, the image is pre-processed. In the second one, the image is enhanced. The proposed pixel-based
A. IPP
In this sub-section, several processes are proposed and applied to the input image. The color input image is loaded and initialized and then converted into a grayscale image. After that, a thresholding process is used to acquire a black-and-white image from the grayscale one. The proposed flowchart of image pre-processing is shown in Figure 2.
1) IIIP
The color input image is loaded, and its size is tested according to the following condition: if the image's height is 1024 pixels or below and its width is 768 pixels or below, then the image is processed.
2) GICP
The input to this process is a color image, which is processed in order to compose the grayscale image. The following procedure is applied to the color image shown in Figure 3(a).
To convert the red, green, and blue (RGB) values in the color image into grayscale values, a weighted sum of the R, G, and B components is computed as formulated in (1):\begin{equation*} I_{gs}=0.299\times R+0.587\times G+0.114\times B\tag{1}\end{equation*}
The proposed pseudocode, implemented with the Open Computer Vision (OpenCV) library in the C++ programming language using the DEV-C++ Integrated Environment (IE), is provided in Algorithm 1.
After this process is applied to the input image shown in Figure 3(a), the output image is as shown in Figure 3(b).
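As an illustration, the weighted conversion in (1) can be sketched without OpenCV over a plain interleaved-RGB buffer; the function name and the flat-vector data layout are assumptions for this sketch, not the paper's Algorithm 1.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Weighted grayscale conversion of (1): I_gs = 0.299 R + 0.587 G + 0.114 B.
// Pixels are stored as interleaved 8-bit RGB triplets (an illustrative layout).
std::vector<uint8_t> toGrayscale(const std::vector<uint8_t>& rgb,
                                 int width, int height) {
    std::vector<uint8_t> gray(static_cast<size_t>(width) * height);
    for (size_t p = 0; p < gray.size(); ++p) {
        double r = rgb[3 * p + 0];
        double g = rgb[3 * p + 1];
        double b = rgb[3 * p + 2];
        // Round the weighted sum to the nearest 8-bit intensity.
        gray[p] = static_cast<uint8_t>(
            std::lround(0.299 * r + 0.587 * g + 0.114 * b));
    }
    return gray;
}
```

A pure-red pixel (255, 0, 0), for instance, maps to the rounded value of 0.299 × 255.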
3) ATTP
An adaptive thresholding (AT) technique is needed because foreground contents might be dismissed if a global thresholding technique were used. The feature of AT techniques is that they consider all pixels of regions and objects that exist in the neighborhood of the currently processed pixel. While the mask is moving, neighboring pixels need to be carefully considered. In this process, an AT technique has been used [26], [27]. The proposed pseudocode of this process is provided in Algorithm 2.
In Algorithm 2, the current pixel of the input image (i.e., a grayscale image) is compared to its locally neighboring pixels, as mentioned in line 13 of Algorithm 2.
The condition applied in order to produce the binarized value is mathematically represented in (2):\begin{align*} p_{th}(i,j)=\begin{cases} \displaystyle 0,& p_{in}(i,j)\times s^{2} < {op}_{s}\times c \\ \displaystyle 255,& otherwise \end{cases}\tag{2}\end{align*}
where:

$p_{th}(i,j)$ is the output thresholded value assigned to the pixel inside the thresholded image,
$p_{in}(i,j)$ is the currently processed pixel at location $(i,j)$ in the input grayscale image,
$s^{2}$ is the squared local region to which $p_{in}(i,j)$ belongs,
${op}_{s}$ is a summation operator applied to a group of pixels per row, performed using (3), and
$c$ is a constant by which the binarization process is adjusted, computed using the formula given in (4):
\begin{align*} {op}_{s}=&\sum \nolimits _{j=0}^{w_{I_{in}}} {p_{in}(i,j)} \vert _{i^{th}} \tag{3}\\ c=&1-T\tag{4}\end{align*}
In (2), if the logical condition is true, the corresponding pixel intensity in the input image is converted to 0 (black); otherwise, it is set to 255 (white).
Once the AT is applied to the image (Figure 3 (b)), the output result will be a thresholded image shown in Figure 4.
In Figure 4, the foreground contents of regions and objects have been maintained after AT has been performed. This thresholding technique is able to process low-contrast areas inside the grayscale image.
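A minimal sketch of the decision rule in (2), with $c = 1 - T$ from (4), follows. Here ${op}_{s}$ is taken as the sum over the $s\times s$ neighborhood of the current pixel (a Bradley-style reading of the cited AT technique [26], [27]); the border replication and the function name are illustrative assumptions, not the paper's Algorithm 2.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Adaptive thresholding sketch of (2)-(4): a pixel becomes black (0) when
// p_in * |window| < op_s * c, and white (255) otherwise, where op_s is the
// sum of the s-by-s neighborhood and c = 1 - T.
std::vector<uint8_t> adaptiveThreshold(const std::vector<uint8_t>& gray,
                                       int width, int height,
                                       int s, double T) {
    const double c = 1.0 - T;                                  // (4)
    const int half = s / 2;
    std::vector<uint8_t> out(gray.size(), 255);
    for (int i = 0; i < height; ++i) {
        for (int j = 0; j < width; ++j) {
            double opS = 0.0;                                  // local window sum
            int count = 0;
            for (int di = -half; di <= half; ++di)
                for (int dj = -half; dj <= half; ++dj) {
                    int y = std::clamp(i + di, 0, height - 1); // replicate borders
                    int x = std::clamp(j + dj, 0, width - 1);
                    opS += gray[static_cast<size_t>(y) * width + x];
                    ++count;
                }
            double pin = gray[static_cast<size_t>(i) * width + j];
            if (pin * count < opS * c)                         // (2)
                out[static_cast<size_t>(i) * width + j] = 0;
        }
    }
    return out;
}
```

With T = 0.15 (the value used later in the experiments), a pixel noticeably darker than its neighborhood mean is binarized to black.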
B. IEP
Image enhancement consists of several sub-processes applied to the output image of the AT step and includes three proposed steps. The first is a pre-process in which a statistical operation is applied to the AT image; it checks whether the image being processed has many details or not. To make this decision, the Image Background details Histogram Process (IBHP) is applied to the image. The other two are post-processes in which pixel-wise operations are applied to yield an enhanced image. The image enhancement process aims to remove unwanted details and certain pixels. For noise removal, there are two post-processes, Noise Removal of Pixels-wise (NR-o-P) and Noise Removal of Lines (NR-o-L), which process unnecessary details existing in horizontal, vertical, and diagonal lines. The flowchart of the image enhancement process is shown in Figure 5.
1) IBHP
In this process, a procedure is carefully applied to each region and/or object as one unit; that is, the procedure is applied to each object separately, while all of the object's pixels are treated together. IBHP performs histogram statistics for every region to decide whether the currently processed region has a huge number of pixels with almost similar intensity or not. The process then moves to the second region, proceeding from top to bottom and from left to right; the third region is then checked, then the fourth, and so forth, until all regions and objects have been covered. From the total number of foreground pixels, it is judged whether the image has many details or not. If yes, the process jumps to the NR-o-L step; otherwise, the image is enhanced using two subsequent steps, NR-o-P and NR-o-L. The output image, after both NR-o-P and NR-o-L have been completed, is sent to the next process so that the proposed mask can be applied to it.
2) NR-o-P
This process is applied to the thresholded image. NR-o-P processes pixels one by one in relation to their neighboring pixels, considering both four-neighbor and eight-neighbor relationships. The currently processed pixel is checked for a potential relation to its neighboring pixels. If one exists, the pixel may be linked to an adjacent region through a four- or eight-neighbor relationship via shared features in terms of pixel intensity; in this case, the processed pixel is considered a foreground pixel and is maintained. The region of interest (ROI) in this scenario is the foreground details. Otherwise, the pixel is classified as belonging to the image's background; in that case, it is unnecessary and is removed.
3) NR-o-L
In order to remove unnecessary details that are probably noise or do not belong to objects inside the image, clusters of pixels that have no relation to neighboring pixels are checked in order to decide whether to remove or keep them. The proposed PICA therefore applies an elimination algorithm, proposed in [64], [65], dedicated to lines that do not belong to the ROI's regions. In [64], the algorithm processes each pixel that exists in a cluster of pixels to determine whether it belongs to the background of the image or not. If yes, it is considered unnecessary and is eliminated; otherwise, the pixel is kept. The algorithm in [64] is able to process lines that exist inside the image in different directions, even for inclined objects, because it considers diagonal lines in addition to crossed lines, representing the processed pixels in 4- and 8-neighborhoods similar to the von Neumann and Moore neighborhoods.
Both the NR-o-P and NR-o-L processes have been used to enhance the thresholded image shown in Figure 6 (a); the result is given in Figure 6 (b).
Image enhancement process. (a) a thresholded image and (b) an enhanced binarized image of (a).
C. IEE
1) Overview
In this part, the enhanced thresholded image (Figure 6 (b)) is considered as the input for this process. The input image will be processed using the
2) Proposed Design of the Mask
To implement the edge detection process, the mask shown in Figure 7 is proposed.
3) Proposed $2\times4$ Mask Flowchart
A proposed flowchart explaining steps of the
The flowchart shown in Figure 8 briefly explains the general steps of the conceptual idea of the proposed mask, which is applied to extract the edges of objects based on the contrast between pixel intensities.
4) Technical Procedure of IEE
This process is dedicated to exploiting the feature of pixel-based contrast. By utilizing the pixel intensities of the binarized image and the abundant contrast that exists between white and black pixels, the
The final step of PICA is to apply the proposed mask to the enhanced thresholded image shown in Figure 6 (b). When the $2\times4$ mask is applied to the selected (i.e., thresholded) image, vertical edges are highlighted and extracted. For each region, left and right edges are extracted, with the feature that the left edges have double the thickness of the right edges. By applying the mask to the input, edge extraction and detection can be highlighted.
In order for PICA to perform the IEE process, the proposed
5) Mask’s Design and Coding
In regard to the IEE process, the proposed PICA has adopted a simple mask with a design that allows processing two neighboring pixels at once as the center of the mask. This design has reduced the processing time needed to process a single image by
6) Big-O-Notation and Code Complexity
This subsection describes a simple analysis of the proposed IEE process. First, the code structure is explained. Second, the code complexity is analyzed using Big-O-Notation.
a: Code Structure
The proposed PICA code relies on if-statement-based conditions in order to fulfil two criteria: an if-statement conditional procedure and parallel processing of pixels. For the first criterion, it is important to use a smaller number of while-loops in the code. For the second criterion, two pixels are processed at once; that is, it is important to check not just one pixel while the proposed mask moves, but two pixels at location (i, j) at once. These two criteria are further explained below for clarification.
Criteria 1: If-Statement Conditional Procedure (If-CP)
The proposed design of the IEE code focuses on reducing the use of while-loops. Instead, it uses if-statement conditions so that the time complexity is reduced by
Criteria 2: Pixels Parallel Processing Procedure (4P)
In this procedure, every pixel needs to be checked, as in lines 5, 7, and 9 of Algorithm 3. The proposed code uses a checking process in parallel: it checks two pixels at once rather than pixel-by-pixel each time. This has reduced the code complexity to $\frac{1}{4}$ of its previous value, as formulated in (9).
b: Big-O-Notation Analysis
As discussed above, the designed code proposes two procedures to be applied in order to reduce the code complexity while processing images. Here, the code complexity of the IEE process is provided step-by-step in equations (5)–(9):
The code complexity before the if-CP is applied is formulated in (5):\begin{equation*} {CXTY}_{code}\vert _{if-CP}^{b}=O(M)\times O(N)\times O(u)\times O(u)\tag{5}\end{equation*}
This equation can be rewritten in simplified form as (6):\begin{equation*} {CXTY}_{code}\vert _{if-CP}^{b}=O(M)\times O(N)\times O(u^{2})\tag{6}\end{equation*}
Once the if-CP has been applied, the code complexity of IEE process is formulated in (7):\begin{equation*} {CXTY}_{code}\vert _{if-CP}^{a}=O(M)\times O(N)\tag{7}\end{equation*}
As for the 4P, the code complexity before and after the 4P has been applied will be mathematically defined in equations (8) and (9):\begin{align*} {CXTY}_{code}\vert _{4P}^{b}=&O(M)\times O(N)\times O(k) \tag{8}\\ {CXTY}_{code}\vert _{4P}^{a}=&O(M)\times O(N)\times \frac {1}{4}O(k)\tag{9}\end{align*}
D. MPM
1) Definition of Foreground/Background Functions
Let $P_{B}$ denote the set of background pixels, defined in (10):\begin{equation*} P_{B}=\left \{{p(m,n)\vert p~\mathrm {is ~a ~pixel ~element},p\in \{0\} }\right \}\tag{10}\end{equation*}
where:

$m$ represents the image row index,
$n$ represents the image column index, and
$P_{B}$ represents a set of pixels, i.e., $\{P\}\subset \{0\}$, in which (11) is fulfilled:
\begin{equation*} p(m,n)=c_{1}\tag{11}\end{equation*}
Similarly, let $P_{F}$ denote the set of foreground pixels, defined in (12):\begin{equation*} P_{F}=\left \{{p(m,n)\vert p~\mathrm {is ~a~pixel ~element},p\in \{1\} }\right \}\tag{12}\end{equation*}
in which (13) is fulfilled:\begin{equation*} p(m,n)=c_{2}\tag{13}\end{equation*}
2) Definition of Mask’s Functions
Let
The proposed design of the mask M in three blocks; left (a), center (b), and right (c).
Let
3) Definition of M Movement
Suppose that M moves from left to right and from top to bottom. To let M start moving, an initialization is performed, as mathematically represented in equations (14) and (15):\begin{align*} \dot {m}=&f_{iz}^{m} \tag{14}\\ \dot {n}=&f_{iz}^{n}\tag{15}\end{align*}
where:

$\dot {m}$ is the first row location from which the pixel with intensity $p(m,n)$ starts movement,
$\dot {n}$ is the first column location from which the pixel with intensity $p(m,n)$ starts movement, and
$f_{iz}^{m}$ and $f_{iz}^{n}$ are initialized values that determine the location of M's first movement, specified by the $m^{th}$ row and $n^{th}$ column.
Therefore, the first location at which the mask M starts to move will be a function with a relation to \begin{equation*} L_{M}=f(\dot {m},\dot {n})\tag{16}\end{equation*}
Simply, \begin{equation*} f(\dot {m},\dot {n})=(\dot {m},\dot {n})\tag{17}\end{equation*}
Therefore, (16) can be simplified in (18):\begin{equation*} L_{M}=(\dot {m},\dot {n})\tag{18}\end{equation*}
In (18), it is mentioned that
At each time, M moves, the location of M, denoted by \begin{equation*} \vec {L}_{M}=L_{M}+\vec {d}\tag{19}\end{equation*}
where $\vec {L}_{M}$ represents a subsequent location (not the initialized location $L_{M}$) and $\vec {d}$ is a distance, or a new shifted location, defined in (20):\begin{equation*} \vec {d}=(\dot {m}+\delta,\dot {n}+\varepsilon)\tag{20}\end{equation*}
Both $\delta$ and $\varepsilon$ are functions of $\dot {m}$ and $\dot {n}$, respectively, as given in (21) and (22):\begin{align*} \delta=&f(\dot {m}) \tag{21}\\ \varepsilon=&f(\dot {n})\tag{22}\end{align*}
For every change in the row, $\delta$ is incremented as in (23); for every change in the column, $\varepsilon$ is incremented as in (24):\begin{align*} \delta=&\delta +1 \tag{23}\\ \varepsilon=&\varepsilon +1\tag{24}\end{align*}
Therefore, when at the second move the column varies and the row does not, only (24) is applied, and the new location of M is as defined in (25):\begin{equation*} \vec {L}_{M}=(\dot {m},\dot {n}+1)\tag{25}\end{equation*}
Similarly, when the row varies and the column does not, only (23) is applied:\begin{equation*} \vec {L}_{M}=(\dot {m}+1,\dot {n})\tag{26}\end{equation*}
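The movement rules in (14)–(26) can be sketched as the left-to-right, top-to-bottom enumeration of every location the mask visits; the function name, parameters, and bounds handling below are illustrative assumptions.

```cpp
#include <utility>
#include <vector>

// Sketch of the mask movement in (14)-(26): starting from an initialized
// location (m0, n0), M advances column-by-column as in (25) and steps to the
// next row as in (26), staying fully inside the image.
std::vector<std::pair<int, int>> maskLocations(int width, int height,
                                               int maskW, int maskH,
                                               int m0, int n0) {
    std::vector<std::pair<int, int>> locations;
    for (int m = m0; m + maskH <= height; ++m)      // row step, (26)
        for (int n = n0; n + maskW <= width; ++n)   // column step, (25)
            locations.emplace_back(m, n);
    return locations;
}
```

For a 5×3 image and a 4×2 mask starting at (0, 0), the mask visits the four placements (0,0), (0,1), (1,0), (1,1).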
4) Definition of M Size’s Functions
Let $S(M)$ denote the size of M, computed in (27):\begin{equation*} S\left ({M }\right)=W_{M}\times H_{M}\tag{27}\end{equation*}
where $W_{M}$ represents the width of M and $H_{M}$ represents its height.

where:

$S(M_{l})$, $S(M_{c})$, and $S(M_{r})$ are the obtained size values for the left, center, and right parts of M, respectively,
$w(M_{l})$, $w(M_{c})$, and $w(M_{r})$ are the corresponding widths, and
$h(M_{l})$, $h(M_{c})$, and $h(M_{r})$ are the corresponding heights.
5) Definition of EE Functions
At every time M moves, every pixel
When \begin{equation*} c_{1}=p(m,n)\tag{31}\end{equation*}
It is pre-defined that \begin{equation*} c_{p}(M)=c_{1}\tag{32}\end{equation*}
\begin{equation*} \because ~c_{p}(M)=p(m,n)\tag{33}\end{equation*}
6) Definition of Logical Operation-Based EE Process
If every pixel of the center part of M is black, the flag $c$ is set to 0, as defined in (34):\begin{align*} c=\begin{cases} \displaystyle 0,& c_{1}=0,~c_{2}=0,~c_{3}=0,~c_{4}=0 \\ \displaystyle 1,& otherwise \\ \displaystyle \end{cases}\tag{34}\end{align*}
Similarly, this process is performed two times with left and right parts of M, i.e., \begin{align*} l=&\begin{cases} \displaystyle 0,& l_{1}=0, ~l_{2}=0 \\ \displaystyle 1,& otherwise \\ \displaystyle \end{cases} \tag{35}\\ r=&\begin{cases} \displaystyle 0,& r_{1}=0,~r_{2}=0 \\ \displaystyle 1,& otherwise \\ \displaystyle \end{cases}\tag{36}\end{align*}
Once each of equations (34)–(36) evaluates to 0, i.e., the corresponding parts are black, the center part is updated as defined in (37) and (38):\begin{align*} c_{1}=&\begin{cases} \displaystyle 255,& l=0,~c=0,~r=0 \\ \displaystyle 0,&otherwise \\ \displaystyle \end{cases} \tag{37}\\ c_{3}=&\begin{cases} \displaystyle 255,& l=0,~~r=0 \\ \displaystyle 0,& otherwise \\ \displaystyle \end{cases}\tag{38}\end{align*}
In equations (37) and (38),
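A literal reading of (34)–(38) for one placement of the mask can be sketched as below. The struct layout (left part $l_1,l_2$; $2\times2$ center $c_1\ldots c_4$; right part $r_1,r_2$) and the in-place update are illustrative assumptions about how the equations are applied.

```cpp
#include <cstdint>

// One placement of the 2x4 mask M: each cell holds a binarized intensity
// (0 = black, 255 = white).
struct MaskCells {
    uint8_t l1, l2;           // left part
    uint8_t c1, c2, c3, c4;   // center part
    uint8_t r1, r2;           // right part
};

// Literal sketch of (34)-(38): a part flag is 0 only when every pixel of
// that part is black; c1 and c3 are then rewritten per (37) and (38).
void applyEdgeDecision(MaskCells& m) {
    int c = (m.c1 == 0 && m.c2 == 0 && m.c3 == 0 && m.c4 == 0) ? 0 : 1; // (34)
    int l = (m.l1 == 0 && m.l2 == 0) ? 0 : 1;                            // (35)
    int r = (m.r1 == 0 && m.r2 == 0) ? 0 : 1;                            // (36)
    m.c1 = (l == 0 && c == 0 && r == 0) ? 255 : 0;                       // (37)
    m.c3 = (l == 0 && r == 0) ? 255 : 0;                                 // (38)
}
```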
Experimental Setup and Preparation
A. Overview
This section presents the experimental conditions applied to the images, such as image sizes, image types in terms of background complexity, and the total number of images assigned to each size. The software platform and a demonstration of the implementation are also described in detail in this section to give readers further clarification on how the processes are implemented in software and code.
B. Quantitative Criteria and Conditions
In this experimental work, 526 samples have been used under different image-related conditions. The criteria applied to these images include, for example, image sizes, image background details, and other related features. Further criteria are discussed in the following sub-sections.
C. Experimental Conditions
Samples used in this experiment have been divided into three datasets based on the image-size criterion. A total of 526 samples has been used across all datasets. The number of samples in each dataset is shown in Table 1.
The three datasets have also been classified into three groups based on image conditions (e.g., complex background, lots of details, low contrast). Each group represents one corresponding dataset, i.e., Dataset 1 is represented by G1, Dataset 2 by G2, and Dataset 3 by G3. Each group contains a number of samples (images) based on the image conditions mentioned above. The total number of images assigned to each group based on these conditions is shown in Table 2.
Image conditions have been defined in three classes, to which the images of each group have been assigned. A description of each class is provided in Table 3.
D. Software Platform
In this experimental work, the C++ programming language has been used with the Open Computer Vision (OpenCV) library in the DEV-C++ integrated environment (IE). Experiments have been run on MS-Windows 7 Professional on a laptop with the following specifications: CPU: Intel Core i3, 2.26 GHz; RAM: 2 GB. A screenshot of DEV-C++ is provided in Figure 12.
E. Demonstration
In this subsection, a simple clarification is provided to show, step by step, how the experimental work is implemented. A few examples are shown in Figures 13–15.
Examples of PICA-IEE implementation on a “car house” sample image. (a) an input image (color), (b) a gray scale image, (c) an adaptive thresholded image, and (d) an edges’ extracted image (i.e., IEE).
Examples of edges’ extraction process by the proposed PICA-IEE of an image. (a) a color input image, (b) a gray scale image, (c) an adaptive thresholded image, and (d) PICA-IEE’s output.
Examples of edges’ extraction process by the proposed PICA-IEE applied on a “stilt house” image. (a) a color input image, (b) its gray scale image, (c) a binarized image, and (d) PICA-for-IEE’s output (edges extraction image).
The examples shown in Figures 13–15 demonstrate that PICA is able to extract edges from different types of images, including variations in colors, image backgrounds, similar-region areas, and day-or-night images. The threshold has been set to the value of 0.15, which was the best threshold value in our experiments. However, some regions that have high similarity with the surrounding background might be removed from the edge-detected image, as shown in Figure 15 (d).
F. Interactive Multimedia Files Demonstration
The real code for the IEE-process part of PICA is included with this paper as open source. A run of this code is demonstrated in a video provided as a WMV file.
Results and Discussion
Here, the results of PICA’s evaluation are presented and discussed. This section previews the findings in five sub-sections, as follows:
In the first sub-section, the accuracy of PICA is evaluated. Two processes are used to compare two types of results: the first shows PICA’s output before the noise removal enhancement is applied, while the second shows PICA’s IEE results after noise removal has been applied to the thresholded image.
The second sub-section shows an evaluation of the proposed PICA in terms of computation time for processing a single image, taking different image sizes into account. The computation time for the three image sizes (i.e., the groups mentioned in Table 1) has been computed and graphically presented for each of the 526 samples used in this experimental work. Additionally, an average computation time across all samples, based on the image groups (mentioned in Table 2), has been calculated and provided.
In the third sub-section, the performance of PICA in terms of IEE is evaluated. Different image sizes have also been considered in this evaluation, and several samples representing these sizes (i.e., the three classes mentioned in Table 2) are presented.
The fourth sub-section evaluates the proposed PICA in terms of robustness. The image classes mentioned previously in Table 3 are used in this experiment, covering conditions such as blurred, inclined, and complex-background images.
The fifth sub-section provides a comparative evaluation between the proposed PICA and a number of competitive research studies in terms of computation time, computation-time-based enhancement rate for processing a single image (considering the three size classes), accuracy, and code complexity.
A. Accuracy Evaluation of the Proposed PICA’s Noise Removal Process
Two scenarios are used in this test to measure the accuracy of PICA’s IEE. In the first scenario, the order of processes is: input image, adaptive thresholding, then edge extraction; in the second scenario, a noise removal process is inserted after the thresholding step.
As shown in Figure 16, the same input image is tested using two processes: one includes a noise removal enhancement applied to the thresholded image, while the other does not. The results obtained after both processes are compared to each other in order to see which one has less noise or fewer unnecessary details. In this process, PICA is evaluated using a single input image, as shown in Figure 17.
Applied noise removal process on PICA’s accuracy evaluation; (a) input, (b) thresholded image, (c) edges’ detected image of (b), (d) thresholded image after a noise removal has been applied, and (e) edges’ detected image of (d).
The input image shown in Figure 17 (a) goes through two processes. The first applies an adaptive thresholding technique, as shown in Figure 17 (b), while the second applies two processes sequentially on the input image (Figure 17 (a)): an adaptive thresholding technique followed by a noise removal process, with the result shown in Figure 17 (d). After that, the same proposed mask is applied to both images shown in Figure 17 (b) and (d) in order to extract the images’ edges; the corresponding results are shown in Figure 17 (c) and (e), respectively. The result in Figure 17 (e) is considered more accurate than that in Figure 17 (c). Additionally, most details are maintained, and less processing time is needed, in the case of Figure 17 (e) compared to Figure 17 (c).
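The noise-removal branch above can be sketched as follows. The paper does not detail the exact noise-removal operation, so an isolated-pixel filter stands in for it here; `removeIsolatedPixels` is a hypothetical name, a minimal sketch rather than the authors' implementation.

```cpp
#include <vector>

using Img = std::vector<std::vector<int>>;  // binarized 1's-and-0's image

// Minimal stand-in for the noise removal applied to the thresholded image
// before edge extraction (an assumption; the paper's actual filter is not
// specified here). A foreground pixel with no 4-connected foreground
// neighbor is treated as noise and cleared.
Img removeIsolatedPixels(const Img& bin) {
    Img out = bin;
    for (int r = 1; r + 1 < (int)bin.size(); ++r)
        for (int c = 1; c + 1 < (int)bin[r].size(); ++c) {
            int neighbors = bin[r - 1][c] + bin[r + 1][c]
                          + bin[r][c - 1] + bin[r][c + 1];
            if (bin[r][c] == 1 && neighbors == 0)
                out[r][c] = 0;  // drop lone foreground pixels
        }
    return out;
}
```

Running the edge-extraction mask on the cleaned image rather than the raw thresholded one is what the two-branch comparison in Figure 17 evaluates.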
B. PICA’s Computation Time
In this experiment, the computation time for processing a single image is provided. Besides, the average computation time per group of samples (depending on image size) is also shown. The computed results are discussed as follows:
1) Total Computation Time for All Samples of Each Category
The computation time obtained after applying PICA to all samples is recorded. The computation time for processing a single image has been computed for each sample in each category.
The computation times for the three datasets (i.e., groups G1, G2, and G3), based on image sizes, are shown in Figure 18, Figure 19, and Figure 20, respectively.
As shown in Figure 18, Figure 19, and Figure 20, the computation times for the three datasets vary between 5.6 and 5.7 ms, 19.1 and 19.2 ms, and 24.0 and 24.3 ms, respectively.
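Per-image timing of this kind can be measured with a steady clock around the processing call. This is a sketch under assumptions: `processImage` is a hypothetical placeholder for the full PICA pipeline, not the authors' function.

```cpp
#include <chrono>
#include <vector>

// Hypothetical placeholder for PICA's per-image processing (an assumption;
// here it just scans the pixels so the timing harness has work to measure).
long long processImage(const std::vector<std::vector<int>>& img) {
    long long sum = 0;
    for (const auto& row : img)
        for (int v : row)
            sum += v;
    return sum;
}

// Measure one image's processing time in milliseconds, as done per sample
// and per size group in the experiments.
double measureMs(const std::vector<std::vector<int>>& img) {
    auto t0 = std::chrono::steady_clock::now();
    processImage(img);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

Averaging `measureMs` over all samples of a group yields the per-group figures plotted in Figures 18–20.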
C. Performance Evaluation of IEE for Edges Detection and Extraction Process
In this evaluation, the performance of the proposed PICA for the IEE process is evaluated quantitatively using different image sizes and different classes (i.e., different conditions). Several samples of input images and their edge-extracted outputs are shown in Figure 22.
Examples of PICA’s output. Input images are in (a)–(h) and their corresponding edges’ extracted images in (i)–(p).
As shown in Figure 22, edges of objects can be highlighted and detected using the proposed PICA. The samples shown in this figure vary in image size and conditions. They include low-contrast images (as mentioned in Table 3; Class 1), e.g., Figure 22 (c), (d), and (f), in which some objects have low contrast with either the background details or their neighbors. Other samples include complex backgrounds and/or many details (Class 2), e.g., Figure 22 (e). In Figure 22 (h), some objects or foreground details blend with other objects’ edges due to excessive light.
D. PICA’s Robustness
1) Overview
The PICA is evaluated in terms of robustness against blurred, inclined, and complex-background images.
2) Experiments
This experiment evaluates the robustness of the proposed PICA. The PICA is applied to varied types of images and the obtained results are evaluated. Several categories of samples have been used; the selected images may be inclined, low-contrast, or contain a huge portion of foreground pixels. The samples used in this experiment are described as follows. In the first experiment, PICA is applied to a blurred and inclined image to extract its edges. The second evaluates PICA’s robustness under high similarity of pixel intensities and low contrast between foreground details; it also includes a sample image in which a huge portion of pixels belongs to both the objects and the foreground area. The input images and the results obtained after applying PICA can be found in Figure 23 (a)–(d).
PICA evaluation in term of robustness; an input image shown in (a) and its corresponding vertical edges in (b); an input image in (c) and its horizontal edges shown in (d). Both (b) and (d) are extracted using the proposed PICA.
These examples show that the proposed PICA is able to perform IEE on inclined and complex-background images. The proposed PICA can accurately and efficiently detect the edges of different objects and regions inside an image even when their contrast is low. PICA has confirmed its robustness with inclined edges, as shown in Figure 23 (a) and (b), where it has accurately extracted the edges of a car license plate. Additionally, as shown in Figure 23 (c), many edges exist, and they have been efficiently detected, as shown in Figure 23 (d).
E. A Comparison Between PICA and Other Competitive Research Works
The proposed PICA for IEE will be compared to other competitive research works in terms of computation time, accuracy, and code complexity.
1) Computation Time
In this experimental work, different image sizes have been used, which are:
As shown in Figure 24, when comparing the computation time for processing one image using the proposed PICA against other competitive methods and operators, the proposed PICA outperforms the others.
2) Computation Time-Based Enhancement Percentage Compared to Other Competitive Methods
The PICA computation time relative to the total computation time of the three methods is calculated using (39):\begin{equation*} R_{CT}\vert _{PICA}\% =\frac {CT_{PICA}}{CT_{Sobel}+CT_{[{64}]}+CT_{PICA}}\%\tag{39}\end{equation*}
By using (39), the computation time-based enhancement percentage of PICA is obtained as in (40):\begin{equation*} E_{CT}\vert _{PICA}=(1-R_{CT}\vert _{PICA})\times 100\tag{40}\end{equation*}
From (40), it is found that PICA has enhanced the edge detection and extraction process in terms of computation time by 92.1%, 88.8%, and 88.5%, corresponding to the three image sizes.
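Equations (39) and (40) can be expressed directly as code. The function names and the timing values in the test are illustrative, not measured results from the paper; the point is only the shape of the calculation.

```cpp
// (39): PICA's share of the total computation time of the three compared
// methods, as a percentage. Inputs are per-image times in milliseconds.
double ratioPercent(double ctPica, double ctSobel, double ctOther) {
    return ctPica / (ctSobel + ctOther + ctPica) * 100.0;
}

// (40): the computation time-based enhancement percentage. A small share
// of the total time translates into a large enhancement percentage.
double enhancementPercent(double ctPica, double ctSobel, double ctOther) {
    return 100.0 - ratioPercent(ctPica, ctSobel, ctOther);
}
```

For example, if PICA accounted for 10% of the combined time of the three methods, the enhancement percentage would be 90%; the paper's reported 92.1% for the smallest size class corresponds to a share of about 7.9%.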
3) Accuracy
The output of the proposed PICA is compared to other research works, e.g., Sobel’s, in terms of accuracy of the edge extraction process. In Figure 25, a comparison between the proposed PICA and the Sobel operator is performed in terms of accuracy.
Image edges extraction (IEE) process of an image shown in (a) using Sobel operator in (b) and the proposed PICA in (c).
In this evaluation, horizontal edges have been used. As shown in Figure 25, edges are clearly extracted by both the Sobel operator and PICA. For example, details and edges of textual content are acceptable and can be read. Thus, PICA is able to extract small details, specifically when the contrast between edges (foreground) and background details is high.
4) PICA’s Code Complexity
In this subsection, a comparison between Sobel and PICA in terms of code complexity is discussed.
The code complexity for Sobel is formulated in (41):\begin{equation*} {CXTY}_{code}\vert _{S}=O\left ({M }\right)\times O\left ({N }\right)\times O\left ({u^{2} }\right)\tag{41}\end{equation*}
As discussed earlier, the code complexity of the proposed PICA can be obtained by summing equations (7) and (9), and the result is given in (42):\begin{equation*} {CXTY}_{code}\vert _{IEE}^{a}=O\left ({M }\right)\times O\left ({N }\right)+\frac {1}{4}O(k)\tag{42}\end{equation*}
It is simply re-formulated as in (43):\begin{equation*} {CXTY}_{code}\vert _{P}=O\left ({M }\right)\times O\left ({N }\right)+\frac {1}{4}O(k)\tag{43}\end{equation*}
In (43), since $k$ is a constant, the term $\frac {1}{4}O(k)$ reduces to $O(1)$, giving (44):\begin{equation*} {CXTY}_{code}\vert _{P}=O\left ({M }\right)\times O\left ({N }\right)+O(1)\tag{44}\end{equation*}
Since the constant term is negligible relative to $O\left ({M }\right)\times O\left ({N }\right)$, (44) simplifies to (45):\begin{equation*} {CXTY}_{code}\vert _{P}=O\left ({M }\right)\times O\left ({N }\right)\tag{45}\end{equation*}
By comparing (41) to (45), it can be concluded that the code complexity of PICA is less than Sobel’s by a factor of $u^{2}$.
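The gap between (41) and (45) can be illustrated by counting touched pixels. This is a sketch, not the authors' code: `convolutionOps` and `singlePassOps` are hypothetical helpers that simply evaluate the two complexity expressions for concrete dimensions.

```cpp
// (41): a Sobel-style u x u convolution visits every pixel of an M x N
// image and, at each pixel, reads a u x u neighborhood.
long long convolutionOps(long long M, long long N, long long u) {
    return M * N * u * u;
}

// (45): a single-pass scan, as in PICA's simplified complexity, touches
// each of the M x N pixels a constant number of times.
long long singlePassOps(long long M, long long N) {
    return M * N;
}
```

For a 288 x 352 image and a 3 x 3 kernel, the convolution count is exactly 9 times the single-pass count, matching the $u^{2}$ factor in the comparison above.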
F. Qualitative and Quantitative Comparisons Between PICA and a Number of the State-of-the-Art Research Works
In this subsection, the proposed PICA algorithm is compared to a number of the state-of-the-art methods in terms of quantitative and qualitative evaluation(s).
1) Computation Time Based PICA vs. Several Competitive Research Works
The PICA is evaluated using the computation time parameter and compared to a number of the state-of-the-art methods. In Table 4, PICA is compared to several edge detectors in terms of computation time.
As shown in Table 4, the proposed PICA has outperformed the other competitive research works in terms of computation time.
2) Accuracy-Based PICA Compared to a Number of the State-of-the-Art Methods
The proposed PICA is compared to a number of the state-of-the-art methods in terms of edge detection accuracy. The obtained results are shown in Figures 26, 27, and 28. The original images shown in these figures are available in [66].
IEE process of a black-white board sample (a), using several edge detectors; Roberts (b), Sobel (c), Prewitt (d), Canny (e), [66] (f), and the proposed PICA (g).
A gear sample (a) edge extraction process using a number of edge detectors: Roberts (b), Sobel (c), Prewitt (d), Canny (e), [66] (f), and the proposed PICA (g).
Performance of the edge extraction process on an original image (a), using Roberts (b), Sobel (c), Prewitt (d), Canny (e), [66] (f), and the proposed PICA (g).
As shown in Figure 26, edges are clearly extracted and accurately detected using all of the mentioned detectors. In Figure 27, however, the Canny detector fails to accurately extract the edges, while the remaining detectors are suitable for such images. For the sample shown in Figure 28 (a), three out of six detectors have failed to extract the edges correctly: the written phrase in Figure 28 (a) is not clearly readable and the edges are not understandable in Figure 28 (b), (c), and (d). These three examples have confirmed that the proposed PICA can extract objects’ edges even when two adjacent edges lie inside a small region of the image (as shown in Figure 26 (a) and (g)). Additionally, PICA is able to extract small edges and texts, as shown in Figure 28 (a) and (g).
3) Samples-Based Quantitative Comparison Between PICA and Several State-of-the-Art Methods
In this comparison, PICA is evaluated based on the samples that have been collected and used in the edge extraction experiments, and it is compared to several other edge extraction methods. In Table 5, a quantitative comparison based on the number of samples used for the IEE process is provided.
As shown in Table 5, the proposed PICA has been tested using 526 sample images covering several image dimensions and conditions.
4) A Comparison Between PICA and Several State-of-the-Art Methods in Terms of Images’ Sizes, Image Conditions, and Advantages
In Table 6, the proposed PICA is compared to a number of the state-of-the-art methods quantitatively and qualitatively. Table 5 shows that PICA has the highest number of samples used in the edge extraction experiments.
As shown in Table 6, the proposed PICA has outperformed a number of the state-of-the-art methods in terms of the number of samples used per experiment, the variety of image sizes, and image conditions, including ‘low contrast’ and ‘complex background’ images.
5) Code Complexity-Based Comparison Between PICA and Several State-of-the-Art Methods
In Table 7, a simple comparison between PICA and some other state-of-the-art methods in terms of code complexity is performed.
The comparison provided in Table 7 shows that the code complexity of PICA equals $O\left ({M }\right)\times O\left ({N }\right)$.
Source Code
The related code: “PICA-4-IEE process code” has been uploaded to the Internet and is available via this link: https://github.com/abbasghaili/PICA-4-IEE_code
Conclusion
This paper has proposed a Pixel Intensity-based Contrast Algorithm (PICA) for Image Edges Extraction (IEE). The proposed PICA consists of several steps, one of whose main processes uses a simple mask that processes pixels individually and in groups of lines to extract objects’ edges inside an image. The proposed mask processes two pixels in parallel to reduce an image’s processing time, so that the computation time is reduced, specifically for large images or images containing a huge portion of pixels and details. Besides, PICA is designed in such a way as to reduce the use of while-loop(s), so that both computation time and code complexity are improved. The proposed PICA has been evaluated in terms of computation time, enhancement rate for processing a single image, accuracy, and code complexity, and compared to other competitive research works.
The PICA’s computation time was compared to that of other competitive research works, and PICA has the lowest. For processing an image of size $352\times 288$, PICA consumes 5.7 ms per image; this computation time is about ten times less than the Sobel operator’s. PICA’s computation time has shown better performance across different image sizes. In regard to the enhancement rate for processing a single $352\times 288$ image, PICA has enhanced the rate by about 92.1%. Results show that PICA for IEE performs accurately with different image sizes and conditions.
The code complexity of the proposed PICA has been analyzed; it is less by a factor of $u^{2}$ than that of other competitive methods such as the Sobel operator. Analysis of PICA’s code and of the internal architecture and design of the mask shows that PICA does not use additional while-loop(s) for mask movement as traditional methods do. This has reduced PICA’s code complexity and computation time, making PICA suitable for real-time applications.
Results have shown accurate performance of PICA for the IEE process, and the robustness evaluation of the proposed PICA has shown high performance.