Towards a Low Complexity Scheme for Medical Images in Scalable Video Coding

Medical imaging has become of vital importance for diagnosing diseases and conducting noninvasive procedures. Advances in eHealth applications are challenged by the fact that Digital Imaging and Communications in Medicine (DICOM) requires high-resolution images, thereby increasing their size and the associated computational complexity, particularly when these images are communicated over IP and wireless networks. Therefore, medical research requires an efficient coding technique that achieves high-quality, low-complexity images with error-resilient features. In this study, we propose an improved coding scheme that exploits the content features of encoded videos at low complexity, combined with flexible macroblock ordering for error resilience. We identify homogeneous regions, in which the search for optimal macroblock modes is terminated early. For non-homogeneous regions, the integration of smaller blocks is employed only if the motion vector difference is below a threshold. Results confirm that the proposed technique achieves a considerable performance improvement compared with existing schemes in terms of reducing computational complexity without compromising the bit rate or peak signal-to-noise ratio.


I. INTRODUCTION
Recent advancements in video coding, supporting information technologies, and infrastructure are evident in telemedical services, such as electrocardiographic (ECG) communications in teleradiology [1]. High-speed data transmission is required for future multimedia service priorities. For example, in medical scenarios, a single whole-slide image file occupies >15 GB. Moreover, the accumulated file size enlarges to terabytes when multiple focal planes (Z-stack images) are included [2].
Teleradiological systems transmit medical images to medical centers for telesurgery, remote patient monitoring, and remote health diagnoses [1], [2]. This requires high-resolution multi-frame medical images to be transmitted remotely for diagnostic purposes. Therefore, efficient low-complexity compression techniques are in high demand for time-sensitive healthcare services. Prominent among such services are real-time teleradiological systems, frozen section diagnoses, and dynamic telepathology. (The associate editor coordinating the review of this manuscript and approving it for publication was Victor Hugo Albuquerque.)
With scalable video coding (SVC), state-of-the-art video coding standards such as MPEG-4 AVC/H.264 can extract appropriate partial bit streams. These partial bit streams are vital for target bit rates with reduced temporal scalability, spatial resolution, or overall quality. The bit stream extraction process retains a reconstruction quality appropriate to the rate of the partial bit streams [3], [4]. Nonetheless, the encoded SVC hierarchical prediction architecture, with its potential for error propagation, imposes penalties on video stream transmission: minimal packet loss rates in macroblocks can translate into significantly high frame loss rates [5]. Hence, a compression scheme that supports healthcare communication services is critically required; in the medical field, these requirements have not received adequate attention from researchers. The SVC standard also supports several error-resilient tools, such as slice coding and flexible macroblock ordering (FMO), to improve robustness against errors in the bit stream. With FMO, the picture is divided into a number of slice groups, each mapped to several macroblocks. Different slice groups are then assigned to prioritized areas and regions of interest. In this study, we employ FMO for error resilience when encoding medical videos with low complexity, and identify how FMO benefits coding and error-concealment efficiency. The remainder of this paper is organized as follows: Section 2 presents the related work, and Section 3 briefly highlights the importance of FMO and slice coding. Section 4 details how SVC supports medical imaging in eHealth. Section 5 describes the implementation. The performance analysis is provided in Section 6. Finally, Section 7 presents our conclusion.

II. RELATED WORK
Research into video coding has attempted to augment eHealth services, particularly medical imaging. Recent advances in the Internet of Healthcare Things (IoHT), internet-enabled smartphones, and wearable healthcare devices such as body-mounted sensors (e.g., the Holter heart monitor) provide feasible communication between machine and patient. Systematic analysis based on healthcare surveys reveals a trend towards online systems for heart monitoring [6]. In healthcare, electroencephalograms (EEGs) are diagnostic tools used to study physiological regions of the brain. The authors in [7] applied nonlinear analysis tools and models to EEGs that revealed autism spectrum disorder (ASD) in children. Inspired by the ASD outcomes, various other EEG-based diagnoses were presented. Cerebral palsy is an infantile disease caused by low levels of brain oxygenation during pregnancy and at birth. In [8], the authors devised a supporting technology called REHAB FUN, which uses virtual reality scenarios to help in the treatment of children with cerebral palsy.
Razzak et al. [9] discuss challenges in the development of data analytic techniques in healthcare for disease avoidance. They outline relevant algorithms commonly used for classification, clustering, anomaly detection, and association among diseases to select appropriate models. Ullah et al. [10] propose an integrated infrastructure of unmanned aerial vehicles (UAVs) and body area networks (BANs). The proposed framework would collect and process health data in real time by linking a UAV with a BAN. In [11], the authors contributed to the classification of myocardial infarction and atrial fibrillation, applying deep learning techniques to recognize the signal patterns of different cardiac diseases. In [12], the authors propose a zero-watermarking algorithm to secure patients' identities during the transmission of their medical information via the Internet of Things. For the proposed implementation, an encrypted key is embedded in the patient's identity image.
The authors in [2] describe the provision of best healthcare services and the requirements of next-generation telehealth and telemedicine systems. That research calls for superfast broadband and ultralow latency across various telemedicine modalities to support remote healthcare information systems and medical diagnostics. However, this architecture is not practical for patient home care services.
The authors in [13] discuss the demands of multiuser video streaming with quality of service (QoS) over wireless networks. These researchers provide a solution for video streaming over multichannel, multi-radio, multi-hop wireless networks. They propose a distributed scheduling scheme to minimize video distortion and achieve the desired fairness level. However, this study ignores video quality degradation due to inherent transmission errors. The authors in [14] examine several QoS approaches and evaluate different combinations to increase the quality of a video stream communicated over an unstable network. An error control technique is employed with a channel coder, a multiplexer, and unequal error-protection schemes to overcome the losses. However, transcoding or re-encoding is then necessary to decode the stream for diverse devices.
Cui et al. [3] proposed a fast-mode decision algorithm to expedite SVC encoding by exploiting the relationship between rate-distortion (RD) and the statistical mode decision of the enhancement layer. A limitation of this algorithm is that it provides no evidence of compatibility with slice coding. Paluri et al. [15] propose a low-complexity, low-delay generalized model for predicting the loss of coded video slices for prioritization. They applied the proposed model to predict the cumulative mean squared error caused by the loss of different coded slices, using unequal protection for the slices. The main drawback of this model is that the approach cannot exploit local video content characteristics. Buhari et al. [16] propose a human visual system-based watermarking algorithm with low complexity to take advantage of texture features in the frame. The main disadvantage of this algorithm is the increase in computational complexity. Koziri et al. [17] explore the challenges of decreasing load inequalities among various threads in slice-based parallelization through appropriate slice sizing. Their method, which manipulates group-of-pictures structures, is important for low-delay scenarios, but its main disadvantage is increased computational complexity. Santos et al. [1] added bidirectional prediction support to improve the efficiency of their lossless encoder's rate predictor. The main drawback of this work is that it does not provide solutions for error-prone scenarios. Ali et al. [18] introduced a subpartition in addition to the three extant partitions in the H.264/AVC codec for the coding of prioritized information. The main drawback of this work is the increase in complexity. Grois and Hadar [19] proposed coding for regions of interest, which were flexibly selected from the pre-encoded scalable video bit stream and, with adaptive settings, enabled the extraction of desirable regions of interest by location, size, and resolution.
Their methodology provides an efficient mechanism to meet the requirement of heterogeneous end-user devices. The main deficiency of this approach is the increase in complexity under conditions inherent to an erroneous environment.
All these methods are efficient with acceptable quality deterioration. However, the impact of communication errors on single-encoded streams has not yet been explored, specifically from the perspective of error-resilient slice coding. The present work focuses on the transmission of high-resolution medical images in error-prone environments with low complexity by using FMO for resilience. The proposed research has the potential to address the challenges of low-delay teleradiology and pathological procedures for rapid microscopic sample analysis. Target medical procedures include videos for magnetic resonance imaging (MRI), endoscopy, orthopedics, bronchoscopy, laparoscopy, gynecology, ophthalmology, and gastroenterology. In the following section, we briefly discuss FMO, slice coding, SVC, and Digital Imaging and Communications in Medicine (DICOM) standards relevant to the proposed scheme.

III. FMO AND SLICE CODING
The H.264 video coding standard provides various tools to enhance error robustness. In this paper, we focus on slice coding and error-concealment efficiency. FMO is an extremely effective error-resilient tool for H.264 video coding. The key advantage of the FMO tool is that it allows slices to consist of non-contiguous macroblocks: by configuring the FMO, any macroblock can be assigned freely to any slice group. The advantage of a dispersed macroblock order is the ease with which missing blocks can be reconstructed, because information from the surrounding macroblocks can be used. Consequently, errors are dispersed evenly across the entire frame instead of being bound to a particular area. FMO has considerable prospects, particularly in error resilience. Performance is significantly improved, and FMO is recommended especially in environments that experience significant packet losses [15], [17], [18].
In slice coding, macroblocks can be freely allocated to diverse slice groups during coding. Therefore, the decoder must be informed about the allocation of macroblocks to slice groups. This information is communicated by the macroblock allocation map, which is transmitted together with the coded information of the macroblocks embedded within the picture parameter set. Given that there can be up to eight slice groups in a picture, three bits are required for each macroblock to identify its associated slice group, which adds overhead to slice coding. In most cases, however, certain patterns appear in the macroblock allocation map. One of the benefits of a pattern is that its regular structure can often be described using a simple function with few variables. Transmitting a pattern can then be reduced to transmitting the predefined pattern type (i.e., the function). This decreases the number of variables needed to construct the macroblock allocation map, which can often be stored in two to eight bytes.
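The three-bits-per-macroblock bookkeeping described above can be illustrated with a short sketch. This is not the H.264 bitstream syntax, only an illustration of the storage cost; the function name is ours.

```python
def pack_mb_allocation_map(slice_group_ids):
    """Pack per-macroblock slice-group IDs (0-7, i.e. 3 bits each)
    into a compact byte string, illustrating why an explicit
    macroblock allocation map costs 3 bits per macroblock."""
    bits = 0
    n_bits = 0
    out = bytearray()
    for sg in slice_group_ids:
        if not 0 <= sg <= 7:
            raise ValueError("slice group id must fit in 3 bits")
        bits = (bits << 3) | sg  # append 3 bits
        n_bits += 3
        while n_bits >= 8:       # emit full bytes as they form
            n_bits -= 8
            out.append((bits >> n_bits) & 0xFF)
    if n_bits:                   # flush remainder, zero-padded
        out.append((bits << (8 - n_bits)) & 0xFF)
    return bytes(out)
```

For eight macroblocks this yields 24 bits (3 bytes); a predefined pattern type replaces this whole map with a handful of parameters.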
The H.264/AVC standard offers seven options to convey the macroblock allocation map information within the picture parameter set. The first six alternatives are patterns, whereas the seventh is used when the allocation map cannot be characterized by any of the six predefined patterns. In the following, we provide a brief overview of types 0 to 5. For FMO type 0, each slice group in a picture contains a maximum number of macroblocks that follow the sequential raster scan order. FMO type 1 uses a predefined dispersed pattern; the number of slice groups determines the macroblock arrangement of the allocation map. FMO type 2 is used with regions of interest, where higher quantization quality and resilience are applied compared with the background. FMO types 3-5 share the macroblocks over two different slice groups and are commonly known as evolving types. Fig. 1 shows the different FMO types.
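As a sketch, the dispersed pattern of FMO type 1 can be generated from the picture dimensions and the number of slice groups alone. The mapping below follows the formula given in the H.264/AVC specification for dispersed slice groups; treat it as an illustration rather than a reference implementation.

```python
def dispersed_map(pic_width_in_mbs, pic_height_in_mbs, num_groups):
    """FMO type 1 (dispersed): spreads consecutive macroblocks across
    slice groups so that a lost packet damages isolated, easily
    concealable macroblocks rather than a contiguous area.
    Returns a slice-group id per macroblock in raster-scan order."""
    n = pic_width_in_mbs * pic_height_in_mbs
    return [((i % pic_width_in_mbs)
             + (((i // pic_width_in_mbs) * num_groups) // 2))
            % num_groups
            for i in range(n)]
```

With two slice groups the result is a checkerboard, so every lost macroblock is surrounded by received neighbors that error concealment can interpolate from.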

IV. SVC IN SUPPORTING MEDICAL IMAGING
The SVC standard is an extension of MPEG-4 H.264/AVC video coding. The Joint Video Team of the VCEG and MPEG jointly standardized SVC [4]. From an encoded SVC bit stream, a partial bit stream can be extracted, transmitted, and decoded. The resulting decoded video has lower temporal resolution, lower spatial resolution, or lower quality. The video reconstructed from partial bit streams is comparable to the single-layer H.264 design. Hence, SVC can provide a medical imaging solution for heterogeneous networks and receiving devices that require an adaptation capability.
SVC encoding comprises intra/interlayer coding and hierarchical P- and B-frame prediction structures. SVC supports a flexible format for the real-time transport protocol interface. The bit stream scalability allows an adaptive bit rate for media streaming without requiring transcoding or re-encoding. The frame rate, spatial resolution, and picture quality in the IP network can also be managed easily under varying network conditions. The inherent characteristics of SVC, which include transmission robustness, compression efficiency, and flexibility for the interoperability of heterogeneous modalities and networks, make it a good choice for eHealth imaging. However, these medical images must follow certain health standards regarding format, structure, and transmission, as discussed in the following paragraphs.
DICOM is a standard for distributing, exchanging, and viewing medical images required for communication between medical imaging equipment [20], [21]. The objectives of DICOM are to achieve compatibility and improve workflow between various modalities and imaging systems in healthcare. Working Group 13 of the DICOM Committee has developed a new enhancement for MPEG-4 H.264/AVC encoded video sequences [22]. However, some healthcare applications in DICOM, such as diagnostic videos, still require functional development to integrate with the professional profile of MPEG-4 H.264/AVC. eHealth research requires an efficient transmission scheme to transfer patient diagnostic images and videos. To address the evolving needs of medical imaging, an improved coding scheme and framework are presented using the scalable extension of MPEG-4 H.264/AVC. The proposed scheme can support the emerging requirements of the DICOM standard, such as high resolution, color sampling, and bit depth.

V. PROPOSED IMPLEMENTATION
The content features of video frames in H.264/AVC can generally be detected from macroblock modes and motion vector mapping. In this research, we expand our earlier work on evaluating the motion activity in the macroblock to identify the background [23]. We also extend our scheme using FMO in slice coding mode and measure its performance.
The partition types in the background regions always comprise 8 × 16, 16 × 8, and 16 × 16 partitions. Conversely, the partition types for macroblocks (MBs) in the foreground regions involve 4 × 4, 4 × 8, and 8 × 8 partitions with active motion or textures. Identical mode scattering is also found in the base layer frames of SVC [24], [25]. Figure 2 shows the MB sizes and the partitions and subpartitions in a frame from a stomach ulcer endoscopy: the image in Fig. 2(a) is derived from the bit stream and depicts the macroblock sizes in the picture, and Fig. 2(b) shows the corresponding partitions and subpartitions. The motion vectors in Fig. 3(a) depict active motions due to interpredictions from forward and backward reference frames; the motion vectors in the video frame are largely clustered in the active regions. Contrastingly, the red arrows and differently sized partitions in Fig. 3(b) show the relationship between the macroblock partitions and subpartitions and the motion vectors in a frame. Figure 4(a) shows macroblock modes computed by optimized RD cost for each partition and subpartition (red, blue, and yellow dots represent MB_Intra, MB_Inter, and MB_SKIP, respectively).

A. MOTION VECTORS
To estimate the mode and its motion features, we analyze the motion activity computed by the following equation:

L = ||V_1 - V_2||_2    (1)

where L symbolizes the motion vector activity related to the base layer MB; ||.||_2 denotes the l_2-norm of the motion vector difference; and V_1 and V_2 denote the motion vectors of the two adjacent blocks, respectively. We evaluate the similarity between V_1 and V_2 by the Euclidean distance separating them.
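Eq. (1) reduces to a Euclidean distance between two 2-D motion vectors, as the following sketch shows (the function name is ours):

```python
import math

def motion_activity(v1, v2):
    """Eq. (1): motion activity L as the l2-norm of the difference
    between motion vectors V1 and V2 of two adjacent blocks,
    each given as an (x, y) displacement pair."""
    return math.hypot(v1[0] - v2[0], v1[1] - v2[1])
```

A value of zero means the two blocks move identically, which is the typical signature of a background region.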

B. MB MODE PARAMETER
Li et al. [24] investigated the correlation between the mode distributions of the base layer MBs and their interrelated enhancement layer MBs. For SVC spatial scalability, the same trends are present in the MB mode partitions of the base layer and its upsampled MBs in the enhancement layers. For SVC temporal scalability, the MB mode partition in a frame is almost identical to that of its reference frames. This relationship correlates strongly with the background in the enhancement layers. These findings inspired us to develop the MB-mode parameter (α) from the mode setting of the MBs in the base layer. The motion and texture features of the corresponding MBs in the higher layers are measured by α.
In extended spatial scalability (ESS), generally up to four MBs in the base layer may be required for an MB in an enhancement layer [26]. Fig. 5 shows two successive spatial layers: the base layer (right side) and the enhancement layer (left side; the blue grids show the enlarged, up-sampled base layer MB ''C''). (W_base, H_base) and (W_enh, H_enh) symbolize the width and height of the base layer and related enhancement layer frames, respectively. The base layer frame is a subsampled form of an extraction window of size W_extract × H_extract positioned partially or completely inside the enhancement layer frame at coordinates (x_orig, y_orig). W_extract/W_base and H_extract/H_base are the upsampling factors between the base layer frame and the extracted area in the enhancement layer frame. The mode factor (M) assigned to each partition type is listed in Table 1. The area factor (A) is the ratio of the area where the up-sampled base layer MB (uMB_b) coincides with the enhancement layer MB (MB_e) to the area of MB_e itself. In [27], the enhancement layer MBs are categorized into four types, namely: center, vertical, corner, and horizontal. Based on the MB_e characteristics, we compute the MB_e mode parameter (α) as follows:
1) When MB_e corresponds to a corner, uMB_b has no adjacent MBs (n = 0), and α is derived from the mode and area factors of uMB_b alone (2).
2) When MB_e corresponds to a vertical or horizontal edge, uMB_b has one adjacent neighbor uMB_b^n (n = 1), and α is computed accordingly (3).
3) When MB_e corresponds to the center, uMB_b has three neighbors uMB_b^n (n = 3), and α is computed accordingly (4).
Hence, a patient's images can be coded with a better peak signal-to-noise ratio (PSNR) gain and low complexity by applying the block-combining technique.
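Since Eqs. (2)-(4) differ only in how many neighbors of uMB_b contribute, their common shape can be sketched as an area-weighted combination of the mode factors M. This is our own illustrative reading, not the paper's exact formulas: the precise weights in Eqs. (2)-(4) depend on the MB_e position (corner, edge, or center).

```python
def mb_mode_parameter(mode_factors, area_factors):
    """Hypothetical sketch of the MB-mode parameter alpha: an
    area-weighted combination of the mode factors M of the
    up-sampled base-layer MB (index 0) and its n co-located
    neighbors (indices 1..n). Corner: n=0; vertical/horizontal
    edge: n=1; center: n=3. The exact Eqs. (2)-(4) are assumed,
    not reproduced, here."""
    assert len(mode_factors) == len(area_factors) > 0
    total_area = sum(area_factors)
    weighted = sum(m * a for m, a in zip(mode_factors, area_factors))
    return weighted / total_area
```

Under this reading, smaller partitions (high M) covering most of MB_e drive α up, flagging active content, while skip-mode neighbors (M = 0) drive α towards zero.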

C. MODE DECISION IN BACKGROUND
Distinct mode decision techniques are applied to derive MB_e in the background or active areas based on the MB mode and its active motion characteristics. In general, the higher the weightage of the mode factor M, the higher the MB complexity. As shown in Table 1, a mode factor with a high M weightage is allocated to smaller MB mode partitions and vice versa. For n = 0, the mode factor M is 0 (skip mode), as shown in Table 1, and the area factor A is set to 1. To decide whether MB_e belongs to the background or foreground regions, we compare α against a threshold T_H: MB_e is classified as background when α falls below T_H, where T_H = 0.36, as computed empirically. When MB_e lies in a background region, the MB_e partition type is directly set to the large partition (16 × 16). Hence, the derivation of the MB partition type is terminated early.
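The early-termination rule can be sketched as a simple threshold test. The α-below-threshold direction follows from the paper's observation that α is low for background regions; the function name and return convention are ours.

```python
T_H = 0.36  # empirical threshold reported in the paper

def classify_mb(alpha, t_h=T_H):
    """Early-termination rule for the mode decision: if alpha is
    below the threshold, the MB is treated as background and its
    partition is fixed to 16x16, skipping the exhaustive RD mode
    search; otherwise the full search continues."""
    if alpha < t_h:
        return "background", (16, 16)
    return "foreground", None  # full mode search continues
```

Every MB classified as background therefore costs a single comparison instead of an RD evaluation over all partition modes, which is where the complexity saving comes from.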

D. MB MODE PARTITION IN AN ACTIVE REGION
A different method is employed to compute the MB mode in an active or rich-texture region. In [26], the authors suggested a technique in which two smaller subpartitions are combined if they share similar motion vectors with the same reference index. In [25], the researchers showed that the motion vectors of contiguous blocks are similar when they correspond to the same content features. Exploiting these conclusions, we can combine two bordering smaller blocks into one large block, even if the absolute difference of their motion vectors is larger than the MV threshold set in [26]. Hence, we propose an alternative criterion for block combining in active regions that compares the activity L, adjusted by α, against a threshold T_active, where L denotes the activity calculated using (1); α is the MB-mode parameter calculated using (2), (3), or (4) based on the content features of MB_e; and T_active = 4 is set as an experimental threshold. Given that T_active is a static value, the correlation between V_1 and V_2 is adjusted by α, which reflects the motion and texture features of MB_e: α is low for background regions and high for active-motion or rich-texture regions.
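One plausible reading of this criterion, used here purely as an assumption since the exact inequality is not reproduced above, is to scale the static threshold T_active by α before comparing it with the activity L:

```python
def should_combine(l_activity, alpha, t_active=4.0):
    """Assumed block-combining criterion for active regions:
    combine two neighboring small blocks when the motion activity
    L (Eq. (1)) stays within the alpha-adjusted budget T_active.
    The exact comparison in the paper is not reproduced here;
    this sketch makes the threshold stricter as alpha grows,
    preserving detail in rich-texture regions."""
    return l_activity <= t_active / alpha
```

Under this assumption, a high-α (rich-texture) MB only merges blocks with nearly identical motion vectors, whereas a moderate-α MB tolerates larger vector differences than a fixed MV threshold would.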

VI. RESULT AND ANALYSIS
To validate the performance of the proposed scheme, we used available high-quality video sequences. In raw format, high-quality medical videos are usually inaccessible due to the privacy of patients' records; this research was therefore conducted on several sharable medical video sequences. The testing platform was an Intel Core i5, 2.5 GHz, with 4 GB RAM and the Windows 10 operating system. The research objective was to analyze the computational complexity and visual quality of the proposed approach in comparison with other contemporary schemes. The investigation included RD due to compression as well as distortion from packetized streams passing through error-prone communication channels. Our simulation used two common error-prone transmission scenarios, namely, IP networks and wireless networks. Errors in wireless networks are confined to small regions of a frame and arise from fading, attenuation, multiuser interference, and shadowing. Errors in IP networks affect larger regions of the frame. IP networks provide best-effort service; packet losses mainly occur in a congested network at intermediate nodes due to buffer overflow. Moreover, transmission errors in wireless and IP networks are transitory and appear as glitches in the video.

A. BIT STREAM GENERATION
In this research, four compressed videos were downloaded and transformed into raw (YUV format) video sequences: (i) an MRI sequence [28], (ii) a gastric cancer sequence [29], (iii) a brain neuron sequence [30], and (iv) a stomach ulcer sequence [31], as listed in Table 2. The high-profile H.264/AVC configuration and testing conditions were applied for the experiment. The success of the proposed scheme is determined by the PSNR-Y (dB) versus the bit rate savings (kbps). The well-accepted measurements [26], [32] were also used to evaluate success, i.e., the percentage change in the MB partitions and subpartitions (ΔMB), which is calculated as follows:

ΔMB = ((MB_pro − MB_orig) / MB_orig) × 100%

where MB_pro represents the total number of MB partitions and subpartitions for the proposed scheme, and MB_orig denotes the total number of MB partitions and subpartitions for the anchor as defined in the Joint Scalable Video Model (JSVM). A positive ΔMB for large partitions (16 × 16, 16 × 8) and negative ΔMB values for the 8 × 8, 8 × 4, 4 × 8, and 4 × 4 macroblocks reflect a decrease in computational complexity.
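The ΔMB measure is a plain percentage change, computed per partition type; a one-line sketch (function name ours):

```python
def delta_mb(mb_pro, mb_orig):
    """Percentage change in the count of a given MB partition type
    between the proposed scheme (mb_pro) and the JSVM anchor
    (mb_orig). Positive values for large partitions and negative
    values for subpartitions indicate reduced complexity."""
    return (mb_pro - mb_orig) / mb_orig * 100.0
```

For example, growing a partition count from 1000 to 2418 corresponds to the 141.8% increase reported for the 16 × 16 type.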

B. SIMULATION CONDITIONS
In this experiment, four error patterns with average packet loss rates (PLRs) of 3%, 5%, 10%, and 20% were used, as recommended in [33]. The important parameters were set to a 4:2:0 (progressive) video format, a group-of-pictures structure with an intra-coded and predicted frame pattern, YUV color spacing, and the CABAC symbol mode. The simulator dropped random packets, as specified in the error pattern, from an SVC-compressed bit stream. Then, the output H.264 bit stream was decoded, and the errors were recovered using the concealment tool as defined in the JSVM [32]. Other testing settings were as follows:
1) We used DICOM-compliant imaging: video graphics array (VGA) resolution (640 × 480) for the base layer and high-definition (HD) resolution (1280 × 720) with a 16:9 aspect ratio for the enhancement layer [21].
2) Lower PLRs occurred in the base layer than in the enhancement layer.
3) The packet loss simulator enforced a 3% PLR for the base layer bit stream and a 20% PLR for the enhancement layer bit stream, a condition that applies to error-prone wireless networks [34].
4) For error-prone IP networks, the simulator enforced a 5% PLR for the base layer bit stream and a 10% PLR for the enhancement layer bit stream.
5) Other relevant main configuration parameters are listed in Table 3.
The simulation testing used FMO for error resilience to measure performance at low complexity.
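The per-layer loss conditions above can be mimicked by a toy simulator that drops each packet independently with the target PLR. This is only a sketch; the JSVM error patterns used in the experiment are fixed trace files, not independent random drops.

```python
import random

def drop_packets(packets, plr, seed=0):
    """Toy packet-loss simulator: drops each packet independently
    with probability plr, approximating the error patterns applied
    to the base-layer (e.g. 3% or 5%) and enhancement-layer
    (e.g. 20% or 10%) bit streams. A fixed seed keeps runs
    reproducible."""
    rng = random.Random(seed)
    return [p for p in packets if rng.random() >= plr]
```

Running it separately on base- and enhancement-layer packet lists reproduces the asymmetric protection of the two scenarios (wireless: 3%/20%; IP: 5%/10%).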

C. OBJECTIVE ACHIEVEMENT
To evaluate the objective achievement of the proposed scheme, we measured the PSNR gain and block merging, i.e., the number of subpartitions converged into large partitions. We analyzed the success of the 'proposed' method compared with JSVM [26], [32], specified as the 'anchor', for concealed decoded sequences. Figure 6(a) shows the RD curves of the lung echo MRI sequence for the proposed and anchor methods, at HD resolution over the wireless network. Figure 6(b) shows the RD curves for the proposed and anchor schemes for the gastric cancer sequence. Figures 7(a) and (b) show the RD curves for the proposed and anchor schemes for the brain neuron and stomach ulcer endoscopy sequences, respectively. Our approach matched or slightly improved the image quality with regard to coding efficiency and PSNR gain. The ΔMB between the proposed and anchor algorithms is recorded in Table 4, reflecting the decrease in complexity relative to the anchor. The 4 × 8 and 4 × 4 subpartitions are not listed in Tables 4 and 5 because, during encoding, they converged for both the anchor and the proposed algorithms. As shown in Table 4, our proposed scheme increased the number of 16 × 16 partitions by 141.8% compared with the anchor, consequently lowering the number of MBs of the subpartition types in the various test sequences.
Figures 8(a) and (b) show the RD curves for the proposed and anchor schemes over the IP network (5% PLR for the base layer and 10% PLR for the enhancement layer) for the lung echo MRI sequence and the gastric cancer sequence, respectively. Figures 9(a) and (b) show the RD curves for the brain neuron sequence and the stomach ulcer endoscopy, respectively. Our approach is comparable in frame quality with regard to coding efficiency and PSNR. Table 5 records the ΔMB between the proposed and anchor algorithms for this scenario. According to Table 5, our proposed scheme raised the number of MBs in the 16 × 16 partition type by 143% compared with the anchor, while the number of MBs in the subpartition types was reduced across the various video sequences.
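The PSNR-Y figure behind the RD curves is computed from the mean squared error of the luma plane. A minimal sketch (function name ours, frames given as flat lists of luma samples):

```python
import math

def psnr_y(ref, rec, max_val=255.0):
    """PSNR of the luma (Y) plane between a reference and a
    reconstructed frame, the quality metric plotted on the RD
    curves. Frames are flat sequences of luma samples of equal
    length; 8-bit video uses max_val = 255."""
    assert len(ref) == len(rec) and len(ref) > 0
    mse = sum((a - b) ** 2 for a, b in zip(ref, rec)) / len(ref)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Plotting this value against the encoded bit rate for each quantization setting yields the RD curves compared between the proposed scheme and the JSVM anchor.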

D. SUBJECTIVE ACHIEVEMENT
We also evaluated the subjective quality of the decoded videos transmitted through the wireless and IP networks. Selected frames from different video sequences were decoded with the pYUV viewer tool [35] and compared for both the proposed and the anchor algorithms, including:
3) Gastric cancer: 12th frame coded with QP_BL = 30 and QP_EL = 32
4) Stomach ulcer endoscopy: 14th frame coded with QP_BL = 30 and QP_EL = 32
Moreover, as shown in Figs. 10-13, some visual contents of the decoded HD frames are unclear due to limitations of the manuscript page size, especially the subjective quality variations between the proposed and anchor methods shown in Figs. 10(c)-13(c). Tables 4 and 5 validate that the proposed scheme considerably reduces computation for wireless and IP networks without degradation of either PSNR or RD performance. The resulting high-quality medical images hold promise for eHealth applications, especially when transmitted in error-prone environments.

E. STUDY CONSTRAINTS
The proposed technique considerably reduces complexity for homogeneous regions, as the search for the optimal MB mode is terminated early. For non-homogeneous regions, it merges constituent 4 × 4 blocks whose vector differences are less than the threshold. However, due to inherent encoding and transmission constraints, the concealed quality of images may differ subject to the following:
1) Loss of an instantaneous decoder refresh frame is the most severe case, as synchronization depends on it.
2) Loss of an intra-coded or predicted frame results in persistent distortions of the video quality.
3) Loss of base layer MBs results in a loss of the corresponding enhancement layer MBs, owing to the loss of interlayer prediction.

VII. CONCLUSION
Our research proposes a fast MB mode selection technique to reduce the complexity of high-definition medical images passing through error-prone environments. Flexible macroblock ordering offers a noteworthy approach to reduce the influence of errors inherent to wireless and IP networks. We exploit the content features of the video to reduce complexity, combined with flexible macroblock ordering for error resilience. Given that high-definition medical videos are extremely large, multimedia services with low complexity are an encouraging advancement for healthcare systems. The experimental outcomes confirm that the proposed scheme achieves a significant improvement, increasing the use of large partitions by an average of 143% and thereby reducing complexity without compromising visual quality.
MUHAMMAD IMRAN received the Ph.D. degree in information technology from Universiti Teknologi PETRONAS, Malaysia, in 2011. He is currently an Associate Professor with the College of Applied Computer Science, King Saud University, Saudi Arabia. His research is financially supported by several grants. He has completed a number of international collaborative research projects with reputable universities. He has published more than 150 research articles in peer-reviewed, well-recognized international conferences, and journals. Many of his research articles are among the highly cited and most downloaded. His research interests include the Internet of Things, mobile and wireless networks, big data analytics, cloud computing, and information security. He has been consecutively awarded with an Outstanding Associate Editor of IEEE ACCESS, in 2018 and 2019 besides many others. He has served as an Editor-in-Chief for the European Alliance for Innovation (EAI) Transactions on Pervasive Health and Technology. He has served/serving as a Guest Editor for about two dozen special issues in journals, such as the IEEE Communications Magazine, the IEEE Wireless Communications Magazine, Future Generation Computer Systems, IEEE ACCESS, and Computer Networks. He is also serving as an Associate Editor for top ranked international journals, such as IEEE Communications Magazine, the IEEE NETWORK, Future Generation Computer Systems, and IEEE ACCESS. He has been involved in about 100 peer-reviewed international conferences and workshops in various capacities, such as the chair, the co-chair, and a technical program committee member. He is currently a Faculty Member with the Information Technology Department, Faculty of Computer and Information Technology, King Abdulaziz University. He has been serving as a faculty member and a research supervisor at various other universities, since 2001. He has also been involved in several funded projects as PI and Co-PI. 
He has published several articles in reputed journals and conferences. He is also a member of several scientific and professional bodies.