Deep-Learning-Based System for Change Detection Onboard Earth Observation Small Satellites

In recent years, rapid growth in the number, capability, and diversity of Earth observation (EO) satellites has resulted in dramatic increases in payload data volume and rate. However, this exponential increase in generated data is creating a significant bottleneck onboard EO satellites due to transmission bandwidth limits and communication delays. Onboard processing of imaging payload data can alleviate this bottleneck and also facilitate rapid response for decision-making operations. Change detection is one of the most significant functions in onboard payload data processing systems, enabling real-time reaction to natural disasters such as flooding, earthquakes, and volcanic eruptions. In this article, we address the problem of automatic change detection onboard EO satellites. We aim to design an automatic onboard change detection system (OCDS) that can run on existing flight-proven hardware by taking advantage of the attractive features of a leading deep learning (DL) model, the convolutional neural network. The contribution of this article is twofold. First, an efficient DL-based algorithmic solution for change detection that fulfills space environment-induced constraints is proposed. Second, a preliminary hardware architecture of the proposed OCDS is designed based on flight-proven payload data processing hardware. The experimental results demonstrate the efficiency of the proposed DL-based change detection approach and the suitability of the designed OCDS for deployment on EO small satellites.


I. INTRODUCTION
Earth observation (EO) space missions are today experiencing rapid advancements in imaging instrument and sensor technologies, resulting in an increase in the achievable spatial, spectral, temporal, and radiometric resolutions. Consequently, the quantity and diversity of image data provided by EO satellites have increased significantly. However, satellite-to-ground transmission of the generated data is increasingly becoming a bottleneck, as the developments in sensor technology have not been matched by equivalent advancements in satellite downlink technologies. Increasing transmission throughput might not be an option to alleviate this bottleneck due to the induced costs. An alternative option, which is establishing itself as a new trend for several space missions, lies in intelligent onboard processing of imaging payload data [1], [2], [3]. Instead of transmitting to ground the whole volume of EO raw data generated onboard the satellite, the amount of data to downlink can be reduced by transmitting only the low-volume data products resulting from onboard processing, consequently minimizing downlink requirements. Onboard data processing can also facilitate rapid response for decision-making operations. Onboard image data processing has gained ever-increasing acceptance within the last 20 years due to the growth of onboard satellite computation capabilities. Recent technological advancements in high-performance commercial off-the-shelf computing platforms have made the onboarding of data processing algorithms possible [4]. The earliest onboard data processing systems incorporated in EO satellite missions were image compression and cloud detection [3], [5]. The bispectral infrared detection mission [6] demonstrated the feasibility of fire recognition from space based on onboard image data processing.
In [7], the authors reported the first demonstration of fully automatic hyperspectral scene analysis, feature discovery, and mapping onboard the EO-1 spacecraft. The experiments described are the first instance of autonomous spacecraft detection of spectral endmembers and the first hyperspectral mineralogical mapping onboard a spacecraft. A real-time orbital cloud detection method using decision forests deployed onboard the IPEX spacecraft has been described in [8]. As claimed by the authors, this was the first-ever deployment of visible-image-based cloud screening onboard any operational spacecraft. Recently, Mateo-Garcia et al. [9] demonstrated the feasibility of flood mapping onboard CubeSats using machine learning (ML), proposing a flood segmentation algorithm that produces flood masks to be transmitted instead of raw images.
Among image data processing capabilities that could be implemented onboard EO satellites, change detection plays an important role by enabling a rapid reaction to dynamic events, such as flooding, earthquakes, and volcanic eruptions [5]. Change detection is the process of identifying temporal changes in satellite images that cover the same area of the Earth's surface [10]. An approach to disaster monitoring using an automatic change detection system onboard small satellites that features image tiling and fuzzy inference has been proposed in [5] for the purpose of flood detection.
Over the past decade, artificial intelligence (AI), in particular deep learning (DL) has shown great potential to tackle the problem of automatic change detection tasks in remote sensing applications. DL has been proven to improve change detection accuracy thanks to its attractive features especially its ability to extract high-level visual features and to provide rich semantic information not considered by traditional methods [11].
Change detection using DL has attracted major attention in recent years, and a number of methods have been published in the literature. Several literature surveys focusing on the state-of-the-art methods, applications, and challenges of DL for automatic change detection in the field of EO have been conducted recently [10], [11], [12], [13]. However, the reported DL-based change detection methods were not designed to be deployed onboard a satellite. Relatively few attempts have addressed DL-based change detection with the specific purpose of onboard satellite deployment. Recently, Růžička et al. [14] introduced RaVAEn, a lightweight, unsupervised approach for change detection in satellite data based on variational autoencoders and intended for onboard deployment.
The choice of hardware and processing algorithms drives the performance of any onboard data processing [15]. In this article, we intend to address the problem of change detection onboard EO small satellites. For this purpose, the focus will be on the proposal of an algorithmic solution and a hardware design for an onboard change detection system (OCDS). This is part of ongoing work that focuses on the development of a global OCDS to be integrated onboard the platform of the upcoming Algerian EO mission.
To fulfill an optimized onboard computing solution, a computationally simple yet effective automatic change detection algorithm based on the leading model in DL named convolutional neural network (CNN) is at first proposed. The key idea is to use a pretrained CNN as a black-box feature extractor to automatically extract deep features from images to be used for producing a change detection map. Bypassing the training phase of the CNN, which is the most time-consuming step, results in a relatively lightweight change detection framework suitable to space environment-induced computing limitations. The effectiveness of the proposed change detection method is evaluated using real satellite images.
Following that, a preliminary hardware architecture for the change detection system is designed based on existing payload data processing flight-proven hardware previously flown on the Alsat-1B Algerian satellite.
The computation performance of the proposed OCDS is estimated, and the results are compared to the computation capabilities of the hardware flown on the Alsat-1B platform to assess the suitability of the proposed OCDS for onboarding on the future Algerian EO satellite. The proposed algorithmic approach for change detection was implemented in MATLAB, and the well-known computing benchmark program Dhrystone 2.1 was used to reproduce the computation performance of the Alsat-1B hardware, since the flight hardware was not available.
The rest of the article is organized as follows. In Section II, the benefits of DL for space applications are first highlighted; afterward, the proposed algorithmic approach for onboard change detection is detailed and at the end, experimental results on the accuracy assessment of the proposed change detection algorithm with real satellite images are presented. Section III outlines the preliminary hardware architecture of the proposed OCDS for EO small satellites and the operational scenario. In Section IV, the computing performance of the proposed system is tested and the results are compared to the computation capabilities of the hardware flown on the Alsat-1B platform. Finally, Section V concludes this article.

II. DEEP-LEARNING-BASED ALGORITHMIC APPROACH
Onboard change detection plays a critical role in monitoring natural disasters by enabling a rapid reaction to dynamic events, such as flooding, earthquakes, and volcanic eruptions [5]. Nevertheless, unlike other onboard image processing functions, change detection is relatively less developed due to the complexity of the change detection process [5], [11].
Change detection aims to identify changes that have occurred on the Earth's surface. In a nutshell, the process of change detection involves taking as inputs two images or more covering the same geographical area and captured at different times. An important preprocessing step that consists of the operation of registration is needed to align the images into the same coordinates system. The resulting output of this process is a change map (CM) that comprises change and no-change information.
In this article, we aim to address the problem of change detection onboard EO small satellites. To achieve an optimized onboard computing solution, we need an algorithm that is simple to implement, fast to execute, and of reasonable accuracy and robustness. Toward this goal, our proposed approach for onboard change detection is based on DL due to its attractive features, particularly its ability to extract high-level visual features that are semantically rich. Furthermore, DL architectures are suitable for deployment in space onboard satellites under specific constraints related mainly to the execution time and the memory required for storage and implementation [3], [16]. In this article, we take advantage of the leading model in DL, the CNN, to address onboard change detection.

A. Deep Learning for Applications in Space
Space offers a complex, constrained, and challenging environment to software and hardware designers. Size, weight, and power (SWaP) constraints along with radiation effects are the most defining challenges for onboarding data processing on satellites.
Due to these space environment-induced resource constraints, the development of space data processing systems usually encounters limitations that are not always experienced on Earth. Consequently, when designing algorithms for onboard satellite data processing, it is important to find solutions that fit the SWaP constraints while ensuring a tradeoff between performance and reliability (which are often at odds) in the space environment [17], [18].
AI is now progressively finding a central role in various aspects of space missions, including general spacecraft operations such as guidance, navigation, and control [19], as well as mission planning [20]. In particular, during the last decade, ML, mainly DL, has emerged as a promising option for performing onboard EO image data processing [16], [21]. The first-ever in-flight demonstration of DL for onboard image data processing was performed in the frame of the PhiSat-1 ESA nanosatellite mission [22]. The PhiSat-1 imager, named HyperScout-2, was equipped with a state-of-the-art vision processing unit developed by Intel to investigate the use of AI for a variety of applications in the field of object detection and data inference. The first application run onboard was cloud screening [23].
The principal attractive features that make DL models well-suited for data processing onboard satellites are as follows [24]:
1) DL models are trainable and consequently adaptive.
2) DL models have a massively parallel structure, which means they can run at high speed and can be made fault tolerant. These characteristics enable DL models to conform to space environment-induced computing limitations as well as radiation effects.
In the past decade, there has been a push for the deployment of ML, principally DL, in space for the following reasons [3], [16]:
1) Advancements in hardware and recent research developments in DL algorithms for constrained environments have started to converge to a point that enables DL models to be deployed in space.
2) Modern space computing platforms are moving toward reconfigurable multicore systems capable of running DL models in the highly constrained environment of a spacecraft.
3) The requirements of space applications and terrestrial technologies are becoming progressively aligned.

B. Proposed Deep-Learning-Based Change Detection Approach
CNNs have emerged as the leading models in DL for image processing since 2012 [25]. CNNs have considerably improved performance in image processing tasks, including remotely sensed satellite image processing [26]. Indeed, CNNs have been proven to be very efficient in extracting high-level visual features that are semantically rich [27]. A CNN is a multilayer deep architecture that captures levels of growing abstraction and complexity throughout the feature hierarchy and can learn powerful discriminative features by performing convolutions in the image domain. This capability makes the CNN a very good feature extractor. Consequently, CNNs have recently become a standard way of automatically learning deep features from image data [3].
The key idea behind our approach is to use a pretrained CNN as a black-box feature extractor to automatically extract deep features from images, which are then used to produce a change detection map. This results in a fast, low-complexity method suitable for onboard change detection. The benefit of using a pretrained CNN is to bypass the training phase of the neural network, which is the most difficult aspect of using CNNs in general because it is the most time-consuming step, both in terms of the effort required to set up the process and the computational complexity needed to run it.
Unlike image processing tasks that process only a single image, such as image compression and image encryption, change detection is not an easy task. It is a multitemporal analysis based on the comparison of two or more coregistered images, acquired over the same geographical area, to identify changed regions [5]. Fig. 1 depicts the flowchart of the proposed algorithm. Our change detection framework is made up of the following five main steps.
1) Preprocessing Coregistration: The preprocessing step, involving the coregistration operation, is an important component of the change detection process. Coregistration aims to mitigate geometric distortions and misalignments in the multitemporal optical satellite images caused by differences in acquisition conditions. Because the change detection process is sensitive to registration errors, the quality of the image registration is of particular importance [5]. In our case, we used a fast and robust optical-flow-based method proposed for the coregistration of remote sensing images [28].
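The coregistration method of [28] is optical-flow based; as a simplified illustration of the alignment idea (a stand-in, not the method of [28]), the sketch below recovers a purely global integer translation between two acquisitions via phase correlation:

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate a global integer (row, col) translation between two
    single-band images via phase correlation (normalized cross-power
    spectrum in the Fourier domain)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moving)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12  # keep only phase information
    corr = np.fft.ifft2(cross).real
    shifts = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    # wrap shifts larger than half the image size to negative offsets
    for ax, size in enumerate(ref.shape):
        if shifts[ax] > size // 2:
            shifts[ax] -= size
    return tuple(shifts)

def coregister(ref, moving):
    """Align `moving` onto `ref` by undoing the estimated translation."""
    dy, dx = estimate_shift(ref, moving)
    return np.roll(moving, shift=(dy, dx), axis=(0, 1))
```

Real coregistration must additionally handle rotation, scale, and local parallax, which is why a dense optical-flow method is used in practice.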
2) Multilayer Deep Feature Extraction: After coregistration, the multitemporal deep features are obtained by passing the coregistered multitemporal images (prechange and postchange images) separately as input to a pretrained CNN and extracting features from some layers of the CNN that can learn powerful discriminative features. A CNN architecture is made up of many layers and each layer, in turn, is made up of many features that have learned certain complex visual concepts during the training process.
Although the pretrained CNN network is used like a black-box feature extractor, it is important to choose a suitable pretrained CNN for the change detection context and to select appropriate layers to extract features from the CNN. CNN architectures that can model remote sensing images are the most suitable candidates for the process of change detection. Furthermore, a balance must be struck when selecting layers to extract features by combining lower level visual features and complex visual features.
In our case, we used the pretrained CNN proposed in [29] for the following reasons: 1) this pretrained CNN has been trained on a remote sensing image dataset and is therefore appropriate to our mission, and 2) the CNN architecture proposed in [29] can accept four channels as input, while most pretrained CNNs proposed in the literature are trained to accept only three channels (RGB: red, green, and blue) as input images [30]. As such, the pretrained CNN proposed in [29] is well suited for Alsat-1B multispectral images, which are composed of four bands or channels: RGB and near-infrared (NIR). This latter channel (NIR) is very useful for change detection, particularly for vegetation analysis.
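Since the pretrained network is used strictly as a black-box feature extractor, the extraction step can be illustrated with fixed hand-crafted kernels standing in for learned CNN weights (a hedged sketch; the actual system uses the pretrained CNN of [29]):

```python
import numpy as np

def conv2d(img, kernel):
    """'Same'-size 2-D cross-correlation (the operation used by CNN
    'convolution' layers) of a single-band image with one kernel."""
    kh, kw = kernel.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * kernel)
    return out

def extract_features(img, kernels):
    """Stand-in for step 2: pass an image through one convolutional
    layer (fixed kernels instead of learned weights) followed by ReLU,
    returning the stack of feature maps."""
    return np.stack([np.maximum(conv2d(img, k), 0) for k in kernels])
```

In the actual framework, each band of the prechange and postchange images is fed to the pretrained CNN and feature maps are read out from the selected layers.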
As we use the CNN model as a feature extractor, only convolutional layers are used to extract deep features. The convolutional layers of a CNN are in fact responsible for extracting semantically rich deep features because they act as filters in the image domain. In order to combine lower level visual features and complex visual features, a set of layers, L, out of the total number of the CNN's layers, N, is selected from the lower to the deeper convolutional layers of the CNN.
3) Intralayer Difference Image Generation: After deep feature extraction, the next step is the generation of the intralayer difference image. Let F_1^i and F_2^i be the feature maps extracted from each layer i in L and corresponding to the prechange and postchange images, respectively. These feature maps are first upsampled to the same spatial size as the input image by bilinear interpolation. A deep layerwise difference map, D^i, is then computed by subtracting F_1^i from F_2^i for each layer i ∈ L.
4) Intralayer Feature Map Selection: The number of feature difference maps obtained at each layer is relatively high (up to 128 at the third convolutional layer). Because some of these feature difference maps may contain information relevant to change detection while others do not, relevant feature maps must be identified to reduce their number. Toward this goal, a feature selection approach based on a variance calculation is used. Since feature values vary strongly between changed and unchanged pixels, it is assumed that features carrying potentially pertinent change information have higher variability than those less affected by changes. Accordingly, the variance is used as an index of sensitivity to change information. For each layer i ∈ L, the variance of the obtained difference maps is calculated, and the difference map with the highest variance, D_var^i, is then selected.
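Steps 3) and 4) can be sketched as follows, assuming the per-layer feature maps of the two dates are available as lists of 2-D arrays (shapes and names are illustrative; the bilinear resampling and variance ranking mirror the description above):

```python
import numpy as np

def bilinear_upsample(fmap, out_h, out_w):
    """Bilinearly resize a 2-D feature map to (out_h, out_w)."""
    in_h, in_w = fmap.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = fmap[np.ix_(y0, x0)] * (1 - wx) + fmap[np.ix_(y0, x1)] * wx
    bot = fmap[np.ix_(y1, x0)] * (1 - wx) + fmap[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def select_difference_map(feats1, feats2, out_shape):
    """Steps 3-4: upsample each channel's feature maps for the two
    dates, subtract them, and keep the difference map whose variance
    is highest (the variance acts as a change-sensitivity index)."""
    diffs = [bilinear_upsample(f2, *out_shape) - bilinear_upsample(f1, *out_shape)
             for f1, f2 in zip(feats1, feats2)]
    variances = [d.var() for d in diffs]
    return diffs[int(np.argmax(variances))]
```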
5) Binary Change Map Generation: In this step, unchanged and changed pixels are discriminated based on the assumption that unchanged pixels produce similar deep features, whereas changed pixels produce dissimilar ones. Consequently, the generation of the binary CM is conducted in the following two stages. a) Intralayer binary change detection: After selecting the feature map with the highest variance for each layer i, the k-means clustering algorithm followed by unsupervised thresholding [31] is used to generate the binary CM, CM^i, for each layer i ∈ L.
The k-means clustering algorithm is used to generate two clusters, w_c and w_u, of changed and unchanged pixels according to the class average value calculated over the difference image. In regions where there is a genuine change between the two images, the pixel values are expected to be higher than in regions where no change occurs. According to this assumption, a binary CM can be created, in which "1" indicates that the corresponding pixel location involves a change, while "0" indicates no change. This process can be viewed as unsupervised thresholding according to

CM^i(m, n) = 1 if ||v(m, n) − v_wc||_2 ≤ ||v(m, n) − v_wu||_2, and 0 otherwise

where ||·||_2 is the Euclidean distance, v(m, n) is the deep feature vector at pixel spatial location (m, n), and v_wc and v_wu are the cluster mean feature vectors for classes w_c and w_u, respectively. b) Interlayer change map fusion: The final CM is generated by fusing the interlayer CMs in order to jointly consider all the binary detected changes across all layers. The rationale is that any truly noticeable change should be detected in all layers. The final CM is given by

CM = ∩_{i ∈ L} CM^i

where ∩ performs the binary AND operation.
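A minimal sketch of stages a) and b), using a two-cluster 1-D k-means on difference magnitudes followed by a binary AND across layers (illustrative; the actual implementation follows the k-means thresholding of [31]):

```python
import numpy as np

def kmeans_binary_map(diff, iters=20):
    """Two-cluster 1-D k-means on difference magnitudes: pixels closer
    to the high-magnitude cluster centre are labelled 1 (changed),
    the rest 0 (unchanged)."""
    mag = np.abs(diff).ravel()
    c_u, c_c = mag.min(), mag.max()  # unchanged / changed cluster centres
    for _ in range(iters):
        changed = np.abs(mag - c_c) < np.abs(mag - c_u)
        if changed.any() and (~changed).any():
            c_c, c_u = mag[changed].mean(), mag[~changed].mean()
    return changed.reshape(diff.shape).astype(np.uint8)

def fuse_layers(change_maps):
    """Interlayer fusion: a pixel is marked changed only when every
    layerwise binary map agrees (binary AND across all layers)."""
    fused = change_maps[0]
    for cm in change_maps[1:]:
        fused = fused & cm
    return fused
```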

C. Experimental Results
This section presents experimental results on the accuracy assessment of the proposed change detection algorithm with real satellite images.
The proposed change detection approach was tested and validated on two openly available change detection datasets. The first one is the SZTAKI AirChange benchmark set [32] (AC), and the second is the ONERA satellite change detection dataset (OSCD) [33]. AC contains RGB aerial images (a ground truth collection for change detection in optical aerial images captured several years apart), while OSCD contains multispectral satellite images acquired by the Sentinel-2 satellite. The Sentinel-2 dataset has been used because it is quite similar to Alsat-1B imagery in terms of spectral bands and medium spatial resolution. RGB images of the two temporal datasets used for testing and the corresponding ground truth CM are presented in Fig. 2.
The first three convolutional layers of the pretrained CNN (L = 3) have been selected for this experiment. This selection covers the lower and the deeper convolutional layers of the CNN in order to combine lower level visual features and complex visual features. Fig. 3 illustrates examples of feature maps resulting from the first three convolutional layers: conv1, conv2, and conv3, respectively, for the SZTAKI dataset.
As we can see, the lower layers of the CNN capture primitive features, such as edges, while deeper layers capture semantic information.
To assess the efficiency of the proposed method, a comparative study was conducted with three classical baseline change detection methods, namely change vector analysis [34], multivariate alteration detection [35], and principal component analysis with k-means [31]. Fig. 4 shows the detection results for the two datasets. By visual inspection, it can be seen that the proposed method gives the best results compared with the other methods. Table I contains the quantitative evaluation of the proposed approach against the three considered baselines in terms of the following metrics: overall accuracy, F-measure, area under the curve, and kappa coefficient. As can be seen, the proposed method provides better quantitative results than the other approaches.
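The reported accuracy metrics follow directly from the binary confusion matrix; a self-contained sketch using the standard definitions (not the authors' evaluation code):

```python
import numpy as np

def change_detection_metrics(pred, truth):
    """Overall accuracy, F-measure, and kappa coefficient for binary
    change maps (1 = changed, 0 = unchanged)."""
    pred, truth = pred.ravel().astype(bool), truth.ravel().astype(bool)
    tp = np.sum(pred & truth);  tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth); fn = np.sum(~pred & truth)
    n = tp + tn + fp + fn
    oa = (tp + tn) / n
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    # expected agreement by chance, used by the kappa coefficient
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (oa - pe) / (1 - pe) if pe < 1 else 1.0
    return {"OA": oa, "F1": f1, "kappa": kappa}
```

The area under the curve additionally requires continuous change scores (before thresholding), so it is omitted from this sketch.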

III. PRELIMINARY HARDWARE ARCHITECTURE OF THE PROPOSED OCDS
In this section, we describe the preliminary hardware architecture of the proposed OCDS for EO small satellites. Our objective is to design a change detection solution that can run on flight-proven hardware previously flown on Alsat-1B.
On September 26, 2022, the Algerian medium-resolution EO satellite, Alsat-1B, celebrated its sixth year in orbit. Alsat-1B was built in the framework of a collaborative mission between the Algerian Space Agency and Surrey Satellite Technology Ltd. (SSTL). Alsat-1B carries an optical imaging payload, called the ALgerIan Telescope (ALITE), based on a push-broom concept, and provides 12-m panchromatic imagery and 24-m multispectral (MS) imagery in four bands (RGB and NIR) along a 140-km-wide swath with 10/12-bit radiometric resolution [36].
The target platform on which the proposed OCDS is intended to be integrated is that of the upcoming Algerian EO mission, which will be based on the SSTL heritage previously flown on Alsat-1B. The front-end electronics (FEE) design is mainly based on analog-to-digital converters and a ProASIC3 field-programmable gate array (FPGA); it receives telecommands (TC) and sends telemetry (TLM) from/to the onboard computer (OBC750) through the controller area network bus.

A. Alsat-1B Payload Chain Overview
The HSDR is a data recording device for space applications, comprising a mass memory block and providing processing capabilities. The design of the HSDR is centered on an in-orbit reprogrammable Xilinx Virtex-4 FPGA, including several functional blocks, and a double data rate (DDR2) payload memory of 16 GB. The FPGA contains two embedded PowerPC 405 processors (only one is used by the HSDR) [38]. The data storage system incorporates a 16-GB data recorder (the DDR2 payload memory of the HSDR) with an additional 256 GB of FMMU storage provided to act as a secondary mass memory device operating in a cold redundant configuration [38]. Thus, the total data storage capacity is 272 GB per payload chain. For one MS image composed of four spectral bands of 7064 × 6144 pixels each, coded on 10 bits, a memory of 207 MB is needed. Alsat-1B is also capable of taking a strip of 37 images continuously in the same orbit [37].
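The storage figures above can be checked with simple arithmetic (MB here meaning 2^20 bytes):

```python
# Storage arithmetic for one Alsat-1B MS image: four bands of
# 7064 x 6144 pixels, each sample coded on 10 bits.
BANDS, ROWS, COLS, BITS = 4, 7064, 6144, 10

bytes_per_image = BANDS * ROWS * COLS * BITS // 8
mb_per_image = bytes_per_image / 2**20   # ~207 MB, as stated in the text
strip_mb = 37 * mb_per_image             # one continuous 37-image strip
```

A full 37-image strip therefore occupies roughly 7.5 GB, comfortably within the 272-GB capacity of one payload chain.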
During nominal imaging operations, the FEE communicates raw image data to the HSDR via low-voltage differential signaling (LVDS) interfaces. The HSDR then performs several processing operations, including digital gain application, bit truncation, compression, and encryption, when instructed to do so. The data are then streamed out to the FMMU through the LVDS interlink in the case of a high-capacity image, or remain on the HSDR in the case of a high-priority image [37]. The HSDRs typically operate in a cold redundant configuration.
ALITE imager data are downloaded as a set of scenes to the ground via the spacecraft's radio frequency transmitters using high-speed communications at the X-band (or via the S-band as a redundant chain).

B. Preliminary Hardware Architecture
The goal of the OCDS that we propose is to provide a capability to detect temporal changes in newly taken satellite optical images based on comparison with reference images. This system is designed to be implemented within the FPGA circuit of the HSDR as a functional block as shown in Fig. 6.
The proposed system contains a database to store and supply reference images. To this end, the FMMU storage unit is used to implement this database. The FMMU is a slave data recorder designed to interface directly with the HSDR and provides 256 GB of nonvolatile storage [38]. Images can enter the OCDS in two ways: either directly from the imager via the payload interface, or from the database via the FMMU interlink. When an image is received from the imager, the system begins operating. The newly captured image passes to the change detection processing block via the DDR wrapper, which performs change detection by comparing the image with a reference image fetched from the database. The CM produced by the system is sent via the DDR wrapper to the downlink processor for prompt downlink transmission or to other functional blocks (image compression and image encryption). The newly captured image is also stored in the database to be used as the reference image for the following processing task.
The digital processing unit (DPU) of the attitude and orbit control system offers additional information for the subsequent tasks of image coregistration and identification of the reference image in the database. GPS data included in geolocation auxiliary files (GAF) are provided along with a timestamp from the GPS unit via the GAF receiver.
The remaining functional blocks within the HSDR's FPGA are described as follows.
PPC wrapper and PPC controller: The Xilinx FPGA used on the HSDR has an embedded PowerPC (PPC) microprocessor. These components serve as an interface between the PPC and the rest of the FPGA fabric.
PPC registers: The PPC registers provide an interface between the software running on the PPC and the functional blocks within the FPGA.
DDR wrapper: The DDR wrapper acts as an interface between the HSDR's bulk storage (16 GB of DDR2 memory) and any components that require access to this storage.
DMA controller: The direct memory access controller is used to allow moving large volumes of data around within bulk storage while leaving the PPC free to continue other tasks.
Uplink SRx0 and SRx1: These uplink blocks are used to receive data in the form of IP packets from the ground station via either of the S-band receivers SRx0 and SRx1.
GAF receiver: This GAF receiver is used to receive incoming GAFs from the DPU.
Timer: The timer keeps the UNIX time for the HSDR, synchronizes with the incoming pulse-per-second signal from the OBC750, and provides accurately timed start pulses for the imaging operations.
Downlink processor: This downlink processor is used to output payload data via the X-band. This component also buffers incoming TLM packets from the OBC and inserts them into the stream of downlinked payload data.
Encryption block: The AES encryption block is used to apply 128-bit AES encryption to image data that have been stored in DDR bulk storage prior to being downloaded.
FMMU interlink: This FMMU interlink is used to transfer data between the HSDR and the FMMU.
Payload interface: The payload interface is used to capture data from the ALITE imager.

C. Operational Scenario
Image targets are scheduled through the ground software at the ground station. This ground software tool is used to calculate the imaging opportunities for any target and then generate a TC file including the area, the date, and the time of imaging. The satellite therefore performs image acquisition when instructed to do so.
When an event happens, the satellite will be instructed through a TC file to perform an image acquisition over the damaged area. The newly captured image passes to the proposed change detection processing block, which performs change detection by comparing the image with a reference image fetched from the database. Consequently, instead of downlinking the full set of MS images [eight (08) spectral bands for the bitemporal pair], only the CM image, which is the output of the change detection system, is downlinked. This enables rapid damage assessment.
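The downlink saving can be quantified under the assumption that the CM is encoded at one bit per pixel (an illustrative assumption; the actual CM encoding is not specified in the text):

```python
# Downlink-volume comparison: two 4-band MS images versus one
# binary change map (hypothetically encoded at 1 bit/pixel).
ROWS, COLS, BANDS, BITS = 7064, 6144, 4, 10

raw_mb = 2 * BANDS * ROWS * COLS * BITS / 8 / 2**20  # bitemporal pair
cm_mb = ROWS * COLS / 8 / 2**20                      # 1-bit change map
reduction = raw_mb / cm_mb                           # data-volume ratio
```

Under this assumption, the CM is roughly 80 times smaller than the bitemporal image pair it replaces.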

IV. COMPUTATION PERFORMANCE
This section presents computation performance results for the proposed change detection system and discusses the onboard computing resources required. Our objective is to develop a change detection solution that can run on existing flight hardware. The target platform, as stated previously, is that of the future Algerian EO satellite, which will be based on the SSTL heritage flown on Alsat-1B.
As mentioned before, on the Alsat-1B platform, payload data is captured, processed, compressed, and stored in the HSDR [38]. The design of the HSDR is centered on a reprogrammable in orbit Xilinx Virtex 4 FPGA and DDR2 payload memory of 16 GB. The FPGA contains two embedded PowerPC 405 processors. The change detection system to be developed is aimed to be implemented in the HSDR. Consequently, the PowerPC-based HSDR unit will be used for the computing performance evaluation presented here.
The following assumptions are made to estimate the required processing time.
1) The proposed OCDS must be fast enough to detect changes from a pair of bitemporal MS raw images, knowing that each MS raw image is composed of four bands (RGB and NIR) with a size of 7064 × 6144 pixels each.
2) Because the orbit period (defined as the time to return to the ground station) of the Alsat-1B LEO satellite is about 98 min and its maximum visibility time (the contact time between the satellite and the ground station) is around 11 min, the proposed OCDS must accordingly be capable of processing the raw images into a change detection map within 87 min (98 − 11) before the satellite reaches the ground station.
As the flight test hardware was not available, the proposed change detection method was implemented in MATLAB, and the Dhrystone 2.1 benchmark was used to reproduce the computation performance of the Alsat-1B hardware. The test image set is composed of a pair of bitemporal Alsat-1B MS images covering the great Sebkha (salt lake) of Oran, Algeria. Each MS image contains four bands (blue, green, red, and NIR) of 7064 × 6144 pixels each. Fig. 7 shows RGB images of the Alsat-1B bitemporal dataset and the resulting CM. The detected changes in the salt lake between two seasons, summer (dry) and autumn (wet), are related to water level and salt concentration. Table II shows the computation performance results for the change detection experiment.
The estimated total processing time (including the time for the coregistration process) for a pair of Alsat-1B MS images of maximal size is Processing Time = 12342 s / 60 = 205.7 min.
As can be seen, the obtained estimation of 205.7 min for the processing time is higher than the maximum available
processing time of 87 min, as explained above. However, the target platform of the future Algerian EO satellite will be equipped with an HSDR unit based on a quad-core FPGA, which means that four processors will be available. Consequently, if all four processors are used in parallel, the processing time for an image of maximal size will be approximately 51.4 min, which is below the available 87-min window. These results demonstrate the feasibility of onboarding the proposed method on the future Algerian EO satellite.
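The arithmetic behind this estimate can be checked directly, assuming an ideal fourfold speedup on the quad-core HSDR (a best-case assumption that ignores interprocessor communication overhead):

```python
# Single-core estimate from the Dhrystone-based evaluation
single_core_s = 12342            # total processing time, seconds
cores = 4                        # quad-core HSDR, assumed fully parallel

single_core_min = single_core_s / 60    # 205.7 min
parallel_min = single_core_min / cores  # ~51.4 min

budget_min = 98 - 11                    # available window, 87 min
print(f"single core: {single_core_min:.1f} min")
print(f"{cores} cores (ideal speedup): {parallel_min:.1f} min")
print(f"fits budget: {parallel_min <= budget_min}")
```

In practice, the achieved speedup will be somewhat below 4x, so the roughly 36-min margin over the 87-min window is what absorbs the parallelization overhead.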
Another way to speed up the processing relates to the CNN model architecture. One direction to explore in this context is neural network pruning, which consists of discarding some weights, or entire convolution filters together with their corresponding feature maps, in order to extract a functional subnetwork with lower computational complexity and comparable accuracy.
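As a rough illustration of filter-level pruning (a generic sketch, not the specific procedure adopted in this work), the snippet below ranks the filters of a convolution layer by their L1 norm and keeps only the strongest ones; the layer shape and keep ratio are arbitrary choices for the example.

```python
import numpy as np

def prune_conv_filters(weights, keep_ratio=0.5):
    """Rank conv filters by L1 norm and keep the strongest ones.

    weights: array of shape (out_channels, in_channels, kH, kW).
    Returns the pruned weights and the indices of the kept filters,
    so the next layer's input channels can be trimmed to match.
    """
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(weights.shape[0] * keep_ratio)))
    keep = np.sort(np.argsort(norms)[-n_keep:])  # strongest filters, in order
    return weights[keep], keep

# Hypothetical layer: 8 filters, 4 input channels, 3x3 kernels
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4, 3, 3))
w_pruned, kept = prune_conv_filters(w, keep_ratio=0.5)
print(w_pruned.shape)  # half the filters discarded
```

After such structured pruning, the network is typically fine-tuned for a few epochs to recover the small accuracy loss before deployment.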

V. CONCLUSION
In this article, we aimed to design an automatic OCDS that can run on existing flight hardware. The target platform on which the proposed OCDS is intended to be integrated is that of the upcoming Algerian EO mission, which will succeed the Alsat-1B mission. First, an algorithmic solution for change detection based on DL that fulfills space environment-induced constraints was presented. Second, a preliminary hardware architecture of the proposed OCDS was designed based on the data processing hardware previously flown on the Alsat-1B satellite.
Experimental results using real satellite images have demonstrated the efficiency of the proposed change detection algorithm.
The computation performance of the proposed OCDS has been estimated and compared to the computation capabilities of the hardware flown on the Alsat-1B platform. The proposed algorithmic approach for change detection was implemented using Matlab code, and the well-known computing benchmark program Dhrystone 2.1 was used to reproduce Alsat-1B's hardware computation performance as the flight hardware was not available.
The experimental results have shown that the proposed change detection system is well suited for onboarding on the future Algerian EO satellite, provided that four processors are used in parallel. Fortunately, the target platform of the future Algerian EO satellite will be equipped with an HSDR unit based on a quad-core FPGA.
The next phase of this work will focus on the hardware implementation of the proposed change detection framework on the FPGA circuit of the HSDR. This is the most challenging phase, as it involves a hardware implementation on a quad-core FPGA. The FPGA software is intended to run on four processors and should be able to perform operations up to four times as fast as on a single processor. Consequently, the choice of an adequate interprocessor communication architecture is the first challenge. Software debugging and analysis will be the second challenge, since execution cannot be directly observed when running a real-time hardware implementation.