Building a Secure Platform for Digital Governance Interoperability and Data Exchange Using Blockchain and Deep Learning-Based Frameworks

A secure platform is a critical component of digital governance, as it helps to ensure the privacy, security, and reliability of the electronic platforms and systems used to manage and deliver public services. Interoperability and data exchange are essential for digital governance, as they enable different government agencies and departments to share data, information, and resources seamlessly, regardless of the platforms and technologies they use. In this paper, we build a secure platform to enhance the trustworthiness of digital governance interoperability and data exchange using blockchain and deep learning-based frameworks. Initially, an optimal blockchain leveraging approach is designed using the bonobo optimization algorithm to authenticate data generated from smart city environments. Furthermore, we introduce the integration of a lightweight Feistel structure with optimal operations to enhance privacy preservation. This integration provides two levels of security and ensures interoperability and double-secured data exchange in digital governance systems. In addition, we utilize a deep reinforcement learning (DRL) model to detect and prevent intrusions such as fraud/corruption in the smart city data. This approach enhances transparency and accountability in accessing the data and demonstrates superior performance over other cutting-edge techniques on two benchmark datasets, BoT-IoT and ToN-IoT. Furthermore, the effectiveness of the framework in real-time scenarios has been demonstrated through two case studies. Overall, our proposed framework provides a trustworthy platform for digital governance, interoperability, and data exchange, addressing the challenges of privacy, security, and reliability in managing and delivering public services.

decision-making processes [3]. It also involves ensuring the privacy, security, and reliability of digital platforms and systems, as well as promoting equitable access to digital technologies and services. A secure platform is crucial for digital governance as it ensures the privacy, security, and reliability of electronic platforms and systems used to manage and deliver public services. Digital governance involves the use of technology to improve the efficiency and effectiveness of public services, which requires the collection, processing, and storage of sensitive information. Without proper security measures, such as encryption, access control, and authentication, this information can be vulnerable to unauthorized access, manipulation, and theft [4]. A secure platform ensures that citizens can trust and rely on digital services provided by the government, thereby promoting their adoption and utilization. It also enables various government agencies and departments to share data, information, and resources with each other seamlessly, reducing administrative burden and improving the quality of services provided to citizens [5].
Digital governance interoperability [6] refers to the ability of different digital systems and applications used in public administration to communicate, exchange data, and use information in a coordinated and efficient manner. Interoperability enables various government agencies and departments to share and exchange data, information, and resources with each other seamlessly, regardless of the platforms and technologies they are using. Data exchange, on the other hand, refers to the process of transferring data between different digital systems and applications, either manually or automatically. In digital governance, data exchange enables different government agencies and departments to collaborate, share data, and provide services to citizens more efficiently and effectively [7]. By exchanging data, different agencies and departments can avoid duplication of efforts, reduce administrative burden, and improve the quality of services they provide to citizens. Blockchain technology [8], [9] has several potential roles in digital governance. One key role is in providing secure and transparent data storage and sharing capabilities, which is essential for building trust in government systems and ensuring the integrity of public data. Blockchain can be used to create a decentralized, tamperproof database of government records and transactions, enabling secure and transparent sharing of data between government agencies and departments. Blockchain can also facilitate secure and transparent voting systems, which is critical for ensuring the integrity of elections and democratic processes. By using blockchain, governments can create a transparent and tamper-proof voting system that enables citizens to vote securely and anonymously [10]. Utilizing blockchain technology can also reduce the risk of election fraud and hacking, while increasing voter confidence in the electoral process.
In addition to these roles, blockchain can also facilitate secure and transparent financial transactions, which is essential for government funding and public services [11], [12], [13].
Waste management [14] is an important aspect of digital governance, as it helps to optimize waste collection, processing, and disposal through the use of advanced technologies such as sensors, internet of things (IoT), and artificial intelligence (AI) [15], [16]. However, the success of such initiatives depends on the ability of different government agencies and departments to collaborate and share data in a seamless and secure manner. This requires a secure platform for digital governance interoperability and data exchange, which can ensure the privacy, security, and reliability of the electronic platforms and systems used to manage and deliver waste management services [17]. In recent years, there has been an increasing trend towards using electronic voting systems [18] to facilitate the electoral process. However, there are concerns around the security and reliability of these systems, particularly in terms of protecting the integrity of the voting data and ensuring the accuracy of the results [19]. Building a secure platform for digital governance interoperability and data exchange can therefore play a crucial role in enhancing the transparency and trustworthiness of waste management and voting systems [20].
Our contributions. We propose a secure platform framework based on blockchain and deep learning to increase the trustworthiness of digital governance interoperability and data exchange. Below are the primary contributions of our proposed framework: • We designed an optimal blockchain leveraging approach using the bonobo optimization algorithm to authenticate data generated from smart city environments. This approach enhances the security and reliability of digital governance, interoperability, and data exchange.
• We integrated a lightweight Feistel structure with optimal operations to enhance privacy preservation in the digital governance system. This integration provides two levels of security, ensuring interoperability and double-secured data exchange.
• We utilized a deep reinforcement learning (DRL) model to detect and prevent intrusions such as fraud and corruption in the smart city data. This approach enhances transparency and accountability in accessing the data, contributing to a more secure digital governance system.
• We verified that our proposed framework is superior to other cutting-edge methods in terms of various measures using the BoT-IoT and ToN-IoT datasets.
• Through two case studies in waste management and electronic voting systems, we demonstrated the potential of our framework in enhancing the security and reliability of digital governance, contributing to more trustworthy and democratic governance. The remainder of this paper is organized as follows. Section II provides a summary of recent works related to digital governance interoperability, data exchange, and blockchain. Section III describes the system architecture and methodology of the proposed framework. Section IV discusses the proposed framework's in-depth operational procedure and the relevant mathematical model. Section V presents case studies and a comparison of the simulation results. Section VI concludes the paper.

II. RELATED WORKS
In this section, we will discuss the fundamental concepts and recent advancements associated with interoperability, data exchange, blockchain, and deep learning-based anomaly detection in the context of digital governance systems. An overview of research gaps collected from state-of-the-art papers is presented in Table 1.
Ren et al. [21] proposed a blockchain-based data-sharing mechanism and provided implementation suggestions and technical key points. By enabling data sharing, guaranteeing data quality, and protecting intellectual property, their system addresses the issues of complex quality control, high costs, and unverified data rights that are associated with traditional data sharing methods. The authors developed a demonstration data-sharing platform based on the implementation of blockchain and blockchain-based data-sharing technology. National investment in mineral exploration will be significantly reduced as a result of this move, as will its impact on the natural environment. Singh et al. [22] presented a system that makes use of blockchain technology to automate the check payment process, which includes creating, processing, and paying for checks both online and in person. This framework connects all of the various banks to a common platform that allows electronic checks issued by one bank to be presented to any other bank using any mode of operation. Their consensus method was found to be more effective than the existing proof-of-work method, reducing consensus time by 25% and partially transforming the current banking system by implementing it on the blockchain.
Alketbi et al. [23] proposed a blockchain model showing that, for governments providing government services, it is possible to create a government-led blockchain ecosystem. Their analysis of a permissioned blockchain platform and a blockchain housing rental use case developed by the Dubai Government serve as the foundation for the model. Model results include the blockchain's governance structure, the definition of stakeholders and their roles, and the design of a network architecture that describes deployment options and components. This study investigates several blockchain use cases through proofs-of-concept used by governments. The features of the Hyperledger Fabric design show how important the platform is for government services and use cases.
Kumar et al. [24] have proposed an e-governance system that ensures safety and confidentiality in the public domain by utilizing blockchain technology. The study shows that the immutability, cryptography, and decentralized administration of the system provide the required authenticity and confidentiality. Furthermore, it addresses the issues related to interoperability among administration divisions that are a limitation of existing systems. The next step is to implement such systems and explore their full potential in a real-world scenario. Schulz et al. [25] have explored the potential for blockchain and distributed ledger technology (DLT) to support sustainable development initiatives. The study focuses on how public administration and governance can adapt to DLT in the coming years through innovative climate finance, social service delivery, and digital identities. According to the authors, DLT solutions ought to be examined not only from a technical but also a social point of view, taking into account the impact of the new technologies on society. While cross-border operations and stakeholder engagement are enhanced by DLT, it is important to ensure that the development of DLT aligns with sustainable development goals.
Kassen [26] explored how a variety of cutting-edge technology platforms enhance both e-participation and civic participation. Various independent e-participatory initiatives were analyzed throughout the research, and the primary drivers and obstacles that prevent citizens from engaging are identified from the viewpoints of advocates for participatory government, specialists in open government, and software developers. Beyond traditional government-run platforms, the study highlights the need for more decentralized platforms for citizen engagement. Zarpala and Casino [27] have introduced a classification system for existing financial investigation methods and have proposed comprehensive blockchain-based digital forensic techniques. They have also suggested a standardized forensic investigation framework and documented the corresponding methodology for investigating embezzlement schemes. This approach is not limited to embezzlement schemes, however, and can be applied to a wide range of fraud investigations and general internal audits. Likewise, they implemented a practical Ethereum framework, as well as demonstrated standard court procedures and custody security techniques for the proposed approach.
Soner et al. [28] have proposed a framework that automates the maintenance of registry papers and facilitates loan clearance operations while ensuring transparency in e-governance. Unlike the traditional manual or online registry record-keeping processes, this framework is powered by the blockchain network. It employs a multi-level authentication scheme to ensure security and quick verification for all stakeholders. Additionally, smart contracts are applied to automate the loan process in accordance with government norms and buyer and seller agreements. The framework is evaluated in terms of transparency, accountability, and consistent data across the different government departments. Miyachi and Mackey [29] have proposed a framework called hybrid off-chain blockchain system (hOCBS), which offers a secure and scalable way to integrate off-chain storage with blockchain technology while maintaining privacy and focusing on patients' needs. The hOCBS framework is particularly relevant for healthcare, where a privacy-focused infrastructure could help break down data silos and put more control in the hands of patients. The main advantages of this framework include the ability to share and process healthcare data more easily, reduce storage requirements for health blockchain systems, and implement privacy-preserving mechanisms, such as anonymity and dynamic consent management. Dursun et al. [30] have proposed a model of on-chain governance that combines decentralized identity technologies with policy-based governance. The policy-based management concept is particularly suitable for blockchain ecosystems with stakeholders of varying knowledge levels, as it allows for the control of the system's behavior through easily understandable higher-level expressions.
This model is simpler and easier to use than other on-chain governance models, and it is more effective than models that do not use on-chain governance. Additionally, a number of issues related to the decision-making process in blockchain governance may be resolved by the emerging concept of decentralized identity. It ensures well-directed governance and increases decentralization, security, and justice.

A. PROBLEM DEFINITION
As societies become more digitalized, there is an increasing need for secure platforms for digital governance. This is because digital governance involves handling sensitive data and critical operations that must be protected against cyber-attacks and other security threats. Furthermore, the need for transparency, accountability, and fairness in the digital governance process requires a secure and trustworthy platform that can ensure the integrity of data and the decisions made based on that data. A secure platform for digital governance can help ensure that citizens' privacy is protected, that data is secure, and that public institutions can operate efficiently and effectively. Without a secure platform for digital governance, there is a risk of data breaches, corruption, and the erosion of trust in public institutions. Kumar et al. [31] have proposed the Trustworthy Privacy-Preserving Secured Framework (TP2SF) for smart cities, which consists of three modules: trust, privacy, and intrusion detection. The trust module is comprised of an address-based blockchain reputation system. To stop inference and poisoning attacks, the privacy module uses advanced proof-of-work and principal component analysis.
An optimized gradient tree boosting system is deployed in the intrusion detection module. The TP2SF framework is powered by CloudBlock and FogBlock, a blockchain-IPFS integrated fog-cloud infrastructure. The system is tested on the ToN-IoT and BoT-IoT datasets. The experimental results show that TP2SF outperforms current cutting-edge techniques in terms of accuracy, detection rate, precision, and F1 score when tested on original and transformed datasets such as ToN-IoT and BoT-IoT. Interoperability and data exchange in digital governance present significant security issues. When a large amount of sensitive information is involved, data breaches and the possibility of unauthorized access are among the main concerns as systems communicate with each other. In addition, data exchange between systems with different security levels can also create vulnerabilities. Ensuring secure data exchange and interoperability requires robust authentication and encryption mechanisms, as well as secure data storage and transmission protocols. Any weaknesses in these security measures can compromise the integrity of the data and threaten the confidentiality of citizens' personal information [21], [22]. Therefore, it is crucial to prioritize security in the design and implementation of digital governance systems to ensure safe and effective data exchange and interoperability [31].
Different systems may use different formats and protocols for exchanging data, making it difficult to integrate and exchange information. The exchange of sensitive information between different systems can pose a risk to data privacy and security [22], [23]. It is essential to ensure that the data is protected during exchange and is only accessible to authorized personnel. Integrating various systems and applications can be challenging, especially when dealing with legacy systems that were not designed to work together [24]. Many organizations may not have the necessary technical expertise or resources to develop and maintain interoperability and data exchange solutions. Compliance with legal and regulatory requirements can add additional complexity and challenges to the interoperability and data exchange process [25]. Addressing these challenges requires an optimal approach that includes developing common standards, ensuring data privacy and security, simplifying integration processes, and providing the necessary technical resources and expertise [26]. Digital governance systems often involve sensitive data and critical infrastructure, making them an attractive target for cyber-attacks. Intrusion detection systems monitor network traffic and system activity for signs of unauthorized access, attacks, or abnormal behavior [27]. By detecting and alerting administrators to potential security incidents, intrusion detection systems can help prevent data breaches, system downtime, and other negative consequences [28], [29]. They can also provide valuable insights into the types of attacks and vulnerabilities that need to be addressed in order to improve the security of the digital governance system [30], [31]. The research objectives for solving problems related to interoperability and data exchange in digital governance include the following.
• Designing interoperable systems that can communicate with each other seamlessly.
• Developing mechanisms for ensuring data privacy and confidentiality during exchange and storage.
• Developing methods for reducing false positives and improving the accuracy of intrusion detection.
• Developing standardized protocols for data exchange between different systems and platforms.
• Implementing secure authentication and authorization mechanisms to ensure only authorized access to data.

B. SYSTEM ARCHITECTURE
The system architecture of the proposed framework, illustrated in Fig. 1, is designed to operate on both the fog and cloud sides, thereby overcoming the drawbacks of using standalone architectures and collaborative networks. Additionally, we have incorporated the advantages of blockchain technology to fully integrate it with deep learning on both the fog and cloud sides. Our developed framework is composed of three primary modules: blockchain-based data collection, lightweight privacy preservation, and fraud/corruption detection. When an IoT device generates a service request, network traffic is routed through a nearby gateway or router to the FogBlock at the smart city device level. The FogBlock system extracts relevant features from the incoming traffic by means of a sensor. Our system uses an address-based reputation system for the blockchain that employs the bonobo optimization algorithm to calculate a reputation score, guaranteeing the authenticity of the data source. A privacy-preserving module receives the trusted information and raw data and uses blockchain to create a hash-proof digest of the message. This message digest is distributed over the blockchain network. This method defends against inference attacks that systematic deep learning could mount by examining a chain of data records. The next step is the implementation of a two-level privacy module that transforms the original data into another format.
Additionally, a DRL model is employed to classify data into normal and different attack types, and the administrator is alerted in the event of an anomalous class. Finally, if the necessary information is available on the fog side, it is made available for normal transactions; otherwise, the request is forwarded to CloudBlock, which receives the secured data. CloudBlock consists of various data centers provided by different vendors that form a blockchain network, and the blockchain is executed in every data center. By making the network immutable, testable, and verifiable, this strategy builds trust between end users. The mechanism alerts the administrator in the event of an attack when CloudBlock receives a request for information.
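The request-routing flow described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the reputation threshold, the feature format, and the `classify_traffic` stub standing in for the DRL model are all assumptions introduced here for clarity.

```python
# Illustrative sketch of the fog/cloud request flow (hypothetical names).
REPUTATION_THRESHOLD = 0.5  # assumed cut-off for trusting a data source

def classify_traffic(features):
    """Stub standing in for the DRL model: returns 'normal' or an attack label."""
    return "normal" if sum(features) < 10 else "ddos"

def handle_request(features, reputation_score, available_on_fog):
    # 1) Blockchain-based reputation check authenticates the data source.
    if reputation_score < REPUTATION_THRESHOLD:
        return "rejected: untrusted source"
    # 2) The (stubbed) DRL classifier flags anomalous traffic to the admin.
    label = classify_traffic(features)
    if label != "normal":
        return f"alert administrator: {label}"
    # 3) Serve from the fog layer when possible; otherwise escalate to CloudBlock.
    return "served by FogBlock" if available_on_fog else "forwarded to CloudBlock"

print(handle_request([1, 2, 3], 0.9, True))
```

A trusted, normal request with the data available on the fog side is served locally; untrusted or anomalous requests short-circuit before reaching the cloud layer.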

C. PROPOSED METHODOLOGY
In this section, we describe the proposed methodology for the secure platform that has been developed to enhance the trustworthiness of digital governance interoperability and data exchange. The proposed methodology includes an optimal blockchain leveraging approach for authenticating data generated from smart city environments, a lightweight Feistel structure for enhancing privacy preservation, and a DRL model for detecting and preventing intrusions such as fraud/corruption in smart city data. The integration of these components ensures interoperability and double-secured data exchange in digital governance systems. Overall, our methodology aims to provide a secure and trustworthy platform for digital governance.

1) BLOCKCHAIN-BASED DATA COLLECTION
Blockchain-based privacy has been added to the peer-to-peer blockchain protocol, which ensures encrypted data transmission and secure network nodes [31]. These validation messages form a chain of records, or blocks, that are stored in each participating cloud or fog node. This guarantees the legitimacy of the exchange and ensures that no information can be removed or altered in the record, i.e., it promotes decentralization and immutability.
Assume that a data set (Dset) is made up of a list of records ''R1, R2, . . ., RN,'' where ''N'' denotes the number of records in the data set; this will serve as an illustration of the proposed approach. The integrity of the raw data is safeguarded by the secure hash function (SHA), which maps each record to a corresponding message digest. This SHA generates a one-way cryptographic hash, also known as an output fingerprint, with a fixed length and a unique structure of bits. Because it is collision-resistant and cannot be broken by brute-force attacks during real-time processing, the privacy-preserving method uses SHA-512 [31]. Since a one-way cryptographic hash prevents tampering due to the avalanche effect, blocks are created using the message digest. Consequently, this process maintains the trustworthiness of IoT data. The network-generated blocks contain information such as the block index, previous hash, timestamp, reputation score, and current block hash. A detailed description of each piece of information included in the blockchain model follows.
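The hashing step described above can be illustrated with Python's standard `hashlib`; the example records below are hypothetical placeholders for R1..RN.

```python
import hashlib

# Map each record R1..RN of the data set to a fixed-length SHA-512 digest.
records = ["R1:sensor=42", "R2:sensor=17", "R3:sensor=99"]  # hypothetical records

digests = [hashlib.sha512(r.encode("utf-8")).hexdigest() for r in records]

# SHA-512 always yields a 512-bit (128 hex character) fingerprint,
# regardless of the input length.
assert all(len(d) == 128 for d in digests)

# Avalanche effect: a tiny change in the input yields a completely
# different digest, which makes tampering detectable.
assert (hashlib.sha512(b"R1:sensor=42").hexdigest()
        != hashlib.sha512(b"R1:sensor=43").hexdigest())
```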
• Block index is a unique identifier for each block in the blockchain. It helps to keep track of the order of the blocks and ensures that the blockchain is maintained in a sequential and chronological manner.
• Previous hash is the hash of the previous block in the blockchain. It provides a link between the current block and the previous block, which ensures the immutability and integrity of the blockchain.
• Timestamp is the time at which the block is added to the blockchain. It helps to maintain the chronological order of the blocks and provides a reference point for verifying the authenticity of the transactions.
• Reputation score is a measure of the trustworthiness of the data source. It is calculated using the bonobo optimization algorithm and is used to verify the authenticity and reliability of the data being added to the blockchain.
• The hash of the current block in the blockchain is called the current block hash. It is created using the hash of the previous block, the block index, the timestamp, and the reputation score. The blockchain's immutability and integrity are guaranteed by the current block hash, which also provides a safe way to verify the authenticity of the data being added to the blockchain. A block inside a blockchain is maintained using the hash of the previous block, as shown in Fig. 2. Because any change in a block of data triggers an avalanche effect that the smart city network can easily check in real time, this technique significantly enhances verification. During the process of creating new blocks, the integrity of the hash chain is examined using the bonobo optimization algorithm, an optimal blockchain leveraging method. The algorithm has shown promising results in various optimization problems, including engineering, finance, and telecommunications. It is particularly useful for solving problems that are complex and have a large number of variables. Additionally, the bonobo optimization algorithm with the split-and-fusion operation is used to select new solutions, or bonobos, to mate before each update. The group size is determined randomly, and the group gathers information for several days, after which its members rejoin the natives.
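A minimal sketch of how blocks with the five fields listed above could be formed and chained follows. The field names and JSON serialization are illustrative choices made here, not the authors' implementation.

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash over index, previous hash, timestamp, and reputation score,
    # mirroring the fields listed in the text.
    payload = json.dumps(
        {k: block[k] for k in ("index", "prev_hash", "timestamp", "reputation")},
        sort_keys=True)
    return hashlib.sha512(payload.encode()).hexdigest()

def make_block(index, prev_hash, reputation, timestamp=None):
    block = {"index": index, "prev_hash": prev_hash,
             "timestamp": timestamp or time.time(), "reputation": reputation}
    block["hash"] = block_hash(block)
    return block

genesis = make_block(0, "0" * 128, reputation=1.0)
second = make_block(1, genesis["hash"], reputation=0.87)

# Chain verification: each block links to its predecessor's hash, and
# tampering with any field invalidates the stored hash (avalanche effect).
assert second["prev_hash"] == genesis["hash"]
assert block_hash(second) == second["hash"]
```

Changing any field, such as the reputation score, changes the recomputed hash, so every participating fog or cloud node can detect tampering in real time.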
Here, the mating strategy for generating bonobos is determined by specifying a parameter called the fading probability. The initial value of this parameter is 0.5, and it changes at the end of each iteration. Random numbers generated in the range of 0 and 1 are then compared against this parameter and against the other bonobos as follows.
If the random number generated for a bonobo is greater than or equal to the fading probability, the offspring is generated by moving toward the alpha bonobo using another random number. In the other scenario, if the random number is less than or equal to the directional probability value, the current value is updated according to a random movement. This search process uses directional probabilities to guide the search toward promising areas across the search space.
where the upper and lower bound values of the variables and the interval parameters can be used to calculate these values. The total number of decision variables is chosen randomly, and the corresponding random values range from 0.0 to 1.0.
where the first two terms are random numbers and the last term is an inverse probability. In conclusion, in the positive phase the bonobo has a high probability of moving towards the alpha bonobo, while in the negative phase the probability of a random movement direction is high. However, depending on the algorithm and the behavior of the objective function, the value of this phase probability varies from iteration to iteration.
where FXC_initial represents the fitness function for the cluster formation. Algorithm 2 describes the steps involved in the process of reputation score computation using bonobo optimization.
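The positive-phase/negative-phase search behavior described above can be sketched in heavily simplified form. This is not the published bonobo optimizer: the greedy acceptance rule, the adaptation of the phase probability, and the sphere test function are assumptions made here for illustration only.

```python
import random

def optimize(fitness, lower, upper, dim=2, pop=20, iters=200, seed=1):
    """Simplified bonobo-style search (illustrative, not the full algorithm)."""
    rng = random.Random(seed)
    phase_prob = 0.5  # initial directional probability, as stated in the text
    X = [[rng.uniform(lower, upper) for _ in range(dim)] for _ in range(pop)]
    alpha = min(X, key=fitness)  # best solution so far: the "alpha bonobo"
    for _ in range(iters):
        for i, x in enumerate(X):
            if rng.random() < phase_prob:
                # Positive phase: move toward the alpha bonobo.
                cand = [xi + rng.random() * (ai - xi) for xi, ai in zip(x, alpha)]
            else:
                # Negative phase: random movement, clipped to the bounds.
                cand = [min(max(xi + rng.uniform(-1, 1), lower), upper)
                        for xi in x]
            if fitness(cand) < fitness(x):  # assumed greedy acceptance
                X[i] = cand
        alpha = min(X + [alpha], key=fitness)
        phase_prob = 0.5 + 0.4 * rng.random()  # assumed per-iteration adaptation
    return alpha

# Sphere function: minimum at the origin; the sketch should land close to it.
best = optimize(lambda v: sum(t * t for t in v), -5.0, 5.0)
```

In the positive phase the candidate is a convex combination with the alpha bonobo, so it stays within bounds; in the negative phase the random step is explicitly clipped.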

2) LIGHTWEIGHT PRIVACY PRESERVATION
The two-level security system is an approach to enhance the security of digital data. The system uses a lightweight Feistel structure, which fundamentally safeguards against malicious attacks by converting raw data into another transformed format. This approach provides two levels of security to ensure double-secured data exchange in digital governance systems. The first level of security is provided by the message digest with proof of hash generated using blockchain technology, while the second level is provided by the lightweight Feistel structure. This structure is a type of cryptographic algorithm that converts the original information into another transformed format, making it challenging for attackers to exploit the original data. The Feistel structure is a widely used technique in cryptography due to its simplicity, effectiveness, and speed. In the proposed system, the Feistel structure is integrated with optimal operations to provide two levels of security, making the data exchange more secure and ensuring interoperability in digital governance systems.
In the lightweight Feistel structure, the key generation function (GEN k) is a crucial part of the encryption and decryption process. For this purpose, we employed a Feistel architecture-based encryption that operates in multiple rounds, each requiring a unique key. For each encryption phase, this procedure generates five distinct keys that are used for encryption and decryption. The 64-bit input is divided into four 16-bit segments to begin the encryption process. Each block's 16-bit data is produced by applying the GEN k function to each segment. After that, the original blocks or segments are replaced with the 16-bit data obtained after processing GEN k. The GEN k function is applied similarly to each segment, and we obtain 16-bit data for each function. We ensure the privacy of sensitive information and enhance the security of the exchanged data by encrypting it before storing it on the cloud.
In order to obtain the first four round keys, a variable i ranging from 1 to 4 is utilized. These values Jp_i, i ∈ {1, 2, 3, 4}, are then processed to derive Jx_i for each 16-bit block. The mathematical formula for obtaining jW_S is as follows.
The schematic diagram shown in the document highlights two crucial functions, namely the linear (PF) and non-linear (QF) functions. The output is rearranged in the form of a matrix following the generation of the GEN k function for each 16-bit block. The resampling of the data into a matrix format allows for efficient storage of the data and enables quick retrieval of the data during the decryption process.
After obtaining the key matrix, the next step is to retrieve the keys for each 16-bit block. The key matrix is reorganized into four different collections, each consisting of 16 bits. This process involves the concatenation of the four segments using the concatenation operator *.
After obtaining the four distinct keys for each 16-bit block, we further derive the key values through the XOR logical operation, performed among the four distinct keys obtained for a particular 16-bit block. The resulting key values are then used for the subsequent round of encryption or decryption. This step ensures the generation of unique and unpredictable keys for each round, which enhances the security and confidentiality of the data. Overall, the XOR operation provides a robust and reliable method for generating secure keys in our encryption and decryption process.
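The XOR combination of the four sub-keys can be sketched as follows (a minimal sketch; the sub-key values are hypothetical placeholders, not the actual GEN_k outputs):

```python
# Combine four 16-bit sub-keys into one round key via XOR.
# Sub-key values below are hypothetical placeholders.
def combine_subkeys(k1: int, k2: int, k3: int, k4: int) -> int:
    """XOR four 16-bit sub-keys; the result stays within 16 bits."""
    return (k1 ^ k2 ^ k3 ^ k4) & 0xFFFF

round_key = combine_subkeys(0x1A2B, 0x3C4D, 0x5E6F, 0x7081)

# XOR is self-inverting: combining the round key with any three of the
# original sub-keys recovers the fourth, which is what makes the
# reverse-order decryption rounds work.
assert combine_subkeys(round_key, 0x1A2B, 0x3C4D, 0x5E6F) == 0x7081
```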
To ensure the security of the exchanged data, our proposed methodology applies various logical functions during the encryption process, such as shifting, swapping, and substitution. This process is shown in Fig. 3. Initially, the 64-bit plaintext is divided into four chunks of 16 bits each (ma_{0-15}, ma_{16-31}, ma_{32-47}, and ma_{48-63}). The proposed model shuffles the bits in each round to reorder them and reduce the data overhead. Then, the generated keys are used for encryption by performing procedures such as swapping and replacement. This process ensures the confidentiality and integrity of the data.
To complete the encryption process, a transformation is carried out in which each subsequent round is transformed in a specific manner. For example, in the first round, ro_{11} is set to ma_{16-31}, while ro_{12} becomes ma_{0-15}, and so on through all remaining steps. The final round combines the output to form the ciphertext used for further communication:

Cipher = ro_{51} ∗ ro_{52} ∗ ro_{53} ∗ ro_{54} (23)

Through this transformation, the final ciphertext is generated with the help of all the previously generated keys and processed data, ensuring that the original data is encrypted and can only be accessed using the appropriate decryption key. It is important to note that the Feistel structure used in this proposed method supports decryption as well as encryption; however, the operational sequence is the reverse of the encryption process. Decryption uses the same keys in reverse order to obtain the original plaintext: the ciphertext is split into four blocks of 16 bits each, and the same logical functions are applied in reverse order. The final output is the original input data before encryption.
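The round structure and its reverse-order decryption can be illustrated with a generic Feistel network (a minimal sketch; the round function and key values here are hypothetical placeholders, not the paper's GEN_k design):

```python
def round_fn(half: int, key: int) -> int:
    """Placeholder round function: a 16-bit rotation mixed with the round key."""
    rotated = ((half << 3) | (half >> 13)) & 0xFFFF  # rotate left by 3
    return (rotated + key) & 0xFFFF

def feistel_encrypt(block: int, keys) -> int:
    """Encrypt a 32-bit block split into two 16-bit halves."""
    left, right = block >> 16, block & 0xFFFF
    for k in keys:
        left, right = right, left ^ round_fn(right, k)  # swap and mix
    return (left << 16) | right

def feistel_decrypt(block: int, keys) -> int:
    """Decryption applies the same keys in reverse order."""
    left, right = block >> 16, block & 0xFFFF
    for k in reversed(keys):
        left, right = right ^ round_fn(left, k), left
    return (left << 16) | right

keys = [0x0F1E, 0x2D3C, 0x4B5A, 0x6978, 0x8796]  # five hypothetical round keys
plaintext = 0xDEADBEEF
assert feistel_decrypt(feistel_encrypt(plaintext, keys), keys) == plaintext
```

The defining property of a Feistel network is visible here: decryption reuses the exact same round function, so only the key order needs to be reversed.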

3) FRAUD/CORRUPTION DETECTION
Fraud/corruption detection in digital governance systems refers to the process of identifying and preventing malicious activities or attempts to manipulate or compromise data within the system. Here, fraud/corruption detection is performed using a deep reinforcement learning (DRL) model. The DRL model is trained to categorize data into normal and anomalous classes based on their features. When an anomalous data record is detected, the system alerts the administrator for further action. DRL helps to improve the accuracy of intrusion detection in the system and prevent potentially fraudulent activities. A DRL model typically consists of the following layers:
• The input layer receives the state information of the environment or system that the agent is interacting with.
• The hidden layers process the information received from the input layer and perform complex computations using various activation functions.
• The output layer produces an action, or a sequence of actions based on the information processed by the hidden layers.
• The reward signal is used to evaluate the performance of the agent and provide feedback for reinforcement learning.
The working process of DRL involves the agent interacting with the environment, receiving feedback in the form of rewards, and using this feedback to adjust its behavior through trial and error. The agent learns to take actions that maximize the reward signal over time, leading to optimal behavior in the given environment or system. The deep neural network is trained using backpropagation to minimize the difference between predicted and actual reward signals and improve the agent's performance. First, we compute the initial fitness value for a given input as follows.
where Z_w and a_w are the weight matrix and bias, respectively, of the update gate.
where Z_r and a_r are the weight matrix and bias of the reset gate, respectively. The sigmoid function constrains the reset gate's output toward 0 to discard the previous moment's hidden-layer information; Z_g and a_g are the weight matrix and bias of the candidate output state, respectively, and tanh maps the data to the range (−1, 1). The final state of the hidden layer is produced by the DRL output layer, which is defined as follows.
The larger z_s is, the higher the confidence, and h_{t−1} has little effect on the output. The current node's data, h_t, represents the selected memory of the previous hidden state. The forward hyb-DRL layer stores the current moment t and the previous moments of the input sequence, while the following moments are stored in the backward hyb-DRL layer. The hidden layers in the hyb-DRL sequence constitute the propagation process.
VOLUME 11, 2023
where g_s and ĝ_s specify the hidden layer's forward and backward estimates, respectively. Both the forward and backward calculations take as input the current input and the previous state of the hidden layer, and the corresponding terms denote the forward and backward computation errors, respectively. The state of the i-th qubit neuron in the m-th set, based on these layers, is defined as follows.
In the (m−1)-th set, the parameter of the controlled-NOT gate is flipped together with the input of the k-th neuron. For the cyclic gate phase parameters and threshold parameters, this is the optimal phase. The output layer of the network is denoted by the observed state m from the j-th neuron of the output layer, represented as follows. The quantum neural network is trained over multiple layers to search for the best parameters, which minimizes the cost function as follows, where U_D^s is the direction signal to the i-th neuron in the q-th design.
New cohort members are produced through a real-coded crossover involving multiple parents.
In the generation-gap model used, children replace their parents in every generation. The fitness function is defined as the reciprocal of the cost function, where j is the number of individuals.
where x is the optimal value of the output layer, U is the utility function, and K is the model number described by the dynamic control variable. Algorithm 2 describes the working steps of intrusion detection using the DRL model.
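As an illustration of the trial-and-error workflow behind Algorithm 2, the sketch below trains a tabular stand-in for the DRL agent on a reward signal and then flags anomalous records (a minimal sketch; the states, actions, reward values, and the one-step episode simplification are hypothetical, not the paper's trained model):

```python
import random

# States are traffic classes, actions are the agent's responses.
states, actions = ["normal", "anomalous"], ["allow", "alert"]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha = 0.1  # learning rate

def reward(state, action):
    """Hypothetical reward: +1 for the correct response, -1 otherwise."""
    return 1.0 if action == ("alert" if state == "anomalous" else "allow") else -1.0

random.seed(0)
for _ in range(2000):  # trial-and-error interaction with the environment
    s, a = random.choice(states), random.choice(actions)
    # One-step episodes terminate immediately, so the bootstrap term vanishes:
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

def detect(record_class: str) -> str:
    """Greedy policy: pick the highest-valued action for the given state."""
    return max(actions, key=lambda a: Q[(record_class, a)])

# An anomalous record now triggers an administrator alert.
assert detect("anomalous") == "alert" and detect("normal") == "allow"
```

In the full framework a deep network replaces the Q-table and the state is a feature vector, but the reward-driven update shown here is the core learning rule.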

IV. CASE STUDY
To demonstrate the effectiveness of our proposed framework in real-world scenarios, we conducted two case studies in smart city applications: waste management and a voting system. In this section, we describe each case study in detail, including the challenges faced, the solutions provided by our framework, and the results obtained. The goal of these case studies is to show how our framework can enhance the trustworthiness of digital governance systems in practical applications.

A. DOMESTIC WASTE MANAGEMENT - AL RAYYAN, QATAR REGION
The management of municipal waste in Qatar's Al Rayyan City area is the focus of this case study. The research aims to improve the waste collection strategy by incorporating smart bins into the waste management process. A simulation model is created using the Janus multi-agent platform [32], and a portion of the Al Rayyan city dataset is used as a case study.
Problem Description: The smart bins in this study are equipped with sensors that can measure fill density and send data to the cloud. Waste is handled locally at a designated area of the region, and a landfill site is also assigned. Each bin is associated with a group of households, and each household with a population.
Actual solution for case study: The waste management process, from waste generation to disposal, is modeled using agent-based modeling (ABM) concepts. ABM is used to predict and simulate the individual management characteristics of agents. The model is divided into two parts: waste generation and waste treatment [33]. The waste collection procedure is modeled in the waste management module, while the generation module simulates the process of producing waste. A driver and a truck are used for waste collection.
How it is a digital convergence problem: The problem described in the case study relates to digital convergence in that IoT-enabled bins, cloud-based data storage, and agent-based modeling are all examples of digital technologies employed to improve the efficiency and effectiveness of waste management. The integration of these digital technologies is a key aspect of digital convergence, which involves the merging of various digital technologies to create new opportunities for innovation and collaboration. In this case study, the IoT-enabled bins provide real-time information on bin fill levels, which can be used to optimize waste collection routes and schedules. This data is stored in the cloud, making it easily accessible to waste management planners and drivers. Agent-based modeling is used to simulate the waste generation and collection processes, providing a more accurate and efficient way of managing the waste. The waste management problem described here is therefore an example of how digital convergence can improve public service delivery, increase efficiency, and enhance sustainability.
Why a secure platform for domestic waste management - Al Rayyan, Qatar region: Several security problems can affect this case study, including:
i. Data privacy: As sensitive data such as waste generation, collection, and disposal information are involved, it is crucial to ensure that the data is securely stored and transmitted between different stakeholders.
ii. Data integrity: The waste management system heavily relies on accurate data to make decisions. Therefore, the system must ensure that the data is not tampered with and is accurate and up-to-date.
iii. Cyber-attacks: Since the system is connected to the internet, it can be vulnerable to various cyber-attacks such as DDoS attacks, malware, and ransomware.
iv. Insider threats: The system also needs to guard against insider threats, which could be intentional or unintentional. For instance, an employee or contractor with access to the system could steal or manipulate data or disrupt the system.
v. Physical security: The system also needs to be physically secured to prevent unauthorized access or tampering with hardware components such as the IoT-enabled bins or the waste collection truck.

Proposed framework for Domestic waste management-Al Rayyan, Qatar Region:
In the case study of waste management in Al Rayyan, our proposed framework can help address the security challenges by providing a secure and trustworthy platform for digital governance interoperability and data exchange.
i. Firstly, our framework utilizes an optimal blockchain leveraging approach designed using the bonobo optimization algorithm to authenticate data generated from smart city environments. This ensures the integrity and authenticity of the data collected from the IoT-enabled smart bins, preventing any tampering or falsification.
ii. Secondly, the integration of a lightweight Feistel structure with optimal operations enhances privacy preservation and provides two levels of security. This double-secured data exchange ensures that the collected data is protected from unauthorized access or malicious attacks, ensuring confidentiality.
iii. Finally, the DRL model is used to detect and prevent intrusions such as fraud/corruption in the smart city data. This approach enhances transparency and accountability in accessing the data, contributing to more secure digital governance systems. Overall, our proposed framework provides a more secure and trustworthy platform for digital governance interoperability and data exchange in the waste management system, addressing the challenges of privacy, security, and reliability in managing and delivering public services.
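The tamper-evidence that the blockchain layer provides for bin readings can be illustrated with a minimal hash-chained ledger (a sketch only; the record fields are hypothetical, and the bonobo-optimized consensus step is omitted):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over the block's sorted JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Link a new block to the hash of the previous one."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})

def verify_chain(chain: list) -> bool:
    """Any tampering with earlier data breaks a later prev_hash link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"bin_id": "AR-17", "fill_level": 0.82})  # hypothetical reading
append_block(chain, {"bin_id": "AR-17", "fill_level": 0.91})
assert verify_chain(chain)

chain[0]["data"]["fill_level"] = 0.10  # tamper with a stored reading
assert not verify_chain(chain)         # verification now fails
```

This is the property the framework relies on: falsified bin data invalidates every subsequent link, making tampering detectable.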

B. VOATZ - THE STATE OF WEST VIRGINIA, 2018 FEDERAL ELECTIONS, US
In the U.S. midterm elections held on November 6, 2018, the State of West Virginia utilized Voatz's mobile voting application based on blockchain technology to facilitate voting for overseas voters, including active-duty military personnel [34]. This marked the first instance of blockchain being used in a U.S. federal election. Voatz is a startup based in the United States that was established in 2015 and enables citizens to vote in various types of elections and voting events using their smartphones. The company raised $2.2 million in seed funding in January 2018 and has conducted over 30 pilots, recording more than 75,000 votes since its inception [35].
Problem Description: The Voatz mobile application leverages blockchain technology to create an unchangeable record of all cast votes, while also relying on advanced cybersecurity software to identify any malware on voters' smartphones, as well as biometrics to authenticate users. Voters must first register by uploading a photo ID, such as a driver's license, and then submit a brief video of their own face in order to use this app to vote. The facial recognition technology on the voter's iPhone or Android device matches this video against the ID photo, and the information on the ID is compared against West Virginia's voter registration database [36]. Voters can use fingerprints or facial recognition to submit their ballots after they have been verified. The company uses both technology-based verification and human workers to manually review the submitted information. After the vote is secured, all personally identifiable data is deleted, and the voter's selections remain private and do not appear in any public record.
Actual solution for case study: The votes are securely stored on a permissioned blockchain, a distributed database held in 16 different locations [37]. The records are protected by sophisticated computational algorithms; after the polls close, county clerks unlock them. After being retrieved from the blockchain, the votes are printed on scannable paper ballots, which are then tabulated by machines at the state level. A certified email receipt is sent to both the voter and the designated election office when a ballot is cast, serving as an audit mechanism and paper backup. The application is only compatible with specific smartphones that meet stringent security standards and have up-to-date software. If malware is detected, the app prevents users from accessing it, and any suspicious activity is flagged for human review [38], [39], [40]. Voters who prefer traditional methods or are ineligible to vote via smartphone can still do so.
How it is digital convergence problem: The use of Voatz's mobile voting application in the State of West Virginia's 2018 federal elections is an example of digital convergence. It involves the convergence of various technologies, such as blockchain, biometrics, and cyber security software, to provide a seamless and secure voting experience. However, this digital convergence also presents several problems and challenges. One of the main problems is the potential vulnerability of the system to cyber-attacks and hacking attempts. As with any technology that handles sensitive information, the use of blockchain-based mobile voting applications raises concerns about data privacy and security. Furthermore, digital convergence also raises concerns about accessibility and equity. While mobile voting applications like Voatz can make the voting process more convenient for some individuals, it may also exclude those who do not have access to smartphones or those who are not comfortable with using technology.
Why secure platform for Voatz -The State of West Virginia, 2018 Federal Elections, US: A secure platform was necessary for the Voatz application used in the 2018 Federal Elections in West Virginia for several reasons.
i. Voting is a critical process in any democracy, and it is essential to maintain the integrity and trustworthiness of the voting process. Any security breaches or manipulation of the results could compromise the democratic process and undermine the legitimacy of the election outcomes.
ii. The use of mobile devices for voting raises additional security concerns, such as the potential for malware or hacking attempts. A secure platform was therefore crucial to ensure that the votes cast through the Voatz application were not tampered with or manipulated in any way.
iii. The use of blockchain technology provides an additional layer of security and transparency, as it creates an immutable record of all the votes cast. This makes it virtually impossible to alter or delete any records, ensuring that the election results are accurate and reliable.

Proposed framework for Voatz -The State of West Virginia, 2018 Federal Elections, US:
Our proposed framework can be used to enhance the security and reliability of the Voatz mobile voting application, which uses blockchain technology to create an unchangeable record of all cast votes. The optimal blockchain leveraging approach using the bonobo optimization algorithm can be used to authenticate data generated from the smart city environment, which in this case would include the voting data generated from the mobile app. This approach enhances the security and reliability of the data exchanged through the app, ensuring that the voting records are authentic and trustworthy.
The integration of a lightweight Feistel structure with optimal operations can also be used to enhance privacy preservation in the Voatz system. This integration provides two levels of security, ensuring that the voting records remain private and confidential. The deep reinforcement learning (DRL) model can be used to detect and prevent intrusions such as fraud and corruption in the voting data, enhancing transparency and accountability in accessing the data. Overall, the proposed framework can be used to enhance the security, privacy, and reliability of the Voatz system, ensuring that the voting data generated through the mobile app is authentic, trustworthy, and confidential.

V. RESULTS AND DISCUSSIONS
In this section, we discuss the results of our simulations and a comparative analysis of our proposed framework against existing frameworks. The simulations were performed using the Python programming language, and we implemented the blockchain using Ethereum and the Solidity programming language. We evaluated the performance of our framework on two of the latest IoT-based datasets, namely ToN-IoT and BoT-IoT. To provide a fair comparison, we compared our results on both the original and transformed datasets. Additionally, we compared our results with state-of-the-art frameworks such as Naive Bayes (NB), Decision Tree (DT), Random Forest (RF), and the Trustworthy privacy-preserving secured framework (TP2SF). Our simulations were performed on a Tyrone PC with the following specifications: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20 GHz (2 processors), 128 GB RAM, and a 2 TB hard disk. The system was configured with IPFS version 0.4.19. The results of our simulations demonstrate that our proposed framework outperforms the existing frameworks in terms of security, reliability, and privacy preservation. Our framework also provides better accuracy and faster processing times than the state-of-the-art frameworks. These results demonstrate the effectiveness of our proposed framework in enhancing the performance and trustworthiness of digital governance interoperability and data exchange in smart city environments.

A. DATASET DESCRIPTION
To evaluate the effectiveness of our proposed framework, we selected two popular and up-to-date datasets that are commonly used in IoT research: the ToN-IoT and BoT-IoT datasets. These datasets were chosen because they represent real-world IoT data and provide a diverse range of use cases and scenarios. By testing our framework on these datasets, we were able to assess its performance and compare it to other state-of-the-art frameworks.
• ToN_IoT dataset: These are new-generation IoT and Industrial IoT (IIoT) datasets that consolidate data collected from heterogeneous sources, such as telemetry datasets of IoT and IIoT sensors, operating-system datasets of Windows 7 and 10 and Ubuntu 14 and 18 TLS, and network traffic datasets. A realistic, large-scale network was created for the collection of the datasets by the IoT Lab at UNSW Canberra Cyber, which is part of the School of Engineering and Information Technology (SEIT) at the Australian Defence Force Academy. The datasets were gathered in parallel to capture both normal and cyber-attack events from IoT networks. At the IoT lab, a new testbed connecting multiple cloud and fog platforms, physical systems, hacking platforms, virtual machines, and IoT and IIoT sensors was designed to mirror the complexity and scalability of modern IoT and Industry 4.0 networks. The datasets are intended to serve as a benchmark for evaluating the performance of various cybersecurity applications, particularly those based on artificial intelligence.
• BoT-IoT dataset: This dataset was developed in the Cyber Range Lab at UNSW Canberra by creating a realistic network environment with botnet and normal traffic mixed in. The dataset is available in various file formats, including the original pcap files, argus files, and csv files, separated according to attack type. The pcap files are large, at 69.3 GB and more than 72 million records, while the extracted flow traffic in csv format is 16.7 GB in size. The dataset includes different kinds of attacks, such as DDoS, DoS, OS and Service Scan, Keylogging, and Data exfiltration attacks; the DDoS and DoS attacks are further categorized according to the protocol used. To make the dataset easier to handle, the creators extracted 5% of the original dataset using select MySQL queries, resulting in four files with a total size of approximately 1.07 GB and around 3 million records.

Table 2 provides a comparison between the ToN-IoT and BoT-IoT datasets. Backdoor, DDoS, DoS, Injection, MITM, Normal, Password, Ransomware, Scanning, XSS, Recon, and Theft are among the attack classes broken down into training and testing sets. The ToN-IoT dataset has 322,730 records in the training set and 138,313 records in the testing set, while the BoT-IoT dataset has 2,934,817 records in the training set and 733,705 records in the testing set. The ToN-IoT dataset includes Backdoor, DDoS, DoS, Injection, MITM, Normal, Password, Ransomware, Scanning, and XSS data; for Recon and Theft attacks, however, no data is available. In contrast, the BoT-IoT dataset contains data for DDoS, DoS, Recon, and Theft attacks, but not for Backdoor, Injection, MITM, Normal, Password, Ransomware, and Scanning attacks. The number of records for each attack class in both datasets varies. For instance, the DDoS attack class has 13,971 records in the ToN-IoT training set and 6,029 records in the testing set, while the BoT-IoT dataset has 1,541,315 records in the training set and 385,309 records in the testing set. The Normal class has the highest number of records in the ToN-IoT dataset, with 209,792 records in the training set and 90,208 records in the testing set. In contrast, the BoT-IoT dataset has the lowest number of records for the Theft class, with only 65 records in the training set and 14 records in the testing set.

Fig. 4 shows that the upload time increases with both the number of IoT nodes and the size of the file. As the number of nodes increases, there is a noticeable increase in the upload time for each file size. For instance, for a file size of 10 KB, the upload time ranges from 0.280 seconds with 10 nodes to 0.416 seconds with 100 nodes, an increase of approximately 48%. Similarly, for a file size of 320 KB, the upload time ranges from 1.895 seconds with 10 nodes to 2.031 seconds with 100 nodes, an increase of approximately 7%. There is also a gradual increase in upload time as the file size grows: with 10 nodes, the upload time for a 20 KB file is 0.603 seconds, which is 115% more than the upload time for a 10 KB file, while the upload time for a 320 KB file is 1.895 seconds, which is 624% more than the upload time for a 10 KB file. These findings are important for network administrators to consider when deploying IoT networks and determining the necessary infrastructure to ensure efficient file uploads.

Table 3 presents the analysis of block mining time with respect to varying file sizes and numbers of IoT nodes. The findings demonstrate that, regardless of the number of IoT nodes, the block mining time increases with file size.
For a file size of 10 KB, the block mining time ranges from 0.037 seconds to 0.953 seconds for 20 and 100 IoT nodes, respectively. For a file size of 320 KB, the block mining time ranges from 0.652 seconds to 1.568 seconds for 20 and 100 IoT nodes, respectively. Likewise, as the number of IoT nodes increases, the block mining time increases for all file sizes: for instance, as the number of IoT nodes increases from 20 to 100, the block mining time increases from 0.406 seconds to 1.322 seconds for a file size of 80 KB. When the number of IoT nodes increases from 20 to 100 for a file size of 10 KB, the block mining time increases by 2617.6 percent, while it increases by 113.7 percent when the number of IoT nodes remains constant at 20 but the file size increases from 20 KB to 320 KB. Fig. 5 suggests that increasing the number of IoT nodes has a greater impact on the block mining time than increasing the file size. Table 3 also presents the analysis of block creation time with respect to the number of IoT nodes for different file sizes. As the file size increases from 10 KB to 320 KB, the block creation time increases for all numbers of IoT nodes. For instance, the block creation time for a 10 KB file ranges from 0.057 seconds for 20 IoT nodes to 0.973 seconds for 100 IoT nodes, while for a 320 KB file it ranges from 0.672 seconds for 20 IoT nodes to 1.588 seconds for 100 IoT nodes. This indicates that as the file size increases, the block creation time increases proportionally. When considering the impact of varying the number of IoT nodes, the block creation time also generally increases as the number of IoT nodes increases.
For example, for a file size of 10 KB, as the number of IoT nodes grows, the upload time increases from 0.280 seconds to 0.416 seconds, the block mining time from 0.037 seconds to 0.953 seconds, and the block creation time from 0.057 seconds to 0.973 seconds. From Fig. 6, we observe that the block creation time increases both with the number of IoT nodes and with the file size. It is important to note that the percentage increase in block creation time is smaller than that of block mining time for the same file sizes and numbers of IoT nodes, indicating that the block mining process is more resource-intensive than the block creation process. Table 3 also presents the analysis of block access time with respect to the number of IoT nodes and file size. As shown in the table, as the file size increases, the block access time increases: the block access time for a 10 KB file is 0.0006 seconds with 20 IoT nodes, while for a 320 KB file it increases to 0.023 seconds, almost 40 times higher. Similarly, as the number of IoT nodes increases, the block access time increases: for a 10 KB file, it grows from 0.0006 seconds with 20 IoT nodes to 0.462 seconds with 100 IoT nodes, almost 800 times higher. For example, the block access time increases by 76.67% when the file size increases from 10 KB to 320 KB, while it increases by 10.90% when the number of IoT nodes increases from 20 to 100. Fig. 7 indicates that block access time depends on both file size and number of IoT nodes, and both factors must be considered to ensure efficient block access in an IoT environment.
Table 4 presents the class-wise detection rates of the proposed intrusion detection system (DRL) and several existing systems on the ToN-IoT dataset. The table shows that the DRL system achieved a detection rate of 99.8% for backdoor attacks, 77.3% for DDoS attacks, 95.668% for DoS attacks, 69.373% for injection attacks, 43.778% for MITM attacks, 100% for normal traffic, 92.533% for password attacks, 94.595% for ransomware attacks, 95.913% for scanning attacks, and 69.873% for XSS attacks. Compared to other systems, the DRL system showed improvements in detecting backdoor attacks, with only a slight decrease in the detection rate for DDoS attacks compared to the Random Forest (RF) system. The Decision Tree (DT) system achieved perfect detection rates for all classes except Injection and MITM attacks. The RF system achieved the highest detection rate for the Normal class, while TP2SF (Transformed) achieved the highest detection rate for Injection and XSS attacks. However, the DRL system outperformed all other systems for Ransomware and Scanning attacks. Overall, Fig. 8 shows that the proposed DRL system achieved competitive or superior performance compared to existing intrusion detection systems for most attack classes, especially Ransomware and Scanning attacks. The results also highlight the importance of considering class-wise detection rates when evaluating intrusion detection systems, as performance can vary significantly across attack classes. For the ToN-IoT dataset, the class-wise precision comparisons of the proposed and existing intrusion detection systems are shown in Table 5. Precision measures the proportion of positive predictions that are true.
Among the current approaches, decision tree (DT) achieves 100% precision for all intrusion classes, followed closely by random forest (RF) with precision ranging from 96.032% to 100%. The proposed DRL model achieves high precision rates ranging from 97.016% to 100.628% for all intrusion classes except the MITM and Scanning classes. The Naive Bayes (NB) model has the lowest precision rates for most of the classes, with precision ranging from 94.235% to 100%. Comparing the proposed DRL model with the existing methods, it achieves higher precision rates for the DoS and Injection classes than NB, and higher precision rates for the DDoS, Injection, Ransomware, and XSS classes than RF. However, it achieves lower precision rates for the MITM and Scanning classes compared to TP2SF (transformed). From Fig. 9, we observe that the proposed DRL model achieves competitive performance compared to the existing methods in terms of precision, with an average precision rate of 97.9%.
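The precision, recall, and F-measure values reported in these comparisons follow the standard definitions, sketched below on a hypothetical per-class confusion count (not the paper's actual results):

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of positive predictions that are true."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of actual positives that are detected."""
    return tp / (tp + fn)

def f_measure(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# Hypothetical counts for one attack class.
tp, fp, fn = 95, 5, 10
p, r = precision(tp, fp), recall(tp, fn)
assert round(p, 3) == 0.950
assert round(r * 100, 2) == 90.48      # 95 / 105 as a percentage
assert round(f_measure(p, r), 4) == 0.9268
```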

C. COMPARATIVE ANALYSIS WITH RESPECT TO INTRUSION DETECTION SYSTEM
The proposed and existing intrusion detection systems for the ToN-IoT dataset are compared class-wise in Table 6. According to the findings, the proposed DRL model outperforms the other models in terms of recall for most intrusion classes. Specifically, the DRL model achieves the highest recall for all classes except the Normal and MITM classes. The RF model also performs well, with the second-highest recall for most classes. The NB model shows the lowest recall for all classes, indicating that it is not suitable for detecting these types of intrusions. The DT model shows moderate performance, with recall values ranging from 83.102% to 83.309% across all classes. The TP2SF model, both original and transformed, shows a significant improvement in recall over the other models for most classes, particularly the XSS class. Fig. 10 shows that the DRL model achieves an increase in recall for all classes, ranging from 17.262% to 18.825%, compared to the NB model. The DT model shows a slight increase in recall over the NB model, ranging from 2.365% to 2.469%, while the RF model shows a moderate increase, ranging from 4.73% to 4.736%. The TP2SF models, both original and transformed, show a significant increase in recall compared to the other models, ranging from 10.06% to 14.865% for the original TP2SF model and from 15.46% to 20.76% for the transformed TP2SF model. Table 7 presents the class-wise F-measure comparison of the proposed and existing intrusion detection systems for the ToN-IoT dataset. The proposed DRL-based model achieves the highest F-measure for all intrusion classes, ranging from 97.235% to 97.442%. The TP2SF (Transformed) model also achieves a good F-measure, ranging from 94.870% to 95.077%.
The traditional machine learning models, namely NB, DT, and RF, exhibit lower F-measure scores for most intrusion classes compared to the proposed model. Fig. 11 shows that the proposed DRL-based model achieves an average improvement of 13.6% over the existing models. In contrast, the TP2SF (Transformed) model exhibits an average improvement of 7.4% over the existing models. The DT model shows a negligible increase of 0.4%, while the NB and RF models exhibit decreases of 5.6% and 0.1%, respectively. Overall, the proposed DRL-based model outperforms the existing models in terms of F-measure for all intrusion classes. The confusion matrix shown in Fig. 12 represents the performance of the proposed framework on the ToN-IoT dataset. Table 8 presents the class-wise detection rate comparison of the proposed DRL-based intrusion detection system with existing techniques for the BoT-IoT dataset. The results show that the proposed DRL-based approach outperforms the existing methods in terms of detection rates for all intrusion classes. The detection rates for normal traffic are consistently high across all methods, with the proposed DRL-based method achieving a detection rate of 98.765%. For the DDoS and Theft intrusion classes, the proposed DRL-based approach and the existing random forest (RF) approach have similar detection rates, both achieving over 98%.
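The average improvements quoted above correspond to differences in macro-averaged (unweighted mean) F-measure across classes. A minimal sketch of that computation, using hypothetical per-class scores chosen only to illustrate the arithmetic (not the paper's actual per-class values):

```python
def macro_f_measure(per_class_f):
    """Unweighted mean of per-class F-measures (in percent)."""
    return sum(per_class_f) / len(per_class_f)

# Hypothetical per-class F-measures (%) for two systems
proposed = [97.2, 97.3, 97.4, 97.3]
baseline = [83.1, 83.9, 84.3, 83.5]

# Average improvement of the proposed system over the baseline
improvement = macro_f_measure(proposed) - macro_f_measure(baseline)
```

The macro average weights every class equally, so rare but severe attack classes (e.g., MITM) influence the score as much as abundant normal traffic, which is why it is a common summary metric for imbalanced intrusion datasets.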
However, for the other intrusion classes, the proposed DRL-based approach significantly outperforms the other methods. For example, it achieves a detection rate of 98.264% for the DoS class, while the next best method, the original TP2SF approach, achieves only 95%. Similarly, for the Reconnaissance intrusion class, the proposed DRL-based approach achieves a detection rate of 98.411%, while the next best method achieves only 96.090%. Overall, the proposed DRL-based approach achieves the highest detection rates for all intrusion classes, ranging from 98.264% to 98.765%. Compared to the existing methods, it achieves an improvement of up to 3.352% for the DoS intrusion class and up to 2.675% for the Reconnaissance intrusion class. Fig. 13 demonstrates the effectiveness of the proposed DRL-based approach in detecting various types of intrusions in the BoT-IoT dataset. Table 9 compares the class-wise precision of the proposed DRL-based intrusion detection system with the existing systems for the BoT-IoT dataset. The precision for the Normal class is consistently high for all systems, at 98.765%. For the DDoS class, the proposed DRL system shows a significant increase in precision (98.998%) over the other existing systems, an improvement of 0.233% compared to the next best system, the RF-based system. Similarly, the proposed DRL system shows the highest precision for the DoS class (99.16%), an improvement of 0.995% over the second-best system, the RF-based system.
For the Reconnaissance class, the proposed DRL system again shows the highest precision (98.978%), an improvement of 9.213% over the second-best system, the TP2SF-based system. For the Theft class, the precision of the proposed DRL system (98.638%) is significantly better than that of the other existing systems, an improvement of 14.403% over the next best system, the TP2SF-based system. In summary, Fig. 14 demonstrates that the proposed DRL-based intrusion detection system achieves significant improvements in precision for all intrusion classes, except the Normal class, compared to the existing systems for the BoT-IoT dataset. The proposed DRL system outperforms the existing systems in terms of precision for the DDoS, DoS, Reconnaissance, and Theft classes, and achieves an overall high level of precision on the dataset. Table 10 presents the class-wise recall comparison of the proposed DRL-based intrusion detection system and the existing systems for the BoT-IoT dataset. The proposed DRL-based system achieves the highest recall rates for all intrusion classes, with a maximum recall rate of 97.534% for the Theft class. The TP2SF (Transformed) method also shows a considerable improvement in recall rates for all classes, achieving a maximum recall rate of 95.169% for the Theft class. In contrast, the traditional machine learning methods, NB, DT, and RF, have lower recall rates than the proposed and TP2SF methods. The recall rates of these methods are relatively close to one another and do not show significant differences; for example, the recall rates for the Normal class for NB, DT, and RF are 79.617%, 81.982%, and 84.347%, respectively. From Fig. 15, the proposed DRL-based intrusion detection system outperforms the existing methods for the BoT-IoT dataset, with significantly higher recall rates for all intrusion classes.
Compared to the TP2SF (Transformed) method, the proposed DRL-based method achieves even better performance, raising the maximum recall rate from 95.169% to 97.534%. Table 11 presents the class-wise F-measure comparison of the proposed and existing intrusion detection systems for the BoT-IoT dataset. It can be observed that all the models perform well in detecting normal traffic, with F-measures ranging from 78.29% to 96.11%. The proposed DRL-based model outperforms all other models in detecting DDoS and DoS attacks, with F-measures of 96.14% and 96.16%, respectively, improvements of 3.18% and 1.09% over the next best performing model. For Reconnaissance attacks, the proposed model achieved an F-measure of 96.18%, an improvement of 6.73% over the next best performing model. For Theft attacks, the proposed model achieved an F-measure of 96.21%, an improvement of 1.14% over the next best performing model. Overall, the proposed DRL-based model achieved the highest F-measure for three out of five intrusion classes and competitive results for the remaining two. Fig. 16 indicates that the proposed DRL-based model is effective in detecting a wide range of intrusion attacks on the BoT-IoT dataset. The confusion matrix shown in Fig. 17 represents the performance of the proposed framework on the BoT-IoT dataset.
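The class-wise detection rates and precisions discussed above can both be read directly off a confusion matrix such as those in Figs. 12 and 17: detection rate (recall) is the diagonal cell divided by its row total, and precision is the diagonal cell divided by its column total. A sketch with a small hypothetical 3-class matrix (the counts below are illustrative, not from the BoT-IoT experiments):

```python
# Rows = true class, columns = predicted class (hypothetical counts)
# Class order: normal, ddos, theft
cm = [
    [98, 1, 1],   # true normal
    [2, 96, 2],   # true ddos
    [0, 3, 97],   # true theft
]

n = len(cm)
# Detection rate (recall) per class: diagonal cell / row total
detection_rate = [cm[i][i] / sum(cm[i]) for i in range(n)]
# Precision per class: diagonal cell / column total
precision = [cm[i][i] / sum(row[i] for row in cm) for i in range(n)]
```

Because row totals reflect the true class distribution while column totals reflect the predicted one, the two metrics can diverge sharply on imbalanced data, which is why both are tabulated separately for each dataset.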

VI. CONCLUSION
In conclusion, this paper proposes a secure platform to enhance the trustworthiness of digital governance interoperability and data exchange using blockchain and deep learning-based frameworks. The proposed approach incorporates an optimal blockchain leveraging approach and a lightweight Feistel structure with optimal operations to ensure double-secured data exchange and privacy preservation. Additionally, a deep reinforcement learning (DRL) model is utilized to detect and prevent intrusions, contributing to more secure digital governance systems. Experimental evaluation on two benchmark datasets, BoT-IoT and ToN-IoT, demonstrated that the proposed framework outperforms other state-of-the-art techniques across multiple evaluation measures. The framework's effectiveness in real-time scenarios has been demonstrated through two case studies from smart city applications: waste management and voting systems. Overall, the proposed framework provides a trustworthy platform for digital governance interoperability and data exchange, addressing the challenges of privacy, security, and reliability in managing and delivering public services. The results demonstrate the effectiveness of the proposed approach in enhancing the trustworthiness of digital governance systems and ensuring secure and reliable data exchange in smart cities.