Classification and Mapping of Model Elements for Designing Runtime Reconfigurable Systems

Embedded systems are ubiquitous and control many critical functions in society. A fairly new type of embedded system has emerged with the advent of partial reconfiguration: the runtime reconfigurable system. Such systems are attracting interest in many different applications, as they are capable of reconfiguring themselves at the hardware level without halting the application's execution. Modeling and implementing these systems is far from trivial, yet there is currently a lack of systematic approaches to tackle this issue. In other words, there is no unanimously agreed-upon modeling paradigm that can capture adaptive behaviors at the highest level of abstraction, especially regarding the design entry, namely, the initial high-level application and platform models. Given this, our paper proposes two domain ontologies for application and virtual platform models, used to derive a classification system and to provide a set of rules on how the different model elements are allowed to be composed. The application behavior is captured through a formal model of computation, which dictates the semantics of execution, concurrency, and synchronization. The main contribution of this paper is to combine suitable formal models of computation, a functional modeling language, and two domain ontologies to create a systematic design flow from an abstract executable application model into a virtual implementation model based on a runtime reconfigurable architecture (virtual platform model) using well-defined mapping rules. We demonstrate the applicability, generality, and potential of the proposed model element classification system and mapping rules by applying them to two representative and complete examples: an encoder/decoder system and an avionics attitude estimation system. Both cases yield a virtual implementation model from an abstract application model.


I. INTRODUCTION
Nowadays, embedded systems are ubiquitous and control many vital and safety-critical domains, such as transportation, communication, and medical devices. Modeling and correctly implementing these systems is not an easy task.
In embedded systems, formal models have for a long time aided in managing the complexity of emergent behaviors arising from combining heterogeneous components [1]. The role of models, in general, is to capture the essential aspects of a system's behavior at a high level as abstract, well-behaved, mathematical representations. A formalism gives value to a model from the perspective of a computer-aided design flow, equipping it with tools and methods for analysis, and rules for evaluation and verification. In this context, formal models become specifications, or entry points in a design flow, enabling a rigorous, traceable, and amendable path to system implementations.
Yet, there is a lack of systematic approaches for the design of runtime reconfigurable (RTR) systems that can guarantee a feasible and correct implementation, especially regarding the design entry, namely, the initial high-level models for system specification.
RTR systems are embedded systems capable of reconfiguring part of their hardware without stopping the execution of the application implemented on them. Such systems are gaining interest in many different applications due to their adaptive capability, which can be exploited to reduce power consumption, increase performance [2], and provide fault tolerance [3]. Needless to say, a systematic design methodology for RTR systems is essential for their correct implementation. Serpanos et al. [4] state important research challenges, including the need for novel tools to enable embedded systems design at different abstraction levels, also considering platform definitions and runtime reconfigurability aspects.
In view of this, the scientific community needs to consider new strategies for the design of RTR embedded systems. Other research efforts address, for example, software and hardware co-design [5], [6], or testing and debugging [7], often analyzing these concepts as separate pieces. Also, most of them do not tackle software and hardware reconfiguration as formal aspects in the design flow, even when examining RTR [8].
We address these questions through the combination of ontologies and formal models of computation. In this sense, our paper proposes two domain ontologies: an application-domain ontology and a platform-domain ontology. They are used to derive a classification system and to provide directions on how the different model elements are allowed to be composed. In turn, a well-defined semantics is provided by a formal model of computation (MoC) [1] that dictates execution, concurrency, and synchronization. Considering the design of runtime reconfigurable embedded systems, we propose a flow taking into account (i) an application model, (ii) a virtual platform model, and (iii) a set of mapping rules developed in the paper.
The application model is described based on the theory of models of computation and admits reconfigurable behavior. The scalable platform model represents the abstraction of the underlying software and heterogeneous hardware, i.e. it constitutes a virtual platform. When a set of well-defined mapping rules is applied to these models, the result is a correct virtual implementation model that can be further compiled into software or synthesized into hardware. Compilation and synthesis are not within the scope of the paper; rather, they are potential design steps enabled by virtual implementation models.
The models are expressed as functional programs (i.e. hosted on a functional programming language), where functions are treated as first-class citizens. This means that the host language provides both a semantic foundation and the evaluation mechanisms necessary to pass functions as values. Hence, functional programming provides a convenient environment to describe adaptive models of computation and to demonstrate the applicability of our classification system and mapping rules.
We apply our proposal to two representative runtime reconfigurable embedded systems to demonstrate the applicability, generality, and scalability of our approach.
As main contributions of this paper, we combine a set of suitable formal models of computation, a functional modeling language, and two domain ontologies to create a systematic design flow from an abstract executable system model into a virtual implementation based on a runtime reconfigurable architecture, a virtual platform, and using well-defined mapping rules.
The main contribution points are summarized as follows: 1) an application-domain ontology used to derive a classification system and composition criteria to model an application (Sections III-A1 and III-A5); 2) a platform-domain ontology used to derive a classification system and composition criteria to model a virtual platform (Section III-B1); 3) a generalization of the runtime reconfiguration concept by using different MoCs, i.e. timed and untimed. We formulate the requirements for a MoC to be able to describe reconfigurable behaviors and we give examples of suitable MoCs found in the literature (Section III-A4); and 4) a set of well-defined mapping rules that when applied to the high-level application and platform models, leads to a feasible virtual implementation model (Section III-C).

II. RELATED WORKS
Commercial FPGAs have been able to perform partial reconfiguration for over two decades [9]. However, runtime hardware reconfiguration has remained largely underutilized. Nguyen et al. [2] argue that the lack of use cases demonstrating the advantage of runtime reconfiguration (RTR) over traditional static reconfiguration may be the root cause. For this matter, the authors of [2] present a quantitative demonstration of the benefits of such an approach for embedded vision applications, considering aspects like power consumption and performance. Fortunately, in recent years, runtime reconfiguration is being revisited, and many different applications are exploring the advantages of such a feature. In [3], runtime partial hardware reconfiguration is used in the embedded control system of a multirotor, enabling different rotor configurations to be switched at runtime. The authors point out that runtime reconfiguration will make the multirotor more fault-tolerant and adaptive. In [10], runtime reconfiguration is explored to increase the security of implemented cryptographic algorithms against side-channel attacks.
Even with the increasing interest in runtime reconfiguration capabilities, most applications are still designed using ad-hoc methods due to the lack of a well-structured flow for the design of general RTR applications. Here, we intend to advance the state of the art regarding the design of RTR embedded systems by using the concepts of ontologies, models of computation, and mapping rules. Our goals address several of the challenges posed in [11], such as a system's ability to adapt itself at runtime and the use of formal methods to enable correct-operation guarantees.
A model-based design is introduced in [12], although the authors do not use runtime reconfiguration. They advocate an approach for the rapid development of embedded systems. Their solution is based on STMicroelectronics- and MathWorks-related tools, for which they created a Simulink blockset. Our proposal is to use free and even open-source frameworks and tools as much as possible, mainly in the specification, modeling, and simulation phases. For low-level automatic code generation, one can take advantage of commercial tools' engines, such as the Simulink embedded coder mentioned in [12].
In [13], the authors show a formal model and a corresponding design methodology applicable for the context of a heterogeneous system on chip. However, they address only the specification of access permissions and information flow requirements for embedded systems in an abstract manner and provide configuration codes at the end. In the present paper, we conceptualize a design flow taking into account executable functional specifications, based on a suitable model of computation, along with functional blocks of a virtual platform. Next, we derive a feasible virtual implementation by combining the functional specifications and blocks by a set of well-defined mapping rules.
Khalgui et al. [14] propose a design methodology applicable to reconfigurable discrete-event systems. They use a profile called R-UML, which is a UML extension. In a further step, the method relies on a transformation to consider formalisms, e.g., Petri nets and timed automata. In [15], the authors combine semi-formal and formal methods and also use model transformations. Our proposal, in contrast, starts directly with formal models of computation.
According to [16], an ontology is an "explicit organization of knowledge" structured by concepts and the relation between them. When used in a specific domain, this enables formalization of information. In this sense, we developed two domain ontologies: application and platform, as well as a set of mapping rules connecting both.
A method to infer semantic properties using lattice-based ontologies supported by manual annotations is presented in [17]. In the present paper, we introduce ontologies used to derive a classification system that allows for efficient mapping of application model elements to platform functional blocks.
Embedded systems design is intrinsically domain-specific. Given this, the work in [18] proposes a formal foundation to evaluate whether two different domain-specific modeling languages or metamodels are equivalent.
The authors of [19] defined hardware and software co-design as a possibility to integrate hardware and software design techniques in a single framework and introduced a formal co-design methodology applicable to embedded systems specifications. Here, we present a set of well-defined mapping rules that enable the elements of an application model to be systematically mapped to the functional blocks in a platform model, leading to a possible and feasible implementation model.
Self-adaptation is one of the most relevant strategies to cope with the increasing complexity of software systems [20]. As claimed by those authors, this complexity demands the adoption of formal methods in systems design. In the present research, the application model is based on formal models of computation, which confers execution, concurrency, and synchronization semantics.

III. DESIGN FLOW CONCEPTUALIZATION
Our proposed classification system and set of mapping rules, supported by formally analyzable application models, enable a correct-by-construction approach leading to a feasible virtual implementation model, as illustrated in Fig. 1. Here, we suggest a design flow starting from the application model (Section III-A) and the virtual platform model (Section III-B), followed by an application of the mapping rules (Section III-C) to yield the virtual implementation model as the result.
We envision our proposal as paving the way towards a correct-by-construction system design flow [21], where each design decision is accounted for and can be justified by the need to both satisfy the required application properties and meet the design criteria. This motivates founding our approach on formal models of computation, which grant unambiguous semantics for an application, and systematizing the classification systems and mapping rules to obtain a virtual platform.
The system design starts with an executable specification, based on a sound formal model of computation. The application model Λ formally specifies the system under development. This model is described as a network of concurrent processes communicating through signals on a MoC-based framework. In turn, this framework describes runtime reconfigurable behavior in an abstract way, hiding specific details about the implementing technology from the designer perspective. Therefore, the designer can focus on developing a functional executable specification without restrictions imposed by technology. Fig. 2 illustrates this concept. In this sense, a control actor is responsible for sending functions to a regular actor. In turn, this actor processes inputs into outputs by using the received function.
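The control/regular actor interaction described above can be sketched in a few lines of Python (used here as a stand-in for the MoC-based framework; all names are hypothetical and do not come from the paper):

```python
# Illustrative sketch (not the paper's framework): a control actor
# decides which function to send, and a regular actor applies the
# received function to its inputs.

def control_actor(mode):
    """Decide which function the regular actor should use."""
    if mode == "scale":
        return lambda x: x * 2
    return lambda x: x + 1

def regular_actor(func, inputs):
    """Process inputs into outputs using the function received
    from the control actor."""
    return [func(x) for x in inputs]

f = control_actor("scale")
print(regular_actor(f, [1, 2, 3]))   # -> [2, 4, 6]
```

The key point is that the function itself travels as data from the control actor to the regular actor, hiding any implementation technology from the designer.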
The virtual platform model P presents itself as an abstraction of a heterogeneous architecture. It captures the main concepts of a wide range of physical hardware platforms and configurations into functional basic blocks and hides device-specific information. This can be thought of as a template that can be implemented on a variety of physical technologies.
The application and platform models provide information about the functional aspects of the system being designed. This is possible due to the orthogonalization of design concerns this approach relies on. The application and platform information is combined by a set of well-defined rules that takes into account the functional aspects of the application together with the functional aspects of the platform, resulting in a virtual implementation model I.
The modeling language uses a small yet powerful set of modeling elements, which have a correspondence in the implementation domain. This enables the mapping rules.
The ontologies are used to restrict both the application domain and the implementation domain to the set of necessary components, so that efficient mapping rules can be derived.

A. APPLICATION MODEL
A model is considered as an abstraction of an element that can be a physical system or even another model. Abstraction comprises a way of choosing which level of information or aspects to consider when modeling a given system [22].

1) Application-Domain Ontology
Classification and composition rules along with semantic information (as depicted in Fig. 1) contribute to a clear and unambiguous understanding of a model. In view of this, we developed an application-domain ontology used to attribute a classification to each element of our application model and to define how the elements are allowed to be composed. The semantics of the application model Λ is finally given by the MoC used to describe the application, according to the characteristics of the application, e.g. execution, concurrency and synchronization (detailed in Section III-A2).
As illustrated in Fig. 3, the application-domain ontology has three higher abstract entities: procedure, path, and value.
The higher abstract entity procedure performs a computation and can be either a controller procedure or an executor procedure. The computation performed by the controller procedure consists in deciding and informing a variable executor procedure which function it has to use to perform its computation (here we consider functions as ordinary data, allowing them to be exchanged between procedures and stored in memory). On the other hand, a fixed executor procedure has its unchangeable function defined at design time.
Another higher abstract entity is the path. Paths serve as unidirectional channels between procedures, so that they can communicate with each other and exchange entities from the value class. The control path enables the transport of functions from a controller procedure to a variable executor procedure. Finally, there is the data path, used to transport info values. Data paths are classified as either homogeneous data paths or hybrid data paths. A homogeneous data path is a communication channel between two executor procedures (variable or fixed), and a hybrid data path is used when an executor procedure communicates with a controller procedure.
The relation "is part of" means that the higher abstract entities procedure, path, and value, together compose the application. The relation "is a" stands for an entity generalization as it moves upwards in the ontology hierarchy.
The realizable entities (leaves in the tree representation, Fig. 3) in the application-domain ontology are the controller procedure, fixed executor procedure, variable executor procedure, control path, homogeneous data path, hybrid data path, info value and function value. These entities are interpreted as classes, as described in Section III-A5.
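As a concrete reading of these composition rules, the following Python sketch encodes the procedure classes and checks which class of path (if any) is allowed between two procedures. This is a hypothetical encoding for illustration, not the authors' tooling:

```python
# Hypothetical encoding of the realizable procedure classes and the
# path composition rules read off the application-domain ontology.
from enum import Enum, auto

class Cls(Enum):
    CONTROLLER = auto()
    FIXED_EXECUTOR = auto()
    VARIABLE_EXECUTOR = auto()

EXECUTORS = {Cls.FIXED_EXECUTOR, Cls.VARIABLE_EXECUTOR}

def path_class(src, dst):
    """Return the class of path allowed from src to dst, or None
    when the composition is not permitted by the ontology."""
    if src is Cls.CONTROLLER and dst is Cls.VARIABLE_EXECUTOR:
        return "control path"           # carries function values
    if src in EXECUTORS and dst in EXECUTORS:
        return "homogeneous data path"  # carries info values
    if src in EXECUTORS and dst is Cls.CONTROLLER:
        return "hybrid data path"       # carries info values
    return None

print(path_class(Cls.CONTROLLER, Cls.VARIABLE_EXECUTOR))  # control path
```

A composition that returns None, such as a controller feeding a fixed executor with functions, is rejected, since a fixed executor's function cannot change after design time.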

2) Functional Modeling
The application model Λ is structured as a network of concurrent processes communicating through signals, having their formal foundation in the theory of models of computation.
Models of computation (MoC) are classes of behaviors dictating the semantics of execution, concurrency, and synchronization in heterogeneous cyber-physical systems. They are founded in the tagged signal model (TSM), which is a denotational framework introduced in [1], [23], as a common meta-model for describing properties of concurrent systems as sets of possible behaviors. From a structural point of view, concurrent systems are defined as networks of processes communicating through signals.
In this context, signals are ordered sets of events characterized by a tag system, which describes the causality between events and can model time, precedence relationships, synchronization points, and other key properties. Events are the elementary information units exchanged among processes and are composed of a tag t ∈ T and a value v ∈ V .
Processes can be regarded as sets of possible behaviors over signals or relations between multiple signals. In other words, processes are basic structures that encapsulate computation and execute with the semantics dictated by a specified tag system, i.e. a MoC. Processes are allowed to have an internal state. When the behavior of a process is determinate and can be uniquely defined by a function from inputs to outputs, such a process is said to be functional. Models possessing only functional processes are named functional models.
A suitable formal framework to host our approach is ForSyDe (Formal System Design). It is a transformational design methodology based on MoCs and the functional programming paradigm (FPP). ForSyDe targets the modeling of heterogeneous embedded systems by providing a common framework for the use of specific MoCs for different design goals [24].
A key idea of ForSyDe is to use processes constructors to create processes. Exploiting the FPP, ForSyDe defines process constructors as higher-order functions that take combinational functions and values as arguments to produce processes. The process constructors implement the semantics of the specific MoC, thus enabling the orthogonalization between communication and computation [24]. In this way, the behavior of a system given by a specific MoC can be captured using a finite set of process constructors. ForSyDe defines processes from a functional programming perspective as functions over the (history of) signals.
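The idea of a process constructor as a higher-order function can be sketched as follows. Python is used as a stand-in for ForSyDe's Haskell host language, with signals modeled as plain lists; the constructor name is illustrative, not ForSyDe's actual API:

```python
# Python stand-in for a ForSyDe-style process constructor: a
# higher-order function that lifts a combinational function into a
# process over signals (here modeled as plain lists).

def map_sy(f):
    """'map'-style synchronous constructor: the returned process
    applies f to every event of the input signal, one output event
    per input event."""
    def process(signal):
        return [f(event) for event in signal]
    return process

doubler = map_sy(lambda x: 2 * x)  # a process built from a function
print(doubler([1, 2, 3]))          # -> [2, 4, 6]
```

Note how the constructor fixes the communication semantics (one output per input, in order) while the supplied function fixes only the computation, realizing the orthogonalization mentioned above.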
Signals, in ForSyDe, are the only means of communication and synchronization between processes, i.e., processes communicate to each other by writing to and by reading from signals.

a: Synchronous Dataflow
The synchronous dataflow (SDF) MoC, introduced by Lee and Messerschmitt [25], is a paradigm for untimed computation in which execution is dictated by a partial order between events. An SDF application can be expressed as a directed graph, where vertices are called actors and represent the computations or functions to be executed, and edges are directed signals which represent data channels. The partial ordering between events is satisfied by firing the actors according to fixed production and consumption rates. In other words, vertices execute whenever there is sufficient data in the input signals according to the consumption rates, and yield a number of data tokens at the outputs according to the production rates. In this case, an actor without input signals can fire at any time. Similarly, it is also possible to have sink-node actors, i.e. without any output. Fig. 4 illustrates an SDF graph actor named actor ν. In our application model concept, the index ν ∈ {1, 2, . . . , V} stands for the actor number, and V ∈ N+ = {1, 2, 3, . . . } is the number of existing vertices/actors in the graph. The actor itself represents a computation.
Let's assume that it is possible to change the actor's computation in runtime. In this case, the actor is a function placeholder, that is, an actor that has defined but not fixed behavior over time, as detailed in Section III-A4. We base our assumption on the fact that the partial ordering constraints between events dictated by an SDF graph can be satisfied regardless of the functions executed by its actors. Still, in Fig. 4, there are two explicit incoming signals and another two explicit outgoing signals related to actor ν . Each actor ν can have A ∈ N = {0, 1, 2, . . . } incoming signals, and B ∈ N + outgoing signals. It means that each actor can receive data from different inputs and produces data into different outputs. When fired, i.e. executing the function it holds, the actor ν consumes c ν,α samples from incoming signal i ν,α where α ∈ {1, 2, . . . , A}, and produces p ν,β samples into the outgoing signal o ν,β where β ∈ {1, 2, . . . , B }. The parameters c ν,α and p ν,β are the consumer and producer rates, respectively. It is also possible to have sink-node actors, i.e. without any output.

b: Scenario-Aware Dataflow
The scenario-aware dataflow (SADF) MoC is an extension of SDF to address dynamic behavior [26]. SADF has two types of actors: kernels and detectors. Kernels are data processing actors whose configuration, i.e., production and consumption rates and function, is defined by input control tokens. Every time a kernel fires, it consumes a single control token from the control port, setting the configuration of the firing. Then, the firing proceeds in the same manner as in an SDF actor. Detectors are the actors responsible for producing control tokens for all kernels connected to them via control channels. The functional behavior of a detector is dictated by a finite state machine (FSM), and every state of the FSM represents a detector's scenario [27]. The token consumption rates of the detector are constant; however, the token production is a function of the current state. The FSM behavior of the detector ensures that the model is analyzable for properties such as consistency and absence of deadlocks.

c: Synchronous MoC
Two signals are synchronous if all events in one signal are synchronous to the events in the other signal and vice versa [1]. Following the synchrony hypothesis, the timing behavior of the model is simply defined by the arrival of input events, considering that the system handles input samples in zero time and waits until the next input event arrives [28].
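The SADF kernel/detector mechanism described above can be made concrete with a minimal Python sketch. The two-state FSM, the scenario names, and the functions below are all hypothetical, chosen only to show how control tokens configure a kernel's firing:

```python
# Illustrative SADF sketch: a detector's FSM emits control tokens that
# select a kernel's scenario (consumption rate and function) per firing.

SCENARIOS = {                 # control token -> (consumption rate, fn)
    "lo": (1, lambda xs: [min(xs)]),
    "hi": (2, lambda xs: [max(xs)]),
}

def detector_step(state):
    """Detector FSM: emit one control token per step, then advance
    (a trivial two-state machine here)."""
    token = "lo" if state == 0 else "hi"
    return token, (state + 1) % 2

def kernel_fire(ctrl_token, data):
    """Consume a single control token to configure the firing, then
    proceed like an SDF actor with the selected rate and function."""
    rate, fn = SCENARIOS[ctrl_token]
    consumed = [data.pop(0) for _ in range(rate)]
    return fn(consumed)

state, data = 0, [5, 3, 9]
token, state = detector_step(state)
print(kernel_fire(token, data))   # scenario "lo": consumes [5] -> [5]
token, state = detector_step(state)
print(kernel_fire(token, data))   # scenario "hi": consumes [3, 9] -> [9]
```

Because the detector's possible scenarios are finite and FSM-driven, the set of configurations a kernel may assume is known statically, which is what keeps the model analyzable.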

3) Runtime Reconfiguration
An adaptive system can actively change its configuration based on an internal state or as a response to changes in the external environment. Adaptiveness can be enabled by runtime reconfiguration: for a specific application and a specific time frame, the reconfigurable device's spatial structure is changed to comply with a given objective [29], [30]. Reconfigurable computing aims at both high performance and flexibility [31].
We distinguish between two possible reconfiguration techniques: full and partial. A device supports full reconfiguration when it considers its whole reconfigurable area as just one monolithic block. In that case, the hardware configuration can be changed at the cost of stopping the execution of the whole reconfigurable area. Partial reconfiguration, on the other hand, allows the reconfigurable area to be divided into one or more independent partitions or slots that can be reconfigured, i.e. changed, without affecting each other.
Runtime reconfiguration is the possibility to change a reconfigurable device's functionality while that device has already started executing and without stopping it.

4) Generalization of the Runtime Reconfiguration Concept
We formulate the requirements for a MoC to be able to formally describe runtime reconfigurable behaviors. Our application model Λ is supported by this generalization of the runtime reconfiguration concept.
The basic principle is the use of functions as events [32]. Once a model of computation or a language supports first-class functions, and consequently higher-order functions, modeling adaptivity by defining runtime reconfigurable processes which operate on signals carrying functions becomes transparent. In other words, the key to expressing runtime reconfigurability at the highest level of abstraction in a transparent manner is the capability of the modeling language (or MoC) to support first-class functions, i.e. functions that can be treated as ordinary values.
In this sense we define the concepts of meta-behavior and function placeholder as follows.
Definition 1 (Meta-behavior): A meta-behavior is a process' functional behavior that is defined by a function application, generically given by the apply function as follows:

apply(f, x_1, x_2, . . . , x_n) = f(x_1, x_2, . . . , x_n)     (1)

The apply function is a higher-order function that takes a function and a number of arguments and, as a result, applies the function to those arguments. Each argument x_i can belong to a different V_i ⊂ V.
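In a language with first-class functions, Definition 1's apply is a one-line higher-order function (sketched here in Python rather than the paper's host language):

```python
# Definition 1's apply as a Python higher-order function: it takes a
# function and some arguments and applies the former to the latter.

def apply(f, *args):
    return f(*args)

print(apply(lambda a, b: a + b, 2, 3))  # -> 5
```

Any process whose behavior is expressed through apply can be reconfigured simply by changing the function carried on its input signal, with no change to the process itself.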

Definition 2 (Function Placeholder – FPH): A function placeholder is any functional process possessing a meta-behavior as its defined behavior. As an implication, one of its input ports receives functions, which are applied to the rest of the input events to compute the outputs. A functional process is a computing element such as an actor. The concept of the function placeholder is exhibited in Fig. 5. The process Φ ν receives functions on the input signal sf ν and applies them to its remaining inputs to generate the outputs, as in (2) and (3).
For dataflow untimed models (e.g. SDF and SADF), the variable δ represents the actor firing k ν , while inputs i n where n ∈ {1, . . . , α}, and outputs o m where m ∈ {1, . . . , β}, are vectors of events with size given by the token rates.
Given this, the application model elements of the class Procedure.Executor.Variable follow the FPH definition (Definition 2), independently of the model of computation used for the modeling.
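A function placeholder can be sketched in Python as a process with one function-carrying input signal and one data signal (signals as lists, firing-by-firing; names are illustrative):

```python
# Function placeholder (Definition 2) sketch: one input signal carries
# functions, the other carries data; at each firing the currently
# received function is applied to the current data event.

def fph(func_signal, data_signal):
    """A process with a meta-behavior: its behavior is defined (apply)
    but not fixed, since the applied function changes per firing."""
    return [f(x) for f, x in zip(func_signal, data_signal)]

sf = [lambda x: x + 1, lambda x: x * 10, lambda x: -x]
sd = [1, 2, 3]
print(fph(sf, sd))   # -> [2, 20, -3]: the behavior changes each firing
```

This is the sense in which a variable executor procedure has "defined but not fixed" behavior: the firing rule is constant while the computation it applies is supplied at runtime.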

5) Classification System
The application-domain ontology introduced in Section III-A1 is used to derive and build the classification system related to the application part, as depicted in Fig. 1. Considering the semantics of the application model Λ and the concepts of the application-domain ontology, Table 1 shows the application model elements and their corresponding classifications. Notice that the application model elements, shown in the left column of Table 1, are always the same, independent of the MoC used.
The mapping rules step uses this resulting classification system for the application side. In other words, once a given system is designed with the MoC-based application model Λ, the class of each model element is given by our classification as in Table 1.

B. VIRTUAL PLATFORM MODEL
As a scalable virtual platform model P , we consider the use of the runtime reconfigurable (RTR) approach when abstracting architectural details through functional blocks. The scalability aspect is related to the platform's ability to receive different functionalities over time enabled by both hardware and software reconfiguration schemes.

1) Platform-Domain Ontology
In the same way as for the application-domain ontology, we developed a platform-domain ontology used to attribute a classification and composition directions to each functional block of our virtual platform model P, as shown in Fig. 1.
The platform-domain ontology has three higher abstract entities: static, adaptable and pool, as Fig. 6 illustrates.
The entity static denotes a fixed underlying hardware architecture, in which the software needs to adapt to the hardware. On the other hand, the entity adaptable stands for runtime reconfigurable circuitry, where an algorithm can run on a hardware architecture optimized for it in terms of performance and power consumption. In this case, different basic logic gates are combined to form a circuit, i.e. reconfigurable hardware.
The entity pool represents a memory pool where dissimilar, ready-to-load-and-execute software-based and hardware-based configurations are stored. Configurations can be loaded and selected at runtime.
Notice that static and adaptable have almost the same entities relating to them. When an entity is adaptable, it has the suffix χ, e.g. interfaceχ. The χ indicates that the entity also supports hardware runtime reconfiguration, both full and partial. This gives flexibility to the proposed ontology, i.e. it is possible to have different combinations in the number of instances of the two entities static and adaptable when describing a platform.
The entity interface is divided into data interface and control interface. Interfaces are in charge of sending and receiving data and control messages to and from the entities performer and control unit. The entity performer is responsible for computations. Those computations are based on fixed hardware when the performer is a static entity, and on runtime reconfigurable hardware when the performer is an adaptable entity. Within the entity adaptable, there are the once performer, the mode performer, and the function performer. In the once performer, a configuration is loaded once during the initial full reconfiguration and never changes at runtime; the configuration to be loaded is defined at design time. In the mode performer, different algorithm implementations can be selected at runtime to execute and deliver the expected computation result; in this case, all possible configurations must be loaded a priori, connected to the performer's data and control interfaces, and ready to be executed. Conversely, in the function performer, just one configuration at a time is fetched at runtime from the pool, loaded, connected to the performer's inputs/outputs, and made ready to be executed; this can be done multiple times during runtime.
The entity control unit is split into manager and dispatcher. The manager control unit is responsible for deciding which configuration an adaptable performer will execute based on a user-defined selection algorithm. Whenever the manager decides that a new configuration has to take place, the dispatcher control unit takes care of the reconfiguration mechanisms, either mode or function reconfiguration.
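The manager/dispatcher split described above can be sketched in code. The following Python fragment is an illustrative sketch only, not the paper's implementation; all class and method names (Manager, Dispatcher, Performer, tick, reconfigure) are hypothetical, and the user-defined selection algorithm is reduced to a simple static schedule.

```python
# Hypothetical sketch of the control-unit split: the manager runs a
# user-defined selection algorithm; the dispatcher carries out the
# actual (re)configuration of a performer.

class Performer:
    """Holds the currently active configuration and executes it."""
    def __init__(self):
        self.active = None
    def run(self, x):
        return self.active(x)

class Dispatcher:
    """Carries out mode or function reconfiguration on a performer."""
    def __init__(self, performer):
        self.performer = performer
    def reconfigure(self, configuration):
        # In mode reconfiguration this would flip a selector signal;
        # in function reconfiguration it would load a new bitstream.
        self.performer.active = configuration

class Manager:
    """Decides which configuration the performer executes next."""
    def __init__(self, dispatcher, schedule):
        self.dispatcher = dispatcher
        self.schedule = schedule   # user-defined selection algorithm
        self.step = 0
    def tick(self):
        cfg = self.schedule[self.step % len(self.schedule)]
        self.dispatcher.reconfigure(cfg)
        self.step += 1

p = Performer()
m = Manager(Dispatcher(p), schedule=[lambda x: x + 1, lambda x: x - 1])
m.tick(); r1 = p.run(10)   # first configuration: 10 + 1
m.tick(); r2 = p.run(10)   # second configuration: 10 - 1
```

The point of the split is separation of concerns: the manager never touches the hardware reconfiguration mechanism, and the dispatcher never decides.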
The relation "is part of" means that the higher-level entities static, adaptable, and pool together compose the platform. The relation "is a" stands for entity generalization as one moves upwards in the ontology hierarchy.
As in the application-domain ontology, the realizable entities in the platform-domain ontology are the leaves in the tree representation (Fig. 6), namely the data interface static, control interface static, performer static, manager control unit static, dispatcher control unit static, dataχ interfaceχ adaptable, controlχ interfaceχ adaptable, once performerχ adaptable, mode performerχ adaptable, function performer adaptable, managerχ control unitχ adaptable, dispatcherχ control unitχ adaptable, and configuration pool.
These entities are interpreted as classes, and are used to build a classification system (Section III-B3), in the same way as the application-domain ontology.

2) Virtual Platform Functional Blocks
The virtual platform architecture model is built up from basic functional blocks: worker, steward, and handler. These functional blocks can be attached to a programmable device (ProgDev) or a reconfigurable device (ReconDev). ProgDev stands for fixed hardware architecture devices, such as the central processing unit (CPU) and the graphics processing unit (GPU). On the other hand, ReconDev represents runtime reconfigurable devices, such as modern field-programmable gate array (FPGA) chips supporting full and partial reconfiguration.
The introduced platform-domain ontology allows for various possible combinations to describe a virtual platform. It is possible to have a platform with functional blocks following definitions from: 1) just entity static; 2) just entity adaptable; and 3) both entities static and adaptable, i.e. heterogeneous. The functional block worker (single-line box in the following figures) represents a block performing a software-based computation, when attached to a ProgDev, or a hardware-based computation, when attached to a ReconDev. The functional block handler (double-line box in the figures) is responsible for deciding which function configuration, from a configuration repository, a functional block worker has to perform.

a: Fixed Computation Cases
Fig. 7a describes a virtual platform architecture model with two workers sharing data through a bus. In this case, they are attached to a programmable device, meaning their computations are fixed. On the other hand, Fig. 7b describes another possibility, where workers are attached to a reconfigurable device, executing a configuration assigned to them at design time and loaded into them once by full reconfiguration. In this case, they will not change at runtime, since there is no functional block such as the handler or steward in the scenario. The subscript, i.e. index ∈ 1 . . . n, in the worker indicates that a device supports n different functional blocks, including workers, handlers, and stewards. The steward is represented with a dotted-line box in the following figures.

b: Variable Computation Cases: Mode Reconfiguration
As illustrated in Figs. 8a and 8b, the block handler decides which function configuration the block worker has to perform. This decision relies on a user-defined algorithm, which can be a static schedule. The presence of a handler and a steward makes this a variable computation scenario.
Figs. 8a and 8b also depict the case where the handler commands the functional block steward so that it can point a worker to the address of the function it is to execute. In this way, the steward controls the worker's function selection through a control bus. This approach is known as mode reconfiguration [33], detailed in Fig. 8c, where the functions are selected via sf_1.
In a variable computation case based on mode reconfiguration, all possible functions (software- or hardware-based configurations) must be initially loaded into the device's memory, whether a programmable or a reconfigurable device. Then, just one function is chosen at a time from the 1 . . . Γ possibilities, using the signal sf_1 to switch to a function (i.e. configuration) and select its output (Fig. 8c). Note that the worker and the handler can communicate with each other via a data bus.
In Fig. 8c, the switch connects the input interfaces i_{1,1} . . . i_{1,n} to the switched function (i.e. configuration), and the select connects the selected function's output to the output interface o_1. Mode reconfiguration is possible in both programmable and reconfigurable devices, as illustrated.
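The switch/select behavior of mode reconfiguration can be sketched as follows. This is a hedged Python analogue (the paper targets hardware and ForSyDe models, not Python); the names make_mode_performer and perf are hypothetical, and the selection signal sf_1 is reduced to a plain integer index.

```python
# Illustrative sketch of mode reconfiguration: all Γ configurations
# are resident, and the selector sf1 both switches the inputs to the
# chosen function and selects that function's output as o_1.

def make_mode_performer(configurations):
    """Build a performer whose behavior is picked per invocation."""
    def performer(sf1, *inputs):          # inputs play i_{1,1}..i_{1,n}
        f = configurations[sf1]           # switch: route inputs to f
        return f(*inputs)                 # select: f's result drives o_1
    return performer

perf = make_mode_performer([lambda a, b: a + b,   # configuration 0
                            lambda a, b: a * b])  # configuration 1
o1_add = perf(0, 3, 4)   # sf1 = 0 selects addition
o1_mul = perf(1, 3, 4)   # sf1 = 1 selects multiplication
```

Note that, as in the text, every configuration exists simultaneously; only the selection changes at runtime.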

c: Variable Computation Cases: Function Reconfiguration
Up to this point, we have considered homogeneous platform architecture models, following either only entity static or only entity adaptable. Now, Fig. 9a illustrates a heterogeneous model where the worker is attached to a reconfigurable device. In this particular case, the steward reconfigures the worker with a new function (i.e. a new configuration) instead of just pointing out the function's address. This is known as the function reconfiguration scheme (Fig. 9c). Here, just one function, i.e. just one configuration, is fetched at runtime from the configuration repository and loaded, using partial reconfiguration, into a worker to be executed. Fig. 9b describes the function reconfiguration scheme with both the handler and the steward attached to a reconfigurable device, besides the worker itself. In this case, it is again a homogeneous platform architecture model, however following the variable computation case based on function reconfiguration.
In Fig. 9c, a configuration is fetched from the pool based on the signal sf_1. The input interfaces i_{1,1} . . . i_{1,n} and the output interface o_1 are initially connected to the worker, since it acts as a placeholder, according to Definition 2.
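In contrast to mode reconfiguration, function reconfiguration keeps only one configuration loaded at a time. The following Python sketch is illustrative only; Pool, Worker, fetch, and load are hypothetical names, and load stands in for the partial reconfiguration step.

```python
# Hedged sketch of function reconfiguration: exactly one configuration
# is fetched from the repository at runtime and loaded into the worker,
# which acts as a placeholder (Definition 2).

class Pool:
    """Configuration repository addressed by a selection key (sf1)."""
    def __init__(self, configurations):
        self._configs = configurations
    def fetch(self, sf1):
        return self._configs[sf1]

class Worker:
    """Placeholder: holds at most one loaded configuration at a time."""
    def __init__(self):
        self._loaded = None
    def load(self, configuration):
        # Models partial reconfiguration of the worker's region.
        self._loaded = configuration
    def run(self, *inputs):
        # Inputs/outputs stay wired to the placeholder (i_{1,j}, o_1).
        return self._loaded(*inputs)

pool = Pool({"enc": lambda x, k: x + k, "dec": lambda x, k: x - k})
w = Worker()
w.load(pool.fetch("enc")); enc = w.run(5, 3)    # encode
w.load(pool.fetch("dec")); dec = w.run(enc, 3)  # decode, back to 5
```

The key contrast with the mode case: here the worker's contents change between runs, while its interfaces remain fixed.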

3) Classification System
The platform-domain ontology introduced in Section III-B1 is used to derive and build the classification system related to the platform part, as also depicted in Fig. 1. Considering the proposed meaning of the virtual platform model P and the concepts of the platform-domain ontology, Table 2 shows the virtual platform model's functional blocks along with their corresponding classifications. It follows the same approach as the application model classification system (Section III-A5).

The mapping rules step uses this resulting classification system for the platform side. In other words, once one designs a given system supported by the virtual platform model P, the functional block class is given by our classification in Table 2.

Figure 9: Function reconfiguration: the configuration repository indicates the desired configuration, which is loaded into the worker and executed. This is made possible by the hardware partial reconfiguration technique present in modern FPGA-based devices.

C. MAPPING RULES
The general mapping rule m is given by (4), where one class of elements in the application model Λ (summarized in Table 1) maps to one class of functional blocks in the virtual platform model P (summarized in Table 2). Thus, the mapping result is equivalent to the virtual implementation:

m : (Λ → P) ≡ I     (4)

As previously mentioned, we use ontologies to restrict both the application domain and the implementation domain to the set of needed components. Given this, we derive correct and efficient mapping rules m_{i=1...7}. Note that our mapping rules are general enough to apply to any computer system, since we consider its minimum basic parts, namely processing, communication, and storage. The model follows the von Neumann machine [34] in terms of minimum computer parts, but advances beyond it in software and hardware architecture, since runtime reconfiguration is among its central characteristics.
The defined rules are stated in Table 3. The column labeled "m→" denotes the constraint used to define when each rule applies.
Notice that some rules give more than one option to map the application model element classes to the functional block classes in the virtual platform model. As mentioned before, our platform-domain ontology allows for different possible combinations to describe a virtual platform following definitions from just entity static, just entity adaptable, or both entities forming a heterogeneous virtual platform. In this sense, the mapping rules follow the constraint shown in (5).
Another constraint is defined when targeting a heterogeneous device, depending on the function's configuration, which can be software- or hardware-based, as described in (6). When targeting a reconfigurable or heterogeneous device, we assume the function configuration will be hardware-based, as it represents the most customized solution in terms of power consumption and performance.
According to (6), when a software-based configuration is required, rules 1 to 6 state the mapping to the entity static S. Conversely, when a hardware-based configuration is required, rules 1 to 6 state the mapping to either the entity adaptable A or the mixed case M. This case is also reflected in Table 3.
There is a special case for the class Path when considering the use of heterogeneous devices. This general path precedence rule supersedes other path rules as described next.

a: General Path Precedence Rule
When a class from either Path.Control or Path.Data.Homogeneous/Hybrid connects two application model elements that will in turn be mapped to two distinct platform model functional block classes, e.g. Static.Interface and Adaptable.InterfaceX, the class Path will be mapped partially to the class Static.Interface and partially to Adaptable.InterfaceX. This is the case illustrated in Fig. 9a, where a "data bus" interconnects the handler and the worker, and a "control bus" interconnects the steward and the worker.
Next, the rules are described.

1) Procedure.Controller Rule
The mapping of elements from the class controller to either the class manager or managerX also creates a new functional block in the virtual platform model from the class dispatcher or dispatcherX, respectively. The decision between the entities static S and adaptable A follows the constraint defined in (5). In other words, the class Procedure.Controller will be mapped to Static.ControlUnit.Manager, and will create another functional block from the class Static.ControlUnit.Dispatcher, when entity static S or heterogeneous devices are in place (i.e. mixed M).

2) Procedure.Executor.Variable Rule
Elements from the class variable map to the classes performer, mode, or function. These three options follow the constraint defined in (5), while the decision between mode and function follows constraint (6).

3) Procedure.Executor.Fixed Rule
The mapping of elements from the class fixed to the classes performer or once follows (5), which depends on targeting only programmable, only reconfigurable, or heterogeneous devices. In other words, the class Procedure.Executor.Fixed will be mapped to Adaptable.PerformerX.Once when entity adaptable A or heterogeneous devices are in place (i.e. M).

4) Path.Control Rule
The mapping of elements from class control to either class Interface.Control or InterfaceX.ControlX follows (5).

5) Path.Data.Homogeneous Rule
The mapping of elements from class homogeneous to either class Interface.Data or InterfaceX.DataX follows (5).

6) Path.Data.Hybrid Rule
The mapping of elements from class hybrid to either class Interface.Data or InterfaceX.DataX follows (5).

7) Value.Function Rule
Functions are mapped to software-or hardware-based configurations, depending on the constraint given by (6).
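To make the rule mechanics concrete, the fragment below encodes a subset of rules 1-7 as a lookup table. This is a hypothetical encoding, not a reproduction of Table 3: the dictionary RULES, the function apply_rule, and the device keys "S"/"A" are assumptions; only the class names themselves come from the two ontologies.

```python
# Hypothetical lookup-table encoding of some mapping rules: an
# application-model class plus the device kind chosen by constraint (5)
# (static S or adaptable A) yields platform functional-block classes.
# Constraint (6) would further pick Mode vs. Function for rule 2.

RULES = {
    "Procedure.Controller": {             # rule 1: also creates a dispatcher
        "S": ["Static.ControlUnit.Manager",
              "Static.ControlUnit.Dispatcher"],
        "A": ["Adaptable.ControlUnitX.ManagerX",
              "Adaptable.ControlUnitX.DispatcherX"],
    },
    "Procedure.Executor.Variable": {      # rule 2
        "S": ["Static.Performer"],
        "A": ["Adaptable.PerformerX.Mode"],   # or .Function, per (6)
    },
    "Procedure.Executor.Fixed": {         # rule 3
        "S": ["Static.Performer"],
        "A": ["Adaptable.PerformerX.Once"],
    },
    "Path.Control": {                     # rule 4
        "S": ["Static.Interface.Control"],
        "A": ["Adaptable.InterfaceX.ControlX"],
    },
}

def apply_rule(app_class, device):
    """Map one application-model class to platform functional blocks."""
    return RULES[app_class][device]

blocks = apply_rule("Procedure.Controller", "S")
```

Note how rule 1's side effect (creating a dispatcher alongside the manager) is naturally captured by mapping one class to a list of blocks.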

IV. ILLUSTRATIVE EXAMPLES
Next, we introduce two examples to demonstrate the applicability of our proposal. The first (Section IV-A) is an encoder and decoder system presented in tutorial style. The second (Section IV-B) is an avionics attitude estimation system, which shows complete system modeling and how our approach manages both complexity and scalability through abstraction.

A. ENCODER AND DECODER SYSTEM
We adapted and implemented a model introduced in [32] as a representative and qualitative tutorial example to demonstrate the applicability of the proposed classification system and mapping rules in our design flow. This model presents components such as cipher and decipher which can perform arithmetic operations. Fig. 10 illustrates our runtime reconfigurable embedded application model Λ Example based on the synchronous MoC.
The application model example shows two runtime reconfigurable processes based on the function placeholder definition (Section III-A4): cipher and decipher.
The signal s_input is a plain-text to be encoded; at each event cycle k, s_input[k] is encoded with an encoding function carried by the signal s_encF. The signal s_encF comprises the encoding functions along with their respective keys, and the signal s_decF comprises the decoding functions along with their respective keys. The signal s_enc comprises the encoded data, and the signal s_output the decoded data.
Equations (7) to (10) define the processes in this example model. Both cipher and decipher are runtime reconfigurable processes, of class Procedure.Executor.Variable according to the application-domain ontology (Table 1), since they receive a function that truly changes their behavior, i.e. the data-processing function changes over time k.
A runtime reconfigurable process may have different behavior according to the function it is currently executing. Notice that cipherGen and decipherGen could also be considered runtime reconfigurable processes, since they combine regular data input with a function to create a new function as output. However, in this case, their own behavior does not change. Therefore, cipherGen and decipherGen belong to the class Procedure.Controller, because they are indeed sending functions to other actors.
The application model functional code implemented in Haskell/ForSyDe using the synchronous MoC is shown in Listing 1. This is also available in [35]. Listing 1 exposes the annotated classes of the model elements according to the application-domain ontology, as in Table 1. The synchronous function placeholder fphSY is defined with the comb2SY ForSyDe process constructor using, as its behavior, Haskell's function application operator ($), which is equivalent to the apply function in (1) with m = n = 1.
The runtime reconfigurable processes (class Procedure.Executor.Variable) cipher and decipher receive the reconfiguration signals s_encF and s_decF (class Path.Control). In this example, the reconfiguration decision logic is based on a static schedule policy, i.e. a user-defined algorithm that gives the sequence fadd, fsub, fadd, fsub, fadd (class Value.Function).
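The synchronous semantics of the placeholder can be illustrated without ForSyDe. The sketch below is a Python analogue of Listing 1 (the actual listing is Haskell/ForSyDe); fph_sy, fadd, fsub, and the key value are illustrative stand-ins, not the paper's definitions.

```python
# Python analogue of the synchronous function placeholder: at each
# event cycle k, the function carried on the control signal is applied
# to the matching data event (apply with m = n = 1).

def fph_sy(funcs, data):
    """Synchronous function placeholder: pair function and data events."""
    return [f(x) for f, x in zip(funcs, data)]

key = 3
fadd = lambda x: x + key          # encoding function (with its key)
fsub = lambda x: x - key          # the matching decoding function

# Static schedule policy: the user-defined reconfiguration sequence.
s_encF = [fadd, fsub, fadd, fsub, fadd]
s_decF = [fsub, fadd, fsub, fadd, fsub]

s_input  = [10, 20, 30, 40, 50]          # plain-text signal
s_enc    = fph_sy(s_encF, s_input)       # cipher
s_output = fph_sy(s_decF, s_enc)         # decipher recovers the input
```

Because the decoder's schedule carries the inverse of each encoding function, s_output reproduces s_input event for event.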
Notice also that the elements cipher and decipher from Listing 1 [35] follow the function placeholder's definition. Table 4 shows the element classes of the application model example, as shown in Fig. 10, following the application-domain ontology.

2) Utilization of the Mapping Rules
After modeling the application based on the synchronous MoC, we apply the defined mapping rules to obtain a possible and feasible virtual implementation model. There exist two constraints, (5) and (6), that must be specified in order to apply the rules. To show a comprehensive and representative case, we set the first constraint to use a heterogeneous device. In this case, for the second constraint, function implementations must be hardware-based. Table 5 shows the mapping rules summary according to the constraints as in (11) and (12).

3) Model Elements and Functional Blocks
According to the application of the mapping rules, Table 6 shows the correspondence from the elements of the application model example to the functional blocks in the resulting virtual implementation model example I Example .

4) Resulting Virtual Implementation Model
After the application of the mapping rules, the corresponding view of the partitioned application model is obtained.

Table 5: Mapping rules result, according to (11) and (12).

B. AVIONICS ATTITUDE ESTIMATION SYSTEM
The integrated modular avionics (IMA) architecture became a de-facto standard in the aeronautical industry and represents the state of the art with respect to avionics platforms. It aims to host multiple avionics applications, of different criticality levels, on the same platform instance [36]. The second generation of IMA (IMA-2G) is discussed in [37], where the authors present some of its future requirements and challenges, including reconfiguration capabilities. However, the reconfiguration addressed there is basically in terms of software; our design advances the concept by also addressing hardware reconfiguration possibilities (as in Section III-A4). Singh et al. [38] assert that conventional testing methodologies for software can be expensive and ineffective. Thus, they use formal verification techniques such as model checking to verify embedded software in the context of modern avionics systems. There is a guideline in the aeronautical industry for the use of formal methods, DO-333 "Formal Methods: DO-178C Supplement" [39]. As previously mentioned, our proposal starts with formal models of computation, which leads to a correct-by-construction system design flow [21] for RTR embedded systems with both software and hardware.
Given this context, the second example we introduce is an avionics attitude estimation system (AAES) with triple modular redundancy (TMR) and hardware runtime reconfiguration. It shows complete system modeling and how the proposed approach manages both complexity and scalability through abstraction.
The AAES is responsible for getting data from three different inertial measurement units (IMU) to compute the airplane's attitude (roll angle φ, pitch angle θ, yaw angle ψ) along with linear velocity and linear position.
The computations are performed in terms of quaternions to avoid the gimbal lock phenomenon (also known as the Euler angle singularity), i.e. a pitch angle near ±90 degrees. After that, the Euler angles are obtained. The aircraft equations of motion used in our application model are supported by [40].
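For concreteness, the quaternion-to-Euler step can be sketched as below. This uses the common aerospace (Z-Y-X) convention and is a standard formulation; it may differ in sign or ordering conventions from the paper's equations (19) to (21), and the function name quat2euler merely mirrors the process name in Fig. 13.

```python
# Standard quaternion-to-Euler conversion (Z-Y-X aerospace convention).
# The asin argument is clamped to guard against numerical drift near
# the +-90 degree pitch singularity mentioned in the text.
import math

def quat2euler(qr, qx, qy, qz):
    """Return (roll phi, pitch theta, yaw psi) in radians
    from a unit quaternion qr + qx*i + qy*j + qz*k."""
    phi = math.atan2(2 * (qr * qx + qy * qz),
                     1 - 2 * (qx * qx + qy * qy))
    s = max(-1.0, min(1.0, 2 * (qr * qy - qz * qx)))
    theta = math.asin(s)
    psi = math.atan2(2 * (qr * qz + qx * qy),
                     1 - 2 * (qy * qy + qz * qz))
    return phi, theta, psi

# Identity quaternion corresponds to a level attitude.
angles = quat2euler(1.0, 0.0, 0.0, 0.0)
```

The clamp on the asin argument is the usual numerical safeguard; without it, rounding can push the argument slightly outside [-1, 1].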
The final estimated attitude, i.e. the voted attitude, is the result of a voting process over the redundant parts, i.e. the angular rate and linear acceleration measurements. Should any computation be considered wrong, its data is assumed failed or invalid. Besides, a hardware reconfiguration can take place to reconfigure the faulty processor as a parallel mitigation action.
Here, the AAES is ultimately responsible for supplying the voted attitude, namely angular positions, linear velocity, and linear position to the flight control computer (FCC) system. FCC runs the aircraft control laws that actuate the flight control surfaces. According to DO-178 [41], a failure in FCC is considered catastrophic, namely the most critical level. As a consequence, AAES is also critical (i.e. catastrophic) since it provides input navigation data to FCC.
On the other hand, the AAES can also feed the in-flight entertainment (IFE) system with navigation data to be shown on the passengers' moving map screens. In this case, an IFE failure is considered minor, according to DO-178.
Our AAES model providing data to both FCC and IFE systems is a typical IMA scenario. However, here we also introduce runtime partial hardware reconfiguration.
Next, we follow our design flow considering: 1) the AAES application modeling; 2) obtaining the classes of the AAES application model elements, as defined by the application-domain ontology; 3) the utilization of the defined mapping rules; 4) obtaining the correlation between the model elements and the functional blocks, as defined by the platform-domain ontology; and 5) finally, getting the resulting virtual implementation model. Table 7 shows the signals used in the AAES application model.

1) AAES Application Model
Figure 13: AAES application model illustrating the signals and processes as a network of concurrent processes. This is the core functionality of the AAES. Here, quatern is short for the quaternion integration, quat2euler stands for the transformation from quaternion to Euler angles, insftrans is short for the transformation to the inertial frame, Tustin is the Tustin integration, and aBodyZNz is the computation of the load factor Nz[k].

Table 7 (signals used in the AAES application model, reconstructed excerpt):
- ω_ix, ω_iy, ω_iz: angular rates in the axes x, y, z provided by the gyroscopes within IMU i ∈ {0, 1, 2}
- →ω_i: tuple of (ω_ix, ω_iy, ω_iz) from IMU number i ∈ {0, 1, 2}
- a_ix, a_iy, a_iz: linear accelerations in the axes x, y, z provided by the accelerometers within IMU i ∈ {0, 1, 2}
- →ωv[k]: tuple of voted (ωv_x, ωv_y, ωv_z), at time instant k, from one of the IMUs
- →av[k]: tuple of voted (av_x, av_y, av_z), at time instant k, from one of the IMUs
- estimated attitude quaternion qr + qx i + qy j + qz k, at time instant k, just after the quaternion integration
- roll, pitch, and yaw Euler angles, at time instant k, after the transformation from quaternion to Euler angles
- estimated linear acceleration, at time instant k, considering the inertial frame
- estimated linear velocity and estimated linear position, at time instant k, just after their respective integrations
- Nz: load factor

Fig. 13 depicts the core of the AAES application model by using the synchronous MoC. It considers the input of three different IMUs to form a triple modular redundancy. An IMU is a micro-electro-mechanical system (MEMS) composed basically of gyroscopes and accelerometers. The core of the AAES presents two voters: one dedicated to the gyroscopes' data, as in (13), and the other to the accelerometers' data, as in (14). The voted angular rates (i.e. gyroscope data) enter the quaternion integration, as defined in (15), and are next transformed to Euler angles, as in (19) to (21).
As illustrated at the bottom of Fig. 13, the voted linear accelerations (i.e. accelerometer data) and the estimated attitude quaternion enter the transformation to the inertial frame reference, as in (16). Finally, the estimated linear acceleration is integrated once, resulting in the estimated linear velocity, and twice, producing the estimated linear position, as in (17) and (18), respectively. The load factor is computed as in (22).
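The Tustin (trapezoidal) integration used to obtain velocity from acceleration, and position from velocity, reduces to a one-line update. The sketch below is illustrative; the function name tustin_step and the sample values are assumptions, with T denoting the sampling time and y[k-1] the state kept between evaluation cycles.

```python
# Minimal sketch of one Tustin (trapezoidal) integration step:
#   y[k] = y[k-1] + (T/2) * (u[k] + u[k-1])
# The previous output and input are the states stored between cycles.

def tustin_step(y_prev, u_prev, u_curr, T):
    """Advance the trapezoidal integrator by one sampling period T."""
    return y_prev + (T / 2.0) * (u_curr + u_prev)

# Integrating a constant acceleration of 2 m/s^2 over one 0.1 s step
# yields a velocity increment of 0.2 m/s.
v = tustin_step(0.0, 2.0, 2.0, 0.1)
```

Applying the same step to the velocity signal gives the position, matching the "integrated once ... and twice" structure above.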
Here, T stands for the sampling time, g for the Earth's gravity acceleration, and q̄ for the quaternion conjugate of q.

The function voter(→x_0, →x_1, →x_2, →x_l) should return:
• the average →x_m3 of the vectors →x_0, →x_1, and →x_2, if the distance between →x_m3 and each vector is no bigger than a chosen threshold; else
• the average →x_m2 of the two vectors that are closer to →x_m3, if the distance between these two vectors is no bigger than twice the threshold; else
• the average →x_m1 of →x_l and its closest vector.
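The three-level voter above can be transcribed almost directly into code. The sketch below is a hedged transcription: the Euclidean distance, the threshold parameter, and the helper names dist and mean are assumptions about details the text leaves to the designer.

```python
# Illustrative transcription of the TMR voter: three agreement levels,
# falling back to the last voted value x_l when the sensors disagree.

def dist(a, b):
    """Euclidean distance between two vectors (an assumed metric)."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def mean(*vecs):
    """Component-wise average of the given vectors."""
    n = len(vecs)
    return tuple(sum(c) / n for c in zip(*vecs))

def voter(x0, x1, x2, xl, threshold):
    xm3 = mean(x0, x1, x2)
    # Level 1: all three sensors agree with the three-way average.
    if all(dist(xm3, v) <= threshold for v in (x0, x1, x2)):
        return xm3
    # Level 2: the two vectors closest to xm3, if mutually consistent.
    a, b = sorted((x0, x1, x2), key=lambda v: dist(xm3, v))[:2]
    if dist(a, b) <= 2 * threshold:
        return mean(a, b)
    # Level 3: fall back to the last voted value and its closest vector.
    closest = min((x0, x1, x2), key=lambda v: dist(xl, v))
    return mean(xl, closest)

# One sensor (5.0) is an outlier, so the two-out-of-three level fires.
v = voter((1.0, 0.0), (1.1, 0.0), (5.0, 0.0), (1.0, 0.0), 0.5)
```

With these inputs the outlier is discarded and the voter averages the two agreeing measurements.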
Equations using both quaternions and vectors, such as (15) and (16), treat a vector as a quaternion with zero real part. As such, the quaternion exponential used in (15) is defined as in (23).
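Since (23) is not reproduced in this excerpt, we note the standard closed form for the exponential of a pure quaternion (zero real part), which is what (15) requires for vector arguments; we assume it matches the paper's (23):

```latex
% Exponential of a pure quaternion v = 0 + v_x i + v_y j + v_z k:
e^{v} \;=\; \cos\lVert\vec{v}\rVert
      \;+\; \frac{\vec{v}}{\lVert\vec{v}\rVert}\,\sin\lVert\vec{v}\rVert ,
\qquad \vec{v} = (v_x,\, v_y,\, v_z) .
```

This is the quaternion analogue of Euler's formula, with the unit vector v/||v|| playing the role of the imaginary unit.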
Note that equations (13) to (15), (17), and (18) depend on values from the previous evaluation cycle, namely k − 1. These variables are considered states of the system and, after their computation, must be stored in memory to be used in the next evaluation cycle. The AAES core must therefore have a memory large enough to store the states between evaluation cycles. The AAES core is formally described as an actor that receives functions on one control input port, that is, an actor with the runtime reconfiguration property. To model this, we consider the runtime reconfigurable processor (RTRP) model based on [42]. The RTRP is modeled as a function placeholder with configuration and data memories, as shown in Fig. 14.

Figure 14: Runtime reconfigurable processor model.

3) AAES Mapping Rules Utilization
After modeling the AAES application based on the SY MoC, we apply the defined mapping rules to obtain a possible and feasible virtual implementation model of the complete AAES.
In the same way as in the previous example, two constraints, (5) and (6), must be specified before applying the rules. To demonstrate a realistic scenario, we set the first constraint to use a heterogeneous device. In this case, for the second constraint, function implementations must be hardware-based.
The remaining element classes include signals classified as Path.Data.Homogeneous; cc_out and cc_in as Path.Data.Hybrid; the AAES core (as in Fig. 13) as Value.Function; and regular data (i.e. numbers from the computations) as Value.Info.

Table 9 shows the mapping rules summary according to the constraints as in (26) and (27).

4) Model Elements and Functional Blocks
According to the application of the mapping rules, Table 10 shows the correspondence from the elements of the AAES application model to the functional blocks in the resulting AAES virtual implementation model.

Figure 15: AAES complete application model illustrating the signals and processes as a network of concurrent processes.
Here, we handle the complexity of the system by abstracting the AAES core function into actors receiving a signal containing function values, namely core i with 0 ≤ i ≤ n. Finally, the FCC and IFE are shown as receivers of the voted data.

5) AAES Resulting Virtual Implementation Model
Finally, after the application of the mapping rules, the corresponding view of the partitioned application model is shown in Fig. 16, and the virtual implementation model of AAES is illustrated in Fig. 17.

V. DISCUSSION

a: Generalization of the Runtime Reconfiguration Concept
We formulated the requirements so that a model of computation can describe a semantic notion of runtime reconfiguration. This is enabled by the principle of functions as events and the functional programming paradigm. The function placeholder definition (Section III-A4) can be used in an application model to describe a runtime reconfigurable process. These processes can have their functions changed through full or partial reconfiguration in the virtual implementation model. Notice that this concept is general, and therefore applicable to different models of computation. In other words, the runtime reconfiguration concept as defined here is orthogonal to the models of computation.

b: Domain Ontologies
The development of two domain ontologies enabled the definition of a classification system. The first one (Section III-A1) defines the classes for the elements in the application model. The second one (Section III-B1) defines the classes for the functional blocks in the virtual platform model. These ontologies are general enough to serve as a classification system for most models of computation and reconfigurable architectures and platforms.

c: A Set of Mapping Rules
A set of unambiguous and well-defined mapping rules was developed along with two constraints (Section III-C). By systematically applying these rules to the high-level application Λ and virtual platform P models, our approach leads to a feasible virtual implementation model I, as stated in (4). Those two constraints were added to allow for different possible organizations in terms of architectures concerning heterogeneous systems, including hard processors and reconfigurable logic. They also account for a possible trade-off between power consumption and performance.

Figure 17: Virtual implementation model instance AAES resulting from the application of the mapping rules and considering the two constraints (26) and (27).

d: Representative Examples
In the encoder/decoder system example, we showed an application modeled using the synchronous model of computation. After fixing the constraints, we applied the defined mapping rules to this application instance, and the result was a feasible virtual implementation model.
In the second example, we addressed a much more complex scenario involving safety-critical systems, i.e. an avionics attitude estimation system in the integrated modular avionics context. Through that modeling, we showed how to model a larger system (i.e. scalability) and handle complexity within abstraction layers. In that case, the functions present in the AAES core (as in Fig. 13) are carried as a single function from the perspective of the complete AAES application model (as in Fig. 15).
Although not covered by the example applications, other models of computation such as dataflows, e.g. synchronous dataflow or scenario-aware dataflow, are also encompassed by the function placeholder Definition 2. Therefore, these MoCs have mapping rules equivalent to the ones shown for the synchronous MoC, making our design flow general enough to model systems from other areas.
e: Guidelines to a Platform-based Design: The Next Step

The proposed methodology relies on a high level of abstraction and formal models of computation. The design starts with the application being modeled based on a given MoC according to the application's domain, for instance, synchronous reactive or dataflow.
Considering the model simulation still in the specification phase, i.e. an executable specification, the formalism of MoCs can be captured by a wide range of computer programming languages from different programming paradigms such as object-oriented programming (OOP), imperative, and functional, either interpreted or compiled.
When following the concepts of OOP [44], there is the object (i.e. an instance of an OOP class) with data and methods. Thus, the application model element actor (the application-domain ontology's classes Procedure.Controller and Procedure.Executor.Fixed) can be implemented by using an object. Notice that there may be a limitation regarding an OOP equivalent of Procedure.Executor.Variable, considering its semantics: object polymorphism is the closest approximation, but it does not cover the essence of Procedure.Executor.Variable. The element signal (classes Path.Control, Path.Data.Homogeneous, Path.Data.Hybrid) can be captured through relationships like dependency and association, for instance. The element function (class Value.Function) can be implemented by using an object's method. In this specific case, both the unified modeling language (UML) 2 and the systems modeling language (SysML) can aid the modeling activity [17], [45].
When using imperative programming languages, like Python or C++, the application model element actor (the application-domain ontology's classes Procedure.Controller, Procedure.Executor.Variable, Procedure.Executor.Fixed) can be implemented by using a task. The element signal (classes Path.Control, Path.Data.Homogeneous, Path.Data.Hybrid) can be implemented with a message queue or a pipe, for example. The element function (class Value.Function) can be implemented by using a software function along with the function-pointer technique; this is useful when the programming language does not support higher-order functions. Note, however, that functions are not pure in the imperative paradigm, since they are likely to produce side effects.
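The imperative-style guideline above can be sketched concretely: an actor as a task (thread), a signal as a message queue, and functions passed by reference (Python's first-class functions stand in for C-style function pointers). The queue names and executor_task are illustrative assumptions.

```python
# Sketch of the imperative guideline: a task consumes a control queue
# carrying (function, datum) pairs and emits results on a data queue.
import queue
import threading

s_ctrl = queue.Queue()   # plays Path.Control: carries function references
s_out = queue.Queue()    # plays Path.Data: carries computed values

def executor_task(n_events):
    """Actor as a task: apply whatever function arrives with the data."""
    for _ in range(n_events):
        f, x = s_ctrl.get()      # receive configuration plus datum
        s_out.put(f(x))          # apply the currently received function

t = threading.Thread(target=executor_task, args=(2,))
t.start()
s_ctrl.put((lambda x: x + 1, 10))   # first "configuration"
s_ctrl.put((lambda x: x * 2, 10))   # second "configuration"
t.join()
results = [s_out.get(), s_out.get()]
```

As the text notes, the limitation of this style is purity: nothing prevents the queued functions from producing side effects.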
We argue that the semantics of a MoC, as described in Section III-A2, is better preserved when using the functional paradigm. In this sense, the model element actor is implemented as a process, and the element signal/communication paths as a signal from the functional language. Functions are simply regular data carried by signals, as the functional paradigm supports higher-order functions, and consequently the function placeholder definition (Section III-A4).
In this case, a physical implementation of our virtual implementation model is based on a CPU, GPU, or any other fixed hardware platform, following the platform-domain ontology's classes from entity static. The first five rows of Table 11 show a possible mapping guideline from the platform-domain ontology's classes to a physical platform based only on fixed processing units. These are feasible and concrete possibilities.
Conversely, when physical hardware supporting runtime reconfiguration is at hand, the mapping from the application model to the virtual platform model relies on the platform-domain ontology's classes under the entity Adaptable.
In this way, classes Adaptable.InterfaceX.DataX and Adaptable.InterfaceX.ControlX can be physically (i.e. concretely) implemented with wires and buses in a modern FPGA-based device. Functional blocks such as the worker (classes Adaptable.PerformerX.Once, Adaptable.PerformerX.Mode, and Adaptable.PerformerX.Function) can be implemented in a hardware description language (HDL) and become a runtime reconfigurable partition inside the FPGA area. Blocks like the handler (classes Adaptable.ControlUnitX.ManagerX and Adaptable.ControlUnitX.DispatcherX) can also be implemented in HDL, as illustrated in Fig. 9b, or even in software, as shown in Fig. 9a, where the handler follows classes Static.ControlUnit.Manager and Static.ControlUnit.Dispatcher. Table 11 also shows these cases.

Table 11: Final mapping fm guidelines to the physical implementation. The expression "RTL/HDL reconfigurable part, hardware partition" means that a hardware-based function (i.e. circuit) can be implemented in various ways, including at the RTL abstraction level and as HDL code; in turn, that hardware-based function can be placed into a location allowing for runtime full and partial reconfiguration, as required.

Notice that our virtual implementation model is flexible enough to allow several possibilities when it comes to physical implementations on different hardware architectures. In this sense, the high-level abstraction makes our models platform-agnostic: they do not depend on any specific hardware technology, yet they can be concretely (i.e. physically) implemented on a variety of commercial off-the-shelf platforms.
The final mapping from the virtual platform to a physical platform follows expression (29), and Table 11 gives the mapping guidelines to the final implementation:

m : (Λ → P) ≡ I    (28)

fm : I → Physical Implementation    (29)

where fm is short for final mapping.
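The guideline character of fm can be illustrated as a simple lookup from platform-domain ontology classes to physical targets, in the spirit of Table 11. This is a hypothetical sketch: the dictionary `FM_GUIDELINES` and its target strings are illustrative summaries of the guidelines, not a reproduction of the full table.

```python
# Hypothetical sketch: the final mapping fm as a lookup from platform-domain
# ontology classes to physical-implementation guidelines (after Table 11).
# The target strings are illustrative summaries, not the exhaustive table.

FM_GUIDELINES = {
    "Static.ControlUnit.Manager":    "software task on a fixed processing unit",
    "Static.ControlUnit.Dispatcher": "software task on a fixed processing unit",
    "Adaptable.InterfaceX.DataX":    "wires/buses on FPGA fabric",
    "Adaptable.InterfaceX.ControlX": "wires/buses on FPGA fabric",
    "Adaptable.PerformerX.Function": "RTL/HDL reconfigurable part, hardware partition",
}

def fm(platform_class):
    """Final mapping fm: virtual implementation class -> physical target."""
    return FM_GUIDELINES[platform_class]

print(fm("Adaptable.PerformerX.Function"))
# RTL/HDL reconfigurable part, hardware partition
```

A table-driven fm of this kind keeps the virtual implementation model platform-agnostic: retargeting to a different commercial off-the-shelf platform amounts to swapping the guideline table, not the model.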

F. Key Performance Indicators
Our approach enhances the following qualitative key performance indicators (KPI):
1) Hardware area reusability - assuming that not all of the system's functions must be available throughout the system's entire operation, and enabled by the generalization of the runtime reconfiguration concept (Section III-A4), the same hardware area can be reused in a mutually exclusive time fashion. In turn, this also helps to reduce single-event upsets (SEU) due to less exposed hardware area; for example, electronic devices aboard commercial airplanes can face SEUs during flight.
2) Reduced testing needs - this is due to the presence of formalism already in the specification step, i.e. formal models of computation contribute to correct-by-construction design.
3) Taming of complexity - even the most complex system can benefit from our approach, since we use different levels of abstraction to focus on the application's functions (i.e. application model) and the platform's functional blocks (i.e. virtual platform model), hiding away implementation details that are taken into account in later design steps.
4) Clear separation of concerns - we clearly separate the application model from the platform model and enable the mapping of different virtual platforms onto real physical platforms.

VI. CONCLUSION
This paper introduced a classification system derived from application- and platform-domain ontologies. The ontologies were used to restrict both the application domain and the implementation domain to the set of necessary components, so that correct and efficient mapping rules could be created. We suggested a design methodology considering an application model that admits runtime reconfigurable behavior, described by formal models of computation, and a scalable virtual platform model, described by functional blocks. When a set of well-defined mapping rules is applied to these models, the result is a feasible virtual implementation model. Functional programming is also used here as an elegant and convenient way to simulate, i.e. execute, the application model already at the design entry stage.
The novelty introduced in this research lies in combining suitable formal models of computation, a functional modeling language, and two domain ontologies to create a systematic system design flow from an abstract executable system model into a virtual implementation targeting a runtime reconfigurable architecture using well-defined mapping rules.
We demonstrated the potential, applicability, and scalability of our classification system and mapping rules in two runtime reconfigurable embedded system examples: a tutorial-oriented encoder/decoder system, and a larger and more complex system, i.e. an avionics attitude estimation system.
As future work, we intend to automate the application of the mapping rules based on the high-level application model and the classification systems, as well as the final mapping from the virtual implementation model to the physical platform implementation.