The N-Dimension Computing Machine Postulate

This paper postulates a novel N-dimension computing machine that operates in an unconventional manner. The postulate aims at solving problems that exist in higher dimensions, where one must re-think the scope of a given problem domain beyond the one-dimension Turing machine that dictates all subsequent problem representation, problem transformation, and algorithmic derivation. Two over-simplified well-known problems, namely, the Traveling Salesman Problem and the Tower of Hanoi, are presented to demonstrate the point. Both synthetic problems are then adapted to solve a real-world project. To realize the postulate in a viable architectural construct, data flow and molecular computers are investigated since they show potential computational power. Unfortunately, both are still confined to working in a one-dimension domain. A biological-like architecture for software systems is therefore proposed in three design aspects: structure, function, and behavior. The contribution of this work is to revamp the traditional Turing computation paradigm into an N-dimension computing machine that is nonetheless simple, straightforward, and implementable with state-of-the-practice hardware and software technologies. Thus, the burden of solving difficult problems can be lessened.


I. INTRODUCTION
The advent of electronic computers has revolutionized a myriad of problem-solving venues, bringing them into a new realm of computation. What was previously solved analytically is now carried out numerically with the help of purpose-built algorithms. Such an undertaking is equivalent to transforming the original problem from one domain to another, a process known as ''mapping''. It is this very mapping that instigates studies and development of approaches, methodologies, and artifacts to manipulate the mapping efficiently and effectively. Researchers are hard at work devising algorithms that map multi-dimension problem spaces onto the von Neumann architecture. This principally rests on execution by a one-dimension Turing machine that may or may not halt, depending on the governing algorithm. During the course of the mapping process, one inherent wicked obstacle is introduced: the infamous algorithmic ''complexity'' problem.
The argument on the complexity of an algorithm concerns determinism, since solution computability falls into either the P or NP classification. The number of steps needed to solve problems of either classification increases rapidly as the problem size becomes large. Well-known examples of P and NP are the Tower of Hanoi and the Traveling Salesman problems, respectively. Thus, this work will look into how we can tackle the problem in a different manner so that (1) it can be set up as close to the inherent characteristics of the problem as possible, and (2) it can be carried out in fewer steps that practically terminate after a reasonable wait.
Let's consider the second aspect first. Perhaps the limitation of this aspect lies in the principal model used to set up and solve these problems, i.e., the Turing machine. It is a one-dimension machine that slowly moves along the tape [11] and may or may not halt. Scientists and engineers try to overcome the speed of this computation process by building high-power computation devices such as supercomputers that can perform 10^12 operations per second. For molecular or DNA manipulations, Adleman estimates 10^20 operations per second [13]. Unfortunately, such a considerable improvement still does not address the second aspect in that it merely speeds up the computations, yet leaves the number of steps to solve the problem unconsidered. This is in part due to the structure of the aforementioned algorithmic complexity problem. A close analogy can be drawn in the area of software development, where accidental complexity and essential complexity together make up software problems. As Brooks succinctly puts it [19]: ''The complexity of software is an essential property, not an accidental one. Hence descriptions of a software entity that abstract away its complexity often abstract away its essence.
Mathematics and the physical sciences made great strides for three centuries by constructing simplified models of complex phenomena, deriving properties from the models, and verifying those properties experimentally. This worked because the complexities ignored in the models were not the essential properties of the phenomena. It does not work when the complexities are the essence.'' As far as the state-of-the-art is concerned, the first aspect remains unexplored by existing research endeavors. No existing method breaks away from transforming problems from their natural domain (of size, say, N dimensions) to solving them on a one-dimension Turing machine. Thus, the N-to-1 transformation from problem to Turing machine and the 1-to-N projection from solution back to problem might not losslessly reconstruct the original problem domain. This is the essential complexity that is the main focus of this work: to transform the problem to solution in an N-to-N fashion and thereby accomplish the second aspect.
This article is organized as follows. Section II describes relevant related work. Section III sets up computation transformation in one-dimension and higher-dimension spaces using a simple drawing analogy; the problem formulation then follows. The postulate is presented in Section IV, along with a real-world case to exemplify its novelty. Section V furnishes the proposed architecture supported by a synthetic example. Some final thoughts are given as the future prospect.

II. RELATED WORK
A number of computer and software architectures have been proposed by researchers to improve algorithmic complexity, hardware, computation manipulation, and so on. These research endeavors employed a variety of methods, operations, control flows, and computation speeds to support the established reference architectures. Two groups of architectures are investigated as forerunning bases since they are naturally and architecturally suitable for the proposed framework, namely, data flow computers and molecular computers.
A. DATA FLOW COMPUTERS
Ever since Dennis and Misunas laid down the groundwork for a data flow processor [28] in order to break away from the sequential processing of the von Neumann model and to avoid memory and processor switching during execution, concurrent processing has become a reachable reality. The first well-known data flow machine came out of Manchester University in 1978 [27], as depicted in Fig. 1. Later, Lerner [20] pointed out a serious flaw of the data flow computer, namely the manipulation of large arrays of data. Patnaik et al. [21] introduced EXMAN, an extended Manchester data flow computer, with improved multiple matching units, arrays and array operations, and parallel execution of loops. Many succeeding research efforts proliferated, ranging from conceptual models of the data flow graph and parallelism detection using dependency graphs and matrices [25], to hardware synthesis of middle-grain parallelism combining data flow computing and the von Neumann structure (function driving scheme) for use in multi-core microcontroller unit design [26] (see Fig. 2).
In view of data flow parallelism, Zehendner and Ungerer [23] proposed three levels to be exploited: task, block and instruction, and subinstruction. Despite tireless efforts by researchers to innovate better and faster data flow computers [2]-[5], major issues in sensitive operations such as language and data dependencies, token matching, resource management, and well-formed parallelism [6] still persisted.
In short, data flow architecture deals only with (1) data values, not data addresses that limit concurrent data value retrieval, and (2) instruction execution that does not depend on an instruction counter (or program counter) but commences when all the required input data values are present; the output values are in turn sent to other instructions that need these values [24]. This apparently departs from traditional von Neumann control flow to attain maximal parallelism [22]. These two principles of parallelism advocated by the data flow computer will be adapted to the design of the N-dimension computing machine.

B. MOLECULAR COMPUTERS
Recent research on molecular computers has made significant progress, as evidenced by the abundance of publications on DNA computing [8]-[12]. This new computing paradigm surpasses conventional electronic computers by an order of magnitude in terms of speed [13]. Moreover, their wealth of unique encoding representations makes them ideal for very large storage capacity [8], [13].
Another strong characteristic of biological life forms is their autonomy. A uni-cellular life form is a good example to consider: it has a simple structure along with self-survival and reproduction abilities. The marvel of biological constructs that are naturally simple yet powerful has been exploited by many researchers to invent the next generation of computing machines. Table 1 summarizes some architectural characteristics of these biological-like constructs.
All are ideal for 1-D computing machines. They take advantage of complex DNA structural patterns to denote different encodings (qubits, words, memory strands, bases) that enrich the data representations to be processed. These data, along with the minute molecular computer itself, are crammed into one execution process, which cannot be achieved by conventional electronic computers owing to their physical limitations and the memory wall problem.
Powerful as it is, this molecular computing paradigm still falls short of the first aspect by the same argument stated earlier. Besides, molecular computers are complicated and difficult to build. Hence, this study sets forth to introduce a novel architecture that can be realized with existing hardware so as to accommodate the postulate. Details are elucidated in the sections that follow.

III. COMPUTATION SPACE TRANSFORMATION
Before establishing the proposed work, let's look at a problem domain in a higher-dimension solution space. Consider the classical drawing problem involving an Eulerian circuit in Fig. 3. All lines in the planar graph cannot be drawn continuously without lifting the pen or drawing over certain line(s); e.g., line 7 cannot be continuously drawn following line 5 without repeating line 2, i.e., 1-2-3-3-4-5-2-7. This is obvious from the Eulerian circuit property, since all vertices of the graph have odd degree. However, the same problem can be easily solved in a higher-dimension Euclidean space, as depicted in the right graph, i.e., 1-2-3-4-5-6-7-8. This solution can be projected back to the lower-dimension (planar) Euclidean space, where lines 6 and 8 degenerate (hide) behind lines 5 and 1, respectively.
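As a quick concreteness check, Euler's classical criterion can be tested directly in code: a connected graph admits an Eulerian circuit if and only if every vertex has even degree. The edge list below is a hypothetical stand-in, since Fig. 3 is not reproduced here; a minimal sketch follows.

```python
from collections import defaultdict

def has_eulerian_circuit(edges):
    """Euler's theorem: a connected graph has an Eulerian circuit
    iff every vertex has even degree (connectivity assumed)."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return all(d % 2 == 0 for d in degree.values())

# Hypothetical edge list standing in for the planar graph of Fig. 3;
# vertices of odd degree make a one-stroke drawing impossible.
planar_edges = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "d"),
                ("d", "b"), ("b", "e"), ("e", "c")]
print(has_eulerian_circuit(planar_edges))  # False: odd-degree vertices exist
```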
One may contend that certain problems are independent of geometric dimension, e.g., enumeration problems, iterative problems, etc. This argument will be further discussed in the sections that follow.

A. ONE DIMENSION AND HIGHER DIMENSION SPACES
In Euclidean space, a three-dimension position can be addressed by three indicative bases, that is, the (i, j, k) coordinates denoting the element C_{ijk}. This position logically holds an elementary quantity, or a single value, for use in subsequent computations. Such an arrangement facilitates a straightforward architectural scheme that maps the entire logical configuration onto a portion of physical memory. Thus, information storage and retrieval can be performed directly (disregarding processing overhead incurred by virtual memory management, dynamic loading, paging, swapping, and so on that require additional address translation). This solution space paradigm based on Euclidean space organization has been well entrenched ever since the creation of electronic computers. A conventional approach essentially performs an algorithmic transformation of the value stored in C_{ijk} of the underlying problem domain to the solution space. The process is procedurally dictated by a predefined sequence of computations P(x), where x denotes the variable in C_{ijk}, executed linearly on the Turing machine. As a consequence, the computing space is confined to one dimension. Such a limitation instigates one to investigate other potentially viable higher-dimension representations.
Recent research and development in computer graphics have brought about three-dimension representation paradigms, e.g., V_{ijk}, which denotes a voxel at position (i, j, k) in three-dimension Euclidean space. In this work, we denote a voxel in the problem domain having m attributes at position (i, j, k) as V^m_{ijk}, m = 1, 2, ..., N, where V^1_{ijk} = V_{ijk}. One can devise a mapping from the V^m_{ijk} problem domain to a linear Turing machine representing the problem space. Unfortunately, there is no guarantee that the shrunk-down problem domain will be algorithmically solved, i.e., whether the Turing computation will halt.
Two issues that fall out of the above cell-versus-voxel information containment are (1) domain representation and (2) domain transformation. The conventional method runs algorithms that squeeze conceptual schemas from the problem domain (C_{ijk}) onto the designated problem space, M_p, to be linearly executed on the Turing machine, producing the desired output in the solution space S_r, where r denotes the 1-D space. The voxel setting is a different story. The problem domain (V_{ijk}) can be correspondingly fed to an equal-size problem space, M_{ijk}, to be executed on the 3-D machine, producing the desired output in the solution space S_{ijk}, where ijk denotes the 3-D space. Consequently, the need for dimension reduction to accommodate linear computability diminishes. These conceptual schemas are depicted in Fig. 4.
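The difference between the two containment schemes can be sketched in a few lines of Python. The dimensions and the stored value are placeholder assumptions; the point is that the conventional scheme must compute a 1-D address p for every access, whereas the voxel scheme addresses M_{ijk} directly.

```python
# A minimal sketch of the two containment schemes in Fig. 4
# (dimensions and contents are illustrative assumptions).
NI, NJ, NK = 4, 3, 2

# Conventional scheme: C_ijk is flattened onto a linear problem space M_p
# for execution on a 1-D (Turing-style) machine.
def to_linear(i, j, k):
    return (i * NJ + j) * NK + k          # row-major 1-D address p

M_p = [0] * (NI * NJ * NK)
M_p[to_linear(2, 1, 0)] = 42              # store C_210 at its 1-D address

# Voxel scheme: V_ijk is fed to an equal-size 3-D problem space M_ijk,
# so no dimension reduction precedes execution.
M_ijk = [[[0] * NK for _ in range(NJ)] for _ in range(NI)]
M_ijk[2][1][0] = 42                       # store V_210 directly

assert M_p[to_linear(2, 1, 0)] == M_ijk[2][1][0]
```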

B. PROBLEM FORMULATION
This scenario essentially creates an illusion of a simplified solution space that reflects its corresponding complex problem domain by means of an N-dimension computing machine. To exemplify the above prospectus, consider an over-simplified Traveling Salesman Problem (TSP) in Fig. 5, where all weights denote distance (5a) or expense (5b). Some partial solution spaces based on two cities of origin, i.e., A and C, are shown in Table 2 and Table 3, respectively.
In both examples, if they are solved separately, the solutions are the same regardless of where the starting node is, as long as they trace the same route. For example, in Fig. 5(a), ABCEDFA = 29, which is the same as CEDFABC = 29, since they trace the same route but merely change the starting node from A to C. Likewise, in Fig. 5(b), AFECBDA = 28 is the same as CBDAFEC = 28. Incidentally, the optimal solution for each problem becomes 27 units in distance and 27 units in expense. The answers can be computed centrally or in parallel on one- or higher-dimension machines.
In a different scenario, this problem could be combined into one multiple-objective problem: find the optimal path that minimizes both distance and expense in one trip for this TSP. With reference to the above example, we can take node A as the start node in Table 4, since any other starting node would yield the same answer, as shown earlier.
If we choose AFDECBA (1-blue arrow) to obtain the minimal expense of 27 units, we must accept the corresponding distance along AFDECBA of 29 units. By the same token, accepting another solution AFECBDA (2-red arrow) having 27 units of distance will result in the corresponding expense along AFECBDA of 28 units.
Provided that everything is relatively uniform or equally comparable, the optimal solution to the multiple-objective problem would be AFECBDA (2-red arrow), having distance = 27 and expense = 28, for a total cost of 55 units. This is slightly better than starting from the optimal solution of the expense sub-problem, i.e., AFDECBA (1-blue arrow) with expense = 27 units and distance = 29 units, for a total cost of 56 units. An alternate solution, ABCEDFA (4-magenta arrow), also yields the same total cost, i.e., expense = 27 units and distance = 29 units. Nonetheless, the solution ADBCEFA (3-green arrow) is another optimal solution of 55 units. That is to say, in solving the multiple-objective problem, one is not obligated to start from the optimal solution of one sub-problem (either distance or expense) and try to coerce the final optimal result for the multiple-objective problem. This is apparent from the solutions AFDECBA (1-blue arrow) and ABCEDFA (4-magenta arrow), which do not yield the optimal solution.
This multiple-objective problem can be empirically formulated as follows. Let h_i denote the result of metric i on path p_k that traverses the TSP and evaluates the value of the path to obtain the result v_i(h_i), where i = 1, 2, ..., m denotes the metric used and p_k, k = 1, 2, ..., n, denotes the path traversed (n = 10 in the above examples). The optimal result, v_i(p^i_k), is obtained by determining max/min {v_i(h_i)} of the TSP, where h_i = p^i_k. Hence, for the above example, we get h_1 along p_k yielding the optimal result v_1(p^1_k) and h_2 along p_k yielding the optimal result v_2(p^2_k). That is, h_1 [1 = distance] has 6 nodes and 10 paths traversing along p_1 = ADFECBA (or p_2 = ADBCEFA, p_3 = AFEDCBA, p_4 = AFDECBA, ..., p_9 = AFECBDA, p_10 = ABCEFDA) to yield the optimal cost v_1(p^1_2) = v_1(p^1_9) = 27 units. Similarly, h_2 [2 = expense] has 6 nodes and 10 paths traversing along the same p_1, ..., p_10 to yield the optimal cost v_2(p^2_4) = v_2(p^2_7) = 27 units. However, computing the multiple objectives of a one-trip minimal cost means computing h_{1,2} [1, 2] along p_2 = ADBCEFA (3-green arrow) or p_9 = AFECBDA (2-red arrow), which yields the optimal result v_{1,2}(p^{1,2}_2) = v_{1,2}(p^{1,2}_9) = 55 units. This is slightly higher than making two separate trips, each of which has the minimal cost, yielding 27 + 27 = 54 units. The question is whether the savings of making two separate trips are worthwhile once other expenses and time, such as packing the bag, hotel and accommodations, risk of accidents, schedule conflicts in one of the trips, etc., are taken into account.
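To make the enumeration concrete, the following minimal Python sketch computes v_i(h_i) for two metrics over all Hamiltonian tours of a small graph and then combines them for the multiple-objective case. The edge list and weights are illustrative assumptions, since Fig. 5 is not reproduced here; with the paper's actual weights, the same procedure would recover the 27/27/55-unit optima discussed above.

```python
from itertools import permutations

cities = "ABCDEF"
distance = {}   # metric h_1
expense = {}    # metric h_2

def put(metric, u, v, w):
    metric[(u, v)] = metric[(v, u)] = w   # symmetric weights

# Hypothetical edges and weights (assumed, not the paper's Fig. 5 values).
for (u, v, d, e) in [("A","B",5,4), ("A","D",4,5), ("A","F",4,4),
                     ("B","C",4,5), ("B","D",5,4), ("C","E",5,5),
                     ("D","E",4,5), ("D","F",5,4), ("E","F",5,5)]:
    put(distance, u, v, d)
    put(expense, u, v, e)

def tour_cost(metric, tour):
    total = 0
    for leg in zip(tour, tour[1:] + tour[:1]):
        if leg not in metric:
            return None                   # edge absent in the graph
        total += metric[leg]
    return total

def tours(start="A"):                     # fixed start, as in Table 4
    rest = [c for c in cities if c != start]
    for perm in permutations(rest):
        yield (start,) + perm

feasible = [t for t in tours() if tour_cost(distance, t) is not None]
best_d = min(feasible, key=lambda t: tour_cost(distance, t))     # v_1(p^1_k)
best_e = min(feasible, key=lambda t: tour_cost(expense, t))      # v_2(p^2_k)
best_both = min(feasible, key=lambda t: tour_cost(distance, t)
                + tour_cost(expense, t))                         # v_{1,2}
print(best_d, best_e, best_both)
```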
We can add more metrics to find the optimal route for the above TSP in one trip, such as the quantity of merchandise payload that the salesman must deliver during the visit. Given that the merchandise is light, small, and need not be overstocked because demand might not be high (e.g., SIM cards), maximizing this quantity metric makes sense in solving the multiple-objective problem based on distance, expense, and payload. Thus, the formulation of this problem can be set up as follows:

Optimize F = \sum_{i=1}^{m} v_i(h_i), subject to c_i : v_i(h_i) = v_i(p^i_k) + ε_i,

where v_i(p^i_k) denotes the optimal value of v_i(h_i) for the single objective on metric i = 1 (minimize distance), 2 (minimize expense), 3 (maximize quantity), ..., m; c_i denotes constraint equation Eq(i), i = 1, 2, 3, ..., m; p_k denotes the stages in each constraint equation, k = 1, 2, 3, ..., n; and ε_i denotes the slack (adjustment for the global optimum) of each constraint equation i that deviates from its single-objective (local) optimum.
Under the cell space containment analogy, simultaneously solving h_{i_1,i_2,...,i_m} [i_1, i_2, ..., i_m] of the above formulation requires (m+1)-D computations. The first m-D constraint equations run the m metrics. The 1-D (m+1)-th constraint equation runs the objective function to monitor and adjust the slack values while all constraint equations are computing their local optimal or near-optimal values. This cycle requires (m+1) * n iterations. Some equations may stop at v_i(p^i_k) with ε_i = 0, whereas others may not. Path AFECBDA (2-red arrow) of the above example reaches the optimal distance value of 27 with ε_1 = 0, but reaches the near-optimal expense value of 28 with ε_2 = 1. Both values form the optimal result of 55 computed by the objective function, as shown in Table 4.
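A minimal sketch of this (m+1)-D cycle follows, assuming the metrics, candidate paths, and cost function are supplied by the caller: m constraint "equations" search their single objectives concurrently, while the (m+1)-th process combines them over one shared path and records each slack ε_i. The toy data at the end are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

def constraint_equation(metric, paths, cost):
    """Single-objective search: return the local optimum v_i(p^i_k)."""
    return min(cost(metric, p) for p in paths)

def objective_function(metrics, paths, cost):
    # m constraint equations run concurrently (the m-D part).
    with ThreadPoolExecutor(max_workers=len(metrics)) as pool:
        local = list(pool.map(
            lambda m: constraint_equation(m, paths, cost), metrics))
    # The (m+1)-th equation: global optimum over one shared path.
    best_path = min(paths, key=lambda p: sum(cost(m, p) for m in metrics))
    slacks = [cost(m, best_path) - v for m, v in zip(metrics, local)]
    return best_path, slacks        # eps_i = 0 where a local optimum is kept

# Toy demo: paths are tuples, metrics are dicts of per-leg weights.
demo_paths = [("A", "B", "C"), ("A", "C", "B")]
demo_metrics = [{"AB": 1, "BC": 2, "CA": 9, "AC": 3, "CB": 2, "BA": 1},
                {"AB": 2, "BC": 1, "CA": 1, "AC": 9, "CB": 9, "BA": 9}]
def demo_cost(metric, path):
    return sum(metric[u + v] for u, v in zip(path, path[1:] + path[:1]))
print(objective_function(demo_metrics, demo_paths, demo_cost))
```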
It is apparent from the above example that the problem domain representation is limited by the aforementioned cell-versus-voxel information containment. In other words, what we have known all along, that the TSP is an NP problem under a one-dimension computing space, may not hold in an N-dimension computing space. Since all factors could be simultaneously mapped onto their corresponding values in the problem space and fired concurrently, the answer to the TSP might be a near-optimal one computed in a shorter time than working in the 1-D computing space. Such a claim will be further investigated in the proposed N-dimension computing machine postulate.

IV. THE POSTULATE
Motivated by the Poincaré Conjecture [1], we speculated whether the confinement of conventional computing capability to a one-dimension Turing tape could ever be extended to an N-dimension computing machine. The traditional algorithmic approach often defines the problem domain parametrically in polynomial form, denoted by P(n). The statement makes no allusion to whether the computing space and Euclidean space are isomorphic. As mentioned earlier, the computing space is the resulting transformation of the original problem domain in Euclidean space through a series of procedural mappings. Thus, a compelling question that remains to be reckoned with is whether an algorithm is still needed, and if it is, what the dimension of the algorithm is. The answer is yes for obvious reasons. The mapping between an equal-dimension problem and computing machine rests on the conceptual viability and subsequent realization of an algorithmic transformation that takes input data from the problem domain and maps it to the solution space.

Let's reconsider the Tower of Hanoi (ToH) problem. The traditional approach solves the problem recursively in P(n) steps on a 1-D Turing tape. From the earlier argument, if the consideration is extended to N dimensions, the problem can be solved in fewer moves than the traditional approach requires. For argument's sake, let N = 2 denote the axes of stacking the chips, namely y and x, and n = 3 denote the number of chips. One can then solve the problem based on the size constraint of the chips stacked vertically (y) and laid horizontally (x). The latter arrangement stipulates that two or more chips can be laid horizontally side by side, provided that they do not violate the size constraint, as in the y-axis arrangement. One can imagine that there is a rack to physically hold the chips, as shown in Fig. 6. This arrangement is called a pile. Such a pile permits simultaneous stacking and retrieval of at most k (>1) chips at a time. Starting from the original ToH setup where all chips are stacked on pin #1 (p1), or pile #1 (pile1), the steps to move them to p3 or pile3 are shown in Table 5.
Note that (1) the third move attaches S horizontally in order to allow the fifth move to simultaneously remove the M and S chips, and (2) the fifth move needs a 2-D algorithm to concurrently execute the M and S moves; a sketch of this pile arrangement is given below. As n becomes larger, the consideration can be extended to n-D spaces, where n ≤ N. We can infer that this P(n) problem can be solved by an N-dimension machine.
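The following Python sketch models the 2-D ToH variant: each pile is a rack of rows, a row may hold several chips side by side, and more than one chip can be moved at once. Chip sizes 3, 2, 1 stand for L, M, S. The five-move sequence is our reconstruction from the narrative around Table 5, since the table itself is not reproduced here.

```python
def top_row(pile):
    return pile[-1] if pile else None

def legal(pile, chips):
    """Chips may rest on a row only if each is smaller than every chip
    in the row below (the y-axis size constraint of the paper)."""
    below = top_row(pile)
    return below is None or max(chips) < min(below)

def move(src, dst, count=1):
    chips = src[-1][-count:]              # take `count` chips off the top row
    if src[-1][:-count]:
        src[-1] = src[-1][:-count]
    else:
        src.pop()
    assert legal(dst, chips)
    dst.append(chips)

p1, p2, p3 = [[3], [2], [1]], [], []      # L, M, S stacked on pile 1

move(p1, p3)                  # 1: S to pile 3
move(p1, p2)                  # 2: M to pile 2
p2[-1] = p2[-1] + p3.pop()    # 3: attach S horizontally beside M (size-legal)
move(p1, p3)                  # 4: L to pile 3
move(p2, p3, count=2)         # 5: M and S moved simultaneously onto L
print(p3)                     # [[3], [2, 1]] -- solved in 5 moves, not 7
```

The simultaneous fifth move is what the paper attributes to a 2-D algorithm; a plain 1-D ToH with n = 3 needs 2^3 - 1 = 7 single-chip moves.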
The TSP, however, might not always fall into the same scenario. If an additional m metrics are taken into account during an n-city travel, where 0 < m < n, the number of 1-D solution spaces corresponding to each metric could be covered by n < N, or mn < mN ~ N. The TSP solution space is bounded by O(n) = N. Therefore, a more general postulate replaces the mapping with an N-dimension algorithm as follows:

An N-dimension problem can be systematically solved by an equal dimension computing machine with the help of the same dimension algorithm.
Proof: There are three cases to be considered. Case 1: n < N, the parameters of the problem are fewer than the dimensions of the computing machine. One may contend that the extra N−n dimensions make the machine uneconomical to operate. Besides, the over-capacity might be unsuited to scaling down to n if there is no provision for reduction.
Case 2: n = N, the parameters of the problem are equal to the dimensions of the computing machine. The algorithm should be able to handle the n-dimension problem domain properly.
Case 3: n > N, the parameters of the problem are more than the dimensions of the computing machine. We are back to square one, since the extra dimensions of the problem, n−N = k > 0, are equivalent to the current state-of-the-art that maps a k-dimension problem domain onto a 1-D Turing machine. We do not postulate this possibility from the outset.
The above synthetic examples have been adapted to a project for a local logistics company. Some details are omitted for confidentiality and anonymity, but the technical details are all in place. For brevity, the following scenarios describe the problem statements and the proposed solutions. Company X, located in an urban area, is an affiliate of a medium-size logistics firm that possesses a few pickups (PU) and mid-size trucks (MST) making up its transshipment fleet. Items are arbitrarily dropped off by the customers and collected on premise. These items are then transshipped to the distribution centers (DC) located on the outskirts of the city, to be subsequently transported by trucks to their destinations. This transshipment is required because trucks are forbidden to come inside the city. At the moment, the most important problem the company faces is inadequate parking inside its premises to accommodate the fleet (obviously, real estate is expensive in the city). The culprit is that part of the parking lot must be used as temporary storage whenever the items overwhelm the storage area (a small warehouse, SW), leaving little room for docking. Hence, some PUs and MSTs must be parked on the street, causing inconvenience in the neighborhood. The company needs to mitigate this street parking occupation so as to avoid litigation with the residents and the city authority; that problem, however, is beyond the scope of this work. From the technical standpoint, there are two problem statements to be considered: (1) sending the items at the lowest cost to their destinations, and (2) minimizing the costs and time incurred by transshipment.
The first problem is a multiple-objective TSP that must optimize both transportation distance and payload costs to make each trip as cost-effective as possible. There are seven cities in the travel path. The distances from source to destination cities are given in Table 6. Table 7 shows the transportation distance costs associated with Table 6.
In the past, the company applied the transportation technique to set up the logistics route along the seven cities. That is to say, the transportation distance cost was determined from the route 1-2-3-4-5-6-7 = 426.2. Using the conventional (single-objective) TSP yields 1-2-3-4-5-7-6 = 417.8. A back-haul transport is suggested to increase the payload of each trip so that some trips can be combined to save average transportation distance cost per trip. The simple idea of back-haul transport is depicted in Fig. 7, where S-i-k-T is the regular return route and S-i-j-k-T is the proposed back-haul detour that makes additional pick-ups (or drop-offs) along the return path to save delivery costs on a subsequent trip.
The back-haul payload is shown in Table 8. At present, this new back-haul detour is carefully selected to keep d_ij and d_jk small so that the truck does not wander too far off its route (see the sketch below). A number of factors could affect the additional back-haul pick-ups; the major disruptions are increasing traffic volumes, carbon dioxide emission controls, logistics wages and freight rates [29], detours, risk of accidents, etc. If the company applied the transportation technique on the 1-2-3-4-5-6-7 route, the payload would yield 51.6, while that of the TSP route 1-2-3-4-5-7-6 yielded 50.9. Table 9 shows the cost of back-haul based on the payload shown in Table 8. The results are determined by constraint equations c_1 and c_2, where Eq(1) solves the transportation distance cost in Table 7 (417.8) and Eq(2) solves the payload cost in Table 9 (423.5). Since the optimal paths for both objectives are the same, i.e., 1-2-3-4-5-7-6, the slacks ε_1 (1 = transportation distance) = ε_2 (2 = payload) = 0. The overall cost of this multiple-objective trip yields 841.3.
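A simple way to express the "keep d_ij and d_jk small" selection is a threshold test: take the detour S-i-j-k-T only when the value of the extra pick-up exceeds the cost of the extra distance over the regular leg S-i-k-T. The rule and the numbers below are illustrative assumptions, not the company's actual policy.

```python
def worth_detour(d_ik, d_ij, d_jk, payload_value, cost_per_km):
    """Back-haul test for Fig. 7: detour i -> j -> k instead of i -> k
    when the picked-up payload pays for the extra distance."""
    extra_km = (d_ij + d_jk) - d_ik
    return payload_value > extra_km * cost_per_km

# 7 extra km at 3.0 per km costs 21, less than the 35-unit pick-up: True.
print(worth_detour(d_ik=40, d_ij=25, d_jk=22,
                   payload_value=35, cost_per_km=3.0))
```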
The second problem adapts the above ToH solution directly, as follows. Items are usually stacked in the SW in the order each item comes in from different customers, i.e., first-in at the stack bottom, not in the order of the shipment schedule, owing to space limitations. Spill-over items are temporarily stored in the parking lot by stacking them in the same manner. This creates unnecessary unstacking, restacking, and clean-up man-power and time, not to mention possible damage when retrieving the items. Using the proposed ToH solution, the three piles are made up as follows: the items in the SW represent pile #1 (the shipment schedule replaces the size-order stipulation for stacking, and the top of the stack is the earliest item to be transshipped); the docking area is pile #2; the PU or MST is pile #3; and the imaginary rack now becomes real racks. When the items come in from the customers, they are stacked in the same manner as shown in Fig. 6, based on the shipment schedule. Suppose items 1, 3, and 4 are to be transshipped because of schedule changes, or because there is enough room on the PU/MST for them (assuming they are small and their delivery schedules fall just 1 or 2 days later). This rack-and-pile arrangement of the modified ToH saves a great deal of man-power, time, and possible damage to the items from unstacking and restacking. The secondary benefit is that the use of racks and the ToH algorithm allows the SW to store more items in a tidy and accessible arrangement. As such, the PUs and MSTs no longer have to occupy the street.
Hence, both the multiple-objective TSP and the modified ToH were applied to solve the case problem in higher dimensions. The question is what architecture will accommodate the machine. We will exploit the notion of DNA strands, uni-cellular autonomy, and the two data flow parallelism principles. Details of the proposed architectural design are discussed in the next section.

V. IMPLEMENTATION
Biologists have long known the collective strengths of animal colonies such as ants and bees. These life forms, when operating collectively, can produce a hundredfold more than their individual efforts. This inspires a closer look into the uni-cellular life form, which brings about one simple yet beautiful analogy reflected in the human body. The complex natural body is made up of millions of basic building blocks known as cells. They are individually similar, structurally simple, yet possess internal working states as well as external communication exchange mechanisms among themselves. The synergistic performance of all the cells that make up the whole body is certainly unimaginable.
Ideally, the proposed N-dimension computing machine must run independently within each dimension, with occasional coordination as required. This is very much like the chaotic human world, where each individual goes about his or her life independently but often comes in contact with others. There may be collaborative relations within the same dimension space and across dimension spaces, all of which are dictated by the mapping transformation.
A Biological-like Architecture for Software Systems (BASS) is proposed, as shown in Fig. 8. Details of the structural, functional, and behavioral designs are explained below. Fig. 8 shows how BASS draws the analogy to the human body. Its rudimentary building structures are file blocks and components that mimic DNA. DNA's composition is primarily made of innumerable ATCG nucleotides forming the DNA strands, which are not algorithmically enumerable. On the contrary, BASS is made of components whose composition is algorithmically enumerable. The structural design of a component is depicted in Fig. 9.
A component is uniquely identified by its CID. It is structurally laid out as a linear array. Unlike the EXMAN array operations built on a dynamic pointer scheme and random access structure, which aim to avoid memory copying and increase access speed, we deliberately allocate memory to store the component in order to make it autonomous. Thus, all fields store values for immediate use; they do not hold addresses that require another indirect reference to retrieve the values. These design principles adhere to data flow arrays and uni-cellular autonomous structures, with slight modifications. Access and retrieval from the component array are performed by the modified First-In First-Out (FIFO) technique, which is described in the functional design aspect.
The next field is a 'fixed' size field that permits component retrieval to be performed rapidly at the hardware level. The info field maintains component bookkeeping. The file field is the heart of the BASS architecture; it represents the root block holding the contents of the component under a biological-like structure encompassing nucleotide, codon, and chromosome subfields. This file can be extended to incorporate additional indirect blocks (files) if need be. The next two fields keep the necessary and sufficient basic and derived operations to manipulate the component's autonomy so that it can ''survive'' without any external support, thereby upholding the living cell principles. It is through this derived operation field that one can tailor a BASS component to fit the desired implementation. The last field is reserved for future use. Notice that this structural design is motivated by the arrays and array operations of the data flow model, focusing on the values (objects) stored in each field, not the addresses of the fields, since the field size is fixed.
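The field layout of Fig. 9 can be sketched as a simple value-only record; field names follow the text, while the concrete types are our assumptions. Note that every field holds values rather than addresses, per the autonomy principle above.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A sketch of the BASS component layout in Fig. 9 (types assumed)."""
    cid: int                                      # unique component identifier
    size: int                                     # 'fixed' size for fast hardware retrieval
    info: dict = field(default_factory=dict)      # component bookkeeping
    file: dict = field(default_factory=lambda: {  # root block of contents
        "nucleotide": [], "codon": [], "chromosome": []})
    basic_ops: dict = field(default_factory=dict)   # survival operations
    derived_ops: dict = field(default_factory=dict) # tailoring point per problem
    reserved: bytes = b""                         # reserved for future use

c = Component(cid=1, size=64)
c.derived_ops["sticker_match"] = lambda signal: signal   # placeholder method
```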
From the cell structure standpoint, three nucleotides make up a codon; their construct denotes the (i, j, k) bases of a voxel (V_{ijk}). Thus, we could designate one chromosome to hold one codon or many codons as we see fit for the problem domain. This keeps the component structure unique, just like a DNA strand, but algorithmically enumerable. Fig. 10 depicts the structure of a BASS component as a binary tree of height m+1 levels. The root node, t^0_0, denotes generation 0 (superscript) with sibling 0 (subscript). The first-level child nodes, t^1_1 and t^1_2, denote the 1st generation (superscript) having two siblings (subscripts 1 and 2), and so on until the m-th level, denoting N = 2^m nodes. Based on the above cell structure, each node is represented by {v_i = t^m_q co1, v_j = t^m_q co2, v_k = t^m_q co3}, where m = generation, q = sibling sequence, and co1 = component 1 of t^m_q. This construct is not only simple and straightforward to implement, but also supports in-situ memory replacement directly. Suppose t^2_2 on level 2 ceases and is deactivated; it can be reached and replaced at loc = 2^m − 1 + q = 2^2 − 1 + 2, or location 5. The height, number of nodes, and orders of traversal of the tree can be determined in the same fashion. Finally, just as several cells constitute an organ, so do the components constitute a component group (CG). The CG can be denoted by a subtree, as shown in Fig. 10. Similarly, many organs make up the body just as many CGs make up a BASS artifact, which is the entire binary tree.
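The addressing scheme can be exercised directly with an array-backed tree, a minimal sketch assuming the paper's formula loc = 2^m − 1 + q with siblings numbered q = 1 .. 2^m and the primary node t^0_0 kept at index 0.

```python
class BassTree:
    """Array-backed component tree of Fig. 10 with in-situ replacement."""
    def __init__(self, height):
        # Level m occupies indices 2^m .. 2^(m+1) - 1; index 0 is t^0_0.
        self.slots = [None] * (2 ** (height + 1))

    @staticmethod
    def loc(m, q):
        return 0 if m == 0 else 2 ** m - 1 + q    # the paper's addressing

    def put(self, m, q, component):
        self.slots[self.loc(m, q)] = component

    def replace_in_situ(self, m, q, new_component):
        """Deactivate t^m_q and replace it at the same memory location."""
        self.slots[self.loc(m, q)] = new_component

tree = BassTree(height=2)
tree.put(2, 2, "t^2_2")
tree.replace_in_situ(2, 2, "t^2_2'")    # reached at loc = 2^2 - 1 + 2 = 5
print(tree.slots[5])                    # t^2_2'
```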
The functional design rests on a three-stage cycle mimicking the cell life cycle, namely creation, sustainment, and cessation. This is in compliance with a human cell that can reproduce (or split), grow, and die. BASS components use a modified First-In First-Out (FIFO) discipline that lends itself to direct hardware implementation, in that a hardware circuit can perform FIFO access and retrieval without any software support. The modification to conventional FIFO access is to limit memory occupation by establishing a threshold, or Time-To-Live (TTL), as an aging factor so that the inherent starvation problem is eliminated [16].
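A minimal sketch of the modified FIFO follows: a plain queue whose entries carry a TTL so stale entries age out instead of occupying memory indefinitely. The tick-based aging mechanism is our assumption; the paper specifies only the TTL threshold idea.

```python
from collections import deque

class TTLFifo:
    """FIFO queue with Time-To-Live aging (modified FIFO sketch)."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.q = deque()                 # entries are [value, remaining_ttl]

    def put(self, value):
        self.q.append([value, self.ttl])

    def tick(self):
        """One clock step: age every entry, reclaiming expired ones."""
        for entry in self.q:
            entry[1] -= 1
        while self.q and self.q[0][1] <= 0:
            self.q.popleft()             # in-situ style reclamation

    def get(self):
        return self.q.popleft()[0] if self.q else None

f = TTLFifo(ttl=2)
f.put("a"); f.put("b")
f.tick(); f.tick()                       # both entries expire after 2 ticks
print(f.get())                           # None: no starvation-prone leftovers
```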
The fact that old cells die and new ones grow to replace them suggests that BASS components behave in the same way. They are created (cloned) by their predecessors to replace expiring components in situ. These requirements elaborate the principal design philosophy of BASS: to support energy preservation and mitigate the memory wall problem [16], [17].
The behavioral design of BASS deploys one always-active node in generation 0 (gen 0) to keep the system life cycle alive. All working BASS components are generated in subsequent generations, as depicted in Fig. 11.
The behavioral design begins with the formulation of the above objective function \sum_{i=1}^{m} v_i(h_i), which can be described as follows. The first cell [t^0_0] denotes the primary node of the computing machine, which is always active in generation 0. The N-dimension problem domain induces the expansion by reproducing t^1_1 and t^1_2, denoting the first generation (superscript 1) of two nodes (subscripts 1 and 2). This process repeats as the nodes reproduce the second generation (t^2_1, t^2_2, t^2_3, t^2_4), the third generation, ..., until the m-th generation, reaching the N-node limiting threshold (t^m_1, t^m_2, ..., t^m_{N−1}, t^m_N). That is to say, the dimension of the problem after invocation induces node reproduction, akin to the body reproducing its organs. The expansion is modeled after the function driving scheme [26] to support parallelism. Based on the BASS architecture in Fig. 8, each node, mimicking an organ, holds a component group as the implementation element to independently run the constraint equations as well as the objective function. The result is sent to output and execution ceases; all nodes are deactivated and the tree shrinks back to the primary node.
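This expand-compute-shrink cycle can be sketched in a few lines, assuming a placeholder per-node work function in place of the constraint equations.

```python
def run_cycle(m, work):
    """Sketch of Fig. 11: expand to N = 2^m nodes, compute, shrink back."""
    active = [(0, 0)]                         # primary node t^0_0 (gen, sibling)
    for gen in range(1, m + 1):               # expansion phase, generation by generation
        active = [(gen, q) for q in range(1, 2 ** gen + 1)]
    results = [work(gen, q) for (gen, q) in active]   # N nodes compute locally
    best = min(results)                       # objective combines local results
    active = [(0, 0)]                         # all nodes deactivated; shrink back
    return best

# Toy run: m = 2 generations -> N = 4 nodes, each "optimizing" a dummy metric.
print(run_cycle(2, work=lambda gen, q: gen * 10 + q))   # 21
```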
From this configuration, we can deploy different methods in the components through the derived operation field of each component, as shown in Fig. 9. A synthetic case is described to apply some of the features summarized in Table 1. Given video footage of a relatively unknown statesman to identify who he was, the execution process is demonstrated in Fig. 12 and described as follows. At clock cycle 0 (T_0), the component in the primary node (t^0_0 co1) would extract the voice and image streams, reproduce two child nodes (subscripts 1, 2) of the first generation (superscript 1) acting as sensory organs, i.e., ears and eyes, and pass the voice and image streams on to t^1_1 and t^1_2, respectively. The top line shows how component t^0_0 co1 continues monitoring the results from t^1_1 (the ears) and t^1_2 (the eyes). The next three lines represent the execution of t^1_1 and the bottom three lines represent that of t^1_2. Node t^1_1 would handle the voice stream as follows: component 1 would apply the sticker method [15] via the derived operation field to the voice signal; component 2 would retrieve voice archives whose stickers could be similar to those from the input voice; and component 3 would pipeline the matching process.
Meanwhile, node t^1_2 would simultaneously start the separation of subsequence operation [7] in the same fashion, beginning with component 1 tagging the input image stream (by the derived operation field), while its remaining components retrieve and match image archives in the same pipelined manner. Execution of node t^1_1 proceeds as follows. Components 1 and 2 (t^1_1 co1 and t^1_1 co2) begin execution at clock cycle 1 (T_1) and pass their results on to component 3 (t^1_1 co3) in the next clock cycle (T_2). Both t^1_1 co1 and t^1_1 co2 continue execution at T_2, while component 3 begins performing voice matching in the same clock cycle T_2. This process continues until T_8, where a sound match is found. The two operations at T_8 performed by t^1_1 co1 and t^1_1 co2 of the ears are canceled and both components are deactivated. Since t^0_0 co1 is monitoring all intermediate single-objective (local) optimums, the result from component 3 (t^1_1 co3) is copied to t^0_0 co1 and component 3 is deactivated. By the same token, an image match is found at T_9, and both results are combined in t^0_0 co1 in the same clock cycle for output.
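The timeline of Fig. 12 can be reproduced with a small discrete-clock sketch. The match cycles (T_8 for voice, T_9 for image) come from the text; the loop mechanics and the print format are our assumptions.

```python
def pipeline(label, start, match_at, horizon=12):
    """Sketch of one sensory node's pipeline: co1/co2 feed co3 each cycle
    until co3 finds a match, at which point all three deactivate."""
    events = []
    for t in range(start, horizon):
        if t == match_at:
            events.append((t, f"{label} co3: match found, co1-co3 deactivated"))
            break
        events.append((t, f"{label} co1/co2 feed co3"))
    return events

timeline = sorted(pipeline("t^1_1 (ears)", 1, 8)
                  + pipeline("t^1_2 (eyes)", 1, 9)
                  + [(9, "t^0_0 co1: combine voice+image results, output")])
for t, what in timeline:
    print(f"T_{t}: {what}")
```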
Thus, the simultaneous and independent pipelines of the ears and eyes, t^1_1 and t^1_2, would speed up the identification process considerably. Tracing the formulation established earlier and substituting the clock-cycle values, we have v_1(h_1) = (7 + 7 + 7) for the three components of the ears, in addition to v_0(h_0), the system overhead from the primary node, which is always active in generation 0.
To account for the running time of the BASS model, we start from the expansion of the N-dimension problem, using d units of time to create a component. Assuming a node has three components, and each component executes at least once, or L units of time, with p possible renewal chances (0 < p < 1) of r repetitions, the life span of a component equals (d + L + (p * L * r)) = Z, or 3 * Z for each node. Since a component ceases to exist as its TTL expires and is automatically replaced in situ by a new component, deletion cost is supposedly infinitesimal, except for the primary node. The total time for the N-dimension problem is therefore 2^m * 3 * Z units.

The above synthetic case demonstrates everyday problem solving by the brain using only two of the five senses. If we encounter a problem that calls for all senses to be solved, conventionally we would probably model it as Y_sol = F(X_S, X_H, X_O, X_t, X_T) = F_1(X_S) • F_2(X_H) • F_3(X_O) • F_4(X_t) • F_5(X_T), where S = sight, H = hearing, O = smell, t = taste, T = touch; F_1(X_S) is the function performed by the visual region of the brain taking visual input X_S to derive the sight answer; F_2(X_H) is the function performed by the hearing region of the brain taking audio input X_H to derive the hearing answer; and so on; and • denotes the operations the brain uses to combine five different forms of input into the final single answer (Y_sol). Suppose the visual part of the brain slightly malfunctions, causing, say, color blindness. Nevertheless, the remaining four senses still function properly. By upholding the first aspect stated earlier, i.e., setting up the problem based on all natural senses, one would hopefully get a near-complete answer except for the color problem. We can see that real-world problems do not originate in one dimension. The body also has five dimensions to receive these different forms of input, and it runs them on the corresponding control regions of the brain. Obviously, the body cannot do F_1(X_H) or F_2(X_S) because the brain will not take it. Why in the world do we squeeze these five senses into one output (a 5-to-1 transformation), imitating the video (2-to-1), and make it run on a 1-D Turing machine? As stated in the postulate, we need an N-dimension computing machine (the brain) to solve an N-dimension problem (S, H, O, t, T) with the help of an N-dimension algorithm (F).
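Returning to the running-time expression above, the arithmetic is easy to check numerically; the parameter values below are illustrative assumptions.

```python
def component_lifespan(d, L, p, r):
    """Z = d + L + p*L*r: creation, one execution, expected renewals."""
    return d + L + p * L * r

def total_time(m, d, L, p, r):
    """Full expansion cost: N = 2^m nodes, three components per node."""
    Z = component_lifespan(d, L, p, r)
    return (2 ** m) * 3 * Z

# Example: m=3 -> 8 nodes; Z = 1 + 2 + 0.5*2*4 = 7; total = 8 * 3 * 7 = 168.
print(total_time(m=3, d=1, L=2, p=0.5, r=4))   # 168.0
```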
Thus, from this biological example, it is inappropriate to force the problem setup into one dimension, since the solution derivation process would find it quite difficult to reduce the representative formulation of F_1, F_2, F_3, F_4, F_5 into a single function to be operated on by the corresponding algorithm and implementation and, most important of all, to map the answer properly back to the problem domain so as to explain how it was derived from each input. All of this inevitably creates the undesirable essential complexities in the process.

VI. THE FUTURE PROSPECT
The vision of the N-dimension computing machine projects not only toward non-conventional computing paradigms but also toward their corresponding data representations. The postulate suggests a higher-dimension computing scheme that might lessen the computational complexity of the problem under investigation. A couple of simplified well-known problems, i.e., the TSP and ToH, were used to demonstrate the point and were further adapted to a real problem.
The proposed architecture is a novel biological-like architecture for software systems (BASS) that systematically accommodates the N-to-N mapping from an N-dimension problem domain to an N-dimension computing machine. Each problem dimension uses a constraint equation to compute its optimal result independently of the other problem dimensions. A basis component is deployed to monitor and determine the optimal result of the multiple-objective problem. A synthetic video identification case was demonstrated to exemplify the procedure.
The design of the proposed N-dimension computing machine architecture aims at simple algorithms and straightforward implementation that can be realized with state-of-the-practice hardware and software technologies.
An intriguing future prospect is how one can extend the postulate to build artificially intelligent algorithms that think like humans in tackling N-dimension computing problems concurrently. This might provoke new challenges for computing paradigms, hardware and software architectures, and, most important of all, the ultimate boundary of technological singularity between man and machine. Only our imagination will tell.