
Airborne Communication Networks for Small Unmanned Aircraft Systems

Unmanned aircraft that function as nodes in networks interconnected by multiple communication routes can be remotely operated with extended range and quick, reliable response.

This paper explores the role of meshed airborne communication networks in the operational performance of small unmanned aircraft systems. Small unmanned aircraft systems have the potential to create new applications and markets in civil domains, enable many disruptive technologies, and put considerable stress on air traffic control systems. We argue that of the existing networked communication architectures, only meshed ad hoc networking can meet the communication demands for the large number of small aircraft expected to be deployed in the future. Experimental results using the Heterogeneous Unmanned Aircraft System are presented to show that meshed airborne communication is feasible, that it extends the operational envelope of small unmanned aircraft at the expense of increased communication variability, and that net-centric operation of multiple cooperating aircraft is possible. Additionally, the ability of airborne networks of small unmanned aircraft to exploit controlled mobility to improve performance is discussed.

SECTION I

INTRODUCTION

Small unmanned aircraft systems (UASs) have the potential to create new applications and markets in civil domains, spur the development of many disruptive technologies, and put considerable stress on air traffic control systems. The proliferation of small unmanned aircraft systems for military applications has led to rapid technological advancement and a large UAS-savvy workforce poised to propel unmanned aircraft into new areas. Advancements have been made on many fronts, including propulsion systems, energy storage, networked communication, miniaturized payload devices, and autonomous control. Because of these technologies, the operational envelope of UASs extends beyond that of piloted aircraft. As a result, the benefits of these systems are beginning to be realized by potential UAS users outside of military domains [1], [2], [3], [4].

Potential civilian, commercial, and scientific applications of unmanned aircraft systems are numerous. The most common use of small UASs across all domains (both within and outside of the military) is persistent surveillance and in-situ data collection. Unmanned aircraft systems have already been fielded for missions such as law enforcement [5], wildfire management [6], pollutant studies [7], polar weather monitoring [8], and hurricane observation [9]. Proposed UASs span numerous future applications that include severe storm penetration, precision agriculture, communication relaying over major cities, persistent global climate observation, traffic monitoring, movie making, and even cargo handling. A recent study concluded that in 2017, the civil UAS market in the United States could reach $560 million out of a total (civil plus military) UAS market of approximately $5.0 billion [10]. The study projects 1500 civil UASs will be in service in 2017 and that approximately 85% of those will be small.

The largest current barrier to the use of unmanned aircraft in the National Airspace System (NAS) of the United States is satisfaction of Federal Aviation Administration (FAA) regulations regarding safe flight operations and air traffic control (ATC). In particular, the FAA requires all aircraft(1) operating in the NAS to have a detect, sense, and avoid (DSA) capability [11] that provides an equivalent level of safety compared to manned aircraft [12], [13]. While onboard sensors are expected to be a component of future DSA solutions, communication to ATC and operator intervention will also be required, either from a regulatory or a practical perspective. Thus, one of the primary concerns of the FAA regarding the ability of UASs to meet safety regulations without conflicting with existing systems is the availability and allocation of bandwidth and spectrum for communication, command, and control [14].

In this paper, small unmanned aircraft (UA) are defined as those with maximum takeoff weight less than or equal to 150 kg, maximum range of 30 km, and maximum altitude of 4000 m mean sea level (MSL). The small UAS classification used here encompasses the Micro, Mini, and Close Range categories defined in [15]. The weight limit effectively means small UAs could not carry the equivalent weight of a human operator. The altitude limit taken here means small UAs cannot fly into Class A airspace (the airspace from 18 000 to 60 000 ft MSL where commercial aircraft fly). Although it may be possible for vehicles in this category to fly at higher altitudes, the regulatory issues are significantly more challenging, and it is reasonable to assume most small UAs will not fly in that airspace. In fact, most small UAs would probably fly substantially closer to the ground. Likewise, the maximum range of 30 km represents typical operational limits on this class of aircraft, although there are notable exceptions, such as the Aerosonde UA, which was the first unmanned aircraft to cross the North Atlantic [8]. Finally, note that a small UAS can be composed of multiple heterogeneous small UAs with highly varying capabilities.

Unlike larger unmanned aircraft, small UASs are in a unique regime where the ability to carry mitigating technology onboard is limited yet the potential for damage is high. Given the size and payload constraints of small UASs, these unmanned aircraft have limited onboard power, sensing, communication, and computation. Larger UASs such as the Predator and Global Hawk can carry the same transponder and communication systems as manned aircraft, enabling them to fit into the existing ATC framework more easily, as well as significant sensing and computation for onboard decision making. In contrast, transponder equipment(2) would take up a substantial fraction of the power and weight capacity of small UASs. Although the payload capacity of a small UAS is limiting, the kinetic energy stored in a 150 kg aircraft can cause significant damage to other aircraft, buildings, and people on the ground. Furthermore, the limited sizes of small UASs make them accessible to a wider audience than larger systems, and the percentage of small UASs deployed in the future will likely be high relative to larger unmanned aircraft systems [10].

In addition to being a transformative technology on their own, small UASs will also serve as test beds for technology suitable for larger unmanned and manned aircraft. This is because small UASs are fundamentally technology driven. Small UASs benefit most from technology that lowers the size, power, and weight of existing communications, sensors, guidance, navigation, and control or enables new capabilities such as autonomous collision avoidance or alternative propulsion systems. Further, small UASs have the potential to quickly surpass the number of other existing aircraft types. This technology drive, combined with the relatively large market potential, will lead to new low-cost technologies that will quickly accumulate flight hours.

As we will show, UASs have much greater communication needs (e.g., throughput, which is the rate at which data can be sent over a communication link, and latency or delay) than current manned aircraft. Many small UAS applications will require quick response times in areas where permanent supporting communication infrastructures will not exist. Furthermore, current approaches using powerful long-range or satellite communications are too big and expensive for small aircraft, while smaller radios fundamentally limit the small UAS operational envelope in terms of range, altitude, and payload.

This paper explores the role of airborne communication networks in extending the operational envelope of small UAS. We argue that only meshed ad hoc networking architectures (where nodes in the network can self-organize to act as relays that forward data toward its destination) will provide the needed communication for the large number of small aircraft expected to be deployed in the future. Since small UAs are relatively cheap, future UASs will likely deploy multiple coordinated vehicles. The meshed communication architecture offers the best option in terms of flexibility, reliability, and performance compared to other possibilities. It should be noted that we are not discussing the use of meshed ad hoc networks for general large-scale networking of the entire National Airspace System, but rather the communication infrastructure deployed by a single small UAS (with multiple aircraft) to communicate internally and with the external ATC system, which could include other aircraft in the local vicinity.

Experimental results are presented from flight deployments of an unmanned aircraft system designed specifically to study meshed airborne communication networking. A series of flight experiments were conducted to quantify the empirical behavior of off-the-shelf ad hoc networking equipment and standards when applied to highly mobile aerial vehicles. The results from these tests show that meshed airborne communication is feasible, that it extends the operational envelope of small UASs in terms of communication range, but that the increased envelope comes at the expense of increased variability in the networked communication.

Additional experiments showed that net-centric operation of unmanned aircraft is possible using a meshed airborne communication network. Command and control of an autonomous UA by a remote operator over an ad hoc network was demonstrated, including two-way communication of commands to the aircraft and telemetry data to the operator. A key aspect of the demonstrations was the fact that the network was not configured in advance and the ad hoc networking protocols self-organized in response to the dynamic environment, aircraft motion, and changing network demands.

Lastly, the ability of airborne networks of small unmanned aircraft to exploit controlled mobility to improve performance is discussed. Data ferrying allows mobile nodes to store and carry delay-tolerant data between nodes in the network. This concept enables communication in stressed or fractured networks where it otherwise would not be possible and, in some cases, can improve network performance over relaying or direct communication. Current work is presented on the generation of flight plans for dedicated helper nodes that facilitate network communication through data ferrying. Likewise, decentralized, adaptive, model-free control strategies can mitigate interference and noise effects that cannot be predicted in advance. Multivariable extremum seeking control is applied to the problem of UA motion control in a communication field, and it is shown how such an approach can lead to improvements in communication ability over position-based policies.

The remainder of this paper is organized as follows. Section II examines the requirements of future airborne communication networks for small UASs, lays out technical and nontechnical needs, and describes potential communication architectures. Section III describes our efforts to develop a small UAS to investigate airborne networking and gives results on feasibility, baseline performance, and net-centric operation. Section IV discusses how to exploit mobility to improve network performance. This paper concludes with Section V.

SECTION II

FUTURE COMMUNICATION REQUIREMENTS

A. General UAS Requirements

Conventional manned aircraft have minimal communications requirements compared to the needs of unmanned aircraft systems. A plane is not required to have any communication in uncontrolled airspace, nor is it required in Class E airspace when flying under visual flight rules [13]. In the vicinity of controlled airports and other classes of airspace, a pilot must be able to communicate with ATC and may require a transponder [13]. However, from the perspective of network or radio bandwidth and throughput, the requirements for this communication traffic are low since messages are limited in size (or length) and are sent sporadically. In contrast, a UAS can require significantly more communication. Fig. 1 outlines these communication needs, broadly classified into platform safety, remote piloting, and payload management. The UA communicates with more external parties: ATC, the pilot, and payload operators who may be in widely separated locations.

Fig. 1. Types of communication in an unmanned aircraft system.

Like a manned aircraft, the pilot must communicate with ATC in controlled airspace. This communication may be mediated by the UAS, whereby the UAS communicates via conventional radio to the ATC and this communication is then backhauled to the pilot. This implies an inherent inefficiency: a single ATC radio channel is shared by all planes in an area, but each UA requires a separate backhaul to its respective operator, which multiplies the communication requirements. Future nonvoice approaches to managing aircraft are being contemplated [16]. In principle, ATC commands (e.g., to change altitude) could be acted upon directly by the UA without pilot intervention, obviating the need for the inefficient backhaul. However, it is likely that a pilot will always be expected to be “on the loop” so that UAS operations are completely transparent to ATC. Future communication system analysis estimates the average ATC voice and data rates to be about 10 kbps per aircraft, particularly in autonomous operations areas typical of UAS operations [16].

Other platform safety communication is related to detect, sense, and avoid requirements, which generally require the UAS to have an ability to avoid collisions equivalent to that of manned aircraft [12], [13]. This may require onboard radar (active sensing), backhaul of image data (passive sensing), transponders, or cooperative sharing of information between UAs. The communication requirements here can depend significantly on the approach. The demands are lowest, indeed negligible, when the aircraft uses active sensing, reports only potential collisions to the operator, and autonomously performs evasive maneuvers when collisions are imminent. More demanding systems send full visual situational awareness to the operator, which can require 1 Mbps or more.

Remote piloting of the vehicle has requirements that vary with the type of flight control. On one extreme is direct joystick control of the aircraft. This requires low delay and high availability. At the other extreme, tasking commands are sent to the aircraft and autonomously translated into flight paths. Here delays can be longer and gaps in availability can be tolerated. The UA-to-pilot link contains not only commands from the pilot to the UA but also essential health and status information from the aircraft back to the pilot. As examples, on the joystick end of the spectrum, commercial digital radio control links have data rates below 10 kbps, and one-way delays below 100 ms are preferred. On the autonomous end of the spectrum, an Iridium satellite link is sufficient for waypoint flying of the Predator UAS. Iridium has 2.1 kbps throughput, delays of 1–7 s, and gaps in connectivity with 96% average availability [17].

Communication with the payload can range from a few bits per second for simple sensor readings to megabits per second for high-quality images. For instance, the Predator uses a 4.5 Mbps microwave link to communicate payload imagery when in line-of-sight of the ground station.

There are considerations other than data rates, latency, and availability. ATC, remote piloting, and other flight safety communication will likely be required to operate in protected spectrum that is not shared with payload and nonessential communication [16], [18].

In summary, the communication requirements for UAS are modest for ATC communications and remote piloting, while UAS can potentially require data rates in the megabits per second for payload management and DSA. It is this requirement for high data rates that distinguishes UAS from manned aircraft communications.

B. Small UAS Requirements

How do communication requirements change when we focus on small UASs? This class of aircraft simplifies communications. Small UASs often fly in uncontrolled airspace, where no transponders are required, nor is communication with ATC. They can operate without the need for an airport: they are often launched by catapult or hand and can have parachute, net, or snag recovery systems. Small UASs generally fly over smaller regions than larger UASs.

Small UASs are still subject to DSA requirements. For very short ranges, the pilot or spotters on the ground can provide the see-and-avoid function. For longer ranges, active or passive techniques are required. The smaller platform size limits the ability to carry onboard autonomous DSA systems.

Small UASs are lower cost and are more likely to operate in cooperative groups. As such, plane-to-plane communication becomes another significant communication component. Furthermore, the types of communication can be highly varied, such as time-critical formation control, payload data exchanges, and broadcast group tasking commands. For example, swarming micro air vehicles may use local flooding schemes over short distances to disseminate data [19], [20]. It is envisioned that flexible data types and addressing will be required.

C. Communication Architectures for Small UAS

There are four basic communication architectures that can be used for small UAS applications: direct link, satellite, cellular, and mesh networking. Each has advantages and disadvantages, which we outline here. A direct link between the ground control station and the UA is the simplest architecture. However, obstructions can block the signal, and at longer ranges the UA requires a high-power transmitter, a steerable antenna, or significant bandwidth in order to support high-data-rate downlinks. The amount of bandwidth required scales with the number of UAs, so many UASs may not be able to operate simultaneously in the same area. Finally, plane-to-plane communication will be inefficiently routed through the ground control station in a star topology and will not exploit direct communication between cooperative UAs operating in the same area. Direct plane-to-plane communication will be limited by the characteristics of the link technology and may prove difficult between two highly mobile platforms.

Satellite provides better coverage than a direct link to the ground control station. However, lack of satellite bandwidth already limits existing UAS operations and will not scale with the increasing demand of thousands of small UAS operations in a region. For high-data-rate applications, a bulky steerable dish antenna mechanism unsuitable in size, weight, and cost for small UASs is necessary. Further, the ground control station requires a connection to the satellite downlink network. The ground control station may have obstructed satellite views because of terrain or clutter. Finally, multiple UAs operating in an area will suffer high delays if their communication is mediated by satellite.

Cellular refers to an infrastructure of downlink towers similar to the ubiquitous mobile telephone infrastructure. The mobile telephone infrastructure is not designed for air-to-ground communication: a single UA transmitter can blanket a large area with its signal, degrading system performance. Therefore, small UAS operations may require a dedicated cellular infrastructure. The cellular architecture provides several advantages. First, coverage can be extended over large areas via multiple base stations. UASs would hand off between different base stations as needed during flight. Secondly, the multiple base stations provide a natural redundancy so that if one link is poor, another link may perform better. Thirdly, a limited bandwidth can be reused many times over a region and capacity increased as needed to meet demand. The reuse can grow by adding more base stations as the number of users grows. Fourthly, the infrastructure can be shared by different UASs. Once installed, many UASs can each pay for the fraction of the infrastructure that they use. These advantages must be weighed against the cost. A typical mobile telephone base station is expensive, as are the tower, tower site, radio equipment, and associated networking infrastructure. Such a solution applies where the infrastructure investment can be amortized across frequent and regular UAS flights. Examples might include agricultural monitoring or border surveillance. Such an architecture is not suited to applications like wildfire management or polar climatology, where demand is transient.

Meshing refers to a networking architecture where each node (i.e., a radio on a UA or ground node) can act as a relay to forward data. Communication between a UA and a ground control station can take place over several “hops” through intermediate nodes. The shorter per-hop range simplifies the link requirements, and bandwidth can be reused more frequently and thus more efficiently. Plane-to-plane communication can be direct and also benefit from the mesh routing protocols that employ additional relays as needed to maintain communication. However, such meshing requires intermediate nodes to be present for such relaying to take place. Furthermore, nodes may be required to move specifically in order to support communication. The mesh approach is promising for UAS applications where infrastructure is not available and multiple UAs are operating cooperatively.

Meshing can also leverage the other technologies described above. A direct, satellite, or cellular link to any node in a connected mesh enables communication with all the nodes, providing additional redundancy in the communication. Meshing combined with mobility can extend range. For instance, as a group of UAs moves beyond the range of a direct link, some of the UAs can be assigned to stay behind, forming a chain of links back to the direct link. In an extreme form, the UA can fly back and forth to ferry data between nodes that have become widely separated. It is this flexibility, robustness, and added range that make meshing an integral part of any small UAS operation. The next section explores the feasibility and performance limitations of meshed networking technology.

SECTION III

MESHED AIRBORNE NETWORKING FOR SMALL UAS

This section discusses the performance of meshed networking when applied to communication, command, and control of small unmanned aircraft systems. The concept of meshed networking was developed for communication networks with static or slow-moving nodes. In contrast, airborne networking must function on fast-moving vehicles in highly dynamic environments. Thus, the impact of mobility on airborne meshed networking is a key issue. Experimental results are presented here to show that meshed airborne communication is feasible using off-the-shelf network hardware and protocols. The performance of a meshed network of static ground nodes, moving ground nodes, and moving aerial nodes is compared against a network of only static ground nodes. It is shown that the operational range of a small UAS can be extended beyond the communication range of a single vehicle using meshed networking. Experimental results further demonstrate that the performance of meshed airborne networking is sufficient for net-centric command and control of a semiautonomous unmanned aircraft.

The Heterogeneous Unmanned Aircraft System (HUAS) was developed at the University of Colorado as a platform to study airborne communication networks and multivehicle cooperative control. Specific issues studied to date include the impact of mobility on airborne wireless communication using off-the-shelf IEEE 802.11b (WiFi) radios [21]; net-centric communication, command, and control of small UASs [22]; sensor data collection [23]; delay-tolerant networking [24]; and a framework for controlled mobility that integrates direct, relay, and ferrying communication concepts [25]. An overview of the HUAS test bed is provided next, and then the performance of airborne meshed networking is discussed.

A. Heterogeneous Unmanned Aircraft System

The HUAS combines technology developed at the University of Colorado for the Ad hoc UAS and Ground Network (AUGNet) and the Networked UAS Command, Control, and Communication (NetUASC3) systems [21], [22]. The main components of HUAS include:

  1. the CU Ares small unmanned aircraft equipped with onboard flight management and mobile ad hoc networking systems for autonomous operation;

  2. a multiple-tier airborne mobile ad hoc network combining static/mobile ground nodes with airborne nodes based on IEEE 802.11b (WiFi);

  3. a modular onboard flight management system;

  4. ground control infrastructure to enable command and control of HUAS by multiple dispersed users.

Compared to other small UAS test beds operated by university and government research institutions [26], [27], [28], [29], a key innovation of HUAS is the presence of a multiple-tier airborne meshed network. In order to facilitate multivehicle operations in support of a wide range of applications, a layered network was developed using a bottom-up design approach based on a combination of off-the-shelf technology with HUAS-specific algorithms and protocols (Fig. 2). Each successive step in the design process was based on the decisions made at the lower layers, thus ensuring that the highest level goals of meshed airborne networking and cooperative control are met. The lowest layer of the network design includes physical hardware and network transport protocols. In particular, the network design was driven by the desire to use off-the-shelf technology as much as possible and to allow rapid development and easy deployment. The remaining layers in the design include data routing for ad hoc networking in highly dynamic environments [30]; sensor, communication, and control fusion between vehicle subsystems and across multiple vehicles [31]; service discovery and gateway resolution for fault-tolerant networking and sensor data collection [32]; and net-centric cooperative control algorithms [33]. Recently added functionality in HUAS includes service discovery that allows individual vehicles to register their capabilities with the system and gateway resolution to connect multiple subnetworks into a single network.

Fig. 2. Bottom-up UAS network design approach.

The lowest layers of the HUAS network utilize the infrastructure provided by an Ad hoc UAS and Ground Network. The AUGNet is a wireless network based on IEEE 802.11b (WiFi) protocols and hardware that combines static ground nodes, moving ground nodes, and aerial nodes carried by autonomous UAs connected through a mesh network back-hauled to the Internet. The main components of the AUGNet system include custom-made network radios, network monitoring software and a database that can be accessed during flight experiments from the Internet, routing software using the Click modular router to implement dynamic source routing (DSR), and a fleet of several small unmanned aircraft.

Network software combined with the communication hardware is denoted as the mesh network radio (MNR). The MNR hardware components are a Soekris Engineering Model 4511 single board computer (100 MHz 486 processor, 64 MB RAM, 256 MB flash memory), an Atheros AR5213A chipset mini PCI card set to ad hoc mode, which runs with constant output power and a data rate of 2 Mbps, and a Fidelity Comtech bidirectional amplifier that increases the output signal to the 1 W limit. No modifications of the IEEE 802.11b implementation of the Atheros card were made.

The main aerial component of HUAS is the Ares UA (Fig. 3) that was designed to carry mesh network radios and small sensor packages. It has a payload capacity of 10 lb (4.5 kg) in a 7 in × 15 in × 10 in (18 cm × 38 cm × 25 cm) bay. A generator supplies 50 W of power to the avionics, communications, and payload. The Ares airframe is based on a highly modified Senior Telemaster, a popular RC model kit with a wingspan of almost 8 ft. (Fig. 3 shows the entire fleet of HUAS aircraft, although the experiments described in this paper were conducted using only the Ares UA.) More details on the AUGNet are available at [21].

Fig. 3. The HUAS vehicle fleet includes (clockwise from top left) the CU Ares, the CU MUA, the Velocity XL, the MLB Bat 3, the CU ground control station, and the Hobico NexSTAR.

Net-centric operation of HUAS is achieved by augmenting the AUGNet with the NetUASC3 architecture (Fig. 4) designed at the University of Colorado. The NetUASC3 architecture creates a semiautonomous unmanned aircraft system by integrating the onboard data buses used by a distributed avionics system for intravehicle communication with external wireless networking used for C3 and intervehicle coordination. Under this architecture, multiple subsystems such as the meshed network radio, autopilot, and payload sensors are connected via a standard SJA1000 controller area network (CAN) bus. The local vehicle subnets are connected to one another and dispersed operators via the AUGNet mesh network. The NetUASC3 architecture enables inclusion of IP-based sensors and facilitates implementation of service discovery protocols (i.e., middleware) as vehicles and sensors come in and out of the network. From the perspective of C3, this architecture enables individual vehicle subsystems to act seamlessly as the source or destination of data in the meshed communication network.

Fig. 4. Networked UAS C3 architecture.

The onboard flight management system (Fig. 5) is composed of several subsystems connected together through the Naiad interface board, which provides the local CAN subnet [33], [34]. The subsystems include the commercial PiccoloPlus autopilot [35], which operates the UA control surfaces, measures position and orientation information, and guides the aircraft to reach waypoint or flight pattern goals; the communication module, which interfaces between the AUGNet and the onboard system; the sensor payload subsystems, which can connect directly to the meshed AUGNet; and the supervisory computer, which manages the higher level functionalities of the aircraft by interpreting commands and making mission-level decisions. Although the commercial autopilot maintains a 900 MHz point-to-point communication link, this is used only as a safety backup, and all command and control (as well as payload) communication occurs through the HUAS meshed network.

Fig. 5. UA onboard flight management architecture.

One challenge with small UASs is that nodes may become spread out and sparsely connected. Connections that exist can be dynamic and intermittent, antenna patterns shift with aircraft maneuvering, sources of interference come and go, and low-flying UASs can be separated by intervening terrain. In such environments, traditional end-to-end network protocols such as the ubiquitous TCP perform poorly. So-called delay-tolerant networks (DTNs) are designed for these challenged environments [36], [37]. DTNs are a current topic of research, and basic problems such as routing remain unsolved. We have implemented a DTN architecture for delivering sensor data to one or more external observers outside the UASs [23]. Sensors on UAs or on the ground generate data, and a DTN procedure delivers the data in stages through gateways to the observers. Each stage takes custody of the data and stores it in the network until a connection opportunity to the next stage arises. End nodes can operate with no knowledge of the DTN. Data are delivered quickly when end-to-end connections exist and as quickly as opportunities appear when intermittently connected. The DTN supports data ferrying where, for instance, a ground sensor can deliver data to an overflying UA that then physically carries the data back to a network gateway to an observer.
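
To make the staged, custody-based delivery idea concrete, the following minimal Python sketch passes a bundle from a ground sensor to a ferrying UA and on to a gateway; the class and method names are illustrative assumptions, not the HUAS implementation.

```python
from collections import deque

class DTNNode:
    """One custody stage in a delay-tolerant delivery chain (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.custody = deque()      # bundles this node has taken custody of

    def accept(self, bundle):
        """Take custody of a bundle and store it until it can be forwarded."""
        self.custody.append(bundle)

    def contact(self, next_stage):
        """A connection opportunity: hand every stored bundle to the next stage."""
        while self.custody:
            next_stage.accept(self.custody.popleft())

# Ground sensor -> overflying UA (data ferry) -> gateway toward the observer.
sensor, ua, gateway = DTNNode("sensor"), DTNNode("ua"), DTNNode("gateway")
sensor.accept({"src": "ground-sensor", "payload": "temperature log"})

sensor.contact(ua)        # the UA overflies the sensor and takes custody
ua.contact(gateway)       # the UA later reaches the gateway and hands the data off
print("bundles waiting at the gateway:", len(gateway.custody))
```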

Another challenge with small UASs is that nodes, services, and users can come and go over the life of a mission. To manage these dynamics, HUAS implements service discovery protocols and a publish/subscribe data service. These capabilities enable HUAS to use more than one ground station, communicate with any node without prior knowledge of its IP address, interact with the UA from outside of the meshed network, and reconfigure the entire UAS in flight. No operator is required to facilitate communication, no centralized database is needed to store node information, and no ground-based controller is required to establish new data streams. The network accepts new vehicles and subnetworks into the system, requiring only creation or modification of a single network interface object that resides on the new components.
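
The sketch below illustrates, with assumed names and interfaces rather than the actual HUAS middleware, how a capability registry combined with a publish/subscribe data service lets nodes announce services and exchange data streams without prior knowledge of one another's addresses.

```python
from collections import defaultdict

class DataService:
    """Minimal capability registry plus publish/subscribe bus (illustrative only)."""

    def __init__(self):
        self.services = {}                      # capability name -> providing node
        self.subscribers = defaultdict(list)    # topic -> subscriber callbacks

    def register(self, capability, node):
        """Service discovery: a vehicle or sensor announces a capability."""
        self.services[capability] = node

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

bus = DataService()
bus.register("camera/jpeg", "ares-2")
bus.subscribe("camera/jpeg", lambda m: print("operator received frame", m["seq"]))
bus.publish("camera/jpeg", {"seq": 1})
print("camera hosted by:", bus.services["camera/jpeg"])
```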

The elements of the HUAS described in this section have been implemented. The next section describes our outdoor test bed activities to measure HUAS performance and verify its effectiveness.

B. Feasibility and Baseline Performance

Initial flight experiments focused on demonstrating the feasibility of a meshed airborne network and characterizing the baseline performance of a single specific AUGNet configuration [21]. Experiments were conducted using three basic scenarios: a linear network of static ground nodes, the static ground network augmented with an aerial node, and three aircraft flying simultaneously while communicating among themselves and with a single ground node. Performance metrics that were evaluated include communication range and network throughput, as well as subjective user experience.

1) Communication Range

Communication range is evaluated by looking at link throughput as a function of radio separation distance. Experiments were conducted on a flat mesa that provided several kilometers' line of sight between ground nodes. Fig. 6 shows link throughput samples as a function of separation distance for UA–ground and ground–ground communication links using the HUAS network. For the ground–ground link, 1.4 Mbps throughput is demonstrated for range values up to 1–2 km, after which it drops sharply. By comparison, the UA–ground link extends the communication range to 2–4 km. However, the variability in link throughput also increases to roughly double that of the ground–ground case. During an additional field test, a ground node was placed on a distant hill approximately 10 km away and provided 1 Mbps throughput. This result provides an upper limit on the node capability and suggests that the UA–ground range is limited due to aircraft mobility.

Fig. 6. Throughput samples at different ranges.

The UA-to-UA communication range proved to be less than expected given the demonstrated 10 km range limit and operation far above the terrain. When the UAs were flown within a few kilometers of each other, 1 Mbps throughputs were sustained. To measure larger distances, aircraft were also flown at a second airfield located 7 km away. No reliable communication was observed at this distance. At this range, the links were weaker and thus more susceptible to signal variations due to the UA maneuvering. These effects were exacerbated by the network routing protocol, which stops communication to search for a new route whenever too many packets are lost on existing ones. One outcome of these experiments was to reduce the congestion control time-out parameters in the meshed routing software.

2) Throughput

The impact of a highly mobile aerial node on network throughput was investigated by establishing a network with a linear topology. The network was spaced such that removal of any node would disconnect the network. The throughput between every pair of nodes and the throughput versus the nominal number of hops in the stationary ground network were computed. Tests were conducted with the ground (linear) network only and with an additional node mounted on a UA circling overhead. In this case, sustainable shortcuts through the UA could be formed since the UA–ground communication range is longer than twice the range between two ground nodes. The throughput with and without the UA is shown in Fig. 7. For the network without UA, the black bars represent the average throughputs of all possible 1-, 2-, 3-, 4-, and 5-hop paths. The gray bars show the throughputs that were achieved between the same source–destination pair with the help of the UA. The data points without the UA show that throughput falls off by a factor of two to three with each additional hop. This is a known phenomenon in meshed networks with a small span (five or fewer hops). For multihop paths, the UA is able to maintain the end-to-end throughput close to the 2-hop throughput, indicating that the network is able to find and sustain the 2-hop paths through the UA.

Fig. 7. Throughput data with and without the UA node versus the number of hops the data took to reach its destination. Error bars are one standard deviation.

As with communication range, the network with the UA has higher variability in throughput compared to the static network without the UA. To observe this effect in a different manner, we measured the losses in low-rate packet streams sent over 20 s intervals between every pair of ground nodes with and without the UA. Without the UA, given the fixed, stable, and connected network, no packets are lost (Fig. 8). With the UA, shorter routes are formed through the UA, but the network also experiences occasional losses as high as 50%, reducing the observed average throughput. The higher variance of the throughput is also attributed to the maneuvering of the aircraft. As the UA turns, it can bank away such that the ground node is not in the main lobe of the wireless antenna.

Fig. 8. Loss rate over 20-s intervals with and without the UA.

Competing traffic flows share network resources; as a consequence, congestion control mechanisms in the routing protocols lower throughput compared to the case when each flow is sent one at a time. The throughput in the fixed node case was reduced on average by 29% when overlapping flows were introduced to the network with different sources and destinations. When the flows shared a source or destination, the throughput was reduced on average by 44%. In the three-UA experiment, the throughput was reduced on average by 32% where the competing flows were either UA-to-UA, UA-to-ground, or ground-to-ground flows. For the three-UA experiment, traffic flow overlap depends on the UA dynamics and therefore changed over the course of a given measurement. The results above were averaged over several runs, and variability across them was high. The three cases had standard deviations of 27%, 27%, and 40%, respectively. Once again, the experiment with the UA demonstrated the greatest variability.

3) End-User Subjective Experience

Two subjective tests were conducted to capture end-user impressions while using two common network applications: Web browsing and voice over IP (VoIP). For the Web browsing test, the user browsed a Web site consisting of several pages with multiple images 10, 100, 300, and 500 kB in size. The Web pages were served by the small-footprint, single-threaded Web server Boa(3), installed with care to minimize its impact on the laptop computer connected to a mesh network radio. The voice quality test evaluated the subjective perception of a voice conversation carried out between two users on laptops connected to the network.

For the fixed ground network, browsing over a link with up to six hops was comparable to using a fast dialup connection. With a hybrid network of stationary and mobile nodes, browsing became choppier since new routes to the Web server had to be found more frequently. The pictures were rendered in increasingly sporadic bursts with increasing hop count. Occasionally, downloads of pictures larger than 300 kB stalled noticeably halfway through the page, spoiling a user's browsing experience.

VoIP experiments were conducted using the Linphone(4) 1.02 software package with the full-rate GSM codec (linear predictive coding with regular pulse excitation). This resulted in a bit rate of 13 kbit/s over the band 300–3400 Hz. Voice quality was found to be exceptionally good up to three hops with no noticeable end-to-end voice delay. New routes formed automatically, and voice contact was reestablished without having to redial or restart the phone application. However, there were gaps in the speech if the user moved out of range of the network. At a distance of four hops, voice streams became choppy and meaningful conversation was not possible. As with Web browsing, aircraft mobility led to significant network reconfiguration, and the time needed to discover new routes considerably impaired voice conversations.

4) Evaluating Empirical Radio Models

Additional flight testing was conducted to experimentally validate radio propagation models for the network transmissions generated by HUAS. Standard empirical radio propagation models describe the received signal power P as decreasing with distance from the transmitting source via a power law. For any two wireless nodes A and B, let d represent the distance between them. The power received at node A from B's transmission can be modeled as
$$P(d)=\frac{K_{0}}{d^{\epsilon}}\eqno{(1)}$$
where ε ≥ 2 is the signal path loss exponent (ε = 2 is the ideal path loss in free space) and K0 represents the gain of the link and is in units of mW · m. The Atheros chipsets along with the drivers used in the meshed network radios [22] provide a measure of the received signal power for each individual link in the meshed network as the received signal strength indicator (RSSI) on a packet-by-packet basis. For the Atheros chipset, the RSSI is given as a function of received power as
$$\mathrm{RSSI}=10\log_{10}\left(K\cdot d^{-\epsilon}\right)+95.\eqno{(2)}$$
Experimental verification and fitting of the empirical propagation model were done in two separate experiments: between two quasi-static (slowly moving) ground nodes and between a static ground node and a node in an Ares UA (fast moving). Information collected during each experiment included time and GPS position for each node. Since the output power of the Atheros card is set to a constant, the data collected during the experiments capture the position dependence of the received power.
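
As an illustration of how (1) and (2) can be fit to measured data, the following Python sketch generates synthetic RSSI-versus-distance samples and recovers K and ε by least squares on a log scale; the parameter values, noise levels, and processing steps are assumptions for illustration, not the flight-test code.

```python
import numpy as np

def rssi_model(d, k, eps):
    """RSSI as in (2): 10*log10(K * d^-eps) + 95."""
    return 10.0 * np.log10(k * d ** (-eps)) + 95.0

rng = np.random.default_rng(0)
d = np.linspace(100.0, 2000.0, 200)                    # separation distance [m]
rssi = rssi_model(d, k=0.007, eps=2.2) + rng.normal(0.0, 2.0, d.size)

# (2) is linear in log10(d): RSSI - 95 = 10*log10(K) - 10*eps*log10(d).
A = np.column_stack([np.ones_like(d), np.log10(d)])
coef, *_ = np.linalg.lstsq(A, rssi - 95.0, rcond=None)
k_fit = 10.0 ** (coef[0] / 10.0)
eps_fit = -coef[1] / 10.0
print(f"fitted K = {k_fit:.2e}, epsilon = {eps_fit:.2f}")
```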

Fig. 9(a) shows the results of received power measurements versus separation distance for the first experiment. In this experiment, the empirical model was found to have transmission power K0 = 6.8 × 10⁻³ mW · m and radio path loss exponent ε = 2.2, using a least squares best-fit line of the data on a log scale. The figure shows that for the case of two static nodes, the received power is symmetric, i.e., both nodes experience the same decay law. While the empirical model fits the measured received power data in the middle range of the system (from around 500 to 1500 m), the model begins to deviate from the measured data at very short and very long ranges. When the two nodes are close to each other, multipath from the ground has a significant impact on the variability of the received power measurement. As the separation distance increases, the sensitivity of the IEEE 802.11b interface card comes into play and limits the minimum measured received power value.

Fig. 9. Measurements and least squares fit of received radio power versus distance for (a) ground–ground and (b) UA–ground communication.

Fig. 9(b) shows the results of the second experiment using a static ground node on an 8 ft ladder and a radio mounted in a UA starting on the ground, taking off, and then flying an oval pattern approximately 200 m above the ground. The figure shows the RSSI as a function of distance for both the ground and aerial nodes. For this experiment, the empirical model was found to have transmission power K0 = 3.6 × 10⁻¹ mW · m and path loss exponent ε = 2.34. The results of the second experiment are consistent with previous results in that the overall variability of the RSSI measurements while in flight (d > 200 m) has increased with the increased mobility and dynamics of the UA. While on average the UA and ground node measured the same RSSI, the RSSI measurement on the UA has a larger variance than the ground-based node. This is expected since the UA operates in a higher radio-frequency noise environment than the isolated MNR on the ground. Specifically, the altitude of the UA increases interference from other IEEE 802.11 networks in the area surrounding the test range. Fig. 10 shows a contour plot of the RSSI field as measured by the UA using Kriging two-dimensional (2-D) interpolation [38]. It is clear from this figure that the radio power does not propagate uniformly from the source but still decreases as a function of distance.
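
A contour such as Fig. 10 can be produced by interpolating the packet-by-packet RSSI samples onto a two-dimensional grid. The sketch below uses synthetic samples and simple linear interpolation as a stand-in for the Kriging interpolation used in the experiments [38], so it only illustrates the gridding step, not the actual processing.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
xy = rng.uniform(-1500.0, 1500.0, size=(400, 2))   # UA positions [m]; ground node at the origin
dist = np.linalg.norm(xy, axis=1) + 1.0
rssi = 10.0 * np.log10(0.36 * dist ** -2.34) + 95.0 + rng.normal(0.0, 3.0, dist.size)

# Interpolate the scattered samples onto a regular grid for contour plotting.
gx, gy = np.mgrid[-1500.0:1500.0:100j, -1500.0:1500.0:100j]
field = griddata(xy, rssi, (gx, gy), method="linear")  # NaN outside the convex hull of samples
print("interpolated grid shape:", field.shape)
```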

Fig. 10. Contour plot of RSSI field between UA and ground-based MNR.

These initial experiments demonstrated the feasibility of meshed networking on highly mobile aerial nodes and that it can extend the operational range of a small UAS. The experiments show that mobility plays a key role in the performance of airborne meshed networking. In particular, the variability in performance increases for highly dynamic aerial networks as range and throughput improve. In addition to establishing baseline performance characteristics, the experiments showed that standards developed for ground-based networks cannot be applied directly to airborne networks. First, it was discovered that the default time-out values for congestion control used in the DSR routing protocols were too high. The network could not distinguish between packet loss due to congestion and packet loss due to mobility, which occurs at faster time scales. Thus, these time-out values were reduced for future deployments of the HUAS network. Secondly, airborne meshed networking must account for the fact that the network will regularly become fractured or disconnected. Thus, delay-tolerant protocols are needed to ensure data delivery even when end-to-end connectivity is not possible at any given time. This motivated the development of the application layer DTN described above.

C. Enabling Net-Centric Communication, Command, and Control

Given that airborne meshed networking is feasible with some modification of standard protocols, a second set of experiments was conducted to investigate net-centric operation of an unmanned aircraft system over a meshed network. The key feature of these experiments was integration of intravehicle communication between subsystems with the external meshed networking between the UA and dispersed operators. Command and control data were transmitted directly between various onboard subsystems and a remote operator connected to the meshed network.

In order to demonstrate net-centric communication, command, and control, a virtual icing experiment and a communication-reactive flight experiment were performed using the NetUASC3 architecture. Each experiment was designed to trigger behaviors based on different conditions. For the virtual icing experiment, an offset was subtracted from an onboard temperature sensor reading. The offset starts at zero and increases over time to simulate dropping temperature. Once icing conditions occur, specified by a lower temperature threshold, the UA commands itself to return to a predefined location. In the communication-reactive flight experiment, a constant heartbeat between the UA and the ground station is manually stopped to simulate a communication loss; the UA responds by reverting to a predefined flight plan, allowing the operator to maintain communication with the system during a lost-communication event. In both cases, command and control data as well as additional payload measurements were sent over the AUGNet meshed network during the experiments.
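
The virtual icing behavior reduces to a threshold test on the offset temperature, as in the following sketch; the threshold matches the 40 °F trigger reported in the results below, while the offset rate and the ambient reading are illustrative assumptions rather than flight parameters.

```python
ICING_THRESHOLD_F = 40.0     # trigger temperature used in the experiment
OFFSET_RATE_F_PER_S = 0.1    # assumed rate of the simulated temperature drop

def virtual_temperature(measured_f, seconds_since_start):
    """Measured temperature minus the growing artificial offset."""
    return measured_f - OFFSET_RATE_F_PER_S * seconds_since_start

def icing_triggered(measured_f, seconds_since_start):
    """True once the simulated temperature reaches icing conditions."""
    return virtual_temperature(measured_f, seconds_since_start) <= ICING_THRESHOLD_F

# With an assumed 55 F ambient reading, the trigger fires once the offset reaches 15 F.
for t in range(0, 301, 30):
    if icing_triggered(55.0, t):
        print(f"icing trigger at t = {t} s -> revert to the predefined flight plan")
        break
```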

1) Command and Control

The primary results of the flight experiments are shown in Fig. 11. The top plot in the figure gives the waypoint number the UA was tracking. The middle plot shows the temperature measured onboard the UA (including offset) and recorded on the ground, and the bottom plot shows the ping times recorded by the ground station. The horizontal time axis is referenced to GPS time of the day in minutes.

Fig. 11. Experiment results.

In the top plot, it can be seen that at the Start Experiment commands (two are shown in the time range covered, denoted by the green dashed lines), the UA transitions from Flight Plan 1 (Waypoints 2 through 7) to Flight Plan 2 (Waypoints 10 through 15). Upon detecting a sensor trigger (denoted by red dot-dashed lines), the UA transitions from Flight Plan 2 back to Flight Plan 1.

Fig. 11 shows that the first trigger occurred when the temperature probe reached 40 °F. This is seen in the temperature plot (middle), with the first start experiment command sent in minute 1086. After the start command is given, the sensor node introduces an artificial offset to represent dropping temperature. In minute 1088, the temperature reaches 40 °F, causing the sensor trigger. The top plot shows that the waypoint being tracked jumps from Waypoint 14 to Waypoint 2.

The second sensor experiment was started in minute 1092 and represents the ability of the UA to respond to changes in network performance. For this experiment, connectivity between the UA and the ground control station was the trigger and was measured by sending ping commands between the two. The loss of ping commands is seen in the bottom plot, with the trigger occurring 40 s after the last ping was received.

Throughout the experiments, round-trip ping times were recorded for the connection that linked the remote operator to the UA. Fig. 12 shows a histogram of the round-trip ping times, which includes the processing time on each end and transmission through one or more intermediate mesh network links. The average round-trip time is 100 ms, with some pings taking as long as 550 ms. The first set of experiments suggested that these ping times would depend on the attitude of the aircraft relative to the ground station. However, due to the close operation of the aircraft to the ground station (typically within 1 km), there is no noticeable dependency of the ping time on location (Fig. 13). The dense cluster of ping times at the center of Fig. 13 represents data collected while the UA was on the ground during preflight checkout. It is still expected that as the operational range is increased, the ping times will be affected by the attitude of the aircraft.

Fig. 12. Histogram of round-trip ping times.
Fig. 13. Round-trip ping times plotted against GPS locations.

2) Streaming Payload Data

A final experiment was conducted to demonstrate the ability to access sensor subsystems across platforms over the meshed network. Flights were made with an IP-based Web camera (Panasonic BB-HCM311 Pan/Tilt IP camera) mounted on the belly of an Ares UA. The webcam has a built-in Web server and can be viewed over an IP-based network by multiple clients. For the implementation presented here, the video was downlinked over the network as a series of JPEG images, not in a streamed format such as MPEG. Fig. 14 shows a snapshot from the video downlinked over the meshed network to the ground control station. The image is of the HUAS ground control station and a nearby storage trailer.

Fig. 14. Images from a Webcam onboard Ares-2 of the HUAS ground control station.

This experiment validated several important aspects of the NetUASC3 architecture and the ability to perform net-centric operation of an unmanned aircraft system. First, the imagery provided a large data source to stress the network. Command and control data, including telemetry, is relatively small compared to the size of an image. Secondly, image data can be an important component of UA safety procedures and detect, sense, and avoid. Demonstrating the transmission of image data validates the concept of using streaming video for DSA. Thirdly, the camera connection to the network was transparent to the ground operator. From a network perspective, the camera was just another node on the network and there was no need to first identify the host UA in order to communicate with it. Fourthly, the network was able to transmit the large amount of payload data over the same meshed network used for command and control without significantly degrading performance. Data prioritization was not used here and could further improve the quality of service of command and control data.

During the flight tests, the frame rate varied and appeared to correlate with UA position and attitude. Since video imagery is stamped with system time, video quality and frame rate can be correlated with other telemetry data. Fig. 15 shows video frame rate correlated with aircraft roll angle. The frame rate is calculated by averaging over a 1.0 s window every 1.0 s. The roll angle measurement is provided by the PiccoloPlus autopilot over a different telemetry stream than the video (but over the same meshed network); however, they are both time-stamped from the same system clock. The data plot is asymmetric since the aircraft was predominantly in a counterclockwise orbit that required a negative roll angle. Data show a nominal frame rate of approximately 20 fps while the aircraft sits on the ground. This figure is less than the 30 fps maximum because network quality is reduced when the aircraft is so close to the ground. As with all the experiments conducted on the UA, the dynamics of the aircraft after takeoff lead to large variability in the achieved frame rate.
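
A sketch of the kind of post-processing implied here, counting frames in 1.0 s windows and aligning the resulting frame rate with roll-angle telemetry on the shared system clock, is given below; the data are synthetic and the routine is an assumed illustration, not the authors' analysis code.

```python
import numpy as np

def frame_rate_per_window(frame_times_s, window_s=1.0):
    """Count frames in consecutive windows; return (window centers, frames per second)."""
    edges = np.arange(frame_times_s.min(), frame_times_s.max() + window_s, window_s)
    counts, _ = np.histogram(frame_times_s, bins=edges)
    centers = edges[:-1] + window_s / 2.0
    return centers, counts / window_s

rng = np.random.default_rng(2)
frame_times = np.sort(rng.uniform(0.0, 60.0, 900))      # ~15 fps of irregularly spaced frames
roll_t = np.arange(0.0, 60.0, 0.5)                      # roll telemetry at 2 Hz
roll_deg = -30.0 * np.abs(np.sin(2 * np.pi * roll_t / 40.0))  # mostly negative roll, as in an orbit

centers, fps = frame_rate_per_window(frame_times)
roll_at_centers = np.interp(centers, roll_t, roll_deg)  # both streams share the system clock
print("correlation(frame rate, roll):", np.corrcoef(fps, roll_at_centers)[0, 1])
```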

Fig. 15. Frame rate data correlated with aircraft roll angle.

This second set of experiments demonstrated that although network performance becomes more variable compared to static systems when implemented on aerial nodes, net-centric operation of small UAS is still possible using meshed communication. By connecting onboard subsystems via local subnets to the external meshed network, dispersed operators as well as other aircraft in the UAS can communicate directly with different components of the aircraft avionics system. This enables true net-centric operation of the UAS, as data streams can be seamlessly created across any two UAS subsystems. From the view of the operator or other cooperative aircraft, access to these data streams is direct, even if the data themselves travel over multiple hops to reach their destination. Experiments showed that command and control data that include simple one-time commands, continuous telemetry data, and streaming video can all be sent reliably over the same meshed network. Though not described here, the addition of service discovery routines and a publish/subscribe data service further expands this capability such that multiple aircraft can seamlessly send multiple data streams to multiple dispersed operators, thus supporting the full set of communication requirements needed for successful integration into the National Airspace System.

SECTION IV

EXPLOITING CONTROLLED MOBILITY

Unlike terrestrial networks, which tend to be static, and satellite communication systems, which are in fixed orbits, meshed networks of small UA offer a unique opportunity to exploit controlled mobility. As demonstrated in the previous section, real-time communication is challenging for these networks because of inherent variability in wireless connections combined with changing node positions. In the extreme case of sparsely connected, moving nodes, some nodes might not be able to connect with any other node for a long time. In this scenario, only delay-tolerant communication is feasible. Even when a continuous link is established, environmental factors such as multipath, interference, or adversarial jamming can degrade real-time performance relative to expected models. In these cases, the mobility of the nodes themselves can be exploited to improve the network performance. In sparse networks, node mobility enables data ferrying, i.e., physically carrying data packets through the environment between otherwise disconnected nodes. In the case of a connected network, node mobility enables adjustment of local network behavior in response to unmodeled disturbances. This section shows how controlled mobility can be exploited to improve networking performance and enable communication in situations where it cannot be achieved otherwise.

A. Position-Based Communication Models

Given the presence of node mobility in an ad hoc network, transmission of data between a source and destination can take three forms (Fig. 16). Direct communication occurs when two nodes transmit data directly to one another. Relaying occurs when additional nodes are used to receive a transmission from a source and retransmit it to a destination. Data ferrying occurs when a mobile node physically stores and carries data from one location to another. Understanding when and how to take advantage of each of these modes will improve the overall performance of meshed airborne communication networks.

Figure 16
Fig. 16. Three modes of maintaining a communication link between two static nodes A and B.

For direct communication, the well known Shannon–Hartley law describes the capacity of a wireless link as a function of the signal-to-noise ratio (SNR). Combined with the empirical radio propagation model, this law describes an inverse relationship between separation distance and communication link performance. The SNR is defined as

$$\mathrm{SNR} = \frac{P(d)}{N_{0}} = \frac{K_{0}}{N_{0}\,d^{\epsilon}} = \frac{K}{d^{\epsilon}} \qquad (3)$$

where $P(d)$ is the received power defined by (1), $N_{0}$ is the noise and interference level experienced by the receiver, and $K$ is the radio power constant, which is the ratio between the link gain $K_{0}$ and noise $N_{0}$.

The Shannon–Hartley law states that a wireless communication channel has a maximum rate at which information can be transmitted without error, which is a function of the SNR. Combined with the empirical model given in (3), the Shannon–Hartley law is given as

$$R(d) = W\log_{2}\!\left(1 + \frac{K}{d^{\epsilon}}\right) \qquad (4)$$

where $R(d)$ is the Shannon capacity in bits per second and $W$ is the bandwidth of the channel. The delay in delivering a packet of size $b$ from A to B is assumed to be only a function of the insertion time, i.e., propagation delay and channel access delay are assumed to be small as compared to insertion delay. For direct communication, the time delay is given as

$$\tau(d) = \frac{b}{R(d)}. \qquad (5)$$

Thus in the direct mode of communication, the available data rate and time delays are driven explicitly by the distance between source and destination node.
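
As a concrete check of (3)–(5), the short Python sketch below evaluates the direct-link rate and insertion delay. The default parameters (W = 3.6 MHz, K = 2.1 × 10^8, ε = 3.0, b = 1000 bits) are borrowed from the example used later in Section IV-C and are only illustrative assumptions.

```python
import math

def shannon_rate(d, W=3.6e6, K=2.1e8, eps=3.0):
    """Eq. (4): direct-link capacity in bits/s at separation d (meters)."""
    return W * math.log2(1.0 + K / d**eps)

def direct_delay(d, b=1000, **kw):
    """Eq. (5): insertion delay (s) for a b-bit packet sent directly."""
    return b / shannon_rate(d, **kw)

# e.g., a 1000-bit packet over a 5 km direct link takes roughly 0.1 s
print(shannon_rate(5000.0), direct_delay(5000.0))
```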

In the relay mode, an additional node is placed between the source and destination. In packet-based networks, the relay node can only talk to the source or destination at any given time and can only utilize half its available capacity in either direction. However, the closer separation distance increases the throughput capacity of the links. For example, when a single relay is inserted equidistant from the source and destination, the new throughput becomes

$$R_{R}(D) = \frac{R(D/2)}{2} \qquad (6)$$

where $D$ represents the separation distance of nodes A and B and the direct communication distance between A and the relay or B and the relay is $d = D/2$.

In general, if there are $N$ nodes in a linear network (node $i$ sends data to node $i+1$) with positions $\mathbf{p}_{i}$ for $i = 1,\ldots,N$ and we assume only one node can transmit at a time, then the throughput from node 1 to node $N$ is

$$R_{R}^{N} = R_{R}(\mathbf{p}_{1},\ldots,\mathbf{p}_{N}) \equiv \left(\sum_{i=1}^{N-1} R(d_{i,i+1})^{-1}\right)^{-1} \qquad (7)$$

where $d_{i,i+1} = \|\mathbf{p}_{i} - \mathbf{p}_{i+1}\|$, and the insertion delay is

$$\tau_{R}^{N} = \frac{b}{R_{R}^{N}}. \qquad (8)$$

Alternatively, for repeater networks where each radio receives and transmits on different channels, and hence can transmit simultaneously, the end-to-end throughput is

$$R_{\rm repeat}^{N} = R_{\rm repeat}(\mathbf{p}_{1},\ldots,\mathbf{p}_{N}) \equiv \min_{i=1,\ldots,N-1} R(d_{i,i+1}). \qquad (9)$$
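
The chain throughputs in (7) and (9) can be evaluated numerically as sketched below; placing a single relay at the midpoint of a 5 km link reproduces (6). The helper names and radio parameters are illustrative assumptions, not part of the experimental system.

```python
import numpy as np

def link_rate(d, W=3.6e6, K=2.1e8, eps=3.0):
    """Eq. (4): Shannon capacity of a single link of length d (meters)."""
    return W * np.log2(1.0 + K / d**eps)

def chain_rate_packet(positions):
    """Eq. (7): end-to-end throughput of a linear packet-based chain
    (only one node transmits at a time)."""
    p = np.asarray(positions, dtype=float)
    hops = np.linalg.norm(np.diff(p, axis=0), axis=1)
    return 1.0 / np.sum(1.0 / link_rate(hops))

def chain_rate_repeater(positions):
    """Eq. (9): end-to-end throughput of a repeater chain
    (each hop on its own channel, limited by the weakest link)."""
    p = np.asarray(positions, dtype=float)
    hops = np.linalg.norm(np.diff(p, axis=0), axis=1)
    return np.min(link_rate(hops))

# A 5 km source-destination pair with a single midpoint relay
nodes = [(0.0, 0.0), (2500.0, 0.0), (5000.0, 0.0)]
print(chain_rate_packet(nodes), chain_rate_repeater(nodes))  # roughly 35 kb/s vs. 69 kb/s
```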

B. Cooperative Electronic Chaining

The optimal number and positions of relay nodes can be determined for a given source and destination node. When position-dependent jamming and noise are not present in the system or their sources are known, the optimal number and positions for relay nodes can easily be determined from (7) or (9). The presence of unknown jamming, interference, or other position-based noise distorts the communication field (e.g., Fig. 10) and would cause these locations to be suboptimal. In that case, controlled mobility can be used to improve communication performance over position-based solutions.

In order to respond to the unmodeled noise sources, we have developed an adaptive, model-free decentralized control strategy based on extremum seeking control to mitigate these effects [39]. In this framework, primary task nodes such as an aircraft collecting in-situ data and the ground control station are free to move about the environment while other nodes within the UAS move to enable and optimize the meshed communication. Assuming a linear network topology, each node in the communication chain attempts locally to optimize the throughput of the three-node network composed of itself and its neighbors. In other words, the $i$th node attempts to solve

$$\mathbf{p}_{i}^{\ast} = \arg\max_{\mathbf{p}_{i}} R_{R}^{3}(\mathbf{p}_{i-1},\mathbf{p}_{i},\mathbf{p}_{i+1}) \qquad (10)$$

for packet-based networks or

$$\mathbf{p}_{i}^{\ast} = \arg\max_{\mathbf{p}_{i}} R_{\rm repeat}^{3}(\mathbf{p}_{i-1},\mathbf{p}_{i},\mathbf{p}_{i+1}) \qquad (11)$$

for repeater networks.
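
To visualize what (10) asks each node to do, the following sketch simply brute-forces the three-node objective over a grid of candidate relay positions; with no interference the optimum falls at the midpoint between the neighbors. The grid search is only for illustration, since the controller described next has no model of the noise field and never evaluates the objective over a grid. All names and parameters are assumptions.

```python
import numpy as np

def link_rate(d, W=3.6e6, K=2.1e8, eps=3.0):
    return W * np.log2(1.0 + K / np.maximum(d, 1.0)**eps)

def local_objective(p_prev, p_i, p_next):
    """Three-node packet-relay throughput R_R^3 used in Eq. (10)."""
    d1 = np.linalg.norm(p_i - p_prev)
    d2 = np.linalg.norm(p_next - p_i)
    return 1.0 / (1.0 / link_rate(d1) + 1.0 / link_rate(d2))

def best_position(p_prev, p_next, span=3000.0, n=61):
    """Brute-force Eq. (10) over a square grid centered between the neighbors."""
    xs = np.linspace(-span, span, n)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1) + (p_prev + p_next) / 2.0
    vals = np.array([[local_objective(p_prev, grid[i, j], p_next) for j in range(n)]
                     for i in range(n)])
    i, j = np.unravel_index(np.argmax(vals), vals.shape)
    return grid[i, j], vals[i, j]

p_star, R_star = best_position(np.array([0.0, 0.0]), np.array([4500.0, 0.0]))
print(p_star, R_star)   # without interference the optimum is the midpoint
```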

It can be shown that a local gradient-based control law that solves this problem also optimizes the (global) throughput of the entire communication chain from source to destination. However, there are two major reasons why a gradient-based control law cannot be implemented directly. First, since the positions of localized noise sources are not known, the gradient of the communication objective cannot be determined analytically. Second, an unmanned aircraft has nonholonomic constraints, which prevent it from stopping at a given location and limit its ability to follow the gradient. Fortunately, this second limitation can actually be used to address the first through a multivariable extremum seeking (MES) controller that estimates the gradients of the local objective functions while simultaneously climbing them.

Fig. 17 shows the block diagram of the MES algorithm developed for this cooperative electronic chaining problem [39]. Typical MES control works by inserting a dither signal into the control loop and then demodulating the output to estimate and track the gradient of an unknown objective function [40]. In the case of the system depicted in Fig. 17, a guidance vector field controller (LGVF controller [41]) is used to control the UA on an orbit about a virtual center point whose position is driven by “virtual” first-order dynamics. Using the measured SNR from the local communication links with the $i$th node, the local objective function $R_{R}^{3}(\mathbf{p}_{i-1},\mathbf{p}_{i},\mathbf{p}_{i+1})$ can be derived and its gradient can be estimated. The result is a distributed control law that drives the center of each UA loiter circle to a local maximum of total throughput $R_{R}^{N}$.

Figure 17
Fig. 17. An extremum seeking algorithm for a linked network chain of mobile relay nodes in 2-D. For a nonholonomic vehicle such as an unmanned aircraft, an orbit-tracking controller provides the dither signal needed to estimate the field gradient.
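
The following toy simulation, a minimal sketch and not the decentralized LGVF-based controller of [39] and [41], illustrates the extremum-seeking mechanism for a single relay: the orbit itself supplies the dither, the measured throughput is high-pass filtered and demodulated by the orbit phase, and the result is integrated to move the virtual orbit center up the unknown throughput field. The jammer model, gains, and geometry are assumptions chosen only to make the behavior visible.

```python
import numpy as np

W, K0, EPS = 3.6e6, 2.1e8, 3.0          # radio parameters from the Section IV-C example

def noise_factor(p, jammer, A=30.0, sigma=800.0):
    """Assumed position-dependent noise: a jammer raises the noise floor nearby."""
    return 1.0 + A * np.exp(-np.sum((p - jammer) ** 2) / (2.0 * sigma ** 2))

def measured_objective(p, src, dst, jammer):
    """Local three-node packet-relay throughput (Eq. (10)) as 'measured' at p."""
    K = K0 / noise_factor(p, jammer)
    R1 = W * np.log2(1.0 + K / max(np.linalg.norm(p - src), 1.0) ** EPS)
    R2 = W * np.log2(1.0 + K / max(np.linalg.norm(p - dst), 1.0) ** EPS)
    return 1.0 / (1.0 / R1 + 1.0 / R2)

def extremum_seek(src, dst, jammer, steps=4000, dt=0.5):
    """Single-relay extremum-seeking loop: orbit = dither, demodulate, integrate."""
    c = np.array([2250.0, 0.0])          # virtual orbit center, starts on the baseline
    r, omega = 150.0, 0.3                # orbit radius [m], angular rate [rad/s]
    a, k_gain = 0.05, 2e-3               # high-pass pole [rad/s], adaptation gain
    hp = None
    for n in range(steps):
        theta = omega * n * dt
        dither = np.array([np.cos(theta), np.sin(theta)])
        p = c + r * dither               # aircraft position on the orbit
        J = measured_objective(p, src, dst, jammer)
        hp = J if hp is None else hp + dt * a * (J - hp)   # slow estimate of the mean
        c = c + dt * k_gain * (J - hp) * dither            # climb the estimated gradient
    return c

src, dst = np.array([0.0, 0.0]), np.array([4500.0, 0.0])
jammer = np.array([2250.0, -800.0])
print(extremum_seek(src, dst, jammer))   # center drifts to positive y, away from the jammer
```

Because the dither component of the measured throughput is approximately the gradient projected onto the orbit direction, demodulating by the orbit phase and integrating pushes the center up the throughput field without any model of the jammer, which is the essential idea behind the MES controller.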

Fig. 18 shows an example where three UAs adapt to a noise source in order to optimize the communication link from source to destination. The source and destination nodes are located 4500 m apart, and the UAs start spaced along the line between them, at positions marked UAV1 through UAV3. An interfering noise source is located at [2500, 1000] m and causes the UA chain to bend upward, away from the noise. Fig. 18(a) shows the paths of the UAs as they move upward, and Fig. 18(b) shows the minimum SNR (which relates directly to the total chain throughput for a repeater network) improving over time. This example demonstrates that by exploiting mobility, the small UAS has mitigated the effects of a localized noise source on the total throughput capacity between the source (sensing) aircraft and the destination (ground control station).

Figure 18
Fig. 18. A chain of three UAs adapt to a noise source to optimize the communication chain.

While the results provided here for cooperative electronic chaining apply only to linear cascade networks with a single source and destination, the concept can be extended to more general networks with multiple sources and destinations. In that case, each relay node would use a local objective function based on all the different network flows passing through it. Various schemes can be used to prioritize or weight these flows in order to specify global network objectives. Future work will address this broader class of problems.

C. Data Ferrying

Data ferrying offers unmanned aircraft a second opportunity to exploit their intrinsic mobility. For the data-ferrying control mode, data are physically stored and carried by a node that moves within the network. This mode enables delay-tolerant communication in sparse or fractured networks where end-to-end connectivity can never be maintained.

Using (4) to determine capacity as a function of distance, the average capacity of the data-ferrying node can be computed as the integral over the motion of the node. In the simple three-node system of Fig. 16 with only one ferry, the long-term throughput is given as [24]

$$R_{F}(D,V,B) = \frac{BV}{2D}\,\frac{1}{1-\left(1+\frac{B(\epsilon-1)V}{2R_{R}(D)D}\right)^{\frac{-1}{\epsilon-1}}} \qquad (12)$$

where $D$ is the separation distance of the source and destination, $V$ is the speed of the ferry, and $B$ is the size of its data buffer. When $\epsilon = 2$, this reduces to

$$R_{F}(D,V,B) = R_{R}(D) + \frac{BV}{2D}. \qquad (13)$$

The cycle time, which also corresponds to the packet delay, follows as

$$\tau_{F}(D,V,B) = \frac{2D}{V}\left(1-\left(1+\frac{B(\epsilon-1)V}{2R_{R}(D)D}\right)^{\frac{-1}{\epsilon-1}}\right). \qquad (14)$$

When $\epsilon = 2$, this reduces to

$$\tau_{F}(D,V,B) = \frac{B}{R_{R}(D)+\frac{BV}{2D}}. \qquad (15)$$

Equation (12) shows that the increase in link capacity is only bounded by the limitations of the vehicle in terms of its speed and buffer size, but does come at the cost of increasing delay.
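
A direct transcription of (6), (12), and (14) is sketched below. Evaluating it with the example parameters used later in this section (D = 5 km, V = 50 m/s, B = 50 MB) shows the ferry trading roughly two orders of magnitude more throughput for a delay on the order of minutes; the helper names are illustrative only.

```python
import math

def link_rate(d, W=3.6e6, K=2.1e8, eps=3.0):
    return W * math.log2(1.0 + K / d**eps)

def relay_rate(D):
    """Eq. (6): single midpoint relay, packet-based."""
    return link_rate(D / 2.0) / 2.0

def ferry_rate(D, V, B, eps=3.0):
    """Eq. (12): long-term throughput of a single data ferry."""
    x = B * (eps - 1.0) * V / (2.0 * relay_rate(D) * D)
    return (B * V / (2.0 * D)) / (1.0 - (1.0 + x) ** (-1.0 / (eps - 1.0)))

def ferry_delay(D, V, B, eps=3.0):
    """Eq. (14): cycle time (and packet delay) of the ferry."""
    x = B * (eps - 1.0) * V / (2.0 * relay_rate(D) * D)
    return (2.0 * D / V) * (1.0 - (1.0 + x) ** (-1.0 / (eps - 1.0)))

D, V, B = 5000.0, 50.0, 50e6 * 8          # 5 km separation, 50 m/s ferry, 50 MB buffer (bits)
print(ferry_rate(D, V, B) / 1e6, ferry_delay(D, V, B))   # roughly 2 Mb/s with minutes of delay
```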

For stationary source and destination nodes, the best choice of communication mode through the network is a function of several variables, including the separation distance between nodes, the required average data throughput, the maximum tolerable delay, the data ferry speed and buffer capacity, and radio performance capabilities. Typical networking performance requirements are given as a quality-of-service (QoS) metric. In this paper, we consider two main QoS objectives: long-term throughput and the time delay for the delivery of a single packet. Given these two QoS objectives, a phase diagram for the case of a single relay or ferry can be developed by comparing (4), (7), and (12) to determine regions in the QoS space that can be satisfied by each communication mode (Fig. 19).

Figure 19
Fig. 19. Communication mode phase regions and transition boundaries for different separation distances. d = (a) 50 m, (b) 5000 m, (c) 50 km, and (d) 500 km.

As an example, consider a radio link that has bandwidth W = 3.6 MHz, radio power constant K = 2.1 × 10^8 W, path loss exponent ε = 3.0, packet size b = 1000 bits, ferry speed V = 50 m/s, and ferry buffer B = 50 MB. Fig. 19 shows phase diagrams of the feasible communication modes for source–destination pairs separated by various distances. Note that any reduced performance requirement (i.e., lower data rate and higher delay) relative to a point on the diagram can always be met by the communication mode indicated at that point. In other words, the lightly shaded region indicates where the QoS requirement can only be met by ferrying, which can also meet any less demanding performance bounds. Likewise, the unshaded region indicates where relaying is better than direct communication. The dark shaded rectangular region indicates the QoS performance achievable by direct communication. Finally, it should be noted that these phase diagrams scale with the radio bandwidth and power gains; what matters is the relative impact of the different communication modes as a function of separation distance, not the absolute separation distance and QoS values.

Fig. 19(a) shows the phase diagram for a separation distance of only 50 m. Here direct communication slightly outperforms relaying, with both able to achieve close to 40 Mbps with a delay of less than 0.02 ms. Ferrying extends the achievable throughput to close to 200 Mbps, but with approximately 1 s of delay. Next, Fig. 19(b) shows the phase diagram for d = 5000 m. In this case, relaying (capable of at best 40 kbps throughput with 0.02 s delay) now outperforms direct communication (capable of at best 10 kbps with 0.1 s delay). Ferrying extends the maximum achievable throughput to 2 Mbps with over 200 s of delay. Fig. 19(c) and (d) shows plots for separation distances of 50 and 500 km, respectively. These plots show that as separation distance increases, the achievable QoS decreases and relaying outperforms direct communication, with ferrying providing additional throughput at the expense of greater delay.
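
The sketch below reproduces the ballpark numbers behind Fig. 19 by evaluating (4)–(6), (12), and (14) at each separation distance. It is a simplified recomputation under the stated example parameters, not the code used to generate the published figure.

```python
import math

W, K, EPS = 3.6e6, 2.1e8, 3.0       # radio parameters from the example
b, V, B = 1000, 50.0, 50e6 * 8      # packet size [bits], ferry speed [m/s], buffer [bits]

def direct(d):
    R = W * math.log2(1.0 + K / d**EPS)                 # Eq. (4)
    return R, b / R                                     # throughput, delay per Eq. (5)

def relay(d):
    R = W * math.log2(1.0 + K / (d / 2.0)**EPS) / 2.0   # Eq. (6), midpoint relay
    return R, b / R

def ferry(d):
    R_rel, _ = relay(d)
    x = B * (EPS - 1.0) * V / (2.0 * R_rel * d)
    shrink = 1.0 - (1.0 + x) ** (-1.0 / (EPS - 1.0))
    return (B * V / (2.0 * d)) / shrink, (2.0 * d / V) * shrink   # Eqs. (12), (14)

for d in (50.0, 5000.0, 50e3, 500e3):
    (Rd, Td), (Rr, Tr), (Rf, Tf) = direct(d), relay(d), ferry(d)
    print(f"d={d:>8.0f} m  direct {Rd:10.3g} b/s {Td:8.3g} s | "
          f"relay {Rr:10.3g} b/s {Tr:8.3g} s | ferry {Rf:10.3g} b/s {Tf:8.3g} s")
```

The printed values track the trends described above: direct communication wins only at very short range, relaying dominates at intermediate and long range, and ferrying buys additional throughput at the cost of delays measured in minutes to hours.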

The phase diagram can be used in several different ways to exploit mobility in communication networks. The first method tracks the behavior of a communication link in the QoS space. As the communication requirements and/or link characteristics such as separation distance change, the operating point moves through the phase diagram. As this path approaches a transition between communication modes, the network can prepare to adapt. For example, in order to transition from direct to relay mode, a relay node must move to the vicinity of the communication link. Likewise, to move from relay to ferry mode, a mobile relay node is required.

A more likely method for using the phase diagram concept for airborne networks is to allocate bandwidth across several different modes. For example, an airborne node can support both direct communication and data ferrying at different QoS levels. The direct mode can be used to send priority information needed for collision avoidance. On the other hand, atmospheric sensing data, which has low priority and is not being used in real time, can be sent via the ferrying mode.

Described here for a single source–destination link, the concept behind the phase diagram can be extended to multihop networks with multiple flows [42]. The different data types in future networks supporting small UASs will have a large variety of QoS requirements, which in turn become requirements on the placement and mobility patterns of vehicles in the airborne network. By taking advantage of the increased performance that can be gained from controlled mobility, future airborne communication networks may be able to support the large demands placed on them by small unmanned aircraft systems.

SECTION V

CONCLUSION

This paper explored the role of meshed airborne communication in the deployment of small unmanned aircraft systems. The networked communication demands of unmanned aircraft systems in general, and small unmanned aircraft systems in particular, are large compared to manned aircraft since telemetry, command and control, health and safety, and payload data must be sent from multiple aircraft to multiple dispersed users such as the unmanned aircraft system operator, air traffic control, and other aircraft in the vicinity. Many future applications of small unmanned aircraft systems will require quick response times in areas where permanent supporting communication infrastructures will not exist. Thus, meshed networking architectures will provide the needed communication for the large number of small aircraft expected to be deployed in the future.

Flight demonstrations were conducted to investigate the behavior of highly mobile meshed networks of small unmanned aircraft. Experimental results validated the feasibility of meshed networking on mobile aerial nodes and characterized the baseline performance of the Heterogeneous Unmanned Aircraft System developed at the University of Colorado. Compared to a static network of meshed nodes, the inclusion of a communication relay on an unmanned aircraft extends the range of a single communication link and increases the end-to-end throughput of a network chain. The experiments showed that this improvement in performance came at the expense of increased variability in the network performance metrics. Experiments also verified the empirical radio propagation model that relates received radio power to separation distance between nodes. The general trends observed in this model serve as the basis for aircraft control algorithms designed to exploit mobility in airborne networks.

A second set of experiments verified that net-centric operation of a small unmanned aircraft system is possible using a meshed airborne communication network. These experiments showcase the integration of local intravehicle communication buses with the external meshed network architecture, allowing for seamless transmission of subsystem data directly over the network. Through this architecture, command and control, telemetry, and high-rate payload data were reliably sent over the same meshed communication network. Although performance of the network varied due to the dynamic nature of aircraft flight, command and control over the system was maintained at all times.

Key issues that emerged from these experiments were the impact of mobility on network routing protocols and the fact that end-to-end connectivity from a source to destination is not always maintained. Thus, off-the-shelf meshed network standards need to be modified before they can be applied to meshed airborne networking. In particular, congestion control mechanisms designed for static networks cannot distinguish between packet loss due to congestion and packet loss due to mobility. Therefore, the default parameter settings in the off-the-shelf protocols have to be adjusted. Furthermore, a delay-tolerant networking architecture should be implemented to account for times when the network becomes disconnected.

Although mobility causes problems for off-the-shelf networking standards, it also provides a unique opportunity that can be exploited to improve network performance. Adaptive, model-free decentralized mobility control algorithms can mitigate noise and interference from unknown sources. Using these algorithms, relay nodes in a linear chain can be moved to locations that improve the overall throughput capacity of the network. Although analyzed here in a linear topology, the concept can also be extended to more general network configurations. Likewise, the concept of data ferrying allows mobile nodes to store and carry delay-tolerant data between nodes in the network. This concept mitigates decreased connectivity in stressed or fractured networks, enabling communication that otherwise would not be possible. Quality-of-service demands can be mapped into a phase space that indicates what mobility methods are needed to achieve a given demand.

Future work will continue to characterize airborne communication networks and the limits of performance and safety inherent in them. As the FAA moves forward to develop standards for integrating unmanned aircraft into the controlled airspace of the National Airspace System, the unique limitations and potential benefits to society of small unmanned aircraft systems must be considered explicitly. In this consideration, communications will be key and will require further inputs from research that builds on the efforts described in this paper.

Acknowledgment

The authors would like to thank the members of the University of Colorado AUGNet Research Group, especially C. Dixon and D. Henkel, for their various contributions to the research activities described in this paper.

Footnotes

Manuscript revised May 15, 2008. Current version published January 16, 2009. This work was supported by the U.S. Air Force under Grant FA9550-06-1-0205 and by L3 Communication.

E. W. Frew is with the Aerospace Engineering Sciences Department and the Research and Engineering Center for Unmanned Vehicles, University of Colorado, Boulder, CO 80309-0429 USA (e-mail: eric.frew@colorado.edu).

T. X. Brown is with the Electrical Engineering Department and the Interdisciplinary Telecommunications Program, University of Colorado, Boulder, CO 80309-0425 USA (e-mail: timxb@colorado.edu).

1. With some exceptions, such as balloons that are classified as aircraft.

2. For example, the Microair UAV-S transponder, http://www.microair.com.au/index.aspx?page=186&productID=77.

References

1. Civilian applications of unmanned aircraft systems (CAUAS)

2007, http://www.cauas.colorado.edu/

2. UAV systems for sensor dispersal, telemetry, and visualization in hazardous environments

B. Argrow, D. Lawrence, E. Rasmussen

Reno, NV
Proc. 43rd Aerosp. Sci. Meeting Exhibit, Jan. 10–13, 2005

3. Utilization of unmanned aerial vehicles for global climate change research

National Oceanic and Atmospheric Administration (NOAA)

2006, http://www.uas.noaa.gov/workshops/workshop3/index.html

4. New technologies to support NASA's mission to planet earth satellite remote sensing product validation: The use of an unmanned autopiloted vehicle (UAV) as a platform to conduct remote sensing

P. Coronado, F. Stetina, D. Jacob

Orlando, FL
Proc. SPIE, 1998, Vol. 3366, 38–49

5. Houston cops test drone now in Iraq, operator says

E. Sofge

Popular Mech., 2008, http://www.popularmechanics.com/science/air_space/4234272.html

6. Small UAS communications mission

T. Zajkowski, S. Dunagan, J. Eilers

Salt Lake City, UT
Proc. 11th Biennial USDA Forest Service Remote Sens. Applicat. Conf., Apr. 24–28, 2006

7. Capturing vertical profiles of aerosols and black carbon over the Indian Ocean using autonomous unmanned aerial vehicles

C. E. Corrigan, G. Roberts, M. Ramana, D. Kim, V. Ramanathan

Atmos. Chem. Phys. Discuss., vol. 7, p. 11429–11463, 2007

8. Applications of aerosondes in the arctic

J. A. Curry, J. Maslanik, G. Holland, J. Pinto

Bull. Amer. Meteor. Soc., vol. 85, issue (12), p. 1855–1861, 2004

9. NOAA and partners conduct first successful unmanned aircraft hurricane observation by flying through Ophelia

NOAA, 2005, http://www.noaanews.noaa.gov/stories2005/s2508.htm

10. Civilian market for unmanned aircraft struggles to take flight

B. Wagner

Nat. Defense Mag., Oct. 2007

11. Title 14 Code of Federal Regulations (14 CFR) Part 91

Sec. 91.113

12. Federal Aviation Administration, Order 7610.4K, Special Military Operations

Feb. 19, 2004

13. Safety considerations for operation of unmanned aerial vehicles in the National Airspace System

R. Weibel, R. J. Hansman

Tech. Rep. ICAT 2005-01, Nov. 21, 2006

14. Meeting the challenge: Unmanned aircraft systems

Fed. Aviation Admin. R&D Rev., Vol. 4, 2006

15. Unmanned Aircraft Systems: The Global Perspective

P. van Blyenburgh

Paris, France: UVS International, 2007

16. Communications operating concept and requirements (COCR) for the future radio system

EUROCONTROL/FAA Future Communications Study Operational Concepts and Requirements Team

Tech. Rep. 1.0, Mar. 2006

17. Multi-channel Iridium communication system for polar field experiments

A. J. Mohammad, V. Frost, S. Zaghloul, G. Prescott, D. Braaten

Anchorage, AK
Proc. Int. Geosci. Remote Sens. Symp. (IGARSS), 2004, Vol. 1, 121–124

18. Estimation of future communications bandwidth requirements for unmanned aircraft systems operating in the national airspace system

S. J. Henriksen

Rohnert Park, CA
Proc. AIAA InfoTech@Aerospace, 2007, Vol. 3, 2746–2754

19. Information energy for sensor-reactive UAV flock control

D. Lawrence, K. Mohseni, R. Han

Chicago, IL
Proc. AIAA 3rd “Unmanned Unlimited” Tech. Conf., Workshop, Exhib., Sep. 20–23, 2004

20. Sensorflock: An airborne wireless sensor network of micro-air vehicles

J. Allred, A. Hasan, S. Panichsakul, W. Pisano, P. Gray, J. Huang, R. Han, D. Lawrence, K. Mohseni

Sydney, Australia
Proc. 5th Int. Conf. Embed. Netw. Sensor Syst., 2007, 117–129

21. Experiments using small unmanned aircraft to augment a mobile ad hoc network

T. X. Brown, B. M. Argrow, E. W. Frew, C. Dixon, D. Henkel, J. Elston, H. Gates

Emerging Technologies in Wireless LANs: Theory, Design, and Deployment

B. Bing, Cambridge, U.K.: Cambridge Univ. Press, 2007, 28, pp. 123–145

22. Networked communication, command, and control of an unmanned aircraft system

E. W. Frew, C. Dixon, J. Elston, B. Argrow, T. X. Brown

AIAA J. Aerosp. Comput., Inf., Commun., vol. 5, issue (4), p. 84–107, Apr. 2008

23. Sensor data collection through gateways in a highly mobile mesh network

A. Jenkins, D. Henkel, T. X. Brown

Hong Kong, China
Proc. IEEE Wireless Commun. Netw. Conf., 2007, 2786–2791

24. On controlled node mobility in delay-tolerant networks of unmanned aerial vehicles

T. X. Brown, D. Henkel

Boulder, CO
Proc. of Int. Symp. Adv. Radio Technol., Mar. 7–9, 2006

25. Phase transitions for controlled mobility in wireless ad hoc networks

C. Dixon, D. Henkel, E. W. Frew, T. X. Brown

Keystone, CO
Proc. AIAA Guidance, Navig., Contr. Conf., Aug. 2006

26. Flight demonstrations of cooperative control for uav teams

J. How, E. King, Y. Kuwata

Chicago, IL
Proc. AIAA 3rd “Unmanned-Unlimited” Tech. Conf., Workshop, Exhib., Sep. 20–23, 2004, Vol. 1, 505–513

27. Decentralized cooperative aerial surveillance using fixed-wing miniature UAVs

R. Beard, T. McLain, D. Nelson, D. Kingston, D. Johanson

Proc. IEEE, vol. 94, issue (7), p. 1306–1324, 2006

28. Design and implementation of a self-configuring ad hoc network for unmanned aerial systems

H. C. Christmann, E. N. Johnson

Rohnert Park, CA
Collection Tech. Papers 2007 AIAA InfoTech@Aerosp. Conf., 2007, Vol. 1, 698–704

29. A modular software infrastructure for distributed control of collaborating UAVs

A. Ryan, X. Xiao, S. Rathinam, J. Tisdale, M. Zennaro, D. Caveney, R. Sengupta, J. K. Hedrick

Keystone, CO
Proc. AIAA Guidance, Navig., Contr. Conf., Aug. 2006

30. A full-scale wireless ad hoc network test bed

T. X. Brown, S. Doshi, S. Jadhav, D. Henkel, R. G. Thekkekunnel

Boulder, CO
Proc. Int. Symp. Adv. Radio Technol., Mar. 1–3, 2005

31. Networked UAV communication, command, and control

J. Elston, E. W. Frew, B. Argrow

Keystone, CO
Proc. AIAA Guidance, Navig., Contr. Conf., Aug. 2006

32. A reliable sensor data collection network using unmanned aircraft

D. Henkel, C. Dixon, J. Elston, T. X. Brown

Florence, Italy
Proc. 2nd Int. Workshop Multi-Hop Ad Hoc Netw.: From Theory to Reality (REALMAN), May 26, 2006

33. Net-centric cooperative tracking of moving targets

J. Elston, E. W. Frew

Rohnert Park, CA
Proc. AIAA Infotech@Aerospace, May 2007

34. A distributed avionics package for small UAVs

J. Elston, B. Argrow, E. Frew

Arlington, VA
Proc. AIAA Infotech@Aerospace, Sep. 2005

35. Piccolo User's Guide

Cloud Cap Technology

2008, http://www.cloudcaptech.com/piccolo_plus.htm

36. A delay-tolerant network architecture for challenged internets

K. Fall

Proc. SIGCOMM'03, 2003, 27–34

37. Delay-tolerant network architecture

V. G. Cerf, S. C. Burleigh, R. C. Durst, K. Fall, A. J. Hooke, K. L. Scott, L. Torgerson, H. S. Weiss

IETF Internet Draft, Mar. 2006

38. Kriging: A method of interpolation for geographical information systems

M. A. Oliver, R. Webster

Int. J. Geogr. Inf. Syst., vol. 4, p. 313–332, 1990

39. Decentralized Extremum-Seeking Control of Nonholonomic Vehicles to Form a Communication Chain

C. Dixon, E. W. Frew

Lecture Notes in Computer Science, Berlin, Germany, Springer-Verlag, Nov. 2007, vol. 369

40. Stability of extremum seeking feedback for general nonlinear dynamic systems

M. Krstic, H. H. Wang

Automatica, vol. 36, p. 595–601, 2000

41. Cooperative standoff tracking of moving targets using lyapunov guidance vector fields

E. W. Frew, D. A. Lawrence, S. Morris

AIAA J. Guidance, Contr., Dyn., vol. 31, issue (2), p. 290–306, Mar.–Apr. 2008

42. Towards autonomous data ferry route design through reinforcement learning

D. Henkel, T. X. Brown

Proc. Auton. Opportun. Commun. Workshop, 2008

Authors

Eric W. Frew

Member, IEEE

Eric W. Frew (Member, IEEE) received the B.S. degree in mechanical engineering from Cornell University, Ithaca, NY, in 1995 and the M.S. and Ph.D. degrees in aeronautics and astronautics from Stanford University, Stanford, CA, in 1996 and 2003, respectively.

He is an Assistant Professor in the Aerospace Engineering Sciences Department, University of Colorado (CU) at Boulder, where he is also a Member of the Research and Engineering Center for Unmanned Vehicles. Prior to joining the CU Boulder faculty, he was a Postdoctoral Researcher at the Center for Collaborative Control of Unmanned Vehicles, University of California, Berkeley, from June 2003 through July 2004. His research efforts focus on autonomous flight of unmanned aircraft systems, controlled mobility in mobile networks, optimal distributed sensing, cooperative mobile systems for in-situ volumetric sensing, planning and control under uncertainty, and vision-based control.

Prof. Frew was the 2006 AIAA Rocky Mountain Section Young Engineer of the Year.

Timothy X. Brown

Timothy X. Brown received the B.S. degree in physics from Pennsylvania State University, University Park, and the Ph.D. degree in electrical engineering from California Institute of Technology, Pasadena, in 1990.

He is an Associate Professor in electrical engineering and Director of the Interdisciplinary Telecommunications Program, University of Colorado, Boulder. His research interests include adaptive network control, machine learning, and wireless communication systems. His current research funding includes NSF, AFOSR, FAA, and industry. His projects include the role of mobility in network control, denial of service vulnerabilities in current wireless protocols, spectrum policy frameworks for cognitive radios, and indoor wireless network performance.

Prof. Brown received the NSF CAREER Award and the GWEC Wireless Educator of the Year Award.
