Software-Driven Connectivity Orchestration for Multidomain Network Functions Virtualization Ecosystems

This article proposes a novel connectivity orchestration service for multidomain network functions virtualization (NFV) ecosystems, addressing the isolation and configuration limitations of current interdomain connectivity approaches. The service automatically provides link-layer connectivity to remote virtual network functions using software-defined networking (SDN) technologies.


Introduction
NFV provides enormous flexibility to deploy software functions, commonly referred to as virtual network functions (VNFs), on different NFV infrastructure domains, for instance, on geographically dispersed cloud computing facilities or network edge locations. This flexibility is fundamental to meet the stringent performance requirements of services in 5G and beyond networks. However, the establishment of proper connectivity among VNFs deployed on different NFV domains has received little attention from both the research and the standardization communities. Existing solutions normally assume the availability of link-level (layer 2) data paths among NFV domains,4 which may not always be realistic. Alternatively, communications among remote VNFs have been provided through network-level routing mechanisms (layer 3), using IP routing over the networks of the underlying Internet service providers (ISPs).

Related Work
NFV was proposed to address the limitations of relying on proprietary hardware for network functionalities, by implementing them in software and deploying them as VNFs. It establishes an abstraction layer that decouples hardware from functionalities, enabling a greenfield environment for deploying network services. ETSI conducts standardization activities for NFV, including the definition of an architectural framework,3 where management and orchestration (MANO) functions support the automated deployment of network services built from interconnected VNFs. On the other hand, SDN introduces a data plane with programmable network elements (e.g., switches) and centralizes the control logic in a hierarchically superior node referred to as the controller. The SDN architecture, defined by the Open Networking Foundation,8 facilitates communication between the controller and programmable network devices, the exposure of state information to external applications, and the interoperation among different controllers.
While some research has explored the synergies of NFV and SDN to provide networking solutions, most studies have focused on conceptual aspects rather than on practical implementations.9 Only a reduced set of studies have experimentally evaluated the capacity of NFV platforms to deploy multidomain 5G services. In one study,10 a streaming service is deployed over several geographically distributed NFV infrastructures, using layer 3 virtual private networks (VPNs) to support interdomain VNF traffic. Another study11 formulates an algorithm for VNF placement and traffic steering in an NFV ecosystem, which is experimentally validated using NFV, SDN, and layer 3 VPNs for multidomain communication. Other studies conducted in European research projects5,6 showcase the prevalent approach, employing network-level routing to support interdomain VNF communications.
However, this approach raises serious concerns that may limit the correct operation of telecommunication and vertical-sector services on multidomain NFV environments. On one hand, the use of network-level routing mechanisms hinders proper isolation among multidomain 5G services. That is, in the absence of specific mechanisms to prevent it, the VNFs of one service could be reachable from the VNFs of other services, or by untrusted third parties, using the IP addresses of the target VNFs. On the other hand, a layer 3 approach entails the potential need for additional (undesirable) network-layer configurations on VNFs and their underlying ISP networks after a service deployment. For example, the next-hop address of a VNF might correspond not to another VNF of the service, as would be expected according to the service descriptor, but to an edge router of the NFV infrastructure domain where the VNF is deployed. Moreover, the exchange of multicast and broadcast traffic among remote VNFs may require the installation of specific forwarding state on ISP routers, which may simply prohibit this traffic over their networks, preventing the proper execution of the service. A layer 2 approach based on the allocation of virtual local area networks (VLANs) to 5G services might be a natural alternative to address the aforementioned limitations, using link-level data paths from the underlying ISP networks or layer 2 VPN services.12 However, this approach raises significant challenges in terms of scalability and automated provisioning.
All these aspects have been carefully studied in prior research work,7 jointly developed between Universidad Carlos III de Madrid (UC3M) and Telefónica I+D. This work presented L2s, a platform to provide secure link-level connectivity among NFV domains. However, the operation of L2s is based on a set of VLANs whose configuration is relatively static at each NFV domain, presenting limitations to accommodate the dynamism of the communication requirements in multidomain NFV ecosystems.

Design of the Interdomain Connectivity Orchestration Service
The design of the connectivity orchestration service allows the creation of virtual networks among different NFV infrastructure domains. These networks are automatically provisioned on demand, during the deployment process of a multidomain 5G service. VNFs of the same service that are deployed on different NFV domains can be allocated an interdomain virtual network and attach to that network through one of their network interfaces. The orchestration service guarantees the provision of link-layer connectivity among all the VNFs that are connected to the same virtual network, regardless of the actual domain where each VNF is executed. In practice, such connectivity is provided on top of the underlying networks of NFV infrastructure providers and ISPs. The interdomain connectivity orchestration service is intended to support the complete life-cycle management of these virtual networks, including not only their creation but also their modification (e.g., extending a virtual network to new NFV domains) and termination. Figure 1 illustrates our abstraction of an interdomain link-layer virtual network.
Our solution addresses the aforementioned limitations of the layer 3 and layer 2 approaches to interdomain VNF communication. Figure 2 illustrates the design of the connectivity orchestration service, which encompasses different logical components within a data plane and a control plane. Our design choices were guided by the following criteria: 1) automation in the creation of interdomain virtual networks, along with the flexibility to implement different management policies for exchanging VNF data traffic among NFV domains; 2) independence of the service descriptor from the multidomain nature of the NFV ecosystem (a 5G service should function correctly based on its service descriptor, regardless of the specific NFV domains involved in its deployment); and 3) compatibility with existing initiatives and ease of practical implementation, leveraging well-established and widely adopted NFV/SDN specifications and Internet protocols.

Data Plane Components
The service design incorporates data plane elements responsible for data forwarding functionalities. They are deployed at the network edge of every NFV domain to facilitate interdomain VNF communications. These elements, referred to as programmable layer 2 switches (PLSs) in Figure 2, provide a number of access ports at every NFV domain to attach VNFs and support their interdomain communications.
PLS elements can be implemented in software and deployed as regular VNFs on NFV domains. This makes it possible to exploit the inherent advantages of NFV technology, such as the flexibility to change the resources allocated to a PLS deployment, for instance, by incorporating additional PLS instances or by scaling them vertically.
To support data plane communications across NFV domains, PLS elements are interconnected through point-to-point links. These links can be established over the underlying networks of NFV infrastructure providers and ISPs, using standard IP tunneling protocols, such as virtual extensible local area network (VXLAN) or generic routing encapsulation (GRE). Moreover, the traffic exchanged among PLS entities may be protected through state-of-the-art security mechanisms, e.g., IP security (IPsec).
The set of interconnected PLS elements creates an overlay network spanning all NFV infrastructure domains. This network can provide redundant links between domains and support multiple end-to-end communication paths for remote VNFs.

Control Plane Components
In our solution, the overlay network can be programmed using SDN, by installing traffic forwarding rules on the PLS elements. This characteristic allows for the separation of interdomain VNF communications into isolated virtual networks, which are built on top of the overlay network. It also provides flexibility in managing data communications among VNFs connected to the same interdomain virtual network. This includes ensuring a minimum bandwidth, enabling shortest-path communications with different metrics, enhancing resilience in case of path failures, or performing load balancing in PLS elements.
In the architectural design of Figure 2, the interdomain connectivity orchestrator (IDCO) is the control plane entity that exploits such programmability. It features an SDN-based control interface to interact with the PLS elements, enabling the management of forwarding rules on these devices. The IDCO comprises the following modules:
• The northbound interface (NBI) module provides the point of access to the functionalities of the orchestration service.
• The PLS manager module handles the communication with the PLS elements through standard SDN protocols (e.g., OpenFlow13 or NETCONF14), as sketched below. The PLS manager also manages the overlay network topology, collects statistics, and detects unexpected events like link failures, reacting to them.
• The TM module determines the network paths for VNF data flows on the overlay network. To this purpose, it may consider TM policies or the link status characterized by the PLS manager, facilitating the application of traffic engineering principles.
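To make this interaction more concrete, the following is a minimal sketch of how forwarding rules could be installed on a PLS element through OpenFlow, written with an SDN framework such as Ryu. The application name, MAC addresses, and port number are illustrative assumptions, not part of the actual IDCO implementation.

```python
# Sketch: install a forwarding rule on a PLS element over OpenFlow 1.3.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class IdcoPlsManager(app_manager.RyuApp):  # hypothetical name
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_ready(self, ev):
        dp = ev.msg.datapath  # a PLS element registering with the controller
        parser = dp.ofproto_parser
        # Forward frames only between the MAC pair of one virtual network;
        # with no default flooding rule, other traffic is dropped (isolation).
        match = parser.OFPMatch(eth_src="fa:16:3e:00:00:1a",
                                eth_dst="fa:16:3e:00:00:1b")
        actions = [parser.OFPActionOutput(2)]  # port toward the peer PLS
        inst = [parser.OFPInstructionActions(dp.ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```

Matching on source and destination MAC addresses is one simple way to confine traffic to a single interdomain virtual network; richer policies (bandwidth guarantees, load balancing) would translate into additional match fields and rules.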
The modular design of the IDCO allows for flexible integration of supplementary functionalities, for instance, to monitor data traffic transmitted over an interdomain virtual network or to temporarily deactivate and reactivate VNF communications for security purposes.

Operational Aspects
Our solution considers a transport provider that has service level agreements with a set of NFV infrastructure providers. The transport provider can instantiate a multidomain network service including a PLS element in each NFV domain. This service can be deployed as any other network service of the multidomain NFV ecosystem, using the existing MANO facilities. Using the PLS elements, along with its own network infrastructure, the transport provider builds an overlay network that spans every NFV domain.
A service provider may now request the deployment of a service (referred to as the target service) from the MANO system of the NFV ecosystem. The target service may be built by multiple VNFs, which have to be deployed on different NFV domains (e.g., to ensure appropriate latency for end users). Some VNFs may require link-layer connectivity (e.g., to isolate those VNFs from other services or to share a broadcast/multicast domain).
As part of deploying the target service across the intended NFV domains, the MANO platform requests the IDCO to create the required interdomain virtual networks, identifying the point of attachment of each VNF to its corresponding PLS element. The IDCO then installs the necessary forwarding rules in the PLS elements to create the interdomain virtual networks and support link-layer inter-VNF communications. Once the target service is available, the MANO system notifies the service provider accordingly.

Prototype Implementation
We have developed a basic prototype of each component of the architectural design shown in Figure 2 to validate the feasibility of our design. The PLS implementation is an enhancement of the prototype presented by Vidal et al.7 It supports SDN-based operation using Open vSwitch, an open source implementation of a programmable layer 2 switch. SDN programmability in Open vSwitch is supported through the OpenFlow protocol.13 Point-to-point links between PLS elements are created using Linux VXLAN interfaces, and these links are protected with standard IPsec. The PLS prototype has been packaged as a VNF, using a virtual machine with Linux Ubuntu Server 18.04 LTS, 1 GB of RAM, one virtual CPU (vCPU), and 20 GB of disk storage.
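As a rough sketch of this setup, the following Python fragment shows how a point-to-point VXLAN link could be created on Linux and attached to an Open vSwitch instance. The interface name, VXLAN network identifier, bridge name, and IP addresses are illustrative assumptions rather than values from our prototype.

```python
# Sketch: create a unicast VXLAN tunnel toward a peer PLS and attach it
# to the local Open vSwitch bridge.
import subprocess

def create_pls_link(ifname: str, vni: int, local_ip: str, remote_ip: str,
                    bridge: str = "br-pls") -> None:
    # Point-to-point VXLAN interface toward the remote PLS element.
    subprocess.run(["ip", "link", "add", ifname, "type", "vxlan",
                    "id", str(vni), "local", local_ip, "remote", remote_ip,
                    "dstport", "4789"], check=True)
    subprocess.run(["ip", "link", "set", ifname, "up"], check=True)
    # Expose the tunnel endpoint as a port of the programmable switch.
    subprocess.run(["ovs-vsctl", "add-port", bridge, ifname], check=True)

# Example: overlay link between two PLS elements (addresses assumed).
create_pls_link("vxlan-ab", 42, "10.0.0.1", "10.0.0.2")
```

In practice, the IPsec protection mentioned above would be configured on the underlay path carrying the VXLAN traffic, which is omitted here for brevity.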
The PLS manager and TM modules are implemented as a single SDN application using Ryu,15 an open source SDN framework. The PLS manager utilizes the NetworkX Python package to represent the overlay network topology as a graph, which is automatically discovered and updated through the Ryu SDN controller. The TM module utilizes NetworkX to calculate shortest paths on the graph, applying Dijkstra's algorithm over link attributes such as the available bandwidth and latency.
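The following sketch illustrates this computation, building the overlay topology as a NetworkX graph and selecting a path with Dijkstra's algorithm. The link metrics are those measured in our proof of concept (see the next section), while the cost function combining them is an assumption for illustration.

```python
# Sketch: overlay topology as a graph with per-link costs, and Dijkstra
# path selection as performed by the TM module.
import networkx as nx

overlay = nx.Graph()
# (available bandwidth in Mb/s, RTT in ms) per point-to-point PLS link
links = {("A", "B"): (898, 1.716),
         ("A", "C"): (286, 4.492),
         ("C", "B"): (288, 4.399)}
for (u, v), (bw, rtt) in links.items():
    # Illustrative cost: penalize low bandwidth and high latency.
    overlay.add_edge(u, v, cost=rtt / bw)

path = nx.dijkstra_path(overlay, "A", "B", weight="cost")
print(path)  # ['A', 'B'] -- the direct link offers the best performance
```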
For validation purposes, our IDCO implementation supports the creation of point-to-point interdomain virtual networks, enabling data communications between pairs of VNFs (implementing multipoint virtual networks is part of our future work). The NBI module acts as an HTTP application programming interface that accepts requests to create such networks among NFV domains. The IDCO prototype has been installed on a virtual machine with a Linux Ubuntu Server 18.04 LTS distribution, two vCPUs, 4 GB of RAM, and 20 GB of disk storage.
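As an illustration of how such a request might look, the sketch below issues a network creation request to the NBI. The endpoint URL, path, and JSON schema are hypothetical, since the exact API of the prototype is not documented here.

```python
# Sketch: request the creation of a point-to-point interdomain virtual
# network through the IDCO NBI (hypothetical endpoint and schema).
import requests

payload = {
    "name": "ns1-net",  # identifier of the virtual network
    "endpoints": [
        {"pls": "pls-a", "port": 2, "mac": "fa:16:3e:00:00:1a"},
        {"pls": "pls-b", "port": 2, "mac": "fa:16:3e:00:00:1b"},
    ],
}
resp = requests.post("http://idco.example:8080/virtual-networks",
                     json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())
```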

Proof of Concept and Results
We have accomplished a functional validation of our proposal through a proof of concept, using the prototypes of the IDCO and the PLS. The proof of concept encompasses the execution of two network services on a multidomain NFV ecosystem (NS 1 and NS 2 in Figure 3). NS 1 is built by two VNFs, VNF 1a and VNF 1b, which must be interconnected at layer 2 across different NFV domains. We will use our solution to create an interdomain virtual network and support the connectivity of both VNFs. Thus, they will be able to communicate as if they were attached to a layer 2 switch, satisfying the connectivity requirements of the network service. NS 2 also has two interconnected VNFs that must be deployed on separate NFV domains, VNF 2a and VNF 2b. Our solution will support their connectivity using a second interdomain virtual network. The interdomain virtual networks will isolate the data communications of NS 1 from those of NS 2, and vice versa.
Figure 4 represents the various components of the proof of concept and their deployment on the NFV ecosystem. The ecosystem comprises three distinct NFV domains. Domains A and B are built from commercial off-the-shelf server computers, whereas domain C is composed of a rack of resource-constrained devices (Raspberry Pi single-board computers). Each NFV domain uses OpenStack as the virtual infrastructure manager, and the MANO of resources across the three NFV domains is facilitated by the ETSI open source MANO (OSM) software. The NFV ecosystem is presented in detail by Vidal et al. The ecosystem was extended with an instance of the IDCO, hosted on a virtual machine collocated with the OSM software.
As an initial step, we created an NFV service descriptor with three PLS VNFs. The descriptor includes the definition of the point-to-point links to be established between PLS elements as VXLAN tunnels (links A-B, A-C, and C-B in Figure 4), along with the necessary configuration actions to automatically set up such links at instantiation time. The OSM software was then used to automatically deploy the service, instantiating a PLS VNF on each of the domains and creating their point-to-point links. The configuration of the PLS VNFs concluded with the registration of their respective programmable switching functions (provided by Open vSwitch) within the IDCO. Figure 5(a) shows the registration of these three VNFs in the IDCO through OpenFlow (representing the OpenFlow directives exchanged between the PLS manager module and the PLS VNFs). These registration events started at the beginning of the experiment, allowing the PLS manager to discover the overlay network topology formed by the PLS VNFs.
The proof of concept proceeded with the deployment of NS 1. In the experiment, VNF 1a was deployed on domain A as a traffic generator, whereas VNF 1b served as a traffic sink on domain B. The OSM software was instructed to attach VNF 1a to port 2 of the PLS VNF in domain A (this was facilitated by the use of OpenStack provider networks). Similarly, VNF 1b was connected to the same port of the PLS VNF in domain B. Once both VNFs were operational, within approximately 25 s of initiating the experiment, we made a request to the IDCO NBI to create an interdomain virtual network between domains A and B. The request specified the allocation of port 2 in both PLS a and PLS b to the virtual network, along with the MAC addresses of VNF 1a and VNF 1b. These addresses were explicitly specified in our setup, although they could be obtained from OSM in a realistic situation.
In this proof of concept, the default TM policy prioritizes network paths based on Dijkstra's lowest-cost algorithm, favoring links that do not yet accommodate other flows. The costs assigned to each link are determined through a preliminary performance analysis, considering available bandwidth and round-trip time (RTT) metrics. The analysis revealed the following results: link A-B offers an available bandwidth and RTT of 898 Mb/s and 1.716 ms, respectively; link A-C, 286 Mb/s and 4.492 ms; and link C-B, 288 Mb/s and 4.399 ms. Thus, we assigned a lower cost to link A-B, since it provides better performance.
Following the request to the IDCO, the TM module determined that data communications between VNF 1a and VNF 1b should utilize the network path formed by link A-B, and the PLS manager configured it through the SDN controller. We then requested a second interdomain virtual network for NS 2, whose VNFs were likewise deployed on domains A and B. According to the default TM policy, the TM selected an unused network path through links A-C and C-B, aiming to balance the traffic load in the overlay network. This resulted in the configuration of appropriate traffic forwarding rules on PLS a, PLS b, and PLS c, as shown by the OpenFlow events in Figure 5(a). After the instantiation, we configured another 10 Mb/s UDP transmission, from VNF 2a to VNF 2b. Figure 5(c) shows the average traffic throughput traversing port 2 of PLS c (dashed purple line), validating the utilization of the new data path for the interdomain virtual network. Figure 5(b) illustrates the different RTT of this network path, which is aligned with our previous link performance analysis.
As a next step, we tested the IDCO's ability to handle unexpected events within the overlay network. The link between PLS c and PLS b was brought down, triggering an event that was captured by the PLS manager. The network topology was updated, and the TM module recalculated the network path for the interdomain virtual network of NS 2. Consequently, the PLS manager changed the traffic forwarding rules on the PLS VNFs, resulting in an increase of the traffic throughput traversing port 4 of PLS b and a drop in the traffic arriving from PLS c, as shown in Figure 5(c) at 125 s.
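The failure handling described above can be sketched with a Ryu event handler as follows. The application structure and the TM recomputation hook are illustrative assumptions about how such a reaction could be wired, not the exact code of our prototype.

```python
# Sketch: detect an overlay link failure on a PLS element and trigger
# path recomputation, mirroring the behavior observed in the experiment.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class LinkFailureWatcher(app_manager.RyuApp):  # hypothetical name
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPortStatus, MAIN_DISPATCHER)
    def on_port_status(self, ev):
        msg = ev.msg
        ofp = msg.datapath.ofproto
        port = msg.desc
        # A port going down on a PLS signals a failed overlay link:
        # update the topology graph and recompute the affected paths.
        if msg.reason == ofp.OFPPR_MODIFY and (port.state & ofp.OFPPS_LINK_DOWN):
            self.logger.info("link down on dpid=%s port=%d",
                             msg.datapath.id, port.port_no)
            # self.tm.recompute_paths(...)  # hypothetical TM hook
```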
Finally, the proof of concept involved the migration of VNF 1a from domain A to domain C. The migration event was manually triggered 176 s after the experiment initiation, as depicted in Figure 5(a). Accordingly, the PLS manager changed the traffic forwarding rules associated with the virtual network of NS 1, so as to use links A-C and A-B. Once the migration process was completed, traffic arrived again at its destination (VNF 1b). Figure 5(b) and (c) shows the RTT and the average throughput on the new network path, respectively.
This article presents a connectivity orchestration service for multidomain NFV ecosystems. Our research indicates that current NFV and SDN implementations are mature enough to support the flexible deployment of 5G services in different locations. However, they still present limitations in supporting VNF connectivity across different NFV domains. We have demonstrated the feasibility of leveraging open source software technologies and standard protocols to address these limitations, creating new innovation opportunities for IT professionals and software practitioners in the field of 5G services. Additionally, we have conducted a thorough proof of concept in a realistic multidomain NFV ecosystem, assessing the viability of our approach. Our orchestration service has demonstrated its capacity to operate reliably even under unexpected connectivity events. Its modular design, the prototype implementation, and the proof of concept may serve as a valuable reference for future research in this field.
In addition, our work unfolds novel and promising avenues for future developments. Regarding the design, we utilize a redundant overlay network across NFV domains. While this enables multiple communication paths for interdomain VNF communications, the overlay network topology and the path selection mechanisms are relatively static. Future research is required to adapt the overlay network to the dynamism of ISP networks, considering bandwidth and delay variations, and to implement TM policies based on the actual performance of overlay paths. We will also research the application of our connectivity orchestration service to cloud-native ecosystems, given their growing relevance in the NFV evolution.
Finally, we aim to contribute a full-featured implementation of our solution to the open source software community. Consequently, further developments are needed, including multiaccess virtual network creation and the automated discovery of virtual network endpoints.

FIGURE 1. An abstraction provided to VNFs by interdomain virtual networks.

FIGURE 3. The services defined to perform the experimental validation of the IDCO.