A Network Paradigm for Very High Capacity Mobile and Fixed Telecommunications Ecosystem Sustainable Evolution

The main objective for Very High Capacity (VHC) fixed and mobile networks is improving end-user Quality of Experience (QoE), i.e., meeting the Key Performance Indicators (KPIs) required by the applications: throughput, download time, round-trip time, and video delay. KPIs depend on the end-to-end connection between the server and the end-user device. Telco operators must not only provide the quality needed for the different applications, but also address economic sustainability objectives for VHC networks. Today, both goals are often missed, mainly due to the push to increase the access network bit-rate without considering the end-to-end application KPIs. This paper's main contribution is the definition of a VHC network deployment framework able to address performance and cost issues. We show that there are three interventions on which it is necessary to focus: i) the reduction of bit-rate through video compression, ii) the reduction of packet loss rate through artificial intelligence algorithms for access line stabilization, and iii) the reduction of latency (i.e., the round-trip time) with edge-cloud computing and content delivery platforms, including transparent caching. The concerted and properly phased action of these three measures can allow a Telco to escape the Ultra Broad Band access network "trap" as defined in the paper. We propose to work on the end-to-end optimization of the bandwidth utilization ratio (i.e., the ratio between the throughput and the bit-rate that any application can use). This leads to better performance experienced by the end-user, enables new business models and revenue streams, and provides a sustainable cost for Telco operators. To make this perspective more concrete, the case of MoVAR (Mobile Virtual and Augmented Reality), one of the most challenging future services, is finally described.


Introduction
About ten years ago, Europe first established the DAE (Digital Agenda for Europe) industrial policy for the Member States to implement Ultra Broad Band (UBB) objectives, and then promoted the so-called European Gigabit Society (EGS) through the development of VHC (Very High Capacity) networks. The VHC pillars for 2025 are two: a dense Europe-wide fabric of optical fiber (or equivalent) access and the development of fifth-generation mobile (5G).
Recently, the EGS objective, as the European Commission (EC) put it forward, started raising some concerns in terms of sustainability and credibility within the assumed time frame. According to the European Investment Bank, the investments required to reach the DAE/EGS targets amount to €384 bn until 2025 (median scenario), while the market can deliver around €130 bn, i.e., only one third of the required amount. Therefore, it concluded that: (i) "Gigabits Society targets for VHC networks are far beyond what market forces can deliver"; (ii) the "investment gap needs to be fulfilled with a substantial degree of public support"; and (iii) there is "high risk of failing to meet Gigabit Society goals" [1]. Therefore, the European mobile and fixed network ecosystem is going to face serious sustainability problems, and the recent COVID-19 outbreak (Sars-CoV-2 coronavirus pandemic) can only make things worse. On March 5th, 2020, the UK Parliament launched a public inquiry on the EGS objectives to ascertain "how realistic the ambition is, what is needed to achieve it, and what the Government's target will mean for businesses and consumers" [2]. In our view, other governments in Europe should rapidly review their chosen policies towards the EGS to adjust them to the present scenario. This is now more urgent than before, as the consequences of the pandemic are making the Telco industry face the short/medium-term challenge of anticipating network improvements as much as possible through a more affordable approach, in the light of the novel societal scenario and the economic crisis.
When carefully analyzed, the EC's approach appears controversial both from a technical and an economic point of view, as well as from the point of view of industrial policy strategy. The apparent consequences of the lack of optimality in the big-picture vision designed by the EC for VHC networks could be more dangerous in the new socio-economic scenario in front of us. In fact, it is even more certain that Europe needs to improve its telecommunications infrastructure in order to cope with present and future service challenges and to provide citizens with a better quality of service at an affordable cost, starting now. However, promoting too far-fetched objectives could backfire: while risking not to deliver the intended outcomes, they may severely stress the European telecom industry as a whole, and delay infrastructure deployment, especially in suburban and rural areas.
This paper aims at showing an alternative conceptual approach to achieve the objectives set for the mid-decade target date, illustrating a different way of framing the industrial strategy that is less risky, more gradual and, certainly, able to materialize the EGS targets earlier. The proposed evolutionary approach applies equally well to fixed networks in their migration from copper access to full optical fiber access (i.e., FTTH, or fiber to the home), and to the path from present 3G/4G mobile networks to 5G.
Considering 5G, the Total Cost of Ownership (TCO, Capex + Opex) for mobile radio access networks (RANs) is expected to increase sharply. According to an analysis published by McKinsey & Co. [4] for one European country, where all three mobile operators were assumed to follow a conservative approach to 5G investment, the "TCO for RAN would increase significantly in the period from 2020 through 2025, compared to expected 2018 level. In a scenario that assumes 25% annual data growth, TCO would rise by about 60%". Therefore, in a conventional scenario, mobile Telcos will need to develop new strategies for 5G to afford the increased network cost vis-a-vis saturated revenues, especially in the over-regulated European countries. Standard measures will involve cost-saving efforts, but Telcos are also exploring alternative approaches, such as network sharing. With both spectrum sharing and active network sharing, estimates for RAN network savings range between 25% and 40% (CAPEX) and between 20% and 30% (OPEX), while some limited additional savings are also expected in the mobile backhaul section [5]. In spite of that, the development of 5G may be disappointing, unless mobile Telcos develop new profitable revenue models, which are still uncertain (autonomous cars, smart cities, etc.) and can easily fall under attack from very agile competitors without any infrastructure, acting on top of the network layer.
Regarding fixed networks, it is well known that in several environments (e.g., suburban and rural areas) it is very difficult to make business plans affordable with an abrupt strategy of replacing mixed copper-fiber (i.e., FTTC) networks with FTTH.
We organize the rest of the paper as follows. Section 2 introduces the main key application and network performance indicators (KPIs) that need to be considered to provide UBB services, especially live video services, which are among the most demanding ones and are limited by the overall network performance; it then discusses Quality of Experience (QoE) vs Quality of Service (QoS) and shows the limitations of the traditional approach to customer satisfaction. In Section 3 we analyze application performance and introduce a simple condition to avoid the risk of being 'trapped' in the inefficient UBB domain; then, the main approaches to address these issues are shown. In Section 4, we first present a conceptual procedure to progressively improve network performance and reduce cost; then, we show with one very demanding service example, Mobile Virtual and Augmented Reality (MoVAR), how network performance targets can be achieved. Section 5 provides our conclusions.

Application services and network services KPIs
Telecommunications networks, both mobile and fixed (as well as convergent networks), offer two types of services: application services (or, more simply, applications) and network services.
- Application services are the end-users' services (e.g., web browsing, e-mail, messaging, video streaming, cloud, gaming, 360° augmented and virtual reality). They are typically provided by software run on the servers of service providers (e.g., WhatsApp, Dropbox, Skype, Netflix) and on devices used by end-users themselves (e.g., personal computers, smartphones, tablets, TV sets).
Application services today are essentially offered by OTTs and, in a limited way, by other Content Delivery Providers (CDPs), while Telcos offer a very small percentage of application services (some 5%, or so).
- Network services are the IP packet transport services that must provide the performance levels required by applications.
The application services mentioned above show a wide variety and do not impose the same IP traffic burden on the networks. Table 1 shows the 2017-2022 projections of consumer traffic types on the internet according to the classification provided by the Cisco VNI [6]. This is consumer IP traffic passing through the internet, i.e., not related to the network of a single Telco/ISP. Video streaming over the internet requires a share of bandwidth that is expected to grow beyond 82% of all consumer internet traffic in 2022. Network traffic analyses confirm that we can already estimate HTTP traffic at about 75% today. Moreover, different application services require different quality (or performance) levels: e.g., standard definition (SD) video streaming has a much lower resolution than UHD (4K), and therefore its application KPIs are much less demanding.
Edge-cloud computing [7][8][9][10] represents an opportunity for a Telco to improve application service performance and to reduce the TCO for the provision of its network services. In fact, the opportunity to provide application services to the end-user from a nearby server, possibly within the access network, without the application traffic having to go through the metro and core networks, improves performance and may allow a reduction of the investment needed to ensure KPI compliance at a given quality level.
To clarify, let us take two simple examples. First, it has been estimated that an edge-cloud solution for the Dropbox service could reduce the amount of data crossing the Telco networks by up to 90% [11].
However, it should be considered that this data service in 2022 represents only a small fraction of total consumer traffic, so the expected reduction of overall traffic is no greater than 10%. Moreover, Dropbox content distribution in the edge-cloud has a high storage cost, as low-cost solutions (e.g., transparent caching) cannot be used due to the small number of end-users that download the same content. Second, with state-of-the-art technology, we can estimate the reduction in video traffic at around 50%; but this reduction applies to more than 80% of the total traffic, so the impact on consumer traffic would be 40%, which is far greater than the expected savings on the above-mentioned data traffic.
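The back-of-envelope arithmetic behind these two examples can be sketched as follows (the traffic shares and reduction factors are the indicative figures quoted above, not measurements):

```python
def overall_saving(traffic_share, reduction):
    """Fraction of total consumer traffic saved when a traffic class
    holding `traffic_share` of the total is reduced by `reduction`."""
    return traffic_share * reduction

# Dropbox-like data services: at most ~10% of traffic, up to 90% offload.
print(f"{overall_saving(0.10, 0.90):.2f}")  # 0.09 -> no greater than ~10%
# Video: >80% of traffic, ~50% reduction via edge distribution.
print(f"{overall_saving(0.80, 0.50):.2f}")  # 0.40 -> ~40% of all traffic
```

The multiplication makes explicit why a modest reduction on a dominant traffic class (video) outweighs a large reduction on a marginal one (file sync).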
Most of the video traffic is generated by movies (VoD streaming) and live events (live streaming) that can effectively use transparent caching for the edge-cloud distribution.
As a first consequence, it is in the interest of Telcos to deploy edge-cloud computing technologies as soon as possible to reduce video traffic in the core and aggregation sections of their networks: we believe that the killer application of edge-cloud computing is content distribution. Once edge-cloud computing is deployed close to the end-users (in the fixed access PoPs and mobile BTS aggregation sites), other services can be distributed, including enterprise edge-cloud, internet of things, gaming, and augmented/virtual reality. Moreover, new architectures such as cloud RAN/virtual RAN can be deployed.
Therefore, in order to analyze how to improve application performance and to evaluate the advantages of edge-cloud computing, it is necessary to focus on the application KPIs related to the "end-to-end" connection between the servers running the applications and the devices presenting them to the end-users [12][13][14]. The KPIs depend both on the Telco networks (access network, metro network, core network) and on the other networks that must be crossed to reach the server providing the service.
The main KPIs related to the performance of all applications and, in particular, of video services, are:
- Throughput (TH), i.e., the "speed" at which end-user devices and servers exchange application data. It is one of the most important indicators; it can reach up to 1 Gbit/s for applications such as 360° MoVAR, and will possibly increase in the future. Throughput is always lower than the bit-rate (BR), the "speed" of the communications channel between end-user and server, due to the congestion control algorithms. For current UBB networks this difference is very large. In fact, TH is limited both by the BR and, when the BR is not the bottleneck, by the 'distance' metric between the application and the end-user.
- Latency (measured through the round-trip time, or RTT, in milliseconds), which must be very low both to improve throughput and to meet the real-time requirements of some services (e.g., tele-surgery, autonomous driving, tactile internet) and which, in some cases, may require RTT values as low as 1 ms.
- Download time, which measures the response time in seconds to end-user requests (e.g., web server response time is the time it takes to display a web page).
- Video delay, a live streaming indicator measuring the time in seconds between the instant the camera captures a video frame and the instant the end-user device displays it on the screen.
The application KPIs are mainly managed by Layer 4 (end-to-end application transport) of the IP protocol stack, and also depend on the network KPIs, i.e., on the transport of the IP packets between the server and the end-user device (end-to-end packet transport). The transport of the IP packets is managed by Layers 1 (physical) to 3 (network) of the IP protocol stack. The main end-to-end network KPIs are:
- the bit-rate, which depends on the capacity of the network links crossed by the packet flow between the server and the end-user. The bit-rate available to one application is reduced as the number of flows increases, because the bit-rate is shared among the applications (application transport protocol fairness principle);
- the packet loss;
- the latency, measured by the round-trip time, which is both a main network KPI and a main application KPI.
Let us now concentrate on the relationship between the KPIs and the different definitions of quality being of interest to the Telco and to the end-user.

QoE and QoS
A rather broad definition of Quality of Experience, provided by ITU-T P.10-2016, is "the degree of delight or annoyance of the user of an application or service. It results from the fulfillment of his or her expectations with respect to the utility and/or enjoyment of the application or service in the light of the user's personality and current state." Although not always numerically quantifiable, QoE is the most significant factor in assessing the customer experience. Historically, Telcos have attempted to derive Quality of Experience from Quality of Service, which aims at objectively measuring service parameters through averaged main technical network KPIs, such as latency, packet-loss rate, and bit-rate.
According to ITU-T P.10/G.100, QoS is the "totality of characteristics of a telecommunications service that bear on its ability to satisfy stated and implied needs of the user of the service".
When compared, the two definitions show a substantial difference. On the one hand, QoS is network-centric and provides an "ensemble view" (i.e., it is based on average values) not related to the service experience perceived by a given customer. Its goal is to support network management and improve the average network quality. In general, QoS cannot appropriately meet the expectations of customers, who demand peak-hour rather than average-time quality. On the other hand, QoE is focused on application performance (or quality), both technical and subjective. It also has a subjective (but not arbitrary) set of measures, from the user's point of view, of the overall quality of the service provided, which aim at capturing specific needs. Therefore, QoE is user-centric and is evaluated by application KPIs. In other words, rather than focusing only on technical parameters, QoE provides an assessment of human expectations, feelings, perceptions, cognition and satisfaction with a particular application or service. Some of today's subjective KPIs for QoE are the abandonment of a certain application, complaint calls to a service provider's control center, churn, the amount of time spent with the application, the use of "like", and so on. The subjective application KPIs are strongly related to the technical KPIs: e.g., for e-commerce the conversion rate increases when the catalog download time decreases. Predictable subjective indicators for the future include facial expressions, verbal criticism, and comparison with the past or with other applications. When technical KPIs, such as throughput, are considered in the QoE framework, they are intended at the peak hour and their values are closely related to the user's satisfaction. The throughput and the other technical parameters defined above, when evaluated for each customer one by one, are important technical QoE parameters for video services. The bit-rate is certainly important because it could limit the throughput; however, it cannot be analyzed 'alone', i.e., without considering the total throughput required by all active applications, and therefore it is not a QoE indicator.

The limitation of the traditional approach to customer satisfaction
To improve application performance, a Telco typically works on the transport of IP packets and implements QoS-based traffic management techniques such as bandwidth reservation and packet prioritization at Layers 2 and 3 of the IP protocol stack. Although these actions may assist in recovering from network failures and congested network paths, they are not effective in improving application performance (i.e., in obtaining higher throughput and lower latency, packet loss, download times and video delays), because congestion control limits the application KPIs, which are mainly managed at Layer 4; QoS techniques therefore have limited effect on the service experience. QoS can, however, help eliminate available bit-rate bottlenecks through bandwidth reservation.
While QoE is mainly focused on application services and QoS on network services, technical QoE and QoS are related. Traditionally, a Telco tries to predict QoE based on a construct of QoS-QoE models [15]. However, the Telco's predicted QoE turns out to be a loose function of the set of (averaged) network performance KPIs and, to make things even worse, the complexity of the underlying interactions often masks the QoS-QoE relationship, as application QoE mainly depends on Layer 4 behavior, as well as on human and context factors not included in a QoS-QoE model. Consequently, a Telco is generally unable to capture the quality actually perceived by its end-users one by one.
On the one hand, most applications are provided by OTTs and CDPs and are delivered by software executed in the end-user devices and in the servers. To provide good or excellent QoE, which enables higher revenues both from advertising and from service fees, the OTTs locate their servers close to the Telco networks and, in some cases, when allowed, inside the core networks. Telcos do not have direct access to the end-user devices, nor can they acquire application information by DPI (deep packet inspection), due to the encryption possibly applied by OTTs. Even if DPI worked for some of the OTT services, human and contextual factors would still be missing, and more complex, context-aware QoE models for specific applications cannot be integrated into service delivery.
On the other hand, the OTT has access to end-user device information by having its own application running on these devices. The OTT can then monitor the most important technical application KPIs (e.g., throughput, latency, download time and video delay) and other KPIs (e.g., stall events and playback buffer occupation, which depend on the main technical KPIs), and can also acquire an understanding of the context in which applications are used, such as the user's location (acquired by GPS sensor, accelerometer, etc.), the type of device, and the type of internet connection. The OTT is also familiar with user profiles. In fact, the end-user downloads an application to use the service and, for some services, each end-user must have a personal account in order to subscribe to the OTT service. The user's account contains information such as the price (if any), service preferences and, in some cases, end-user characteristics, which may be useful to better forecast the perceived QoE.
With the information on application KPIs in hand, the OTT can adapt the service quality (such as the video resolution) to the network conditions in near real time by adaptive streaming algorithms, and can thereby improve the application KPIs. However, the OTT may not always be able to provide a good QoE to end-users, due to problems that may occur in the client device or in the Telco network.
The OTT usually has no QoE issues in the big internet thanks to the distribution of applications and content from their centralized data centers (such as hyper-scale data centers) to medium/small data centers located at the edge of the big internet, i.e. close to and interconnected with the Telco's networks.
Today, Content Delivery Networks (CDNs) massively manage the QoE. About twenty years ago the OTTs started to use content delivery (CD) platforms also located within CDNs. Therefore, they have an established technology and are able to ensure the quality level required by the different applications in the big internet.
The whole premise of a CDN is to distribute contents and applications as close as possible to the end-user. In principle, this approach can also be used within the Telco network. However, today's Telco network architecture does not provide IP Layer 3 user plane visibility between the core network and the end-user equipment: therefore, CD platforms can only be located in the core network and are less effective, as the 'distance' to reach the end-user is large.
Modern video streaming platforms, possibly located in a CDN, adopt appropriate application-layer adaptive technologies, such as the DASH (Dynamic Adaptive Streaming over HTTP) MPEG standard (ISO/IEC 23009-1) [16], to adapt as much as possible to network conditions in order to comply with end-to-end technical QoE requirements individually for each end-user (Figure 1). Without directly controlling the network service, when network conditions degrade due to an RTT and packet loss rate increase or to an available bit-rate reduction, the OTT or the CDN provider can only reduce the source quality level to adjust the end-to-end throughput, as requested by feedback signals coming directly from the media playback controller. Therefore, DASH continuously adapts to the short-term throughput to provide the best possible streaming quality. This implies a variable video presentation quality level which, however, is much better than uncontrolled random data speed throttling, which causes very unpleasant video stalling conditions. However, as forthcoming contents and applications require high or very high throughput (e.g., for 4K streaming quality and for 360° augmented/virtual reality), this approach alone becomes less effective and we cannot avoid looking deeper into the Telco's network.
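The client-side adaptation loop can be illustrated with a minimal sketch. Note that DASH standardizes the media format and signaling, while the rate-selection algorithm is left to the client; the representation ladder and the safety margin below are hypothetical values chosen for illustration:

```python
# Hypothetical representation ladder (bit-rate in kbit/s, label),
# as would be advertised in a DASH Media Presentation Description.
LADDER = [(235, "240p"), (750, "480p"), (1750, "720p"),
          (4300, "1080p"), (15000, "2160p")]

def pick_representation(measured_throughput_kbps, safety=0.8):
    """Simple throughput-based adaptation: choose the highest
    representation whose bit-rate fits within a safety margin of the
    throughput measured over recently downloaded segments."""
    budget = measured_throughput_kbps * safety
    best = LADDER[0]  # never go below the lowest representation
    for rate, label in LADDER:
        if rate <= budget:
            best = (rate, label)
    return best

print(pick_representation(2500))  # -> (1750, '720p')
print(pick_representation(500))   # -> (235, '240p')
```

When the measured throughput drops, the next segment is requested at a lower representation, trading resolution for continuity and thus avoiding the stalls mentioned above.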
To improve the QoE in their networks, Telcos should adopt the same approach used by OTTs: this calls for placing content delivery platforms close to the end-user premises. Performance improvement is achieved by acting on IP Layer 4 functionalities, and this requires the distribution of some core functions, such as functions of the EPC (Evolved Packet Core) in 4G mobile networks and of the BNG (Broadband Network Gateway) in fixed access networks. The goal of the EPC and BNG distribution is to provide the user plane IP Layer 3 visibility that, as said above, the Telco network architecture presently lacks, in order to enable the distribution of contents and applications close to end-users. This entails a complexity higher than that of the well-established CDN solution in the big internet.

Throughput vs bit-rate: the UBB domain
Let us now discuss what causes application KPI degradation and how we can overcome it. We mostly concentrate on analyzing the throughput, which is the most important KPI, with some mention of the other main KPIs.
On the one hand, when customers think of internet "speed", they are implicitly referring to application performance, i.e., to the end-to-end throughput. On the other hand, "speed" for Regulators and Telcos today is generally the bit-rate (i.e., the speed of the transmission channel) of the fixed and mobile access networks. Moreover, speed test measurements provide the bit-rate (to be more precise, a proxy of the bit-rate) in the network segment between the end-user device and an intermediate server that makes the measurement, which is different from the OTT servers that execute the applications and thus provide the services to the end-users. This causes confusion, as people very often believe that a network is 'fast' if the access bit-rate is 'fast', while not considering that the most important KPI for a fast network is the application throughput, which depends on the end-to-end connection between the server and the end-user device and not only on the access network bit-rate. What is worse, the confusion pushes towards increasing the access infrastructure cost for a potentially poor end-user performance experience [17,18].
In a given geographical area, the network throughput (e.g., the throughput handled by one single access POP that aggregates and manages the traffic of all the access links for that network area) must be able to ensure the performance required by all the simultaneously active application services.
The throughput of one single application, in the given network area, is limited by the minimum of two values: the available bit-rate, which depends on the type and the number of simultaneously active applications, and the maximum network throughput, which is limited, due to congestion control, by the round-trip time and the packet loss ratio. Considering one single active TCP application (TCP being the transport layer protocol adopted more than 95% of the time), its throughput is given by the following simplified but accurate expression [19]:

TH = min{ BR, c · MSS / (RTT · √PLR) }     (1)

where c is a constant (typical values 0.9÷1.2), MSS is the maximum TCP segment size (typically 1460 bytes for video services), RTT is the round-trip time, PLR is the packet-loss ratio, and BR is the available bit-rate, i.e., the bit-rate the application can use. According to eq. (1), the product RTT · √PLR is the server to end-user 'distance' metric. In general, RTT and PLR depend on the network technology, its topology, and the total traffic. Eq. (1) gives the upper bound of the steady-state application throughput for deterministic packet loss, for one TCP connection and a traditional end-to-end window-based congestion control mechanism.
Eq. (1) is valid for any traditional TCP implementation (Tahoe, Reno, CUBIC, etc.), with different values of the constant c. Table 2 shows some throughput values vs. RTT and PLR, assuming BR is not the bottleneck.
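As a numerical illustration, the congestion-control term of eq. (1) can be evaluated for a few RTT/PLR pairs (assuming, as in Table 2, that BR is not the bottleneck; here c = 1 and MSS = 1460 bytes):

```python
import math

def tcp_throughput_mbps(rtt_ms, plr, mss_bytes=1460, c=1.0):
    """Upper bound on steady-state TCP throughput from eq. (1),
    TH = c * MSS / (RTT * sqrt(PLR)), when BR is not the bottleneck."""
    rtt_s = rtt_ms / 1000.0
    th_bps = c * mss_bytes * 8 / (rtt_s * math.sqrt(plr))
    return th_bps / 1e6  # Mbit/s

# The 'distance' metric RTT * sqrt(PLR) drives the bound: halving RTT
# doubles the achievable throughput at a given packet-loss ratio.
for rtt in (10, 50, 100):
    for plr in (1e-4, 1e-2):
        print(f"RTT={rtt:>3} ms, PLR={plr:.0e}: "
              f"{tcp_throughput_mbps(rtt, plr):8.2f} Mbit/s")
```

For instance, RTT = 10 ms with PLR = 1e-4 allows about 117 Mbit/s, while RTT = 100 ms with PLR = 1e-2 caps a single TCP flow at about 1.2 Mbit/s, regardless of the access bit-rate.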
Heuristically, we can also assume eq. (1) to provide the boundary between the UBB bit-rate region and the BB/NB (broadband/narrowband) bit-rate region in a legacy network. In fact, for given RTT and PLR, in BB/NB networks the bit-rate is 'low' (i.e., BR is the bottleneck for TH) and the so-called bandwidth utilization ratio r = TH/BR is 1 (or close to 1). On the contrary, in UBB networks the BR is 'high', RTT and PLR limit the throughput, so TH < BR and, as BR grows, r can even become very small (r << 1): under this condition, bit-rate resources are wasted in one or more parts of the network.
Therefore, the bandwidth utilization ratio can significantly degrade in UBB networks as compared to legacy networks. Since the bit-rate increase is often deployed mainly in the access network, this is the part of the network that wastes bit-rate resources. Therefore, focusing on the bit-rate increase in the access network, which is presently the only objective of Regulators and Telcos, conflicts with the throughput limitations set by RTT and/or PLR and by the bit-rate available between the big internet and the access network. As an example, for LTE networks the measured cumulative distribution function of the radio link bandwidth utilization ratio, F(r), was given in [20] for "large" TCP downlink flows (i.e., > 5 s and > 1 MByte) not concurrent with other flows: on average, the bandwidth utilization ratio is ra = 34.6%, and the median ratio is only rm = 19.8%. The median ratio shows that one single application uses the radio channel bit-rate for about one fifth of the time, so the radio access bit-rate is far from being the bottleneck for the application.
In the segment from the big internet to the access network, the UBB network throughput is obtained by aggregating the traffic of all the access links managed by one access POP; the bandwidth utilization ratio is therefore, in general, much higher than in the access network and, in some cases, the BR in this network segment is the bottleneck for application performance. This is shown in Figure 2, which provides the fixed network average peak bit-rate per line in Italy (aggregation, metro, and core networks) measured in 2015-2017 and published by AGCOM [21]. The year-over-year peak bandwidth increase in this network section is 25-30% (the former value for xDSL accesses and the latter for FTTx accesses). Extrapolating this increase to 2020, the overall BR is about 2 Mbps. This evidences the large gap between the BR available for applications in the access network and in the network from the big internet to the access [18].
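The extrapolation is simple compound growth. As a sketch (the 2017 base of roughly 1 Mbit/s per line is a hypothetical value for illustration, since the measured figures are in Figure 2):

```python
def extrapolate(base_mbps, annual_growth, years):
    """Compound-growth extrapolation of the peak bit-rate per line."""
    return base_mbps * (1 + annual_growth) ** years

# Hypothetical 2017 base of ~1 Mbit/s per line, grown for 3 years
# at the 25-30% year-over-year rates reported above:
for g in (0.25, 0.30):
    print(f"{extrapolate(1.0, g, 3):.2f} Mbit/s in 2020 at {g:.0%}/yr")
```

Both growth rates land near the 2 Mbit/s per line figure quoted for 2020, still two to three orders of magnitude below typical UBB access bit-rates.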
To improve both the applications throughput and the bandwidth utilization ratio, when the available bit-rate is not the bottleneck, RTT and PLR must be reduced.This can be obtained by the distribution of contents and applications close to the end-users and by appropriate physical layer improvements.
Note that a low value of the bandwidth utilization ratio, r, has a negative impact on a Telco's economic sustainability, as it increases network costs and does not allow network monetization, which is mainly based on network performance. When throughput service requirements become more severe (as happens over time, when going from SD to HD and further on; see Table 3), retaining the legacy network architecture makes the Telco fall deeper and deeper into the "UBB access trap": by investing in access technologies only, the network costs increase and customer satisfaction tends to be very poor.
Therefore, we define the UBB access bit-rate limit condition, which we should not violate, through the following relationship:

BR ≤ n · MSS / (RTT · √PLR)     (2)

derived from eq. (1) with c = 1, where now BR is the cumulative bit-rate required by all n applications that are simultaneously using the access link. In general, n is small for residential applications (e.g., less than 3-4).
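The limit condition can be turned into a quick check. The sketch below evaluates the right-hand side of the condition (eq. (1) with c = 1, summed over n concurrent applications) for an illustrative residential scenario; the RTT and PLR values are assumptions, not measurements:

```python
import math

def ubb_limit_mbps(n, rtt_ms, plr, mss_bytes=1460):
    """Cumulative access bit-rate beyond which additional capacity is
    wasted: BR <= n * MSS / (RTT * sqrt(PLR)), i.e. eq. (1) with c = 1
    for n simultaneously active applications."""
    rtt_s = rtt_ms / 1000.0
    return n * mss_bytes * 8 / (rtt_s * math.sqrt(plr)) / 1e6  # Mbit/s

# Assumed household scenario: n = 3 active flows, RTT = 50 ms, PLR = 1e-3.
limit = ubb_limit_mbps(3, 50, 1e-3)
print(f"limit = {limit:.1f} Mbit/s")  # above this, r = TH/BR falls below 1
```

Under these assumed conditions, an access bit-rate of, say, 100 Mbit/s or 1 Gbit/s exceeds the limit many times over: the extra capacity cannot be converted into throughput unless RTT and PLR are also reduced.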
The end-to-end bandwidth utilization ratio for a given geographical area must consider both the access network links and the network segment between the big internet and the access POP. The latter network segment today generally has a much higher utilization ratio (r ≈ 1 in the access-to-big-internet network segment), because in most geographical areas the peak-hour average bit-rate available to any active end-user is much lower than the access link bit-rate. The UBB cost inefficiencies are thus related to the access network, which has a very low throughput-over-bit-rate ratio (r << 1).
We can provide the throughput required by the applications and reduce the network cost by acting through compression techniques. If a given bit-rate value is not the bottleneck, decreasing PLR and RTT increases the throughput and improves the bit-rate utilization.
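The dependence of throughput on RTT and PLR can be illustrated with the well-known Mathis approximation, TH ≈ MSS / (RTT·√PLR). The paper's eq. (1) is not reproduced in this excerpt, so the sketch below assumes this standard model; all parameter values are illustrative, not measurements from the paper.

```python
from math import sqrt

def tcp_throughput(mss_bytes, rtt_s, plr, bit_rate):
    """Mathis approximation: achievable TCP throughput (bit/s),
    capped by the available bit-rate of the path."""
    th = (mss_bytes * 8) / (rtt_s * sqrt(plr))
    return min(th, bit_rate)

# Illustrative values: 1460-byte MSS, 100 Mbit/s access line, PLR = 0.5%.
far = tcp_throughput(1460, 0.050, 0.005, 100e6)   # distant server, RTT = 50 ms
near = tcp_throughput(1460, 0.010, 0.005, 100e6)  # edge server, RTT = 10 ms

# Bandwidth utilization ratio r = TH / BR for the two cases:
r_far, r_near = far / 100e6, near / 100e6
```

With these assumed figures the distant server achieves roughly 3.3 Mbit/s on a 100 Mbit/s line (r << 1), while moving the server five times closer in RTT multiplies the achievable throughput, and r, by five.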
Let us now separately consider throughput reduction by compression techniques (case 1), PLR reduction by access links stabilization (case 2), and RTT reduction by edge-cloud computing (case 3).

Reducing throughput through video compression (case 1)
Data compression works on the content before it is transmitted over the network. Considering that video traffic will exceed 80% of UBB network traffic at the peak hour, new video compression techniques that strongly reduce the source throughput cannot be neglected when distributing high and very high quality video contents.
Modern lossy video compression techniques reduce the throughput needed by an application, and therefore the bit-rate needed to achieve this throughput, if RTT and PLR are not the bottleneck. In such conditions, while the end-to-end bandwidth utilization ratio, r, is unchanged, the source throughput reduction lowers the cost of the network, as it reduces the peak throughput, and normally improves application performance. The bandwidth utilization ratio in the access network for one single application is generally worsened.
Therefore, advances in video compression are crucial, and studies are underway on techniques to improve low complexity 2-D video encoding (MPEG-5 standard). Recently, MPEG approved the start of work on the next 'MPEG-5 Part 2' standard based on a solution named PERSEUS Plus [22], whose data stream is structured as two component streams: a basic stream decodable by a hardware decoder, and an enhancement stream suitable for software processing. Field experiments compared the 'legacy' H.264 channel broadcast in SD at 1.8 Mbit/s with the same 'PERSEUS-enabled' channel in HD/720p at 460 kbit/s, with at least similar quality. This is about a four-fold reduction factor, which can bring the overall video compression ratio to about 1:160.
All this may have a significant impact on the time profile of operators' CAPEX investments, allowing, for example, the fruition of HD content even on access networks with lower bit-rate performance than previously required. However, the choice of compression technique is under the OTT's control rather than the Telco's.
For VoD, compression techniques can reduce the required throughput. For live streaming, however, the compression time could be too long and increase the video delay. Today, 4K VoD streaming requires a minimum throughput of 15 Mbit/s, while live 4K streaming requires 25 Mbit/s, in order to reduce the processing time needed for compression and avoid an increase in video delay. Therefore, compression techniques can provide only a limited throughput reduction for live streaming, due to video delay constraints.

Reducing PLR through access lines stabilization (case 2)
Packet loss is caused either by congestion in network routers or by transmission bit errors. According to Ookla's speed test, the PLR of the four main mobile networks in Italy, measured in December 2018, was between 0.38% and 0.83%. Packet loss can never be null, and it acts as the implicit feedback signal for many TCP congestion control algorithms.
In the DSL access, frame corruption is a consequence of bit errors, producing loss of quality at channel level, monitored through the so-called code violations, which originate threshold-crossing alarms. Sequences of frame losses are a consequence of line instability, a metric derived from outage times related to synchronization losses, and of poor line-initialization performance, monitored through parameters such as the Failed Full Initialization Count [23]. Line instability is a non-stationary process, almost impossible to predict. However, Telcos can modify the DSL profile, on a line or bundle basis, to improve line stability and quality, which manifests through a higher bit-rate.
Optical fibers are certainly inherently more stable. However, the Wi-Fi home distribution degrades quality and is still the dominant factor of line instability: about 70% of today's end-users adopt Wi-Fi for indoor signal distribution. In the Wi-Fi section of a wireline network, the main degradation factors are radio frequency interference from nearby signals, signal attenuation due to distance, multipath fading, hardware faults, and home setup misconfigurations.
Line instability is a problem for wireless networks too (both 4G/5G mobile communications and fixed wireless access). Main causes include interference among cells (e.g., small cells superimposed on one macrocell) and multipath fading.
In general, a basic trade-off holds between stability and bit-rate: the higher a line's bit-rate is set, the higher that line's likelihood of instability. Hence, it may be necessary to decrease the bit-rate of some unstable lines to ensure stable operation. Other lines, by contrast, may be able to increase their bit-rate while remaining acceptably stable. Overall, the number of higher-speed lines increases, and this average rate metric can be monitored to ensure jointly optimized stabilization and bit-rate.
Therefore, the use of algorithms for surveillance and quality control of the Telco's lines (copper, optical fiber, and wireless) can allow the removal of physical layer bottlenecks, which directly affect packet loss, including those generally prevailing due to poor performance of home Wi-Fi.
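As a toy illustration of such a surveillance and control loop, the sketch below steps a line's bit-rate profile down when instability counters are high and up when the line has spare stability margin. All thresholds, profile values, and names are our assumptions for illustration, not a field-calibrated policy.

```python
def retune_profile(bit_rate_mbps, resyncs_per_day, code_violations_per_day):
    """Toy stabilization policy: step an unstable line down one
    bit-rate profile, step a very stable line up one profile.
    Thresholds and profiles are illustrative only."""
    profiles = [20, 30, 50, 70, 100]  # hypothetical bit-rate profiles (Mbit/s)
    i = profiles.index(bit_rate_mbps)
    unstable = resyncs_per_day > 5 or code_violations_per_day > 1000
    very_stable = resyncs_per_day == 0 and code_violations_per_day < 10
    if unstable and i > 0:
        return profiles[i - 1]        # trade bit-rate for stability
    if very_stable and i < len(profiles) - 1:
        return profiles[i + 1]        # spare margin: raise the bit-rate
    return bit_rate_mbps              # otherwise leave the line as it is
```

A real system would, as discussed above, classify lines into a few stability levels learned from historical data rather than apply fixed thresholds.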
The two PLR components can be considered independent; therefore, we have: PLR = PLR1 + PLR2, where PLR1 = f(RTT) is the component that depends on queues in the IP network, mainly related to the segment from the access POPs to the big internet, and PLR2 depends on physical layer impairments (bit errors in the access network, including the home network, and in the network between the big internet and the access network). The approach to PLR containment is therefore twofold: 1. to reduce PLR1, we need to improve the transport infrastructure quality (transmission speed and number of packets/s managed by the routers) and the applications' transport protocols (Layer 4), and/or to bring content delivery platforms and applications closer to the end-user; 2. to reduce PLR2, we can rely on analytics based on Artificial Intelligence (AI) paradigms, whereby the Telco's access network performance improves by constantly and automatically tuning its components.
Some advantages of optimizing access networks through AI tools include proactive online response to problems as they arise, better allocation of (wired or wireless) resources to dynamically improve the quality of the customer experience (and the speed), and the association of the customers' reactions with optimization strategies for a better use of the available infrastructure resources.
With the aim of reducing PLR2, measuring the stability level is crucial. In practice, algorithms assign each line one of a few stability levels: very stable, stable, unstable, and very unstable. These four levels have a very high correlation with the customers' propensity to complain [24].
The Telco can then use QoE indicators such as (i) the trouble ticket rate and (ii) the dispatch rate:
- Trouble ticket rate: a Telco's call center usually opens a trouble ticket in response to a customer complaint. Among such calls, some are technical in nature and related to the operation of the network's physical layer. A Telco desires a reduction of customer-call volumes. A good measure of end-user QoE is the trouble-ticket rate of technical nature, i.e., the percentage of such complaints over the number of lines on a monthly basis. The daily ticket rate can be volatile, with noticeably different patterns over weekends, so a 7-day moving average is generally considered.
- Dispatch rate: customer complaints that cannot be resolved remotely result in a technician dispatch in the field (a most significant cost for a Telco). The dispatch rate is the percentage of dispatches over the number of lines, measured on a monthly basis. As with the trouble-ticket rate, a 7-day moving average is used.
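Both indicators can be computed with a simple windowed average over daily counts. The sketch below is illustrative: the function name, the daily figures, and the 100,000-line denominator are our assumptions, not data from the paper.

```python
def moving_average_rate(daily_events, lines, window=7):
    """7-day moving average of a per-line event rate (e.g. trouble
    tickets or dispatches), expressed as a percentage of lines."""
    rates = [100.0 * e / lines for e in daily_events]
    return [sum(rates[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(rates))]

# Hypothetical daily technical trouble tickets over ten days,
# showing the weekend dip mentioned in the text:
tickets = [120, 80, 95, 110, 70, 40, 35, 130, 90, 100]
smoothed = moving_average_rate(tickets, lines=100_000)
```

The 7-day window absorbs the weekly pattern, so trend changes in `smoothed` reflect genuine shifts in line quality rather than day-of-week effects.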
When aiming at further reducing PLR2, we need to work on the physical layer, but we must always balance its reduction against the negative effects on throughput due to the limitation in packet payload and the increase in RTT. In fact, reducing PLR2 may reduce the application throughput: for example, increasing forward error correction reduces the packet data payload, while interleaving has a negative effect on RTT.

Improving throughput and other application KPIs by edge-cloud computing (case 3)
Application KPIs improvement should be obtained without increasing the network cost, which depends on the peak hour load. When possible, the network load can be reduced by adopting: a) the most efficient video compression techniques, which entail a lower application throughput and, as a consequence, a lower bit-rate in the communications channel; and b) optimized access network performance, by installing AI-based surveillance and control software that provides a lower PLR2. However, due to the exponential growth of internet traffic year after year, and to service requirements becoming more and more stringent, this approach turns out to be insufficient. Therefore, edge-cloud computing platforms should be deployed to achieve a lower round trip time and, in many cases, a lower PLR1.
Edge-cloud computing (ECC) works above the network layer, at the transport layer, and is able to improve the application KPIs presented in Section 2.1. It also operates above compression and PLR2 reduction, and can coexist with these techniques to improve KPIs. However, the interaction among ECC, video compression, and PLR2 reduction must be carefully considered to avoid possible negative impacts on the application KPIs.
In the ideal condition of adopting video compression, techniques to reduce PLR2, and transport protocols that avoid congestion through new flow control mechanisms (e.g., BBR [25]), for each flow one can aim at RTT and PLR not limiting the throughput between a far-away server and the end-user device. The bandwidth utilization ratio in the network section between the end-user and the server would then be r ≈ 1, and increasing the end-to-end bit-rate would be effective to improve application performance. Consequently, ECC platforms would not produce a throughput increase, and ECC could be considered unnecessary. However, even under such a limit condition, the remaining KPIs (such as video delay and download time) may still be inadequate for an acceptable level of QoE. Furthermore, if we also consider the TCO, ECC saves on network costs by containing the traffic and, thus, the total peak throughput needed in the network segment between the ECC and the big internet.
Thus, even under such an ideal condition, ECC may turn out to be necessary.
To achieve traffic containment, one main ECC component is the transparent cache, which locally stores and continuously updates contents without altering the end-to-end application logic, in a way fully transparent both to the content provider and to the end-user [26]. While already adopted in CDN nodes located at the Telco's core network or farther away, transparent caching is much more effective when brought closer to the end-user.
From a functional point of view, a transparent cache is a local repository that stores the most popular contents (the most requested high definition hit movies, the currently most clicked video clips or web pages, etc.), supplying such contents whenever a nearby end-user requests them, after the content provider has performed authentication and authorization. By locally storing copies of the most frequently requested contents, the transparent cache is effective in increasing application throughput and reducing download time and video delay, so improving all the technical QoE KPIs. However, to work properly a transparent cache must be dynamic in selecting the locally most requested contents, updating its memory with new ones as it detects changes in user behaviors and expectations. Transparent caching also requires visibility of the HTTP address, which is often encrypted; for this reason, it requires collaboration between Telcos and OTTs. Collaboration models based on transparent caches located in the internet at the border of the Telco networks have been well established between OTTs and CDN providers for some twenty years.
Zipf's law provides the theoretical foundation for the advantages of transparent caching. In fact, the relative frequency with which contents are requested follows a Zipf-like distribution, where the relative probability of a request for the n-th most popular content is inversely proportional to n^α, with α taking some value less than unity, typically ranging from 0.64 to 0.83 [27]. Therefore, it is enough to store a limited number of contents in the nearby transparent cache to achieve high values of the probability that the cache delivers the content, or hit-ratio (HR), so avoiding that a great number of requests and contents traverse the network. For α = 0.8, storing only 10% of the contents provides HR ≈ 50%, which means that the heavy video traffic is cut in half.
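This back-of-the-envelope figure can be reproduced directly from the Zipf-like law. The sketch below is illustrative; the 10,000-item catalog size is our assumption, and for a finite catalog of that size the hit-ratio comes out slightly above the 50% quoted in the text.

```python
def zipf_hit_ratio(catalog_size, cached_fraction, alpha):
    """Hit ratio when the most popular items are cached and request
    popularity follows a Zipf-like law p(n) proportional to 1 / n**alpha."""
    weights = [n ** -alpha for n in range(1, catalog_size + 1)]
    cached = int(cached_fraction * catalog_size)
    return sum(weights[:cached]) / sum(weights)

# 10% of a hypothetical 10,000-item catalog cached, alpha = 0.8:
hr = zipf_hit_ratio(10_000, 0.10, 0.8)  # roughly 0.55-0.60
```

The heavy head of the distribution is what makes a small edge cache effective: most of the probability mass sits in the first few percent of the catalog.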
Transparent caching is effective in reducing the downstream IP data traffic in the network segment between the application server and the ECC. The network upgrade needed to manage the IP traffic volume growth then has a much lower cost. Consequently, the ECC network architecture TCO is lower than the traditional (legacy) network TCO if the ECC network cost is lower than the upgrade cost for the network segment between the ECC and the application server, a condition easily met in practice [13,14].
Let us assume a three-level network model (Figure 3): core network, edge network (generally comprising a metro component and an aggregation component), and access network. To fix ideas, in order to provide a rough evaluation of the throughput advantage, we assume five core nodes (CNs) and 25 metro nodes (MNs), while the total number of access nodes (ANs) is nAN = 250. In our architecture, rings connect the MNs to the CNs (the average number of MNs per ring is five). ANs connect to MNs via rings, and the average number of access nodes per ring is ten.
Below, we only present the throughput performance improvement (a cost analysis is provided in [14]).
We evaluate the total traffic throughput improvement for the i-th node through the so-called Speed-Up, SU(i), defined as: SU(i) = THq(i) / TH(i), where TH(i) and THq(i), in Gbit/s, denote the total application throughput managed by the i-th node without and with an ECC platform, respectively. For simplicity, we conservatively assume that PLR is the same under both conditions. Therefore, SU is a function of the RTT ratio only (see eq. (1)): SU(i) = RTT(i) / RTTq(i), where RTT(i) and RTTq(i) are the round trip times without and with ECC, respectively. We model both RTT parameters as random variables and consider their average values. The speed-up provided by an ECC platform that uses a transparent cache and is located in the i-th node is then: SU(i) = 1 + HR(i) · (RTT(i)/RTTq(i) − 1), where the transparent cache hit-ratio, HR(i), is the cache efficiency, i.e., the probability that the content is delivered by the cache when the user plane IP layer is visible and the HTTP address is not encrypted, or can be decrypted by the Telco.
Therefore, the Network Speed-Up, NSU(i), for ECC platforms with transparent caches located in the first i access nodes, 1 ≤ i ≤ nAN, is the traffic-weighted average of the per-node speed-ups: NSU(i) = Σj=1..i w(j)·SU(j) + Σj=i+1..nAN w(j), where w(j) is the fraction of total network traffic managed by the j-th access node and the nodes not yet equipped contribute with a unitary speed-up. The total downstream network traffic is partitioned among metro nodes according to the law Y(i) = a · i^m with m = −0.6 (Figure 4(a)). The traffic of each metro node is then distributed to the access nodes according to the law Y'(i) = a' · i^m' with m' = −0.99 (Figure 4(b)). The fraction of traffic managed by each access node is then the product of the percentage of its metro node and the corresponding percentage of the access node. Parameters a and a' normalize the distributions to 1. These distributions were derived from a traffic analysis based on peak bandwidth data from Analysys Mason [28] and Cisco VNI [5], and on the end-user traffic distribution measured in Italy [29].
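The power-law traffic partition can be sketched numerically as follows; the helper name and the derived example figures are ours, while the node counts and exponents are those stated in the text.

```python
def power_law_shares(n_nodes, m):
    """Fraction of traffic handled by each node when traffic is
    partitioned as Y(i) = a * i**m, with a normalizing the sum to 1."""
    raw = [i ** m for i in range(1, n_nodes + 1)]
    a = 1.0 / sum(raw)
    return [a * y for y in raw]

metro = power_law_shares(25, -0.6)     # share per metro node
access = power_law_shares(10, -0.99)   # share per access node within one ring

# Fraction of total traffic at the busiest access node of the busiest
# metro node (product of the two shares, as described in the text):
top_share = metro[0] * access[0]
```

With these exponents the busiest metro node carries on the order of 14% of the total downstream traffic, which is why ECC deployment ordered by traffic (or cost saving) front-loads most of the benefit.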
The ECC platforms are located in the ANs starting from the node providing the highest cost saving (as defined in [14]). In Figure 5 we report the speed-up of each single access node (SU(i), dashed lines) and the network speed-up (NSU(i), solid lines), i.e., the speed-up related to the total network throughput, obtained by increasing the number of access nodes where the ECC platform is deployed. The access node indexes in the figure are ordered according to cost savings, i.e., the first node has the highest saving and the last the lowest. When ECC platforms are located in all the access nodes, the NSU reaches its highest value. Two scenarios are considered, with different average RTT values between the end-users and the access nodes and between the access nodes and the big internet: scenario A refers to a lower network SU = 1.75 and scenario B to a higher network SU = 3. Increasing the number of access nodes equipped with ECC improves the network SU and hence the network application throughput, if the network bit-rate is not the bottleneck.
In practice, values of SU in the range of about 2 to 4 can easily be obtained: if SU = 2, the throughput with transparent caches is twice the throughput without them.
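Under the assumption used throughout this section, namely that throughput scales inversely with RTT at equal PLR and that only the cached fraction of traffic enjoys the local round trip time, the per-node and network speed-ups can be sketched as follows (all names and figures are illustrative):

```python
def speed_up(rtt_far_ms, rtt_near_ms, hit_ratio):
    """Per-node speed-up with a transparent cache: the cached fraction
    of flows sees the local RTT, the rest still traverses the network.
    Assumes throughput inversely proportional to RTT at equal PLR."""
    return 1.0 + hit_ratio * (rtt_far_ms / rtt_near_ms - 1.0)

def network_speed_up(traffic_shares, per_node_su, equipped):
    """Traffic-weighted network speed-up when only the first `equipped`
    nodes (ordered by deployment priority) host an ECC platform;
    unequipped nodes contribute a unitary speed-up."""
    return sum(share * (su if i < equipped else 1.0)
               for i, (share, su) in enumerate(zip(traffic_shares, per_node_su)))

# Hypothetical node: RTT drops from 40 ms to 8 ms, cache hit-ratio 50%:
su = speed_up(rtt_far_ms=40.0, rtt_near_ms=8.0, hit_ratio=0.5)  # = 3.0
```

For instance, with four nodes carrying 40/30/20/10% of the traffic and SU = 3 at each, equipping only the first two already yields NSU = 2.4 of the 3.0 achievable with full deployment, illustrating the diminishing returns visible in Figure 5.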

A cost-saving approach to very high capacity network deployment
The deployment of VHC networks must ensure that end-to-end application KPIs are met, while cost-effective solutions must at the same time be devised for two main network segments: the access network, both fixed and mobile, and the IP network from the access POPs to the big internet interconnection.
The cost-effective ECC architecture can improve end-to-end application KPIs. Moreover, it can provide cost savings both for the fixed and mobile network from the access to the big internet, thanks to the distribution of contents and applications close to the end users, which significantly reduces the peak throughput, and for the mobile access network (Radio Access Network).
Considering network costs, ECC platforms can provide either cost savings or cost time displacement for mobile and fixed UBB networks, because they reduce RTT and PLR, so improving the bandwidth utilization ratio. Since today r is very different in UBB access networks (r << 1) and in the network segment from access to big internet (r ≈ 1), we should think of end-to-end performance so as to better equalize the efficiency of bandwidth use and improve the application performance for the end-user.
Contrary to this cost-effective approach to performance improvement, a strong push by Regulators towards improving only the fixed access networks' bit-rate, with no regard to the end-user QoE objective, is producing an unbalanced and inefficient outcome. The minimum fixed (and in many cases also mobile) access network bit-rate provided to end-users is generally much higher than the bit-rate available today for the applications in the Telco networks from the access POPs to the big internet interconnection.

Evolutionary approach to access network deployment
As discussed above, in both fixed and mobile access networks (with the possible exception of networks in rural areas) the bit-rate is, in general, much higher than that available between the access network and the big internet. The applications' performance limit is due to the bit-rate that applications can use in the network segment from the end-users' access points to the big internet, or to the distance (evaluated in terms of RTT and PLR) between the end-user device and the server providing the service. The significant increase in the bit-rate of fixed and mobile access networks, which often requires huge investments, does not therefore correspond to an equivalent improvement in application performance (due to the limits in the network segments from the access POPs to the big internet interconnection); moreover, these capital expenditures do not provide a revenue increase. This implies critical issues for the economic sustainability of Telcos' business models and can lead to low customer satisfaction.
While the EC is promoting a new objective, the achievement of VHC networks in Europe, the investment approach in the access network should change so as to become more gradual. To fix ideas with an idealized scenario, the plot in Figure 6 shows three curves corresponding to bit-rates of 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. Let us suppose that each curve requires a different fixed access technology: for the rightmost curve a copper network may be sufficient, for the intermediate one a copper-fiber hybrid network, and for the leftmost one an all-fiber network. Focusing on a brownfield network condition, each curve in the graph then corresponds to a different level of investment, more and more intense when moving from the right curve to the left curve. The concept is easily extended to mobile networks with different radio link frequencies, RAN bit-rates, and cell densities, as well as to convergent networks.
Now, let us think of the applications that run in these networks: as we have seen, video streaming applications will reach about 80% of the total traffic in two years, so let us stick with these applications only. In the evolution of the services carried by the networks, today video services are mainly SD, while HD services are beginning to penetrate the market. In the coming years we can expect video quality to increase, and 4K Ultra HD (UHD) is starting to appear, with 15 Mbit/s throughput for VoD and 25 Mbit/s for live. Over time, virtual and augmented reality services will begin to reach large masses of consumers. These services, as is the case for video alternatives (SD, HD, UHD), will go through several seasons of increasing quality demand, which in Figure 6 we mark through four stages: Early Stage (ES), Entry Level (EL), Advanced Experience (AE), and Ultimate Experience (UE) [30] (see Table 4).
Thinking about the transition from today's SD/HD video service to the future UE virtual and augmented reality service requiring the highest performance means moving from point (A) to point (B) in Figure 6. Of course, many transition paths are conceivable, but if we want to preserve the gradualness of investments without the client perceiving performance limitations as the more valuable services advance, the optimal path could be the one shown in Figure 6 with the dotted arrows.
The rapid vertical downhill transitions from one curve to another correspond to a change of technology in the access network, at unchanged RTT. The RTT can instead be improved by bringing contents and applications into the fixed access POPs, closer to the end-users. A lower RTT provides a higher TH and thus increases the bandwidth utilization ratio, r.
The main issue suggesting a gradual transformation of the fixed access network to FTTB/FTTH is the very low bit-rate utilization in the access network, TH being limited by the end-to-end bit-rate and by the distance between the server and the end-user device. The ECC rollout can be considered the key enabler for VHC networks, which will have to ensure the very challenging end-to-end performance of future services such as 360° virtual and augmented reality, not only for fixed services but also for the highly demanding mobile ones (MoVAR).
As we can see, a network will not be able to provide future MoVAR services unless its latency falls below one millisecond, besides ensuring Gigabit-per-second end-to-end speed per active end-user. Furthermore, if the latency is not reduced to such a level, any attempt to enlarge the bandwidth is useless and the associated investment is wasted.
Figure 6 shows how the evolution of Telco networks should ideally follow a path characterized by three phased elements: a) increase the quality and speed of all the applications and, in particular, of video services; b) decrease the RTT; c) increase the end-to-end bit-rate (not only in the access network); and so on.
Such an evolutionary approach is, at least in principle, optimal for network development. In fact, network investments can follow the demand curve, and the Telco operator can reinvest revenues in network enhancement without resorting too much to bank credit, minimizing the investment risk. By doing so, and considering that the amount of money to be invested is limited, the digital divide in a Country can be minimized, as there will be no overspending in the most profitable areas at the expense of the less attractive suburban and rural areas. This approach is therefore also inherently more socially responsible.
Obviously, being a "continuum" model, this approach cannot be followed in a straightforward way, due to several kinds of practical constraints. However, keeping it in mind can help avoid big mistakes in network planning, such as anticipating too early the expensive roll-out of fiber optics to homes without taking advantage, at the right time, of technologies such as AI-based network surveillance/control and edge-cloud computing.

An exemplary case: Mobile Virtual and Augmented Reality
One of the most challenging future services will be 360° MoVAR. It is difficult to provide even with present 5G, as well as with fiber optics terminated on Wi-Fi. In fact, providing the end-user with the so-called "ultimate experience" will require extremely high guaranteed throughput (in the order of 1 Gbit/s) and latency well beyond presently achievable values (i.e., ≤ 1 ms).
To provide 360° MoVAR, the key performance indicators are very challenging. For complete solid-angle spatial rendering, the human vision field of view (FOV) must be accounted for: the horizontal FOV is about 180° and the vertical FOV about 135°. Binocular vision, which is the basis for stereopsis and is important for 3D vision, covers slightly less than 120° of horizontal FOV. Foveal vision covers ±30°, while the remaining peripheral vision, up to 90° on each side, is not binocular (only one eye can see). In this external region, the human brain mostly perceives movement, so that it can almost instantly rotate the eyes.
Displays' FOV cannot be too limited, as this would contrast the immersive effect and, even worse, can induce the "simulator sickness" syndrome. Therefore, displays now use foveated rendering, i.e., graphic rendering with eye tracking integrated into the virtual reality interface, which significantly compresses the image quality in the peripheral vision to reduce the rendering workload by about ten times.
Today both objectives are often not met, mainly due to the push to increase the access networks' bit-rate without considering the end-to-end application KPIs. To this aim, the Telco network architecture must change: improve lower-layer performance through AI-based algorithms, and deploy ECC platforms that distribute contents and applications close to end-users, providing both performance improvements and network cost savings. Actually, the continuous run towards higher and higher access bit-rates, without feedback control on the actual end-to-end application throughput, is a "UBB trap" which should be avoided as much as possible, as it certainly brings about increased costs while the end-user benefit is uncertain.
In this paper, we proposed working on the end-to-end optimization of the bandwidth utilization ratio, r, because it leads to better performance experienced by the end-user, provides a sustainable cost for the Telco operators, and may enable new business models and revenue streams. We showed that there is margin to achieve the VHC network targets of the EC, if the definitions of VHC networks are focused on the end-to-end connection, leveraging the edge-cloud computing architecture. By better aligning investments and service demand according to the conceptual approach delineated in this paper, Europe may avail of a network always at the QoE level requested by citizens. While not delaying the Gigabit/s throughput needed for the most demanding services, investments can be leveled so that rural areas are not left behind due to an otherwise unavoidable concentration of scarce economic resources in urban and other heavy-traffic areas. As an additional benefit, less burden can be expected for the Member States, as Telco operators, both mobile and fixed, will avail of more economic resources and an increased willingness to invest, due to limited risk.
This paper attempted to organize ideas collected through years of examination of the European policies on telecommunications, and was written while Europe is being hit by a catastrophic virus epidemic and a potentially systemic economic downturn. If this unexpected crisis has something to show policy-makers, it is that no Country or European region can wait for fiber deployment everywhere: they need somewhat higher speeds for the capillary provision of services such as work from home and distance learning now, and at a much lower investment in terms of both money and time. This is a big paradigm shift compared to what is generally retained in some policy-maker circles. Three areas of concurrent development are urgent, and already implementable with state-of-the-art technology: (i) reduced video bit-rate through new compression standards; (ii) reduced router-independent packet loss component through network surveillance and control algorithms based on AI tools; (iii) development of the edge-cloud computing architecture to jointly improve application performance and reduce the network TCO. These areas are not completely under the Telco operators' control; however, a holistic approach by Regulators could push the market in what we believe is the right direction, promoting progress in the telecommunication industry and improving the Telcos' market positioning.

Figure 2: Average peak bandwidth / total lines in Italian fixed networks measured over two years (Source: AGCOM, 2018).

Figure 3: Reference three-level network architecture and model key parameters.
Figure 4: Downstream traffic distribution among metro nodes (a) and among access nodes (b).

Figure 5: Network Speed-Up vs. number of access nodes with ECC, for scenario A (NSU = 1.75) and scenario B (NSU = 3).

Figure 6: Conceptual evolution of a network to provide new services while continuously readapting the network infrastructure.