Key Performance Indicators of the Reference 6TiSCH Implementation in Internet-of-Things Scenarios

Tens of thousands of wireless industrial monitoring deployments exist today, logging more than 18 billion operating hours. These solutions have been around for over a decade and are based on standards such as WirelessHART and ISA100.11a to provide performance guarantees to the applications. The new trend in industrial deployments is the convergence of operational and information technologies, happening through the Industrial Internet of Things (IIoT) paradigm. The challenge is to bridge the performance of these well-proven industrial standards with the interoperability of IP-based systems. The Internet Engineering Task Force (IETF), the organization behind most of the technical solutions of the Internet, has produced a set of specifications with this requirement in mind. The output of this effort is the 6TiSCH protocol stack, based on open standards such as those that have played a key role in the Internet's ubiquitous adoption. The standardization of 6TiSCH is complete, and state-of-the-art research now focuses on important, but niche, optimizations and performance evaluations of the 6TiSCH stack. This paper takes a different approach: it evaluates the performance of the standards-compliant 6TiSCH solution from the end-user point of view. It does so on two experimental testbeds, in typical IoT test scenarios, based on a well-defined experimentation methodology. We provide a set of Key Performance Indicators (KPIs) useful for the end user to decide whether the 6TiSCH technology is a good fit performance-wise for a particular use case. We demonstrate reliability of a vanilla open-source implementation of 6TiSCH above 99.99%, upstream latency on the order of a second, and radio duty cycle well below 1%.


I. INTRODUCTION
The Industrial Internet of Things (IIoT) introduces the convergence of operational and information technologies in industrial deployments. It facilitates their integration with novel web-based systems through the use of interoperable solutions. The de-facto wireless communication technology in industrial applications is Timeslotted Channel Hopping (TSCH), used for more than a decade in standards such as WirelessHART and ISA100.11a. Through the work of the Internet Engineering Task Force (IETF) and its 6TiSCH working group, TSCH technology is now ready to be used in IPv6 networks. The result of this effort, which spanned several years and brought together academic and industrial participants, is the 6TiSCH protocol stack. The 6TiSCH stack combines the performance of existing industrial standards with the Internet's IPv6 interoperability. The stack is based on open standards, such as those that have played a key role in the Internet's ubiquitous adoption. The goal of this paper is to define Key Performance Indicators (KPIs) of the 6TiSCH stack and a methodology for their collection, and to present the results of an extensive experimentation campaign using a reference 6TiSCH implementation. The 6TiSCH protocol stack is based on a modular architecture. A key component influencing the performance of the stack is the ''Scheduling Function'' (SF). The 6TiSCH working group standardized one example of a scheduling function, called the Minimal Scheduling Function (MSF) [1], that is suited for best-effort traffic. A wide variety of scheduling functions have been proposed in the academic literature [2]–[6], each tailored to different application requirements.
With many SFs available, how can one compare their performance in the context of different application requirements?
While there are many academic papers published on 6TiSCH, they typically discuss niche optimizations and their related performance improvements. While often very thorough, such evaluations fail to give a high-level view of the performance of the technology. The end users, e.g. product designers, are then left with a scattered view when deciding whether to use a given technology. Unbiased performance benchmark results are hard to find for other IoT technologies as well, despite the plethora of academic proposals and evaluations available. We therefore approach the problem of an unbiased performance evaluation of the 6TiSCH protocol stack, as standardized by the IETF. We do not propose new optimizations, but rather evaluate the standards-compliant solution. We produce the KPIs that an industrial user would expect before deciding whether a technology suits their requirements.
To achieve this, we design a novel software-based platform called OpenBenchmark, which uses a black-box approach to benchmarking a 6TiSCH implementation. The concept of the platform is that the user should not worry about network specifics, but rather obtain high-level KPIs of a 6TiSCH implementation. The black-box approach facilitates the use of the platform by users who are not experts in low-power networking and firmware design. The user uploads the 6TiSCH firmware image, selects the test scenario and launches the experiment (see Fig. 1). The platform takes care of testbed resource provisioning, firmware programming, data collection and processing, and presents the user with a set of KPIs. (This article is an extension of the paper [7] published at the INFOCOM 2019 CNERT workshop; this version complements it with the KPIs produced through an extensive experimentation campaign performed using the OpenBenchmark platform.)
In order for the benchmark to be valuable to industrial users, OpenBenchmark instruments the firmware in real time during the experiment to adhere to a given test scenario. Test scenarios are defined to capture real-life use cases of a technology and therefore test its applicability. Since the test environment, i.e. a testbed, often plays an important role in performance results, the platform allows the experiments to be executed on different testbeds. For the purpose of this paper, we evaluate the reference 6TiSCH implementation, the OpenWSN stack [8], in industrial monitoring and home automation scenarios, each on two different testbeds to give performance insights.
In both scenarios, we observed reliability above 99%, which, depending on the test environment, goes up to 99.99%. The observed latency was on the order of a second and the radio duty cycle was well below 1%. It is important to stress that these results come from a vanilla open-source implementation of 6TiSCH. As each implementation can make different choices when implementing the standard, the performance of implementations is likely to vary. As a consequence, these results should not be generalized as the ''performance of 6TiSCH''. They should rather be seen as an example of a baseline when a reference open-source implementation of 6TiSCH is used. Furthermore, if application requirements are known in advance, many enhancements are possible. However, such optimizations are out of the scope of this work.
The contribution of this paper is threefold:
• We obtain performance datasets of a reference 6TiSCH implementation in two test scenarios on two different testbeds, and publish them under an open-data license;
• We analyze the datasets and discuss the KPIs of the reference 6TiSCH implementation in each case;
• We design and implement OpenBenchmark in open source, enabling the community to leverage it for further evaluations or comparisons.
OpenBenchmark was developed as part of the SODA project [9] at the University of Montenegro. The remainder of the article is organized as follows. Section II summarizes the related work on 6TiSCH performance evaluation. Section III presents the design of OpenBenchmark. Section IV details the obtained KPIs in both test scenarios. Section V concludes this article.

II. RELATED WORK
The work on standardizing 6TiSCH is complete. The core documents [1], [10]–[13] have been published or are in the process of becoming Requests for Comments (RFCs). During the process, 6TiSCH has sparked the interest of different communities, including open-source implementation projects, standardization and research.
The reference 6TiSCH implementation, used during ETSI interoperability testing events, is the OpenWSN stack [8]. The two other major IoT open-source projects, Contiki-NG [14] and RIOT [15], also implement 6TiSCH. The 6TiSCH simulator [16] is a Python-based discrete-event simulation tool focused exclusively on 6TiSCH. Other tools have been developed focusing on interoperability and conformance testing of 6TiSCH implementations [17].
The performance evaluation of 6TiSCH networks has been a subject of interest of many academic works. The SF is the major component influencing the performance of the stack, as it constructs the communication schedule of the network. It therefore comes as no surprise that the majority of the work in the literature proposes new scheduling functions [18]; examples include DeTAS [2], the work of Morell et al. [3], ReSF [4], LLSF [5] and TREE [6]. Other work focuses on optimizing the join process [19], [20], the interplay with routing [21], co-existence [22], and applications [23], including time-critical scenarios [24], [25].
Many of these works evaluate their proposals in realistic conditions on different testbeds. While often very thorough, in the majority of cases each work benchmarks its particular proposal without following a common methodology or scenario. One consequence of this practice is that it is hard for an industrial user to find a comprehensive evaluation that is useful from the application-requirements point of view. Our article fills this gap by defining and following a methodology to evaluate the 6TiSCH network in scenarios relevant to the applications.

III. OPENBENCHMARK PLATFORM
OpenBenchmark automates the experimentation and network performance benchmarking on selected testbeds supporting Internet of Things devices compliant with the IEEE802.15.4 standard. OpenBenchmark instruments the execution of an experiment in real time, following the pre-defined test scenarios, and collects the data to calculate the network KPIs in a fully automated manner.
Test scenarios are generic and derived from industrial requirements. A test scenario is mapped to an executable logic that runs concurrently with the experiment in the testbed. OpenBenchmark sends commands to trigger the desired actions of the firmware, e.g. configure the radio transmit power or trigger the sending of an application packet. The commands are sent to the Network Gateway, which processes and translates them into the potentially proprietary format expected by the firmware Implementation Under Test (IUT). The Network Gateway may run on the testbed infrastructure and be physically connected to the serial port of the IUTs, or run on OpenBenchmark premises and communicate with the IUTs over an emulated serial port. This emulated serial port is provided through the software component of the companion OpenTestbed project [26], which transports the serial data over the MQTT protocol. OpenBenchmark provides the necessary integration and provisioning of the OpenTestbed software on supported testbeds, such that this complexity is hidden from the user. This allows the user to focus on the protocol aspects of the firmware, while the performance evaluation is entirely handled by OpenBenchmark through the Application Programming Interfaces (APIs) exposed by compliant firmware projects.
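To make the command flow concrete, the following is a minimal sketch of how a benchmarking engine could publish such a command towards the Network Gateway over MQTT. The broker address, topic layout and payload fields are our own illustrative assumptions, not the actual OpenBenchmark API.

```python
# Illustrative sketch of the OpenBenchmark-to-Gateway command flow.
# Broker address, topic layout and payload fields are assumptions,
# not the actual OpenBenchmark API. Uses the paho-mqtt 1.x client API.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"                      # assumed broker
CMD_TOPIC = "openbenchmark/{gateway}/command"      # assumed topic layout

client = mqtt.Client()
client.connect(BROKER, 1883)

def configure_tx_power(gateway: str, node: str, power_dbm: int) -> None:
    """Ask the Network Gateway to set a node's radio transmit power."""
    command = {"type": "configureTransmitPower",
               "node": node,
               "power_dbm": power_dbm}
    client.publish(CMD_TOPIC.format(gateway=gateway), json.dumps(command))

configure_tx_power("wilab-gw", "nuc28-a", 0)       # e.g. 0 dBm
```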

A. TOKEN-BASED BENCHMARKING
The benchmarking process of OpenBenchmark is based on random tokens. OpenBenchmark sends commands to the System Under Test (SUT) in real time, instrumenting it so that a node in the 6TiSCH network initiates the sending of an application packet. The command contains a 5-byte token that is to be transferred over the network by the originator node. Fig. 2 illustrates this process, with OpenBenchmark instrumenting node E to send an application packet with the random token 3424 to node A. The command is received by the SUT Gateway and translated into the format understood by the 6TiSCH Implementation Under Test (IUT). Upon reception of the command, node E prepares an application packet and includes the token 3424 in its payload. The SUT generates an MQTT event, packetSent, that is handled by OpenBenchmark, communicating the time instant at which the packet was sent, as well as other information necessary to calculate the KPIs. The packet is then handled by the 6TiSCH network and, upon reception at node A, a new MQTT event is generated: packetReceived. Each pair of packetSent and packetReceived events allows OpenBenchmark to calculate the latency of the packet and the number of hops it traversed. The absence of a packetReceived event indicates to OpenBenchmark that the packet has been dropped in the network, which consequently impacts the reliability.
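The following minimal Python sketch shows how such token-matched event pairs can be turned into latency, hop-count and reliability figures. The event field names (type, token, timestamp, hops) are assumptions based on the description above, not the exact OpenBenchmark event format.

```python
# Minimal sketch: deriving latency, hop count and reliability from
# token-matched packetSent/packetReceived events. Field names are
# assumptions based on the description in the text.
sent = {}        # token -> packetSent event
received = {}    # token -> packetReceived event

def on_event(event: dict) -> None:
    """Store each incoming MQTT event under its 5-byte token."""
    if event["type"] == "packetSent":
        sent[event["token"]] = event
    elif event["type"] == "packetReceived":
        received[event["token"]] = event

def compute_kpis():
    """Per-packet latency and hop count; reliability as delivered/sent."""
    latencies = {t: received[t]["timestamp"] - sent[t]["timestamp"]
                 for t in received if t in sent}
    hops = {t: received[t]["hops"] for t in received if t in sent}
    reliability = len(latencies) / len(sent) if sent else float("nan")
    return latencies, hops, reliability
```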
One deficiency of the proposed design lies in the non-deterministic network delays between OpenBenchmark and the SUT Gateway. Since the commands that trigger the sending of a packet in the network are sent in real time, these delays influence the reproducibility of the platform. To overcome this challenge, it would be necessary to implement a timestamp-based approach, where OpenBenchmark communicates the exact timestamp at which the SUT Gateway should trigger the sending of an application packet in the network. The implementation of such a timestamp-based approach is part of our future work.
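As a rough sketch of what such a timestamp-based approach could look like on the SUT Gateway side (our own illustration of the future-work idea, not an implemented feature):

```python
# Sketch of the envisioned timestamp-based triggering (future work, our
# illustration): the Gateway schedules the trigger for an absolute time
# chosen by OpenBenchmark, so that the variable delivery delay of the
# command no longer shifts the moment of transmission.
import threading
import time

def schedule_trigger(trigger_at: float, send_packet) -> None:
    """Fire send_packet at absolute Unix time trigger_at; assumes the
    Gateway clock is synchronized, e.g. via NTP."""
    delay = max(0.0, trigger_at - time.time())
    threading.Timer(delay, send_packet).start()

# OpenBenchmark would pick a timestamp far enough in the future to
# absorb the worst-case delivery delay of the command itself.
schedule_trigger(time.time() + 5.0, lambda: print("trigger packet send"))
```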

B. SOFTWARE ARCHITECTURE
The OpenBenchmark platform consists of the following components (see Fig. 3):
• Web server. A Laravel-based (PHP) backend and a Vue.js-based frontend allowing the user to access the OpenBenchmark platform through a graphical interface. The backend serves as a bridge between the frontend and the rest of the OpenBenchmark components, which are implemented in Python. The backend provides a RESTful API that enables the use of OpenBenchmark by 3rd-party applications.

C. TEST SCENARIOS
The goal of an OpenBenchmark test scenario is to capture real-life use cases of a technology in order to benchmark its performance in a setting that is relevant to the end users: companies adopting the technology for their products and their customers. A test scenario also allows the experiment to be fully reproducible and the results easily and fairly comparable, desirable properties from a research point of view. Each scenario describes the application traffic pattern and load, and the desirable coverage requirements in terms of number of IEEE802.15.4 hops. At a later stage, we plan on adding support for controllable interference generation. The description of a scenario is generic, with testbed-specific mappings.

1) SCENARIO DEFINITION: HOME AUTOMATION
Home automation systems typically consist of sensors monitoring some physical quantity, event sensors triggered by human action such as a button press, and different actuators. They are controlled by a central Control Unit (CU). The traffic consists of a mix of upstream and downstream flows. The scenario has been derived from the requirements discussed in RFC5826 [27] and the emulated topology of a smart house discussed in Vučinić et al. [28]. Tables 1 and 2 summarize the different logical roles a node in the network can have and the traffic pattern for each logical role.

2) SCENARIO DEFINITION: INDUSTRIAL MONITORING
Industrial monitoring systems can be generalized to consist of two types of sensors: 1) traditional monitoring sensors for temperature, pressure, fluid flow, etc.; 2) sensors that transmit large quantities of data, for example vibration monitors. They are controlled by a central Gateway. The traffic is typically upstream. Tables 3 and 4 summarize the different logical roles a node in the network can have and the traffic pattern for each logical role. The scenario has been derived from the requirements discussed in RFC5673 [29].

D. KEY PERFORMANCE INDICATORS (KPIs)
In the following, we give a brief summary of the implemented KPIs.

1) RELIABILITY
Refers to the ratio between packets received and packets sent by the application; this KPI therefore indicates end-to-end reliability. A packet may fail a transmission on a given link and later be re-transmitted; such a failed link-layer transmission does not influence the end-to-end reliability if the packet eventually arrives at the destination. We present separately upstream reliability, referring to the packets destined for the Network Gateway; downstream reliability, referring to the packets originated by the Network Gateway and destined for one of the nodes in the 6TiSCH network; and P2P reliability, referring to the packets exchanged between a pair of 6TiSCH nodes.

2) LATENCY
Refers to the time interval between the instant a packet is generated at the application layer of the sender and the instant the packet is received by the application layer of the destination. We present separately upstream latency, downstream latency and P2P latency.

3) RADIO DUTY CYCLE (RDC)
Refers to the ratio between the cumulative time that the radio chip is powered and the measurement period. We present separately average duty cycle, minimal duty cycle and maximal duty cycle.

4) NETWORK FORMATION TIME
Refers to the duration of the initial phase during which the network forms, measured until the end of the secure joining phase. It is an important KPI from the installation point of view.

E. EXAMPLE USE CASES
We envision three main use cases of OpenBenchmark, with different target groups: IoT industry stakeholders, research community and firmware developers.

1) REFERENCE BENCHMARK OF AN IoT TECHNOLOGY
Although there are many variants of IoT communication stacks (e.g. 6TiSCH, WirelessHART, ZigBee, ZigBee IP, Thread), it is quite challenging to point to a document that gives a fair and industry-relevant performance comparison among them. We designed OpenBenchmark to be used to tackle this challenge.

2) RESEARCH PROPOSAL BENCHMARKING
The research community also benefits from OpenBenchmark. We hope to attract researchers to use our benchmarking service for the evaluation of their research proposals. OpenBenchmark facilitates the extraction of experiment data by hiding unnecessary testbed complexity. Moreover, it also leads to increased confidence in the results: OpenBenchmark is open source in its entirety and can be reviewed and improved by the community.

3) CONTINUOUS DELIVERY BENCHMARKING
Firmware always evolves. Updates to the standards, newly discovered security vulnerabilities in the code, and new features all require the firmware development community to constantly update the code base of different IoT open-source projects. The best practices of continuous integration testing are already in place for the popular repositories. However, unit and functional testing do not indicate whether a software patch introduces unwanted performance regressions. Does the proposed patch improve or degrade existing performance? In what conditions was the ''existing performance'' measured a couple of years ago, when the feature was first merged? To answer such questions, OpenBenchmark is designed to provide a ''continuous delivery benchmarking'' service to firmware developers. We are working on integrating OpenBenchmark with the continuous integration procedures of the OpenWSN firmware project, the reference implementation of the 6TiSCH protocol stack. This allows the code maintainers to run automated nightly experiments and assess the performance of the latest patches before their release.
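As an illustration of how a nightly job could drive such a service, the sketch below assumes a hypothetical REST API; the endpoints and JSON fields are our own, since the text only states that the backend exposes a RESTful API.

```python
# Hypothetical nightly continuous-delivery benchmarking job. The REST
# endpoints and JSON fields below are illustrative assumptions, not the
# actual OpenBenchmark API.
import time
import requests

API = "https://openbenchmark.example.org/api"      # assumed base URL

def run_nightly(firmware_path: str) -> dict:
    """Upload last night's firmware build, run a scenario, return KPIs."""
    with open(firmware_path, "rb") as f:
        resp = requests.post(f"{API}/experiments",
                             files={"firmware": f},
                             data={"scenario": "industrial-monitoring",
                                   "testbed": "w-iLab.t"})
    experiment_id = resp.json()["id"]
    while True:                                    # poll for completion
        status = requests.get(f"{API}/experiments/{experiment_id}").json()
        if status["state"] == "finished":
            return status["kpis"]                  # compare with baseline
        time.sleep(60)
```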

IV. PERFORMANCE EVALUATION

A. METHODOLOGY
The two test scenarios defined in Section III-C were instantiated and executed in order to collect data on two testbeds: Fed4Fire's w-iLab.t [30] in Ghent and Inria's OpenTestbed [26] in Paris. The data collection procedure was as follows. Each scenario was instantiated for a total of 30 nodes in a generic setting, including the root of the network. Then, a mapping was provided for each testbed, consisting of the testbed node_id to use, as well as the radio transmission power to be configured by OpenBenchmark. Listing 1 illustrates an example scenario instantiation and its mapping on the w-iLab.t testbed.
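Listing 1 itself is not reproduced in this text. Purely as an illustration, an instantiation in that spirit could look like the following sketch, where all field names are our own assumptions rather than the exact OpenBenchmark format.

```python
# Illustration only: a scenario instantiation and its w-iLab.t mapping
# in the spirit of Listing 1 (not reproduced here). All field names are
# assumptions, not the exact OpenBenchmark format.
scenario = {
    "identifier": "industrial-monitoring",
    "duration_min": 210,                  # 3 h 30 min per run
    "number_of_nodes": 30,
    "nodes": {
        "openbenchmark00": {"role": "gateway"},
        "openbenchmark01": {"role": "monitoring-sensor",
                            # seconds at which packets are triggered
                            "traffic_sending_points": [12.3, 43.9, 78.1]},
    },
}
wilabt_mapping = {                        # generic id -> testbed specifics
    "openbenchmark00": {"node_id": "nuc28-a", "transmission_power_dbm": 0},
    "openbenchmark01": {"node_id": "nuc29-a", "transmission_power_dbm": 0},
}
```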
The duration of each scenario execution was set to 3 hours and 30 minutes, with 30 minutes of allowance time for the network to form and stabilize before the benchmarking process would begin. We executed the two scenarios on w-iLab.t's Datacenter deployment using nodes nuc28 to nuc43, each equipped with a pair of Zolertia RE-Mote (Rev. B) boards. On OpenTestbed, we executed the scenarios using 30 OpenMote-B nodes in Building A of the Inria-Paris deployment. In both cases, each scenario was executed using the same nodes, allowing us to compare: 1) performance across scenarios; 2) performance across different testbeds and radio propagation conditions. We used the vanilla OpenWSN open-source project, with the main parameters specified in Table 5.
We present KPIs in tabular form, except for the network formation time, which is presented as a Cumulative Distribution Function (CDF). For each KPI, we present the mean value, minimum, maximum and the 99th percentile (P99, i.e. the value below which 99% of observations fall) over at least 10 experiment runs. For example, if the discussed KPI is average latency, we present the mean, minimum, maximum and P99 values over the experiment runs, where each measurement is the average latency in the network.
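As a minimal sketch of this aggregation, assuming plain Python and one measurement per run:

```python
# Sketch of the per-KPI aggregation described above: each experiment run
# yields one measurement (e.g. the network-wide average latency), and we
# report mean, min, max and the 99th percentile over the runs.
import statistics

def summarize(per_run_values: list) -> dict:
    return {
        "mean": statistics.mean(per_run_values),
        "min": min(per_run_values),
        "max": max(per_run_values),
        # last of the 99 cut points splitting the data into 100 groups
        # (requires Python 3.8+)
        "p99": statistics.quantiles(per_run_values, n=100)[-1],
    }

# e.g. average upstream latency (seconds) over 10 runs:
print(summarize([3.4, 3.6, 3.5, 3.7, 3.4, 3.5, 3.8, 3.6, 3.5, 3.6]))
```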

B. NETWORK FORMATION TIME
All the scenarios were executed using the same radio transmit power. As a consequence, and due to the fixed physical topologies in the testbeds, the network formation time KPI is common across the scenarios. The plotted CDF (see Fig. 4) contains node join times across different scenarios.
From Fig. 4 we can see that it takes less than 20 minutes to form a 30-node network. This time is acceptable from the installation point of view, as it does not require installers to spend an unreasonable amount of time on-site once the network is deployed. The time is consistent across the testbeds, which is interesting given that the deployments are quite different: the w-iLab.t deployment used is the Datacenter one, where all nodes have line-of-sight visibility of each other, while OpenTestbed is deployed in a smart-office setting across a floor of Inria-Paris Building A. Even so, the network formed on w-iLab.t had a logical topology similar to the one formed on OpenTestbed in terms of the number of hops each packet needs to traverse: the average number of hops was 2.5 on w-iLab.t and 2.6 on the OpenTestbed deployment in Paris.

C. INDUSTRIAL MONITORING
The industrial monitoring scenario consists exclusively of upstream traffic, with occasional bursts coming from the bursty sensor node type. Each scenario execution run consisted of 10,861 packets sent by different nodes in the network.

1) RELIABILITY
Table 6 and Table 7 present the calculated reliability in the network for upstream and bursty traffic, respectively. During the experiments on the w-iLab.t testbed in the Datacenter deployment, we observed four nines of reliability, with some experiment runs showing no losses at all. The same scenario executed on OpenTestbed showed greater losses, equivalent to 99.47% upstream reliability. One explanation for this result is the radio interference present in the OpenTestbed deployment, causing higher losses on the radio channel.
We further studied the reliability of the traffic belonging to a burst and present the results in Table 7. We can see that during the experiment runs on w-iLab.t not a single packet belonging to a burst was lost, which is not the case for the runs executed on the OpenTestbed deployment.

2) LATENCY
Table 8 presents the latency observed during the experiments, in TSCH slots and the equivalent in seconds for the 20ms slot length used in the experiments. The interesting point to note here is that the results from the two testbeds are quite similar. This is a consequence of the logical network topologies built, with the average hop distance from the root being less than 3 hops in both cases.

We further studied the latency of packets belonging to a burst and present the results in Table 9. We can see that the average latency of packets belonging to a burst is higher by a factor of 3, due to queuing in the nodes' buffers. Interestingly, the observation of similar latencies on the two testbeds does not hold in the case of bursty traffic.

3) RADIO DUTY CYCLE
Radio duty cycle is an important KPI from the energy consumption point of view, as the radio transceiver typically accounts for the majority of the current drawn by an IoT device. We present the average duty cycle for the network in Table 10, and the best-case and worst-case observations in Table 11 and Table 12, respectively, where best case (worst case) refers to the lowest (highest) duty cycle observed in a run.
From Table 11, we can see that the best-case result is quite consistent across the two testbeds and amounts to approximately 0.5%. The worst-case duty cycle in the network (see Table 12) is around 1.8% for the network formed on w-iLab.t and around 3.2% for the network formed on OpenTestbed. This is a consequence of the logical topology of the networks formed, as nodes closer to the root have more data to forward than the leaf nodes. At 5 mA drawn by the radio, a figure typical for state-of-the-art radio transceivers, this results in an average current draw from the radio of about 90 µA on w-iLab.t and 160 µA on OpenTestbed. To put these numbers into context, consider that a typical AA battery holds 2200 mAh: a worst-case node would therefore last approximately 2.8 years on a pair of AA batteries on w-iLab.t and 1.6 years on OpenTestbed, disregarding the microcontroller and sensor consumption. For comparison, the best-case node, at a radio duty cycle of 0.5%, would need over 10 years to deplete a pair of AA batteries.
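The arithmetic behind these lifetime figures can be reproduced in a few lines. The 5 mA radio current and 2200 mAh capacity follow the text; everything else (MCU, sensors, battery self-discharge) is ignored.

```python
# Worked version of the battery-lifetime estimate above. Figures follow
# the text: 5 mA radio current, 2200 mAh for a pair of AA cells in series
# (series doubles voltage, not capacity); MCU and sensors are ignored.
RADIO_CURRENT_MA = 5.0
CAPACITY_MAH = 2200.0
HOURS_PER_YEAR = 24 * 365

def lifetime_years(duty_cycle: float) -> float:
    avg_current_ma = RADIO_CURRENT_MA * duty_cycle  # 0.018 -> 0.09 mA (90 uA)
    return CAPACITY_MAH / avg_current_ma / HOURS_PER_YEAR

print(lifetime_years(0.018))   # worst case, w-iLab.t:    ~2.8 years
print(lifetime_years(0.032))   # worst case, OpenTestbed: ~1.6 years
print(lifetime_years(0.005))   # best case:               ~10 years
```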

D. HOME AUTOMATION
The home automation scenario consists of a mix of upstream and downstream traffic. The downstream traffic consists of bursts, as well as application-layer acknowledgment packets. Each scenario run, lasting 3 hours and 30 minutes, consisted of 1272 packets being sent by different nodes in the network.

1) RELIABILITY
The observed reliability of upstream traffic in the home-automation scenario is presented in Table 13. We can see that the average observed on the w-iLab.t testbed is around 99.7%, while the same KPI observed on the OpenTestbed deployment is 98.05%. We attribute this difference to the radio interference and different propagation conditions on the two testbeds.
For the case of downstream bursts, reliability is presented in Table 14. In both cases, downstream burst reliability is around 97%. The losses are attributed to queue overflows, due to the bursty nature of the traffic and the slow link-capacity adaptation algorithm.

2) LATENCY
Table 15 presents the observed latency of upstream traffic. We observed an average latency of 3.5 seconds on w-iLab.t and 4.8 seconds on OpenTestbed. The higher latency on OpenTestbed is partly the result of the deeper networks formed during the home automation scenario runs, where each packet traversed on average 2.86 hops, compared with 2.62 hops on w-iLab.t. Table 16 presents the latency results for downstream bursty traffic. The observed latency for packets within a burst was 8.7 seconds on w-iLab.t and 12.2 seconds on OpenTestbed. It is important to note that this result could be improved by using shorter slots: the default slot length in IEEE802.15.4 TSCH is 10ms, instead of the 20ms used in the OpenWSN reference image. Indeed, using 10ms slots would halve the absolute latency in seconds.
Finally, Table 17 presents the latency for non-bursty downstream traffic. The observed latency was 3.96 seconds on the w-iLab.t testbed and 5.2 seconds on OpenTestbed.

3) RADIO DUTY CYCLE
Table 18, Table 19 and Table 20 present the radio duty cycle observed in the network while the application traffic follows the home automation scenario. Compared to the industrial monitoring scenario, where the traffic load is higher, the duty cycle results are even better in the home automation case. The worst-case duty cycle in the network for home automation was observed at 1.49% for w-iLab.t and 1.68% for OpenTestbed.

V. CONCLUSION
This article presented the design of OpenBenchmark, a benchmarking platform for IoT use cases, and the benchmarking results of the reference implementation of the 6TiSCH protocol stack, the OpenWSN open-source project. OpenBenchmark is designed with end users in mind: it abstracts network and firmware specifics from the user and, as an output, presents the user with a set of KPIs relevant from the industrial point of view. The platform is also useful for evaluating research proposals using a well-defined methodology and a common set of KPIs. The source code of OpenBenchmark is available as open source.
We used OpenBenchmark to evaluate the performance of the reference implementation of 6TiSCH in industrial monitoring and home automation test scenarios. Each scenario was executed in two different radio environments, Inria's OpenTestbed in Paris, France and w-iLab.t in Ghent, Belgium.
From the results presented in the previous section, we draw some key takeaways with respect to the applicability of 6TiSCH to different application domains. In the industrial monitoring scenario, the observed reliability was above 99%, with experiment runs regularly showing 100% reliability. We observed high reliability also in the home automation scenario, where some traffic is generated according to a Poisson distribution, mimicking human actions. In both scenarios, the observed latency can reach up to 12 s for bursty traffic. This result can easily be improved by using shorter TSCH slot lengths or scheduling approaches tailored to interactive applications. In both scenarios, the observed average radio duty cycle was below 1%, attesting to the low-power nature of the 6TiSCH technology. While battery lifetime is a board-level aspect, with the attached sensors and the microcontroller also playing an important role, we could see that the consumption of the radio transceiver was negligible and would alone allow a battery lifetime of over 10 years on a pair of AA batteries.
Finally, it is important to note that we used a vanilla version of the OpenWSN firmware image of 6TiSCH, without optimizations for any specific use case. Knowing the application traffic patterns and load in advance, it is straightforward to further tune the solution, for example to find a different trade-off between latency and energy consumption.
As part of our future work, we plan on extending OpenBenchmark to other IoT technologies and platforms. Indeed, it would be interesting to compare results between different IoT technologies for common application traffic patterns, as defined by our test scenarios.