Multitenant Containers as a Service (CaaS) for Clouds and Edge Clouds

In recent years, along with containers, the cloud community has rapidly taken up Kubernetes, the de facto industry standard container orchestration system. All major cloud providers currently offer Kubernetes-based Containers as a Service (CaaS). However, when CaaS is offered to multiple independent consumers, or tenants, a multi-instance approach is used, in which each tenant receives its own separate cluster, which imposes significant overhead due to employing virtual machines for isolation. If CaaS is to be offered not only in the cloud, but also in the edge cloud, where resources are limited, another solution is required. In this paper, drawing upon the scientific literature, we provide a novel classification of Kubernetes multitenancy into three approaches: multi-instance through multiple clusters, multi-instance through multiple control planes, and single-instance native. We propose a single-instance multitenancy framework, meaning tenants are served out of a shared control plane in a single cluster. Our empirical findings show that the single-instance approach imposes markedly lower overhead than the other two. However, it entails a tradeoff in workload isolation owing to tenants sharing the compute nodes. There are nonetheless means to compensate for such weakened isolation, and we describe how our framework does so. The framework is publicly available as liberally-licensed, free, open-source software that extends Kubernetes. It is in production use within the EdgeNet testbed for researchers.


Introduction
Multitenancy is what makes cloud computing economical. From a single bare metal machine, a cloud provider can offer resources to multiple tenants, where each tenant is a customer that contracts for cloud services on behalf of one or more users. These resources are, for example, virtual machines in the Infrastructure as a Service (IaaS) service model, or tools for application development and deployment in the Platform as a Service (PaaS) model. Tenants that are prepared to accept less than perfect isolation from other tenants benefit from the lower prices that providers can offer thanks to more efficient use of the providers' hardware.
But, despite the greater efficiency of containers as compared to virtual machines, and despite recent improvements in ensuring isolation between containers, the cloud industry does not yet propose a multitenant Containers as a Service (CaaS) offering that takes advantage of these advances. What passes for CaaS today is in fact multiple side-by-side instances of single-tenant clusters of compute nodes, each cluster having its own container orchestration control plane and its own data plane, and isolated from other clusters through the use of virtual machines. For example, automated services such as AWS Fargate and Google Autopilot, which manage cluster capacity on behalf of a user who is deploying containers to the cloud, do not do away with virtual machine overhead and do not improve control plane efficiency. In brief, although CaaS ought to offer greater efficiency than IaaS, it does not yet do so.
With the emergence of the edge cloud, such efficiency will take on greater importance because resources will typically be more constrained than in the cloud. As part of the vision for 5G, it is projected that mobile network operators will become edge cloud providers, offering up compute resources from servers that are colocated with their wireless base stations [25,41], at what is being termed the 'service provider edge' [4,5,7]. These operators are also expected to offer resources from their peering sites, or the 'regional edge' [6]. Such edge cloud instances will be data centers that are geographically dispersed to be closer to the users of cloud services or to edge devices than are the centralized data centers that dominate the present-day cloud. With fewer resources, an edge cloud will not scale as elastically as a cloud, yet it must be prepared to receive a large number of workloads that have been deployed to serve local users and devices.
The problem that we aim to resolve is how to move CaaS multitenancy away from a high-overhead multi-instance model to a more efficient one that will be suitable for the resource-constrained edge cloud. In the solution that we propose, multiple tenants share a single instance of the control plane, which is used to deploy containers that coexist within a single instance of a shared cluster, while still allowing tenants to enjoy isolation from each other as well as the opportunity to customize their resources.
Our multitenancy solution has the particularity that it is designed to work in a federated environment. Today, a cloud customer typically deploys their workloads to a single cloud provider, but if they want to extend those workloads to be close to users and edge devices, a customer will also need to obtain resources from multiple edge cloud providers [18]. Doing so will be easiest for a customer if those providers are federated, meaning that the customer will be able to contract with just one cloud or edge cloud provider and to deploy its workloads through a single interface offered by that provider [60,36], with the provider managing the propagation of the workloads to the other providers. Accordingly, our multitenancy solution ensures that each cloud provider can accept tenant workloads that originate from other providers.
As we use the term, a multitenancy framework consists of a set of rules that govern how a cloud provider offers resources to its tenants such that each tenant can use their portion of the resources and configure those resources to meet their needs without regard for the presence of the other tenants. The rules address the creation of isolated environments, resource sharing, and user permission management. They determine which rights over resources are given to which tenants, under which conditions, and how those rights affect the relationships of other tenants with the same resources. The term equally well refers to the set of entities that are coded to enforce these rules.
In this paper, we describe our framework, argue for it, and show how we have implemented it in EdgeNet, a production edge cloud. What we henceforth refer to as the EdgeNet multitenancy framework is part of the larger EdgeNet code base, which is free, liberally-licensed, open-source software that enables CaaS deployments to the edge cloud. It is designed as a set of extensions to the Kubernetes container orchestration system, which is itself free, liberally-licensed, and open source. Our reasoning in building upon Kubernetes is that cloud customers will want to continue using this familiar system, which is today's de facto industry standard container orchestration tool.
As Kubernetes does not natively support multitenancy, others have identified the need for such an extension and have developed their own Kubernetes multitenancy frameworks. (See Table 2 for details.) We will show that the existing frameworks, while no doubt fine for the cloud, will not be suitable for CaaS in the edge cloud. There are a few prior studies concerning these frameworks [62,33,27], but this is the first paper to situate them, and EdgeNet, within the existing scientific literature on cloud multitenancy.
Our contributions, and the sections of the paper that address them, are as follows:
• We look at Kubernetes multitenancy frameworks through the lens of the scientific literature on cloud multitenancy and, in Sec. 3.1, we provide a novel classification of these frameworks into three main approaches: multi-instance through multiple clusters, multi-instance through multiple control planes, and single-instance native.
• Based upon our analysis of the literature, we distill out four features that we believe will promote a future in which CaaS can thrive, in particular at the network edge, and we describe how we have incorporated these features into the EdgeNet multitenancy framework: consumer and vendor tenancy in Sec. 3.3, tenant resource quota for hierarchical namespaces in Sec. 3.4, variable slice granularity in Sec. 3.5, and federation support in Sec. 3.6.
• We have implemented the EdgeNet multitenancy framework as a free and open-source extension to Kubernetes, and have put it into production as the EdgeNet testbed, as described in Sec. 5.
• Our EdgeNet multitenancy framework constitutes a prototype for the federation of clouds and edge clouds, and we provide a vision in Sec. 5.2.4 for the future development of a full federation framework.
• We benchmark the three multitenancy framework approaches using a representative implementation for each approach, and we reveal their pros and cons from a tenancy-centered edge computing perspective in Sec. 6.
The paper is structured as follows. Sec. 2 provides background on cloud multitenancy, the challenges that it presents, and the ways in which those challenges have been addressed for edge computing. Sec. 3 describes related work in the specific area of Kubernetes multitenancy frameworks. Sec. 4 discusses design principles for a CaaS multitenancy framework, and Sec. 5 presents the architecture of the EdgeNet multitenancy framework that we have developed. In Sec. 6, we benchmark our framework against representative frameworks for two alternate approaches, and we point to our future work in Sec. 7.

Rationale
We envisage a future in which tenants deploy services on a continuum of computing resources from cloud to edge cloud, about which we make the following assumptions:
• Edge clouds are ubiquitous, scattered across the world [26].
• Compute and storage resources are constrained in the edge cloud, making it harder to scale tenant workloads there than in the cloud.
• Tenants value the ability to easily move their workloads from one edge cloud cluster to another and between the edge cloud and the cloud.
• Each tenant's user database is maintained by that tenant. User management is not a functionality provided by the compute clusters.
• Tenants and their users are unreliable. They may purposely or accidentally harm each other, or the compute cluster, or themselves.
We conceive of our proposed architecture based on these assumptions, for which we provide rationale in the following subsections: the necessity of a novel Kubernetes CaaS multitenancy framework (Sec. 2.1) that takes container-specific security and performance considerations into account (Sec. 2.2), and that enables federation across edge clouds and control over slice granularity at the edge (Sec. 2.3).

Multitenancy
It is an often-repeated commonplace that cloud computing is not just "using someone else's computer", as the cloud goes beyond this to promise more flexible, convenient, and cost-effective access to computing resources. Multitenancy is required to realize this promise. The NIST Definition of Cloud Computing [45] mentions resource pooling as one of the "five essential characteristics" of cloud computing, saying that: "The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand."

Preprint
Multi-tenancy is a property of a system where multiple customers, so-called tenants, transparently share the system's resources, such as services, applications, databases, or hardware, with the aim of lowering costs, while still being able to exclusively configure the system to the needs of the tenant.
Multitenancy is a standard feature of the three established cloud service models, Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) [48,14]. If CaaS is to provide the promised benefits of the cloud and the edge cloud at scale, then it requires an efficient multitenancy model as well. We further discuss why such efficiency is required for CaaS to run for clouds and edge clouds in Sec. 3.1, and the results of our experiments in Sec. 6 support our contention.
Multitenancy has a broad meaning and can be enabled at different cloud abstraction layers using different techniques to share resources among multiple customers. This paper discusses multitenancy in the context of CaaS and methods for accomplishing it. CaaS offerings are mostly based upon Kubernetes [62], so we focus on the ways in which it can serve multiple customers using multitenancy. To be clear with respect to the discussion of multitenancy in the Kubernetes documentation, which describes how a tenant can deploy an application in a Kubernetes cluster to serve its own multiple customers using a multi-tenant model: that is also multitenancy, but at the application layer, and more precisely at the SaaS layer; it is not multi-tenant CaaS, which is what this paper considers.

Security and Performance
While multitenancy is an essential cloud feature, it raises security issues that researchers have been considering for over a decade [16], notably with respect to the IaaS service model [23]. For example, potential users are concerned about the security of their data when multiple tenants share the same infrastructure [12], and the resulting lack of trust can hamper cloud adoption [48].
Virtualization is used to isolate tenants from one another, but containers tend to offer weaker isolation [13], which introduces new concerns for multitenant container platforms [49], such as information leakage between colocated containers [30]. In general, Sultan et al. [51] have identified four categories of threat in containerized environments: malicious applications within containers, one container harming another, a container harming its host, and a container within an untrustworthy host.
In Kubernetes, container security must be considered in the context of the pod, which is that system's smallest deployable unit, consisting of a set of one or more containers. The Kubernetes pod security standards define three profiles: Privileged, Baseline, and Restricted. However, these standards address a single-tenant environment, and so overlook some of the multitenant security issues mentioned above.
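For reference, these profiles are applied on a per-namespace basis through Pod Security Admission labels; a minimal sketch follows, in which the namespace name is purely illustrative:

```yaml
# Enforce the Baseline profile for all pods in this namespace,
# and emit warnings for pods that would not also satisfy the
# stricter Restricted profile.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a        # illustrative tenant namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```

Because the labels attach to a namespace rather than to a tenant, a single-tenant deployment can apply them cluster-wide, whereas a multitenant cluster must manage them per tenant.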
We therefore see the need for a solution that diminishes the security risks of running colocated containerized workloads. In order to be of interest for CaaS, such a solution needs to maintain the performance advantage of containers over virtual machines.

Edge Computing, Federation, and Slicing
As described in the Linux Foundation's 2021 State of the Edge report [5], cloud-like infrastructure is being developed at the network edge in order to serve edge devices that produce bandwidth-intensive and/or latency-sensitive workloads. ETSI's multi-access edge computing (MEC) architecture [3] provides a standard structure for making servers at cellular operators' radio access networks available for the deployment of such workloads by third parties. That is, the emerging edge cloud will be a multitenant cloud [11].
Since the MEC architecture anticipates that workloads may be containerized, we argue that there is a need for a multitenant CaaS framework that meets the specific requirements of the network edge. The prime edge requirements that we identify are federation and variable slice granularity.
MEC facilities will be provided by multiple operators. Just as a mobile phone user is able to roam from one regional operator to another today, a mobile edge device will need to be able to connect to different operators and find its containerized edge services spun up near each base station to which it connects. And ETSI describes a requirement for edge devices to be able to engage in low-latency interactions with each other when they are near each other, even if they are connected to different operators' base stations. ETSI uses the term federation to describe such interoperability scenarios. A CaaS multitenancy framework for the edge will therefore need to span multiple operators' edge clouds. That is, the framework will not just be multitenant, it will also be multi-provider, with providers furnishing geographically dispersed heterogeneous resources. Those who deploy CaaS services to a multi-provider environment will be in need of a unified interface that simplifies the task of moving workloads between remote clusters that are owned by different providers [60].
In addition, as anticipated by the Next Generation Mobile Networks Alliance in 2016 [1], operators will have to support third party services that put a much more heterogeneous set of requirements on their networks than is currently the case. Extreme requirements are incompatible with a one-size-fits-all approach. The way that MEC handles this is through slicing [41,2,61], which allows network and compute resources to be allocated and custom-configured to meet the specific needs of individual services. In the CaaS context, we argue that no single slice granularity will meet the full range of needs. The standard CaaS sub-node-level slicing, in which containers are provided from a shared resource pool on individual nodes, while no doubt appropriate for many services, will not be appropriate for those that are the most sensitive to performance variation. For those services, node-level slice granularity will be needed.
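In Kubernetes terms, node-level slice granularity can be approximated with a taint on the dedicated nodes and a matching node selector and toleration on the tenant's pods. The sketch below assumes a hypothetical label/taint key, `tenancy.example.com/dedicated`, and a placeholder image; it is not the mechanism of any particular framework:

```yaml
# Sketch: pin a performance-sensitive pod to nodes dedicated to one
# tenant. The nodes would carry both the label and a NoSchedule taint
# with the key "tenancy.example.com/dedicated" (hypothetical), so that
# other tenants' pods, lacking the toleration, cannot land on them.
apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive-app
  namespace: tenant-a                        # illustrative
spec:
  nodeSelector:
    tenancy.example.com/dedicated: tenant-a  # steer onto dedicated nodes
  tolerations:
  - key: tenancy.example.com/dedicated       # permit scheduling despite the taint
    operator: Equal
    value: tenant-a
    effect: NoSchedule
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
```

Sub-node-level slicing, by contrast, needs no such pinning: pods from different tenants simply share the node under the container runtime's isolation.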

Related Work
Someone who wishes to deploy containerized services to the cloud has a choice of open source container orchestration systems with which to do so, four of the most prominent being [10]: Apache Mesos, Docker Swarm, Kubernetes, and Rancher's Cattle. We focus on Kubernetes, as it has in recent years become the de facto industry standard. All of the major cloud providers offer Kubernetes-based CaaS to their customers (see Table 1). And Datadog, a company that provides cloud monitoring and security services, reports [22] that nearly 50% of their customers that deploy containers use Kubernetes to do so, this having increased about 10 percentage points over the past three years.
In the commercial cloud offerings, each customer gets their own Kubernetes cluster, which is a straightforward form of multitenancy. Some providers add on more advanced features. For example, an Amazon EKS customer can use a service called Fargate to manage the capacity of their Kubernetes cluster, adding and removing nodes as they need to. Similarly, a Google Cloud customer can hand over control of their cluster capacity management to a service called Autopilot, which does the same thing for them automatically.
While Kubernetes multitenancy in this form might be fine for large centralized data center clouds, there are drawbacks when looking to an edge cloud future. Setting up a separate cluster for each tenant is far from the most efficient approach, as we will show in Sec. 6. Resources are liable to be underused, which will be of particular concern in the smaller data centers that we can anticipate at the edge. And when tenants need to be repeatedly instantiated as their workloads migrate, for instance at one roadside cabinet after another to serve vehicles that are moving along a highway, spinning up an entire cluster for each arrival of a tenant risks taking too much time. We anticipate that lighter forms of multitenancy will be needed: ones that allow more efficient resource sharing, even at some cost in workload isolation, and that allow more rapid creation and deletion of tenants.

Figure 1: Multitenancy Approaches. The multi-instance approaches provide each tenant with its own instance of the control plane (or, at the least, of certain control plane components) and, optionally, its own set of worker nodes, ensuring better isolation between tenants. The single-instance native approach caters to multiple tenants through a single control plane, while having them share the resources of a single set of worker nodes, thereby providing improved performance.
Proprietary approaches to enabling multitenancy risk being a hindrance in a federated environment, in which a single customer might deploy their workloads to many edge clouds, each owned by a different operator. If all of the operators use a common open-source multitenancy framework, it will promote interoperability.
Starting in 2019, as Table 2 shows, a fair number of open-source Kubernetes multitenancy frameworks have been developed. Some, such as Virtual Kubelet [56] and frameworks that are derived from that code, take the same starting point as the commercial services, which is each tenant having its own cluster. But others offer worker nodes to tenants out of a shared cluster, which is more resource efficient. And some of these serve multiple tenants out of a shared control plane, which is yet more efficient.
The Kubernetes community has recognized the importance of developing such frameworks, as evidenced by the fact that one of the Kubernetes working groups, of which there are just five, is devoted to multitenancy. Both of the frameworks that this working group supports take the shared cluster approach. VirtualCluster (VC) [57] offers a separate control plane to each tenant, whereas the Hierarchical Namespace Controller (HNC) [54] shares the control plane among tenants. These two frameworks, along with the others shown in Table 2, comprise the essential related work for our own EdgeNet framework.

Multitenancy Approach
The scientific literature describes two approaches to enabling CaaS multitenancy: multi-instance [34], and single-instance native [37]. We ourselves further distinguish between multi-instance through multiple clusters and multi-instance through multiple control planes, making three approaches altogether, as shown in Table 2. The approaches are illustrated in Fig. 1 and we describe them as follows:

Multi-instance through multiple clusters
Fig. 1a illustrates the multi-instance through multiple clusters approach, in which each tenant receives its own cluster.
The proprietary commercial CaaS offerings (see Table 1) are structured in this way, but there is no open-source framework that is structured likewise; cluster deployment tools, such as Kubespray, do not address multitenancy.
There is, however, a set of open-source Kubernetes frameworks that do address multitenancy for the case in which there are already multiple tenants, each of which possesses one or more of their own clusters, even if these frameworks do not spin up or spin down the clusters on demand. These frameworks, based on the code of Virtual Kubelet [56], a sandbox project of the Cloud Native Computing Foundation, are designed to allow workloads from one cluster to be deployed to another cluster. Their primary focus is on cross-cluster deployment in general, and multitenancy arises only in the specific case of clusters belonging to different tenants, but since they do enable this sort of multitenancy, we examine the advantages and disadvantages of doing so. As illustrated in Fig. 2, Virtual Kubelet establishes a connection from one cluster to another by leveraging Kubernetes' kubelet API. A kubelet is the agent that runs on each node of a Kubernetes cluster in order to manage the life cycles of pods, which are groups of containers associated with a workload. By implementing the kubelet API, a virtual kubelet masquerades as the kubelet of an individual node, but is in reality a stand-in for the remote cluster. It, in turn, uses the remote cluster's control plane API to deploy and manage workloads on that cluster.
Although we might think of this as a small scale form of federation, the Virtual Kubelet authors expressly say that it "is not intended to be an alternative to Kubernetes federation," by which we understand a full-featured and scalable federation. Similarly, as we have mentioned, Virtual Kubelet is not primarily designed for multitenancy. By contrast, EdgeNet is designed precisely for federation and multitenancy. While similar to Virtual Kubelet in the sense that EdgeNet introduces agents to transfer workloads from one cluster to another, EdgeNet avoids the overhead associated with each tenant having its own cluster. This is because, in EdgeNet, it is the cloud and edge cloud providers that possess the clusters. Provider ownership of the clusters also means that an EdgeNet tenant can rely upon a provider to ensure the privacy of its workloads, rather than relying upon another tenant to do so.
Liqo [55], Admiralty [53] and tensile-kube [52] are all based on the Virtual Kubelet code. Liqo is one of the few frameworks to date to be the subject of a peer-reviewed scientific paper [35]. The authors are careful to state that some of the issues that arise from multitenancy, such as the manner in which the workloads of different tenants in the same cluster are isolated from each other, remain to be addressed. Sec. 4.2 describes our proposed resolution for this problem.

Multi-instance through multiple control planes
In the multi-instance through multiple control planes approach, all tenants are supported by a single cluster, but each tenant acquires its own control plane within that cluster, as illustrated by Fig. 1b. One or more nodes are dedicated to supporting the tenant control planes, and, within each control plane node, containers, or containers grouped into pods, isolate one tenant's control plane from another's. (Isolating control planes from each other through containers imposes lower overhead than doing so with VMs.) There are variants to this approach, in which some control plane components, like the scheduler, are shared among tenants, while others, such as the API server and database, are duplicated so as to provide one instance to each tenant. In any case, this approach gives each tenant a full view of its own control plane, which it can use for customizing its own environment.
Frameworks that follow this approach differ in how they isolate tenant workloads from each other. If tenants share a common set of worker nodes, as they do in VirtualCluster, k3v, and vcluster, the degree of isolation will depend upon the container runtime used to run the containers. If each tenant acquires its own dedicated set of worker nodes, as happens in Kamaji, then there is better isolation.
VirtualCluster [57] is one of the two open-source frameworks incubated by the Kubernetes Multi-Tenancy Working Group. It virtualizes the control plane components per tenant, with the exception of the scheduler. For isolation between the worker nodes of different tenants, it uses Kata containers [47].
A drawback of VirtualCluster is the cost of providing separate control plane components per tenant. In a peer-reviewed scientific paper [62], the VirtualCluster authors state that this cost is a blocking point when more than a thousand tenants are in the cluster. By contrast, EdgeNet's shared control plane approach allows far more tenants to be admitted into a given cluster, and allows more tenants to arrive within a short period of time, as we show in Sec. 6. In a federated edge cloud environment, where we anticipate limited resources, large numbers of workloads, and the rapid propagation of workloads from one cluster to another, the shared control plane approach has a clear advantage. In fairness to VirtualCluster, it is designed for a different sort of environment.
In Rancher's k3v [46], the control plane is virtualized on a per-tenant basis, similar to VirtualCluster, but unlike VirtualCluster it does not provide data plane isolation. Exceptionally among the frameworks, k3v does not provide a mechanism for managing tenant resource quotas, as we mention in Sec. 3.4.
vcluster [43], not to be confused with VirtualCluster, is one of two open-source frameworks developed by Loft, the other being kiosk, which takes the single-instance native approach. In the control plane, each vcluster has a separate API server and data store. Workloads created on a vcluster are copied into the namespace of the underlying cluster to be deployed by the shared scheduler.
Kamaji [21] is one of two open-source frameworks developed by Clastix Labs, the other being Capsule, which takes the single-instance native approach. Kamaji enables Kubernetes multitenancy by running tenant control planes as pods on a common cluster, known as the admin cluster. Each tenant receives its own dedicated worker nodes. Isolation between worker nodes on the same machine is enabled through VMs, which introduces much higher overhead than would isolation through containers.

Single-instance native
In the single-instance native approach, all tenants share a single control plane and a common set of worker nodes, as illustrated in Fig. 1c. Control plane isolation is ensured through a logical entity, such as Kubernetes namespaces, that introduces negligible overhead, but provides less control plane isolation compared to a multi-instance approach. Workload isolation depends upon the container runtime, as it does for the multi-instance through multiple clusters approach and for the multi-instance through multiple control planes approach.
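In plain Kubernetes, the logical entity in question is a namespace paired with a ResourceQuota; a minimal sketch of per-tenant isolation in a shared cluster follows (the tenant name and limits are illustrative):

```yaml
# One namespace per tenant, with a quota capping that tenant's
# share of the common worker-node pool.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-b                  # illustrative tenant namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-b-quota
  namespace: tenant-b
spec:
  hard:
    requests.cpu: "4"             # at most 4 CPU cores requested in total
    requests.memory: 8Gi          # at most 8 GiB of memory requested
    pods: "20"                    # at most 20 pods in the namespace
```

A multitenancy framework builds on such primitives, adding the tenant abstraction, naming, and permission rules that vanilla namespaces lack.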
This approach demands significant coding work to give each tenant an experience akin to using their own separate cluster.
The single-instance native approach's scaling advantage is illustrated by a scenario examined by Guo et al. in which it supported thousands of tenants, as opposed to just dozens for a multi-instance approach [34]. It also has lower operational costs [19]. And it is lighter weight for workload mobility, allowing containers to be spun up and spun down with less overhead than in a multi-instance approach, as we show through benchmarking in Sec. 6. For these reasons, we have adopted the single-instance native approach for EdgeNet.
The Hierarchical Namespace Controller (HNC) is one of the two open-source frameworks incubated by the Kubernetes Multi-Tenancy Working Group, the other being VirtualCluster. HNC takes the single-instance native approach, whereas VirtualCluster takes the multi-instance through multiple control planes approach. HNC uses a hierarchical namespace structure in order to enable multitenancy. Functionalities such as policy inheritance that allow objects to be replicated across namespaces are built upon this hierarchy.
Aspects of this work that have inspired our own multitenancy framework are its hierarchical namespace structure and the terminology that it employs. We have also designed our own framework to avoid what we perceive to be its defects:
• HNC does not enforce unique names for namespaces, opening the possibility for namespace conflicts.
• HNC's quota management system is not aligned with the hierarchical namespace structure so as to limit a child's quota based upon its parent's quota, though community documentation states that work is underway to enable this.
• HNC's quota management system allows namespaces without quota to coexist alongside namespaces that have quotas, which puts those quotas at risk (see Fig. 4b and discussion in Sec. 3.4).
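For concreteness, HNC models its hierarchy with subnamespace anchors: creating an anchor in a parent namespace causes a child namespace to be created beneath it, inheriting propagated objects such as RBAC policies. A minimal sketch, with illustrative names (the API group/version shown is the one documented for recent HNC releases):

```yaml
# Creating this anchor in the parent namespace "tenant-a" causes HNC
# to create a child namespace named "team-frontend" under it; objects
# that HNC is configured to propagate (e.g., Roles and RoleBindings)
# are then copied from parent to child.
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: team-frontend
  namespace: tenant-a
```

Note that nothing in this mechanism requires the child to carry a quota, which is the root of the risk described in the last bullet above.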

Capsule [20] is one of two open-source frameworks developed by Clastix Labs, the other being Kamaji, which, as we have seen, takes the multi-instance through multiple control planes approach. Capsule is one of two frameworks that adopt flat namespaces (see Sec. 3.2) as their customization approach, the other being kiosk. It gives a tenant the possibility of creating resources that can be replicated across a collection of the tenant's namespaces, and it provides the cluster administrator with the possibility to copy resources among namespaces of various tenants. Although this approach facilitates the management of a tenant's multiple namespaces, thereby easing management complexity, it may not be fully scalable for extensive tenant settings, as we discuss in the following subsection. Capsule aims at allowing an organization to share a single cluster efficiently, and hence does not account for the needs of the envisaged edge computing infrastructure.
kiosk [42] is one of two open-source frameworks developed by Loft, the other being vcluster, which takes the multi-instance through multiple control planes approach. kiosk uses the flat namespaces approach, as does Capsule, for customization. A tenant is represented by an abstraction called an account, and an account can create a namespace through an entity called a space. Each space is strictly tied to only one namespace. This framework permits the preparation of templates that can be employed during namespace creation, facilitating the automated provisioning of resources as defined within these templates in the designated namespaces. Despite alleviating management complexity, this approach still shares Capsule's limitations stemming from flat namespaces. Multi-cluster tenant management is listed on the project's roadmap, but the project does not seem to be under active development, as the latest commit to its main branch was around a year ago.
Centaurus's Arktos [17] takes the single-instance native approach to multitenancy. As discussed in Sec. 3.2, it is the only framework that takes a tenant-wise abstraction approach to enabling customization. Arktos achieves this through API modifications, 26 which may require a significant amount of effort to keep aligned with the upstream Kubernetes control plane code. Its architecture primarily consists of three main software entities: an API gateway that receives tenant requests, a Tenant Partition (TP) that gives each tenant the illusion of acquiring an individual cluster, and a Resource Partition (RP) that operates on resources like nodes [27]. Although not all of its features are precisely documented, based upon our reading of their documentation, we consider that this solution addresses some federation aspects, such as scalability and cloud-edge communication. They provide a vision of consolidating 300,000 nodes belonging to different resource partitions into a single regional control plane. However, the main branch of their project repository has not received commits for around a year, implying that it may not currently be under active development.

Customization Approach
Containers-as-a-service cannot scale to a large number of tenants if the mechanism by which each tenant obtains the environments in which to deploy its workloads, and configures each environment to meet the needs of its workload, requires manual intervention at every stage by the cloud administrator. Each tenant should have a degree of autonomy to: create and delete the environments in which its workloads can be deployed; obtain resource quotas and assign them to those environments; and designate users for the environments, assign roles to those users, and grant permissions based upon those roles. Some combination of automation of these processes and delegation of administrative responsibility is needed to enable that autonomy. In Table 2, we call the way in which a multitenancy framework does this its Customization Approach.
By giving each tenant its own control plane, which the tenant's administrator can use to configure its environments as they wish, the multi-instance frameworks provide the greatest flexibility. We call this approach the Full Control Plane View. As Table 2 shows, it is offered by the frameworks that follow the multi-instance through multiple clusters approach (Virtual Kubelet based frameworks), since each cluster has its own control plane, and, of course, by the multi-instance through multiple control planes approach (VirtualCluster, k3v, vcluster, and Kamaji).
Some of these frameworks (Kamaji and, partially, Virtual Kubelet based frameworks) allow additional server environment configuration to take place by enabling SSH access to worker nodes, and this is noted as Data Plane customization in the comparison table. In Virtual Kubelet based frameworks, administrators of a tenant that owns a cluster can typically access the worker nodes in that cluster by SSH, but not the ones in other clusters, and this is classified as Partial in the comparison table.
In frameworks that follow the single-instance native multitenancy approach, some extensions to Kubernetes are required in order to safely enable customization. This is because in standard Kubernetes, giving a tenant's administrator the permissions necessary to configure their own environments means giving them the ability to configure other tenants' environments as well. Since there is no control plane isolation mechanism other than namespaces, an administrator who has permission to create, modify, and delete namespaces can do so freely across the board. Rather than hand out such permissions, a single-instance customization approach needs to provide one or more custom resources that a tenant's administrator can access, and the controllers of those resources will ensure safety while configuring the tenant environment on the administrator's behalf.

Fig. 3: The hierarchy captures relationships between the namespaces: a and b are the core namespaces belonging to two tenants, whereas the others belong to sub-trees of those core namespaces. For example, aa and ab are subnamespaces of a. They belong to the same tenant as a and they may inherit a portion of that tenant's resource quota, user roles, and the permissions that accompany those roles. Likewise, aba and abb belong to the same tenant as ab and may inherit from it. Management tasks such as the approval of new namespaces and the modification of quotas, users, etc., can be delegated to each tenant's administrators and, further down the hierarchy, to sub-tree administrators. The flat structure does not express these relationships. For example, no mechanism provides for aa to inherit from a. If they are to share configuration parameters, this needs to be expressly requested by the common administrator of the two namespaces. There are efforts to solve this issue through configuration templates applied to multiple namespaces. Nevertheless, as the number of namespaces belonging to a tenant grows, so does the management complexity for that tenant's root admin, who finds it challenging to keep track of independent namespaces.
Among the single-instance frameworks, Arktos employs the most elaborate customization approach: introducing a new abstraction, beyond namespaces, by which to isolate tenants from one another in the control plane. As this abstraction is meant to capture the notion of a tenant, we refer to it in Table 2 as a Tenant-wise Abstraction. Our concern about this approach is the amount of development work that it might entail, both to develop the new abstraction and to maintain its compatibility with the upstream Kubernetes control plane code.
Instead of introducing an entirely new abstraction, frameworks can build on Kubernetes' existing control plane isolation mechanism: namespaces. We identify two ways of doing so. The simpler one, followed by Capsule and kiosk, is to follow the standard Kubernetes approach, in which each namespace exists independently of every other namespace. This is described as Flat Namespaces in Table 2.
Another way, but one that requires more development work, is to provide controllers that keep track of the relationships between namespaces, such as several namespaces all belonging to the same tenant. Since the two frameworks that do this, EdgeNet and HNC, do so by maintaining a hierarchical structure through which to track the relationships, we identify this approach as Hierarchical Namespaces in Table 2.
Fig. 3 compares the two namespace structures. A hierarchical structure permits configurations to be inherited and allows configuration tasks to be delegated, offloading work from administrators at the top of the hierarchy to administrators further down. The prime disadvantage of a flat namespace structure is that, even with automation, the tenants' root admins remain heavily solicited. EdgeNet adopts a hierarchical namespace structure, which is implemented by the architecture described in Sec. 5.1 and Sec. 5.2.
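The inheritance and delegation that a hierarchical structure affords can be sketched in a few lines of Python. This is a hypothetical model for illustration only, not EdgeNet's implementation; the class, role, and namespace names are invented, with the namespace labels taken from Fig. 3.

```python
# Hypothetical sketch: a hierarchical namespace tree in which a subnamespace
# inherits configuration from its parent, in contrast to a flat structure
# where each namespace stands alone.
class Namespace:
    def __init__(self, name, parent=None, roles=None):
        self.name = name
        self.parent = parent
        self.local_roles = set(roles or [])
        self.children = {}
        if parent is not None:
            parent.children[name] = self

    def roles(self):
        """Effective roles: local roles plus everything inherited up the tree."""
        inherited = self.parent.roles() if self.parent else set()
        return self.local_roles | inherited

    def admins_scope(self):
        """A sub-tree administrator's scope: this namespace and all descendants."""
        scope = [self.name]
        for child in self.children.values():
            scope.extend(child.admins_scope())
        return scope

# Tenant core namespace "a" with subnamespaces, as in Fig. 3.
a = Namespace("a", roles={"tenant-admin"})
aa = Namespace("aa", parent=a)
ab = Namespace("ab", parent=a, roles={"ab-admin"})
aba = Namespace("aba", parent=ab)

print(aba.roles())        # aba inherits roles from both ab and a
print(ab.admins_scope())  # delegation: ab's admin manages ab and aba only
```

In a flat structure, by contrast, `roles()` would return only the local set for every namespace, and any sharing of configuration would have to be arranged explicitly by a common administrator.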

Consumer and vendor modes
Cloud services generally support two types of tenancy: Consumer Mode, in which the tenant is the end user of the resources; and Vendor Mode, in which the tenant can resell access to the resources to others.

The type of tenancy affects the visibility that the manager of a tenant has into that tenant's isolated environments. For a consumer tenant, these environments are generally termed workspaces, and they are created to be used by the members of that tenant's group or organization. A manager of a set of workspaces needs visibility into who the users of each workspace are, and needs fine-grained control over the rights of those users with respect to those workspaces. But a vendor tenant manages a set of subtenant environments that are destined for its own customers. A customer expects a certain level of privacy, with the users and user rights of their subtenant environment remaining hidden from the vendor.
As shown in Table 2, all of the CaaS multitenancy frameworks that we have studied support consumer tenancy, but only EdgeNet and Virtual Kubelet based frameworks support vendor tenancy. We expect that the same commercial logic that has driven other cloud service models towards both forms of tenancy will lead to support for vendor tenancy being generalized for containers-as-a-service.
In order to enable any sort of tenancy, a system must support authorization and isolation mechanisms. It requires greater expressiveness to support both consumer and vendor tenancy than it does to support consumer tenancy alone. Such expressiveness, for example, allows a tenant to create a subtenant for the purpose of reselling its own allocated resources. This can be done in different ways depending upon the multitenancy approach:
• Multi-instance through multiple clusters: A tenant who owns a cluster can open this cluster for use by one of its subtenants. Because of the ease of doing so, we indicate Virtual Kubelet based frameworks as offering support for a vendor mode, even though their documentation does not explicitly mention this. However, since such an approach requires a cluster per tenant, it introduces high overhead, as our benchmarking shows in Sec. 6.
• Multi-instance through multiple control planes: A tenant could create a subtenant by running the subtenant's control plane instance on top of its own control plane instance. None of the frameworks that we have studied currently does this.
• Single-instance native: A tenant can create a subtenant assigned private namespaces that the tenant alone is authorized to remove.
EdgeNet, having adopted the single-instance native approach to multitenancy, builds consumer and vendor modes on top of its hierarchical namespace structure. The implementation is described in Sec. 5.2.1 and illustrated in Fig. 10.
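The visibility difference between the two modes can be modeled compactly. The following is an illustrative sketch with invented names, not EdgeNet's code: a consumer tenant's manager sees the users of its workspaces, while a vendor tenant's manager cannot see inside its subtenants.

```python
# Hypothetical model of consumer vs. vendor visibility rules.
WORKSPACE, SUBTENANT = "workspace", "subtenant"

class Environment:
    def __init__(self, name, mode, users):
        self.name, self.mode, self.users = name, mode, users

def visible_users(environments):
    """Users that a tenant manager may list across its child environments."""
    view = {}
    for env in environments:
        if env.mode == WORKSPACE:
            view[env.name] = list(env.users)   # full visibility for workspaces
        else:
            view[env.name] = "<hidden>"        # subtenant privacy is preserved
    return view

consumer_envs = [Environment("aa", WORKSPACE, ["alice", "bob"])]
vendor_envs = [Environment("ba", SUBTENANT, ["carol"])]

print(visible_users(consumer_envs))  # {'aa': ['alice', 'bob']}
print(visible_users(vendor_envs))    # {'ba': '<hidden>'}
```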

Tenant resource quota allocation
Resource quotas are popular in commercial settings, where they provide a basis for providers to bill their customers. In situations where resources are constrained, quotas are also a simple means by which to ensure an equitable allocation of those resources. Quotas are commonly used in the cloud, and Kubernetes supports them by providing a mechanism for allocating quotas to namespaces. 27 The Kubernetes mechanism is conceived for the relatively small-scale scenario of a single organization using a cluster, with an administrator who manually sets resource quotas per namespace so as to share out the resources among the organization's different teams. A multitenancy framework that is built on Kubernetes needs to automate this process in order to scale.
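For reference, the per-namespace quota mechanism that standard Kubernetes provides is a ResourceQuota object; below, one is expressed as a Python dict for illustration. The object structure follows the Kubernetes API; the names and quota values are arbitrary examples.

```python
# A standard Kubernetes ResourceQuota, expressed as a Python dict.
# An administrator applies one such object per namespace.
resource_quota = {
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {"name": "team-quota", "namespace": "team-a"},
    "spec": {
        "hard": {
            "requests.cpu": "4",
            "requests.memory": "8Gi",
            "pods": "20",
        }
    },
}

# A multitenancy framework must automate the creation and maintenance of many
# such objects, one per tenant namespace, instead of relying on manual
# per-namespace administration. A trivial generator might look like this:
def quota_for(namespace, cpu, memory, pods):
    q = dict(resource_quota)
    q["metadata"] = {"name": f"{namespace}-quota", "namespace": namespace}
    q["spec"] = {"hard": {"requests.cpu": cpu,
                          "requests.memory": memory,
                          "pods": str(pods)}}
    return q

print(quota_for("tenant-b", "2", "4Gi", 10)["metadata"]["name"])
```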
As Table 2 shows, all of the Kubernetes multitenancy frameworks that we have studied offer a mechanism for managing tenant resource quotas, with the exception of k3v. We classify k3v in this way as we consider its mechanism to be incomplete. In that framework, which is no longer under active development, a cluster administrator can set a resource quota in the host namespace of a virtual cluster, but the tenant will not be aware of it.
In the edge cloud, we can expect resources to be more constrained than in the cloud, and so the need for a quota allocation mechanism is even stronger. Since our EdgeNet framework is designed for the edge cloud as well as the cloud, such a mechanism is a required feature of the framework.
Having made the design decision to use a hierarchical namespace structure, our quota management system needs to follow that structure. This means building in dependencies between quotas: as shown in Fig. 4a, at each node in the namespace tree, quota must be shared out between the parent namespace located at that node and the sub-trees that are rooted at the children of that node. EdgeNet's quota implementation is more thoroughly described in Sec. 5.3.
The only other framework that uses hierarchical namespaces, HNC, also allows quota to be shared out hierarchically.
The mechanism employed in doing so relies on Google Cloud's Hierarchy Controller28 as its foundation. But since it does not require that a quota be attributed to each namespace, it can end up constraining some namespaces while not constraining others, opening the possibility for a sub-tree to not enjoy the full resource quota that it has been allocated, as shown in Fig. 4b. In EdgeNet, quotas apply either to the entire tenant namespace hierarchy or not at all, so this problem cannot arise.

Fig. 4: In EdgeNet (a), the quota of 15 must also be distributed within the sub-tree rooted at ab. For example, here, 3 is reserved for the namespace ab and 8 and 4 are allotted to the sub-trees rooted at aba and abb, respectively. Likewise, quota must be allocated to the sub-tree rooted at b and distributed within that sub-tree. HNC (b), on the other hand, allows portions of the hierarchy to be free of quotas. In this example, the administrator of namespace ab has, perhaps inadvertently, not set quotas for its subnamespaces, and likewise for the tenant administrator of b. If workloads in aba and abb were to exceed a resource consumption of 12, or the workloads at b were to consume resources exceeding 40, other namespaces with quotas might not be able to fully enjoy the resource quotas that had been reserved for them.
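The hierarchical quota dependency can be expressed as a simple invariant: at every node, the quota reserved for the namespace itself plus the quotas handed down to its child sub-trees must not exceed the quota of the sub-tree. The sketch below is hypothetical code, not EdgeNet's implementation, using the numbers from Fig. 4a.

```python
# Sketch of the hierarchical quota invariant illustrated in Fig. 4a.
def valid(node):
    """node = {"quota": q, "reserved": r, "children": [...]}.
    A sub-tree's quota must cover what its root namespace reserves plus the
    quotas delegated to its child sub-trees, recursively."""
    handed_down = sum(c["quota"] for c in node["children"])
    if node["reserved"] + handed_down > node["quota"]:
        return False
    return all(valid(c) for c in node["children"])

# The sub-tree rooted at ab from Fig. 4a: quota 15, of which 3 is reserved
# for ab itself, 8 goes to the sub-tree at aba and 4 to the one at abb.
ab = {"quota": 15, "reserved": 3, "children": [
    {"quota": 8, "reserved": 8, "children": []},   # aba
    {"quota": 4, "reserved": 4, "children": []},   # abb
]}
print(valid(ab))  # True: 3 + 8 + 4 <= 15 at the root, and each leaf fits
```

An HNC-style configuration in which children consume resources without any quota cannot be represented here, which is precisely the point: EdgeNet's all-or-nothing rule forces every sub-tree into this checked form.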
Resource quotas can be wasteful if allocated resources go unused, while best-effort distribution of resources is more efficient but provides no guarantees. None of the Kubernetes multitenancy frameworks provides an intermediate solution. Providing such a solution is on the EdgeNet development roadmap.

Variable slice granularity
We use the term slicing to refer to a mechanism that enables multitenancy by dividing a larger pool of resources into smaller portions, each portion being for the exclusive use of one of the tenants. For CaaS, the larger pool is a compute cluster that consists of nodes, which may be either physical servers or virtual machines. But what size should a smaller portion be: a full node, or a subset of the resources of a node? A subset can be acquired through the use of containers, sandboxed to a greater or lesser degree, as Sec. 4.2 will describe. Fig. 5 depicts the different possible node and slice granularities. In our estimation, neither slicing granularity is ideal for all use cases; a multitenancy framework should offer both, and automate the ability to switch between them.
Node-level Slicing (Figs. 5a and 5b). Slicing at this granularity, which is offered by all of the frameworks that we have studied, provides a tenant with one or more entire nodes, so that isolation of a tenant workload is ensured at the level of the node on which it runs. By this means, it offers greater freedom in choosing a container runtime to support a particular containerized workload. And it can better ensure stable access to resources. Reserving an entire physical server (Fig. 5a) can be valuable, in particular, for a tenant that needs to meet an unusual requirement, such as guaranteed access to GPU resources. However, when entire nodes are reserved for tenants, some nodes might be under-utilized.

Fig. 5: Dashed vertical lines indicate how a cluster's resources are sliced so as to make those resources available to tenants. A node in a cluster can be a physical server (left illustrations) or a VM (right illustrations), presented as node granularities. Slicing can be performed so as to make an entire node available to a tenant (top illustrations) or so as to make a subset of a node's resources available to a tenant (bottom illustrations). Different node and slice granularities can coexist within a cluster (e.g., the scenarios shown in all four illustrations could appear simultaneously in a single cluster). Our EdgeNet multitenancy framework automates the process of varying the slice granularity, allowing a node to be reserved for a tenant, or returning a reserved node to the pool of nodes available to be subdivided.
Sub-node-level Slicing (Figs. 5c and 5d). Sub-node-level slicing improves the ability of a cluster to maximize the efficiency of its resource use. It is enabled through containers, with each container on a node taking a portion of that node's resources. Isolation between multi-tenant workloads on the same host is provided at the level of containers, so it is weak. Better isolation can be ensured through container runtimes that provide sandboxes for containers. This approach restricts tenant autonomy in selecting a container runtime, as only a few such runtimes are available.
As Table 2 shows, all of the CaaS multitenancy frameworks that we have studied offer node-level slicing, and all but Kamaji offer sub-node-level slicing. When it is available, sub-node-level slicing is the default. Upon the request of a tenant, a cluster administrator can manually configure node-level slicing.
The EdgeNet framework is the only one for which the process of switching granularity is automated. Sec. 5.4 describes how we implement this. It might seem that the node-level slicing that we thereby enable suffers from all of the inefficiency of the multi-instance CaaS model that we have critiqued (see Sec. 1), but this is not so, as our architecture preserves the single-instance efficiency of a single control plane.
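The two granularities, and the switch between them, can be sketched as follows. This is a hypothetical model for illustration, not EdgeNet's slice implementation; the class and tenant names are invented.

```python
# Hypothetical sketch of variable slice granularity: a node can either be
# reserved whole for one tenant (node-level slicing) or stay in the shared
# pool and be subdivided among tenants (sub-node-level slicing).
class Node:
    def __init__(self, name, cpus):
        self.name, self.cpus = name, cpus
        self.reserved_for = None   # tenant name, or None while shared
        self.allocations = {}      # tenant -> cpus, for sub-node slicing

    def reserve(self, tenant):
        """Node-level slicing: hand the entire node to one tenant."""
        if self.allocations:
            raise ValueError("node already shared at sub-node granularity")
        self.reserved_for = tenant

    def release(self):
        """Return a reserved node to the pool available to be subdivided."""
        self.reserved_for = None

    def allocate(self, tenant, cpus):
        """Sub-node-level slicing: give a tenant a portion of the node."""
        if self.reserved_for is not None:
            raise ValueError("node is reserved whole for a tenant")
        if sum(self.allocations.values()) + cpus > self.cpus:
            raise ValueError("insufficient capacity")
        self.allocations[tenant] = self.allocations.get(tenant, 0) + cpus

n = Node("worker-1", cpus=16)
n.allocate("tenant-a", 4)   # two tenants share one node
n.allocate("tenant-b", 8)

m = Node("worker-2", cpus=16)
m.reserve("tenant-c")       # one tenant takes a whole node
```

The two modes are mutually exclusive per node, but, as Fig. 5 shows, both can coexist within one cluster.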

Federation support
CaaS multitenancy frameworks have to date generally been aimed at the use case of a single cluster operator offering its resources to its own tenants. However, the resources of several operators from different regions or countries will generally be required by a tenant that wishes to provide its edge cloud based services to large numbers of end-users. Such a tenant might prefer to be the customer of just one operator and, through that operator, gain access to the others. We anticipate that operators will see a commercial interest in federation, which will allow them to more broadly commercialize access to their clusters. We also anticipate that operators will want to lower the barrier to entry for those who deploy services by allowing them to orchestrate their containers across multiple clusters with a single tool.
Many edge cloud services, such as cognitive services [26,32], are expected to involve workloads that are spread across the cloud and the edge cloud [8], with workloads moving back and forth between the two, so there are voices in industry that argue [60], and we are convinced, that a unified, single interface for users is a necessity. As a first step towards this goal, the EdgeNet multitenancy architecture presents an essential first brick in such a federation architecture: the ability to generate object names that are universally unique to cluster and tenant. Such uniqueness avoids name collisions during the propagation of objects across clusters. The details of our implementation are found in Sec. 5.2.4.
Besides our EdgeNet framework, five of the frameworks that we study support scaling up the infrastructure that multiple tenants share, and four of them, the Virtual Kubelet based frameworks, do so through federation. Even for their main purpose of enabling deployment of workloads to multiple clusters, Virtual Kubelet based frameworks suffer from a significant drawback: Kubernetes' automatic scaling up and down of workloads to meet demand gets lost in remote clusters. This is because the Kubernetes objects that get deployed through a virtual kubelet are pods rather than the Deployment or StatefulSet workload resources that manage pod life cycles on a user's behalf, and the Kubernetes Horizontal Pod Autoscaling mechanism 29 in each cluster works on these sorts of objects, not on individual pods.
Like Virtual Kubelet, EdgeNet enables the deployment of workloads from local clusters on remote clusters, but EdgeNet handles this through an intermediate cluster between the local and remote clusters, called the Federation Manager. When a tenant, using its local cluster, makes a deployment in federation scope, the Federation Manager creates the deployment on the remote cluster on behalf of the tenant, as we describe briefly in Sec. 5.2.4. Some of Liqo's extensions to Virtual Kubelet start to tackle some of the concerns that would arise in a multi-tenant federation, such as collisions between the names of namespaces generated in local and in remote clusters. Liqo's solution is a naming scheme that ensures that the name of a namespace used by a workload will be unique in the remote cluster in which it is deployed. 30 However, the same workload risks running in namespaces with different names in different clusters, which can itself lead to problems. EdgeNet, by contrast, generates globally unique names that avoid collisions, and a workload runs in namespaces that carry the same name on all clusters to which it is deployed.
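One way such globally unique yet stable names could be derived is to make them a deterministic function of the originating cluster's identifier and the tenant. The sketch below is an assumed scheme for illustration only; EdgeNet's actual naming scheme is described in Sec. 5.2.4, and the cluster UIDs shown are invented.

```python
# Illustrative sketch: derive a namespace name that is unique to cluster and
# tenant, so that a propagated workload keeps the same name on every cluster
# in the federation and never collides with names generated elsewhere.
import hashlib

def federated_name(cluster_uid, tenant, namespace):
    """Deterministic, collision-resistant name from the originating cluster's
    UID, the tenant, and the local namespace name. The same inputs yield the
    same name on every cluster to which the object is propagated."""
    seed = f"{cluster_uid}/{tenant}/{namespace}".encode()
    digest = hashlib.sha256(seed).hexdigest()[:12]
    return f"{tenant}-{namespace}-{digest}"

# Same inputs -> same name everywhere; a different originating cluster
# -> a different name, so objects from two clusters cannot collide.
local = federated_name("6ba7b810-9dad", "tenant-a", "prod")
remote = federated_name("6ba7b810-9dad", "tenant-a", "prod")
other = federated_name("11d4a716-4466", "tenant-a", "prod")
assert local == remote and local != other
print(local)
```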
The other framework that provides for cloud-edge communication and significant scaling is Arktos, but we have been unable to determine whether federation is involved. Its stated aim is to achieve a single regional control plane to manage 300,000 nodes that multiple tenants will share. 31

Design Decisions
Our vision for EdgeNet's multitenancy framework is to promote a future in which the CaaS service model can thrive, particularly at the network edge. We have made nine design decisions, listed below, to support this vision. The first six were discussed in relation to related work in the previous section, and the latter three are discussed in this section. The implementation details are provided in the Architecture section that follows (Sec. 5).
• Multitenancy approach. EdgeNet obtains the lower overhead offered by a single-instance native approach to multitenancy, compromising on the isolation that would be offered by a multi-instance one (Sec. 3.1).
• Customization approach. We mitigate the customization limitations that stem from the single-instance approach through the use of hierarchical namespaces (Sec. 3.2).
• Consumer and vendor tenancy. We design EdgeNet to support both the consumer and vendor forms of tenancy (Sec. 3.3).
• Tenant resource quota. EdgeNet incorporates a control mechanism to manage the allocation of resource quotas in a hierarchical tenancy structure, allowing tenants to grant quotas to their subtenants and recoup those quotas from them (Sec. 3.4).
• Variable slice granularity. Considering that there is no ideal granularity at which to slice a compute cluster in order to deliver resources to tenants, we allow an EdgeNet cluster to be sliced into individual compute nodes or at a sub-node-level granularity (Sec. 3.5).
• Federation support. Our framework allows each EdgeNet cluster to receive the workloads of tenants from other EdgeNet clusters with which it is federated, while avoiding name collisions by generating object names that are unique to cluster and tenant (Sec. 3.6).
• Kubernetes custom resources. For ease of integration into existing systems and ease of adoption by users, we implement EdgeNet using the Kubernetes custom resources feature, rather than creating a wrapper around Kubernetes or forking the Kubernetes code (Sec. 4.1).
• Lightweight hardware virtualization. We compensate for the loosened isolation of workloads in the single-instance native approach through the use of lightweight hardware virtualization that is optimized for running containers (Sec. 4.2).
• External authentication. In a federated multitenancy environment, users will need to authenticate with remote clusters, and for that reason EdgeNet adopts an authentication method that is external to any individual cluster (Sec. 4.3).

Kubernetes custom resources
Kubernetes' custom resource feature 32 allows new entities to be added that, by their very presence, extend the standard Kubernetes API, thereby maintaining backward compatibility with tools and interfaces that are familiar to users. By building our EdgeNet framework in this way, instead of as a wrapper around Kubernetes or as a separate system that interacts with Kubernetes, we increase the chances that the framework will be compatible with a variety of Kubernetes distributions. For example, we have successfully tested and run the EdgeNet framework as an extension of k3s, 33 a lightweight certified Kubernetes distribution for IoT and edge computing.
We have containerized the EdgeNet extensions and we provide them in the form of public Docker images and configuration files. The core Kubernetes code remains untouched, and there is no need to recompile any existing code that runs a cluster. Any cluster administrator can deploy the extensions to their cluster with a single kubectl apply command, without the need to bring down the cluster or interrupt its work in any way.
Aside from the choice of Kubernetes and of Kubernetes custom resources, all of our other design decisions should, in principle, apply to enabling multitenancy in any other container orchestration tool.

Lightweight hardware virtualization
In the context of edge computing, the choice of virtualization technology, between hypervisors, which provide the best isolation, and containers, which are lightweight [59], is a longstanding discussion. We prioritize containerized environments because of their lower overhead; in so doing, we favor enhanced performance over delivering the best isolation [31].
A native framework with operating-system-level virtualization satisfies these requirements, but it presents security concerns having to do with containers sharing the same kernel. We want to offer each tenant the security of its own guest kernel, which hardware virtualization provides, but without going so far as to adopt a multi-instance approach that would negate the performance advantages of containers over VMs. Fortunately, this is possible through the use of lightweight virtual machines, which offer the isolation benefits of hardware virtualization while delivering near-container-level performance. Our multitenancy framework therefore adopts a single-instance native approach with lightweight hardware virtualization.
We follow earlier work [29,47] that has recommended the Kata runtime 34 for providing isolation between containers in a multitenant environment [24,39,9,58]. Kata spawns a lightweight VM that is optimized to run containers, delivering near-container-level performance [24, Fig. 5] and better isolation than OS-level virtualization.
Fig. 6 depicts three methods for workload isolation: virtual machines, Docker containers, and Kata containers, considering a single workload per isolation unit; each method trades isolation and performance against overhead. One workload per virtual machine provides the best isolation among the three while introducing high overhead. Containerization lowers that overhead, with one workload per container, although it diminishes the isolation. The Kata method falls between VMs and containers in terms of both isolation and overhead, as a containerized single workload runs in a lightweight virtual machine.
Tenants who require better isolation and performance at the same time can obtain these using the slice software entity in our framework. As described in Sec. 5.4, this entity provides a tenant with the option of selecting container runtimes on an isolated subcluster, so that the tenant can select one that meets its application requirements.
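In standard Kubernetes, selecting the Kata runtime for a workload involves a RuntimeClass object and the pod's runtimeClassName field; both are shown below as Python dicts for illustration. The object structure follows the Kubernetes API; the handler value depends on how Kata is configured on the nodes, and the names here are illustrative.

```python
# The standard Kubernetes objects involved in running a pod's containers in
# a Kata lightweight VM, expressed as Python dicts.
runtime_class = {
    "apiVersion": "node.k8s.io/v1",
    "kind": "RuntimeClass",
    "metadata": {"name": "kata"},
    "handler": "kata",  # the CRI handler configured on the worker nodes
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "isolated-workload", "namespace": "tenant-a"},
    "spec": {
        # Opting into the RuntimeClass above runs this pod's containers
        # inside a lightweight VM rather than on the shared host kernel.
        "runtimeClassName": "kata",
        "containers": [{"name": "app", "image": "nginx"}],
    },
}

assert pod["spec"]["runtimeClassName"] == runtime_class["metadata"]["name"]
```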

External Authentication
A tenant's users must authenticate themselves in order to access the resources that they are authorized to access. For multitenant CaaS to run at scale, it is not feasible to require users to have individual accounts at every different cluster location where they will deploy their workloads [15]. Instead, authentication should be managed by an integrated identity management system. For example, an identity federation that consists of multiple identity providers, using OpenID Connect (OIDC) 35 running on top of OAuth 2.0 36 as the authentication method, can support large-scale federations. With this in mind, EdgeNet uses this type of authentication (see Sec. 5.8).

Fig. 6: (a) One workload per virtual machine: the best isolation among the three, introducing high overhead. (b) One workload per container: the weakest isolation among the three, providing improved performance.

Architecture
Our EdgeNet architecture has been conceived around the design decisions articulated in Sec. 4, with the aim of introducing as little overhead as possible while making Kubernetes ready for the edge. As a reminder, our main design decision has been to take a single-instance native approach, meaning that tenants share a cluster's control plane components and compute nodes, rather than each tenant acquiring its own. To compensate for the diminished isolation that comes with sharing the same cluster, EdgeNet uses lightweight VMs to isolate workloads while retaining low overhead.
The architecture of our EdgeNet multitenant CaaS framework is illustrated in Fig. 7. It is designed as a set of custom resources and custom controllers that extend Kubernetes from within. The framework consists of six principal new entities: 37
• Tenant is the fundamental entity that isolates a tenant from other tenants (Sec. 5.1).
• Subsidiary Namespace is an isolated environment created by a tenant (Sec. 5.2).
These are assisted by new entities that facilitate cluster and tenant management: Role Request (Sec. 5.6), and Tenant Request and Cluster Role Request (Sec. 5.7). Our architecture also covers user authentication via existing mechanisms (Sec. 5.8). Aside from these, it provides cluster operators with configuration files in YAML format, which can be carefully customized, defining runtime class 39 and predefined role resources.

Tenant
In the context of the namespace structure maintained by the EdgeNet framework, the Tenant entity is a controller that acts at the top level of the hierarchy: creating, updating, and deleting the core namespaces of cluster-scoped tenants, which are the ones that are admitted into the cluster by the cluster's administrator. Here, we describe the Tenant entity, while Sec. 5.

In this example, a and b are tenant core namespaces, directly under the root of the hierarchy, r, which is not itself a namespace; the subsidiary namespaces are aa, ab, aba, abb, and ba. Kubernetes' initial namespaces, default (d), kube-node-lease (knl), kube-public (kp), and kube-system (ks), are not included in the hierarchy and are not managed by these controllers.
UIDs are defined in Kubernetes as 128-bit-long universally unique identifiers [40], 40 and the Kubernetes community suggests using the UID of the kube-system namespace as a cluster identifier. 41 The labels allow the tenant namespaces to be consumed by policies and other entities locally. This labeling model is also required for the inter-cluster object propagation mechanism.
Each tenant has an owner who has control over the tenant and its resources, including any subnamespaces that the tenant might create. Having created the core namespace, the Tenant entity uses the Kubernetes role-based access control (RBAC) mechanism to grant this control, while at the same time limiting the tenant owner's control to the scope of its core namespace, so that it may not interfere with other tenants' namespaces. The Subsidiary Namespace entity will be responsible for extending the scope of the owner's control to the subnamespaces. With their control over the core namespace, the owner can manage the tenant by, among other things: admitting users; granting roles, which are sets of permissions, to those users; and deploying workloads.
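In standard Kubernetes RBAC, such namespace-scoped control is granted with a RoleBinding, shown below as a Python dict for illustration. The object structure and the built-in "admin" ClusterRole are standard Kubernetes; the tenant, namespace, and user names are invented, and EdgeNet's actual role definitions may differ.

```python
# Illustration of the standard Kubernetes RBAC object with which a tenant
# owner can be granted control scoped to the tenant's core namespace.
role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    # A RoleBinding lives in a namespace, so the permissions it grants stop
    # at that namespace's boundary: the owner cannot interfere with other
    # tenants' namespaces.
    "metadata": {"name": "tenant-owner", "namespace": "tenant-a"},
    "subjects": [{
        "kind": "User",
        "name": "owner@tenant-a.example",
        "apiGroup": "rbac.authorization.k8s.io",
    }],
    "roleRef": {
        # "admin" is one of Kubernetes' default user-facing ClusterRoles;
        # bound through a RoleBinding, it confers broad rights within this
        # one namespace only.
        "kind": "ClusterRole",
        "name": "admin",
        "apiGroup": "rbac.authorization.k8s.io",
    },
}

print(role_binding["metadata"]["namespace"])
```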
Kubernetes' network policies allow confining pod communication into a namespace or set of namespaces by using labels.In our multitenancy framework, the policies consume the UID labels, as specified earlier, attached to tenant namespaces.Since tenants have complete authorization on their network policies, an authorized user can, wittingly or not, misconfigure network policies in a namespace, thus resulting in security threats.To overcome this vulnerability, we let a tenant enable or disable cluster-level network policy in the tenant specification, which confines the tenant's namespaces thanks to VMware's Antrea.Edge-Net uses its hierarchical namespace structure to build consumer and vendor tenancy.In this example, the namespace a belongs to a consumer tenant and the namespace b belongs to a vendor tenant that is reselling containers-as-aservice to its own customers.
The Subnamespace controller creates workspaces rooted at aa and ab for the consumer tenant by placing those namespaces into workspace mode. The consumer tenant has visibility into those workspaces.
The controller creates subtenants rooted at ba and bb for the vendor tenant by placing those namespaces into subtenant mode. The vendor tenant does not have visibility into its subtenants. Note that the subtenant that owns the sub-tree rooted at bb does have visibility into its own workspaces at bba and bbb.
A key characteristic of subnamespaces is enabling the choice of either mode, workspace or subtenant, at any depth in the hierarchy. By extension, subnamespaces allow one child namespace to be created in workspace mode and another in subtenant mode, as shown in Fig. 10. Not only can these two modes co-exist in the same subtree, but they also reinforce each other's benefits. Last but not least, a subsidiary namespace can also be formed to be propagated across federated clusters. If so, it generates object names that are unique to the originating cluster and tenant, to prevent name collisions during object propagation across the federation. Sec. 5.2.4 describes how our federation architecture functions.
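The visibility rules of the two modes can be captured in a toy model. This is our own simplified rendering of the behavior described above, not EdgeNet's API: a parent sees into workspace children recursively, but visibility stops at any subtenant boundary.

```python
# Toy model (an assumption, not EdgeNet's actual data model) of the two
# subnamespace modes: "workspace" children are visible to the parent tenant,
# "subtenant" children are opaque to it.

class Subnamespace:
    def __init__(self, name, mode="workspace"):
        assert mode in ("workspace", "subtenant")
        self.name, self.mode, self.children = name, mode, []

    def child(self, name, mode="workspace"):
        c = Subnamespace(name, mode)
        self.children.append(c)
        return c

    def visible(self):
        """Names of the namespaces the owner of this node can see below it."""
        seen = []
        for c in self.children:
            if c.mode == "workspace":
                seen.append(c.name)
                seen.extend(c.visible())
            # subtenant children are opaque to the parent
        return seen

# Reproducing the example from the text:
b = Subnamespace("b")                # vendor tenant's core namespace
ba = b.child("ba", "subtenant")
bb = b.child("bb", "subtenant")
bb.child("bba"); bb.child("bbb")     # the subtenant's own workspaces
```

Here `b.visible()` is empty, since both children are subtenants, while `bb.visible()` lists the workspaces bba and bbb, matching the example of the vendor tenant and its customer.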

Inheritance
In the subnamespace specification, an authorized user can declare which objects are passed by inheritance from parent to child. The Kubernetes resource kinds that can currently be inherited are as follows:
• Role-based access control (RBAC) Roles and Role Bindings; together these adjust the permissions of users.
• Network Policies; restrict a namespace to defined ingress/egress rules.
• Limit Ranges; set per-pod resource constraints.
• Secrets; hold sensitive information, such as credentials, to be consumed by pods.
• Config Maps; hold configuration to be used by pods.
• Service Accounts; identities that allow applications and services to authenticate with the Kubernetes API.
If RBAC objects are not inherited, the specification must include the owner of the subnamespace for management purposes. Further, it is possible to declare continuous inheritance, in which case the controller constantly syncs objects from a parent to its child.
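The inheritance mechanism can be sketched as follows. This is our own minimal rendering of the behavior just described, under assumed data shapes: objects of declared kinds are copied from parent to child, non-inheritable kinds are filtered out, and continuous inheritance amounts to re-running the sync.

```python
# Minimal sketch (our assumption of the mechanism, not EdgeNet's actual code):
# copy selected object kinds from a parent namespace into a child namespace.

INHERITABLE = {"Role", "RoleBinding", "NetworkPolicy", "LimitRange",
               "Secret", "ConfigMap", "ServiceAccount"}

def inherit(parent_objects, kinds):
    """Return copies of the parent's objects whose kind was declared inheritable."""
    chosen = set(kinds) & INHERITABLE
    return [dict(obj) for obj in parent_objects if obj["kind"] in chosen]

def sync(parent_objects, child_objects, kinds):
    """Continuous inheritance: make the child's inherited objects match the parent."""
    inherited = inherit(parent_objects, kinds)
    # keep child-local objects of kinds that were not declared for inheritance
    kept = [o for o in child_objects if o["kind"] not in set(kinds)]
    return kept + inherited

parent = [{"kind": "Secret", "name": "registry-creds"},
          {"kind": "NetworkPolicy", "name": "deny-external"},
          {"kind": "ResourceQuota", "name": "tenant-quota"}]  # never inherited

child = sync(parent, [], ["Secret", "NetworkPolicy", "ResourceQuota"])
# ResourceQuota is filtered out because it is not an inheritable kind
```

Note how the `ResourceQuota` object is dropped even though it was requested, mirroring the rule, explained next, that quotas are never subject to inheritance.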
Note that a resource quota is not an entity subject to inheritance, so as to avoid overconsumption by a tenant, which could otherwise get around quotas by generating subnamespaces at will. The logic ensures that the aggregated child resource quotas cannot exceed the parent's initial resource quota, including for the core namespace. Each subnamespace creation taxes its parent's resource quota, so that the aggregate of the resource quotas in the parent and child namespaces remains the same. In other words, a tenant's resource quota is a cake to be shared out, and each subnamespace gets a piece of its parent's cut.
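The "cake" rule can be sketched in a few lines. This is a simplified model under our own assumptions, not the framework's implementation: creating a subnamespace deducts its quota from the parent, and the aggregate over the tree is invariant.

```python
# Sketch of the quota "cake" rule: carving out a subnamespace taxes the
# parent's quota, so the aggregate over parent and descendants never grows.
# Single scalar quota assumed for simplicity (the paper makes the same
# simplification in its formal model).

class QuotaNode:
    def __init__(self, name, quota):
        self.name, self.quota, self.children = name, quota, []

    def carve(self, name, quota):
        """Create a child subnamespace, deducting its quota from this node."""
        if quota > self.quota:
            raise ValueError("aggregated child quotas would exceed the parent's")
        self.quota -= quota
        child = QuotaNode(name, quota)
        self.children.append(child)
        return child

    def aggregate(self):
        """Total quota across this namespace and all of its descendants."""
        return self.quota + sum(c.aggregate() for c in self.children)

core = QuotaNode("tenant-a", 100)   # tenant resource quota on the core namespace
ws = core.carve("tenant-a-ws", 40)  # subnamespace takes its piece of the cake
ws.carve("tenant-a-ws-ci", 10)      # recursion: children share out their cut

assert core.aggregate() == 100      # the cake never grows
```

Recouping a relinquished child's portion would be the inverse operation: remove the child and add its aggregate back to the parent's quota.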

Naming Convention
The naming convention has been conceived so as to enable federation deployments. As mentioned in Sec. 5.1, a core namespace shares the same name as its tenant. Independent of its depth, a subnamespace follows the pattern <subnamespace-name>-<hash>. We feed the hash function with the parent namespace name and the subnamespace name. This naming convention reduces the chance of name collisions when creating subnamespaces. If a collision nonetheless occurs, the subnamespace object enters a failure state, indicating a collision status. This is vital to the interoperability of multiple clusters: tenants or namespaces holding the same names probably occur in many clusters, so conflicts would inevitably arise while propagating objects unless there is an adjustment mechanism such as the one described here.
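A minimal sketch of the convention follows. The paper does not specify which hash function EdgeNet uses or the digest length, so SHA-256 truncated to six hex characters is our assumption for illustration.

```python
# Sketch of the <subnamespace-name>-<hash> convention: the suffix is a short
# digest over the parent namespace name and the subnamespace name. The choice
# of SHA-256 and a 6-character digest is an assumption, not EdgeNet's spec.

import hashlib

def subnamespace_name(parent: str, name: str, digest_len: int = 6) -> str:
    """Build <subnamespace-name>-<hash> from the parent and subnamespace names."""
    h = hashlib.sha256(f"{parent}/{name}".encode()).hexdigest()[:digest_len]
    return f"{name}-{h}"

# The same subnamespace name under different parents yields different results,
# reducing collisions when namespaces from many tenants coexist in a cluster.
a = subnamespace_name("tenant-a", "dev")
b = subnamespace_name("tenant-b", "dev")
assert a != b
```

For a federated-scope subnamespace, the cluster UID would additionally be fed into the hash and prefixed to the name, as the Federation section describes, making names unique across clusters as well.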

Federation
In our federation vision, each cluster, even before it is federated, is a multitenant cluster, making its worker nodes available to multiple tenants; federation further opens the cluster to the workloads of tenants from other clusters. (As discussed in Sec. 3, this differs from the approach of the Liqo framework, based on Virtual Kubelet, in which clusters only achieve multitenancy by federating.) We have developed a proof-of-concept federation architecture with a prototype implementation that works jointly with our multitenancy framework. The source code of the prototype is publicly accessible via our repository.
We see each tenant gaining access to a federated set of clusters via what we might term a home cluster or local cluster. For example, a company that has developed an application that serves vehicles in several countries might need to deploy its workloads to the edge clusters of mobile operators in each of those countries, and it can do so via a cluster in its home country that is federated with these other clusters. To obtain access to a local cluster, it might contract with a cloud provider that has a commercial presence in its home country, leaving the cloud provider to manage the commercial relationships with the other providers in the federation.
Information regarding the identity of the company and its contract with its local provider remains local, while only the workload-related objects necessary for the deployment of the application get propagated to remote clusters. Propagating as few objects as possible has three significant benefits: (1) it avoids replication of tenant information across clusters, thus reducing bandwidth consumption and unnecessary traffic; (2) it enhances data privacy and sovereignty and mitigates security risks; and (3) it significantly reduces the overhead that could stem from running a control plane or worker nodes per tenant at the scale of a federation.
In EdgeNet, the deployment scope of any subnamespace can be set to either federated or local. If federated, the subnamespace controller adds the UID of the kube-system namespace as a prefix to the namespace name, and this cluster UID is also fed into the hash function described just above (Sec. 5.2.3). This ensures the uniqueness of each name across all of the federated clusters.
In our prototype federation, a tenant deploys its workloads to remote clusters by creating a Selective Deployment [50] that targets the remote clusters using affinities, such as locations and connected devices. A manager entity, called the Federation Manager, is informed by the local cluster of federation-scoped Selective Deployments. When it receives one, it searches for remote clusters that satisfy the affinities, in order to deploy the workload there on behalf of the tenant. To move towards a production federation architecture, issues such as caching and scheduling will need to be tackled.
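The Federation Manager's matching step can be sketched as a simple filter. The affinity categories (locations, connected devices) come from the text; the data shapes, field names, and cluster inventory below are our assumptions for illustration.

```python
# Toy sketch of Federation Manager matching: keep clusters whose advertised
# properties satisfy every affinity of a federation-scoped Selective
# Deployment. Field names and data shapes are illustrative assumptions.

def matching_clusters(clusters, affinities):
    """Return the names of clusters whose properties satisfy every affinity."""
    matches = []
    for c in clusters:
        satisfied = all(
            item in c["properties"].get(key, [])
            for key, wanted in affinities.items()
            for item in wanted
        )
        if satisfied:
            matches.append(c["name"])
    return matches

clusters = [
    {"name": "edge-fr", "properties": {"location": ["FR"], "devices": ["camera"]}},
    {"name": "edge-de", "properties": {"location": ["DE"], "devices": []}},
]
selective_deployment = {"location": ["FR"], "devices": ["camera"]}

assert matching_clusters(clusters, selective_deployment) == ["edge-fr"]
```

A production version would, as the text notes, also need caching of cluster state and a scheduling policy for choosing among several matching clusters.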

Tenant Resource Quota
As described in Sec. 3.4, Kubernetes provides the ability to associate resource quotas with namespaces, but in the context of independent namespaces. Since our multitenancy framework extends Kubernetes namespaces to work in a hierarchical fashion, we need to extend the quota mechanism to take into account the dependency of each namespace on the namespaces above and below it in the hierarchy. The EdgeNet quota mechanism is designed to allow a given resource to be shared out between a namespace and its child namespaces, and to allow the parent namespace to recoup each child's portion when it is relinquished. Child namespaces can in turn share out their quota with their children, and so on, recursively. Our framework covers the following resources, each accounted for individually: CPU, memory, local storage, ephemeral storage, and bandwidth.
We model tenant resource quotas by representing the tree of a hierarchical namespace as a graph T = (V, E) composed of vertices V and parent-to-child edges E. For our purposes, each vertex v ∈ V is a namespace, except for the root node. The tenant of a namespace v is entitled to construct a subtree T_v rooted at that namespace v, which is also called a core namespace. Denote by q(T) the resource quota of tree T; each namespace v ∈ V has a resource quota q(v).
For simplicity, we assume here a single quota covering all resource types. In fact, different quotas can be set for different resources, such as CPU and memory.
Let σ(v) = {w_1, w_2, ...} ⊂ V represent the subnamespaces of v, and likewise let σ(w) = {z_1, z_2, ...} ⊂ V represent the subnamespaces of w. The hierarchical resource quota problem here is twofold. First, we must ensure that a tenant resource quota q(T_v) is equal to the aggregated resource quota across all of its namespaces: q(T_v) = q(v) + Σ_{w∈σ(v)} q(w) + Σ_{w∈σ(v)} Σ_{z∈σ(w)} q(z). Second, we must guarantee that the resource quota allocated to a subtree rooted at a namespace w is likewise equal to the aggregated resource quota across the namespaces of that subtree: q(T_w) = q(w) + Σ_{z∈σ(w)} q(z).
We solve this problem by partitioning resource quotas among parents and their children, in keeping with the container orchestration tool's declarative approach. A tenant resource quota works by applying an identical resource quota, a Kubernetes resource, to the tenant's core namespace. Then, each subsidiary namespace in the core namespace takes its portion from that resource quota, as shown in Fig. 4a.
As mentioned above, when resources are constrained, ensuring a fair share of them is essential. Static allocation of quotas, however, may lead to inefficient use of the resources. There are two sides to this problem. Resource quotas that are allocated to tenants, when some tenants' consumption falls below their quotas, may result in suboptimal utilization of compute resources in clusters. Likewise, resource quotas that tenants statically allocate to subnamespaces, when some subnamespaces consume fewer resources than their quotas, may provoke less-than-ideal use of the tenant resource quotas. Even though our system allows temporary additions to and removals from tenant resource quotas, as well as manual updates of subnamespace quotas, this solution cannot scale when there are many clusters. Sec. 7 introduces how we plan to address this problem.

Slice and Slice Claim
Two software entities enable node-level slicing: the slice and the slice claim. A slice, a cluster-scoped entity, forms a subcluster from a set of nodes, as its name signifies. Once established, a slice isolates the nodes within it from multitenant workloads. These nodes are chosen via a selector composed of fields that denote labels, a number of nodes, and desired resources. A slice claim, on the other hand, is a namespaced entity that tenants may create for their subnamespaces.
Nodes in a slice remain in a pre-reserved status until a subnamespace uses that slice. Once a subnamespace is bound to a slice, the multitenant workloads that run on the nodes in this slice are terminated within a grace period of one minute. That is to say, workloads created in that subnamespace are isolated from those of other tenants. The container runtime configuration within such subnamespaces thus becomes available to tenants. Regarding the termination grace period, we have set it to one minute by default, twice the Kubernetes default of 30 seconds; however, providers can adjust this grace period according to their requirements.
A slice claim has two working modes: dynamic and manual. The dynamic mode permits a tenant to automatically create a slice if the resource quota in the slice claim's namespace is sufficient. In contrast, the manual mode prevents a slice claim from generating a slice even if the slice claim's namespace has an adequate resource quota; in this case, a cluster administrator must satisfy the tenant's request. This behavior can be desirable when nodes in a cluster are scarce. Fig. 11 depicts how a tenant can receive node-level isolation. We discuss the need for a daemon to improve isolation in Sec. 7.
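The slice selector described above can be sketched as a node filter. The data shapes, label and resource names below are our assumptions; the selector fields (labels, node count, desired resources) follow the text.

```python
# Toy sketch of a slice selector: pick the requested number of nodes whose
# labels match and whose free resources cover the desired amounts. The node
# inventory shape is an illustrative assumption.

def select_nodes(nodes, labels, node_count, resources):
    """Return up to node_count node names satisfying the labels and resources."""
    picked = []
    for n in nodes:
        label_ok = all(n["labels"].get(k) == v for k, v in labels.items())
        res_ok = all(n["free"].get(r, 0) >= amount for r, amount in resources.items())
        if label_ok and res_ok:
            picked.append(n["name"])
        if len(picked) == node_count:
            break
    return picked

nodes = [
    {"name": "n1", "labels": {"zone": "paris"}, "free": {"cpu": 8, "memory": 16}},
    {"name": "n2", "labels": {"zone": "paris"}, "free": {"cpu": 2, "memory": 4}},
    {"name": "n3", "labels": {"zone": "lyon"},  "free": {"cpu": 8, "memory": 16}},
]
slice_nodes = select_nodes(nodes, {"zone": "paris"}, 1, {"cpu": 4})
```

In the dynamic mode, a controller would run such a selection automatically once the slice claim's namespace quota checks out; in the manual mode, an administrator triggers it.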

Admission Control Webhook
An admission control webhook is a software entity that allows for enforcing custom policies. It can mutate and validate users' object operation requests. Such mutating and validating operations are critical to ensure that users adhere to framework-specific policies. We enforce custom policies for subnamespaces, slices and slice claims, role requests, tenant requests, cluster role requests, as well as pods. In our implementation, a cluster admin may approve or deny a tenant request. Alternatively, as mentioned above, a provider can integrate a credit-card-verification-like mechanism with our framework to avoid manual administration of clusters, supporting CaaS operation at scale across many clusters. A tenant request carries four pieces of information: the organization, the owner, the tenant resource quota if desired, and whether or not to apply a cluster-level network policy. A cluster role request is an entity that allows a user to claim a role at the cluster scope. This entity eases the shaping of a cluster administration team and encourages platform users to ask for the roles that they need.
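A validating decision of this kind can be sketched as follows. The AdmissionReview response shape follows the Kubernetes `admission.k8s.io/v1` API; the policy itself, requiring the mandatory tenant-request fields, is our illustration and not EdgeNet's actual rule set.

```python
# Minimal sketch of a validating-webhook decision for a tenant request.
# The AdmissionReview envelope follows admission.k8s.io/v1; the required-field
# policy below is an illustrative assumption.

REQUIRED_FIELDS = {"organization", "owner"}  # quota and network policy are optional

def review_tenant_request(admission_review: dict) -> dict:
    """Validate a tenant request and build the AdmissionReview response."""
    request = admission_review["request"]
    spec = request["object"].get("spec", {})
    missing = REQUIRED_FIELDS - spec.keys()
    response = {"uid": request["uid"], "allowed": not missing}
    if missing:
        response["status"] = {"message": f"missing fields: {sorted(missing)}"}
    return {"apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": response}

incoming = {"request": {"uid": "abc-123",
                        "object": {"spec": {"organization": "Example Labs"}}}}
verdict = review_tenant_request(incoming)
```

Here the request is rejected because the owner field is absent; a mutating webhook would instead patch the object before it reaches validation.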

Authentication
Our general design approach is to build, wherever possible, upon what is already available for Kubernetes, as we do by adopting OpenID Connect (OIDC), running on top of OAuth 2.0, as our authentication method. A feature that is still under development is to extend OIDC with Pinniped so as to access resources across clusters. This will allow a user to authenticate once in order to access the namespaces and objects for which the user has access rights in all of the clusters to which those objects have propagated.

Benchmarking
This section analyzes the performance of our EdgeNet single-instance native Kubernetes multitenancy framework. One of our goals is to assess to what extent the native and multi-instance approaches are suitable for edge computing use cases. To this end, we compare our framework to single-cluster-per-tenant offerings, with the help of Rancher Kubernetes Engine (RKE) to automate cluster creation, and to the VirtualCluster [62] code, which realizes a multi-instance-based multitenancy framework. That is to say: to represent the multi-instance through multiple clusters approach, we pick RKE, which is widely known for installing Kubernetes; to represent the multi-instance through multiple control planes approach, we pick VirtualCluster, a Kubernetes working group framework that is described in the scientific literature [62]; and our own EdgeNet framework represents the single-instance approach.
Both RKE and VirtualCluster perform well when compute resources are nearly unlimited, or when scalability with regard to the number of tenants is less of a concern. Compared to RKE, VirtualCluster is well adapted to address the issues of the single-cluster-per-tenant solution, such as high overhead. However, as we shall see, there is a tradeoff between performance and isolation, which means that existing solutions are not ideal for edge computing. We used the GENI infrastructure [44] to spawn four Ubuntu 20.04 LTS virtual machines with 8 CPUs and 16 GB of memory each in order to conduct experiments with EdgeNet and VirtualCluster. Using these virtual machines, we created a Kubernetes v1.21.9 cluster consisting of one control plane node and three worker nodes. The control plane node is completely isolated from any workloads.
For the VirtualCluster experiments, we reserved a worker node for running the manager, syncer, and agent components. Likewise, the per-VirtualCluster-tenant entities, which are the apiserver, etcd, and controller-manager, are deployed on a dedicated worker node. For the EdgeNet experiment, an isolated worker node was sufficient to run the entities. A separate worker node hosted monitoring tools in both cases. We used the default configuration settings for both frameworks, including the number of workers that process concurrently and the execution period that triggers the controller.
We compared the frameworks' performance for tenant creation and for pod creation. For VirtualCluster tenant creation, inter-arrival times of 0, 8, 16, and 32 seconds were used for creating 2, 4, 8, 16, 32, and 64 tenants. For EdgeNet, inter-arrival times of 0, 2, 4, 8, 16, and 32 seconds were used for creating up to 10,000 tenants. (We discuss the reasons for the disparity in the number of tenants below.) For both frameworks, 1,000, 2,500, 5,000, and 10,000 pods were created. The timeout was two minutes to create tenants and pods separately.
To measure the performance of the cluster-per-tenant method, the resources reserved for tenant entities, a virtual machine with 8 CPUs and 16 GB of memory, were divided evenly among four Ubuntu 20.04 LTS virtual machines with 2 CPUs and 4 GB of memory each on GENI. 2 CPUs were chosen because cluster provisioning repeatedly failed with VMs having a single CPU. We repeated measurements at least three times for each case.

Tenant Creation
As discussed throughout the paper, besides security, overhead is a noteworthy factor in qualifying a multitenancy framework, especially for edge clouds. Our experiments measure a framework implementation's ability to handle simultaneous creation requests; the time it takes to create a tenant; the entities' resource consumption; and per-tenant consumption, if any. Each request is considered successful if the framework returns a success status within two minutes of the control plane receiving the request.

VirtualCluster
The experiments show a correlation between request inter-arrival time and the tenant creation success rate. For example, with a 32 s inter-arrival time for 32 creation requests, the number of successfully created tenants ranges from 26 to 32; when the inter-arrival time is lowered to 8 s, the successes decrease to between 13 and 18, as shown in Fig. 12a. It is possible that VirtualCluster's difficulties in handling simultaneous requests stem from an implementation issue that starves tenants of the compute resources necessary to establish their control planes in these circumstances.
Similarly, as seen in Fig. 12b, decreasing the request inter-arrival time increases the tenant creation time. At a 32 s inter-arrival time, the median creation time is 76 s; put another way, it would take more than an hour to create 128 tenants. Furthermore, as the figure shows, the creation time fluctuates more widely as the inter-arrival time decreases.
The most critical scaling weakness for VirtualCluster is that every tenant introduces additional overhead in terms of memory and CPU usage, due to the per-tenant isolation of the control plane components: apiserver, etcd, and controller manager. Fig. 12c presents the baseline memory usage for 2, 4, 8, 16, and 32 tenants. Extrapolating, a thousand tenants would consume around 300 GB of memory just to be present in the cluster. This limitation ultimately affected our experiment, which could not reach a high number of tenants on the single node that we had reserved for tenant components; the maximum number of tenants that we could create stably was approximately 40.
In addition, a tenant starting to use the cluster results in an increase in resource consumption. We also noticed that a successful status message for the tenant control plane does not imply that all of its components are present and functioning properly; we therefore only considered the cases where all of a tenant's control plane components were created successfully.

EdgeNet
As opposed to VirtualCluster, EdgeNet supported the creation of 128 tenants simultaneously with an almost zero failure rate across experiments. It also scaled well beyond this number, stably generating 2,560 and 10,000 tenants when the request inter-arrival time was set to 2 s and 4 s, respectively, as shown in Fig. 12a. This is as far as one can go before running into Kubernetes' maximum namespace threshold of 10,000 per cluster; if tenants are allowed around ten namespaces each, the number of tenants per cluster is limited to around 1,000.
When requests arrive simultaneously, the median time for EdgeNet to create a tenant object in the control plane increases with the number of tenants: 38 ms, 48 ms, 63 ms, 68 ms, 106 ms, 175 ms, 216 ms, and 270 ms for 2, 4, 8, 16, 32, 64, 128, and 256 tenants, respectively. A different pattern of results is obtained with an inter-arrival time of 2 s: the median creation time is 11 ms for 1,280 tenants and for 2,560 tenants, and we tested as far as 5,120 tenants, which also clock in at a median of 12 ms. For 10,000 tenants, the median value is still 12 ms when the inter-arrival time is set to 4 s. However, the maximum values increase as a function of the number of requests.
This suggests that concurrent or numerous requests moderately saturate the shared API server, controller manager, and etcd. Thus, when arrivals are simultaneous, the average time to fully establish a tenant increases with the number of tenants: from 500 ms for 2 tenants up to 937 ms for 128 tenants. But Fig. 12b reveals that the time to fully establish a tenant drops when requests are spread out in time. For 32 tenants, the median times are 11.5 s for simultaneous arrivals, and 271 ms, 274 ms, and 274 ms for inter-arrival times of 8 s, 16 s, and 32 s, respectively.
EdgeNet shows good results because it configures the state of the cluster rather than replicating components: it does not generate per-tenant overhead, as shown in Fig. 12c. Given that the resource consumption of its controllers is negligible, it is fair to state that there is no significant overhead in our framework.
It takes EdgeNet approximately 1 min 41 s to create 128 tenants. This creation time can be shortened, if needed, by adjusting the number of workers and the running period. By default, the tenant controller uses two workers with a running period of 1 s, and the client's queries per second (QPS) rate and burst size are set to 5 and 10, respectively. We tried altering the setup to ten workers with a 500 ms running period, setting QPS and burst to 1,000,000 each. With these settings, it takes just 17 s to fully create 128 tenants, as seen in Fig. 13a. The same figure shows that EdgeNet can handle simultaneous requests if a cluster welcomes around 1,000 tenants. The time it takes to establish all tenants eventually converges towards two minutes for both settings, thereby satisfying the success criterion that we described at the beginning of Sec. 6.1. However, we noticed that it surpasses two minutes when there are more than 1,280 simultaneous requests. We presume that this may be due to client or control plane saturation resulting in the API server receiving delayed requests, which we need to investigate further. Fig. 13b shows that EdgeNet with default settings can scale up to 10,000 tenants when the inter-arrival time is set to 4 s, but it takes more than ten hours in total.

Comparison
Our findings on tenant creation at least hint that the better isolation provided by the multi-instance approach comes at the cost of a performance loss. What can be clearly seen is that EdgeNet surpasses VirtualCluster in scalability and speed.
The peak number of tenants in a cluster is 10,000 for EdgeNet but around 40 for VirtualCluster, even with longer inter-arrival times. VirtualCluster offers a separate control plane per tenant, meaning an increase in base resource consumption, which is one of its major limitations. In contrast, EdgeNet can scale up to the cluster namespace threshold thanks to the native approach discussed in Sec. 3.1.
Scalability is only one aspect of evaluating a framework's performance, especially for edge-specific workloads; speed, stability, and overall reliability are also important. EdgeNet is considerably faster than VirtualCluster at tenant establishment for all inter-arrival times. Fig. 13a shows how tuning the number of workers, the running period, QPS, and burst can further improve EdgeNet's performance. Furthermore, when arrivals are not simultaneous, EdgeNet handles each request in milliseconds, whereas VirtualCluster takes seconds, even minutes. Speed is an important contributing factor to establishing many tenants concurrently or in sequence, but stability and reliability are also critical.
VirtualCluster cannot adequately address simultaneous requests, or requests with a short inter-arrival time, even when there are not many of them. Because of this issue, we observe a marked fall in the success rate of tenant establishment in such cases. We speculate that an implementation issue might be provoking resource starvation in the tenant control planes. The time it takes to finish establishing all tenants is significantly more deterministic for EdgeNet than for VirtualCluster; EdgeNet exhibits almost no variation, irrespective of whether 128 or 2,560 tenants are being created. However, EdgeNet's performance is tied to the control plane capacity as well: when many requests with little time between arrivals oversaturate the control plane, it has difficulty establishing all tenants properly. Nonetheless, EdgeNet can process 1,000 simultaneous requests while allowing tenants ten namespaces each, as discussed above.
The multi-instance approach limits VirtualCluster's scalability, since base resource consumption increases as the number of tenants grows; providing one control plane per tenant costs about 285 MB of memory each, a large memory footprint.
Max number represents the maximum number of successfully established tenants that can stably be reached with respect to the resources allocated for tenant creation.
The time it takes to establish a tenant is for four simultaneous requests. Per-tenant overhead refers to the fixed proportion of resources each tenant consumes on average, regardless of activity. VC consumption was measured using the pods that deliver control planes to tenants, and RKE consumption was measured through the containers that provide clusters to tenants. Traditional VM-based overhead is not included for RKE.

Comparison
In VirtualCluster, the syncer is an intermediate layer between the supercluster and the tenant control planes, syncing pod objects. The disadvantage of this approach is that every pod operation introduces synchronization overhead, on both the supercluster and tenant sides. We should emphasize that every synchronization process delays a pod's becoming up and running. This may raise concerns about running VirtualCluster at scale; these can mostly be overcome by providing more computing resources to the framework, but at higher cost. In contrast, EdgeNet allows tenants to make direct use of the same control plane to create pods; its performance is directly tied to the capabilities of that control plane. Thus, EdgeNet produces superior results: VirtualCluster takes at least three times as long as EdgeNet to create 1,250 pods, 2,500 pods, 5,000 pods, and 10,000 pods, separately. Table 4 shows how far VirtualCluster's synchronization of objects between the supercluster and tenant control planes causes significant delays while achieving better isolation.

Future Work
Although the work presented in this paper goes a long way toward establishing a Kubernetes multitenancy framework that is suitable for the edge cloud, there is still considerable room for improvement. We describe areas for future work below.
Resource Quota Optimization. We plan to develop an optimization algorithm that distributes, in a best-effort fashion, underutilized tenant resource quotas among the tenants that consume all of their quotas, and surplus subnamespace resource quotas among those subnamespaces in the same tenant that hit their quotas.
Sub-node-level VIP Slicing. In order for tenants to receive guaranteed access to resources that are both available and dedicated to them, node-level slicing is currently the only option. By adding a new point to the slice spectrum, it will be possible to do so at sub-node-level granularity. We will deploy a pod that consumes almost no real resources on a node in order to ensure that resources are secured; priority classes will enable the reservation mechanism for pods.
Storage. Sharing storage among containers securely at the edge is a challenge due to the security issues discussed in the Rationale section (see Sec. 2.2). We plan to develop an agent that runs on every node and is ready, upon tenant demand, to prepare a disk partition that the tenant can use as a storage volume for its Kata containers.
Security. We plan to encrypt each tenant's data separately, across its namespaces and cluster-scoped resources. In this way, even if a tenant's data leaks, another tenant will not be able to read it.

Customization. Tenants cannot currently create cluster-scoped resources independently. We plan to develop a namespace-scoped custom resource that allows users to dynamically create cluster-scoped resources. This entity will use the namespace name as a prefix when generating cluster-scoped resource names, in order to avoid collisions.
Subnamespaces. A user may want to attach labels to subnamespaces. There is a risk, however, of a malicious actor breaching another tenant's network policies if labels are defined independently. For example, one can launch a brute-force attack to correctly guess the namespace labels used in a tenant's network policies. We plan to solve this issue by using the name of the subsidiary namespace as a prefix; inheritance will then allow labels to be passed down from parent to child.
Container Isolation. Based on the reasons outlined at the end of our discussion of lightweight hardware virtualization (see Sec. 4.2), we will use a specific experimental setup to assess how Kata, gVisor, and runC perform. We will examine a setup in which Kata and gVisor run on a physical server while runC runs on a virtual machine created on that server.
Federation. Containerized workloads need to move between edge clouds and clouds seamlessly, without any user intervention. By leveraging local authorities such as hierarchical federation managers, we aim to address issues of clusters trusting one another. We will develop a thoroughgoing federation architecture and will implement a fully developed federation framework that works in concert with our multitenancy framework. Once the implementation is done, we will assess its performance.
Isolation Daemon. Kubernetes garbage collection removes unused images. However, our slicing feature provides on-demand node-level isolation, so we need to instantly clean a node of multitenant pods and container images. We are also considering clearing up iptables rules during this process. An isolation daemon that runs on each node will be developed to fulfill these operations.
Additional Experiments. Due to time and resource constraints, we compare a limited number of systems in the benchmarking section, and some variables of interest could not be studied. For the generalizability of our findings, we will conduct additional measurements addressing these two limitations.

Conclusion
We have presented EdgeNet, a Kubernetes-based multitenancy framework for Containers as a Service (CaaS) that, because it is native, i.e., serves all tenants through a single control plane and a single data plane per cluster, is a more efficient alternative to the current multi-instance manner in which cloud providers offer CaaS. Our benchmarking results demonstrated good scalability and response times for EdgeNet as compared to a leading multi-instance alternative. Though, in our framework, tenants are not isolated into separate control planes, their containers nonetheless receive the high level of isolation that is provided by Kata containers. For edge computing to succeed, we believe that security and isolation must be handled natively in software, so that workloads can be moved between distant clusters within short delays.
There are, of course, still many questions to be answered. What are the best ways to establish a robust CaaS federation composed of ubiquitous clusters offered by numerous providers? What trust mechanisms must be in place for clusters to join and leave such a federation seamlessly and securely? How can users get reliable and transparent billing in such an environment?
Anyone may avail themselves of our liberally-licensed, free, open-source code to enable multitenancy in a Kubernetes cluster. It is already in production use in the EdgeNet edge cloud testbed, for which the tenants are research groups around the world. It is particularly suited to edge clouds, where resources are limited, as well as to the cloud. Because of its federation features, we see this framework as paving the way for tenants to deploy their services across edge clouds operated by many different operators worldwide.

Acknowledgements
EdgeNet got its start thanks to an NSF EAGER grant, and now benefits from VMware Academic Program grants via CAF America and the Fondation Sorbonne Université, as well as a French Ministry of Armed Forces cybersecurity grant.

Figure 3 :
Figure 3: Customization Approach: Hierarchical versus Flat Namespaces. The same seven namespaces organized into a hierarchy (left) and without a hierarchy (right), in each case under a root environment r, which is not itself a namespace.
EdgeNet example. The quota allocated to a sub-tree must be divided between a portion reserved for the namespace at which the sub-tree is rooted and the portions allocated to each of its sub-namespaces. HNC example. In the same hierarchy as sub-trees that are constrained by quotas, it is possible to have sub-trees that are not constrained in this way.

Figure 4 :
Figure 4: Hierarchical allocation of resource quotas. Examples of a quota of 100 being divided up among the sub-trees of a hierarchical namespace rooted at r. The tenant of the sub-tree rooted at a has been allocated a quota of 60, from which it reserves 20 for its core namespace and allocates 25 and 15 to the sub-trees rooted at aa and ab, respectively.
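The quota arithmetic of Figure 4 can be expressed as a simple invariant: a sub-tree's quota must cover the portion reserved for its own core namespace plus the quotas delegated to its child sub-trees. The following minimal sketch illustrates that check; the function name and data layout are illustrative assumptions, not EdgeNet's actual implementation.

```python
# Sketch of the hierarchical quota invariant from Figure 4.
# Illustrative only; names and structure are hypothetical.

def valid_allocation(quota, reserved, children):
    """True if the reserved portion plus all child quotas fit within quota."""
    return reserved + sum(children.values()) <= quota

# The tenant rooted at 'a' holds a quota of 60, reserves 20 for its core
# namespace, and allocates 25 and 15 to sub-trees 'aa' and 'ab'.
print(valid_allocation(60, 20, {"aa": 25, "ab": 15}))  # True: 20 + 25 + 15 = 60
print(valid_allocation(60, 30, {"aa": 25, "ab": 15}))  # False: 70 > 60
```

The same check would be applied recursively at every level of the hierarchy, e.g., at the root r for its quota of 100.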

Figure 5 :
Figure 5: Node and slice granularities. Dashed vertical lines indicate how a cluster's resources are sliced so as to make those resources available to tenants. A node in a cluster can be a physical server (left illustrations) or a VM (right illustrations), presented as node granularities. Slicing can be performed so as to make an entire node available to a tenant (top illustrations) or so as to make a subset of a node's resources available to a tenant (bottom illustrations). Different node and slice granularities can coexist within a cluster (e.g., the scenarios shown in all four illustrations could appear simultaneously in a single cluster). Our EdgeNet multitenancy framework automates the process of varying the slice granularity, allowing a node to be reserved for a tenant, or returning a reserved node to the pool of nodes available to be subdivided.

Figure 6 :
Figure 6: Methods for isolating workloads. The dashed vertical lines distinguish one tenant from another. Each thick blue horizontal line designates a node.

Figure 10 :
Figure 10: Consumer and Vendor Tenancy in EdgeNet, showing workspace (w) and subtenant (s) modes. EdgeNet uses its hierarchical namespace structure to build consumer and vendor tenancy. In this example, the namespace a belongs to a consumer tenant and the namespace b belongs to a vendor tenant that is reselling Containers as a Service to its own customers.

(a) Number of successfully established tenants. Symbols represent frameworks, and each inter-arrival time is colored.

Table 2 :
Comparison of related work (open-source Kubernetes multitenancy frameworks)

Node-level slicing of servers.
An entire node that is a physical server is made available to a tenant.

Node-level slicing of VMs.
An entire node that is a VM is made available to a tenant.

Sub-node-level slicing of servers.
A subset of the resources of a node that is a physical server is made available to a tenant.

Sub-node-level slicing of VMs.
A subset of the resources of a node that is a VM is made available to a tenant.
Namespace Hierarchy in EdgeNet. EdgeNet's multitenancy framework provides two principal controllers for managing its namespace hierarchy. The Tenant controller creates, updates, and deletes the tenant core namespaces at the top level of the hierarchy, while the Subsidiary Namespace controller handles all namespaces further down in the hierarchy, acting on the subtenants.
40 Kubernetes documentation: Object Names and IDs; UIDs https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
41 ClusterID API discussion in the Kubernetes Architecture SIG mailing list https://groups.google.com/g/kubernetes-sig-architecture/c/mVGobfD4TpY/m/uEjVVsinAAAJ
The longer the time between arrivals, the higher the number of successfully created tenants. VirtualCluster (VC) stably manages to create around 40 tenants at most when the inter-arrival time is set to 32 s, while EdgeNet reaches 10,000 with an inter-arrival time of 4 s.
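The division of labor between the two controllers can be sketched as follows. This is a simplified, in-memory model of the reconciliation logic, with hypothetical names; it is not EdgeNet's actual code, which operates on Kubernetes namespace objects.

```python
# Illustrative sketch: a Tenant controller manages top-level core
# namespaces, while a Subsidiary Namespace controller manages namespaces
# deeper in the hierarchy. Simplified model; names are hypothetical.

class NamespaceHierarchy:
    def __init__(self):
        self.namespaces = {}  # name -> parent name (None for core namespaces)

    # Tenant controller: create or delete a tenant's core namespace.
    def reconcile_tenant(self, tenant, present=True):
        if present:
            self.namespaces.setdefault(tenant, None)
            return
        # Deletion cascades to every namespace beneath the core namespace.
        doomed, changed = {tenant}, True
        while changed:
            changed = False
            for ns, parent in list(self.namespaces.items()):
                if parent in doomed and ns not in doomed:
                    doomed.add(ns)
                    changed = True
        for ns in doomed:
            self.namespaces.pop(ns, None)

    # Subsidiary Namespace controller: attach a namespace under a parent.
    def reconcile_subnamespace(self, name, parent):
        if parent not in self.namespaces:
            raise ValueError(f"parent namespace {parent!r} does not exist")
        self.namespaces[name] = parent

h = NamespaceHierarchy()
h.reconcile_tenant("a")                 # core namespace for tenant a
h.reconcile_subnamespace("aa", "a")     # subsidiary namespaces under a
h.reconcile_subnamespace("ab", "a")
h.reconcile_tenant("a", present=False)  # cascading deletion
print(h.namespaces)  # {}
```

The cascading deletion mirrors the fact that a subsidiary namespace has no standing of its own once its tenant's core namespace is removed.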

Table 3 :
Quick comparison of native and multi-instance approaches

Table 4 :
Time in seconds, median values, to create a representation of a pod as an object in the host control plane. The number of tenants used for the experiments is set to 32 for both VirtualCluster and EdgeNet.