Proximity-Based Maritime Internet of Things: A Service-Centric Design

The Internet of Things (IoT) is all about services. The proximity-based IoT service is a type of service that becomes available once an IoT object comes near other objects. This type of service typically involves proximity machine-type communication (MTC) that allows direct communication between objects in a spatiotemporal context on an ad-hoc basis. The IoT world has witnessed increasing cases of such applications in recent years, but proximity MTC has not received sufficient attention in the IoT community. As such, this paper presents the unique characteristics and requirements of proximity MTC, focusing on differentiating proximity MTC from infrastructure-based wide-area MTC and service-centric networking from host-centric networking. Specifically, the paper utilizes an emerging application in maritime IoT, maritime autonomous surface shipping (MASS), as an example and offers an insightful and rigorous examination of the legacy maritime communications technology, pointing out the pitfalls to avoid in the development and standardization of proximity MTC in light of the recent spectrum assignment by ITU. A comprehensive service-centric solution is presented to address the requirements and pitfalls, highlighting its significance and relevance to virtual sensing in applications like MASS. The paper thoroughly describes the network architecture and how different network components and layer protocols fit together to deliver desired functionalities to meet the service-centric requirements through a concrete design.

The Internet of Things (IoT) describes the network of physical objects or things; it is essentially a thing-to-thing network. These things range from ordinary household objects to sophisticated industrial equipment, from something as small as a medical implant to something as big as a cruise ship. Regardless, they are all referred to as devices in this paper.
Pretty much any physical object can be transformed into an IoT device if it can be connected to a network to communicate information. To this end, each device is typically embedded with sensors/actuators, a piece of software, and a communication module to connect and exchange data with other devices and systems over communication networks.
The associate editor coordinating the review of this manuscript and approving it for publication was Derek Abbott.
Since IoT is a thing-to-thing network, the communication network is naturally a device-to-device or machine-to-machine communication network, better known as a machine-type communication (MTC) network, for high-mobility interconnection and high-capacity access to IoT services. The software consists of prebuilt applications that add a level of digital intelligence to things that would otherwise be ''dumb'': it enables them to communicate real-time data over a communication network (typically a wireless network) to a fusion center or a server, where the data are analyzed and transformed into information that may help a system make the best decisions in perspective and in real time without the intervention of a human being. The notion of ''real time'' depends on the latency requirement of the corresponding application or service. Under this concept, the Internet of Things essentially merges the digital and physical universes and makes the fabric of the world around us more intelligent and responsive.
First, let us briefly review the basic types of IoT applications and the fundamentals of the associated communications and networking schemes or technologies.
Indeed, IoT is all about services. The ability of IoT to share information is driving a broad set of applications that create a plethora of services. These services are executed by the client application running on a host or connected device, known as the service client, which has a specific and narrow function and relies on a communication network to communicate with the relevant parties to fulfill that function. In other words, a service client uses a client application to perform specific tasks.
In most IoT scenarios, the relevant party is the server application that provides a service upon request, whereas in some cases, the relevant parties are client-application peers seeking information or collaboration on a particular task. Accordingly, in the broadest sense, IoT applications can be categorized into client-server and client-client applications.

A. CLIENT-SERVER IoT APPLICATIONS
For this type of IoT application, the relevant party is the server application hosted by a server (that runs the server application) on the Internet or the IoT cloud. A server application waits for requests from client applications and responds to them, thus providing a service upon request. For these applications, the server location or the actual physical distance between the communicating peers has no direct relevance, and the server is most likely at a virtual network location. In practical implementations, specialized-subject applications and databases run on servers at multiple locations known as clouds, are tracked by a directory of servers at one location, and are made accessible over the Internet to service clients with client applications. Server selection is done by the infrastructure network or a proxy and is determined by factors such as server loading and network traffic balance, subject to the service latency requirement.
The client-server IoT application works from the client's side, accessing the remote server of a service provider for its necessary tasks and requirements. This communication model represents a relationship between cooperating programs in an application, composed of clients initiating service requests and servers providing the services. A client and the server exchange information in a request-response messaging pattern, typically over a wireless wide-area network based on a cellular infrastructure (e.g., the NB-IoT network [1]). Typical applications involve a service client sending a request to a server at a default URL via a client application; the server receives the request, processes it, and returns a response through a server application. These client applications execute remote procedures and functions in an application server instance.
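The request-response pattern described above can be illustrated with a minimal sketch. Note that the resource names, status codes, and message layout here are purely illustrative and not tied to any specific IoT platform or protocol:

```python
def handle_request(request: dict, database: dict) -> dict:
    """Server side: look up the requested resource and return a response."""
    key = request.get("resource")
    if key in database:
        return {"status": 200, "body": database[key]}
    return {"status": 404, "body": None}

# Client side: form a request, "send" it, and read the response.
# In a real deployment, the request would travel over the wide-area
# MTC network (e.g., NB-IoT) to a server application in the IoT cloud.
db = {"ship/position": {"lat": 35.1, "lon": 129.0}}
request = {"resource": "ship/position"}
response = handle_request(request, db)
```

The point is the asymmetry: the server passively waits and answers, while the client initiates; the physical location of the server is irrelevant to the client.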
It is evident that the key to this type of IoT application is the need for a wide-area MTC network to support ubiquitous connectivity between the clients and the server. MTC communications denote the broad area of communication between terminals that are capable of transmitting and receiving data.
The wide-area MTC network is a telecommunications network that extends over a large geographic area, often established with an infrastructure network for providing communication channels between wireless clients and the wired network (e.g., the Internet) and, ultimately, the IoT cloud, where the servers reside for potentially real-time operations and processing. The infrastructure-based MTC network consists of a plurality of control stations (such as the base stations used in traditional cellular networks) serving as relays (or routers) deployed over a wide geographic area and interconnected with the IoT cloud via a fixed transport network (e.g., a fiber-optic network) called the backhaul network. The last-mile coverage is provided wirelessly by the control stations to IoT devices via MTC terminals to increase device mobility and negate the need for cables or wires, reducing deployment costs and facilitating the transition of data from the wireless to the wired medium. The wireless coverage provided by a control station forms a star topology called a radio cell, or simply a cell. The MTC network consisting of these cells is thus called the cellular MTC network. As illustrated in Figure 1, through the control stations, data are collected wirelessly (via the MTC terminals) from the IoT devices and relayed to the central facility (the server) via a backhaul network (not shown in Figure 1). Or, vice versa, a device may request from the server information necessary to fulfill a task. In either case, an MTC terminal is an installation of a wireless communication module on an IoT device or a control station, enabling communication between an IoT device and a control station and, ultimately, message exchange between the IoT application endpoints (server and client).
Among wide-area MTC technologies, NB-IoT is the state-of-the-art 4G-LTE/5G technology from 3GPP, alongside proprietary solutions such as the arguably most popular LoRa and Sigfox [2]; all depend on a cellular infrastructure. NB-IoT inherits the well-established 4G-LTE infrastructure and reuses a subset of the LTE standard but limits its bandwidth to a single narrowband from 3.75 to 180 kHz per channel on the licensed spectrum. On the other hand, LoRa operates on the unlicensed spectrum with a 125-kHz channel bandwidth and is poised to deliver low cost and high energy efficiency, mainly through a proprietary ''black technology,'' coined LoRa Modulation, boasting a continuous phase and a constant amplitude profile, optimal for the RF power amplifier. In either case, control stations (i.e., LTE eNodeBs or LoRa gateways) are deployed over a wide area as relays between the IoT cloud network and the terminals.
Most notably, new maritime applications have emerged in recent years with the advent of maritime IoT, driven by the ever-increasing demand for modernizing maritime services from maritime-related businesses and from maritime safety and traffic management. Such applications include ship operations, vessel traffic services, vessel and container tracking, environment monitoring and protection, search and rescue, salvage and intervention, and reporting time-sensitive maritime safety information (such as distress alerts in an emergency) to improve situational awareness and navigational safety. In response, the International Maritime Organization (IMO) and the International Association of Marine Aids to Navigation and Lighthouse Authorities (IALA) put forward a VHF Data Exchange System (VDES) initiative intending to create a maritime MTC solution for maritime IoT [3], [4]. International organizations (governmental and nongovernmental) in liaison with IMO also participated in the work. Unlike NB-IoT and LoRa, which target terrestrial IoT applications, a maritime MTC system like VDES is dedicated to maritime IoT applications deployed in unique maritime environments, providing solutions to the maritime-specific requirements that terrestrial-based MTC systems cannot cover. The VDES concept is structured to include two wide-area wireless network components, VDE-TER and VDE-SAT, operating on a managed infrastructure encompassing control stations deployed worldwide and interconnected to the terrestrial IP network or Internet. Specifically, VDE-TER is for shore-station-based terrestrial access to support dense maritime traffic near shore, and VDE-SAT is for satellite-station-based access to provide communication beyond the line of sight of shore and on the high seas via low-earth-orbit (LEO) satellites. Together, they form a space-earth integrated maritime MTC system for worldwide coverage. Since then, related work has been undertaken by IALA, IMO, ITU, and other organizations based on this proposal [5], [6], [7], [8], [9].
Without loss of generality, throughout this paper we use maritime IoT as the design paradigm, since maritime IoT has arguably become the most influential and relevant technology in the IoT community due to the current state of global maritime affairs in international shipping, an industry responsible for moving about 90 percent of global trade. The role of maritime transport has become more important than ever in supporting the resilience of global economies during the current crisis. ITU has assigned radio spectrum dedicated to maritime IoT on a global scale, officially approved at the World Radiocommunication Conference (WRC) [10], laying the foundation for global maritime MTC.
Nevertheless, unlike NB-IoT and LoRa for terrestrial IoT, on which a multitude of studies has been reported in the literature (such as [1], [2], [11], [12], [13], [14], [15], [16], [17], just to name a few), so far only a handful of studies are available on maritime MTC and IoT networking technologies, although there are some excellent papers on VDES systems. For example, the VDE-SAT and TER link waveforms are studied in [18] and [19]. VDES channel models and the statistical results are presented in [20]. The measurement and performance evaluations of the VDES network are discussed in [21], [22], and [23]. The use cases of VDES and research directions are touched upon in [24] and [25]. In [26] and [27], the potential interference of the VDES satellite downlink to terrestrial communication systems is assessed, and [28], [29], [30], [31] present the first VDE-SAT trial system, NorSat-2, currently being developed by the Space Flight Laboratory and the Norwegian Space Center.
Unlike the above literature, which concentrates on the wide-area IoT for client-server applications, this paper focuses on the proximity-based IoT for client-client applications, with an important example use case in maritime autonomous surface shipping (MASS). Their differences are the topic of the next section.

B. CLIENT-CLIENT IoT APPLICATIONS
Cellular infrastructure-based wide-area MTC is ideal for client-server applications, under which an MTC terminal is essentially not concerned with its environment, in the sense that other terminals in proximity are not of particular interest, i.e., irrelevant to the current application or service. However, there exists another type of IoT application, the proximity-based IoT application, a type of service that becomes available once an IoT device comes near other devices. In this case, the relevant parties of a client application are service clients seeking information or collaboration on a particular task. Such proximity-based applications, in which only the service clients in the near vicinity are of interest, require direct communication between clients in a spatiotemporal context over a proximity MTC network on an ad-hoc basis without human involvement.
Typically, such applications in the maritime domain include aids to navigation (AtoN), collision avoidance, route exchange, autonomous surface shipping, and coordinated operations at sea. In these scenarios, devices in physical proximity are of the most interest: the closer they get, the more relevant they become. In this regard, wide-area networks are less applicable or adequate, whereas proximity networks, i.e., local-area ad hoc networks enabling direct client-to-client communication, are more relevant and appropriate. In wireless communications, the signal strength falls off quickly with propagation distance. The closer they are, the more likely terminals can hear each other. The wireless MTC technology for direct communication between close-by clients implements this ''distance-of-interest'' characteristic naturally for proximity-based IoT communication, as illustrated in Figure 2, and is henceforth referred to as proximity MTC networking, the central topic of this paper.
A typical proximity-based client-client IoT application session runs independently of the application server and operates without the benefit of an existing fixed wide-area MTC infrastructure, including centralized resource and interference management. Therefore, no traffic of such applications goes across the wide-area network; the parties involved do not need to be connected to a wide-area MTC network. Under wide-area MTC networking, a client-server IoT application only needs to communicate with the control station as a relay. However, under proximity MTC networking, a client-client proximity-based IoT application often needs to establish, by itself, a local-area self-configuring wireless network consisting of terminals of relevance, i.e., the nearby devices hosting the same application: a non-trivial discovery (broadcast) and connect (unicast or multicast if necessary) process, especially in a highly dynamic environment characterized by continuous alterations of network topology. The union of such wireless connections forms an arbitrary topology. Therefore, an MTC terminal must do much more than under the cellular infrastructure-based wide-area network with a star topology and centralized access control. Consequently, a suite of well-structured, unique operational protocols must be present not only in the lower layers but particularly in the network layer to deal with the more challenging proximity-networking-specific issues, such as proximity service provision, discovery, addressing, and medium access control. This is mainly because, under an infrastructure-based network, a terminal needs only to communicate with the control station, with everything conducted under the station's full supervision.
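The discovery (broadcast) step described above can be sketched in a few lines. This is a hedged illustration only: the message fields, the service identifier, and the HELLO message type are hypothetical, not taken from any standardized proximity MTC protocol.

```python
import json

SERVICE_ID = "route-exchange"   # hypothetical service identifier

def make_hello(terminal_id: str) -> bytes:
    """Broadcast a HELLO advertising the service this terminal hosts."""
    return json.dumps({"type": "HELLO",
                       "id": terminal_id,
                       "service": SERVICE_ID}).encode()

def handle_hello(raw: bytes, neighbors: dict) -> None:
    """On receiving a HELLO for the same service, record the neighbor.

    A later connect phase would unicast (or multicast) to the peers
    collected here; entries would also age out as topology changes.
    """
    msg = json.loads(raw)
    if msg.get("type") == "HELLO" and msg.get("service") == SERVICE_ID:
        neighbors[msg["id"]] = msg
```

The key design point, matching the discussion above, is that the terminal itself maintains the neighbor set; there is no control station to do it on its behalf.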
A typical example of such a communication system is the legacy Automatic Identification System (AIS) [32], [33], [34], [35], which uses two VHF channels to provide vessel identification and position reporting, introduced by the International Telecommunication Union (ITU) mainly for collision avoidance during a ship's voyage. Through AIS, along with other shipboard electronic navigation sensors (e.g., gyrocompass or rate-of-turn indicator), a ship automatically communicates its information (such as position and speed) to other nearby ships. Under this scenario, AIS can be deemed a primitive proximity MTC technology. Unfortunately, proximity MTC has not received sufficient attention compared to its counterpart (i.e., wide-area MTC) and represents only a considerably small portion of the wireless communications industry. Very little of the available literature studies the topics unique to proximity MTC networking; almost none explicitly addresses the unique issues at the network, data link, and physical levels, let alone provides service-centric solutions. Most recently, however, this type of MTC has become increasingly important as maritime autonomous surface shipping (MASS) begins to emerge as a revolutionary technology poised to fundamentally change the maritime shipping industry. This presents an exciting opportunity that the shipping industry is looking to exploit, remembering that maritime surface shipping is responsible for more than ninety percent of world trade.
The main focus of this paper is thus proximity MTC network design for proximity-based client-to-client IoT applications and services, using a legacy MTC system as a reference, pointing out the lessons learned, and presenting the solutions with the rationale behind them. As outlined in Figure 3, the remainder of this paper is organized as follows: Section II describes typical proximity-based IoT services and discusses the requirements and challenges. Section III briefly reviews the framework and main properties of a legacy maritime communication technology. Section IV examines the key issues with the technology and pitfalls to avoid in future proximity MTC design. Section V outlines the general solution framework for the issues. Section VI elaborates on the key components of the framework through a concrete, comprehensive design that addresses the issues point by point and demonstrates a complete proximity communication system from a service-centric perspective in a maritime proximity IoT context. Finally, Section VII concludes the paper. The terms and acronyms used in the paper are listed in Table 1 for easy reference.

II. PROXIMITY-BASED SERVICES AND MTC
The increasing exploitation and utilization of the maritime domain are linked to the global human trend of multiplying, diversifying, and intensifying sea-related uses and activities. This trend is especially apparent with the increase in traffic linked to maritime shipping, traditional sea-fishing, extended aquaculture, oil and gas exploration and drilling, the development of leisure and boating activities, a wide variety of ecosystems, and cultural heritage conservation initiatives, as well as the introduction of marine renewable energies. These different activities create a complex set of interactions that can lead to navigational risks and environmental conflicts of use. The multiple pressures on navigation safety and the need to harmonize sea uses require an integrated approach to planning and managing these activities. In this section, we briefly describe proximity-based services that have sparked the development of IoT applications and the need for proximity MTC, focusing on maritime applications for the reason stated earlier.

A. PROXIMITY-BASED SERVICES
As aforementioned, the proximity-based IoT service is a type of service that becomes available once an IoT device physically comes near other devices. The definition of ''proximity'' depends on the application. For maritime IoT, the service becomes relevant when a ship comes close to another ship or aids-to-navigation (AtoN) devices (e.g., buoys, beacons, and lighthouses) near the closest point of approach (CPA). This type of service typically involves proximity MTC networking that allows direct communication between neighboring IoT devices with minimal or no human intervention. Specifically, proximity MTC is a special type of MTC in that the target MTC terminal or destination of the communication is in proximity. In this sense, it belongs to local-area ad hoc networking between neighboring IoT devices. It is local because it is for proximity-based services; it is also ad hoc because it does not rely on a pre-existing infrastructure and is established for one specific purpose in an impromptu, self-organized way and torn down once the purpose is served or the ''proximity'' condition is no longer met.
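The establish/tear-down lifecycle just described can be sketched as a small state machine. The class and the nautical-mile threshold are illustrative assumptions; the actual "proximity" condition is defined by the application:

```python
class ProximityService:
    """Sketch of an ad hoc proximity service session: it exists only
    while the application-defined proximity condition holds, and is
    torn down once that condition is no longer met."""

    def __init__(self, threshold_nm: float):
        self.threshold_nm = threshold_nm  # application-defined "proximity"
        self.active = False

    def update(self, distance_nm: float) -> bool:
        """Call whenever the distance to the peer is re-estimated."""
        if distance_nm <= self.threshold_nm:
            self.active = True    # establish the impromptu session
        else:
            self.active = False   # purpose served or peer out of range
        return self.active
```

For example, with a 0.5-nm threshold, a peer at 0.3 nm activates the service and a peer receding to 1.0 nm tears it down, with no infrastructure involved.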
Proximity-based maritime IoT applications or services vary from AtoN to route exchange and MASS, and they remain in flux due to the ongoing maritime IoT evolution. AtoN, under this IoT concept, is extended beyond its original meaning (i.e., a marker). It employs devices equipped with proximity MTC terminals that utilize radio signals to provide precision piloting to passing vessels, assist them in determining their positions or a safe course, or warn them of dangers or obstructions to navigation in areas such as dangerous coastlines and channels and hazardous shoals and reefs.
In some applications, a vessel may generate a cooperative perception message and send it to the AtoN. The collected messages are then fused at the AtoN and integrated into a local dynamic map message transmitted to the surrounding vessels to help them better perceive the surrounding environment.
AtoN IoT thus increases environmental awareness, extends human visual perception, and renders navigation less prone to weather and human errors. With the benefit of such an AtoN network, the maritime safety information services provide a vessel, throughout its voyage, with up-to-date navigational warnings, meteorological forecasts, and hydrographic services, among other safety-related information.
In addition to AtoN, navigation safety depends on safety messages broadcast in the vessels' vicinity. A vessel captures the surrounding navigational information, analyzes the resulting navigation safety data, and disseminates vital information to its immediate environment and, if necessary, the navigation community. In general, there are two types of safety messages. Environmental notification messages are event-triggered upon, e.g., detection of an abnormal traffic condition (e.g., an accident) or other hazards, to alert nearby vessels of the acute situation. Cooperative awareness messages are periodically exchanged between the vessels in the neighborhood on a ship's voyage to create and maintain awareness of each other. The status information includes position, motion state, activated systems, etc.; the attributes include vessel type, tonnage, and dimensions.
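The two message types above differ in trigger (event vs. periodic) and content (hazard vs. status/attributes). A minimal sketch of the distinction, with field names that are illustrative rather than drawn from any standardized message set:

```python
from dataclasses import dataclass

@dataclass
class CooperativeAwarenessMessage:
    """Periodic broadcast of status and attributes (illustrative fields)."""
    vessel_id: str
    position: tuple        # (lat, lon)
    speed_kn: float        # motion state
    heading_deg: float
    vessel_type: str       # attribute: type, tonnage, dimensions, ...

@dataclass
class EnvironmentalNotificationMessage:
    """Event-triggered alert about an acute hazard (illustrative fields)."""
    reporter_id: str
    position: tuple
    hazard: str            # e.g., "accident", "obstruction"
```

A cooperative awareness message would be rebroadcast on a timer, whereas an environmental notification is generated only when a hazard is detected.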
In tightly coordinated operations at sea, e.g., sea-fishing, extended aquaculture, oil and gas exploration and drilling, and sea-based constructions (offshore oil rigs, cross-sea bridges), a number of vessels work closely in a team, where vessels must stay in close communication with each other and act in sync to support cooperative performance and ensure a safe and smooth collaborative operation. Knowing what conflicting or synergistic interactions exist and unfold between activities in a spatiotemporal context is crucial in this scenario.
Automated route exchange allows a ship to maneuver safely among other moving ships. Under this application, a ship is informed of other ships' positions and intentions in the vicinity, foresees possible dangerous situations, and reduces route detours due to traffic conditions. Ships in close proximity (e.g., near the CPA of 0.5 nautical miles) maintain situational awareness and coordinate and optimize their routes autonomously so that close-quarter situations can be predicted and avoided early, especially when navigating a shipping lane and in poor weather.
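The CPA check that underlies such coordination follows from the standard constant-velocity approximation: minimize the squared relative distance over time. The sketch below assumes a flat-sea coordinate frame (positions in nautical miles, velocities in knots) and is not taken from any navigation standard:

```python
import math

def cpa(rel_x, rel_y, rel_vx, rel_vy):
    """Return (time to CPA in hours, distance at CPA in nm) for a
    neighbor at relative position (rel_x, rel_y) moving with relative
    velocity (rel_vx, rel_vy), under constant-velocity motion."""
    v2 = rel_vx**2 + rel_vy**2
    if v2 == 0.0:                       # no relative motion: range is constant
        return 0.0, math.hypot(rel_x, rel_y)
    tcpa = -(rel_x * rel_vx + rel_y * rel_vy) / v2
    tcpa = max(tcpa, 0.0)               # CPA already passed: use current range
    dcpa = math.hypot(rel_x + rel_vx * tcpa, rel_y + rel_vy * tcpa)
    return tcpa, dcpa
```

A ship would flag a close-quarter situation when the predicted distance at CPA falls below a threshold such as the 0.5-nm figure mentioned above, and initiate route coordination well before the CPA time.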
On the other front, as aforementioned, international shipping is responsible for more than 90 percent of world trade. This reality has motivated the maritime industry to move quickly toward MASS to reduce operational costs and minimize human factors with the advent of IoT. In fact, IMO has recently completed a regulatory scoping exercise on MASS [36], [37]. Further progress was made at the Maritime Safety Committee on developing a goal-based instrument regulating the operation of MASS, targeting a mandatory goal-based MASS Code in 2028.
Under the framework introduced in this paper, in a fully autonomous MASS scenario, the ship exchanges information with surrounding ships and AtoN devices and integrates it with vast amounts of real-time data coming from other heterogeneous sources, such as radar, lidar, IMU (i.e., inertial measurement unit), strain gauges, high-definition infrared cameras, and a variety of sensors. It employs technologies like data fusion, machine learning, and artificial intelligence (AI) to harness and leverage the full potential of real-time data, turning them into trusted and actionable intelligence for making data-driven decisions. The ship is essentially a watercraft piloted by AI and can operate independently of human interaction. Conceivably, proximity MTC plays a key role as the eyes and ears of a MASS through cooperative perception of the shipping environment and sharing of locally sensed information among vessels in proximity. More details on MASS and proximity-based IoT services are given in Section VI, where we use MASS as a specific application in a concrete design to elaborate on proximity MTC networking.

B. PROXIMITY MTC
The importance of the proximity-based IoT concept may be acknowledged, but the task of applying it in the real world remains delicate. First, for this type of proximity-based client-client service network, physical proximity is a determining factor in terms of whom a terminal interacts with. Naturally, nearby terminals are more likely to become relevant than those farther away. This association or discovery of relevant terminals can be made naturally and more efficiently at the terminal than on the infrastructure side. The cellular networking structure is thus inherently not the best choice, under which a terminal only communicates with the control station (the relay), which may not necessarily have direct knowledge of the immediate surrounding environment the terminal is experiencing. Moreover, a terminal encounters new terminals as it moves from one area to another, necessitating it to modify its communication network on the fly, leading to a system that is dynamic over time and space. Therefore, a self-organized proximity network that operates without going through the infrastructure (i.e., a relay network) has more merit in terms of relevance, latency, and robustness. This unique property differentiates proximity-based client-client communication from the client-server communication that operates under the traditional infrastructure-based cellular MTC network. Table 2 summarizes the differences between wide-area MTC and proximity MTC.
A proximity MTC system is expected to offer fluidity in distributing network functions and provide amorphous services that adapt to a wide variety of proximity-based IoT application needs and match changing demands. Hence, both network configuration and communication resources must be flexible and adaptive to the offered services. In traditional maritime communication systems, components from networks, hardware, and middleware to applications and services are customized and intrinsically coupled for the specific purposes and requirements of a particular system. Consequently, service clients are often burdened with the tremendous cost of maintaining the services and receiving new services. Every new service comes with a new set of communication equipment; sharing or reusing the equipment is almost impossible, even when switching service providers for the same service. This situation presents a conundrum for a seagoing vessel, which must install all kinds of communication equipment (as we often see on a vessel) to receive various services, not to mention maintain and feed power to all that equipment. As more and more maritime IoT services emerge, this situation can only worsen. The reasons can be traced to the lack of a service-centric networking architecture and unified protocols for dealing with diverse maritime applications and heterogeneous end systems, rendering the traditional maritime communication technologies and systems no longer up to the task or adequate for this requirement. Here, service-centric MTC networking combines network-level and service-level information for efficient service identification, provisioning, discovery, and end-to-end delivery with quality and security, all transparent to the end user, i.e., the IoT application.
We assert through demonstrations that the ultimate solution is a service-centric MTC system with the sole goal of supporting a broad spectrum of maritime IoT services. This paper focuses on the proximity component of the overall maritime MTC system. Compared to the infrastructure-based maritime satellite and terrestrial components, it imposes more complexity on the mobile terminal end. But first, let us look at a legacy maritime communication system as a contrasting reference so that the design provided in Section VI can be better understood and appreciated.

III. THE LEGACY AIS
Primarily for collision avoidance and in support of vessel traffic information services, AIS, introduced by the ITU in the 1990s, has been mandated by the IMO since 2002 for all vessels over 300 gross tonnage on international voyages and remains today's most popular maritime communication system [38]. References [19], [39], [40], and [41] provide more details on AIS, but for readability and comprehension, we briefly review the AIS in this section to show the outdated design that hinders its use in emerging, diverse maritime IoT applications. Nonetheless, as a legacy internationally standardized technology, AIS will continue to exist and be protected.

A. SPECTRUM AND WAVEFORM
AIS typically operates without infrastructure under a decentralized (or distributed) contention-based medium access strategy via a sensing-based medium access scheme (i.e., self-organized time division multiple access, or SO-TDMA [41]), most effective for sharing communication resources (i.e., time and frequency) among AIS terminals in short-range self-organized ad hoc networking. In this sense, AIS can be deemed a proximity communication system, although its original intention was more than proximity-based services.
AIS runs at a channel symbol rate of 9,600 symbols per second over a 25-kHz channel. As shown in Figure 4, the international VHF maritime mobile band is channelized into 25-kHz individual subbands or frequency channels, of which Channels 2087 and 2088 are dedicated to AIS; both are operated in simplex mode (transmissions and receptions on the same channel) [39], [42]. These radio channels are primarily line-of-sight.
For this given radio spectrum, the physical multiplex resource of the legacy AIS is organized into non-overlapping (hence orthogonal) time slots, which divide a UTC minute into exactly 2,250 slots. Each slot provides 256 symbols; each symbol is a Gaussian-filtered minimum shift keying (or GMSK) pulse that carries one information bit. Here, UTC stands for Coordinated Universal Time, also known informally as Greenwich Mean Time, an international time standard by which the world regulates the clock or time. A practical realization of UTC is provided by GPS, for instance. A GPS satellite has an atomic clock synchronized to UTC.
It disseminates this (UTC) information worldwide, enabling synchronization of the time slots over the entire AIS network to ease medium access control (MAC) and interference management in a time-division multiple-access (TDMA) fashion via SO-TDMA. A terminal typically obtains UTC from the GPS signal using a GPS receiver.
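The slot arithmetic above is self-consistent, which a quick check confirms (all figures are taken directly from the text: 2,250 slots per UTC minute, 256 GMSK symbols per slot, 9,600 symbols per second):

```python
# Slot arithmetic for the legacy AIS physical multiplex.
SLOTS_PER_MINUTE = 2250
SYMBOLS_PER_SLOT = 256
SYMBOL_RATE = 9600  # symbols per second

slot_duration = SYMBOLS_PER_SLOT / SYMBOL_RATE          # seconds per slot
symbols_per_minute = SLOTS_PER_MINUTE * SYMBOLS_PER_SLOT

# One UTC minute is exactly filled: 2,250 x 256 = 9,600 x 60 = 576,000 symbols.
assert symbols_per_minute == SYMBOL_RATE * 60
print(f"slot duration: {slot_duration * 1000:.2f} ms")  # 26.67 ms
```

The exact fit is what allows every terminal to derive identical slot boundaries from the shared UTC reference.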

B. TRANSMISSION BURST STRUCTURE
An AIS transmission burst follows the structure defined in Figure 5, starting with an 8-bit ramp-up period for spectral shaping to reduce out-of-band emissions. A 24-bit preamble consisting of alternating 0s and 1s is used for burst detection and time and frequency synchronization.
The start and end of the burst payload are "marked" by the start flag and end flag, an 8-bit unique bit sequence (i.e., 01111110). This framing technique, commonly found in wired systems (e.g., Ethernet) for dividing a long bitstream into frames, is adopted by AIS (a wireless system) to indicate the transmission burst payload boundary. In particular, the start flag breaks the bit pattern of the preamble, and together they signal the start of the payload. The end flag indicates the end of the payload, which is necessary because the transmission time interval (TTI) of an AIS burst varies from one slot up to five slots, and a receiver needs to know where the burst ends. The flag sequence is disambiguated by the scheme coined "bit stuffing": inserting non-information bits (delimiters) into the information bit stream to break up specific bit patterns so that the flag never appears inside the payload. Conceivably, this scheme only works well when bit errors are rare during transmission (true for wired communication), since such an error may destroy a flag or create one prematurely.
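To make the mechanism concrete, here is a minimal sketch of HDLC-style bit stuffing: a 0 is inserted after every run of five consecutive 1s, so the six-1s pattern of the 01111110 flag can never occur in the stuffed payload. (The bit-string representation is for illustration only.)

```python
FLAG = "01111110"  # the 8-bit start/end flag (0x7E)

def stuff(bits: str) -> str:
    """Insert a non-information 0 after every run of five 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit, removed by the receiver
            run = 0
    return "".join(out)

def unstuff(bits: str) -> str:
    """Remove the stuffed 0 that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is a stuffed 0: drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

payload = "0111111011111111"          # contains the flag pattern
assert FLAG not in stuff(payload)     # pattern broken after stuffing
assert unstuff(stuff(payload)) == payload
```

Note that a single flipped bit in the stuffed stream can fabricate or destroy a flag, which is exactly the fragility the text points out for wireless channels.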
At the end of the burst is a 24-bit time buffer, of which 14 bits are reserved for propagation delay, 6 bits for synchronization jitter, and 4 bits for bit stuffing. The 14 bits (1.46 ms) protect a two-way propagation delay of up to 120 nautical miles (NM). Note that the ramp-down time for transmitter power-off is not explicitly provided in the burst structure. Instead, it is implicitly "borrowed" from the buffer period, since the overlap of the ramp-down with the ramp-up between two consecutive bursts does no harm to either burst. Therefore, if the buffer period is entirely consumed by what it is meant for, the ramp-down will "eat" into the ramp-up field of the next slot, which nevertheless does not harm the preamble of the following burst.
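The 120-NM figure follows directly from the 14-bit buffer and the symbol rate, as this sanity check shows:

```python
# Sanity check on the 14-bit propagation-delay buffer described above.
SYMBOL_RATE = 9600          # symbols (and bits) per second
C = 299_792_458             # speed of light, m/s
NM = 1852                   # metres per nautical mile

buffer_time = 14 / SYMBOL_RATE              # ~1.458 ms, matching the text
protected_range = buffer_time * C / 2 / NM  # two-way budget -> one-way range
print(f"{buffer_time * 1e3:.2f} ms protects ~{protected_range:.0f} NM")
```

The result, roughly 118 NM, agrees with the "up to 120 NM" stated in the specification summary.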
After accounting for the overhead, what is left is the burst's payload, ranging from 184 bits (23 bytes) for a one-slot burst up to 1,208 bits (151 bytes) for a five-slot burst. The payload has a fixed format: a 6-bit message identifier or Message ID, a 30-bit source MMSI, an optional destination MMSI, and the message, followed by an optional 19-bit communication state field used by the distributed medium access mechanism [41]. The MMSI (short for Maritime Mobile Service Identity) is a 9-digit number standardized by the ITU to uniquely identify maritime stations, e.g., ships [52].
An MMSI includes the 3-digit Maritime Identification Digits (MID), ranging from 201 to 775, denoting the country or geographical area of the country responsible for the ship so identified. For instance, the MMSIs under MID 338 are for various registering organizations in the United States, and those under 247 are for Italy. The MMSI for an individual station begins with an MID, i.e., MIDXXXXXX, where X is any figure from 0 to 9. An MMSI with the first figure equal to 0 is designated to identify a group of ships, using the format 0MIDXXXXX.
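The MID rules just described can be captured in a few lines; the classification below covers only the cases named in the text (individual ship stations and group MMSIs), so it is a simplified sketch rather than a full ITU station-type decoder:

```python
def parse_mmsi(mmsi: str) -> dict:
    """Classify a 9-digit MMSI per the MID rules described above."""
    if len(mmsi) != 9 or not mmsi.isdigit():
        raise ValueError("an MMSI is exactly 9 digits")
    if mmsi[0] == "0":                  # 0MIDXXXXX identifies a group of ships
        return {"kind": "group", "mid": int(mmsi[1:4])}
    mid = int(mmsi[:3])
    if not 201 <= mid <= 775:           # MID range for ship stations
        return {"kind": "other", "mid": mid}
    return {"kind": "ship", "mid": mid}

assert parse_mmsi("338123456") == {"kind": "ship", "mid": 338}   # US-registered
assert parse_mmsi("024712345") == {"kind": "group", "mid": 247}  # Italian group
```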
At the end of the payload, a 16-bit cyclic redundancy check (CRC) is appended for error detection to ensure the integrity of the payload bitstream. Unused payload bytes are filled with padding bytes to facilitate variable message lengths.
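Since the AIS framing derives from HDLC, its 16-bit CRC corresponds to the HDLC frame check sequence; the sketch below assumes the common CRC-16/X-25 parameterization (polynomial 0x1021 reflected, init and final XOR 0xFFFF) and also notes the miss-detection figure: a random error pattern slips past a 16-bit CRC with probability about 2^-16.

```python
def crc16_x25(data: bytes) -> int:
    """CRC-16/X-25 (the HDLC FCS): reflected poly 0x1021, init/xorout 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

assert crc16_x25(b"123456789") == 0x906E   # standard catalog check value
miss_probability = 2 ** -16                # ~0.0015% per random error pattern
```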
The message can be one of the 2^6 = 64 messages predefined in the AIS air interface specification, including some of the most commonly used maritime services and ARQ (short for Automatic Repeat reQuest) protocol signaling that are hard-coded into the specification, such as Messages 1 and 7, as listed in Table 3 and Table 4.
For example, Message 1 is commonly used by a ship to periodically broadcast its static and voyage-related information. The message contains the ship's identification and dynamic information, such as position, speed over ground, course over ground, and navigational status. Depending on the ship's speed and/or course changes, it is broadcast every 2 to 10 seconds while underway and every 3 minutes or less when at anchor or moored. A SAR transmitter transmits a series of eight identical position report messages (four on Channel 2087 and four on 2088) per minute when active. Similarly, an AtoN device uses Message 21 to provide precision piloting to passing ships; it is transmitted once every three minutes or at the rate assigned by a control station onshore (i.e., a shore station) through Message 16.
The ARQ signaling message, i.e., Message 7 (Table 4), is used to acknowledge a unicast message, e.g., Message 6 (cf. Table 5), a type of message that requires an acknowledgment for each sent message for reliability. Since an acknowledgment is a small signaling message compared to a minimum one-slot payload, and an AIS transmission can only carry one AIS message per burst, Message 7 combines up to four acknowledgments to reduce waste. Nonetheless, it does not solve the problem when there are fewer acknowledgments in the queue.

IV. THE ISSUES
A close look at the AIS structure reveals the following major issues in the protocol structure and the physical air interface. It is beyond the scope of this article to analyze the AIS specifications in detail; however, it is worth outlining their basic characteristics and inherent weaknesses as contrasting references for the proximity MTC design in Sections V and VI. We start with the protocol structure, which fundamentally hinders AIS's ability to support the emerging wide range of maritime IoT applications.

A. PROTOCOL STRUCTURE
First and foremost, the legacy AIS is designed under a flat structure without clearly defined, stand-alone layered protocols. We have just seen that AIS makes no distinction between an application message and a signaling message (e.g., a medium access control or MAC message); all sit at the same level. The lack of a layered structure causes AIS to collapse into a flat framework (see Figure 5), imposing a fixed format literally "carved" into the transmission burst, i.e., the physical waveform of the AIS air interface. This fundamentally prevents AIS from harnessing the benefits of a layered protocol structure, leaving no space or freedom for adaptation to diverse applications or changing environments.

1) STRONG BOND BETWEEN APPLICATION AND AIR INTERFACE
From the application perspective, this structure leaves little freedom for applications, forcing a strong tie between the application and the AIS and thereby fundamentally restricting the applicability of AIS.
a) The only flexibility is provided through the Message ID field of the burst, which allows up to 64 predefined messages for the application to choose from. This prevents AIS from providing communication services to emerging, diverse maritime IoT applications that do not, or cannot easily, fit into the AIS message formats. Currently, AIS specifies 23 application messages and four MAC messages out of a maximum of 64 AIS messages (recalling that AIS does not differentiate between application and MAC messages). Introducing any new message format requires a specification change, which may cause backward compatibility issues.

b) Moreover, any air interface has an upper limit on the size of an application message, and the AIS is no exception: its largest message is limited by its burst payload. Segmentation is necessary so that an application message can be split into segments, placed inside the air interface payload, and reassembled into its original form at the receiving end. AIS does not support segmentation or reassembly, meaning an application using the AIS interface must figure out the air interface payload size (i.e., a tie to the air interface), segment the application message into pieces, fit the fragments into multiple AIS messages, and reproduce the original message at the receiving end, all by itself.

c) The strong tie between the application and AIS makes application encryption difficult, posing a privacy-breaching risk for maritime IoT applications.

2) WEAK RADIO LINK CONTROLLABILITY AND ADAPTABILITY
This inherent structural limitation also strips away the freedom for the MAC functions to manage the burst payload efficiently and control its transmissions over a wireless medium discretely on a per-application basis.

a) AIS cannot perform quality of service (QoS) control for applications and services with various QoS requirements, remembering that AIS was originally introduced mainly for collision avoidance. QoS includes key performance indicators such as latency and reliability, where latency refers to the maximum allowed end-to-end time, i.e., the time between the arrival of a message from the application and the reception of the message at the destined peer; reliability is the probability that a transmitted message is correctly received within a specified maximum end-to-end latency. AIS does not have protocols to enforce QoS, leaving it pretty much at the mercy of Mother Nature.

b) AIS cannot dynamically multiplex or schedule multiple application messages (possibly from different applications) into the same burst based on message sizes and available payload, let alone multiplex application messages and MAC signaling messages into the same burst payload, resulting in wasteful resource usage and delays. For example, it is impossible to pack the ARQ and other messages in the same burst. We also see from Figure 5 that AIS carves the Communication State into the transmission burst as an inherent part of the burst to transmit the MAC Communication State message with the application message, a rather awkward way to do multiplexing. Since it is hard-coded, this field serves only this particular MAC message, not any other MAC signaling message. These limitations severely compromise AIS's ability to provide QoS for applications and to deal with wireless channels notorious for their varying and error-prone characteristics, rendering AIS unreliable and energy- and resource-inefficient.

3) LACK OF INTEROPERABILITY
It is essential for a vessel or marine equipment in a maritime IoT network to receive services from different systems or networks. The inherent dilemma of receiving such services lies in their lack of interoperability due to the usage of distinct protocols, interfaces, and software instances on top of different IoT infrastructures and resources in diverse deployment environments. Interoperability is the ability of IoT applications to exchange and integrate information flexibly, effectively, consistently, and cooperatively across network boundaries. Interoperability is thus at the very center of maritime IoT's promises and is fundamental to its success.
For that reason, maritime MTC is expected to 1) offer the ability for different maritime IoT applications and services to access the network seamlessly, both within and across network boundaries, and 2) provide portability of information efficiently and securely across the complete spectrum of maritime IoT services without effort from the end-user, regardless of its manufacture or origin.
In fact, maritime service providers are already beginning to adopt IP-based application interfaces as a foundational component of their information technology platform, allowing maritime IoT application developers to utilize a rich array of existing application programming interfaces (APIs) and benefit from the ubiquity of IP networks. Decoupling maritime applications from their underlying physical environment is a major step toward making maritime applications and services available to all systems or networks with greatly reduced manufacturing and maintenance costs.
When it comes to today's data communication, it would be hard to argue against the fact that the IP network (the Internet in particular) is the largest data communication network on earth. IP is emerging as a ubiquitous protocol for communications and applications. The problem with IP is that it was designed for wired networks and is hence costly: the 16-byte source and destination addresses in an IPv6 packet, plus the 2-byte TCP or UDP source and destination port numbers, already amount to 36 bytes before any other overhead, i.e., the metadata required for routing and delivery over an IP network. This IP overhead is a significant burden, considering that the AIS is a narrow-band system (a one-slot burst payload is 23 bytes) and that most maritime IoT applications are characterized by short-burst data services.
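The mismatch is stark when the figures from the text are placed side by side:

```python
# Address/port overhead of IPv6 with TCP/UDP versus a one-slot AIS payload.
ipv6_addrs = 16 + 16                # source + destination IPv6 addresses, bytes
ports = 2 + 2                       # source + destination port numbers, bytes
overhead = ipv6_addrs + ports       # 36 bytes of addressing metadata alone
one_slot_payload = 23               # bytes available in a one-slot AIS burst

# The addressing metadata alone exceeds an entire one-slot burst payload.
assert overhead > one_slot_payload
print(f"{overhead} B of metadata vs. {one_slot_payload} B of payload")
```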
Nonetheless, AIS is inherently not structured for a network layer protocol to interwork with the IP-based network and keep the heavy IP overhead manageable. This fact, plus the strong tie between the application and AIS, discourages developers from innovating new maritime applications for AIS-based systems and prevents the development of an AIS ecosystem.

4) NON-SERVICE-CENTRIC
IoT is essentially a network of services. As a communication network for proximity-based IoT, the "destination" of proximity communication is, in general, not the "terminal of interest" but the "service of interest." However, AIS lacks a systematically defined notion of "application" or "service" and a framework for network and application management, both necessary for service-oriented networking, rendering it inherently unsafe for service-centric IoT networking. Here is why:

a) The lack of application oversight and supervision. This leads to unchecked and unmanaged deployments of applications over AIS, making systematically defined application or service identification impossible and leaving the system vulnerable to malicious activities, as has been frequently witnessed; AIS is hence inherently unsafe for maritime IoT, where safety is paramount.
b) Application identification ambiguity. AIS has no systematically defined and universally managed protocols for identifying and differentiating applications or services. Except for the explicitly predefined AIS messages (e.g., Message 1), this makes service discovery difficult and creates ambiguity when a terminal hosts more than one application, possibly from different providers.

c) The inability of an AIS device and its applications to receive provisioning, maintenance, and updates automatically and continuously from service providers or backend servers over the network. This prevents them from being kept up-to-date and adaptive as the network status and environment change. It is an important but often ignored service-centric network feature for keeping the devices and applications deployed around the world relevant and stable, sustaining a healthy network in the ever-evolving maritime world.

B. AIR INTERFACE
The AIS air interface, adopted from a wired interface (the High-level Data Link Control protocol [44]), essentially rests on the assumption that the AIS channel, a wireless channel, behaves as a wired channel, and consequently suffers from the following issues:

1) LOW SPECTRAL EFFICIENCY
The AIS interface is adopted from the wired world, where spectral efficiency is the least concern, in sharp contrast to wireless systems, where radio frequencies, as globally shared limited natural resources, are scarce and precious. This inefficiency is manifested by the following facts:

a) The AIS air interface does not have forward-error-correction coding, essential for efficient communication over wireless media. Without this layer of protection, the entire burst is left wide open to channel impairments, which is especially problematic on the AIS channels with abundant co-channel interference due to the contention-based distributed medium access mechanism and the inherent adjacent channel interference in the crowded VHF band [27]. It thus takes only a single bit error to kill an entire burst of up to five slots. Consequently, an excessive frequency guard band becomes imperative to shield against adjacent channel interference, which yields a channel symbol rate of 9,600 symbols per second over a 25-kHz channel: less than 50% bandwidth usage [45].

b) Moreover, AIS stuffs unused bytes in the burst payload with non-information-bearing "padding bytes," just like a wired system. And yet, the unused burst payload can be significant due to the inability of AIS to pack multiple messages into one burst payload: wasteful for extremely scarce and precious VHF frequencies.
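The sub-50% figure follows from the numbers already quoted (9,600 one-bit GMSK symbols per second in a 25-kHz channel):

```python
# Bandwidth usage implied by the AIS figures in the text.
symbol_rate = 9600      # symbols per second, one information bit each
channel_width = 25_000  # Hz

spectral_efficiency = symbol_rate / channel_width   # bit/s/Hz
print(f"{spectral_efficiency:.1%} of a bit/s/Hz")   # 38.4%, well under 50%
```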

2) LOW ENERGY EFFICIENCY
A lack of an energy-efficient modulation and coding scheme (MCS) also nullifies the coding gain, consequently necessitating a high energy per information bit, or Eb/N0. One should not be surprised that AIS requires 1 W minimum transmit power, rendering AIS inherently unreliable and energy- and spectrally inefficient. Therefore, AIS is unsuitable for energy-constrained devices such as battery-powered ones (i.e., no mains power supply, with transmit power in the tens of mW) supporting IoT applications that require a lifespan in years (e.g., 10 to 20 years) in an energy-aware fashion. Note that although the AIS MOB (Man-Overboard Beacon) is introduced to operate on portable battery power for personal locating during search and rescue operations, the "inside" is still AIS, which inherently lacks an energy-efficient communication mechanism, hurting battery life.
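A back-of-envelope calculation illustrates the gap. The transmit power and slot time come from the text; the battery capacity and reporting rate are illustrative assumptions, not AIS requirements, and everything except transmission (reception, processing, ramp overheads) is ignored, so real battery life would be shorter still:

```python
# Back-of-envelope battery drain at the AIS 1-W minimum transmit power.
# Battery size and reporting rate below are illustrative assumptions.
SLOT_TIME = 256 / 9600          # one-slot burst, ~26.7 ms
TX_POWER = 1.0                  # watts, the AIS minimum
BATTERY_J = 2.0 * 3.7 * 3600    # assumed 2-Ah, 3.7-V cell, ~26.6 kJ

energy_per_burst = TX_POWER * SLOT_TIME           # ~26.7 mJ per burst
bursts = BATTERY_J / energy_per_burst             # ~1 million bursts
years = bursts / (6 * 60 * 24 * 365)              # at one burst every 10 s
print(f"~{years:.1f} years of transmission alone") # far short of 10-20 years
```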

3) NON-ADAPTIVE TO DYNAMIC WIRELESS ENVIRONMENTS
AIS has a fixed, non-adaptive air interface (again, like a wired system), completely ignoring the highly dynamic nature of a wireless system. The lack of an adaptive modulation and coding scheme (MCS), necessary for reliable communication over an unreliable mobile communication channel, compromises communication reliability and further degrades energy and spectral efficiency. Moreover, the air interface cannot adapt or scale to IoT device limitations, making it unable to accommodate the whole variety of IoT applications or deployments.

4) WEAK SYNCHRONIZATION AND TRACKING CAPABILITY
A notable weakness of AIS is that its transmission burst structure provides minimal synchronization and tracking capability in a dynamic wireless communication environment.

a) As mentioned, AIS depends on GPS for network synchronization and uses the derived slot ticks for transmissions and receptions. However, GPS reception may not always be stable.

b) Due to the bursty and dynamic nature of proximity-based IoT, the only opportunity for a receiver to time-and-frequency synchronize to the incoming AIS signal is the preamble, making burst detection and decoding susceptible to radio channel variability, especially for long-TTI bursts. A 24-bit preamble sequence (plus an 8-bit start flag) leads to a significant residual frequency estimation error even in a benign channel condition, let alone under strong interference and high channel variability. As aforementioned, two sources are attributable to the interference: adjacent channel interference and co-channel interference. Two factors may contribute to the variability: radio channel dynamics (mainly due to terminal mobility) and the frequency offset between the transmitter and receiver. Depending on the magnitude of the frequency offset, Δf, and the burst time, t, the resultant phase shift, Δf · t, may jeopardize the coherent detection of the bitstream, flipping 1s into 0s and vice versa. Nonetheless, AIS does not provide a mechanism to detect and correct these during the burst. Even with GPS providing a precision frequency reference for the transceivers, the relative motion between terminals can easily introduce a Doppler frequency shift of tens of Hertz. The weak preamble accentuates the issue: the preamble-based "one-shot" estimation at its best has an error of the same order as the Doppler shift, which renders non-coherent detection (non-optimal) the only viable option.
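The scale of the problem is easy to quantify. Taking the AIS carrier near 162 MHz and an illustrative 25-knot relative speed (the speed is an assumption, not from the text), the Doppler shift alone rotates the carrier phase through nearly a full cycle over a five-slot burst:

```python
# Doppler shift and uncorrected phase rotation over a long AIS burst.
C = 299_792_458                 # speed of light, m/s
CARRIER = 162e6                 # Hz, the AIS VHF channels
v = 25 * 0.5144                 # illustrative 25-knot relative speed, m/s

doppler = v * CARRIER / C             # ~7 Hz; tens of Hz at higher speeds
burst_time = 5 * 256 / 9600           # five-slot TTI, ~133 ms
phase_cycles = doppler * burst_time   # ~0.9 full cycle of phase rotation
print(f"{doppler:.1f} Hz Doppler -> {phase_cycles:.2f} cycles over the burst")
```

Even a few Hz of residual offset thus sweeps the constellation through a large fraction of a cycle mid-burst, which is why coherent detection is not viable without in-burst tracking.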

5) WEAK MESSAGE INTEGRITY PROTECTION
AIS uses a 16-bit CRC for message integrity. The corresponding probability of miss-detecting an error is about 2^-16, or roughly 0.0015 percent, which could be problematic for some messages, like Message 1. Such messages depend solely on the AIS CRC for information integrity.

V. THE SOLUTION
Not surprisingly, the legacy AIS has been showing its structural incoherence with the IoT architecture and its incompetence in dealing with the large variety of emerging maritime applications. As a consequence, the AIS community has been hollering for more spectrum to offload the ever-increasing maritime service traffic, and international radio spectrum regulators have begun to take notice. Recently, the ITU came to the rescue, doubling down on AIS by committing precious new VHF spectrum to maritime MTC for proximity-based maritime services, coined Application-Specific Messaging or ASM (refer back to Figure 4), intended to offload some "specific maritime applications" from AIS. However, more spectrum for offloading is not the ultimate solution to all the issues we have just pointed out.
As the name indicates, the original intention of ASM already shows signs of trouble that deserve closer attention: should we not seek a general-purpose solution rather than going further down the path of being overly specific (for the reasons explained earlier)? In this regard, ASM is a misnomer. In fact, the opposite, i.e., "Non-Application-Specific Messaging" or "General Application Messaging," would be a more appropriate name. Most recently, some maritime organizations and authorities have been busy collecting new use cases not covered by the legacy AIS to be incorporated into the ASM standards, instead of seeking a general solution that is future-proof.
It has become clear that the inherent structural issues of AIS need to be better understood. Thus, this paper points out the caveats to prevent future MTC technologies from falling into the same pitfalls, using a design that supports the arguments presented in the previous section and applies to general proximity communications.
Indeed, the ability to accommodate a variety of maritime services is the key factor for successful maritime MTC implementation. The fundamental solution is a service-centric MTC framework to serve a broad range of proximity-based IoT applications, which is what today's maritime IoT sorely needs. Laying the right structure ensures that the MTC network securely accommodates such a solution as maritime IoT strategies and technologies continue to evolve. Despite a few efforts [46], [47], [48], much remains to be done toward this goal. In this section, we address, point by point, the issues laid out in Section IV, provide a proximity MTC framework, and present a practical, comprehensive design example based on the ASM spectrum under the maritime MTC network architecture illustrated in Figure 6, hoping that it will shed some light on the ongoing and future maritime communication technology development and standardization. Under this architecture, the maritime MTC network encompasses, as mentioned earlier in Section I, a satellite and terrestrial network for wide-area coverage of client-server applications [19] and a proximity network for proximity-based client-client applications.

A. PROXIMITY IoT APPLICATION BASIC BEHAVIOR
From the application perspective, a proximity-based IoT application involves the following basic operations, at least in part, on a per-application basis:

1) PROXIMITY SERVICE DISCOVERY
Proximity service discovery is a neighbor discovery and association process through which an IoT device detects its "neighbors" subscribed to the same service and invites them to participate in joint activity. In other words, the application advertises its services (and associated information) to allow other devices in its application-specific physical proximity to detect the services of interest. In an AtoN application, for instance, an AtoN device shares the navigation information service with passing vessels via the discovery message. Discovering and identifying the relevant services in the spatiotemporal context is the key property differentiating proximity MTC from classic wide-area MTC and, consequently, a unique issue in implementing proximity MTC.

2) RESPONSE TO THE ADVERTISER
During discovery, the invited device, particularly the service client of the application that receives the discovery message, may acknowledge the advertiser (to show its interest), possibly followed by a sequence of message exchanges. Following the above AtoN example, in addition to the basic information accompanying the discovery message, the vessel may respond with a request for more information, e.g., an ice map.

3) GROUP MESSAGING
Depending on its behavior, the application may engage in group-wise message exchanges. The message is destined for multiple devices (all or a subset of the discovered devices), using, for example, a group identifier predefined or dynamically defined by the application. The group is possibly dynamic, with participants moving in or out as the environment changes, e.g., on a vessel's voyage.

B. PROXIMITY MTC FUNCTIONALITY
From the MTC perspective, its sole job is facilitating these basic operations. Therefore, we will focus on the following aspects associated with proximity MTC.

1) MULTICAST AND UNICAST
First, two types of communication can be identified: multicast and unicast.
In service discovery, an application selectively broadcasts a discovery message in the sense that it is destined only to devices within the specific physical proximity defined by the application. When an application engages in peer-to-peer messaging between two nearby devices after discovery, e.g., a response to the advertiser, unicast is invoked, typically with a QoS requirement, depending on the nature of the application.
During group messaging, messages are destined for all or possibly a subset of the discovered devices. Both cases belong to multicast, but the latter imposes an additional selectivity constraint and, hence, is called groupcast, identified by a group identifier. Groupcast may enforce certain QoS requirements per the application.
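The three cast types can be summarized by how the destination is described. The descriptor below is hypothetical (names, fields, and the resolution order are illustrative, not part of any standard), showing one way a terminal might decide which cast type a PDU requires:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical destination descriptor for the cast types described above.
@dataclass
class Destination:
    service_id: int                    # service of interest (always present)
    terminal_id: Optional[int] = None  # a single peer -> unicast
    group_id: Optional[int] = None     # a discovered subset -> groupcast

def cast_type(dst: Destination) -> str:
    if dst.terminal_id is not None:
        return "unicast"       # peer-to-peer messaging after discovery
    if dst.group_id is not None:
        return "groupcast"     # multicast restricted to a group identifier
    return "multicast"         # service discovery within physical proximity

assert cast_type(Destination(service_id=7)) == "multicast"
assert cast_type(Destination(service_id=7, terminal_id=366123456)) == "unicast"
assert cast_type(Destination(service_id=7, group_id=42)) == "groupcast"
```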

2) LAYERED ARCHITECTURE
MTC is the process of conveying IoT application traffic from one IoT device to its peer (a nearby IoT device), which, like any communication system, requires information exchange, in the form of a protocol data unit (PDU), between two peers through a multitude of sets of mutually understood protocols. Layered architecture is the most common and widely used architectural framework in modern communication networks, under which protocols with similar functionalities are organized into horizontal abstraction layers. Each layer performs a specific role and communicates with its remote peer to accomplish a given set of tasks. This conceptual framework ensures interoperability within the communication system regardless of the technology, vendor, and model. In the following, we adopt the four-layer model, i.e., the application layer, network layer, data link layer, and physical layer, of which the network layer (Layer 3), data link layer (Layer 2), and physical layer (Layer 1) belong to the MTC terminal. Together, they function as a single unit of an MTC network entity to fulfill the ultimate task, i.e., facilitating application traffic between clients at the application layer of IoT devices, as diagrammed in Figure 7. An application client refers to an instance of a client application.

FIGURE 7. Protocol layers of IoT devices and MTC terminals for proximity-based IoT applications [19], where dashed lines denote virtual connections, and solid ones represent physical connections.

3) LAYER ADDRESSING
Since facilitating data traffic between application clients at the application layer (Layer 4) requires the coordination of every MTC protocol layer (Layer 3 through Layer 1), layer addressing, i.e., how the PDU destination is addressed, is one of the major responsibilities of a protocol layer to support signaling or messaging (i.e., PDU exchange) between peer protocols [19]. Proximity MTC, in particular, has the following distinguishing characteristics in that regard.
In a traditional network (i.e., the Internet), a network address (Layer 3 address) always points to a network node or host identified by its IP address. In contrast, by the current design, an IoT network is service-oriented, characterized by "service of interest" rather than "host of interest." This service-centric concept is realized through service addressing, using the service identifier as the network address for uniquely and efficiently profiling, identifying, and discovering applications.
When a proximity IoT application involves unicasting or multicasting with QoS, the data link layer is there to discretely ensure the delivery of the network layer PDU, according to the service QoS requirement, over the communication links between the communicating terminals, as dictated by their physical locations and distances. This requires the identification of the terminals; the terminal identifier (e.g., the MMSI of the IoT device) is thus used as the Layer 2 address for terminal addressing.
The radio signal transmitted by an MTC terminal reaches every terminal within the transmission range. Since this range is dictated by the Friis equation and controllable via transmit power and MCS [19], a physical signal (i.e., the transmission burst) essentially provides proximity addressing, which facilitates neighbor discovery and implements the key "distance of interest" functionality that naturally takes care of the proximity characteristic of proximity-based IoT applications.
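The power-to-range relationship can be sketched with the free-space Friis model. All numbers below (powers, sensitivity, gains) are illustrative assumptions; real VHF ranges are further limited by the radio horizon and propagation effects, so this is an upper-bound sketch of how transmit power and MCS (via the receiver sensitivity it implies) set the "distance of interest":

```python
import math

def friis_range_m(tx_dbm: float, sens_dbm: float, f_hz: float,
                  gains_dbi: float = 0.0) -> float:
    """Distance at which free-space path loss consumes the link budget."""
    path_loss_db = tx_dbm + gains_dbi - sens_dbm   # budget available for loss
    wavelength = 299_792_458 / f_hz
    return wavelength / (4 * math.pi) * 10 ** (path_loss_db / 20)

# Cutting the budget by 6 dB roughly halves the free-space range,
# i.e., transmit power directly dials the proximity of the service.
r1 = friis_range_m(tx_dbm=30, sens_dbm=-107, f_hz=162e6)
r2 = friis_range_m(tx_dbm=24, sens_dbm=-107, f_hz=162e6)
assert 1.9 < r1 / r2 < 2.1
```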

FIGURE 8.
Proposed layers and the transmission burst structure for service-centric proximity MTC, of which the pilot pattern and guard period are configurable through the MAC Burst Configuration PDU. The MAC Burst Configuration PDU has a dedicated Logical Assignment Channel, ACH; other MAC signaling PDUs that do not have dedicated logical channels all go through the Logical Traffic Channel, TCH. Here, "Hn" stands for the network layer PDU header defined in Table 7, "Hd" for the data link layer PDU header, "Hq" for the QFC header, "Hm" for the MAC header, "Tp" for the physical layer PDU trailer, and "GP" for the optional guard period.

4) MEDIUM ACCESS CONTROL
Since proximity MTC operates, by default, under self-organizing networking without centralized medium access control (typically performed by a control station in an infrastructure-based network), a distributed or autonomous resource allocation mechanism must be in place at the data link layer (Layer 2) and supported by the physical layer (Layer 1), which complicates the life of an MTC terminal compared to centralized medium access control in an infrastructure-based network.

5) PROXIMITY SERVICE PROVISIONING
Provisioning involves preparing or updating an IoT device to allow it to provide new services to its users (i.e., the IoT applications), including communication resources (e.g., frequency band assignments and corresponding regulatory constraints) and configurations of the requested application (such as the service profile and the default application server, e.g., a URL). Therefore, a network layer entity in charge of network resource and IoT application management must be present in Layer 3, even for proximity-based applications, to interact with the remote backend server or proxy (detailed next) through a wide-area network (e.g., VDE-SAT or VDE-TER) as a backhaul network (see [19] for details on the VDE-SAT and VDE-TER design).

C. PROTOCOL STRUCTURE
To avoid the crude brute-force approach we have seen under the AIS framework, it is essential to follow a layered protocol structure like the one laid out in Figure 8 (as opposed to the flat one in Figure 5), which, as we will show, provides the foundation and structural framework for the MTC solution to the AIS issues and, ultimately, to the diverse and stringent maritime IoT requirements.
Under such a structure, the network layer (Layer 3) serves as a gateway providing service-centric networking to a diverse set of rich IoT applications. The application (Layer 4) is not a native component of the MTC network. It is produced and maintained by a third party, which typically assumes an IP network beneath it. Therefore, the network layer interacts with all kinds of applications/services via the socket channel, a construct that serves as an application programming interface or API for interaction between the application and the MTC network, i.e., an endpoint for sending and receiving application-specific data.
The socket API is a predefined collection of classic socket function calls that an application uses to access other parts of the operating environment without knowing its specifics. In particular, an IP-based network API provides an interface from the application to the network protocol (i.e., IP) stack, and the application uses the API functions to write data to or read data from the network through the operating system.
A socket channel under the current framework is associated with a specific IoT application, including the service provider's access point name or default network location (i.e., IP address and transport-layer port number), which provides a unique identifier for service recognition/identification and distribution/discovery. It is established for a specific application (and released once deactivated from this terminal), allowing one terminal to host multiple IoT applications. A socket also abstracts away the underlying MTC system and, hence, shields the application from the implementation details of the MTC terminal.
The structure and properties of a socket are implemented by the operating system (e.g., Linux) of the host. The application data channeled through the socket (now the network layer SDU) is associated with a predefined service profile containing application-specific attributes maintained by the federated IoT Service Registrar (ISR). The service profile includes the service provider's default network location (IP address and port number) and the service requirements, such as message size, transmission rate, communication range, and QoS. Message size indicates the typical amount of data required by a specific service generated by the application, and the transmit rate represents the number of messages per unit of time the application generates. The communication range specifies the minimum distance between a transmitter and its intended receiver within which the service QoS needs to be fulfilled. As stated, QoS includes key performance indicators such as latency and reliability, where latency refers to the maximum allowed end-to-end time, i.e., the time between the arrival of a message from the application at a transmitting service client and the reception of the message by the application at the destined service client; reliability is the probability that a transmitted message is correctly received within a specified maximum end-to-end latency. Given that most proximity-based applications are time-critical, latency and reliability are the key metrics used by the Network and Application Management entity (NAM) in the way service requirements are perceived and prioritized to create, configure, and allocate a QoS flow that satisfies the requirements of the application traffic.
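To make the service profile concrete, the sketch below models its attributes as a small data structure. The field names and sample values are illustrative assumptions, not the actual ISR profile format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceProfile:
    """Application-specific attributes maintained by the ISR (field names
    are illustrative; the real profile format is not specified here)."""
    service_id: int          # 16-bit universal Service ID
    server_ip: str           # service provider's default network location
    server_port: int
    message_size_bytes: int  # typical message size
    rate_msgs_per_s: float   # messages generated per unit time
    range_m: float           # distance within which the QoS must hold
    max_latency_ms: float    # end-to-end latency bound
    reliability: float       # P(correct reception within latency bound)

# Hypothetical collective-perception service: 96-byte messages at 1 Hz,
# QoS guaranteed within ~5 nautical miles.
profile = ServiceProfile(0x0101, "203.0.113.7", 4500,
                         96, 1.0, 9260.0, 100.0, 0.999)
```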
From the network layer perspective, the application data are delivered to the receiving peer using the concept of a QoS flow. A QoS flow can be deemed a virtual ''tunnel'' for delivering IoT services sharing a set of common characteristics, and the network layer utilizes a corresponding set of QoS channels to materialize QoS flows. Applications with the same QoS characteristics share the same QoS channel associated with a specific QoS flow. A QoS channel, in essence, defines what type of data is to be transported, which allows the lower layer (particularly the data link layer) to use this information to manage the transmission of various types of network layer traffic or PDUs.
Upon receipt of the network PDUs channeled through the respective QoS channel, the data link layer (Layer 2) utilizes two sublayers, the QoS flow control (QFC) and the medium access control or MAC sublayers, to manage the physical delivery of the network PDUs to their peers as per the QoS requirement defined by the respective QoS channel under the QoS flow concept.
At the QFC sublayer, QFC instances or entities are created and configured to handle individual QoS channels.In this regard, a QFC instance can be considered the representative or agent of the QoS channel in the data link layer, responsible for ensuring the service requirements associated with the QoS channel are met.
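The per-channel QFC instantiation can be sketched as follows: services with similar latency and reliability targets are grouped onto one QoS channel and thus served by one QFC instance. The class thresholds and Service IDs below are hypothetical.

```python
from collections import defaultdict

# (service_id, max_latency_ms, reliability) triples; values are illustrative.
services = [(0x0101, 100.0, 0.999),
            (0x0102, 100.0, 0.999),
            (0x0200, 1000.0, 0.99)]

def qos_class(latency_ms, reliability):
    # Hypothetical coarse QoS classes; real QoS profiling is up to the NAM.
    return ("low-latency" if latency_ms <= 100.0 else "relaxed",
            "high-rel" if reliability >= 0.999 else "best-effort")

# One QFC instance (here, one dict entry) per QoS channel; services sharing
# the same QoS characteristics land in the same channel.
qfc_instances = defaultdict(list)
for sid, lat, rel in services:
    qfc_instances[qos_class(lat, rel)].append(sid)
```

The first two services share one QFC instance, while the relaxed service gets its own, mirroring the "one agent per QoS channel" idea above.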
Unlike the QFC sublayer (where there can be multiple QFC instances), there is only one MAC entity at the MAC sublayer.Its job is to transmit the QFC PDUs from different QFC instances with specific requirements over the physical layer under the physical channel constraints, available system resources, and transceiver capabilities at both communicating ends and, at the same time, optimize the overall system performance.A MAC Traffic PDU carries a QFC PDU, while a MAC Signaling PDU conveys the MAC control message from a MAC entity to its peer (i.e., the receiving MAC entity).Both are packed or multiplexed in the data link PDUs and channeled to the physical layer through a logical Traffic Channel (TCH) or logical Assignment Channel (ACH), depending on the type of MAC PDUs they carry.
The physical layer (Layer 1) executes the physical delivery of data link layer PDUs between terminals within the MTC network in the form of physical channels, particularly over the Physical Traffic Channel (PTCH), per the supervision of the data link layer over the Physical Signaling Channel (PSCH). They are physically carried by the transmission burst's payload over the air. The physical channels and the transmission burst are detailed later in Section VI-D through a practical design.

VI. THE DESIGN
Now that we have laid out the proximity MTC framework, in this section, we elaborate on the service-centric characteristics of the proximity MTC solution portrayed in Section V through a comprehensive practical design. To materialize the design, we use maritime autonomous surface shipping, or MASS, briefly mentioned in Section II and detailed in [49], as a specific and important application example. For the sake of clarity and coherence, we revisit the connection between MASS and proximity MTC before diving into the design.

A. MASS AND PROXIMITY MTC 1) MASS AND AUTONOMY
As briefly touched upon in Section II, the safety level that MASS can achieve is directly related to its cognition and autonomy level. Autonomy is self-reliance and self-sufficiency with the capability of learning, reasoning, reacting and adjusting to dynamic environments, and evolving with the environment. Autonomy in a navigational context signifies that human decision-making by the bridge crew of a ship is replaced by IT-based solutions. Here, artificial intelligence or AI [50] rises to the challenge, providing MASS's most crucial component: autonomy.
Specifically, an AI-driven pilothouse (also known as the bridge of a ship) assumes the role of the crew in operative decision-making that complies diligently with the international steering and sailing rules (e.g., COLREG [51]), presupposing the availability of automated situational awareness data and automated navigational control. It employs technologies like multisensor data fusion (e.g., Kalman filters) and machine learning [50] to harness and leverage the full potential of real-time data, turning them into trusted and actionable intelligence for making data-driven decisions. Furthermore, machine learning allows an AI-driven bridge to learn automatically from past data or experience without explicit preprogramming, so that, like a human brain, it learns as it goes (for a specific task).

2) PHYSICAL AND VIRTUAL SENSING
If AI is the brain of a MASS ship, it is no overstatement to say that sensors are its eyes and ears. The MASS ship's AI employs a fleet of onboard sensors, such as radar, lidar, digital compasses, inertial measurement units, strain gauges, and high-definition cameras, combined with machine learning to perceive and understand the environment around the ship, facilitating situational awareness. Marine radar or lidar directs X- or S-band beams or laser beams 360° around the ship and measures the time it takes for them to reflect off surrounding objects, while AI combines the outputs from these physical sensory modalities to detect the position and speed of nearby vessels and predict their movement. Cameras, mono or stereo, also help identify and recognize surrounding objects.
These sensor systems depend on the line-of-sight-dominant propagation characteristics of these electromagnetic (EM) beams, since non-line-of-sight propagation (due to refraction and reflection from water and objects) creates localization errors. As a result, if an obstacle (e.g., a cruise ship or an islet) comes in between the sensor system and a vessel (e.g., a yacht), it casts an EM shadow and hides the vessel from the MASS ship. Moreover, adverse conditions at sea may weaken the line of sight: fog, rain, snow, and lighting can impair AI's visibility, perception, and acuity and compromise safety.
Another important limitation is that these sensing systems essentially use physical sensors to take direct measurements of a physical quantity, such as the position, speed, and visual appearance of an object (e.g., its shape, texture, color, and even logo), and convert these measurements into relevant data. They lack, however, other ship-specific information that AI could exploit for improved perception and decision-making, making the most of its tremendous learning and analyzing capacity. Such information includes 1) dynamic information relating to the ship's course, heading, rate of turn, and maneuvering intentions; 2) static information such as the ship's name, type, length, beam, tonnage, and maneuverability; and 3) voyage-related details, such as cargo information and navigational status, all of which are hard to obtain from physical sensing.
The fundamental regulatory issue related to situational awareness is whether a machine can replace the essential human functions of lookout and observation. Unfortunately, the above limitations are inherent to the conventional physical sensing system and may compromise the situational awareness of MASS AI and impinge upon the ship's autonomy and the ultimate international regulatory acceptance and adoption of MASS. The solution is to complement physical sensing with virtual sensing. After all, any isolated individual MASS AI has limited perception and cognition. Only when individuals self-organize and work together as a collective swarm, interacting with each other based on simple principles, can MASS achieve holistic swarm intelligence through virtual sensing and take a game-changing leap toward the next level of autonomy.
Under the virtual sensing concept, vessels explicitly broadcast relevant information, and a MASS ship receives the information over a reliable proximity MTC network, i.e., the virtual sensing space. This element implements collective environmental perception by sharing ship-specific information and locally sensed ambient information (e.g., the objects detected by physical sensing) among neighboring ships in the virtual sensing space, thus extending the perception scope beyond physical sensing. The MASS AI can then collect more relevant ambient information, consider multiple viewpoints, and fill in the gaps from physical sensing, making better-informed, more reliable decisions in a potentially complex MASS environment.
This swarm intelligence element becomes even more significant in head-on, overtaking, crossing, and other traffic scenarios, under which a MASS ship interacts and cooperates with ships of relevance to generate COLREG-compliant optimal trajectories.A trajectory is a time sequence of the ship's states that includes position, heading, linear and rotational velocity, and the values of the actuators (such as rudder angle and propeller rpm).
For example, ships in close proximity (near the closest point of approach or CPA, e.g., 0.5 nautical mile) maintain situational awareness, coordinate, and jointly optimize their routes autonomously so that close-quarter situations can be predicted and avoided early, especially when navigating across a shipping lane or in poor weather. Such an autonomy level through cooperative information sharing and joint decision-making is difficult to achieve for a MASS AI that depends solely on stand-alone physical sensing.
Under the current MTC framework outlined in Section V, these virtual sensing functionalities are treated as services channeled through applications by design. For example, a collective perception service periodically transmits an application message via proximity MTC to surrounding ships that contains ship-specific and ambient information. A ship may send out event-driven messages alerting neighbors to hazard events such as anomalous traffic conditions or accidents. A route-exchange service entails collaboration among a group of ships in close proximity to maintain situational awareness and coordinate and jointly optimize their routes by AI.

3) THE VHF SPECTRUM
The radio spectrum is undoubtedly the foundation of all wireless communications, and even more so for maritime MTC because of its international nature. Therefore, before we plunge into the design, let us revisit the internationally allocated radio spectrum designated to maritime proximity MTC and reiterate its importance to proximity-based IoT applications, especially for MASS services.
The radio spectrum, however, is a natural resource, and natural resources within a nation's geographic boundaries are generally owned by that nation, much like freshwater, land, forests, and mineral deposits. This fact increases the difficulty of overcoming the inherent barriers that prevent the globalization of the maritime MTC needed for MASS. Indeed, this ultimate goal will not be possible without adequate global spectrum access.
Nevertheless, the fact of the matter is that new wireless communication technologies and systems, fueled by exploding mobile applications, have created an unprecedented demand for the radio spectrum. Emerging countries deploy wireless systems to modernize communication infrastructure on land and at sea. This reality and the pervasive presence of legacy radio systems make for an extremely competitive environment for worldwide spectrum access. As mentioned earlier, ITU came to the rescue, committing an extremely scarce and precious VHF spectrum to maritime MTC for ASM, as depicted in Figure 4 and tabulated in Table 6. Regardless of ASM's original intention, it is ideal for proximity MTC, especially for MASS applications.
Compared to higher frequencies such as those of light or laser beams, diffraction is more predominant in the VHF band. This causes VHF waves to bend around obstacles, resulting in fewer EM shadows or blind spots. VHF is also superior in penetration, meaning fog, rain, and snow are less of an issue. These advantages are hugely significant because they circumvent the marine radar, lidar, and camera limitations, making AI ''see'' better and ''hear'' more about the environment, which crucially boosts the AI's environmental perception capability toward the level demanded by maritime safety regulations and practical applications. Therefore, these invaluable VHF channels are ideal for maritime proximity MTC, especially for MASS services, despite the limited bandwidth due to the scarcity of VHF, recognizing that maritime proximity MTC is characterized by short-data services (e.g., collective perception service messages), unlike ''human-type communications'' dominated by bandwidth-hungry multimedia applications.

B. NETWORK LAYER
As aforementioned, proximity MTC is a special type of MTC in that the target or destination of the communication is in proximity. In this sense, it falls under local-area direct communication (no relays) between service clients of neighboring IoT devices through MTC terminals (cf. Figure 7).
Here, an MTC terminal may be an installation aboard a vessel or embedded in marine equipment, providing wireless connectivity between IoT application peers. An application may be co-located with the MTC terminal, or it may reside on a host on an on-premises network, such as an Ethernet (or Wi-Fi) network aboard a ship with one or more end-hosts running various applications, connected to the MTC terminal via that local-area network, as graphically illustrated in Figure 9. In the former case, the MTC terminal (aboard the yacht in Figure 9) serves as a host for the application, while in the latter, it (aboard the ship) serves as a router. Here, a host is an end system with one link (proximity MTC in this case) to the proximity IoT network, and a router has two or more links (proximity MTC and Ethernet).
The network architecture in Figure 9 supports this service-centric networking, where the Network and Application Management (NAM) entity caters to a vast array of rich maritime IoT applications through its agents residing at the network layer of MTC terminals. Client-server applications (cf. Section I-A) may also be present. These obtain Internet access through the MTC terminal via a wide-area MTC network (cf. Figure 1), such as VDE-TER or VDE-SAT, and communicate with their servers on the cloud. In contrast, a client-client proximity-based application communicates with others in its physical vicinity through proximity MTC, the primary focus of the design.
FIGURE 9.
Illustration of an exemplary proximity communication scenario showing the service-centric network architecture and key elements, where multiple application hosts (connected through an on-premises network, e.g., Ethernet or Wi-Fi, aboard a ship) communicate with the application clients running on the host (aboard another ship) through an MTC terminal. The connection to the maritime cloud network (where the NAM and the application servers reside) is through the wide-area MTC network (e.g., VDE-SAT) (cf. Figure 1), serving as a backhaul network for network management (e.g., service provisioning and updating).

1) SERVICE ADDRESSING
Network addressing is one of the major responsibilities of the network layer. In the current design, the destination is not the ''host of interest'' but the ''service of interest,'' highlighting the service-centric principle, the key to enabling efficient proximity IoT networking and the coexistence of a broad spectrum of applications on a single network. As such, unlike traditional host-oriented networks (e.g., the Internet), the MTC network layer is no longer responsible for interconnecting terminals within the network. Rather, only the services of interest are monitored, and the relevant messages are delivered to their destined applications through the corresponding socket channels.
Just like the Internet Assigned Numbers Authority (IANA) oversees global IP address allocation for the host-oriented Internet, the maritime IoT central authority, i.e., the ISR under the framework derived in Section V-C, serves as a trusted registrar for all maritime IoT applications and handles registration, authentication, authorization, and key management. A service provider registers itself with the ISR. Upon registration, service integrity is maintained until deregistration from the registry. The ISR maintains an updated repository or service registry of all applications available to the users and responds to client requests for services. It also keeps the status of the services provided by each application, which is expected to be instantiated, scaled, and terminated without manual intervention. In practice, the ISR, through an international federation of national maritime administration bodies (like the IMO), maintains a list of authenticated maritime applications and services, with an assigned universal service identifier or Service ID for uniquely identifying IoT applications and services, a key network component to efficiently implement the service-oriented architecture. This identity system provides unique identifiers for service recognition/identification and distribution/discovery (implementing the service addressing concept at the network level), as well as accountability and monitoring of the services being dispatched within the MTC network. Another advantage of using the Service ID is saving over-the-air overhead, which is crucial for MTC due to the narrowband limitation, as we have already seen in Figure 4 (and in most MTC systems, like NB-IoT and LoRa, touched upon in Section I).
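A toy sketch of the registrar role follows, under the assumption that the ISR hands out Service IDs sequentially from the 16-bit space. The method names are illustrative; real ISR operation also covers authentication, authorization, and key management, which are omitted here.

```python
class ServiceRegistry:
    """Minimal sketch of the ISR's service registry: assigns 16-bit Service
    IDs and resolves them back to provider locations (illustrative API)."""

    def __init__(self):
        self._by_id = {}
        self._next = 0

    def register(self, name, location):
        """Register a service; returns its assigned Service ID."""
        if self._next > 0xFFFF:
            raise OverflowError("16-bit Service ID space exhausted")
        sid, self._next = self._next, self._next + 1
        self._by_id[sid] = (name, location)
        return sid

    def resolve(self, sid):
        """Look up (name, location) for a Service ID, or None if unknown."""
        return self._by_id.get(sid)

    def deregister(self, sid):
        """Remove a service from the registry."""
        self._by_id.pop(sid, None)
```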

2) QoS FLOW
As aforementioned, the NAM entity under the current framework (from the solution in Section V-C) encompasses the functionality of radio resource (i.e., frequency band) allocation/selection, network information maintenance and provisioning, network adaptation and convergence, and application data transportation via the QoS flow construct, as illustrated in Figure 10. Through QoS profiling (based on the information provided by the ISR), the NAM provides service providers with an open API to facilitate different network capabilities tailored for different service requirements. The NAM associates each service with a QoS flow through which the service is supposed to be delivered. This information is provisioned to the MTC terminals in the form of the network layer's Control PDU, as illustrated in Figure 9, to facilitate service-centric networking.
Referring back to Figure 8, the NAM agent at the network layer of an MTC terminal is responsible for mapping the socket channel to the QoS channel per service profile. Each QoS flow with its associated attributes is uniquely defined, maintained, and provisioned by the NAM. As such, a socket associated with a specific service or application is identified by the Service ID in Figure 10, whereas a QoS channel is identified by the QoS flow associated with a group of Service IDs whose applications share similar QoS requirements. This structure lays down a framework for service-centric networking. Thus, the network layer decouples the air interface from the application by tunneling application traffic to the data link layer through QoS channels in the form of the network layer Traffic PDU. Under the central NAM, the NAM agents serve as decentralized NAM entities used by individual terminals to receive services. To distribute services between a centralized and a decentralized system, a NAM agent stays in touch with the central NAM (or a NAM proxy) for network and service updates through a backhaul transport network, such as a maritime wide-area communication system (e.g., VDE-SAT or VDE-TER [19]) or any other wide-area communication means, as illustrated in Figure 6 and Figure 9. These updates include the network resource (e.g., frequency bands) and application updates from the NAM, software upgrades from the service providers, and system updates from manufacturers. For instance, besides the default Channels 2027 and 2028 in the international maritime mobile band, the NAM may provision the terminal with frequency channels that become available for proximity MTC. The NAM may also opportunistically exploit other frequency bands, e.g., the 75-MHz spectrum in the 5.9 GHz band (allocated in some countries or regions for terrestrial intelligent transportation services), for proximity MTC at sea. A terminal may direct communicating peers to a new frequency band with more available bandwidth during a communication session to better meet the QoS requirements.
As mentioned before, the network layer is for delivering and receiving the messages of the service of interest; thus, only a Service ID is present in a network layer Traffic PDU header (tabulated in Table 7), serving, hence, as the Layer 3 address of the service-centric MTC network. A 16-bit Service ID can identify up to 65,536 maritime IoT applications. Since a Service ID is uniquely associated with an application and, hence, its profile (e.g., the default URL or server location (IP and port)), the network layer of a terminal has no problem figuring out which Service ID a socket is associated with (and vice versa) when a socket is created by an application.
A device signed up for one of these services maintains a peer socket to receive the service, and a terminal may maintain a plurality of sockets for various services. A receiving terminal then uses this list of socket channels as filters to monitor the incoming application traffic and screen out irrelevant application messages, i.e., the network Traffic PDUs whose destination (i.e., Service ID) does not match the corresponding specification of the active socket channels. This, in essence, implements how applications and services locate each other on a proximity network, i.e., service discovery. A socket channel is initially created per application for monitoring and sending discovery messages. The application may request additional socket channels (from the host operating system) to engage in unicasts or groupcasts to bootstrap a group communication session after discovery.
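The socket-channel filtering described above can be sketched as a simple match on the destination Service ID. The PDU representation as a dict and the IDs are assumptions for illustration.

```python
def filter_traffic(pdus, active_service_ids):
    """Screen incoming network Traffic PDUs: deliver only those whose
    destination Service ID matches an active socket channel."""
    return [pdu for pdu in pdus if pdu["service_id"] in active_service_ids]

# Two incoming PDUs; only Service ID 0x0101 is active on this terminal.
incoming = [{"service_id": 0x0101, "payload": b"pos"},
            {"service_id": 0x0BEE, "payload": b"other"}]
delivered = filter_traffic(incoming, {0x0101, 0x0102})
```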
As illustrated in Figure 11, a shipboard IoT device multicasts, e.g., navigation safety-related information to the neighboring traffic. Here, the term ''multicast'' is used in contrast to broadcast to highlight the ''distance addressing'' concept described in Section V, under which the message is only destined for the terminals within the neighborhood specified by the application. Once the message is received by another IoT device on which the application is installed and active, and a match is found, the network layer forwards the payload (of the network Traffic PDU) to the corresponding socket. A ship may require additional application-specific information, allowing further information exchange using unicast communication links and, if applicable, a groupcast link within a group, e.g., in a coordinated ship operation.

3) INTERNETWORK ADAPTATION
IP is becoming a ubiquitous protocol for communications and applications, and applications assume a connection to the Internet or a local IP network. The messages generated by the IoT application and received by the network layer of a terminal are expected to be IP-based datagrams, such as Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) datagrams. It is, therefore, essential for MTC to interwork efficiently with the IP network. TCP provides reliable, ordered, and error-checked end-to-end delivery between applications via an IP network [53]. The function of TCP is to control the transfer of IP packets so that it is reliable through connection management and reliability control, which simplifies the implementation of an application. Connection management includes connection initialization (a three-way handshake) and termination. TCP opens the connection and completes all the handshaking formalities before transferring the datagram. Hence, even a short datagram needs a minimum of seven message exchanges, introducing excessive overhead and delays. Reliability is achieved by a process known as automatic repeat request or ARQ, in which the sender detects lost segments and retransmits them. A retransmission of a TCP segment occurs after the segment is lost, i.e., when the sender does not receive the acknowledgment before a timeout. TCP is reliable in that the protocol checks that everything transmitted is delivered to the destined receiving end. Another issue with TCP is that it includes a network congestion control mechanism that treats unacknowledged or lost datagrams as an indication of network congestion. The protocol mitigates the congestion by slowing down the transmission after it occurs, as indicated by packet losses. This makes perfect sense for a wired IP network, where data loss is caused by routers dropping IP packets due to buffer overflow at the network nodes, the very cause of network congestion. However, data loss is normal in a wireless network like maritime MTC due to errors caused by notoriously unreliable wireless channels. Therefore, having the network layer handle data lost to the wireless channel at the physical layer is no longer appropriate or efficient.
On the other hand, UDP provides a connectionless datagram service that emphasizes reduced latency and overhead over reliability. UDP does not require creating a connection; a datagram is transferred without handshaking. It is unreliable because the protocol does not ensure the delivery of the IP packet to the destination. For this reason, UDP is a more suitable protocol for MTC (refer back to Figure 9), in which reliability is taken care of by the lower layers, i.e., the data link layer and the physical layer: reliability over an erroneous channel can be handled with greater efficiency through channel coding at the physical layer, where the errors most likely occur. Nevertheless, this does not prevent the application from adding its own ARQ for reliability (at the application level).
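The contrast is easy to see with the classic socket API: a UDP datagram goes out with no connection setup or teardown. This loopback sketch uses an OS-assigned port and an illustrative payload; nothing here is specific to MTC.

```python
import socket

# Receiver: bind a UDP socket; the OS picks a free loopback port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(5.0)
port = rx.getsockname()[1]

# Sender: a single sendto(), no three-way handshake, no teardown.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"collective-perception msg", ("127.0.0.1", port))

data, addr = rx.recvfrom(2048)   # one receive completes the exchange
tx.close()
rx.close()
```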
Regardless, both UDP and TCP rely on IP. The problem with IP is that it is itself costly: the 16-byte source and destination addresses included in an IPv6 packet, plus the 2-byte TCP or UDP source and destination port numbers, mean a minimum packet size of 36 bytes before any other overhead, i.e., the metadata required for routing and delivery. This IP overhead is a significant burden for MTC since most IoT applications are characterized by short-burst data services, in which case the conventional IP header compression [54], as often seen in human-type wireless communication systems (e.g., LTE) where the traffic is dominated by streaming media services, is no longer effective. A more aggressive compression approach is desirable to keep the overhead manageable, especially for a practical wireless system with limited bandwidth (e.g., 25 kHz for ASM, as shown in Figure 4) serving IoT devices with limited energy. Referring back to Figure 6, the air interface over the narrowband radio channel is surely the weakest link of the entire data exchange chain in terms of bandwidth and reliability; its limited capacity is the bottleneck of the MTC system, which is, therefore, inherently unsuited for IP traffic. The internetwork adaptation between the MTC network and the IP network is another key function of the NAM agent to ensure, at the network level, a minimum efficiency loss during the data exchange between these two distinct networks.
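The arithmetic behind the 36-byte figure, and the saving from a 2-byte Service ID, can be spelled out as pure bookkeeping; no assumptions beyond the header fields named in the text.

```python
IPV6_ADDR = 16   # bytes per IPv6 source or destination address
L4_PORT = 2      # bytes per TCP/UDP source or destination port

def addressing_bytes():
    """Two IPv6 addresses plus two ports vs. a single 16-bit Service ID."""
    ip_style = 2 * IPV6_ADDR + 2 * L4_PORT   # 36 bytes of addressing metadata
    service_id_style = 2                      # the compressed Service ID
    return ip_style, service_id_style

ip_ovh, sid_ovh = addressing_bytes()   # (36, 2): an 18x reduction
```

For a 96-byte application message, 36 bytes of addressing alone would consume over a quarter of the on-air budget, which is why the Service ID substitution matters on a 25 kHz channel.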
Therefore, the NAM agent at the network layer of an MTC terminal acts as a gateway that sits between the internal maritime proximity MTC network and the external IP network, aggregating traffic, performing internetwork adaptation, and routing it to the destination across the boundary of two very different networks. This is in accord with the maritime MTC service-centric principle, providing a resource- and energy-efficient means for MTC terminals to interact with IP-based applications without being overburdened by the resource- and energy-hungry wired protocols that drive the IP network. It becomes even more crucial for maritime IoT, recalling that maritime proximity MTC has very limited bandwidth in the extremely precious and scarce VHF band.
To reduce the IP overhead, the NAM agent at the source terminal receives the UDP/IP datagram from the application client through a dedicated socket channel, with a destination identified by the application server's default network location (IP plus port). The NAM agent extracts the payload (service data) from the datagram and places it in a network Traffic PDU, replacing the destination network location with the corresponding Service ID. It then pushes the network Traffic PDU down to the lower layers through one of the QoS channels for transport over the air interface. At the receiving end, the process is reversed: once the NAM agent receives the network Traffic PDU, it converts the Service ID embedded in the PDU back to the application's default network location, reconstructs the UDP datagram, and sends it to the application through the socket. If no match is found, i.e., no such service is active on this terminal, the network Traffic PDU is discarded.
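The outbound and inbound translations can be sketched as a table lookup provisioned by the NAM; the mapping, PDU representation, and addresses below are hypothetical.

```python
# Hypothetical NAM-provisioned mapping between a service's default network
# location (IP, port) and its 16-bit Service ID.
LOCATION_TO_SID = {("203.0.113.7", 4500): 0x0101}
SID_TO_LOCATION = {sid: loc for loc, sid in LOCATION_TO_SID.items()}

def to_traffic_pdu(dst_ip, dst_port, payload):
    """Outbound: strip the UDP/IP destination, carry only the Service ID."""
    sid = LOCATION_TO_SID[(dst_ip, dst_port)]
    return {"service_id": sid, "payload": payload}

def from_traffic_pdu(pdu, active_sids):
    """Inbound: restore (IP, port, payload), or discard if inactive."""
    if pdu["service_id"] not in active_sids:
        return None                      # no such service active: discard
    ip, port = SID_TO_LOCATION[pdu["service_id"]]
    return (ip, port, pdu["payload"])    # fields to rebuild the UDP datagram
```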
Note that the application layer is not native to the MTC system (see Figure 7 and Figure 9), and an MTC network's sole job is to deliver an application message to all peers of the same application in the network, recalling the service-addressing nature of the service-centric design. Once a message is delivered to the corresponding application, it is entirely up to the application to decide what to do with it, conforming to the service-centric principle that application-specific information is shielded from the underlying MTC system, allowing application-level encryption. The application may apply integrity and ciphering algorithms to incoming/outgoing messages (or choose not to). The application may discard a message it finds irrelevant, for instance, one meant for a different recipient defined within the application domain, information not accessible to the MTC system.
This service-centric networking framework a) breaks the bond between the application and the air interface, b) provides the data link layer with the QoS information for QoS control, c) interworks with IP networks efficiently, d) enables a single MTC terminal to host multiple applications, possibly from different service providers, providing end-to-end service delivery via a QoS flow and solving the ''one service provider, one device'' issue, and e) facilitates centralized or federated maritime IoT application supervision and maintenance.

C. DATA LINK LAYER
Under the current protocol framework in Figure 8, with QoS channels that define what type of data is transported, the data link layer uses logical channels to define how the PDU is transported to its peer(s) through the physical channel with the required QoS. To this end, the data link layer depends on a suite of protocols. In this paper, we focus on the most relevant QFC and MAC functions/protocols and their PDUs.
The data link PDU includes a header (see Figure 8) that contains a Layer 2 address in the Source ID field. In the maritime MTC context, it is a four-byte MMSI, as shown in Table 7. A one-bit field at the beginning of the header indicates the presence of a Source ID. This Source ID is needed for peer-to-peer connectivity at the data link layer per QFC instance, such as ARQ. The Layer 2 source address is also needed for broadcasting with QFC segmentation to avoid ambiguity during reassembly at a receiving QFC entity, recalling that a QFC instance may handle multiple services, possibly from different stations but belonging to the same QoS channel.

1) QoS FLOW CONTROL INSTANCE
As aforementioned, a QFC instance is created in association with a QoS channel, characterized and identified by a QoS flow inherited from the QoS channel. It is in charge of the delivery and reception of QFC Traffic PDUs of this QoS flow, as diagrammed in Figure 12. A QFC instance is released when its associated QoS channel is deactivated.
The quality of a particular service (i.e., complete, unique, valid, and timely) per the service requirements is managed by the QFC instance. For example, the retransmission request procedure is performed by paired QFC instances to deliver and receive QFC Traffic PDUs with the required QoS through the MAC entity. Specifically, PDUs from a QFC instance are numbered sequentially with a sequence index included in the QFC PDU header tabulated in Table 3. When the peer entity receives the QFC Traffic PDUs, it arranges them in its receiving buffer back in order (if received out of sequence) by checking the Sequence Index embedded in the header of each QFC Traffic PDU. If no missing PDU is detected, it reassembles the QFC SDUs and delivers them to the network layer through the corresponding QoS channel of the same QoS flow, in sequence if so required, marking the cessation of the QoS flow. Otherwise, it either discards the QFC SDU (if the service has no reliability requirement) or waits until the reordering timer expires and then requests retransmission of the missing PDUs (identified by their sequence indices) from its peer QFC instance, identified by the Layer 2 address (i.e., the MMSI), via the QFC Signaling PDU (if the service has a reliability requirement). Remember that a Layer 2 source address is embedded in a data link PDU header (cf. Table 7), hence known by the receiving QFC instance. When the transmitting instance receives the notification, it locates the requested QFC Traffic PDU in the PDU buffer and transmits it to the requesting QFC instance at that address.
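The receive-side bookkeeping above (in-order delivery plus identifying missing sequence indices for a retransmission request) can be sketched as follows; the class and method names are illustrative, and timer handling is omitted:

```python
class QfcReceiver:
    """Sketch of receive-side QFC reordering (names are illustrative)."""

    def __init__(self):
        self.buffer = {}          # sequence index -> QFC Traffic PDU payload
        self.next_expected = 0    # next in-sequence index to deliver

    def on_pdu(self, seq: int, payload: bytes) -> list:
        """Store a PDU; return any in-sequence payloads ready to deliver."""
        self.buffer[seq] = payload
        delivered = []
        while self.next_expected in self.buffer:
            delivered.append(self.buffer.pop(self.next_expected))
            self.next_expected += 1
        return delivered

    def missing(self) -> list:
        """Sequence indices to request from the peer (addressed by its
        MMSI) once the reordering timer expires."""
        if not self.buffer:
            return []
        highest = max(self.buffer)
        return [s for s in range(self.next_expected, highest)
                if s not in self.buffer]
```

The buffer fills in any order over the air; delivery to the network layer only advances through contiguous sequence indices, which is what preserves in-sequence delivery when the service requires it.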
The transmitting QFC instance composes QFC Traffic PDUs from its QFC SDU (i.e., the network layer Traffic PDU or the data link SDU) buffer. When the QFC SDU size is greater than the maximum MAC SDU size (restricted by the maximum physical layer SDU size), it is broken down into multiple segments tagged with sequence indices, which are sent in multiple QFC PDUs. For reassembly at the receiving end, each QFC Traffic PDU is also tagged with a segmentation status (see Table 7) to indicate the segment's position in the original QFC SDU after segmentation. With the QFC Traffic PDU header and the source MMSI in the data link PDU header (recollecting that QFC handles service data with the same QoS flow, possibly from different terminals), the QFC peer on the receiving end can reassemble these QFC SDUs to recover the original network layer PDU once all the fragments are received.
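A sketch of the segmentation and reassembly just described; the segmentation status is encoded here as "first"/"middle"/"last"/"unsegmented" strings for readability (the actual field coding in Table 7 may differ):

```python
def segment(sdu: bytes, max_sdu: int) -> list:
    """Split a QFC SDU into segments tagged with a segmentation status
    marking each segment's position in the original SDU."""
    if len(sdu) <= max_sdu:
        return [("unsegmented", sdu)]
    chunks = [sdu[i:i + max_sdu] for i in range(0, len(sdu), max_sdu)]
    status = ["first"] + ["middle"] * (len(chunks) - 2) + ["last"]
    return list(zip(status, chunks))

def reassemble(pdus: list) -> bytes:
    """Receiver side: concatenate the segments once all fragments
    (identified by source MMSI and sequence indices) have arrived."""
    return b"".join(chunk for _, chunk in pdus)
```

This is what lets an application send a message of arbitrary size over an air interface whose physical layer SDU is strictly bounded.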
Therefore, the QFC sublayer enables a) delivery of service data with specific QoS requirements via the QFC instance construct, b) service data quality (completeness) through retransmission, and c) segmentation and reassembly, allowing an application to send or receive a message of any size over the proximity MTC network.

2) MEDIUM ACCESS CONTROL ENTITY
The MAC entity of the MAC sublayer manages or supervises the physical delivery of all QFC PDUs over the physical layer and ensures the corresponding QoS requirements are met. Before a data link PDU is passed on to the physical layer for transmission, the MAC entity must perform transmission resource allocation and determine the corresponding modulation and coding scheme (MCS) by taking into account the radio channel conditions and the QoS requirement of the QFC PDU. Also of primary concern for battery-powered or energy-harvesting IoT applications are the MTC terminal transmission power constraint and energy conservation, which strongly affect how resources are allocated and how the MCS is selected. For distributed resource allocation, random medium access and sensing-based medium access strategies (e.g., SO-TDMA) are used for contention-based medium access control.
Depending on the type of the QFC PDU, some QFC PDUs may involve peer-to-peer connections at the QFC instance level. For example, if the QFC PDU from a QFC instance is a QFC Signaling PDU, like the QFC Retransmission Request PDU with an implied destination (the requestee), it invokes a peer-to-peer connection with an automatic retransmission request or ARQ (up to three requests per new transmission) at the MAC sublayer level. Hence, Table 7 defines the MAC Traffic PDU with the choices of without (P2P=0) and with (P2P=1) a sequence number and a Destination ID (a Layer 2 address, i.e., MMSI) to facilitate ARQ at the MAC sublayer. The Layer 2 address implements ''terminal addressing'' under the current framework for peer-to-peer connection at the data link level and is embedded in the data link PDU header (see Figure 8 and Table 7).
The MAC Feedback PDU is also defined in Table 7. It is used for feeding back information from a receiver to the transmitter in peer-to-peer communication, including acknowledgment for reliability control and a channel quality indicator for channel adaptation. Without a dedicated logical channel by design, the MAC Feedback PDU is channeled by TCH and, hence, can be piggybacked with MAC Traffic PDUs under the protocol structure in Figure 8. This is an example where MAC Signaling PDUs via TCH are multiplexed with other MAC PDUs into a single data link PDU and transmitted over a PTCH with great versatility and efficiency, a ''luxury'' that AIS lacks.
The slot assignment or acquisition for the pending transmission burst is through an autonomous medium access mechanism employed by the MAC entity. Information exchange between MAC peers may be needed, depending on the medium access scheme. It is done through one of the MAC Signaling PDUs, e.g., the Slot Reservation PDU, transported via TCH, and can be piggybacked with MAC Traffic PDUs and other MAC Signaling PDUs (e.g., the Feedback PDU). Table 7 defines the Slot Reservation PDU for SO-TDMA, where the timeout field indicates the number of reservations yet to be used for future transmissions.
The MAC protocol in charge of the resource assignment employs a ''MAC-PDU scheduler,'' or simply MAC scheduler, for multiplexing the MAC PDUs from TCHs onto a data link PDU and then onto a properly configured PTCH through the MAC Burst Configuration PDU (Table 7). The MAC scheduler considers multiple factors regarding not only energy and spectral efficiency but also the specific service requirements (i.e., QoS) within the physical constraints, including the available frequency resources (e.g., frequency channels), radio link conditions (channel quality), and terminal communication capabilities (e.g., maximum transmit power, the highest and lowest MCS supported). For example, a high-priority service is usually assigned a configuration with low-order modulation and a low code rate to enhance signal robustness, whereas a low-power (e.g., energy-harvesting) transmitter is assigned an extended TTI to boost the burst energy.
The MAC Burst Configuration PDU containing the configuration of the current burst is channeled through the dedicated ACH and transmitted on the dedicated PSCH. A burst configuration includes the size (number of bits) of the physical layer PDU and the burst length (in slots). It also includes the PTCH MCS for channel adaptation to achieve high spectral efficiency and the QFC PDU QoS requirement associated with the QoS channel (addressing Issue B.3). As such, a set of burst configurations is predefined, each of which is indicated by a 6-bit identifier, termed Link ID. The MAC header for the Burst Configuration PDU is omitted for best signaling efficiency without causing ambiguity with other MAC Signaling PDUs, thanks to the dedicated logical channel (ACH) and physical channel (PSCH with a fixed MCS configuration).
Figure 13 illustrates a scenario where a MAC-PDU scheduler [19] multiplexes a MAC Feedback PDU and two MAC Traffic PDUs into a single burst payload. After a MAC Traffic PDU is successfully received, the QFC instance of the PDU (Table 7) is matched against the list of active QFC instances as a filter to screen out incoming irrelevant QFC PDUs (Traffic or Signaling) without further processing. This prevents irrelevant service messages from being uploaded to the network layer, recalling that service messages go through the final screening process at the network layer to complete the end-to-end service delivery, as noted in Section V-B. If the QFC instance matches one of the active QFC instances, the P2P field of the MAC Traffic PDU is further checked to see if it is a peer-to-peer MAC Traffic PDU. If it is (P2P=1), the destination field is checked against the local MMSI. If it matches, the QFC PDU is extracted and passed on to the corresponding QFC instance; otherwise, the MAC PDU is discarded, as diagrammed in Figure 14.
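The screening logic of Figure 14 can be sketched as a simple predicate; the dictionary field names are illustrative stand-ins for the Table 7 fields:

```python
def accept_mac_pdu(pdu: dict, active_qfc: set, local_mmsi: int) -> bool:
    """Return True if the embedded QFC PDU should be passed up to the
    corresponding QFC instance; False means the MAC PDU is discarded.
    Field names (qfc_instance, p2p, dest_mmsi) are illustrative."""
    if pdu["qfc_instance"] not in active_qfc:
        return False        # irrelevant service: drop without processing
    if pdu["p2p"] == 1 and pdu["dest_mmsi"] != local_mmsi:
        return False        # peer-to-peer PDU addressed to another terminal
    return True
```

Note the two-stage filtering: service relevance first (cheap, protects the terminal from unrelated traffic), then terminal addressing only for peer-to-peer PDUs.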

3) RESOURCE ALLOCATION
Finally, since proximity MTC operates in a self-organizing manner without the benefit of centralized resource allocation [19], the MAC scheduler employs a distributed resource allocation scheme, the SO-TDMA mechanism, to autonomously acquire and secure the resources for the transmission burst, minimizing the collision probability among neighboring transmitters. SO-TDMA is a virtual sensing-based medium access method in which a terminal uses the information extracted from the transmissions of neighboring terminals, i.e., the channel usage information embedded in a MAC Signaling PDU (i.e., the MAC Slot Reservation PDU) of a transmission burst. The relevant information is collected through such a sensing process to predict the future channel status in terms of which slots are to be occupied by other terminals to avoid collisions. As illustrated in Figure 15, a MAC scheduler uses a sensing window to detect bursts on a relevant frequency channel to determine resources available for selection. The length of the selection window depends on the application properties, e.g., latency and periodicity (if any).

FIGURE 15. Illustration of virtual sensing with a sensing window and a resource scheduling window used by a MAC-PDU scheduler for resource selection.

FIGURE 16. Burst success rate vs. network offered load, where 90 percent of the services produce periodic messages, and 10 percent output bursty messages. A ''successful burst'' in the plot means that the QFC PDU carried by the burst is correctly recovered by the receiver with its QoS met, whereas Slotted Aloha randomly selects slots, and QoS is not enforced in the plot.
By extracting the MAC Slot Reservation PDUs from the detected bursts, the terminal obtains the channel reservation information of other terminals and builds a resource map that indicates which resources or slots in the selection window are to be occupied by other terminals. The goal is to identify available candidate slots and make them available when the MAC scheduler calls on them, i.e., when a MAC PDU enters the MAC scheduling queue. Note that a terminal typically cannot receive (or sense) while transmitting on the same resource (due to self-interference); therefore, detecting a collision or failed transmission and correcting it (i.e., retransmitting) takes an extended time (e.g., a timeout, the amount of time allowed to pass before the sender gives up waiting for an acknowledgment). As such, the scheduler uses a resource selection algorithm to determine the best slots to accommodate the PDU in the sense of minimizing potential collisions (or interference) and, hence, maximizing reliability (i.e., the probability of successful transmission before the deadline). The Appendix describes this algorithm in greater detail, highlighting QoS (latency and reliability) control.
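A sketch of building the resource map from overheard Slot Reservation PDUs and listing candidate slots. The reservation format here is a simplifying assumption: each is modeled as (start_slot, interval, timeout), with "timeout" further reservations of the same slot spacing remaining, loosely following the timeout field described earlier:

```python
def reserved_slots(reservations, window_start, window_len):
    """Slots in [window_start, window_start + window_len) claimed by
    neighbors, per the overheard MAC Slot Reservation PDUs."""
    busy = set()
    for start, interval, timeout in reservations:
        for k in range(timeout + 1):   # current use plus future repeats
            slot = start + k * interval
            if window_start <= slot < window_start + window_len:
                busy.add(slot)
    return busy

def candidate_slots(reservations, window_start, window_len):
    """Resource map complement: slots available for selection."""
    busy = reserved_slots(reservations, window_start, window_len)
    return [s for s in range(window_start, window_start + window_len)
            if s not in busy]
```

The scheduler would then rank these candidates (e.g., by predicted interference) rather than pick the first one, but the map itself is just this set complement.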
Since the sensing process takes time to obtain a complete and reliable picture of channel status, depending on the application QoS (e.g., the latency requirement), the terminal may continuously monitor the channel activity in the background (when not transmitting) for latency-sensitive event-driven bursty services (i.e., messages triggered by random events). Referring to Figure 15, the MAC PDU enters the scheduling queue at t; the selection window can start no earlier than t + δ, i.e., the earliest slot boundary. Depending on the processing time (for, e.g., coding and modulation) and the position of t relative to the slot boundary, δ can be as much as one slot (plus processing time). Therefore, the delay between the time the MAC PDU arrives and the time it leaves the antenna ranges from T_slot + δ to F + T_slot + δ, where F denotes the selection window length, and the scheduler must select the right window size to satisfy the QoS requirement. Obviously, due to its non-periodic and random nature, virtual sensing-based MAC, hence the MAC Slot Reservation PDU, does not help avoid collisions with this type of transmission. Therefore, sensing-based resource allocation only works for transmitting periodic messages and is not meant for random bursty service messages, but it does help bursty messages avoid collisions with the periodic messages, edging out the non-sensing-based Slotted Aloha.
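Since the worst-case delay is F + T_slot + δ per the discussion above, the largest selection window that still meets a latency budget follows directly; a minimal sketch (all quantities in the same time unit, function name illustrative):

```python
def max_selection_window(latency_budget: float, t_slot: float,
                         delta: float) -> float:
    """Longest selection window F satisfying
    F + T_slot + delta <= latency_budget (worst-case delay bound)."""
    f = latency_budget - t_slot - delta
    if f <= 0:
        raise ValueError("latency budget too tight for this slot timing")
    return f
```

A larger F gives the scheduler more candidate slots (better collision avoidance) at the cost of delay, so the scheduler picks F at or near this bound for latency-sensitive services.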
Figure 16 shows the QoS-enhanced SO-TDMA's performance in a proximity network, where 90 percent of the services generate periodic messages and 10 percent produce bursty event-driven messages. A ''successful burst'' in the plot means that the QFC PDU carried by the burst is correctly recovered by the receiver with its QoS met, whereas, as a contrasting reference, Slotted Aloha randomly selects slots, and QoS is not enforced in the plot.
In summary, the MAC sublayer manages and supervises the actual physical delivery of QFC PDUs over the physical channels, capable of a) adaptation to radio link conditions, b) resource and power management, c) peer-to-peer communication with best-effort QoS control per QFC instance, d) incoming QFC PDU screening to seamlessly accommodate multiple applications, e) best effort to satisfy the QFC PDU's QoS requirement, and f) autonomous slot allocation with QoS.

D. PHYSICAL LAYER
In a nutshell, the physical layer is responsible for physically carrying the data link PDU over the wireless medium via a physical waveform termed a transmission burst (cf. Figure 7 and Figure 8), with adaptability to the medium condition and the QoS requirement, controlled by the MAC entity of the data link layer. The integrity of the data link PDU (channeled through TCH) is protected by a reliable 24-bit CRC in the physical layer PDU trailer.
First, the transmission of a burst requires physical transmission resources (i.e., frequency and time). For the convenience of discussion and ease of understanding, we assume that the current proximity physical layer design adopts the ITU-ASM spectrum (Figure 4) and the same transmission time structure as AIS, i.e., the slotted-Aloha TDMA structure. Furthermore, we assume the same slot structure and network timing (synchronized to the UTC minute epoch) for all maritime MTC components, including the satellite and terrestrial components in the maritime MTC network (see Figure 6), through the mechanism discussed in 3) of this section; a slot is the smallest medium access resource element as in AIS.

FIGURE 17b. The two streams of parity bits from the turbo encoder are first interleaved individually and then interlaced bit by bit into the buffer. The number of bits given by the payload is read out sequentially from the start of the buffer. If the end of the buffer is reached, simply wrap around to the start until the required number of bits is acquired [19].
FIGURE 18. Forward-error-correction coding performance for short and long information blocks under convolutional coding (CC), tail-biting convolutional coding (TBCC), and turbo coding (TC), where Eb/N0 stands for the receive signal-to-noise ratio per information bit [19].

Second, the adaptability of the transmission burst necessitates burst configuration information at the receiving end. Unlike the transmission in a communication network with centralized medium access control
(e.g., VDE-SAT/TER, where such information is explicitly broadcast on the downlink signaling channel by the control station [19]), the transmission burst of a self-organized proximity MTC network must be a self-contained, stand-alone signal, meaning that this information must be embedded in the burst. As briefly touched upon earlier (referring back to Figure 8), two physical channels are created: PTCH for carrying the data link layer PDU and PSCH for conveying the MAC Burst Configuration PDU to its peers within range.
The generation of a transmission burst is diagrammed in Figure 17a. We start with the physical channels.

1) PHYSICAL CHANNELS
PSCH is dedicated to the MAC Burst Configuration PDU (channeled through ACH), indicating the burst configuration necessary for decoding the PTCH that carries the MAC Traffic PDU or Signaling PDU (channeled via TCHs). Unlike PTCH, PSCH is non-adaptive (i.e., fixed MCS) by design and, hence, needs to be sufficiently robust, to the extent that it essentially determines the maximum proximity; hence, only a small payload is allowed to keep the overhead manageable. It is indicated by a 6-bit Link ID defined in Table 7. This 6-bit short PDU is encoded with a rate-1/3 tail-biting convolutional code (TBCC) to provide a higher coding gain than the turbo code for short bit sequences [19], as shown in Figure 18. The code bits are repeated three times to provide extra processing gain to increase signal robustness and boost signal energy to support low-power application deployments. They are then modulated onto 27 QPSK symbols to synthesize the PSCH. Note that an undetected Link ID error causes no extra damage to PTCH decoding: a Link ID error will fail the decoding of PTCH regardless of whether the error is detected, and the PDU (carried by the PTCH) is protected by a powerful 24-bit CRC.
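A quick sanity check of the PSCH sizing stated above: a 6-bit Link ID, rate-1/3 TBCC, threefold repetition, and QPSK (2 bits per symbol) yield exactly 27 channel symbols:

```python
# PSCH sizing arithmetic per the text (all figures from the design above).
LINK_ID_BITS = 6
code_bits = LINK_ID_BITS * 3    # rate-1/3 TBCC: 3 code bits per info bit
tx_bits = code_bits * 3         # threefold repetition: 54 transmitted bits
symbols = tx_bits // 2          # QPSK carries 2 bits per symbol
print(symbols)
```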
According to the information (burst duration and MCS) given in the MAC Burst Configuration PDU, the PTCH is constructed by turbo-encoding (at a code rate of 1/3) the physical layer PDU for forward-error-correction coding over error-prone wireless channels [19]. The code bits are stored in a circular buffer, shown in Figure 17b. Depending on the PTCH payload (a function of the burst configuration given by the MAC Burst Configuration PDU), not all code bits in the buffer may be transmitted, leading to an actual code rate higher than the base code rate of 1/3. On the other hand, if the end of the buffer is reached before sufficient code bits are acquired to fill the payload, the read-out simply wraps around to the start until the payload is full, leading to an actual code rate lower than 1/3 for boosting the burst energy, which is useful for low-power transmitters or range extension. This rate-matching process (i.e., matching the actual bit rate that the current PTCH offers) provides a unified structure to achieve an arbitrary channel coding rate for adaptive MCS. Therefore, it facilitates a code rate that matches the actual rate the PTCH payload offers, leaving no unused/wasted channel symbol.
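The circular-buffer read-out reduces to a modulo index; a minimal sketch (the interleaving/interlacing of the parity streams shown in Figure 17b is omitted for brevity):

```python
def rate_match(code_bits: list, payload_bits: int) -> list:
    """Read exactly payload_bits bits from the circular buffer of
    mother-code bits, wrapping around when the end is reached.
    Reading fewer bits than the buffer holds raises the effective code
    rate above the mother rate; wrapping past the end lowers it."""
    n = len(code_bits)
    return [code_bits[i % n] for i in range(payload_bits)]
```

One structure thus serves every MCS: puncturing (payload smaller than the buffer) and repetition (payload larger than the buffer) are both just different read lengths.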
Conceivably, the proximity range is dictated by the transmit power and MCS prescribed (by the MAC scheduler according to the application requirements) and ultimately maxed by the transmission range of PSCH.

2) TRANSMISSION BURST
A transmission burst is used for carrying the physical channels. Since the transmission of a self-organized proximity MTC network must be a self-contained, stand-alone signal, it must contain all the information necessary for a receiver to detect and receive the signal and, ultimately, recover the message. The transmission burst is created just for this purpose.
First of all, a burst must be transmitted on a physical frequency spectrum authorized by the radio spectrum regulator. For a burst transmitted on the ASM spectrum for maritime proximity-based IoT applications, with the frequency channels assigned shoulder to shoulder with those of legacy AIS (see Figure 4), it has no choice other than to run at the same low channel symbol rate as AIS, i.e., 9,600 symbols per second, to protect the AIS from interference by nearby ASM transmitters (see Figure 19), recalling that AIS is vulnerable to adjacent channel interference due to the lack of a robust physical layer design.
Secondly, the transmission burst must be spectrally shaped at the transmitter to satisfy the emission mask imposed by the spectrum regulator.
101232 VOLUME 11, 2023 Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.
As for QPSK/QAM modulation symbols, the current design uses a square-root raised cosine pulse. Since a practical pulse must be time-bounded, the pulse is no longer frequency-bounded after time-domain truncation. Larger values of the roll-off factor incur less spectral regrowth; thus, the truncated pulse with a roll-off factor of 1 is chosen to ensure minimal interference to AIS.
The ramp-up preamble (four channel symbols) and ramp-down postamble (four symbols) are prepended and appended to a burst to provide smooth power-on and power-off transitions for burst spectrum shaping at the transmitter. The resultant power spectrum density of the burst is plotted in Figure 19. At a receiver, this preamble and postamble provide an extra time buffer of up to four symbols (416 µs) for collision protection between bursts caused by propagation delay disparities between transmitters, as well as timing errors (detailed later). In addition, the ramp-up preamble helps the automatic gain control (AGC) converge before the sync word starts.
Thirdly, the transmission burst must facilitate time-and-frequency synchronization and tracking at the receiver.
As illustrated in Figure 8, following the ramp-up is the sync word, a unique predefined waveform for a receiver to detect and time-and-frequency synchronize to the incoming burst and estimate the channel for coherent demodulation of the physical channels. For client-server IoT applications using a cellular communication infrastructure, a terminal synchronizes to the cellular infrastructure network using a dedicated downlink synchronization signal or beacon consistently transmitted (by the control station) before communicating with the control station. For client-client proximity-based IoT applications with a self-organized networking structure, a preamble (like the sync word in Figure 8) is essential for random bursty transmissions arriving at a receiver with disparate delays and frequency offsets. This potentially large time and frequency uncertainty is a serious challenge for successfully detecting a transmission burst since it is difficult for a receiver to obtain an accurate estimate of frequency offset and symbol timing from a limited preamble duration, especially when GPS is unavailable. Furthermore, accurate carrier phase estimation is also necessary for coherent demodulation of the physical channels. Hence, the sync word in the current design is doubled to 64 channel symbols.
Moreover, as previously pointed out, one of the issues with wireless channels is the channel variability that may cause the received signal phase to change during a burst, jeopardizing the coherent demodulation of the modulation symbols. This variability includes the potential Doppler frequency shift that cannot be eliminated or reduced by a more accurate frequency source (e.g., GPS). It can only be estimated and compensated on the fly using reference signals. Pilots are thus inserted into the payload (see Figure 8) to help a receiver track the radio channel variation and reduce synchronization and channel estimation errors during the payload period. As part of the burst structure, the pilot pattern is adaptive and inferred from the burst configuration, providing the MAC extra adaptability to channel variabilities.
Finally, following the sync word is the payload of the transmission burst that carries PSCH and PTCH. PSCH precedes PTCH and includes the MAC Burst Configuration PDU (i.e., the Link ID, indicating the burst duration, MCS, and pilot pattern of the PTCH) necessary for the receiver to recover the data link PDU from the PTCH.

3) NETWORK TIMING ESTABLISHMENT
As mentioned, the current design adopts the slotted Aloha TDMA structure synchronous with UTC (Figure 5). When a terminal is endowed with a GPS receiver, it may synchronize directly to UTC using, e.g., the ''1-pulse-per-second'' (or 1PPS) signal output from the GPS receiver. When information from the GPS is interrupted due to a GPS outage or the lack of a GPS receiver (in most low-cost or battery-powered application scenarios), an MTC terminal may acquire UTC timing from a control station, e.g., a VDE-SAT space station or a VDE-TER shore station. In either case, the ramp-up/down alone is sufficient to absorb delays and timing errors, and hence, no guard period (see Figure 8) is needed. To acquire the network slot timing, an MTC terminal sends a Slot Timing Request in a MAC Random Access PDU (Table 7) to its peer on the uplink of the VDE-SAT or VDE-TER network. The control station (within coverage) responds with a MAC Timing Control PDU, from which the terminal derives the slot timing, inferred from the timing offset field of the PDU [19].
A guard period (six symbols) is only present (implied by the MAC Burst Configuration) in the rare case when neither GPS nor VDE-SAT/TER is available; a terminal stays with the most accurate timing source available. In particular, the terminal keeps itself aware of the synchronization hierarchy in proximity and synchronizes with the terminal with the highest synchronization stratum. This behavior allows a terminal to stay as close to UTC as possible. A station in sync with GPS has the highest stratum. The absence or presence of a guard period implicitly indicates a high or low stratum, respectively. Thus, unlike AIS (Table 3), no explicit transmission of the stratum is necessary. If more than one station is found to have the same highest stratum in its neighborhood, the terminal synchronizes to the station with the smallest MMSI number. This rule prevents the formation of an endless synchronization loop.
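The sync-source selection rule above reduces to a max-then-min over the overheard neighbors; a minimal sketch, where a neighbor is an assumed (mmsi, stratum) pair and, following the paper's convention, a larger stratum value means closer to UTC:

```python
def pick_sync_source(neighbors):
    """Choose the timing source: the neighbor with the highest stratum,
    ties broken by the smallest MMSI (preventing synchronization loops).
    neighbors: list of (mmsi, stratum) pairs; returns the chosen MMSI,
    or None if no neighbor is heard."""
    if not neighbors:
        return None
    best_stratum = max(stratum for _, stratum in neighbors)
    tied = [mmsi for mmsi, stratum in neighbors if stratum == best_stratum]
    return min(tied)
```

Because every terminal applies the same deterministic tie-break, all terminals in mutual proximity converge on the same timing source.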
In summary, the physical layer delivers the following improved functions: a) various modulation and coding configurations that facilitate adaptive modulation and coding schemes for proximity range control to support the ''distance addressing'' concept and much improved energy and spectral efficiency and, more importantly, empower the data link layer to deal with various QoS requirements over unreliable wireless channels, b) an enhanced burst structure that improves the synchronization and tracking capability, especially beneficial to low-cost and energy-limited maritime IoT applications, and c) a robust network timing acquisition mechanism that simplifies network timing acquisition, improves communication performance, and saves energy and communication resources in the absence of GPS.

VII. CONCLUSION
The Internet of Things is all about services; service-centric MTC is thus at the heart of the IoT revolution. Wide-area MTC (NB-IoT, LoRa, VDE-TER, and VDE-SAT) has been extensively studied. In contrast, proximity MTC drew little attention until recently, when this type of MTC became increasingly important as MASS emerged as a promising concept poised to fundamentally revolutionize the maritime shipping industry. As pointed out, the legacy AIS, the closest available technology, is far from up to the task. This paper thus focuses on proximity MTC, demonstrating a comprehensive, practical service-centric design, an important and unique aspect of proximity-based IoT that has not yet been covered in the literature, especially the service-centricity aspect, which is more complex on the terminal side than under a centralized infrastructure-based wide-area MTC network. It helps fill the gap on how the different components, from the top network layer to the bottom physical layer, fit together to deliver desired functionalities through a concrete design under the maritime IoT paradigm.
This paper differentiates proximity MTC networking from wide-area MTC networking, explicitly addresses the unique issues pertaining to proximity-based IoT, and provides a comprehensive service-centric design of the most relevant elements neglected by the literature. We show the significance of proximity MTC and its unique role in virtual sensing and swarm intelligence through the MASS services. The paper first presents a critical summary of the widely used legacy AIS technology in the maritime domain, highlights the issues to watch out for in future maritime MTC technology development and standardization, and then sets out a comprehensive framework to address these issues. Specifically, this paper uses the legacy AIS as a contrasting reference to motivate a new paradigm of communication and networking for maritime proximity-based IoT applications and services as an essential component of the larger MTC framework, a sorely needed addition to the well-established infrastructure-based wide-area communication and networking technology for emerging proximity-based IoT applications.
Throughout the paper, the design strategy revolves around the service-centric concept to solve the unique challenges in proximity-based IoT, such as service discovery, terminal association, and QoS control, through 1) a service-centric architecture with key network elements, 2) three-layered address resolution, and 3) QoS flow control. First, since the signal transmitted by an MTC terminal reaches every terminal within the transmission range dictated by the Friis equation, and this range is controllable through QoS as per the application, this physical-distance addressing (i.e., Layer 1 addressing), essentially proximity broadcasting, realizes the ''distance-of-interest'' functionality that accommodates the proximity characteristic of proximity-based IoT applications. Second, the network layer provides end-to-end service delivery through service addressing and the QoS flow, implementing the ''service-of-interest'' concept (replacing the traditional ''host-of-interest'' concept) as per the service-centric networking principle. Under the current design, service addressing is based on the federated Service ID, the Layer 3 address, for uniquely and efficiently profiling, identifying, and discovering applications, decoupling the service from the traditional IP address. Service profiling allows service-specific QoS control using the QoS flow construct, aligning with the service-centric principle of IoT and differentiating it from conventional host-centric networking; the QoS flow differentiates services by QoS per the application profile. Furthermore, this service-addressing strategy provides a natural means for 1) service discovery and 2) service network formation in a dynamic proximity environment. In contrast to AIS, the current protocols are structured to inter-work in an independent (but integrated) and well-defined (but highly versatile) manner to fulfill proximity MTC needs, facilitating dynamically tailorable, service-centric virtual information networks shared by relevant IoT applications. Third, the virtual QoS flow construct is realized through a QoS channel managed by a respective QFC instance at the data link layer; the channel is established to discretely ensure the delivery of each network-layer PDU per its QoS requirement via a unicast or multicast communication link between the QFC instance peers, based on the Layer 2 address for terminal addressing.
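As a minimal illustration of the Service-ID-based addressing and QoS profiling summarized above, the following Python sketch resolves a Service ID to its QoS channel; all identifiers, IDs, and class names are hypothetical and not part of the design specification.

```python
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    service_id: int   # federated Service ID: the Layer 3 address
    qos_class: str    # QoS class taken from the application profile

# Toy service registry; the IDs and class names are invented for illustration.
registry = {p.service_id: p for p in [
    ServiceProfile(0x0101, "latency-critical"),  # e.g., position reporting
    ServiceProfile(0x0202, "best-effort"),       # e.g., weather updates
]}

def qos_channel_for(service_id: int) -> str:
    """Resolve a Service ID carried in a network-layer PDU header to the
    QoS channel (QFC instance) its profile requires -- no host/IP involved."""
    return registry[service_id].qos_class

assert qos_channel_for(0x0101) == "latency-critical"
```

The point of the sketch is that demultiplexing keys on the service, not on a host address, so the same lookup works regardless of which terminal hosts the application.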
Putting all these elements together leads to a coherent, seamless solution for service-oriented, proximity-based MTC, which we hope will invite comparisons, discussions, and critiques on the way to an ultimate solution.

APPENDIX SO-TDMA FOR SERVICE-CENTRIC PROXIMITY MTC
In this appendix, we describe self-organizing TDMA, or SO-TDMA, for autonomous resource allocation. It allows slot sharing in a TDMA fashion among multiple terminals in proximity, with enhanced features for service-centric proximity MTC.
There are mainly two types of IoT applications in terms of message periodicity: periodic and bursty. The former produces messages in a deterministic periodic fashion, whereas the latter produces messages in an event-driven, random, bursty (nonperiodic) manner.
When the MAC resource selection protocol is triggered by a MAC PDU entering the scheduling queue at a time instant (indicated by a red arrow in Figure 20), the terminal (Terminal A in Figure 20) autonomously selects the resources, or slots, for transmitting the MAC PDU using a distributed channel/resource allocation mechanism in a communication environment where resources are shared in a TDMA fashion among multiple terminals. A slot is the resource granularity for message transmission scheduling.
First, let us assume the MAC PDU includes a QFC PDU from a QoS instance that handles applications of a bursty nature. To this end, a corresponding selection window of length F slots is established, denoted Θ_ℓ = θ_ℓ^F, where θ_ℓ^F contains |θ_ℓ^F| = F consecutive slots starting from slot ℓ, as graphically illustrated in Figure 20. The goal is to produce a set of resource allocation candidates (slots) selected from this window. Note that more than one terminal may happen to be in the selection process at the same time, competing for the resources. Assume these terminals are synchronized to the network slot timing and have both bursty and periodic applications.
The purpose of the selection window is to provide certain degrees of freedom during selection to avoid collisions with other terminals. Obviously, the wider the window, the more choices the selection has. However, the maximum width of the window, i.e., the value of δ + |Θ_ℓ| = δ + F, is dictated by the application QoS requirement (i.e., latency). Here δ ∈ ℝ is the time between the MAC PDU arrival and the start of the selection window at slot ℓ, as illustrated in Figure 20; it covers the time needed to complete the selection and the preparation for transmission (e.g., coding and modulation), plus the time for alignment with the slot boundary, since the arrival time is random for a bursty application.
The selection is based on the channel information collected through a sensing process conducted in a time window referred to as the sensing window, H_ℓ. As illustrated in Figure 20, the sensing window traces backward from the selection window over a period of |H_ℓ| = T slots, from which the ongoing traffic pattern, or the time correlation among transmission bursts (if any), is derived to predict the traffic in the selection window, Θ_ℓ, so that collisions with that traffic can be avoided. Generally, deriving such a statistical correlation of the background traffic is non-trivial, especially for random bursty transmissions in a dynamic environment. Therefore, SO-TDMA does not go that far but focuses on periodic transmissions, whose traffic pattern is relatively easy to capture. In particular, instead of estimating the period, it employs a virtual sensing approach, under which a periodic transmission burst explicitly broadcasts its periodicity information. For the SO-TDMA adopted by AIS, the burst period is fixed to T = 1 minute, or 2,250 slots, so that 1) a sensing window of length T is sufficient and 2) once detected, a burst's next transmission location is known. A burst only indicates the number of bursts left to transmit (separated T slots apart by default), i.e., c, the number of resources reserved for future transmissions in the current burst stream (more details on the resource reservation scheme for periodic transmission bursts later in this appendix). As such, it is sufficient for the sensing window, H_ℓ, to have a depth of T = 1 minute, or 2,250 slots, to detect the transmission patterns of periodic bursts whether or not they are aligned with the sensing window.
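The virtual-sensing bookkeeping described above, predicting future reserved slots from each periodic burst's broadcast reservation counter c, can be sketched as follows; the function and variable names are ours, not part of the AIS specification.

```python
T = 2250  # AIS frame: 1 minute = 2,250 slots, common to all terminals

def reserved_slots(bursts, period=T):
    """Each sensed periodic burst is (start_slot, size, c): it occupies
    `size` consecutive slots and announces c further transmissions spaced
    `period` slots apart.  Returns the future slots known to be reserved."""
    reserved = set()
    for start, size, c in bursts:
        for k in range(1, c + 1):        # the c announced repetitions
            base = start + k * period
            reserved.update(range(base, base + size))
    return reserved

# A one-slot burst at slot 100 announcing c = 2 reserves slots 2350 and 4600.
assert reserved_slots([(100, 1, 2)]) == {2350, 4600}
```

Because T is fixed and common, the sensing terminal needs no period estimation: one detection plus the counter fully determines the burst's future slots.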
Suppose Q ≥ 1 transmission bursts are detected in the sensing window, where burst i occupies s_i consecutive slots. Among them, J ≤ Q bursts are assumed periodic and hence include the resource reservation parameter, c. In the example of Figure 20, J = 3 (all from Terminal B) and s_i = 1 slot (i.e., each burst occupies one slot). In the current design, a MAC Signaling PDU, the MAC Slot Reservation PDU (cf. Table 7), contains a timeout field, i.e., c, indicating the number of reservations yet to be used for transmissions in the current burst stream, which infers a particular resource reservation pattern. Refer to Figure 13 for how MAC PDUs are multiplexed into a data link PDU. Since a bursty transmission is random, it has no reservation (no MAC Slot Reservation PDU) embedded in the burst.
The slots reserved by a periodic burst j, inferred from its reservation parameter, form the set r_j given in (6), and the union ∪_{j=1}^{J} r_j represents the resources reserved by all the bursts detected in H_ℓ.
After taking into account all reservation intentions from the derived reservation patterns of the J bursts in (6), the terminal under discussion builds a ''busy slot map'' ν_ℓ for the selection window, indicating which slots in the current selection window, Θ_ℓ, are known to be in use by other terminals. The slots free of conflict with the reserved resources (i.e., the idle slots) become the candidate set, O_ℓ ⊆ Θ_ℓ \ ν_ℓ, given in (8): the set of resources or slots that can be utilized by the terminal, consequently preventing terminals in range of one another from using the same slots. In this example, only one reserved slot (by Terminal B) falls into the first selection window of Terminal A (the current terminal) and, hence, is not available for selection. The final selected resource, ω_ℓ, is randomly picked from the candidate set per (9), where |ω_ℓ| equals the resource size, i.e., the number of slots that a burst occupies. This purposely introduced randomness reduces the chances of collisions with those terminals that happen to have the same selection windows. Clearly, the larger the selection window, the lower the probability of collisions; nevertheless, |Θ_ℓ| is ultimately determined by the application latency requirement. Following the example in Figure 20, Θ_ℓ = θ_ℓ^10 (|Θ_ℓ| = 10), ν_ℓ = θ_{ℓ+6}^1 (i.e., slot ℓ+6 in Θ_ℓ is reserved by Terminal B), O_ℓ = θ_ℓ^6 ∪ θ_{ℓ+7}^3, and ω_ℓ = θ_{ℓ+1}^1 (i.e., slot ℓ+1 is selected by Terminal A). Although the ω_ℓ in (9) conforms to the QoS latency requirement as long as δ + |Θ_ℓ| is less than the QoS latency bound, it may not satisfy the QoS reliability requirement, depending on the collision probability. A collision happens when the selection windows of multiple terminals overlap, a factor that a terminal's MAC scheduler has no control over, rendering reliability uncontrollable.
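The candidate-set construction and random pick described above can be condensed into a short sketch; the helper names are ours, and single-slot bursts are assumed for simplicity.

```python
import random

def select_slot(ell, F, busy, rng=None):
    """Form the candidate set O (the F-slot window starting at `ell` minus
    the busy slot map nu), then draw omega uniformly at random to
    decorrelate the choices of terminals sharing the same window."""
    candidates = [s for s in range(ell, ell + F) if s not in busy]
    return (rng or random).choice(candidates)

# Figure 20 example: F = 10 with slot ell+6 reserved by Terminal B leaves
# nine candidates; the reserved slot is never picked.
picks = {select_slot(0, 10, busy={6}) for _ in range(300)}
assert 6 not in picks and picks <= set(range(10))
```

The uniform draw is the collision-avoidance mechanism itself: two terminals with identical windows and busy maps still collide only with probability 1/|O|.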
To address this issue, inherited from the original SO-TDMA, and better satisfy the various reliability requirements in service-centric IoT, the current design allows n such resources as in (9) to be randomly selected, as shown in Figure 20 (where n = 2), to accommodate a total of n duplicate bursts transmitted within Θ_ℓ, where |ω_ℓ^i| = |ω_ℓ| (i.e., the burst duration) for each duplicate. These bursts can be individually decoded, and the probability that at least one (out of the n bursts) succeeds increases with n, which the MAC scheduler can use as a means of reliability control. Now, for applications with periodic messages (e.g., ship position reporting), the corresponding MAC Traffic PDUs periodically enter the scheduling queue at an interval of I. The terminal's MAC PDU scheduler may need to reserve a series of resources separated by an interval of I slots, where I is determined by, e.g., the speed and heading of a ship in the ship position reporting application.
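Under the simplifying assumption that the n duplicate bursts described above fail independently with a common per-burst probability, the scheduler's choice of n can be sketched as follows (a back-of-the-envelope model, not part of the design specification).

```python
def duplicates_needed(p_single, target):
    """Smallest n with 1 - (1 - p_single)**n >= target: the number of
    duplicate bursts to place in the selection window, assuming
    independent failures and a common per-burst success probability."""
    n, p_fail = 1, 1.0 - p_single
    while 1.0 - p_fail ** n + 1e-12 < target:  # tiny epsilon for float safety
        n += 1
    return n

# If a lone burst survives collisions with probability 0.9, two duplicates
# lift the delivery probability to about 0.99.
assert duplicates_needed(0.9, 0.99) == 2
```

In practice the duplicates share the same selection window, so their failures are not perfectly independent; the model only illustrates why reliability grows quickly with n.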
However, as aforementioned, the SO-TDMA scheme only allows a burst to be transmitted with a fixed period of T (2,250 slots in AIS), the same for all terminals. To create a burst sequence with a period of I under this framework, we first construct M ≜ T/I burst sequences, each with a period of T and a length of N. These M burst sequences are interlaced at a spacing of I, overall constituting a burst sequence with a period of I and a length of N · M.
Specifically, M selection windows spaced I slots apart are established to produce a resource sequence for bursts with a periodicity of I, as illustrated in Figure 21. For a reason similar to that mentioned earlier, the flexibility the selection window provides in selecting resources plays a crucial role in reducing the probability of repeatedly selecting the same resource/slot, which would court deadlock or continuous collisions in a sequence of periodic transmissions.
The same selection process detailed earlier for bursty applications is repeated within each selection window, Θ_{ℓ+mI}, for m = 0, 1, ..., M − 1, to obtain the resource set T_ℓ = {ω_ℓ^m, m = 0, 1, ..., M − 1} in (11), where each ω_ℓ^m is selected from the candidate set O_{ℓ+mI} given in (12). Since each ω_ℓ^m in T_ℓ is randomly (hence independently) selected from O_{ℓ+mI}, which helps disrupt the formation of a chain of collisions with other terminals, this non-periodicity (i.e., ω_ℓ^m ≠ ω_ℓ^0 + mI in general) voids the periodicity assumption; therefore, the message periodicity of I is not exploited in the sensing process. However, each ω_ℓ^m ∈ T_ℓ is made periodic with a period of T to produce T/I interlaced threads of periodic resource sequences with a period of T, each of length N.
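The interlaced selection across the M windows can be sketched as follows, with single-slot bursts and a static busy map assumed for simplicity; the names mirror the notation of (11)-(12) but are otherwise ours.

```python
import random

def select_sequence(ell, F, I, T, busy, rng=None):
    """One independent random pick per interlaced window Theta_{ell+m*I},
    m = 0..M-1 with M = T // I, mirroring (11)-(12); `busy` is the busy
    slot map derived from sensing (single-slot bursts assumed)."""
    rng = rng or random
    picks = []
    for m in range(T // I):
        start = ell + m * I
        candidates = [s for s in range(start, start + F) if s not in busy]
        picks.append(rng.choice(candidates))
    return picks  # the resource set T_ell

# With T = 2250 and I = 750, M = 3 windows spaced 750 slots apart are used.
seq = select_sequence(0, 10, 750, 2250, busy=set())
assert len(seq) == 3
assert all(m * 750 <= s < m * 750 + 10 for m, s in enumerate(seq))
```

Each pick is an independent draw, so the M resources within T_ℓ are deliberately not spaced exactly I apart; only the T-periodic repetition of T_ℓ is exposed to other terminals' sensing.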
Under this framework, the M resources within T_ℓ cannot be assumed to be strictly periodic with period I, but they are periodic with period T across the repeats T_{ℓ+nT} (n = 1, 2, ..., N − 1). This particular pattern can be acquired by another terminal with a sensing window of depth T. Such a transmission pattern can be implemented by repeating T_ℓ N times in time, accommodating a total of N · M bursts with an approximate period of I, within the tolerance of the QoS-specified latency requirement. The corresponding reservation pattern is graphically illustrated in Figure 21 (where N = 2). Evidently, M = T/I must be an integer, i.e., T must be divisible by I, as stated in (17). Since for SO-TDMA, T is stipulated to be a system parameter common to all terminals (T = 1 minute, or 2,250 slots, for all AIS applications), any application message period, I, must divide this common T. For instance, a period of 20 sec for an AIS application message means 60/20 = 3 transmissions per T, or I = 2,250/3 = 750 slots. The advantage of this constraint is that T is common, hence known to all terminals, and does not need to be broadcast. Furthermore, under this constraint, the channel usage status seen in the sensing window of depth |H_ℓ| = T can be assumed to be the same for all the following N − 1 periods of length T, i.e., ν_{ℓ+mI+nT} − nT = ν_{ℓ+mI}, ∀n ∈ {1, 2, ..., N − 1}, N ∈ ℕ, (18) based on (14); therefore, T_ℓ given by (11) can be used for all N repeats. However, this one-size-fits-all constraint unnecessarily restricts an application's choice of its message period, I, leaving only a handful of values of I available for selection, which is not well in line with the service-centric principle of IoT. This constraint is lifted in the current design by allowing an application, i, to choose a specific reservation period, T_i, where T_i is an integer multiple of the application-specific period I_i, i.e., T_i = k_i I_i, k_i ∈ ℕ. (20) Under this relaxed choice of T_i (compared to (17)), a
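A quick check of the divisibility constraint (17) under the common T of AIS makes the restriction concrete: only the divisors of T are admissible message periods.

```python
# Under the common-T constraint of AIS, an application's message period I
# (in slots) must divide T = 2,250; this enumerates the admissible periods.
T = 2250
admissible = [i for i in range(1, T + 1) if T % i == 0]

# A 20-second period is 750 slots (3 transmissions per minute) and is
# admissible; an arbitrary period such as 1,000 slots is not.
assert 750 in admissible and 1000 not in admissible
assert len(admissible) == 24  # 2250 = 2 * 3^2 * 5^3 has 24 divisors
```

With only 24 admissible values of I out of 2,250 possible slot counts, the service-centric motivation for relaxing (17) to the per-application choice in (20) is evident.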
sensing window of depth |H_ℓ| = T remains sufficient, where T remains a network parameter. However, (18) no longer holds, i.e., different repeats may see different channels; in general, ν_{ℓ+mI+nT} − nT ≠ ν_{ℓ+mI}, ∀n ∈ {1, 2, ..., N − 1}, N ∈ ℕ, (21) as shown in Figure 22. Consequently, (13) needs to be modified to consolidate the channels seen across all repeats into the selection, and (11) becomes (23), where O_{ℓ+mI} remains the same as in (12). Finally, since different transmission bursts may have different application-specific periodicities according to (20), the reservation period of burst i, T_i, needs to be explicitly indicated in the burst (e.g., in its MAC Slot Reservation PDU), as it is no longer common to all terminals.
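Consolidating the busy maps across the N repeats, as required by the modified (13), amounts to a set union; a trivial sketch (our names, single-slot granularity):

```python
def consolidated_busy(busy_per_repeat):
    """Union of the N busy slot maps, each already shifted back to the
    first repeat's time base, so a slot is selectable only if it is
    idle in every repeat."""
    merged = set()
    for busy in busy_per_repeat:
        merged |= busy
    return merged

# Repeat 0 sees slot 6 busy and repeat 1 sees slot 3 busy; both are
# excluded from the consolidated selection.
assert consolidated_busy([{6}, {3}]) == {3, 6}
```

The union is conservative by design: a slot conflicting in any repeat would collide in that repeat, so it must be removed from the candidate set for the whole burst stream.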

FIGURE 1. Graphical illustration of a cellular MTC network consisting of a cellular infrastructure to provide wide-area coverage for client-server applications.

FIGURE 2. Graphical illustration of proximity MTC networks providing connectivity for client-client applications, wherein there are 17 MTC terminals in the region and four proximity networks are formed and self-organized based on the physical distance among them.

FIGURE 3. Organization diagram of the paper.

FIGURE 4. Maritime MTC radio spectrum assignments on the international VHF maritime mobile band (25 kHz per channel), regulated by the International Telecommunication Union (ITU). Recently, the ASM spectrum was assigned to proximity-based MTC, while VDE-SAT and VDE-TER are the spectra assigned to satellite-based and terrestrial-based wide-area maritime MTC, respectively.

FIGURE 5. AIS time slots and physical layer burst structure operating on the international VHF maritime mobile band and occupying Channels 2087 (at central frequency 161.975 MHz) and 2088 (162.025 MHz) (cf. Figure 4). One minute contains exactly 2,250 slots aligned with the UTC. The length, or TTI, of a burst is one slot, up to five consecutive slots.

FIGURE 6. Graphical illustration of the maritime MTC network architecture, where red and green lines denote proximity MTC networking and light blue lines represent wide-area infrastructure-based MTC networking (e.g., VDE-SAT and VDE-TER). Together they constitute a global maritime MTC network operating on the international maritime mobile communication spectrum (Figure 4).

FIGURE 9. Illustration of an exemplary proximity communication scenario showing the service-centric network architecture and key elements, where multiple application hosts (connected through an on-premises network, e.g., Ethernet or Wi-Fi, on board a ship) communicate with the application clients running on the host (aboard another ship) through an MTC terminal. The connection to the maritime cloud network (where the NAM and the application servers reside) is through the wide-area MTC network (e.g., VDE-SAT) (cf. Figure 1), serving as a backhaul network for network management (e.g., service provisioning and updating).

FIGURE 10. A network-layer perspective of service addressing and QoS control through the Service ID and the virtual QoS flow construct for service-centric proximity MTC networking.

FIGURE 11. IoT service discovery on a proximity network.

FIGURE 12. QoS-control block diagram of a QFC instance (of an MTC terminal). The peer QFC instance at the other communicating end is not shown.

FIGURE 13. Illustration of a MAC PDU multiplexing example, where Hd represents the data link header and Tp is the physical layer PDU trailer.

FIGURE 14. MAC Traffic PDU processing for detecting relevant PDUs at the receiving MAC entity.

FIGURE 17. (a) Transmission burst generation flowchart, where w(t) denotes the ramp-up and ramp-down power profile of a transmission burst. (b) Rate matching flowchart for base code rate 1/3 turbo codes. The two streams of parity bits from the turbo encoder are first interleaved individually and then interlaced bit by bit into the buffer. The number of bits given by the payload is read out sequentially from the start of the buffer. If the end of the buffer is reached, simply wrap around to the start until the required number of bits is acquired [19].

FIGURE 19. Power spectral density of the transmission burst (one slot) with square-root raised cosine pulses with a roll-off factor of 1, at a channel symbol rate of 9,600 symbols/sec, on Channels 2027 and 2028 (cf. Figure 4).

FIGURE 20. Illustration of distributed/autonomous resource allocation: the sensing-based TDMA resource selection, or SO-TDMA, for service-centric proximity MTC with QoS (latency and reliability) control.

FIGURE 21. Illustration of distributed/autonomous resource allocation: the sensing-based SO-TDMA for proximity MTC with periodic transmission bursts under a common T, with I_A = 4I_B = 2I_C = T, where I_A, I_B, and I_C are the application message intervals of Terminals A, B, and C.

FIGURE 22. Illustration of service-centricity-enhanced distributed/autonomous resource allocation: the sensing-based SO-TDMA for service-centric proximity MTC, facilitating transmission bursts of arbitrary periodicities as per the service requirement: T_A, T_B, T_C ≤ T, where T_A = 3I_A and I_A is the messaging interval of Terminal A.

TABLE 1. List of abbreviations.

TABLE 2. Comparison between wide-area MTC and proximity MTC.

TABLE 6. International frequency assignments for maritime MTC.
Maritime autonomous surface shipping, or MASS, is arguably one of the most important emerging maritime IoT services, of which MTC is an essential, integral component. Different types of IoT services require different MTC networks. On one side of the spectrum, the wide-area cellular-infrastructure-based MTC network contains powerful infrastructural components, i.e., the control stations, acting as relays (or access points) to the Internet and providing connectivity between the network nodes that host client applications and the IoT cloud where the IoT application servers reside. On the other side, the proximity MTC network is a specific type of ad-hoc network in which nearby network nodes hosting client applications self-organize to provide proximity-based client-client IoT services.