RSG-GCN: Predicting Semantic Relationships in Urban Traffic Scene With Map Geometric Prior

Automated identification of the relationships between traffic actors and surrounding objects, in order to describe their behavior and predict their intentions, has become the focus of increasing attention in the field of autonomous driving. In this work, we propose the Road Scene Graph-Graph Convolutional Network (RSG-GCN), a novel graph-based model for predicting the topological graph structure of a given traffic scene. The status of the actors and HD map information are integrated as prior knowledge, allowing the edges linking the actor nodes to capture potential semantic relationships, such as "vehicle approaching pedestrian" and "pedestrian waiting at intersection". To train this model, we created our own RSG dataset, a relational dataset and benchmark derived from nuScenes. Our extensive experiments demonstrate that our model predicts semantic relationships and behavior in a given traffic scene more accurately than other popular traffic scene prediction models. In particular, regarding the use of HD map prior knowledge, we found that the resulting increase in accuracy significantly outweighs the performance loss caused by the increase in graph size. The downstream applications of RSG, including traffic scene retrieval and synthetic traffic scene generation, are briefly described.

of a "collision" semantic relationship between two vehicles ahead, it may slow down in advance to avoid a potential triple car accident. Furthermore, understanding a visual scene involves more than recognizing individual objects in isolation. By examining the interactions among the actors in traffic scenes, we may be able to discover specific patterns that indicate potential risks, disruptions, or overly aggressive driving. The ability to model such relationships and patterns would also benefit related research. For example, the recognition of semantic relationships can benefit the following practical applications:
• Simulation-based evaluation of self-driving systems: in frameworks like [1], it is often necessary to generate multiple traffic scenes that are similar to, yet different from, a given scenario. By constructing an RSG that contains semantic relationships as described in this paper, combined with a graph autoencoder network, we can generate multiple such scenes and simulate them in a simulator. Relevant work is detailed in our paper [2].
• Large-scale scene retrieval: current traffic datasets include hundreds or thousands of traffic scenes, and we often want to retrieve them based on certain conditions, for example, "a vehicle parked at an intersection waiting for two pedestrians to cross the road." In our research, we construct a graph representation that includes semantic relationships, enabling us to quickly retrieve similar scenes.
• Natural language description generation: generating natural language descriptions of traffic scenes is in broad demand across multiple fields. However, traditional video captioning networks often perform poorly because they do not encode prior knowledge about autonomous vehicles and driving scenes. By constructing an intermediate representation using a topological graph that contains such knowledge, we expect to improve the performance of natural language description tasks.
Bringing this level of semantic relationship reasoning into the traffic scene domain would be a significant leap forward, but doing so involves two primary challenges: 1) Traffic actor relationship data is non-Euclidean; in other words, unlike image and text data, relationship data is difficult to process using standard convolutional neural networks (CNNs). Fortunately, recent developments in graph neural networks (GNNs) have brought significant improvements in the training of models on non-Euclidean data by arranging it into a graph structure. 2) There is insufficient actor relationship data in conventional datasets. To solve this data insufficiency problem, we have created the Road Scene Graph (RSG) Dataset, based on the nuScenes dataset [3]. In addition, we provide graph-structured representations of traffic scenes, where nodes in the RSG correspond

FIGURE 1. Overview of Road Scene Graph (RSG) generation and proposed method:
A traffic scene (A) and its corresponding RSG (B). The colors of the nodes indicate node categories (Vehicle, Pedestrian, Road or Lane), while the colors of their edges (i.e., the relationship-defining lines linking the nodes) represent "layers", such as "Actor-to-Actor", "Actor-to-Map" or "Map-to-Map". Our goal in this research is to predict the "Actor-to-Actor" relationships, represented by the red edges in sub-figure (B). Sub-figure (C) shows an overview of the proposed method, including graph generation and the inference process.
to actor status, and where the edges (or nexus) of these nodes correspond to their pairwise relationships [4].
Compared to existing large-scale scene graph datasets, which are used for common scene graph prediction tasks, our dataset is much smaller due to the difficulty of annotating all the actor relationships, which brings new challenges for our proposed model. First, the number of learned parameters must be smaller. Second, domain knowledge, prior knowledge and geolocation information must be fully exploited to achieve state-of-the-art prediction accuracy with such a small dataset.
Our proposed RSG prediction network, shown in Fig. 1, was inspired by previous studies focused on generating scene graph depictions of common traffic scene images. Currently, many scene graph prediction methods [5], [6], [7], [8], [9] follow an end-to-end structure. That is to say, these models first extract features (for example, using Faster R-CNN [10] or Mask R-CNN [11]), and then simultaneously predict each actor's class, bounding box and corresponding relationships. However, in the case of intelligent vehicles, object detection results are obtained from the fusion of data from multiple sensors, such as LIDAR, cameras and mmWave radar, so it is not necessary for our model to perform object detection again. Therefore, by separating the front-end (object detection and tracking) from the back-end (relationship inference), our model is much more compact and efficient.
As illustrated in Fig. 1, the goal of our research is to transform a traffic scene (A) into a graph-based representation (B) which accurately describes the relationships between actors. The nodes in the representation, which represent vehicles, pedestrians, lanes and intersections, are connected with explainable relationships, such as "passing-by" and "driving-on". To automate this process, we propose using a GNN-based model to predict the unknown relationships (red edges) in the graph, based on the hierarchical nature of RSG. The nodes in the graph are divided into an actor set and a map component set, therefore the relationships between the nodes can be divided into three categories: "actor-to-actor", "actor-to-map" and "map-to-map". The "actor-to-actor" relationships are unknown, and determining these relationships is the goal of our research, while the other two sets of relationships can be easily obtained from an HD map and geometry-based rules. As a consequence, the edge/relationship-prediction problem can be transformed into a graph completion problem, which is easier to solve. Furthermore, our experimental results show that prior knowledge about these relationships significantly improves prediction accuracy.
The RSG-GCN method proposed in this paper is illustrated in Fig. 1(C). Here, the actors' 2D bounding boxes and HD map data are used as inputs. Then, both the actor and map component data are transformed into graph nodes. Next, based on the prior relationships in the "actor-to-map" and "map-to-map" sets, these graph nodes can be arranged into a semi-graph G semi , which is a subgraph of the Road Scene Graph (RSG) containing only the road component and map layers. Our proposed model then predicts the semantic relationships in the actor layer, and refines the "actor-to-actor" graph.
In addition to providing a relationship-based representation of traffic scenes, such road scene graphs can also help the self-driving community in tasks such as traffic scene retrieval (finding a specific traffic scene in dataset), and synthetic traffic scene generation (generating near-realistic, simulator friendly traffic scenes), which will assist in the automatic evaluation of autonomous driving systems.
The contributions of our work are as follows:
• We introduce a gated recurrent neural network (i.e., a gated recurrent unit or GRU) model for semantic relationship prediction tasks. To improve prediction accuracy, we utilize geographic relationships as prior knowledge. Experimental results indicate that such knowledge greatly benefits the relationship prediction task, allowing our model to outperform baseline models in both accuracy and efficiency.
• We introduce a novel Road Scene Graph (RSG) dataset consisting of 20,000 road scene graphs from 500 traffic scenes. This dataset includes not only actor annotations (from nuScenes), but also pairwise relationships among actors and map components.
• We introduce a scene retrieval method for finding specific scenes in RSG datasets, which finds and clusters similar scenes in order to predict potentially risky situations. In addition, the proposed RSG-based traffic scene generator can generate near-realistic traffic scenes for various applications.
The remainder of this work is structured as follows: Section II provides a comprehensive review of related work in the fields of graph neural network applications for autonomous vehicles, actor relationships in traffic scenes, and road scene graph prediction. In Section III, we state the problem definition for road scene graph modeling. In Section IV, we discuss the methodology of our work, as well as possible downstream applications. In Section V, we describe several experiments conducted to validate our proposed method through comparison with other popular traffic scene prediction models. Finally, in Section VI we summarize our study's findings and conclusions.

II. RELATED WORK
A. GRAPH-BASED METHODS APPLIED IN INTELLIGENT VEHICLES
The decision-making systems of autonomous vehicles are expected to achieve a high level of driving safety [34], but generating appropriate driving behavior requires the integration of a broad range of data sources. For modern autonomous driving systems such as Autoware [35], [36], the data sources are highly heterogeneous, from tire pressure and battery status to camera video, GPS data, LIDAR point clouds and HD maps.
As a result of the recent, rapid development of graph neural networks (GNNs) and their variants [37], researchers have proposed many graph-based applications for intelligent vehicles, such as traffic prediction and forecasting [27], vehicle control [38], trajectory prediction [13], [14], [39], CAN bus attack (cyberattack) detection [40], traffic scene captioning [41], [42], driving behavior prediction [16], [43] and synthetic traffic scene generation [19], [20]. There are several reasons for the widespread adoption of GNN-based systems. First, heterogeneous data can be utilized more efficiently by the vehicle's decision-making system, since graphs accommodate heterogeneous data formats more naturally. For example, compared with normal convolutional networks, GNNs can readily process data with varying input sizes. Regarding actor trajectory prediction, before GNNs and their variants were applied, much research focused on the use of carefully-rendered bird's-eye-view (BEV) images of the traffic environment as input [43], [44], since the number of vehicles and other map components varies from scene to scene. CNN variants such as convolutional LSTM (ConvLSTM) [45] were then used to learn the visual features of those images. The use of GNNs allows the direct processing of raw data without rasterization, making the design of decision-making models much more straightforward. A second advantage of using GNNs is the ability to capture information in both nodes and edges. In traffic speed estimation [29], for example, nodes in the graph represent intersections while the edges represent the roads between the intersections. Moreover, as in Meta-Sim [19], [20], graph nodes can be used to represent objects (cars, people, trees, roads), while edges represent their hierarchical relationships. A third advantage of GNNs is that the graph structure itself can also convey crucial information, in some cases information that is as important as the nodes.
Meta-Sim uses the graph structure to maintain generative rules such as "lane belongs to road" and "car driving in the lane".
These graph-based data representations allow a great deal of flexibility, as the relationships between nodes can vary depending on the type of prior information to be learned for a particular task. As shown in Table 1, the meanings of the node and edge data can vary depending on the task. As illustrated by the previously developed applications mentioned above [19], [20], [29], there are many kinds of relationships that can be captured and graph data representations that can be constructed.
The interaction graphs [12], [13], [14], [16] noted in Table 1 capture possible interactions among vehicles in traffic scenes; however, they do not define the categories of these interactions, because predicting the categories of edge data is not an easy task. Nevertheless, many tasks, such as vehicle trajectory prediction [13], [14], [39], [46] and behavior prediction [16], [43], [47], [48], [49], benefit from graph-based data representation, since it provides an easy way to learn from a given scene without using image representation data.
Another interesting way to build graph-based representations of driving environments is the Lane Graph [17]. The purpose of these graphs is to learn HD maps without rendered image input. Instead, the lane graph connects nearby waypoints in the map to build a graph of the map. Liang et al. [17] first adopted this method for motion forecasting, and proposed LaneGCN to learn map features from HD maps. In their study, lane graphs were directly generated from the HD map. Zürn et al. [50] proposed LaneGraphNet to estimate such graphs from BEV images.
Expanding the scene understanding task from lanes to all nearby objects, Meta-sim [19], [20] uses a hierarchical tree for arranging these objects, based on a set of rules, such as "lane belongs to the road", "vehicle on the lane", etc. In this way, the graph captures the status of all nearby objects, as well as the hierarchical structure of the scene.
The proposed Road Scene Graph (RSG) method is a relational graph [51] based on our previous work [31], which included map components, actors and the semantic relationships among these actors. RSG itself is described in detail in Section III of this paper.
All these methods transform spatial and other kinds of information into various types of relationships, and then build a very compact, non-Euclidean, learnable graph to describe the scene surrounding the ego-vehicle. The preferred graph format varies according to the chosen task. For behavior prediction tasks, interaction graphs [16] are commonly used, while for motion prediction, lane graphs work better since they provide a fine-grained description of HD maps. Structured object trees (Meta-Sim) [19], [20] are commonly used for traffic scene generation tasks. And the RSG approach proposed in this paper is likely a better solution for predicting semantic relationships.

B. SCENE GRAPH PREDICTION
Scene graphs [52], [53] were originally proposed as a method of describing the relationships between objects detected in an image [4]. Currently, the majority of scene graph research focuses on describing common images, to meet the increasing demand for image retrieval [4], image/scene captioning [41], [42], image generation and image-based querying [54], [55].
The rapid growth in scene graph generation tasks is a result of the creation of large-scale, relational datasets of common Web images. Since Johnson et al. first proposed this concept [4], many large-scale relational datasets have been created. The Real-World Scene Graphs Dataset (RW-SGD) [4] was among the first, containing 5,000 images from the YFCC100m [56] and MS COCO datasets [57]. In addition, the Visual Relationship Dataset (VRD) [6], Visual Genome Dataset (VG) [58], Visually-Relevant Relationships Dataset (VrR-VG) [59], UnRel Dataset [60], HCVRD dataset [61] and others have appeared, with increasing numbers of images, object annotations and relationship annotations. Within the self-driving community, many excellent, large-scale datasets have also emerged [62], [63], [64], [65], [66], [67], [68]. However, our Road Scene Graph dataset would be the first focused on the semantic relationships among vehicles, pedestrians and other actors in traffic scenes.
Currently, the majority of scene graph generation (SGG) models follow a similar framework: (1) a region proposal predictor, which commonly uses Faster R-CNN [10]; (2) a region feature extractor [6]; and (3) iterative feature fine-tuning, using CRF or GNN models. Using probability distributions from natural language tasks as prior knowledge has also been proposed. In contrast, SGG for traffic scenes does not rely on a region proposal predictor, thus the complexity of the model can be significantly reduced. In Section IV-C we discuss this difference in more detail.
The Road Scene Graph Generation (RSGG) task is similar to the graph generation approaches proposed in previous studies, however the region proposal predictor has been removed as we can easily obtain highly accurate scene perception using Autoware or other sensing systems. But a hand-crafted region feature extractor is used for integrating information from traffic actors and the HD map. Finally, RSGG's iterative feature fine-tuning model was borrowed from the Iterative Message Passing (IMP) method [8], and then modified.

III. PROBLEM DEFINITION
In this section, we first provide a formal definition of our Road Scene Graph (RSG) method, and then explain the semantic relationship prediction problem.
As shown in Fig. 2, a Road Scene Graph, which can be represented as RSG = (V, E), comprises two individual sets: a node set V = {V_A, V_M}, where V_A is an actor set and V_M is a map component set, and a relationship set E = {r_i→j | v_i, v_j ∈ V}, which consists of potential semantic relationships in a scene, such as "vehicle waiting for pedestrian". The categories of possible relationships are shown in Table 3.
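The definition above maps naturally onto a small data structure. The following sketch is purely illustrative (the class and field names are ours, not the authors' implementation); it shows how the two node sets V_A and V_M induce the three edge layers used throughout the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    idx: int
    category: str    # e.g. "vehicle", "pedestrian", "road", "lane"
    is_actor: bool   # True -> member of V_A, False -> member of V_M

@dataclass
class RoadSceneGraph:
    nodes: list = field(default_factory=list)
    edges: dict = field(default_factory=dict)  # (i, j) -> relationship label

    def layer(self, i, j):
        """Classify an edge into one of the three RSG layers."""
        a, b = self.nodes[i].is_actor, self.nodes[j].is_actor
        if a and b:
            return "actor-to-actor"
        if a or b:
            return "actor-to-map"
        return "map-to-map"

g = RoadSceneGraph()
g.nodes = [Node(0, "vehicle", True), Node(1, "pedestrian", True), Node(2, "lane", False)]
g.edges[(0, 1)] = "waiting-for"
g.edges[(0, 2)] = "driving-on"
print(g.layer(0, 1))  # actor-to-actor
print(g.layer(0, 2))  # actor-to-map
```

With this split, predicting the "actor-to-actor" edges while treating the other two layers as given is exactly the graph completion view described in the introduction.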
To transform the state of each actor into a fixed-length node feature vector in V_A, we parameterize actor node α_i ∈ A with an eight-dimensional vector containing its class label c_i ∈ C, its position in ego-centric bird's-eye-view coordinates (x_i, y_i) ∈ R², its bounding box b_i ∈ B, and its velocity (v_ix, v_iy) ∈ R², based on the method proposed in [44]. Here, a bounding box b_i ∈ B is parameterized by its size (w_i, l_i) ∈ R² and heading θ_i ∈ [0, 2π).
Likewise, we transform map components, such as roads, lanes and intersections, into node feature vectors, which share a similar representation to actor node α i . A map component node m i ∈ M consists of its class label c i ∈ C, its center (x i , y i ) ∈ R 2 and its bounding box b i ∈ B. At the end of the node embedding vector, the velocity parameter is set to a constant (zero). Other HD-Map embedding methods are discussed in Section V-A.
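The two node parameterizations can be sketched as follows. The function names and argument order are hypothetical, but the layout (class label, BEV position, box size, heading wrapped into [0, 2π), velocity, with map components zeroing the velocity slots) follows the text:

```python
import math

def actor_embedding(cls_id, x, y, w, l, heading, vx, vy):
    """Eight-dimensional actor node vector: class label, BEV position (x, y),
    box size (w, l), heading wrapped into [0, 2*pi), and velocity (vx, vy)."""
    theta = heading % (2 * math.pi)   # keep theta in [0, 2*pi)
    return [float(cls_id), x, y, w, l, theta, vx, vy]

def map_embedding(cls_id, x, y, w, l, heading):
    """Map components reuse the actor layout; the velocity slots are zero."""
    return actor_embedding(cls_id, x, y, w, l, heading, 0.0, 0.0)

v = actor_embedding(1, 12.0, -3.5, 1.8, 4.6, 3.5 * math.pi, 5.0, 0.1)
print(len(v))  # 8
```

Sharing one layout between actor and map nodes lets both node types pass through the same GNN layers without type-specific branching.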
As illustrated in Fig. 2, the edges in the RSG can represent three kinds of relationships, based on the type of nodes they connect: "actor-to-actor", "actor-to-map-component", and "map-component-to-map-component". The categories of these possible semantic relationships are listed in Table 3. The actor-to-actor edges capture interactions between actors in the scene, while the actor-to-map edges capture spatial relationships between actors and map components, such as "vehicle driving on the lane". The edges linking map components represent topographical relationships between map components, such as "lane belongs to road" or "road is predecessor to intersection". These map component-to-map component relationships are not predicted by our model, because they can be easily obtained from the geolocation database, and rarely change over time. In our proposed model, these relationships are fundamental to the message-passing mechanism [8] of graph neural networks, as they can significantly increase the connectivity of a graph. At each iteration, these links allow actors to aggregate information from nearby map components, as well as from other actors.
The goal of semantic relationship prediction is to infer pairwise relationships among all actors in a traffic scene, given actor node set V_A and HD-map information M. Here, an actor α's category c could be "vehicle", "pedestrian", or "barrier", while M consists of "road", "lane", and "intersection" components.
For each pair of actor or map component nodes (see Table 3), we formulate the relationship prediction problem as finding the optimal relationship r_i→j = argmax_r Pr(r | A, M), i.e., the relationship that maximizes the probability function in Eq. (1).
In contrast to scene graph prediction tasks based on common images [8], in traffic scene graph prediction the predicted relationships are undirected. That is to say, our proposed model does not distinguish between the "object" and "subject" of a relationship. One reason for this is that not all relationships have clearly defined objects and subjects: relationships such as "Following" or "Approaching" do, but relationships such as "Grouping" do not. Moreover, since we can obtain an actor's position and velocity, as well as information about other nearby actors, it is easy to build a rule-based system to determine the "object" and "subject" of a particular relationship without increasing the complexity of our graph inference model. The detailed definitions of the notations used in this study can be found in Table 2.
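Such a rule-based subject/object post-processing step might look like the sketch below, under our own assumption (not stated in the paper) that the subject is the actor whose velocity points toward the other actor:

```python
def direction_of(rel, a_pos, a_vel, b_pos):
    """Hypothetical rule for orienting an undirected predicted relationship.
    Symmetric relations such as 'grouping' stay undirected (return None)."""
    if rel in ("grouping",):
        return None
    dx, dy = b_pos[0] - a_pos[0], b_pos[1] - a_pos[1]
    # positive dot product: a is moving toward b -> a is the subject
    if a_vel[0] * dx + a_vel[1] * dy > 0:
        return ("a", "b")
    return ("b", "a")

print(direction_of("approaching", (0, 0), (1, 0), (10, 0)))  # ('a', 'b')
```

The point is that orientation can be recovered cheaply after inference, so the GNN itself only needs to predict the undirected relationship class.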

A. DATA SETUP
Here we introduce our Road Scene Graph dataset and explain how it was constructed. The data format of our dataset is similar to that of other graph-based datasets in the common image domain, such as Visual Genome [58] and VrR-VG [59]; however, since the RSG dataset is used to model traffic scenes, different object and relationship categories are used. Also, as Table 4 and Fig. 3 illustrate, the distribution of labels in the RSG dataset is unique and more balanced than in image-domain datasets. The main reason for this is that the number of node and relationship categories used in RSG is significantly smaller than in common scene graph datasets: the RSG dataset contains only 6 unique objects and 48 relationships, compared with 75,729 objects and 40,480 relationships in the graph-based, image-domain Visual Genome dataset [58], as the goal of RSG is limited to predicting semantic relationships in traffic scenes. When applying semantic labels to describe traffic scene data, the most important thing is to define these labels carefully so that they cover as many traffic scenes as possible; despite this effort, some cases will inevitably remain uncovered or ambiguous. The relationship categories used in this study are shown in Table 3. We use three methods to describe traffic scenes as comprehensively as possible. For the relationships between map components, shown in green font in the top-left part of Table 3, we describe only the most basic relationships in the road network. These hierarchical and universal relationships cover all 500 traffic scenes in the RSG dataset. In some rare cases, such as underground parking lots, construction sites, or wilderness areas, it is not possible to obtain an HD map and create relationships between road elements; we have excluded these cases from the RSG dataset. To obtain such relationships, we first convert the nuScenes map to the ASAM OpenDRIVE [69] format.
This process is described in detail in our previous work [2]. By constructing high-precision maps on the nuScenes dataset, we can obtain the connections and "Belongs-to" relationships between map components.
The orange-labeled blocks on the bottom-left (also implied in the top-right boxes) of Table 3 represent cross-layer "actor-to-map component" relationships. For these relationships, we use a set of rules based on object state to determine them approximately, ensuring that for any actor in the traffic scene, exactly one corresponding relationship to a map component is generated.
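A toy version of such a state-based rule system is sketched below. The relationship names and thresholds are invented for illustration, but it shows the key property that each actor receives exactly one map relationship:

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: is the point inside the map component's shape polygon?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def actor_to_map_relation(centroid, speed, lane_poly):
    """Toy geometric rule: always returns exactly one relation per actor."""
    if point_in_polygon(centroid, lane_poly):
        return "driving-on" if speed > 0.5 else "stopped-on"
    return "near"

lane = [(0, 0), (4, 0), (4, 20), (0, 20)]
print(actor_to_map_relation((2, 5), 8.0, lane))   # driving-on
print(actor_to_map_relation((9, 5), 0.0, lane))   # near
```

Because every branch returns a label, the resulting G_AM subgraph always connects each actor to the map layer, which is what keeps the semi-graph well connected for message passing.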
The most important semantic relationships in our research are those at the "Actor-to-Actor" level, listed in the bottom-right part of Table 3. Before annotating these relationships, to ensure that the semantic label categories cover the majority of relationships in various traffic scenarios, two methods were used: questionnaire surveys and prototype annotations. First, questionnaires were distributed to participants, who were asked to observe 20 traffic scenes from the nuScenes dataset, each lasting 20 seconds, and then to provide detailed descriptions of all potential relationship categories in those scenes. The participants included 4 faculty members and 8 doctoral and master's students; 4 of the student participants had no driving experience and could provide observations from a pedestrian's point of view. Except for fatigue, the experiment posed no significant risks to the participants. We managed the data carefully and protected the participants' privacy. In this way, we obtained an initial relationship category list. Additionally, we referenced the relationships from the HRI Driving Dataset (HDD) [70], which focuses on driver behavior.
During the prototype annotation process, to ensure that our list included the majority of semantic relationships appearing in the scenes, we collected feedback and reports from annotators and revised the semantic relationship category list accordingly. We revised the relationship list after annotating 50, 200, and 300 scenes, and re-annotated previous scenes to include the newly added semantic labels. In addition, to make the semantic labels more generalizable, we aggregated some of them; for example, the "cut-in" and "cut-out" relationships were merged into "overtaking". The final issue is consistency, which here includes the consistency of semantic labels across different traffic scenarios and across different annotators. To this end, the following four strategies were used: (1) Program-generated semantic labels: for "Actor-to-Map" and "Map-to-Map" relationships (green and orange parts in Table 3), we infer the semantic relationships with a program that considers object status and road-network information, thereby maintaining consistency across traffic scenes. (2) Standardized semantic labels: for relationships between objects (blue part in Table 3), we define each relationship in detail, indicate special cases, and distribute an annotation manual. We train annotators before annotation to ensure label consistency. (3) Multiple annotators annotate each scene, and conflicts between annotators are resolved. (4) For the "Actor-to-Actor" relationships, building on the second point, we use a set of rules based on the objects' positions and velocities to assist the annotation process. Although this mechanism cannot automatically generate semantic relationships, it can automatically detect some obvious errors.
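Strategy (4) can be illustrated with a small label checker; the specific rules and thresholds below are our own examples, since the paper does not enumerate them, and the checker only flags suspicious labels rather than generating them:

```python
def flag_obvious_errors(rel, a, b):
    """Rule-based sanity check for an annotated actor-to-actor label.
    Each actor is a dict with 'pos' and 'vel' in BEV coordinates.
    Thresholds are illustrative, not from the paper."""
    issues = []
    rel_speed = ((a["vel"][0] - b["vel"][0]) ** 2 +
                 (a["vel"][1] - b["vel"][1]) ** 2) ** 0.5
    if rel == "following" and rel_speed > 5.0:
        issues.append("large relative speed for 'following'")
    dist = ((a["pos"][0] - b["pos"][0]) ** 2 +
            (a["pos"][1] - b["pos"][1]) ** 2) ** 0.5
    if rel == "grouping" and dist > 10.0:
        issues.append("actors far apart for 'grouping'")
    return issues

a = {"pos": (0, 0), "vel": (10, 0)}
b = {"pos": (30, 0), "vel": (0, 0)}
print(flag_obvious_errors("following", a, b))
```

Running such checks after each annotation pass catches the "obvious errors" mentioned above while leaving the final judgment to the human annotator.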

B. GRAPH PRE-PROCESSING
The aim of graph pre-processing is to generate the semi-graph (G_semi = G_AM ∪ G_MM ∪ K_AA) used as our network's input. In this work, we use single-frame data to predict the semantic relationships between objects, for the following reasons: (1) The RSG dataset we created has limited data, with short videos of 20 seconds each. Using continuous frames (e.g., 5-second trajectories) reduces the available dataset size by 75%; this ratio will not decrease even with more annotations, as it is tied to the nuScenes dataset's video length. We therefore prefer single-frame prediction in order to use more data for training. (2) Initially, we extended the node feature vector's dimension to include past positions for several time steps (1, 3, 5). However, experimental results showed poor performance, possibly due to the graph neural network model's sensitivity to the node feature vector dimension; a more compact representation is needed to improve prediction accuracy. (3) We selected semantic labels with single-frame prediction in mind, so they can be determined from object positions and velocities (excluding acceleration) in a single frame.
FIGURE 4. Given a single-frame input (bird's-eye view), our model first generates actor and map node embeddings. We then use a geometry-based rule system to generate actor-to-map relationship graph G_AM. Next, we generate map-to-map relationship graph G_MM from HD map M and initialize fully-connected graph K_AA, which indicates the semantic relationships to be predicted among the traffic actors. After that, we transform input graph G_semi into its dual graph, and use a GRU to update the status of the relationship edges at each iteration. Finally, the refined scene graph is obtained, as shown on the right side of the figure.
As Fig. 4 illustrates, during this stage three matrices are generated: feature matrix V_semi, adjacency matrix A_semi and edge feature matrix E_semi. The actor-to-actor subgraph is initialized as a fully-connected subgraph K_AA, whose node feature matrix consists of the actors' current feature vectors. For the map-to-map layer subgraph G_MM, we first obtain an OpenDRIVE-format HD map from the nuScenes dataset, using our previously proposed "Real-to-synthetic" method [71]. Following the OpenDRIVE standard, we also obtain (1) the driving directions of the roads and lanes; (2) the reference line and waypoints of each road/lane; (3) the connectivity among roads/lanes/intersections; and (4) the ID of each road component. The map components can then be connected based on their relationships and arranged into a graph G_MM. For the actor-to-map layer subgraph G_AM, the relationships are determined by the relative position of the actor bounding box's centroid with respect to the map component's shape polygon, as well as the actor's velocity. Based on these relationships, subgraph G_AM can be created to connect the map component and actor nodes.
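The assembly of the three subgraphs into one adjacency matrix might be sketched as follows; the indexing convention (actors first, then map nodes) and the function name are ours:

```python
import numpy as np

def build_semi_graph(n_actors, n_maps, am_edges, mm_edges):
    """Assemble the adjacency matrix of G_semi = G_AM ∪ G_MM ∪ K_AA.
    Actors occupy indices [0, n_actors); map nodes follow."""
    n = n_actors + n_maps
    A = np.zeros((n, n), dtype=int)
    # K_AA: fully connect all actor pairs (relationships to be predicted)
    for i in range(n_actors):
        for j in range(i + 1, n_actors):
            A[i, j] = A[j, i] = 1
    # G_AM and G_MM: prior edges from the rule system and the HD map
    for i, j in am_edges + mm_edges:
        A[i, j] = A[j, i] = 1
    return A

# 3 actors, 2 map nodes; actors 0,1 on lane 3, actor 2 on road 4, lane 3 in road 4
A = build_semi_graph(3, 2, am_edges=[(0, 3), (1, 3), (2, 4)], mm_edges=[(3, 4)])
print(int(A.sum()) // 2)  # 7 undirected edges: 3 (K_AA) + 3 (G_AM) + 1 (G_MM)
```

The corresponding V_semi and E_semi matrices would simply stack the node embeddings and (initially unknown) edge features in the same index order.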

C. RSG-GCN: SEMANTIC RELATIONSHIP PREDICTION NETWORK
Here we propose our novel model, the Road Scene Graph-Graph Convolutional Network (RSG-GCN). The RSG-GCN decomposes the probability Pr(α_i, α_j | M) that a pair of actors is located in the HD map into four factors. As demonstrated in our previous work [31], this decomposition allows us to integrate prior relationships such as "vehicle driving on the road" or "road next to intersection" into our graph inference model. Here, we model both the actor-to-map relationships Pr(r_αi→mp, r_αj→mq | α_i, α_j, m_p, m_q) and the map-to-map relationships as subgraphs G_AM and G_MM, respectively.
Our goal here is to refine semi-graph G semi by predicting the semantic relationships among the actors. During the pre-processing stage, these relationships are ignored, with the fully-connected layer (K AA ) filling that position. K AA will then be replaced by the predicted subgraph (G AA ), on the basis of an inference among actor-to-actor status α i , α j , actor-to-map relationship graph G AM , and map-to-map relationship graph G MM .
Using a method based on Iterative Message Passing [8], we also apply gated recurrent unit networks (GRU) during our graph refinement process. GRU is a popular technique which has been used in several graph network generation methods to propagate node messages in graphs [72], [73], [74]. However, in contrast to these models, our proposed method does not simultaneously predict the status of objects and their relationships. This is because the bounding boxes of objects can be obtained from the perception module in a self-driving system, using LIDAR auxiliary bounding box regression. Therefore, we can focus entirely on prediction of the actor relationship edges using the highly accurate, intelligent sensing system of the vehicle (which is achieved through sensor fusion). To fully utilize this capability, we remove the node information update step from the original Iterative Message Passing model.
The new graph inference model is shown on the far right of Fig. 4. Here, we first transform G semi into a dual graph. In [8], it was observed that if we treat the relationships between scene elements as separate nodes, the road scene graph becomes a bipartite graph. In other words, the node set V ∈ G can be separated into an object set and a relationship set, so we can transform the original graph into a dual graph. As a result, the edge prediction problem becomes an easier node prediction problem.
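The dual-graph construction can be sketched as follows: each relationship edge of the original graph becomes a node of the dual graph, and two dual nodes are connected when the original edges share an endpoint. The edge encoding is illustrative, not our actual data structure.

```python
def to_dual_graph(edges):
    """Turn each edge of the original graph into a node of the dual
    graph. Two dual nodes are connected when the original edges share
    an endpoint, so predicting edge labels becomes node prediction."""
    dual_nodes = list(edges)
    dual_edges = []
    for i in range(len(dual_nodes)):
        for j in range(i + 1, len(dual_nodes)):
            if set(dual_nodes[i]) & set(dual_nodes[j]):  # shared endpoint
                dual_edges.append((dual_nodes[i], dual_nodes[j]))
    return dual_nodes, dual_edges
```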
To enable message propagation on the graphs, we used a GRU [72], [75] model for the edge data, a simple but effective method that also avoids the vanishing-gradient problem caused by stacking graph convolutional layers. GRUs also outperform LSTMs when the scale of the dataset is limited [72]. For each node in the dual graph, we create a vector h t to indicate its hidden status. As every node in the dual graph shares the same update rule, the same set of parameters is shared among all the node GRUs. Eq. (4) lists the update-step formula for our GRU model. Here, σ(·) is the sigmoid function, all W are learnable parameters, and h t−1 is the previous hidden state. The update gate z t adjusts the contribution of the candidate state ĥ t relative to h t−1 . Finally, a one-hot output a t is obtained after a fully connected layer with parameters W l , b l . After the final graph GRU, we use a multi-layer perceptron (MLP) to simultaneously predict the node features V, edge features E, and adjacency matrix A G . At each iteration, only the status of the dual-graph nodes is updated, while the edge parameters remain constant; since the dual-graph nodes actually represent the edges of G semi , the dual graph's own edge data stay unchanged during this process. We then extract the components of subgraph G AA and replace K AA in G semi with G AA to produce the final output.
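The GRU update just described can be sketched in NumPy. This is the standard GRU cell with parameters shared across all dual-graph nodes; the weight layout and the pooled-message input x are our assumptions, not the paper's exact Eq. (4).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """One GRU update for a dual-graph node. x is the pooled message
    from neighbouring dual nodes, h_prev the node's hidden state;
    params is a (hypothetical) tuple of six weight matrices shared
    among all node GRUs."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h_prev)        # update gate z_t
    r = sigmoid(Wr @ x + Ur @ h_prev)        # reset gate r_t
    h_cand = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate state
    return (1.0 - z) * h_prev + z * h_cand   # blend old and candidate
```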

D. APPLICATION OF RSG: TRAFFIC SCENE RETRIEVAL
In addition to revealing the semantic relationships in traffic scenes, a key strength of our graph-based representation is that it allows a variety of potential downstream applications. Figure 5 shows schematic diagrams of two of these possible applications. One straightforward application is scene retrieval, which involves querying the dataset with a specific condition, such as "find scenes where two vehicles are waiting at an intersection, three pedestrians are crossing the intersection, and there are barriers nearby". To provide solid query results, we transform this scene retrieval task into a more suitable subgraph isomorphism problem. As Fig. 5 (A) shows, the user first manually translates the query into a road scene graph. Then, a modified VF2 graph search algorithm is used to find matching scenes and frames in the dataset, which are then compared with the original RSG. As shown in Algorithm 1, we need to check the categories of both the nodes and the edges.
A subgraph isomorphism problem is a computational task in which two graphs, G 1 and G 2 , are given as input, and the goal is to determine whether G 1 contains a subgraph that is isomorphic to G 2 . Our modified VF2 algorithm is outlined in Algorithm 1. In most situations, the time complexity of this task is O(n 2 ), where n is the maximum number of vertices of the two graphs. In state-of-the-art implementations such as VF2 [76], [77], the time complexity ranges between O(n 2 ) and O(n! · n), while spatial complexity is of the order O(n). Since n in our RSG dataset lies in the range of 15 to 140, the performance of our graph search algorithm is still acceptable.
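To make the category-aware matching concrete, the following is a compact backtracking sketch in the spirit of Algorithm 1: node and edge categories are checked during the search. It omits VF2's candidate-pruning rules, so it is only a stand-in for the modified VF2 implementation, and the dictionary-based graph encoding is ours.

```python
def find_subgraph(query, target):
    """Find a subgraph of `target` isomorphic to `query`, matching node
    and edge categories. Each graph is {"nodes": {id: category},
    "edges": {(u, v): category}} with undirected edges stored once."""
    q_nodes = list(query["nodes"])

    def edge_cat(edges, u, v):
        return edges.get((u, v)) or edges.get((v, u))

    def extend(mapping):
        if len(mapping) == len(q_nodes):
            return dict(mapping)          # complete match found
        qn = q_nodes[len(mapping)]        # next query node to map
        for tn, cat in target["nodes"].items():
            if tn in mapping.values() or cat != query["nodes"][qn]:
                continue
            # every query edge to an already-mapped node must exist
            # in the target with the same category
            ok = True
            for (qa, qb), qcat in query["edges"].items():
                other = qb if qa == qn else qa if qb == qn else None
                if other is not None and other in mapping:
                    if edge_cat(target["edges"], tn, mapping[other]) != qcat:
                        ok = False
                        break
            if ok:
                mapping[qn] = tn
                result = extend(mapping)
                if result is not None:
                    return result
                del mapping[qn]           # backtrack
        return None

    return extend({})
```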

E. APPLICATION OF RSG: SYNTHETIC TRAFFIC SCENE GENERATION
Here, we briefly introduce our previous work on RSG-based synthetic traffic scene generation [71]. The purpose of this work is to automatically generate digital twins of open-source traffic scene datasets, simulate them in the CARLA [78] and SUMO [79] simulators, and then generate multiple traffic scenes that are similar to a given scene for testing purposes. This work can be divided into two parts. In the first part, we designed a graph autoencoder to learn and generate synthetic RSGs. However, traffic scenes in CARLA or SUMO rely on quantitative information (speed, location, pose, etc.) about all actors. To generate such scenes from semantic relationship information, we therefore developed a grounding mechanism, which places each object node at the correct geometric position and assigns it an initial status.
The grounding mechanism works as follows. First, the generated RSG graph contains a certain number of nodes corresponding to map elements, and we define an HD map set in advance (maps from nuScenes, converted to the OpenDRIVE format). Based on a topological graph matching method (VF2), we find a map (or a series of maps) that contains all of the map nodes in the traffic scene. Then, a set of random generation mechanisms is used to place objects on the map based on the semantic relationships between the objects and the map components. For example, if there is a "driving-on" relationship between a vehicle and a certain lane, the program randomly places the vehicle at a legal position on that lane and then assigns it an initial velocity in the Frenet coordinate system. At the same time, a "destination" is assigned to each object, i.e., the position the object will be in after a certain period of time. Finally, we use SUMO to simulate the generated initial status of the traffic scene, obtain the trajectory of each object, and handle issues such as traffic lights, pedestrian avoidance, and object interactions. The simulation results contain all quantitative information about each object at each moment (speed, position, orientation, etc.), which can be passed into CARLA through the CARLA ROS bridge for simulation (with maps loaded using the OpenDRIVE loading function in the CARLA dev branch).
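The lane-placement step can be illustrated with a minimal Frenet-to-Cartesian conversion: given a lane's centerline waypoints, a sampled arc-length s and lateral offset d yield a position and heading. This is a simplified sketch of the grounding idea, not the actual implementation.

```python
import math

def frenet_to_xy(waypoints, s, d):
    """Place an object at arc-length s along a polyline centerline,
    with lateral offset d (positive = left of the driving direction).
    Returns (x, y, heading)."""
    for (x1, y1), (x2, y2) in zip(waypoints, waypoints[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        if s <= seg:
            ux, uy = (x2 - x1) / seg, (y2 - y1) / seg  # unit tangent
            nx, ny = -uy, ux                           # left normal
            return (x1 + ux * s + nx * d,
                    y1 + uy * s + ny * d,
                    math.atan2(uy, ux))
        s -= seg
    raise ValueError("s exceeds centerline length")
```

A legal position can then be drawn with, for example, `s = random.uniform(0, lane_length)` and a small |d| within the lane width.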

V. EXPERIMENTS
In this section, we describe our evaluation of the proposed method, an RSG (gated GRU + dual graph) model. We do this by comparing its performance when predicting semantic relationships between traffic scene actors, in the form of graph edge data, with that of other graph prediction models, as well as with non-graph-based learning models, as listed in Table 5. We conducted this evaluation using the RSG dataset introduced in Section IV-A.

A. BASELINE MODELS
Here, we compare our proposed RSG-GCN model, described in Section IV with the following methods: (1) Vanilla GNN model with simple GNN stack; (2) Vanilla CNN model with simple CNN stack; (3) Iterative Message Passing model [8]; (4) Graph VAE [80] model with autoencoder, without prior geometric information (our previously proposed road scene graph generation model); and (5) Pairwise prediction model.

1) VANILLA GNN
In this model, we used the simple, 3-layer GNN model from [18] to learn the graph embedding of given semi-RSG G semi = G AM ∪ G MM ∪ K AA . We ignored the relationship categories in G semi because vanilla GNN does not support multi-relational data. Instead, we used the adjacency matrix and node feature matrix of G semi as this model's input. We then used 2 MLP layers to predict both the adjacency matrix and edge data category matrix of G AA , as shown in Eq. (5).
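For reference, one propagation step of such a vanilla GNN stack might look like the following mean-aggregation GCN layer. This is a generic sketch; the exact layer from [18] may differ.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One vanilla GCN propagation step, H' = ReLU(D^-1 (A + I) H W).
    A is the adjacency matrix, H the node feature matrix, W a learnable
    weight matrix; edge categories are ignored, as in the baseline."""
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)   # mean aggregation
    return np.maximum(0.0, D_inv * (A_hat @ H) @ W)  # ReLU
```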
2) VANILLA CNN
Similar to the approach used in [44], we used ConvLSTM [45], an extension of the classic LSTM architecture, to learn the features of pre-rasterized, rendered data.
The input for this model was 2D rasterized images, as shown in Fig. 6 (D). We also used a fully-connected layer to predict both the adjacency matrix and edge data category matrix of G AA .

3) ITERATIVE MESSAGE PASSING
Here we used the full version of the IMP (Iterative Message Passing) model [8], [81]. Compared to our proposed RSG model described in Section IV-C, this model contains both a node-data message-pooling module and an edge-data message-pooling module. One challenge with this approach is that its node features are obtained from learned feature representations of small image patches contained within bounding boxes, while its edge features are obtained from an ROI-pooling layer over the images. As these features are not available for our RSG prediction task, we use the node/edge features from G semi instead, and modify the model to fit the dimensions of our inputs and outputs.

4) GRAPH AUTOENCODER
GraphVAE [80] is another popular method for generating small graphs. The key idea of this approach is to train an encoder to produce a latent representation z of a given graph G(A, E, V), and then to generate a fully-connected graph G̃(Ã, Ẽ, Ṽ) from z. Finally, the top-K predictions selected from the edge data Ẽ are used to construct the graph. The whole model is trained by minimizing the reconstruction loss, as shown in Eq. (6).
The three losses were defined as follows. Let Ã be the adjacency matrix, Ẽ the edge embedding tensor and F̃ the node embedding matrix of the generated graph G̃, and let X be the node matching matrix; then A′ = X Ã X T , F′ = X T F̃, and E′ ·,·,l = X T Ẽ ·,·,l X. When training this model, we first use the complete loss of Eq. (6), and then remove the log p(V|z) term at the end of the training process to maximize relationship-prediction accuracy. Before the encoder, a global feedforward pooling layer was used instead of pooling in each layer, as proposed in [82].

5) GATED GRU+DUAL GRAPH
This is the model proposed in this paper, which is described in detail in Section IV-C.

6) PAIRWISE PREDICTION
This is a non-graph method that is entirely different from the other models: it takes every possible pairing of actors as its input and predicts the categories of their potential relationships. Because the input and output spaces are very limited (even smaller than MNIST), we used a stack of three fully-connected layers for the model structure. Nodes representing HD map components are handled in the same manner as pairs of traffic actors. As shown in Table 7, this method is the only one whose performance dropped when prior geo-relationship information was used.
Evaluation results for the six methods described above are shown in Table 7. The proposed model outperformed all other methods, with or without prior geo-relationship information M. When such prior topological knowledge is integrated into the proposed model, the accuracy in terms of recall at 20 predictions (R@20) and at 50 predictions (R@50), both discussed in Section V-B, increases, outweighing the performance loss caused by the significant increase in graph size.
In this research, we not only represent actors as graph nodes, but also treat HD map components as nodes. However, these map-component nodes differ from the actor nodes, whose status is represented geometrically by a 3D bounding box. Because the shapes of roads, lanes and intersections are quite distinctive, they are difficult to represent using fixed-length feature vectors. Therefore, we evaluated several methods of integrating map assets into our inference model, such as using rasterized images, component IDs or bounding boxes to represent a draft version of the map assets. Fig. 6 shows examples of qualitative results for the different map feature learning methods, and Figure 7 shows the prediction accuracy of our proposed model when trained with each HD map data integration method. The performance of our model peaked when using the minimum surrounding rectangle (MSR) method. Compared to a normal bounding box, an MSR provides an additional rotational degree of freedom and is much closer in shape to the majority of map components. Figure 7 (left) shows the reason for the poor performance of the polygon GNNs: the number of vertices of the various map components follows a long-tail distribution, and polygons with around 100 vertices contain too many points for a simple RNN to learn their features. Nevertheless, the polygon GNN outperforms simpler representations such as the plain bounding box. Figure 7 (right) shows the IOUs of the ground-truth polygons with bounding boxes vs. MSRs. The majority of the IOUs of the minimum surrounding rectangles are in the 0.75 to 1.0 range, so MSR is a simple and appropriate way to generate HD map representations. The model recall performance shown in Table 6 also supports this conclusion.
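The difference between a normal (axis-aligned) bounding box and the MSR can be illustrated as follows: for a convex polygon, the MSR is found by aligning a candidate box with each polygon edge and keeping the smallest. This is the standard rotating-calipers-style construction, not our pipeline's code, and it assumes the input vertices already form a convex hull.

```python
import math

def msr_area(convex_polygon):
    """Area of the minimum surrounding rectangle of a convex polygon,
    found by aligning a candidate box with each polygon edge.
    Assumes the vertex list describes a convex hull."""
    pts = convex_polygon
    best = float("inf")
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        theta = math.atan2(y2 - y1, x2 - x1)
        c, s = math.cos(-theta), math.sin(-theta)
        # rotate all vertices so this edge is axis-aligned, take AABB
        xs = [c * x - s * y for x, y in pts]
        ys = [s * x + c * y for x, y in pts]
        best = min(best, (max(xs) - min(xs)) * (max(ys) - min(ys)))
    return best

def aabb_area(polygon):
    """Area of the plain axis-aligned bounding box, for comparison."""
    xs = [x for x, _ in polygon]
    ys = [y for _, y in polygon]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))
```

For a square patch rotated by 45°, the MSR recovers the true area while the axis-aligned box doubles it, illustrating the IOU gap discussed above.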

B. EVALUATION CRITERIA
Top-K recall has been widely used to measure prediction accuracy in previous studies on graph prediction [4], [5], [8], [9], [44]. As shown in Eq. (10), this metric represents the percentage of ground-truth relationships "GT" hit by the top K predictions. The reason this metric is so widely used is that for most scene graph datasets, such as Visual Genome [58], GQA [83] and our RSG dataset, it is neither necessary nor possible to annotate all potential relationships; the ambiguity of natural language makes this impossible for common scene graph generation. Furthermore, the duration or distance of a specific traffic actor's appearance may not be sufficient for annotators to create an appropriate annotation for the road scene graph. As a result, most research in this area only counts true-positive predictions when assessing accuracy, and the use of R@K recall has become common practice [8] to avoid penalizing positive predictions of unlabeled relationships. Also, because relationship prediction in traffic scenes requires a higher level of accuracy than common scene graph generation tasks in order to ensure traffic safety, a stricter accuracy metric is used for performance evaluation.

R@K = |{top-K predicted relationships} ∩ GT| / |GT| (10)
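In code, the metric amounts to the following; the (subject, predicate, object) triple encoding is illustrative.

```python
def recall_at_k(scored_predictions, ground_truth, k):
    """R@K: fraction of ground-truth relationship triples found among
    the top-K scored predictions. Predictions outside GT are simply
    ignored, not penalized, per the rationale above."""
    ranked = sorted(scored_predictions, key=lambda p: p[1], reverse=True)
    top_k = {triple for triple, _ in ranked[:k]}
    hits = len(top_k & set(ground_truth))
    return hits / len(ground_truth)
```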
We also used an ego-centric R@K metric because both the nuScenes and RSG datasets are recorded from an ego-centric point of view, and annotation quality appears to be related to the Euclidean or topological distance between the targeted actor and the ego-vehicle. To evaluate this phenomenon, Table 8 shows our results when we extract subgraphs for relationship prediction based on topological distance (1, 3 or 5 hops) or Euclidean distance (10, 20 or 50 meters) from the ego-vehicle, and then use the R@K metric to evaluate prediction accuracy on these extracted subgraphs. Figure 8 shows three examples of actual road scene graphs generated using our proposed method. The graph of an entire road scene is often composed of tens of nodes and edges, so to simplify our performance evaluation, we cropped the road scene graphs and generated ego-vehicle-oriented subgraphs. As the samples in Figure 8 illustrate, our model can generate good-quality scene graphs for traffic scenes. When using the R@K performance metric instead of mAP, the predicted results show excellent structural consistency with the ground truth of these scenes.
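The hop-based cropping used for Table 8 can be sketched with a plain breadth-first search; the adjacency-dict encoding is ours, not the paper's.

```python
from collections import deque

def ego_subgraph(adjacency, ego, max_hops):
    """Return the set of nodes within `max_hops` topological hops of
    the ego-vehicle node. `adjacency` maps each node to its
    neighbours; a BFS tracks the hop distance of each node."""
    dist = {ego: 0}
    queue = deque([ego])
    while queue:
        node = queue.popleft()
        if dist[node] == max_hops:
            continue  # do not expand past the hop limit
        for nb in adjacency.get(node, ()):
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return set(dist)
```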

C. QUALITATIVE RESULTS
Most prediction errors occur among ambiguous relationships, such as "waiting for" and "passing by", or in situations where the geometry of the relationships changes drastically. For example, in Scene 0061 at t = 7, the ego vehicle is following the vehicle in front of it as the road curves to the left. Human annotators can understand these types of relationships by watching what happens in the subsequent few seconds; without such information about the future, these relationships are challenging to predict.

D. QUANTITATIVE ANALYSIS FOR VARIOUS MODEL STRUCTURES
In this subsection, we quantitatively analyze the prediction performance of our proposed model by comparing the prediction error and R@K recall of our method with those of the baseline methods described in Section V-A. As noted previously, all models were trained on the same Road Scene Graph (RSG) dataset. Table 7 shows the prediction performance of our model and the baseline models with and without prior geographic relationship information. The performance of the vanilla baselines indicates that even simple GNN stacks can learn the features of the semi-graph G semi and make reasonably accurate predictions. However, when a small dataset like RSG is used for training, the use of rendered images degrades performance due to under-fitting.
Our proposed model (RSG with gated GRU + dual graph) achieved the best performance, outperforming the graph autoencoder model by 10% on the R@50 metric when prior geographic relationship information was available. However, Iterative Message Passing (IMP), the basis of our proposed model, was not as accurate as our proposed, pruned variant. The goal of the original IMP model is to simultaneously predict each actor's status (position, velocity, etc.) and the semantic relationships of the traffic scene; in contrast, the actor status inference module has been removed from our proposed RSG model. Note that the IMP model performs 11.4% better in terms of R@50 on the dataset without prior geo-relationship information than when this information is provided. This is likely because the integration of prior geo-relationship information significantly increases the graph's diameter: although the IMP model benefits from the additional information, its accuracy suffers from the enlarged output space. On the other hand, the accuracy of the pairwise prediction model does not suffer, as it does not receive the whole graph as input, so it neither benefits nor suffers from prior knowledge.

E. EVALUATION OF EGO-CENTRIC ACCURACY
We also measured the accuracy and recall of these models using the ego-centric metric defined in Section V-B. As mentioned previously, we assumed that our dataset has an ego-centric bias due to the method used to generate and annotate the RSG dataset. The results of this experiment, shown in Table 8, support this hypothesis: relationship prediction accuracy increases significantly for actors closer to the ego-vehicle, for all models. Even when we crop the subgraph to a wider range (50 meters or five hops), prediction accuracy is still significantly higher than for the full-graph prediction results. This confirms that the ego-centric bias originates in the RSG dataset's ground-truth relationship and bounding-box annotations.
There are two possible reasons for this ego-centric bias: (1) Since the dataset is composed of drive recordings made by different vehicles, the quality of the relationship annotations may change dramatically at the periphery of the lidar's or camera's field of view (FOV), as overlapping, appearance and vanishing occur more often in the peripheral areas of traffic scenes. (2) The viewpoint of our relationship annotation system is itself ego-centric: since we only provide the view from the camera mounted on the ego vehicle, annotators may, unconsciously, label more relationships between the ego-vehicle and its surrounding objects than relationships between the non-ego actors. A simple experiment confirms the second hypothesis. When the ego-vehicle view in 10 scenes was changed to the view of a random vehicle (not too far from the ego-vehicle, to avoid eliminating all the surrounding actors), annotations for the selected non-ego vehicle increased by 21.1%. This result suggests that a fixed bird's-eye view (BEV) would be fairer and more objective. However, our ego-centric dataset may be more appropriate for ego-vehicle-related research, such as identifying potential risks around the ego-vehicle or predicting the ego vehicle's trajectory.

F. ABLATION EXPERIMENTS
These experiments evaluated how prior knowledge of the topological map benefits the relationship prediction task. Table 9 shows the results of removing specific layers of the road scene graph (left), and of randomly removing various amounts of prior knowledge (right). These results indicate that the IMP, autoencoder (GraphVAE) and proposed models are the most affected by the removal of prior knowledge. Although unaffected by the removal of this topological data, the simple GNN model is still unable to outperform these three models.
Among the three layers of RSG data, the results in Table 9 indicate that removing the "map-to-actor" layer degrades prediction performance the most. When the prior relationship information is randomly removed, the performance of most models rapidly decreases, to a level even lower than the "without prior geo-relationship information" condition in Table 7. This decrease suggests that grounding the actors to the HD map greatly boosts prediction accuracy. However, some models, such as the IMP model, suffer less than others when prior relationship information is removed. Note also that when we randomly removed just a small amount of relationship information (5%), the accuracy of our proposed model's predictions increased slightly (by 1.05%). This suggests that the random removal of a small amount of prior knowledge could be a safe method of graph data augmentation.
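This augmentation can be sketched as follows, assuming each edge carries a flag marking it as prior map knowledge (the flag and the tuple encoding are our assumptions, not the paper's data format).

```python
import random

def drop_prior_edges(edges, drop_ratio, seed=None):
    """Randomly drop a fraction of the prior (map-relationship) edges.
    Each edge is (u, v, is_prior); actor-to-actor edges (is_prior
    False) are never removed."""
    rng = random.Random(seed)
    kept = []
    for edge in edges:
        if edge[2] and rng.random() < drop_ratio:
            continue  # drop this prior-knowledge edge
        kept.append(edge)
    return kept
```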

G. APPLICATIONS OF RSG
Here we present some qualitative results on synthetic traffic scene generation from our previous study, which proposed a Real-to-Synthetic [71] traffic scene generation method that produces near-realistic, simulator-friendly traffic scenes from graph-based scene representations. Detailed results on digital-twin generation and synthetic scene generation can be found in our previous work [2].

VI. CONCLUSION AND FUTURE WORK
The goal of this paper is to improve understanding of urban traffic scenes by accurately predicting semantic relationships among traffic actors. To accomplish this task, we first created and annotated a Road Scene Graph dataset containing
traffic scenes with multiple semantic relationship annotations linking pairs of actors. We then proposed the RSG-GCN model to predict this graph. Our model first generates traffic-actor and HD-map node features. These node features are then integrated into a semi-graph by determining the geometric relationships among the actors and map components. Finally, a graph refinement model is proposed that leverages actor status information and prior HD map information to predict the semantic relationships among the actors. The proposed Road Scene Graph traffic scene modeling approach provides a novel way to describe traffic scenes at both the geometric and semantic levels. Our experimental results indicate that our proposed relationship prediction model outperforms other popular methods. Future work will focus on how recent advances in common SGG (Scene Graph Generation) tasks can be applied to improve prediction in the RSG domain, and on how RSG can benefit other tasks, such as traffic scene captioning and risk detection.