Incorporating Attributes Semantics into Knowledge Graph Embeddings | IEEE Conference Publication | IEEE Xplore

Incorporating Attributes Semantics into Knowledge Graph Embeddings



Abstract:

A growing body of work has focused on incorporating different kinds of literals into Knowledge Graphs to improve the performance of knowledge graph embedding. These literals include numeric literals, text literals, image literals, and so on. Such additional descriptions are connected to entities through certain attributes. To incorporate numeric literals, some methods combine the embeddings of the literal part with the traditional part, the embeddings of entities. However, when constructing literal embeddings, these existing methods only consider the differences among attributes: one dimension represents one attribute. They ignore the semantic meanings of the attributes themselves. In this paper, we propose two methods, LiteralE-AN and LiteralE-AT, that incorporate attribute semantics into knowledge graph embeddings from two perspectives; they concatenate attribute embeddings with the embeddings of numeric literals in different ways. Furthermore, we also propose their extension, LiteralE-C, which provides a more comprehensive representation of attribute semantics. In an empirical study over two standard datasets, FB15k and FB15k-237, we evaluate our models on link prediction. We demonstrate that they offer an effective way to improve LiteralE and achieve state-of-the-art results. In ablation experiments, we find that the combined models do better than their singular counterparts in most cases.
Date of Conference: 05-07 May 2021
Date Added to IEEE Xplore: 28 May 2021
Conference Location: Dalian, China

I. Introduction

In the past decade, Knowledge Graphs (KGs) have motivated many knowledge-driven applications, such as question answering and data integration. DBpedia [1], Freebase [2] and YAGO3 [3] are well-known examples of KGs. They store knowledge in triples that contain two entities and their relationship (for example, in the triple <Tokyo, capitalOf, Japan>, Tokyo and Japan are entities and capitalOf is their relationship). Representation learning for modeling KGs is an important task, and it is the backbone of downstream tasks such as link prediction and entity classification. KG representation learning, also known as KG embedding, aims to encode KG structures into low-dimensional embeddings. These quantified embeddings can capture global patterns (also called structure-based information) and make it possible to compute the plausibility of a given triple. Such open KGs and learned embeddings could benefit collaborative computing work.
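To make the idea of scoring a triple with low-dimensional embeddings concrete, the following sketch uses a TransE-style translation distance, one common structure-based scoring function (the paper itself builds on LiteralE, whose base scorers differ). All names, dimensions, and random vectors here are hypothetical placeholders, not trained values:

```python
import numpy as np

# Hypothetical toy KG: a few entities, one relation, random (untrained) vectors.
rng = np.random.default_rng(0)
dim = 4
entities = {name: rng.normal(size=dim) for name in ["Tokyo", "Japan", "Paris"]}
relations = {"capitalOf": rng.normal(size=dim)}

def score(head, rel, tail):
    """TransE-style plausibility: higher (less negative) means the triple
    <head, rel, tail> is considered more plausible, since TransE wants
    head + rel to land near tail in embedding space."""
    return -np.linalg.norm(entities[head] + relations[rel] - entities[tail])

# Link prediction then amounts to ranking candidate tails by this score.
print(score("Tokyo", "capitalOf", "Japan"))
```

In a real system the vectors are learned by minimizing a ranking loss over observed triples against corrupted ones; here the random vectors only illustrate the scoring mechanics.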

Fig. 1. Knowledge graph with numeric literals. Literals are linked to entities through attributes; these attributes are not only distinct from one another, they also carry semantic meanings.
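The point of the figure, that an attribute is more than a distinct slot and carries meaning of its own, can be sketched by pairing a learned attribute embedding with the literal's numeric value before it is combined with the entity embedding. This is an illustrative sketch under assumed names and dimensions, not the paper's actual LiteralE-AN/LiteralE-AT construction:

```python
import numpy as np

# Hypothetical learned embedding for the attribute itself (e.g. "height"),
# so that the attribute contributes semantics, not just a dimension index.
rng = np.random.default_rng(1)
dim = 4
attribute_emb = {"height": rng.normal(size=dim)}

def literal_feature(attribute, value):
    """Concatenate the attribute's semantic embedding with its numeric value,
    giving a (dim + 1)-vector a downstream model can fuse with entity
    embeddings (illustrative; the actual fusion functions differ)."""
    return np.concatenate([attribute_emb[attribute], [value]])

feat = literal_feature("height", 3776.0)  # e.g. a <Mt. Fuji, height, 3776> literal
print(feat.shape)
```

Under the one-dimension-per-attribute scheme criticized in the abstract, "height" would only be an index; here its embedding lets semantically similar attributes produce similar features.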

