Multilingual Knowledge Graph Embeddings for Cross-lingual Knowledge Alignment
The paper proposes MTransE, a translation-based model for multilingual knowledge graph embeddings, as a simple and automated solution to cross-lingual knowledge alignment. By encoding the entities and relations of each language in a separate embedding space, MTransE provides transitions from each embedding vector to its cross-lingual counterparts in the other spaces, while preserving the functionality of the monolingual embeddings. The authors deploy three different techniques to represent cross-lingual transitions, namely axis calibration, translation vectors, and linear transformations, and derive five variants of MTransE using different loss functions. An interesting property of these models is that they can be trained on partially aligned graphs, where only a small portion of the triples are aligned with their cross-lingual counterparts.
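A minimal numpy sketch of the linear-transformation variant, using toy sizes and random initialization rather than trained weights (the entity/relation counts, variable names, and matrices `M_e`/`M_r` are illustrative assumptions, not the paper's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50
n_entities, n_relations = 100, 10  # toy vocabulary sizes (assumed)

# Separate embedding spaces per language (here: English and French).
E_en = rng.normal(size=(n_entities, dim))
R_en = rng.normal(size=(n_relations, dim))
E_fr = rng.normal(size=(n_entities, dim))
R_fr = rng.normal(size=(n_relations, dim))

# Linear-transformation variant: matrices mapping one space into the other.
M_e = rng.normal(size=(dim, dim))  # entity transition
M_r = rng.normal(size=(dim, dim))  # relation transition

def transe_score(h, r, t):
    """Monolingual TransE energy ||h + r - t|| within one language's space."""
    return np.linalg.norm(h + r - t)

def alignment_score(triple_en, triple_fr):
    """Cross-lingual energy for an aligned triple pair: how far the
    transformed English vectors land from their French counterparts."""
    (h, r, t), (h2, r2, t2) = triple_en, triple_fr
    return (np.linalg.norm(M_e @ E_en[h] - E_fr[h2])
            + np.linalg.norm(M_r @ R_en[r] - R_fr[r2])
            + np.linalg.norm(M_e @ E_en[t] - E_fr[t2]))
```

Training would then minimize a weighted sum of the monolingual TransE terms over all triples and the alignment terms over the (partially available) aligned triple pairs, which is why only a small seed of alignments is needed.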
Cross-lingual Knowledge Graph Alignment via Graph Convolutional Networks
This paper proposes a cross-lingual KG alignment method based on GCNs. Given a set of pre-aligned entities, the authors train GCNs to embed the entities of each language into a unified vector space, and discover entity alignments from the distances between entities in that space. Embeddings are learned from both the structural and the attribute information of entities, and the results of the structure and attribute embeddings are combined to obtain more accurate alignments.
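A sketch of the alignment step only, assuming the GCNs have already produced structure and attribute embeddings in a unified space; the random matrices stand in for GCN outputs, and the weight `beta` and the L1 distance are assumptions about how the two embedding types are combined:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_s, d_a = 5, 8, 4  # toy entity count and embedding dims (assumed)

# Stand-ins for trained GCN outputs: structure (Hs) and attribute (Ha)
# embeddings for entities of two languages, e.g. Chinese and English.
Hs_zh, Ha_zh = rng.normal(size=(n, d_s)), rng.normal(size=(n, d_a))
Hs_en, Ha_en = rng.normal(size=(n, d_s)), rng.normal(size=(n, d_a))

beta = 0.9  # relative weight of structure vs. attribute distance (assumed)

def pairwise_l1(A, B):
    """L1 distance between every row of A and every row of B."""
    return np.abs(A[:, None, :] - B[None, :, :]).sum(-1)

# Combined distance: weighted sum of structure and attribute distances,
# each normalized by its embedding dimensionality so neither dominates.
D = (beta * pairwise_l1(Hs_zh, Hs_en) / d_s
     + (1 - beta) * pairwise_l1(Ha_zh, Ha_en) / d_a)

# Each source entity is aligned to its nearest neighbor in the target space.
alignment = D.argmin(axis=1)
```

In practice candidates would be ranked by this distance (Hits@k) rather than taking only the single nearest neighbor.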
The authors present XLORE2, a KG that extends an earlier English-Chinese KG (XLORE) by adding more facts, more cross-lingual knowledge links, cross-lingual property matching, and fine-grained type inference. They also design an entity linking system that demonstrates the coverage and effectiveness of XLORE2.
I think that this GitHub repo about entity alignment in knowledge graphs could be useful:
MTransE is linked above; this is the authors' GitHub repo.