Graph Attention Networks for Graph Learning and Its Applications

dc.contributor.advisor: Huang, Jimmy
dc.contributor.author: Zhu, Runjie
dc.date.accessioned: 2022-08-08T15:41:59Z
dc.date.available: 2022-08-08T15:41:59Z
dc.date.copyright: 2021-11-26
dc.date.issued: 2022-08-08
dc.date.updated: 2022-08-08T15:41:59Z
dc.degree.discipline: Computer Science
dc.degree.level: Doctoral
dc.degree.name: PhD - Doctor of Philosophy
dc.description.abstract: This thesis addresses and investigates the recent development of graph attention network (GAT) models in the following three aspects: (1) GATs on single-graph learning via the Knowledge Graph Embedding (KGE) task, (2) GATs on multiple-graph learning via the Cross-lingual Entity Alignment (CEA) task, and (3) GATs on ongoing real-world problems via the COVID-19 node classification task. These three aspects of research complement each other and cover a wide range of graph learning tasks, demonstrating the effectiveness and robustness of GAT-based models. First, GAT has recently demonstrated its strengths in the KGE task. Although GAT has proven promising in achieving state-of-the-art (SOTA) performance in KGE, the performance of current GAT-based models remains largely restrained. In this thesis, we propose a novel bidirectional graph attention network (BiGAT) that leverages GATs to learn hierarchical neighbor propagation in a bidirectional manner. Second, past studies of multiple-graph learning for CEA tend to use traditional approaches to find equivalent entities in the counterpart knowledge graph (KG). These traditional methods tend to miss important structural information beyond entities in the modeling process. Many GNN-based models embed each KG independently; moreover, they tend to either underrate the usefulness of pre-aligned links or utilize only a few pre-aligned entities to connect different KGE spaces. These characteristics largely restrain model performance. To address these issues, we propose two novel GAT-based models, the Contextual Alignment Enhanced Cross Graph Attention Network (CAECGAT) and the Dual Gated Graph Attention Network with Dynamic Iterative Training (DuGa-DIT), to effectively learn embeddings from different KGs, capture more neighborhood features, and propagate more significant cross-KG information through pre-aligned seed alignments. Third, recent studies on graph learning for COVID-19 have shown the possibility of leveraging deep-learning models to classify infected cell nodes. Thus, another important part of this thesis is devoted to designing an effective GAT-based model for node classification. Our proposed method, the graph attention capsule network (GACapNet), delivers significantly better results than baseline models. Moreover, our study may also indicate predictive features that help close existing knowledge gaps in the pathogenesis of COVID-19 pneumonia.
dc.identifier.uri: http://hdl.handle.net/10315/39556
dc.language: en
dc.rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
dc.subject: Computer science
dc.subject.keywords: Graph learning
dc.subject.keywords: Knowledge graph
dc.subject.keywords: Natural language processing
dc.subject.keywords: Graph attention networks
dc.subject.keywords: Knowledge graph embedding
dc.subject.keywords: Cross-lingual entity alignment
dc.subject.keywords: Node classification
dc.title: Graph Attention Networks for Graph Learning and Its Applications
dc.type: Electronic Thesis or Dissertation
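
For readers unfamiliar with the graph attention mechanism that the abstract's models build on, the following is a minimal single-head GAT layer sketch in PyTorch, following the standard formulation of Veličković et al. (2018). It is an illustrative assumption only: it is not the thesis's BiGAT, CAECGAT, DuGa-DIT, or GACapNet implementation, and the class and variable names are hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GATLayer(nn.Module):
        # Minimal single-head graph attention layer (standard GAT, not the thesis models).
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared node-feature projection
            self.a = nn.Linear(2 * out_dim, 1, bias=False)   # attention score on [z_i || z_j]

        def forward(self, h, adj):
            # h: [N, in_dim] node features; adj: [N, N] adjacency with self-loops (1 = edge)
            z = self.W(h)                                     # [N, out_dim]
            n = z.size(0)
            z_i = z.unsqueeze(1).expand(n, n, -1)             # each node repeated as target
            z_j = z.unsqueeze(0).expand(n, n, -1)             # each node repeated as source
            e = F.leaky_relu(self.a(torch.cat([z_i, z_j], dim=-1)).squeeze(-1))  # raw scores
            e = e.masked_fill(adj == 0, float("-inf"))        # keep only true neighbors
            alpha = torch.softmax(e, dim=-1)                  # normalized attention per node
            return F.elu(alpha @ z)                           # attention-weighted aggregation

    # Toy usage on a hypothetical 4-node graph with self-loops.
    adj = torch.tensor([[1, 1, 0, 0],
                        [1, 1, 1, 0],
                        [0, 1, 1, 1],
                        [0, 0, 1, 1]], dtype=torch.float)
    h = torch.randn(4, 8)
    out = GATLayer(8, 16)(h, adj)                             # -> tensor of shape [4, 16]

Attention weights are computed only over each node's neighborhood (the adjacency mask), which is what lets GAT-based models such as those described above weight neighbor contributions differently during propagation.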

Files

Original bundle
Name: Zhu_Runjie_2021_PhD.pdf
Size: 6.16 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.87 KB
Format: Plain Text

Name: YorkU_ETDlicense.txt
Size: 3.39 KB
Format: Plain Text

Collections