000K  utf8
1100  $c2023
1500  eng
2050  urn:nbn:de:gbv:8:3-2023-00247-4
2051  10.21941/kcss/2023/1
3000  Galke, Lukas Paul Achatius
4000  Representation Learning for Texts and Graphs$dA Unified Perspective on Efficiency, Multimodality, and Adaptability$hChristian-Albrechts-Universität zu Kiel [Galke, Lukas Paul Achatius]
4030  Kiel$nChristian-Albrechts-Universität zu Kiel
4209  [...] This thesis is situated between natural language processing and graph representation learning and investigates selected connections. First, we introduce matrix embeddings as an efficient text representation that is sensitive to word order. [...] Experiments with ten linguistic probing tasks, eleven supervised, and five unsupervised downstream tasks reveal that vector and matrix embeddings have complementary strengths and that a jointly trained hybrid model outperforms both. Second, a popular pretrained language model, BERT, is distilled into matrix embeddings. [...] The results on the GLUE benchmark show that these models are competitive with other recent contextualized language models while being more efficient in time and space. Third, we compare three model types for text classification: bag-of-words, sequence-based, and graph-based models. Experiments on five datasets show that, surprisingly, a wide multilayer perceptron on top of a bag-of-words representation is competitive with recent graph-based approaches, questioning the necessity of graphs synthesized from the text. [...] Fourth, we investigate the connection between text and graph data in document-based recommender systems for citations and subject labels. Experiments on six datasets show that the title as side information improves the performance of autoencoder models. [...] We find that the meaning of item co-occurrence is crucial for choosing the input modalities and an appropriate model. Fifth, we introduce a generic framework for lifelong learning on evolving graphs, in which new nodes, edges, and classes appear over time. [...] The results show that, by reusing previous parameters in incremental training, it is possible to employ smaller history sizes with only a slight decrease in accuracy compared to training with the complete history. Moreover, weighting the binary cross-entropy loss function is crucial to mitigate class imbalance when detecting newly emerging classes. [...]
4950  https://doi.org/10.21941/kcss/2023/1$xR$3Volltext$534
4950  https://nbn-resolving.org/urn:nbn:de:gbv:8:3-2023-00247-4$xR$3Volltext$534
4961  https://macau.uni-kiel.de/receive/macau_mods_00003566
5051  004
5550  Autoencoders
5550  Continual Learning
5550  Deep Learning
5550  Evolving Graphs
5550  Graph Representation Learning
5550  Information Retrieval
5550  Knowledge Distillation
5550  Lifelong Learning
5550  Machine Learning
5550  Multilayer Perceptrons
5550  Natural Language Processing
5550  Neural Networks
5550  Out-of-distribution Detection
5550  Recommender Systems
5550  Representation Learning
5550  Text Classification
5550  Text Representation Learning
5550  Transformers
5550  Unseen Class Detection
5550  Word Embeddings