
Commit b81f4ae
added preprints on continual learning
1 parent 6b42443 commit b81f4ae

File tree

1 file changed: +44 -0 lines


_data/publications.yaml

Lines changed: 44 additions & 0 deletions
@@ -1,4 +1,48 @@
 ---
+- title: "The Unreasonable Effectiveness of Randomized Representations in Online Continual Graph Learning"
+  venue: Preprint
+  year: 2025
+  authors:
+    - G Donghi
+    - id:dzambon
+    - L Pasa
+    - id:calippi
+    - N Navarin
+  keywords:
+    - continual learning
+    - nonstationary environments
+  abstract: "Catastrophic forgetting is one of the main obstacles for Online Continual Graph Learning (OCGL), where nodes arrive one by one, distribution drifts may occur at any time and offline training on task-specific subgraphs is not feasible. In this work, we explore a surprisingly simple yet highly effective approach for OCGL: we use a fixed, randomly initialized encoder to generate robust and expressive node embeddings by aggregating neighborhood information, training online only a lightweight classifier. By freezing the encoder, we eliminate drifts of the representation parameters, a key source of forgetting, obtaining embeddings that are both expressive and stable. When evaluated across several OCGL benchmarks, despite its simplicity and lack of memory buffer, this approach yields consistent gains over state-of-the-art methods, with surprising improvements of up to 30% and performance often approaching that of the joint offline-training upper bound. These results suggest that in OCGL, catastrophic forgetting can be minimized without complex replay or regularization by embracing architectural simplicity and stability."
+  bibtex: >
+    @misc{donghi2025unreasonable,
+      title={The Unreasonable Effectiveness of Randomized Representations in Online Continual Graph Learning},
+      author={Donghi, Giovanni and Zambon, Daniele and Pasa, Luca and Alippi, Cesare and Navarin, Nicolò},
+      year={2025},
+      url={https://arxiv.org/abs/2510.06819},
+    }
+  links:
+    paper: https://arxiv.org/abs/2510.06819
+- title: "Online Continual Graph Learning"
+  venue: Preprint
+  year: 2025
+  authors:
+    - G Donghi
+    - L Pasa
+    - id:dzambon
+    - id:calippi
+    - N Navarin
+  keywords:
+    - continual learning
+    - nonstationary environments
+  abstract: "The aim of Continual Learning (CL) is to learn new tasks incrementally while avoiding catastrophic forgetting. Online Continual Learning (OCL) specifically focuses on learning efficiently from a continuous stream of data with shifting distribution. While recent studies explore Continual Learning on graphs exploiting Graph Neural Networks (GNNs), only a few of them focus on a streaming setting. Yet, many real-world graphs evolve over time, often requiring timely and online predictions. Current approaches, however, are not well aligned with the standard OCL setting, partly due to the lack of a clear definition of online Continual Learning on graphs. In this work, we propose a general formulation for online Continual Learning on graphs, emphasizing the efficiency requirements on batch processing over the graph topology, and providing a well-defined setting for systematic model evaluation. Finally, we introduce a set of benchmarks and report the performance of several methods in the CL literature, adapted to our setting."
+  bibtex: >
+    @misc{donghi2025online,
+      title={Online Continual Graph Learning},
+      author={Giovanni Donghi and Luca Pasa and Daniele Zambon and Cesare Alippi and Nicol{\`o} Navarin},
+      year={2025},
+      url={https://arxiv.org/abs/2508.03283}
+    }
+  links:
+    paper: https://arxiv.org/abs/2508.03283
 - title: "Equilibrium Policy Generalization: A Reinforcement Learning Framework for Cross-Graph Zero-Shot Generalization in Pursuit-Evasion Games"
   venue: To appear in Advances in Neural Information Processing Systems
   year: 2025
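The first abstract above describes the method concretely: a fixed, randomly initialized encoder that aggregates neighborhood information, with only a lightweight classifier trained online on the node stream. A minimal NumPy sketch of that idea, assuming mean-aggregation over two hops and softmax SGD for the classifier (all names, dimensions, and hyperparameters here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_encoder(X, A, hidden_dim=16, hops=2):
    # Fixed, randomly initialized projection: the weights are drawn once
    # and never trained, so the representation parameters cannot drift.
    W = rng.standard_normal((X.shape[1], hidden_dim)) / np.sqrt(X.shape[1])
    H = np.tanh(X @ W)
    # Mean aggregation over neighborhoods (row-normalized adjacency
    # with self-loops), repeated for a few hops.
    A_hat = A + np.eye(A.shape[0])
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)
    for _ in range(hops):
        H = A_hat @ H
    return H

class OnlineClassifier:
    # Lightweight linear classifier trained one node at a time with SGD
    # on a softmax cross-entropy loss.
    def __init__(self, dim, n_classes, lr=0.5):
        self.W = np.zeros((dim, n_classes))
        self.lr = lr

    def predict(self, h):
        return int(np.argmax(h @ self.W))

    def update(self, h, y):
        logits = h @ self.W
        p = np.exp(logits - logits.max())
        p /= p.sum()
        p[y] -= 1.0                      # d(cross-entropy)/d(logits)
        self.W -= self.lr * np.outer(h, p)

# Toy stream: 6 nodes arriving one by one on a fixed small graph.
X = rng.standard_normal((6, 8))
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
y = np.array([0, 0, 0, 1, 1, 1])

H = random_encoder(X, A)                 # embeddings are computed once and frozen
clf = OnlineClassifier(H.shape[1], n_classes=2)
for i in range(6):                       # online protocol: predict, then update
    _ = clf.predict(H[i])
    clf.update(H[i], y[i])
```

Because only `clf.W` ever changes, the source of forgetting the abstract points at (drift of the representation parameters) is removed by construction.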

0 commit comments
