Commit 20d8bba

Merge branch 'main' of github.com:Graph-Machine-Learning-Group/graph-machine-learning-group.github.io
2 parents: b81f4ae + d8b894d

File tree: 3 files changed (+29 −3 lines changed)

_data/news.yml

Lines changed: 8 additions & 2 deletions

@@ -1,4 +1,10 @@
-- date: 2025/06
+- date: 2025/10
+  text: >
+    Two new preprints about our latest research!
+    <a href="https://arxiv.org/abs/2509.24728">Beyond Softmax: A Natural Parameterization for Categorical Random Variables</a> (Manenti and Alippi) and
+    <a href="https://arxiv.org/abs/2507.23604">Hierarchical Message-Passing Policies for Multi-Agent Reinforcement Learning</a> (Marzi et al.).
+    Check them out!
+- date: 2025/09
   text: 'Our papers <a href="https://arxiv.org/abs/2506.15507">Over-squashing in Spatiotemporal Graph Neural Networks</a> (Marisca et al.) and <a href="#">Equilibrium Policy Generalization: A Reinforcement Learning Framework for Cross-Graph Zero-Shot Generalization in Pursuit-Evasion Games</a> (Lu et al.) have been accepted at <strong><a href="https://neurips.cc">NeurIPS 2025</a></strong>!'
 - date: 2025/06
   text: >
@@ -9,7 +15,7 @@
 - date: 2025/06
   text: >
     In collaboration with <strong>MeteoSwiss</strong>, we have released <a href="https://arxiv.org/abs/2506.13652"><strong>PeakWeather</strong></a> - a high-resolution benchmark <strong>dataset</strong> for spatiotemporal weather modeling from ground measurements. Check it out on <a href="https://huggingface.co/datasets/MeteoSwiss/PeakWeather">Hugging Face</a>!
-- date: 2025/06
+- date: 2025/05
   text: 'Our paper <a href="https://doi.org/10.1145/3742784">Graph Deep Learning for Time Series Forecasting (Cini et al.)</a> has been accepted to <strong><a href="https://dl.acm.org/journal/csur">ACM Computing Surveys</a></strong>!'
 - date: 2025/05
   text: 'Our papers <a href="https://arxiv.org/abs/2405.19933">Learning Latent Graph Structures and their Uncertainty (Manenti et al.)</a> and <a href="http://arxiv.org/abs/2502.09443">Relational Conformal Prediction for Correlated Time Series (Cini et al.)</a> have been accepted at <strong><a href="https://icml.cc/Conferences/2025">ICML 2025</a></strong>!'

_data/people.yml

Lines changed: 1 addition & 1 deletion

@@ -68,7 +68,7 @@
   surname: Manenti
   group: team
   role: Ph.D. Student
-  description: He studies spatiotemporal data processing using graph latent spaces.
+  description: He studies methods for learning latent variables more accurately and efficiently.
   links:
     website: https://allemanenti.github.io/
     github: allemanenti

_data/publications.yaml

Lines changed: 20 additions & 0 deletions

@@ -21,6 +21,25 @@
     }
   links:
     paper: https://arxiv.org/abs/2510.06819
+- title: "Beyond Softmax: A Natural Parameterization for Categorical Random Variables"
+  venue: Preprint
+  year: 2025
+  authors:
+    - id: amanenti
+    - id: calippi
+  keywords:
+    - probabilistic modeling
+    - gradient-based optimization
+    - graph structure learning
+    - latent random variables
+  abstract: 'Latent categorical variables are frequently found in deep learning architectures. They can model actions in discrete reinforcement-learning environments, represent categories in latent-variable models, or express relations in graph neural networks. Despite their widespread use, their discrete nature poses significant challenges to gradient-descent learning algorithms. While a substantial body of work has offered improved gradient estimation techniques, we take a complementary approach. Specifically, we: 1) revisit the ubiquitous softmax function and demonstrate its limitations from an information-geometric perspective; 2) replace the softmax with the catnat function, a function composed of a sequence of hierarchical binary splits; we prove that this choice offers significant advantages to gradient descent due to the resulting diagonal Fisher Information Matrix. A rich set of experiments - including graph structure learning, variational autoencoders, and reinforcement learning - empirically shows that the proposed function improves the learning efficiency and yields models characterized by consistently higher test performance. Catnat is simple to implement and seamlessly integrates into existing codebases. Moreover, it remains compatible with standard training stabilization techniques and, as such, offers a better alternative to the softmax function.'
+  bibtex: >
+    @article{manenti2025beyond,
+      title={Beyond Softmax: A Natural Parameterization for Categorical Random Variables},
+      author={Alessandro Manenti and Cesare Alippi},
+      journal={arXiv preprint arXiv:2509.24728},
+      year={2025}
+    }
 - title: "Online Continual Graph Learning"
   venue: Preprint
   year: 2025
@@ -182,6 +201,7 @@
     - graph structure learning
     - graph neural networks
     - model calibration
+    - probabilistic modeling
   abstract: Within a prediction task, Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy. As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task. In this paper, we demonstrate that minimization of a point-prediction loss function, e.g., the mean absolute error, does not guarantee proper learning of the latent relational information and its associated uncertainty. Conversely, we prove that a suitable loss function on the stochastic model outputs simultaneously grants (i) the unknown adjacency matrix latent distribution and (ii) optimal performance on the prediction task. Finally, we propose a sampling-based method that solves this joint learning task. Empirical results validate our theoretical claims and demonstrate the effectiveness of the proposed approach.
   bibtex: >
     @inproceedings{manenti2025learning,
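The "Beyond Softmax" abstract above describes replacing the softmax with a function built from a sequence of hierarchical binary splits. A minimal sketch of that idea, assuming a simple chain of sigmoid-parameterized splits (the paper's actual catnat construction and its tree layout may differ):

```python
# Hedged sketch: parameterize a K-way categorical distribution with K-1
# logits via a chain of binary splits, instead of a K-logit softmax.
# Function names and the split ordering here are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_split_probs(logits, k):
    """Map k-1 real logits to k category probabilities: at step i, a
    sigmoid splits the remaining mass between category i and the rest."""
    probs = np.empty(k)
    remaining = 1.0
    for i in range(k - 1):
        s = sigmoid(logits[i])       # P(pick category i | not picked earlier)
        probs[i] = remaining * s
        remaining *= (1.0 - s)
    probs[-1] = remaining            # leftover mass goes to the last category
    return probs

p = binary_split_probs(np.array([0.0, 0.0, 0.0]), 4)
assert abs(p.sum() - 1.0) < 1e-12   # a valid categorical distribution
```

With all-zero logits each split is 50/50, so the masses halve along the chain (0.5, 0.25, 0.125, 0.125); each probability depends only on its own and earlier logits, which is the kind of decoupling the abstract connects to a diagonal Fisher Information Matrix.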

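The "Learning Latent Graph Structures" abstract above argues for training on a loss over stochastic model outputs, optimized with a sampling-based method. A toy sketch of one such scheme, assuming Bernoulli edge variables and a REINFORCE-style score-function update (the model, loss, and estimator are stand-ins, not the paper's algorithm):

```python
# Hedged sketch: learn Bernoulli parameters of a latent adjacency matrix
# by scoring *sampled* graphs with a loss, rather than a point prediction.
# The target graph and the plain REINFORCE update are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n = 4
theta = np.zeros((n, n))                 # logits of edge probabilities

def edge_probs(theta):
    return 1.0 / (1.0 + np.exp(-theta))

def sample_adjacency(theta):
    return (rng.random((n, n)) < edge_probs(theta)).astype(float)

def loss(adj):
    # stand-in stochastic loss: prefer graphs close to a known target
    target = np.eye(n)
    return np.abs(adj - target).sum()

for step in range(300):
    a = sample_adjacency(theta)          # sample a discrete graph
    p = edge_probs(theta)
    grad_logp = a - p                    # d log P(a)/d theta for Bernoulli
    theta -= 0.1 * loss(a) * grad_logp   # score-function (REINFORCE) step
```

After training, the diagonal logits drift above the off-diagonal ones, i.e. the learned edge distribution concentrates on the target graph; in practice one would add a baseline to reduce the estimator's variance.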