Showing 10 papers for 2026-02-19
This work argues that prior results on GNN convergence on large random graphs ignore node feature correlations common in real networks. It proposes a novel random graph generator that embeds realistic node-feature correlations and uses it to study how such correlations affect GNN convergence and expressive power in large graphs. The findings indicate that feature correlations can significantly alter convergence behavior compared with i.i.d. feature models.
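The paper's generator is not public here, but the core idea of coupling node features to graph structure can be illustrated with a latent-position model: nodes that are close in latent space are both more likely to connect and receive similar features. A minimal sketch (all names and parameters are hypothetical, not the paper's construction):

```python
import math
import random

def correlated_feature_graph(n, d=2, noise=0.1, scale=1.0, seed=0):
    """Sample a random graph whose node features correlate with structure.

    Each node i gets a latent position z_i ~ N(0, I_d). Edges form with
    probability exp(-scale * ||z_i - z_j||), and each feature vector is
    z_i plus small Gaussian noise, so likely-connected (nearby) nodes
    have similar features -- unlike i.i.d. feature models.
    Illustrative sketch only, not the paper's generator.
    """
    rng = random.Random(seed)
    latent = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
    # Features = latent position + noise -> feature/structure correlation.
    feats = [[z + rng.gauss(0.0, noise) for z in zi] for zi in latent]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            dist = math.dist(latent[i], latent[j])
            if rng.random() < math.exp(-scale * dist):  # closer -> likelier edge
                edges.append((i, j))
    return feats, edges
```

Setting `noise` large recovers the (nearly) uncorrelated regime that, per the paper, prior convergence analyses implicitly assume.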
This paper models sea ice floes as graph nodes, with edges capturing physical interactions such as collisions. The authors introduce the Collision-Captured Network (CN), a GNN that integrates data assimilation to improve dynamic forecasting. Compared with traditional numerical solvers, CN offers scalable, efficient, collision-aware dynamics modeling.
The authors present an FPGA-based implementation of event-graph neural networks for neuromorphic audio processing on a SoC device. By using a cochlea-inspired front-end to convert time-series data into sparse events, the approach reduces memory and computation. The hardware design enables low-latency, energy-efficient audio classification and keyword spotting.
The paper extends Laplacian-constrained Gaussian Graphical Models to jointly leverage node signals and associated textual metadata. It provides an optimization formulation and develops an efficient majorization-minimization algorithm for joint graph and metadata estimation. Experiments show improved graph recovery and predictive performance when metadata is incorporated.
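The paper's majorization-minimization (MM) algorithm operates on Laplacian-constrained precision matrices; the MM principle itself — upper-bound the objective with a surrogate that is tight at the current iterate, minimize the surrogate, repeat — can be shown on a one-dimensional penalized problem with a known answer. A generic illustration, not the paper's algorithm:

```python
def mm_scalar_lasso(lam, x0=1.0, iters=100):
    """Majorization-minimization on f(x) = x^2/2 - x + lam*|x|.

    The non-smooth |x| is majorized at x_k > 0 by the quadratic
    x^2 / (2*x_k) + x_k / 2 (tight at x_k), so each MM step has the
    closed form x_{k+1} = 1 / (1 + lam / x_k). For lam < 1 this
    converges to the true minimizer x* = 1 - lam. One-dimensional
    illustration of the MM idea only, not the paper's estimator.
    """
    x = x0
    for _ in range(iters):
        if abs(x) < 1e-12:  # majorizer is undefined at 0; stop at the boundary
            break
        x = 1.0 / (1.0 + lam / abs(x))
    return x
```

Each surrogate minimization is cheap and the objective decreases monotonically — the same structure the paper exploits at matrix scale.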
This work proposes a fully quantum graph convolutional architecture tailored for unsupervised learning on NISQ devices. The model uses an edge-local design and qubit-efficient representations to reduce circuit depth and qubit counts, combining a variational feature extraction layer with quantum graph operations. It demonstrates practical quantum graph learning on near-term hardware while balancing resource constraints.
CardinalityGraphFormer introduces a cardinality-preserving attention (CPA) channel for graph transformers to retain dynamic signal cardinalities alongside static centrality embeddings. It blends structured sparse attention with Graphormer-inspired biases and dual-objective self-supervised pretraining (masked reconstruction and contrastive learning). Results show improved molecular property predictions, especially with limited data.
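The CPA channel itself is not specified here, but the Graphormer-inspired bias it builds on has a simple form: add a learned, distance-indexed term to the raw attention logits before the softmax, so structurally close nodes attend to each other more. A minimal sketch of that spatial-bias mechanism (function names and the fixed `bias` table are illustrative; in Graphormer the bias is learned per distance bucket):

```python
import math
from collections import deque

def spd_matrix(n, edges):
    """All-pairs shortest-path distances via BFS (unweighted, undirected)."""
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    spd = [[n] * n for _ in range(n)]  # n acts as an "unreachable" sentinel
    for s in range(n):
        spd[s][s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if spd[s][v] > spd[s][u] + 1:
                    spd[s][v] = spd[s][u] + 1
                    q.append(v)
    return spd

def biased_attention(scores, spd, bias):
    """Add a distance-indexed bias to raw attention scores, then softmax
    each row -- the Graphormer spatial encoding in miniature. bias[d]
    is clamped at the last bucket for distances beyond its length."""
    out = []
    for i, row in enumerate(scores):
        logits = [s + bias[min(spd[i][j], len(bias) - 1)]
                  for j, s in enumerate(row)]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out
```

With uniform raw scores and a bias that decays with distance, attention mass concentrates on graph-nearby nodes, which is the structural prior the transformer layers inherit.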
The paper explores using large language models to assist causal discovery within a constraint-based, argumentation-driven framework. Causal Assumption-based Argumentation (ABA) ensures correspondence between input constraints and output graphs and enables principled integration of observational data with expert knowledge. The approach offers a scalable, interpretable way to combine symbolic reasoning with observational data when constructing causal graphs.
FedGraph-AGI introduces a federated learning framework for cross-border insider threat intelligence in government financial schemes. It addresses privacy constraints, cross-jurisdiction data sharing, and the need to reason about complex multi-step attack patterns within graph-structured financial networks. The approach enables privacy-preserving collaboration to enhance threat intelligence across borders.
GDGB is proposed as a benchmark for generative modeling of dynamic text-attributed graphs (DyTAGs). It addresses the poor textual quality of existing datasets and standardizes tasks and evaluation protocols for generative DyTAG problems, providing high-quality textual attributes and a unified evaluation suite to foster progress in the area.
This study analyzes the expressive power of graph transformers (GTs), comparing real-number and floating-point regimes under soft attention and average hard attention. It shows that with real-number arithmetic, the vertex properties expressible by GTs align with those definable in first-order logic, and it discusses how these findings carry over to practical floating-point settings, shedding light on the logical definability limits of GTs on graphs.