Showing 17 papers for 2026-02-16
Coden proposes efficient temporal graph neural networks for continuous prediction, addressing the need for frequent predictions over time rather than a single forecast over a fixed temporal window. It analyzes the computational bottlenecks of adapting TGNNs to continuous inference on large graphs and presents design choices to balance runtime with prediction quality.
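A common way to make continuous-time inference tractable, and one plausible reading of the bottleneck Coden targets, is to cache node embeddings and refresh only the nodes touched by each new event instead of recomputing the whole graph. A minimal stdlib sketch (all names here are illustrative, not the paper's API):

```python
# Toy sketch: event-driven embedding refresh instead of full recomputation.
# init_cache/apply_event are hypothetical names for illustration only.

def init_cache(num_nodes, dim=4):
    """Start every node with a zero embedding."""
    return {v: [0.0] * dim for v in range(num_nodes)}

def apply_event(cache, src, dst, feat, decay=0.9):
    """Update only the two endpoints of a new temporal edge via an
    exponential moving average, leaving all other cached embeddings
    untouched. Returns the set of nodes whose embeddings changed."""
    for node in (src, dst):
        old = cache[node]
        cache[node] = [decay * o + (1 - decay) * f for o, f in zip(old, feat)]
    return {src, dst}

cache = init_cache(5)
touched = apply_event(cache, 0, 3, [1.0, 1.0, 0.0, 0.0])
```

The point of the sketch is the asymmetry: per-event work is O(affected nodes), not O(graph), which is what makes frequent prediction feasible at scale.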
The paper investigates which discrete algorithms graph neural networks can learn, within the area of neural algorithmic reasoning. It examines the capabilities and limitations of message-passing GNNs (MP-GNNs) for algorithm execution, discusses the lack of formal guarantees in many empirical studies, and aims to characterize the classes of algorithms that are learnable.
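For context, the computation that MP-GNNs execute, and whose algorithmic expressiveness such work studies, is a synchronous round of neighbor aggregation followed by a node update. A generic scalar sketch (not this paper's construction), with addition standing in for the learned update:

```python
def message_passing_step(adj, feats):
    """One synchronous message-passing round: each node sums its
    neighbors' features (aggregation), then combines the result with
    its own state (update). Here the update is simply own + aggregate."""
    new_feats = {}
    for v, nbrs in adj.items():
        agg = sum(feats[u] for u in nbrs)
        new_feats[v] = feats[v] + agg
    return new_feats

# A path graph 0-1-2: after one round, node 1 has absorbed both endpoints.
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {0: 1, 1: 0, 2: 1}
out = message_passing_step(adj, feats)
```

Iterating this step is what lets MP-GNNs emulate round-based distributed algorithms, which is precisely the class whose boundaries this line of work tries to pin down.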
FlashSchNet introduces an IO-aware framework for GNN-based molecular dynamics (GNN-MD) that optimizes data movement between GPU high-bandwidth memory and on-chip SRAM to accelerate molecular dynamics simulations. By accounting for these IO bottlenecks, it achieves faster and more accurate SchNet-style potentials.
The work explores using graph neural networks to approximate the Uniform Facility Location problem, aiming to provide fast heuristics with performance guarantees inspired by classical approximation algorithms. It discusses training and architectural choices to achieve competitive performance without relying on heavy supervised data or reinforcement learning.
VDW-GNNs introduce vector diffusion wavelets for geometric GNNs, enabling wavelet-based processing on data lying in tangent bundles. The framework is evaluated on synthetic point clouds and real-world data such as wind fields and neural activity, with theoretical insights into the method.
The paper asks what temporal graph learning models actually learn, examining the reliability of benchmarks and the risk that simple heuristics rival state-of-the-art models. It analyzes which graph properties and dynamics these models leverage to make predictions.
ATLAS is a propagation-free framework for learning on graphs that handles both homophilic and heterophilic settings by encoding topology via multi-resolution community features rather than message passing. The paper proves a fundamental trade-off in community refinement between granularity and performance and demonstrates scalability.
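The core idea, encoding topology through community membership rather than propagation, can be illustrated with a toy two-resolution encoding. A hedged sketch in which the community assignments and helper names are made up, not ATLAS's actual construction:

```python
def community_features(coarse, fine):
    """Encode each node by concatenating one-hot indicators of its
    community at a coarse and a fine resolution. No message passing:
    the feature depends only on precomputed community assignments."""
    n_coarse = max(coarse.values()) + 1
    n_fine = max(fine.values()) + 1
    feats = {}
    for v in coarse:
        vec = [0] * (n_coarse + n_fine)
        vec[coarse[v]] = 1
        vec[n_coarse + fine[v]] = 1
        feats[v] = vec
    return feats

coarse = {0: 0, 1: 0, 2: 1}   # 2 coarse communities
fine = {0: 0, 1: 1, 2: 2}     # 3 fine communities
feats = community_features(coarse, fine)
```

The granularity/performance trade-off the paper proves lives in the choice of how fine the finest resolution gets: finer partitions carry more positional information but shrink each community's statistical support.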
Bayesian Neighborhood Adaptation for GNNs proposes a Bayesian approach to adaptively select neighborhood scope for aggregation, avoiding exhaustive search over fixed hops. It aims to improve robustness across homophilic and heterophilic graphs.
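One standard way to select a discrete design choice like hop depth Bayesianly, offered here only as a generic illustration and not as the paper's algorithm, is Thompson sampling over per-depth success counts instead of exhaustive grid search:

```python
import random

def sample_hop(successes, failures, rng):
    """Thompson sampling over candidate hop depths: draw one Beta
    sample per depth from its success/failure counts (with a +1
    uniform prior) and pick the argmax."""
    draws = {h: rng.betavariate(successes[h] + 1, failures[h] + 1)
             for h in successes}
    return max(draws, key=draws.get)

rng = random.Random(0)
successes = {1: 50, 2: 5, 3: 1}   # hop 1 has worked far more often
failures = {1: 5, 2: 50, 3: 50}
picks = [sample_hop(successes, failures, rng) for _ in range(200)]
```

The appeal over fixed-hop search is that uncertain depths still get occasional exploration while the posterior concentrates on whichever neighborhood scope the graph (homophilic or heterophilic) actually rewards.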
SaVe-TAG uses LLM-based semantic interpolation for long-tailed text-attributed graphs via semantic-aware vicinal risk minimization, going beyond embedding arithmetic. It preserves rich textual semantics and improves generalization across head and tail classes.
Bayesian Ego-graph Inference for Networked-MARL introduces a stochastic ego-graph policy to capture local graph uncertainty and enable dynamic neighborhood adaptation in decentralized multi-agent reinforcement learning. It improves robustness and coordination under changing network conditions.
TA-KAND presents a two-stage, attention-based triple-enhancement and diffusion method built on U-KAN for few-shot knowledge graph (KG) completion, addressing long-tailed relation distributions. It improves how few-shot relations are inferred by refining relation triples and diffusing information.
Entity State Tuning (EST) is an encoder-agnostic framework for temporal knowledge graph forecasting that endows forecasters with persistent and evolving entity states, addressing episodic amnesia and long-term dependency decay. It integrates structure and sequence for better predictions.
Intent-Driven Smart Manufacturing integrates instruction-tuned LLMs with ontology-aligned knowledge graphs to translate human intents into machine-executable requirements. The paper demonstrates fine-tuning Mistral-7B-Instruct-v0.2 to generate structured JSON requirement models from natural language.
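Any pipeline that turns LLM output into machine-executable JSON needs a validation gate between the model and the machine. A minimal sketch of that step, where the schema and field names are hypothetical, not those used in the paper:

```python
import json

# Hypothetical required top-level fields of a requirement model.
REQUIRED_FIELDS = {"intent", "machine", "parameters"}

def parse_requirement(llm_output):
    """Parse an LLM response into a requirement dict, rejecting output
    that is not valid JSON or is missing required fields. Returns the
    parsed dict, or None if the output should be retried."""
    try:
        model = json.loads(llm_output)
    except json.JSONDecodeError:
        return None
    if not isinstance(model, dict) or not REQUIRED_FIELDS <= model.keys():
        return None
    return model

good = '{"intent": "drill", "machine": "CNC-1", "parameters": {"depth_mm": 5}}'
bad = '{"intent": "drill"}'
```

Returning None rather than raising lets the caller loop back to the LLM with a repair prompt, a common pattern when structured generation occasionally drifts from the schema.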
WebClipper compresses web agent trajectories by graph-based pruning, removing unproductive branches in the agent's state graph to speed up search while preserving task outcomes.
Beyond Static Question Banks proposes dynamic knowledge expansion via LLM-automated graph construction and adaptive generation for personalized education. It addresses cost and scalability by automating KG creation and state-aware reasoning rather than relying on static question banks.
Semantic Communities and Boundary-Spanning Lyrics in K-pop presents a graph-based unsupervised analysis of K-pop lyrics to discover semantic communities across multilingual and repetitive content. The framework constructs line-level semantic representations and detects topic-level communities.
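The pipeline shape, line-level representations, a similarity graph, then community detection, can be sketched crudely with the stdlib. Here token-overlap (Jaccard) similarity stands in for the paper's semantic representations, and connected components stand in for a proper community algorithm; the threshold and helpers are illustrative:

```python
from itertools import combinations

def jaccard(a, b):
    """Token-overlap similarity between two lyric lines."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def line_communities(lines, threshold=0.3):
    """Link lines whose similarity clears the threshold, then return
    connected components (via union-find) as crude semantic communities."""
    parent = list(range(len(lines)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i, j in combinations(range(len(lines)), 2):
        if jaccard(lines[i], lines[j]) >= threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(lines)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

lines = ["love you forever", "forever love you baby",
         "dance all night", "dance tonight all night"]
comms = line_communities(lines)
```

For multilingual, highly repetitive lyrics the interesting work is exactly what this sketch elides: a representation under which a Korean line and its English refrain land in the same community.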
RLMiner frames the problem of finding the most frequent induced subgraph of size k as a reinforcement learning task, aiming to sidestep the cost of NP-hard subgraph counting. It proposes an RL-based search strategy that reduces enumeration time and scales to larger graphs.
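The RL framing implied by the summary can be sketched as an MDP whose state is a partial vertex set and whose actions add a frontier vertex. A toy rollout with a random policy and a degree-sum reward proxy, both stand-ins (the paper's policy is learned and its reward is frequency-based):

```python
import random

def frontier(adj, state):
    """Action space: vertices adjacent to the current partial subgraph."""
    nbrs = set()
    for v in state:
        nbrs |= adj[v]
    return nbrs - state

def rollout(adj, k, reward_fn, rng):
    """Grow a connected size-k induced subgraph one vertex at a time,
    choosing actions uniformly at random (a stand-in for a learned
    policy), and score the final state."""
    state = {rng.choice(sorted(adj))}
    while len(state) < k:
        actions = sorted(frontier(adj, state))
        if not actions:
            break
        state.add(rng.choice(actions))
    return state, reward_fn(state)

# Toy graph; degree-sum reward is illustrative, not the paper's.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
rng = random.Random(1)
state, r = rollout(adj, 3, lambda s: sum(len(adj[v]) for v in s), rng)
```

The payoff of this framing is that a trained policy can steer growth toward promising regions, so the expensive frequency evaluation is invoked on far fewer candidates than exhaustive size-k enumeration would require.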