
Daily arXiv Papers

Graph Neural Networks · Graph Learning · LLM × Graph

Showing 22 papers for 2026-03-19

Federated Multi Agent Deep Learning and Neural Networks for Advanced Distributed Sensing in Wireless Networks
GNN Graph Learning

This paper examines a federated multi-agent deep learning framework for distributed sensing in wireless networks. It surveys how multi-agent deep learning (MADL), including multi-agent deep reinforcement learning (MADRL), distributed training, and graph neural networks can unify decision-making for sensing, communication, and computing in 5G-Advanced and 6G scenarios with edge intelligence and open RAN. It highlights decentralized, partially observed, time-varying settings and outlines key challenges and future directions.

PowerModelsGAT-AI: Physics-Informed Graph Attention for Multi-System Power Flow with Continual Learning
GNN Graph Learning

We introduce PowerModelsGAT-AI, a physics-informed graph attention model that predicts bus voltages and generator injections for multi-system power flow in real time. The model uses bus-type-aware masking to handle diverse bus types and balances multiple loss terms to integrate physical constraints with data. The approach improves generalization across different power systems and supports continual learning.
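The paper's exact masking scheme is not given here; as an illustration, bus-type-aware masking in power flow typically means supervising only the quantities that are unknown a priori for each bus type (both voltage magnitude and angle at PQ buses, angle only at PV buses, nothing at the slack bus). A minimal sketch under those standard assumptions, with hypothetical type codes and loss weights:

```python
import numpy as np

# Hypothetical bus-type codes: 0 = slack, 1 = PV, 2 = PQ.
# Columns of pred/target: [voltage magnitude, voltage angle].
def masked_power_flow_loss(pred, target, bus_type):
    """Supervise only quantities unknown a priori for each bus type."""
    mask = np.zeros_like(pred)
    mask[bus_type == 2] = [1.0, 1.0]  # PQ: both |V| and angle unknown
    mask[bus_type == 1] = [0.0, 1.0]  # PV: angle unknown, |V| given
    # slack rows stay zero: |V| and angle are both given
    return np.sum(mask * (pred - target) ** 2) / max(mask.sum(), 1.0)

pred = np.array([[1.00, 0.00], [1.02, 0.10], [0.98, 0.20]])
target = np.array([[1.00, 0.00], [1.02, 0.05], [0.97, 0.15]])
bus_type = np.array([0, 1, 2])
loss = masked_power_flow_loss(pred, target, bus_type)
```

The slack bus contributes nothing to the loss even if the model predicts it poorly, which is the point of the mask: errors on known boundary conditions should not distort training.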

EEG-SeeGraph: Interpreting functional connectivity disruptions in dementias via sparse-explanatory dynamic EEG-graph learning
GNN Graph Learning

We propose SeeGraph, a sparse-explanatory dynamic EEG graph network that models time-evolving functional connectivity to diagnose dementias. It uses a dual-trajectory temporal encoder and a node-guided sparse edge mask to identify the connections driving each decision while staying robust to noise and cross-site variability. Together these yield robust, interpretable dementia predictions.

Per-Domain Generalizing Policies: On Learning Efficient and Robust Q-Value Functions (Extended Version with Technical Appendix)
GNN Graph Learning

The paper argues for learning Q-value functions rather than state-value functions for per-domain generalizing policies. Q-based policies are cheaper to evaluate, since only the current state must be processed rather than every successor state. It discusses why vanilla supervised learning of Q-values underperforms and proposes improvements.
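The evaluation-cost argument can be made concrete with a toy contrast (an illustration, not the paper's implementation): a greedy policy from a Q-function scores actions at the current state only, while one from a state-value function V must construct and evaluate every successor state via a transition model.

```python
def q_greedy_action(q, state, actions):
    # One Q evaluation per action, all at the current state.
    return max(actions, key=lambda a: q(state, a))

def v_greedy_action(v, successor, state, actions):
    # One successor-state construction plus one V evaluation per action.
    return max(actions, key=lambda a: v(successor(state, a)))

# Toy domain (an assumption): walk along a line toward position 10.
actions = [-1, +1]
q = lambda s, a: -abs((s + a) - 10)  # toy Q-function
v = lambda s: -abs(s - 10)           # toy value function
successor = lambda s, a: s + a

assert q_greedy_action(q, 3, actions) == +1
assert v_greedy_action(v, successor, 3, actions) == +1
```

In planning domains where successor states are expensive graphs to build and encode, skipping the `successor` call is exactly the saving the paper points to.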

Gaussian Process Limit Reveals Structural Benefits of Graph Transformers
GNN Graph Learning

The authors study the Gaussian process limits of graph transformers such as GAT, Graphormer, and Specformer. They show that attention mechanisms offer structural benefits over vanilla graph convolution on node-level prediction tasks. The results provide theoretical justification for the empirical success of graph transformers.

HighAir: A Hierarchical Graph Neural Network-Based Air Quality Forecasting Method
GNN Graph Learning

HighAir proposes a hierarchical graph neural network for air quality forecasting. It models diffusion processes of pollutants across cities and monitoring stations to capture cross-location interactions, outperforming non-hierarchical baselines.

Hi-GMAE: Hierarchical Graph Masked Autoencoders
GNN Graph Learning

Hi-GMAE extends graph masked autoencoders to hierarchical graphs. It targets hierarchical structure such as atoms, functional groups, and molecules, enabling multi-scale self-supervised learning on graphs.

MSGCN: Multiplex Spatial Graph Convolution Network for Interlayer Link Weight Prediction
GNN Graph Learning

We introduce MSGCN, a multiplex spatial graph convolution network for predicting interlayer link weights in multilayer networks. The model leverages cross-layer information to predict link weights between different layers.

Exact Generalisation Error Exposes Benchmarks Skew Graph Neural Networks Success (or Failure)
GNN Graph Learning

The paper analyzes exact generalization error for graph neural networks to explain why benchmark results can be skewed across models. It provides theoretical insights into factors that influence generalization in GNNs and highlights implications for fair evaluation.

Learning Time-Varying Graphs from Incomplete Graph Signals
Graph Learning

We propose a unified nonconvex optimization framework to jointly infer time-varying graph Laplacians and reconstruct missing signals from incomplete observations. The method enables bidirectional information exchange between graph structure and signal values and demonstrates robustness under high missing-data rates.
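The bidirectional exchange is naturally realized by alternating minimization. The paper's objective and solvers are not reproduced here; the following sketch only illustrates the alternating pattern, with a similarity-based graph step and a neighbor-averaging imputation step standing in for the actual updates:

```python
import numpy as np

def alternating_inference(X_obs, mask, n_iters=20, smooth=0.5):
    """X_obs: (nodes, time) with zeros at missing entries; mask: 1 = observed."""
    X = X_obs.copy()
    n = X.shape[0]
    for _ in range(n_iters):
        # (1) Graph step: weight edges by similarity of current signals.
        D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
        W = np.exp(-D2 / (D2.mean() + 1e-12))
        np.fill_diagonal(W, 0.0)
        # (2) Signal step: pull missing entries toward the graph-weighted
        # average of their neighbours (one Jacobi-style smoothing sweep).
        X_smooth = (W @ X) / (W.sum(axis=1, keepdims=True) + 1e-12)
        X = mask * X_obs + (1 - mask) * ((1 - smooth) * X + smooth * X_smooth)
    return W, X

# Three nodes with identical signals; one entry of node 2 is missing.
X_true = np.tile(np.array([1.0, 2.0, 3.0]), (3, 1))
mask = np.ones_like(X_true)
mask[2, 1] = 0.0
X_obs = X_true * mask
W, X = alternating_inference(X_obs, mask)
```

Each sweep uses the current graph to refine the signals and the current signals to refine the graph, which is the exchange the summary describes; the missing entry converges to the neighbors' consensus value.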

Neural-Symbolic Logic Query Answering in Non-Euclidean Space
Knowledge Graph

HYQNET is a neural-symbolic model for answering first-order logic (FOL) queries on knowledge graphs in hyperbolic space. It decomposes FOL queries into relation paths and exploits hyperbolic geometry to capture hierarchical query structure, combining interpretability with neural generalization.
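Why hyperbolic space for hierarchical structure: the standard Poincaré-ball distance (which such models build on; this is a generic sketch, not HYQNET's actual scoring function) is nearly Euclidean at the origin but blows up near the boundary, which is what lets tree-like hierarchies embed with low distortion.

```python
import numpy as np

def poincare_distance(u, v):
    """Distance on the Poincaré ball: arccosh(1 + 2|u-v|^2 / ((1-|u|^2)(1-|v|^2)))."""
    nu, nv = np.dot(u, u), np.dot(v, v)
    diff = u - v
    arg = 1 + 2 * np.dot(diff, diff) / ((1 - nu) * (1 - nv))
    return np.arccosh(arg)

# Same Euclidean gap, very different hyperbolic distances:
a = poincare_distance(np.array([0.0, 0.0]), np.array([0.1, 0.0]))   # near origin
b = poincare_distance(np.array([0.9, 0.0]), np.array([0.99, 0.0]))  # near boundary
```

Placing broad, general entities near the origin and fine-grained ones near the boundary gives the exponentially growing "room" a hierarchy needs.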

Knowledge Graph Extraction from Biomedical Literature for Alkaptonuria Rare Disease
Knowledge Graph

The work builds knowledge graphs from biomedical literature to support research on alkaptonuria, an ultra-rare metabolic disorder. It addresses data scarcity by extracting and connecting entities and relations from the limited available literature, enabling integrated reasoning and discovery.

A federated learning framework with knowledge graph and temporal transformer for early sepsis prediction in multi-center ICUs
Knowledge Graph Graph Learning

The framework combines federated learning with a medical knowledge graph and a temporal transformer, enhanced by meta-learning, to predict sepsis early across multiple ICUs while preserving privacy. It aims to improve accuracy by leveraging structured clinical knowledge and time-dependent data.

Tackling Over-smoothing on Hypergraphs: A Ricci Flow-guided Neural Diffusion Approach
GNN Graph Learning

The paper introduces a Ricci flow-guided neural diffusion method for hypergraphs to mitigate over-smoothing as network depth increases. It shows that discrete Ricci flow can regulate feature evolution and improve learning on higher-order relationships.
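To see how discrete curvature can regulate diffusion at all: this toy sketch uses ordinary graphs (not hypergraphs) and Forman curvature, which for an unweighted edge (u, v) is simply 4 - deg(u) - deg(v), to modulate edge weights before one explicit diffusion step. The modulation rule and step size are assumptions for illustration, not the paper's method.

```python
import numpy as np

def curvature_modulated_step(A, X, tau=0.1):
    """One diffusion step with edge weights modulated by Forman curvature."""
    deg = A.sum(axis=1)
    # Forman curvature per edge; negative values mark "bottleneck" edges.
    kappa = np.where(A > 0, 4 - deg[:, None] - deg[None, :], 0.0)
    # Down-weight strongly negative-curvature edges to slow over-smoothing.
    W = A * np.exp(tau * kappa)
    L = np.diag(W.sum(axis=1)) - W   # Laplacian of the modulated graph
    return X - tau * (L @ X)         # explicit Euler diffusion step

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path
X = np.array([[1.0], [0.0], [2.0]])
X_new = curvature_modulated_step(A, X)
```

Because the modulated Laplacian is symmetric with zero row sums, the step conserves total feature mass while slowing mixing across negatively curved edges, which is the mechanism behind curvature-guided anti-over-smoothing.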

Controllable Graph Generation with Diffusion Models via Inference-Time Tree Search Guidance
Graph Learning

We propose a controllable graph generation method using diffusion models with inference-time tree search guidance. The tree search steers generation toward desired properties at sampling time, without retraining, improving stability and controllability.
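The general shape of inference-time tree search over a step-wise generator can be sketched as follows. Everything here is a stand-in: the diffusion model is replaced by a random edge proposer and the property scorer is a toy, so this illustrates only the search skeleton, not the paper's guidance method.

```python
import random

def guided_generation(propose, score, n_steps=5, beam=4, branch=3, seed=0):
    """Beam-style tree search: branch each partial graph, keep the best."""
    rng = random.Random(seed)
    states = [frozenset()]  # start from an empty edge set
    for _ in range(n_steps):
        candidates = []
        for s in states:
            for _ in range(branch):
                candidates.append(propose(s, rng))
        # Keep the `beam` partial graphs best aligned with the target property.
        states = sorted(set(candidates), key=score, reverse=True)[:beam]
    return max(states, key=score)

n_nodes = 5
def propose(edges, rng):
    u, v = rng.sample(range(n_nodes), 2)
    return edges | {frozenset((u, v))}

score = len  # toy target property: maximize edge count

best = guided_generation(propose, score)
```

The key point the summary makes survives the toy: the generator is never retrained; steering comes entirely from scoring and pruning branches at inference time.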

AGRAG: Advanced Graph-based Retrieval-Augmented Generation for LLMs
Graph Learning

AGRAG aims to improve graph-based retrieval-augmented generation for LLMs by addressing graph construction accuracy, explicit reasoning for chunk selection, and complete answer generation.

SentGraph: Hierarchical Sentence Graph for Multi-hop Retrieval-Augmented Question Answering
Graph Learning

SentGraph constructs a sentence-level graph to support multi-hop RAG. It organizes evidence across documents to produce coherent evidence chains and improve reasoning in multi-hop QA.

Enhanced Atrial Fibrillation Prediction in ESUS Patients with Hypergraph-based Pre-training
Graph Learning

The study uses supervised and unsupervised hypergraph pre-training to improve AF prediction in ESUS patients, tackling small cohorts and high-dimensional features. It demonstrates improved predictive performance over baselines.

Open Biomedical Knowledge Graphs at Scale: Construction, Federation, and AI Agent Access with Samyama Graph Database
Knowledge Graph

The paper presents three open biomedical knowledge graphs built at scale on the Samyama graph database, including a Pathways KG, a Clinical Trials KG, and a Drug Interactions KG, with construction and federation strategies and support for AI-agent access.

A spatio-temporal graph-based model for team sports analysis
Graph Learning

The work models team sports as a spatio-temporal graph and analyzes attacking play as a directed path with spatial, temporal, and semantic ball carrier information. It introduces a generic graph-based framework to study tactical decisions under external constraints.

Graph-Native Cognitive Memory for AI Agents: Formal Belief Revision Semantics for Versioned Memory Architectures
Graph Learning

The paper proposes Kumiho, a graph-native cognitive memory system for AI agents, grounded in formal belief revision semantics. It defines structural primitives (immutable revisions, mutable tag pointers, typed dependency edges, URI-based addressing) that allow memory and agent-produced work to be managed as versioned assets within a unified graph architecture. This enables coherent memory management, provenance, and revision control for autonomous agents.

The Reasoning Bottleneck in Graph-RAG: Structured Prompting and Context Compression for Multi-Hop QA
Graph Learning

The paper analyzes the reasoning bottleneck in Graph-RAG for multi-hop QA, showing that although retrieved contexts often contain the gold answer, overall answer accuracy remains low due to reasoning failures. It introduces two augmentations: SPARQL chain-of-thought prompting, which decomposes complex questions into sequences of SPARQL steps, and context compression, which prunes extraneous information while preserving reasoning-relevant content. Empirical evaluation on HotpotQA, MuSiQue, and 2WikiMultiHopQA demonstrates the potential gains from structured prompting and context-aware processing.
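The idea behind context compression can be illustrated with a deliberately crude filter. This is a toy heuristic of my own (keep only sentences sharing a capitalized entity with the question), not the paper's compression technique, but it shows the shape of the operation: shrink the context while retaining reasoning-relevant sentences.

```python
import re

def compress(question, sentences):
    """Keep sentences that share at least one capitalized token with the question."""
    q_entities = set(re.findall(r"[A-Z][a-z]+", question))
    return [s for s in sentences
            if q_entities & set(re.findall(r"[A-Z][a-z]+", s))]

sentences = [
    "Paris is the capital of France.",
    "The Nile is a river in Africa.",
    "France borders Spain.",
]
kept = compress("What country is Paris in?", sentences)
```

Note the toy's failure mode: "France borders Spain." is dropped even though a second hop might need it, which is exactly why real compression must preserve reasoning-relevant content across hops, not just surface entity overlap.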