Daily arXiv Papers

Graph Neural Networks · Graph Learning · LLM × Graph
Conferences
Archives

Showing 22 papers for 2026-02-12

Colorful Talks with Graphs: Human-Interpretable Graph Encodings for Large Language Models
LLM × Graph

Graphs pose challenges for LLMs due to their explicit structure and permutation invariance. We introduce a human-interpretable structural encoding strategy that translates graph structure into text in a way LLMs can leverage for reasoning. This graph-to-text encoding improves LLM-based reasoning on graph problems, as validated on representative tasks.
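As an illustration of the general idea, here is a minimal sketch of one plausible human-interpretable graph-to-text encoding (an adjacency description in plain sentences); the paper's actual encoding scheme may differ.

```python
# Sketch of a graph-to-text encoding for LLM prompts: describe each
# node's neighborhood in plain sentences. Illustrative only; the
# paper's actual encoding strategy may differ.

def encode_graph_as_text(nodes, edges):
    """Render an undirected graph as readable sentences for an LLM prompt."""
    adjacency = {n: [] for n in nodes}
    for u, v in edges:
        adjacency[u].append(v)
        adjacency[v].append(u)
    lines = [f"The graph has {len(nodes)} nodes and {len(edges)} edges."]
    for node in nodes:
        neighbors = sorted(adjacency[node])
        if neighbors:
            lines.append(f"Node {node} is connected to {', '.join(map(str, neighbors))}.")
        else:
            lines.append(f"Node {node} has no connections.")
    return "\n".join(lines)

prompt = encode_graph_as_text(["A", "B", "C"], [("A", "B"), ("B", "C")])
```

A sentence-per-node rendering like this is order-insensitive in content, which is one way a textual encoding can cope with permutation invariance.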

Exploring the impact of adaptive rewiring in Graph Neural Networks
GNN

We study sparsification as regularization for Graph Neural Networks to reduce memory and computation on large-scale graphs. Using Erdős–Rényi-based sparsification, we demonstrate improved efficiency while preserving predictive performance. The method is evaluated on N-1 contingency assessment in electrical grids and additional datasets.
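The core primitive can be sketched in a few lines: Erdős–Rényi-style sparsification keeps each edge independently with some probability p. This is illustrative only; the paper's regularizer may be applied per layer or adaptively.

```python
import random

# Sketch of Erdős–Rényi-style edge sparsification: each edge survives
# independently with probability p. Illustrative, not the paper's
# exact procedure.

def er_sparsify(edges, p, seed=0):
    """Return a sparsified edge list; each edge is kept with probability p."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() < p]

# Complete graph on 10 nodes: 45 edges before sparsification.
edges = [(i, j) for i in range(10) for j in range(i + 1, 10)]
sparse = er_sparsify(edges, p=0.3)
```

Dropping edges uniformly at random shrinks the message-passing neighborhood of every node in expectation, which is where the memory and compute savings come from.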

RiemannGL: Riemannian Geometry Changes Graph Deep Learning
Graph Learning

RiemannGL argues that Riemannian geometry provides a principled foundation for graph representation learning. Treating graphs as objects in non-Euclidean spaces, it develops learning methods grounded in differential geometry, unifying graph deep learning under a single geometric view rather than cataloging isolated techniques.

MoToRec: Sparse-Regularized Multimodal Tokenization for Cold-Start Recommendation
GNN

MoToRec tackles cold-start in multimodal recommender systems by transforming multimodal content into discrete semantic tokens. The sparse-regularized tokenization reduces noise and entanglement in sparse data, yielding more robust item representations. Consequently, cold-start recommendation performance improves.

SynergyKGC: Reconciling Topological Heterogeneity in Knowledge Graph Completion via Topology-Aware Synergy
Knowledge Graph

Knowledge Graph Completion often suffers from structural resolution mismatch across graph densities. SynergyKGC introduces an adaptive framework that fuses pre-trained entity semantics with heterogeneous topology to enable robust relational reasoning. It reduces structural noise in dense clusters and avoids representation collapse in sparse regions, improving KGC.

Proficient Graph Neural Network Design by Accumulating Knowledge on Large Language Models
GNN LLM × Graph

We explore leveraging large language models to accumulate design knowledge for Graph Neural Networks, addressing the knowledge gaps and input noise that hinder automated design. The framework aggregates this knowledge within LLMs to guide GNN architecture choices, making automated design more reliable.

MOTGNN: Interpretable Graph Neural Networks for Multi-Omics Disease Classification
GNN

MOTGNN integrates multi-omics data for disease classification with an interpretable Graph Neural Network. It tackles high dimensionality, modality heterogeneity, and lack of reliable interaction networks by integrating modalities without handcrafted graphs. The model provides interpretable insights across modalities.

Improving Long-Range Interactions in Graph Neural Simulators via Hamiltonian Dynamics
GNN

We propose Information-preserving Graph Neural Simulators (IP-GNS) based on Hamiltonian dynamics to improve long-range interactions in graph-based physical system simulators. The approach reduces error accumulation in autoregressive rollouts and better captures distant interactions. Experiments show improved accuracy and stability.

Entropy-Guided Dynamic Tokens for Graph-LLM Alignment in Molecular Understanding
LLM × Graph

This work introduces EDT-Former, an entropy-guided dynamic token transformer for molecular graphs that generates tokens on the fly instead of fixed-length static tokens. By leveraging token entropy to adapt token granularity, it better preserves stereochemistry and substructural context while reducing the need for heavy LLM fine-tuning. The result is improved efficiency and alignment between molecular graphs and LLM reasoning.
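The entropy criterion behind the dynamic tokenization can be sketched simply: fragments of a molecular graph with higher label entropy receive a finer token budget. EDT-Former's actual tokenizer is learned end to end; the function names and thresholds below are illustrative assumptions.

```python
import math
from collections import Counter

# Sketch of an entropy-guided token budget: structurally diverse
# (high-entropy) molecular fragments get more tokens. Illustrative
# only; EDT-Former's tokenizer is learned, not rule-based.

def shannon_entropy(labels):
    """Shannon entropy (bits) of a fragment's atom-label distribution."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def token_budget(labels, low=1, high=4, threshold=1.0):
    """Assign more tokens to high-entropy fragments, fewer to uniform ones."""
    return high if shannon_entropy(labels) > threshold else low

uniform_ring = ["C"] * 6              # benzene-like fragment: entropy 0
mixed_group = ["C", "N", "O", "Cl"]   # heteroatom-rich fragment: entropy 2
```

Under this criterion a uniform carbon ring collapses into a single token, while a heteroatom-rich substructure keeps fine-grained tokens, preserving the stereochemical and substructural detail the abstract highlights.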

Localized Graph-Based Neural Dynamics Models for Terrain Manipulation
GNN

We present Localized Graph-Based Neural Dynamics (GBND) for terrain manipulation, modeling terrain deformation as the motion of a particle graph. The approach yields a compact, scalable dynamics model that supports predictive control for robot manipulation on varied terrains. It enables planning in construction and extraterrestrial exploration scenarios.

Efficient Learning on Large Graphs using a Densifying Regularity Lemma
Graph Learning Graph Theory

This work proposes IBG, a low-rank factorization of large directed graphs based on intersecting bipartite components, enabling efficient learning. By down-weighting non-edges and representing the graph with dense IBG blocks, large graphs can be approximated efficiently in both sparse and dense regimes.
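The intuition of a union-of-bipartite-blocks representation can be sketched as follows; the paper's actual construction and learning procedure are more sophisticated, and the block format here is an assumption for illustration.

```python
# Sketch of a directed graph represented as a union of dense bipartite
# blocks (S_k -> T_k), the intuition behind an IBG-style factorization.
# Illustrative only; not the paper's construction.

def ibg_has_edge(blocks, u, v):
    """Edge (u, v) exists if some block sends u's source side to v's target side."""
    return any(u in src and v in dst for src, dst in blocks)

# Two intersecting bipartite blocks (they share node 1 on the source
# side and node 3 on the target side).
blocks = [
    ({0, 1}, {2, 3}),   # nodes 0 and 1 each point to 2 and 3
    ({1, 4}, {3, 5}),   # nodes 1 and 4 each point to 3 and 5
]
```

Storing a handful of block memberships instead of every edge is what makes the representation compact: a block with |S| sources and |T| targets encodes |S|·|T| edges in |S|+|T| entries.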

MalMoE: Mixture-of-Experts Enhanced Encrypted Malicious Traffic Detection Under Graph Drift
GNN

MalMoE addresses encrypted malicious traffic detection under graph drift, where traffic statistics and topology change over time. It uses a mixture-of-experts model on graph-based features to adapt to drift and maintain detection performance. Experiments show robustness against encryption and drift.

KORAL: Knowledge Graph Guided LLM Reasoning for SSD Operational Analysis
Knowledge Graph LLM × Graph

KORAL presents Knowledge Graph Guided LLM Reasoning for SSD Operational Analysis, integrating fragmented and time-disjoint data sources through a security knowledge graph. The KG backbone supports structured, explainable reasoning about SSD performance across varying workloads and environmental conditions. This enables more reliable diagnostics.

VulReaD: Knowledge-Graph-guided Software Vulnerability Reasoning and Detection
Knowledge Graph

VulReaD uses a security knowledge graph to perform CWE-aligned vulnerability reasoning and detection beyond binary judgments. It leverages structured relationships in the KG to support CWE-level explanations and detection. This approach aims for semantic consistency and actionable insights.

GraphSeek: Next-Generation Graph Analytics with LLMs
LLM × Graph Graph Learning

GraphSeek proposes a planning-based abstraction for next-generation graph analytics with LLMs, replacing direct natural-language-to-graph-query translation with planning over a Semantic Catalog. This enables scalable, multi-query analytics on industry-scale property graphs and improves the accuracy and efficiency of LLM-assisted graph analytics.

Controllable Logical Hypothesis Generation for Abductive Reasoning in Knowledge Graphs
Knowledge Graph

We define controllable hypothesis generation for abductive reasoning in knowledge graphs to produce plausible, relevant hypotheses while avoiding redundancy. We introduce mechanisms to constrain the generation process, improving practicality and usefulness in domains like clinical diagnostics and scientific discovery.

Unifying Deductive and Abductive Reasoning in Knowledge Graphs with Masked Diffusion Model
Knowledge Graph

We unify deductive and abductive reasoning over knowledge graphs with a masked diffusion model, performing both modes of inference within a single framework. This joint treatment yields more robust and coherent reasoning outcomes.

Structured Sentiment Analysis as Transition-based Dependency Graph Parsing
Graph Theory

We recast Structured Sentiment Analysis (SSA) as transition-based dependency graph parsing, applying transition systems to extract opinions and their relations as a graph. This formulation yields gains in both accuracy and efficiency over prior SSA approaches.

JAG: Joint Attribute Graphs for Filtered Nearest Neighbor Search
Graph Learning

JAG introduces Joint Attribute Graphs for filtered nearest neighbor search to generalize across different filter types and query selectivities. By modeling joint attributes, it achieves robust performance in diverse filtering scenarios, outperforming specialized baselines.
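To make the problem setting concrete, here is a brute-force baseline for filtered nearest-neighbor search; an actual joint-attribute graph index like JAG exists precisely to avoid this full scan, so the code only illustrates the task, not the method.

```python
# Brute-force filtered nearest-neighbor baseline: scan all points,
# keep only those passing the attribute filter, return the closest.
# Illustrates the problem JAG targets, not the JAG index itself.

def filtered_nn(query, points, predicate):
    """Return the closest point (Euclidean) whose attributes pass the filter."""
    best, best_dist = None, float("inf")
    for vec, attrs in points:
        if not predicate(attrs):
            continue  # fails the attribute filter, skip
        dist = sum((a - b) ** 2 for a, b in zip(query, vec)) ** 0.5
        if dist < best_dist:
            best, best_dist = (vec, attrs), dist
    return best

items = [
    ((0.0, 0.0), {"color": "red"}),
    ((1.0, 1.0), {"color": "blue"}),
    ((5.0, 5.0), {"color": "blue"}),
]
hit = filtered_nn((0.0, 0.0), items, lambda a: a["color"] == "blue")
```

Query selectivity is what makes this hard to index: a very permissive filter behaves like plain nearest-neighbor search, while a very restrictive one leaves few candidates, and specialized indexes tend to optimize for only one of these regimes.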

Boundary-Aware Multi-Behavior Dynamic Graph Transformer for Sequential Recommendation
GNN Graph Learning

We propose Boundary-Aware Multi-Behavior Dynamic Graph Transformer for sequential recommendation, which models both dynamic graph topology and multi-behavior sequences. The boundary-aware design helps capture transition points and varied interaction patterns, improving recommendation accuracy.

Breaking the Likelihood Trap: Consistent Generative Recommendation with Graph-structured Model
Generative Rec Graph Learning

The authors focus on generative reranking in the final stage of recommendation, treating the ranking as a holistic sequence generation problem that can capture dependencies among items. They identify the 'likelihood trap', where sequences with high likelihood tend to be repetitive and perceived as low quality by users. They propose a graph-structured model to enable consistent, diverse, and engaging generation, aiming to break the trap and improve final exposure and user experience.

Benchmarking Large Language Models for Knowledge Graph Validation
Knowledge Graph LLM × Graph

This paper benchmarks the use of Large Language Models for validating factual claims in Knowledge Graphs, aiming to assess their ability to reason about and access factual knowledge. Through systematic evaluation, the authors examine accuracy, reliability, prompt design, and practical viability of LLMs for KG fact validation compared with automated baselines. They discuss limitations and provide recommendations for deploying LLMs in real-world KG validation tasks.