Showing 22 papers for 2026-02-12
Graphs pose challenges for LLMs due to their explicit structure and permutation invariance. We introduce a human-interpretable structural encoding strategy that translates graph structure into text in a way LLMs can leverage for reasoning. This graph-to-text encoding improves LLM-based reasoning on graph problems and is validated on representative tasks.
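A minimal sketch of the general idea, verbalizing a graph's adjacency structure as plain text for an LLM prompt. The function name and wording are illustrative, not the paper's actual encoding:

```python
def graph_to_text(num_nodes, edges):
    """Verbalize an undirected graph as plain text an LLM can read."""
    adj = {i: [] for i in range(num_nodes)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    lines = [f"The graph has {num_nodes} nodes, numbered 0 to {num_nodes - 1}."]
    for node, neighbors in adj.items():
        if neighbors:
            nbrs = ", ".join(str(n) for n in sorted(neighbors))
            lines.append(f"Node {node} is connected to: {nbrs}.")
        else:
            lines.append(f"Node {node} has no connections.")
    return "\n".join(lines)
```

The output can be prepended to a task description ("Is there a path from node 0 to node 2?") so the model reasons over an explicit textual description rather than a raw edge list.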
We study sparsification as regularization for Graph Neural Networks to reduce memory and computation in large-scale graphs. Using Erdős–Rényi based sparsification, we demonstrate improved efficiency while preserving performance. The method is tested on N-1 contingency assessment in electrical grids and additional datasets.
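The core operation, Erdős–Rényi style edge sampling, can be sketched as follows (an illustrative simplification, not the paper's exact scheme, which may use degree- or importance-weighted probabilities):

```python
import random

def er_sparsify(edges, keep_prob, seed=0):
    """Keep each edge independently with probability keep_prob,
    mirroring the Erdős–Rényi model's independent edge sampling.
    Acts as a stochastic regularizer while shrinking the graph."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() < keep_prob]
```

Training a GNN on a freshly sparsified graph each epoch reduces memory and message-passing cost roughly in proportion to `keep_prob`.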
RiemannGL argues that Riemannian geometry provides a principled foundation for graph representation learning, unifying disparate techniques under a geometric view. It treats graphs as objects in non-Euclidean spaces and develops learning methods grounded in differential geometry. The framework aims to unify graph deep learning rather than cataloging isolated techniques.
MoToRec tackles cold-start in multimodal recommender systems by transforming multimodal content into discrete semantic tokens. The sparse-regularized tokenization reduces noise and entanglement in sparse data, yielding more robust item representations. Consequently, cold-start recommendation performance improves.
Knowledge Graph Completion often suffers from structural resolution mismatch across graph densities. SynergyKGC introduces an adaptive framework that fuses pre-trained entity semantics with heterogeneous topology to enable robust relational reasoning. It reduces structural noise in dense clusters and avoids representation collapse in sparse regions, improving KGC.
We explore leveraging large language models to accumulate knowledge for designing Graph Neural Networks, addressing knowledge gaps and input noise that hinder automated design. The framework aggregates knowledge in LLMs to guide GNN architecture choices, improving automation reliability.
MOTGNN integrates multi-omics data for disease classification with an interpretable Graph Neural Network. It tackles high dimensionality, modality heterogeneity, and lack of reliable interaction networks by integrating modalities without handcrafted graphs. The model provides interpretable insights across modalities.
We propose Information-preserving Graph Neural Simulators (IP-GNS) based on Hamiltonian dynamics to improve long-range interactions in graph-based physical system simulators. The approach reduces error accumulation in autoregressive rollouts and better captures distant interactions. Experiments show improved accuracy and stability.
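The stability benefit of Hamiltonian-based rollouts comes from symplectic integration, which (approximately) conserves energy over long horizons. A minimal sketch with a classic leapfrog integrator, assuming a separable Hamiltonian H(q, p) = p²/(2m) + V(q); the paper's learned simulator is far richer, this only illustrates the integration principle:

```python
def leapfrog(q, p, grad_V, dt, steps, mass=1.0):
    """Symplectic leapfrog integration of Hamiltonian dynamics.
    Energy error stays bounded over long rollouts instead of
    accumulating, which is the property IP-GNS exploits."""
    for _ in range(steps):
        p -= 0.5 * dt * grad_V(q)  # half kick (momentum update)
        q += dt * p / mass         # drift (position update)
        p -= 0.5 * dt * grad_V(q)  # half kick
    return q, p
```

For a harmonic oscillator (V(q) = q²/2) the total energy after thousands of steps stays within O(dt²) of its initial value, in contrast to explicit Euler, whose energy drifts unboundedly.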
This work introduces EDT-Former, an entropy-guided dynamic token transformer for molecular graphs that generates tokens on the fly instead of fixed-length static tokens. By leveraging token entropy to adapt token granularity, it better preserves stereochemistry and substructural context while reducing the need for heavy LLM fine-tuning. The result is improved efficiency and alignment between molecular graphs and LLM reasoning.
We present Localized Graph-Based Neural Dynamics (GBND) for terrain manipulation, modeling terrain deformation as the motion of a particle graph. The approach yields a compact, scalable dynamics model that supports predictive control for robot manipulation on varied terrains. It enables planning in construction and extraterrestrial exploration scenarios.
IBG proposes a low-rank factorization of large directed graphs built from intersecting bipartite components, enabling efficient learning. By down-weighting non-edges and representing the graph as a collection of dense IBG blocks, the method approximates large graphs efficiently in both sparse and dense regimes.
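One way to read "dense bipartite blocks" is as rank-one source-set × destination-set components whose weighted sum approximates the adjacency matrix. A hedged sketch of that interpretation (names and structure are my illustration, not the paper's API):

```python
import numpy as np

def ibg_approx(n, blocks):
    """Approximate an n x n directed adjacency matrix as a weighted
    sum of dense bipartite blocks. Each block is (src_set, dst_set, w):
    every node in src_set links to every node in dst_set with weight w."""
    A = np.zeros((n, n))
    for src, dst, w in blocks:
        s = np.zeros(n)
        s[list(src)] = 1.0
        d = np.zeros(n)
        d[list(dst)] = 1.0
        A += w * np.outer(s, d)  # rank-one bipartite component
    return A
```

Storing k blocks costs O(k·n) instead of O(n²), which is what makes learning on the factorized form tractable for large graphs.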
MalMoE addresses encrypted malicious traffic detection under graph drift, where traffic statistics and topology change over time. It uses a mixture-of-experts model on graph-based features to adapt to drift and maintain detection performance. Experiments show robustness under both encryption and drift.
KORAL presents Knowledge Graph Guided LLM Reasoning for SSD Operational Analysis, integrating fragmented and time-disjoint data sources through a security knowledge graph. The KG backbone supports structured, explainable reasoning about SSD performance across varying workloads and environmental conditions. This enables more reliable diagnostics.
VulReaD uses a security knowledge graph to perform CWE-aligned vulnerability reasoning and detection beyond binary judgments. It leverages structured relationships in the KG to support CWE-level explanations and detection. This approach aims for semantic consistency and actionable insights.
GraphSeek proposes a planning-based abstraction for next-generation graph analytics with LLMs, replacing direct NL-to-graph queries with planning over a Semantic Catalog. This enables scalable, multi-query analytics on industry-scale property graphs. It improves accuracy and efficiency of LLM-assisted graph analytics.
We define controllable hypothesis generation for abductive reasoning in knowledge graphs to produce plausible, relevant hypotheses while avoiding redundancy. We introduce mechanisms to constrain the generation process, improving practicality and usefulness in domains like clinical diagnostics and scientific discovery.
We unify deductive and abductive reasoning in knowledge graphs with a masked diffusion model, enabling joint reasoning over graphs. The approach blends deduction and abduction within a single framework to enhance inference capabilities. This leads to more robust and coherent reasoning outcomes.
We recast Structured Sentiment Analysis as transition-based dependency graph parsing, applying transition systems to extract opinions and their relations as a graph. This yields improvements in accuracy and efficiency over previous SSA approaches.
JAG introduces Joint Attribute Graphs for filtered nearest neighbor search to generalize across different filter types and query selectivities. By modeling joint attributes, it achieves robust performance in diverse filtering scenarios, outperforming specialized baselines.
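As a baseline for what filtered nearest-neighbor search computes, here is an exact brute-force version (purely illustrative; JAG's contribution is a graph index that avoids scanning all items):

```python
def filtered_knn(query, items, predicate, k):
    """Exact filtered k-NN: keep items whose attributes pass the
    filter, then rank survivors by squared Euclidean distance."""
    survivors = [(i, vec) for i, (vec, attrs) in enumerate(items)
                 if predicate(attrs)]
    survivors.sort(key=lambda iv: sum((a - b) ** 2
                                      for a, b in zip(query, iv[1])))
    return [i for i, _ in survivors[:k]]
```

Pre-filtering like this is robust at low selectivity but scans everything; graph indexes like JAG aim to match its accuracy while navigating only a small neighborhood of the query.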
We propose Boundary-Aware Multi-Behavior Dynamic Graph Transformer for sequential recommendation, which models both dynamic graph topology and multi-behavior sequences. The boundary-aware design helps capture transition points and varied interaction patterns, improving recommendation accuracy.
The authors focus on generative reranking in the final stage of recommendation, treating the ranking as a holistic sequence generation problem that can capture dependencies among items. They identify the 'likelihood trap', where sequences with high likelihood tend to be repetitive and perceived as low quality by users. They propose a graph-structured model to enable consistent, diverse, and engaging generation, aiming to break the trap and improve final exposure and user experience.
This paper benchmarks the use of Large Language Models for validating factual claims in Knowledge Graphs, aiming to assess their ability to reason about and access factual knowledge. Through systematic evaluation, the authors examine accuracy, reliability, prompt design, and practical viability of LLMs for KG fact validation compared with automated baselines. They discuss limitations and provide recommendations for deploying LLMs in real-world KG validation tasks.