Showing 15 papers for 2026-03-06
This paper proposes an LLM-guided query-aware inference system to accelerate GNN inference on large knowledge graphs by tailoring computations to each query's structure and semantics. It argues that existing acceleration methods like pruning, quantization, and distillation treat models monolithically and miss query-specific adaptation. The system aims to improve efficiency without sacrificing accuracy.
This work studies clean-label backdoor attacks on Graph Neural Networks, showing that triggers can be injected without relabeling training nodes, making stealthy poisoning feasible in realistic settings. It analyzes the limitations of conventional backdoor strategies and proposes mechanisms for such clean-label attacks. The results demonstrate practical viability and raise security concerns for GNN deployments.
We introduce Geometric-Aware Quantization (GAQ) for SO(3)-equivariant GNNs to compress and accelerate models without destroying rotational symmetry. Naive quantization breaks the equivariance and harms conservation laws, so GAQ preserves geometric structure while improving efficiency. The framework enables faithful physics-consistent simulations at lower cost.
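The core claim — that naive quantization breaks rotational equivariance while quantizing only rotation-invariant quantities preserves it — can be checked numerically. The sketch below is not the paper's GAQ method; it is a minimal illustration (all names hypothetical) contrasting componentwise rounding, which does not commute with rotation, against quantizing only vector norms, which does:

```python
import numpy as np

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def quantize(x, step=0.25):
    # naive uniform quantization, applied componentwise
    return np.round(x / step) * step

def geo_quant(V, step=0.25):
    # quantize only the rotation-invariant norms, keep directions exact;
    # this operation commutes with any rotation R
    n = np.linalg.norm(V, axis=1, keepdims=True)
    return V / np.clip(n, 1e-12, None) * quantize(n, step)

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 3))   # 4 vector-valued features
R = rotation_z(0.7)

# equivariance error: quantize-then-rotate vs rotate-then-quantize
err_naive = np.abs(quantize(V @ R.T) - quantize(V) @ R.T).max()
err_geo = np.abs(geo_quant(V @ R.T) - geo_quant(V) @ R.T).max()
print(err_naive, err_geo)  # naive error is large; geometric error ~ machine eps
```

The geometric variant is exactly equivariant because rotation preserves norms, so the quantized scale factor is identical before and after rotating.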
We propose TopKGraphs, a node-affinity method based on start-node-biased random walks that favor transitions into structurally similar neighborhoods, as measured by Jaccard similarity. Instead of computing full stationary distributions, the walks act as stochastic neighborhood samplers, producing partial rankings that are then robustly aggregated into a final top-k list.
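The sampling idea above can be sketched in a few lines. This is a hedged illustration, not the paper's algorithm: it biases each transition by the Jaccard similarity between a candidate neighbor's adjacency list and the start node's, counts visits instead of computing a stationary distribution, and returns a partial ranking (all function names are hypothetical):

```python
import random
from collections import Counter

def jaccard(a, b):
    a, b = set(a), set(b)
    u = len(a | b)
    return len(a & b) / u if u else 0.0

def biased_walks(adj, start, n_walks=200, length=4, seed=0):
    """Start-node-biased walks: transition weights favor neighbors whose
    neighborhoods are Jaccard-similar to the start node's neighborhood."""
    rng = random.Random(seed)
    visits = Counter()
    base = adj[start]
    for _ in range(n_walks):
        node = start
        for _ in range(length):
            nbrs = adj[node]
            if not nbrs:
                break
            w = [jaccard(base, adj[v]) + 1e-6 for v in nbrs]  # smoothed weights
            node = rng.choices(nbrs, weights=w)[0]
            if node != start:
                visits[node] += 1
    # partial ranking by visit frequency, acting as a cheap affinity estimate
    return [v for v, _ in visits.most_common()]

# toy graph: two triangles joined through node 3
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0, 4, 5], 4: [3, 5], 5: [3, 4]}
print(biased_walks(adj, 1))  # node 1's triangle partners rank highest
```

Running several independent walk batches and aggregating their rankings (e.g., by Borda count) would correspond to the robust aggregation step the summary mentions.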
This paper questions whether learnable restriction maps in sheaf Laplacians are necessary for addressing oversmoothing on heterophilous graphs. It analyzes the theoretical and empirical implications of learnable versus fixed maps in Sheaf Neural Networks, offering insights into when learnable components help.
MPBMC presents a framework for multi-property bounded model checking by clustering properties with GNN embeddings. The clustering aims to group related properties to be solved together, guided by the property cone of influence to improve verification efficiency.
We characterize the computational power of recurrent graph neural networks via recurrent arithmetic circuits, introducing memory gates to store data across iterations. This unifies recurrent GNNs with arithmetic circuit models and clarifies their expressive capabilities.
We scalably model edge-wise dependencies for link sign prediction by extending CopulaGNN, capturing latent edge correlations with a Gaussian copula. The approach directly handles negative edges in signed graphs and improves predictive performance.
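The Gaussian-copula construction for dependent binary outcomes is standard and can be sketched independently of the paper's model: each edge keeps its own marginal sign probability, while a correlation matrix over latent Gaussians induces dependence between edge signs. All quantities below (the probabilities, the correlation matrix) are illustrative placeholders:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
nd = NormalDist()

p_pos = np.array([0.8, 0.6, 0.3])      # marginal P(sign = +1) per edge
C = np.array([[1.0, 0.7, 0.2],         # latent edge-edge correlation
              [0.7, 1.0, 0.2],
              [0.2, 0.2, 1.0]])

# thresholds so each latent marginal reproduces its sign probability
thresh = np.array([nd.inv_cdf(p) for p in p_pos])

# correlated standard-normal latents via Cholesky factor of C
L = np.linalg.cholesky(C)
z = rng.standard_normal((10000, 3)) @ L.T

# sign = +1 iff the latent falls below the edge's threshold
signs = np.where(z < thresh[None, :], 1, -1)
print(signs.mean(axis=0))  # empirical E[sign] approaches 2*p_pos - 1
```

Edges 0 and 1 (latent correlation 0.7) end up with strongly correlated signs, while edges 0 and 2 (correlation 0.2) are only weakly dependent — the marginals stay intact either way, which is the copula's defining property.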
EchoGuard presents an agentic AI framework with a knowledge-graph memory to detect manipulative communication in longitudinal dialogue. It uses a structured Log-Analyze-Reflect loop to track tactics like gaslighting and emotional coercion across interactions.
AIS-TGNN couples a Temporal Graph Attention Network with an LLM reasoning module to produce port congestion predictions accompanied by faithful natural-language explanations, enabling evidence-based interpretability for operational decisions.
From Spark to Fire examines error cascades in LLM-based multi-agent collaboration, showing how small inaccuracies can propagate into system-level false consensus. It proposes protections to trace and mitigate cascading errors without heavily restructuring collaboration.
GEM-TFL bridges weak and full supervision for temporal forgery localization by EM-guided decomposition and temporal refinement, addressing label scarcity and misalignment between training objectives and inference goals.
RoboPARA introduces an LLM-driven dual-arm planning framework with a two-stage process: dependency-graph-based generation of candidate plans, followed by cross-task parallel allocation and recomposition to improve throughput.
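The dependency-graph stage of such a planner reduces to grouping tasks into levels whose members can run concurrently — here on two arms. The sketch below is a generic topological-leveling pass, not RoboPARA's actual algorithm, and the task names are invented for illustration:

```python
def parallel_levels(deps):
    """Group tasks into levels that could execute concurrently: every task in
    a level has all of its dependencies satisfied by earlier levels."""
    indeg = {t: len(ds) for t, ds in deps.items()}
    children = {t: [] for t in deps}
    for t, ds in deps.items():
        for d in ds:
            children[d].append(t)
    frontier = [t for t, d in indeg.items() if d == 0]
    levels = []
    while frontier:
        levels.append(sorted(frontier))
        nxt = []
        for t in frontier:
            for c in children[t]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    nxt.append(c)
        frontier = nxt
    return levels

# hypothetical dual-arm assembly: both grasps can happen in parallel
deps = {"grasp_A": [], "grasp_B": [],
        "assemble": ["grasp_A", "grasp_B"],
        "place": ["assemble"]}
print(parallel_levels(deps))
# [['grasp_A', 'grasp_B'], ['assemble'], ['place']]
```

A scheduler would then assign the tasks within each level to the two arms, which is where the allocation-and-recomposition stage would refine this naive leveling.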
Give Users the Wheel argues for a promptable recommendation paradigm where explicit user intent expressed via prompts guides recommendations, aiming to combine the interpretability and flexibility of LLMs with the efficiency of fast recommenders.
Core-based Hierarchies for Efficient GraphRAG proposes organizing documents in GraphRAG by core-based hierarchies rather than Leiden clustering, enabling better global sensemaking on sparse knowledge graphs.
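Core-based hierarchies rest on k-core decomposition: level k of the hierarchy contains the nodes whose core number is at least k, giving nested communities without running a clustering algorithm like Leiden. The sketch below is the textbook peeling algorithm for core numbers, offered as background rather than as the paper's construction:

```python
from collections import defaultdict

def core_numbers(adj):
    """Peeling algorithm: repeatedly remove a minimum-degree node; its degree
    at removal time (made monotone) is its core number."""
    deg = {v: len(ns) for v, ns in adj.items()}
    removed, core, k = set(), {}, 0
    while len(removed) < len(adj):
        v = min((u for u in adj if u not in removed), key=deg.get)
        k = max(k, deg[v])          # core numbers never decrease while peeling
        core[v] = k
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
    return core

# toy graph: a 4-clique (3-core) with a pendant path attached at node 3
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3],
       3: [0, 1, 2, 4], 4: [3, 5], 5: [4]}
core = core_numbers(adj)
levels = defaultdict(set)            # level k = nodes with core number >= k
for v, c in core.items():
    for i in range(1, c + 1):
        levels[i].add(v)
print(core)   # clique nodes get core 3; the pendant path gets core 1
```

The nesting (level 3 inside level 2 inside level 1) is what makes these hierarchies natural for coarse-to-fine sensemaking over a sparse knowledge graph.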