Showing 11 papers for 2026-03-02
Rudder introduces a software module embedded in the distributed GNN training loop that steers data prefetching via LLM agents. It addresses irregular communication, dynamic data changes, and caching policies, enabling adaptive data access across graph distributions, samples, and batches. By guiding prefetch decisions with learned policies, it reduces stalls and improves forward progress.
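The policy-steered prefetching idea can be sketched as a cache whose warm-up is driven by a pluggable prediction function. Everything below (class name, interfaces) is invented for illustration and is not Rudder's implementation:

```python
from collections import OrderedDict

class PrefetchCache:
    """Toy LRU cache whose prefetching is steered by a pluggable policy."""

    def __init__(self, capacity, loader, policy):
        self.capacity = capacity
        self.loader = loader    # fetches a graph partition by id (simulates remote I/O)
        self.policy = policy    # predicts which partition ids a worker needs next
        self.cache = OrderedDict()
        self.hits = 0
        self.stalls = 0

    def _put(self, key):
        if key not in self.cache:
            self.cache[key] = self.loader(key)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)
        else:
            self.stalls += 1  # the training step blocks on a synchronous fetch
            self._put(key)
        value = self.cache[key]
        for nxt in self.policy(key):  # warm the cache with predicted future keys
            self._put(nxt)
        return value
```

With a policy that correctly predicts sequential access (`lambda k: [k + 1]`), only the first access stalls; every later batch is already cached, which is the "reduced stalls" effect the summary describes.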
Flowette presents a continuous flow matching framework for graph generation. It uses a graph neural network–based transformer to learn a velocity field over graph representations with node and edge attributes. It preserves topology through optimal-transport–based coupling and enforces long-range dependencies via regularisation; graphettes provide domain-driven priors to improve generation quality.
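The flow-matching core is standard and easy to sketch: along the straight (optimal-transport) path between a source sample x0 and a data sample x1, the regression target for the velocity field is constant. This is a generic conditional flow matching sketch on plain vectors, not Flowette's graph-structured model:

```python
def cfm_pair(x0, x1, t):
    """Conditional flow matching with the straight (optimal-transport) path:
    returns the interpolant x_t and the target velocity u_t the model regresses."""
    xt = [(1 - t) * a + t * b for a, b in zip(x0, x1)]
    ut = [b - a for a, b in zip(x0, x1)]  # constant velocity along the straight path
    return xt, ut

def euler_step(x, v, dt):
    """One Euler step of the learned ODE dx/dt = v(x, t), used at generation time."""
    return [xi + vi * dt for xi, vi in zip(x, v)]
```

At sampling time the learned velocity field replaces the ground-truth `ut`, and integrating from t = 0 to t = 1 transports noise to a generated graph representation.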
Normalisation and Initialisation Strategies for Graph Neural Networks in Blockchain Anomaly Detection systematically ablates initialisation and normalisation strategies across GCN, GAT, and GraphSAGE on the Elliptic Bitcoin dataset. It shows that these training choices critically affect AML performance and stability, and offers practical recommendations.
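Two of the standard choices such an ablation would cover are Glorot/Xavier weight initialisation and GCN-style symmetric adjacency normalisation; both are sketched below in plain Python (generic techniques, not the paper's specific code):

```python
import math
import random

def glorot_uniform(fan_in, fan_out, seed=0):
    """Glorot/Xavier uniform initialisation: W_ij ~ U(-a, a), a = sqrt(6/(fan_in+fan_out)).
    Keeps activation variance roughly constant across layers."""
    rng = random.Random(seed)
    a = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-a, a) for _ in range(fan_out)] for _ in range(fan_in)]

def gcn_normalise(adj):
    """GCN-style symmetric normalisation of the adjacency: D^{-1/2} (A + I) D^{-1/2}."""
    n = len(adj)
    a = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    return [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]
```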
SME-HGT proposes a heterogeneous Graph Transformer framework to predict which SBIR Phase I awardees will advance to Phase II using publicly available data. It builds a heterogeneous graph with company, topic, and government agency nodes connected by about 99k edges. The method demonstrates effective identification of high-potential SMEs.
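A heterogeneous graph of this shape can be represented minimally as typed node sets plus edge lists keyed by (source type, relation, target type); the relation names below are invented for illustration and need not match the paper's schema:

```python
from collections import defaultdict

class HeteroGraph:
    """Minimal typed graph: nodes carry a type; edges carry a
    (source type, relation, target type) key, as heterogeneous Graph Transformers expect."""

    def __init__(self):
        self.nodes = defaultdict(set)    # node type -> node ids
        self.edges = defaultdict(list)   # (src_type, relation, dst_type) -> (src, dst) pairs

    def add_edge(self, src_type, relation, dst_type, src, dst):
        self.nodes[src_type].add(src)
        self.nodes[dst_type].add(dst)
        self.edges[(src_type, relation, dst_type)].append((src, dst))
```

Each distinct edge key gets its own attention parameters in an HGT-style model, which is why the typed key matters.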
MMKG-RDS introduces a flexible framework for reasoning data synthesis using multimodal knowledge graphs. It supports fine-grained knowledge extraction, customizable path sampling, and other features to generate synthetic data for reasoning tasks. The framework aims to improve long-tail coverage, verifiability, and interpretability of synthetic data.
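The customizable path-sampling step can be sketched as a random walk over a triple store; this is a generic uniform sampler, not MMKG-RDS's sampler, and the example triples are invented:

```python
import random
from collections import defaultdict

def sample_paths(triples, start, hops, n_paths, seed=0):
    """Uniform random-walk path sampling over a triple store; returns
    alternating node/relation paths of exactly `hops` hops."""
    adj = defaultdict(list)
    for head, rel, tail in triples:
        adj[head].append((rel, tail))
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        node, path = start, [start]
        for _ in range(hops):
            if not adj[node]:
                break  # dead end: walks shorter than requested are discarded
            rel, node = rng.choice(adj[node])
            path.extend([rel, node])
        if len(path) == 2 * hops + 1:
            paths.append(tuple(path))
    return paths
```

Sampled paths of this form can then be verbalised into multi-hop reasoning questions, which is where the verifiability claim comes from: each synthetic example is grounded in an explicit KG path.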
Democratizing GraphRAG presents SPRIG, a CPU-only, linear-time, token-free GraphRAG pipeline that replaces expensive LLM-driven graph construction with lightweight NER-driven co-occurrence graphs. It uses Personalized PageRank for retrieval and reports roughly a 28% recall improvement while Recall@10 is largely unchanged, illustrating when CPU-friendly retrieval helps multi-hop QA.
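Personalized PageRank itself is cheap and CPU-friendly; a minimal power-iteration version over an unweighted adjacency matrix (a textbook sketch, not SPRIG's implementation) looks like this:

```python
def personalized_pagerank(adj, seeds, alpha=0.15, iters=100):
    """Power iteration for personalised PageRank on an unweighted graph:
    p <- alpha * s + (1 - alpha) * P^T p, with s the seed (query) distribution."""
    n = len(adj)
    s = [1.0 / len(seeds) if i in seeds else 0.0 for i in range(n)]
    p = s[:]
    for _ in range(iters):
        nxt = [alpha * si for si in s]
        for i in range(n):
            deg = sum(adj[i])
            if deg == 0:
                continue  # dangling node: its mass is not redistributed here
            share = (1 - alpha) * p[i] / deg
            for j in range(n):
                if adj[i][j]:
                    nxt[j] += share
        p = nxt
    return p
```

For retrieval, the seed set would be the entities recognised in the query, and the highest-scoring nodes identify the passages to return.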
Hierarchical Multi-Scale Graph Learning with Knowledge-Guided Attention for Whole-Slide Image Survival Analysis introduces HMKGN, a knowledge-aware graph network that models multi-scale interactions and spatial locality in WSIs for cancer prognostication. Hierarchical structure enforces local spatial constraints so that regional cellular graphs aggregate patches within a region of interest, improving survival prediction.
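The hierarchical aggregation pattern (patches pooled within a region, regions pooled into a slide representation) can be sketched with simple mean pooling; HMKGN's actual attention-based aggregation is more elaborate than this:

```python
from collections import defaultdict

def _mean(vectors):
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def hierarchical_pool(patch_feats, patch_region):
    """Two-level pooling: patch features are averaged within their region of
    interest, then region vectors are averaged into one slide-level vector."""
    groups = defaultdict(list)
    for feat, region in zip(patch_feats, patch_region):
        groups[region].append(feat)
    region_feats = {r: _mean(v) for r, v in groups.items()}
    slide_feat = _mean(list(region_feats.values()))
    return region_feats, slide_feat
```

The point of the hierarchy is that patches only interact within their region at the first level, which is the "local spatial constraint" the summary mentions.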
SAGE-LLM proposes a training-free two-layer decision architecture for UAVs that combines high-level safety planning with low-level precise control using LLMs. It integrates fuzzy-CBF verification and graph-structured knowledge retrieval to deliver safer, more generalizable decisions without extensive task-specific training.
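A plain (non-fuzzy) discrete-time control barrier function check conveys the verification idea: a proposed control is accepted only if it keeps the safety margin from shrinking too fast. The dynamics and barrier below are invented toy examples, not the paper's UAV model:

```python
def cbf_satisfied(h, x, u, step, gamma=0.5):
    """Discrete-time CBF condition: h(x') >= (1 - gamma) * h(x),
    so the safety margin h decays at most geometrically toward zero."""
    return h(step(x, u)) >= (1 - gamma) * h(x)

def safety_filter(h, x, u_nominal, u_fallback, step, gamma=0.5):
    """Keep the planner's control if it satisfies the CBF condition;
    otherwise substitute a conservative fallback control."""
    return u_nominal if cbf_satisfied(h, x, u_nominal, step, gamma) else u_fallback
```

In SAGE-LLM's setting, the LLM planner supplies the nominal decision and the verification layer plays the role of this filter.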
Language Models as Messengers argues that in heterophilic graphs, messaging can be enhanced by semantic information from node text. It proposes using language models to generate richer semantic messages to accompany standard neighbor embeddings, improving propagation and overall performance on heterophilic graphs without sacrificing homophily performance.
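One way to picture the proposal is a message-passing step where each neighbour's message blends its structural embedding with an LM-derived semantic vector; the blending scheme and weight `beta` here are a hypothetical sketch, not the paper's exact formulation:

```python
def propagate(struct_feats, semantic_feats, adj, beta=0.5):
    """One mean-aggregation message-passing step in which each neighbour's
    message mixes its structural embedding with an LM-derived semantic vector."""
    out = []
    for i, row in enumerate(adj):
        nbrs = [j for j, e in enumerate(row) if e]
        if not nbrs:
            out.append(struct_feats[i][:])
            continue
        dim = len(struct_feats[i])
        msg = [0.0] * dim
        for j in nbrs:
            for d in range(dim):
                msg[d] += (1 - beta) * struct_feats[j][d] + beta * semantic_feats[j][d]
        out.append([m / len(nbrs) for m in msg])
    return out
```

Under heterophily, neighbours with dissimilar structural embeddings can still carry useful text, which is why injecting the semantic component into the message helps.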
LEC-KG presents an LLM-embedding collaborative framework for domain-specific knowledge graph construction, demonstrated via SDGs. It combines hierarchical relation extraction with evidence grounding and KG embeddings, enabling bidirectional collaboration between LLMs and embeddings to improve graph quality.
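On the embedding side of such a collaboration, a standard scorer like TransE can rank LLM-extracted triples by plausibility; TransE is used below purely as a familiar example, not as LEC-KG's specific model:

```python
def transe_score(head, relation, tail):
    """TransE plausibility score: -||h + r - t||_2; higher means more plausible.
    Embeddings can flag extracted triples whose score is anomalously low."""
    return -sum((h + r - t) ** 2 for h, r, t in zip(head, relation, tail)) ** 0.5
```

The bidirectional collaboration then amounts to the LLM proposing candidate triples with textual evidence while the embedding model scores them, and low-scoring triples are sent back for re-verification.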
PersonalAI provides a systematic comparison of knowledge-graph storage and retrieval approaches for personalized LLM agents. It proposes an external memory framework that automatically constructs and updates memory with LLMs and evaluates trade-offs across storage and retrieval architectures for scalability and memory fidelity.
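The simplest storage architecture in such a comparison is a key-value triple memory with update-on-write; the class and example facts below are invented for illustration:

```python
class TripleMemory:
    """Toy external memory keyed on (subject, relation); writes overwrite
    older facts so the memory tracks the user's latest state."""

    def __init__(self):
        self.store = {}

    def write(self, subject, relation, obj):
        self.store[(subject, relation)] = obj

    def read(self, subject, relation):
        return self.store.get((subject, relation))
```

Richer architectures in the comparison would keep multiple objects per key with timestamps or a full graph; the trade-off PersonalAI evaluates is exactly between this kind of cheap overwrite semantics and higher-fidelity history retention.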