<?xml version='1.0' encoding='UTF-8'?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" version="2.0"><channel><title>hx-nous</title><link>https://hx-nous.github.io</link><description>Notes from my graduate studies</description><copyright>hx-nous</copyright><docs>http://www.rssboard.org/rss-specification</docs><generator>python-feedgen</generator><image><url>https://avatars.githubusercontent.com/u/119385115?v=4</url><title>avatar</title><link>https://hx-nous.github.io</link></image><lastBuildDate>Mon, 16 Mar 2026 07:27:07 +0000</lastBuildDate><managingEditor>hx-nous</managingEditor><ttl>60</ttl><webMaster>hx-nous</webMaster><item><title>Dynamic Bundling with Large Language Models for Zero-Shot Inference on Text-Attributed Graphs</title><link>https://hx-nous.github.io/post/Dynamic%20Bundling%20with%20Large%20Language%20Models%20for%20Zero-Shot%20Inference%20on%20Text-Attributed%20Graphs.html</link><description># Dynamic Bundling with Large Language Models for Zero-Shot Inference on Text-Attributed Graphs

- **Venue**: NeurIPS 2025 Poster
- **Year**: 2025
- **Paper**: https://openreview.net/forum?id=1nSynwHvu2
- **PDF**: https://openreview.net/pdf?id=1nSynwHvu2
- **Code**: the paper page links a GitHub repository, `bundle-neurips25`

## One-sentence summary

This paper studies **zero-shot node classification on Text-Attributed Graphs (TAGs)** and proposes **DENSE**: first group a set of similar nodes into a bundle and have an LLM predict the dominant category of the bundled texts, then use the bundle-level labels to supervise a GNN while dynamically pruning ill-fitting nodes during training; DENSE outperforms 15 baselines across all 10 datasets.</description><guid isPermaLink="true">https://hx-nous.github.io/post/Dynamic%20Bundling%20with%20Large%20Language%20Models%20for%20Zero-Shot%20Inference%20on%20Text-Attributed%20Graphs.html</guid><pubDate>Mon, 16 Mar 2026 07:26:36 +0000</pubDate></item><item><title>MODALITY-FREE GRAPH IN-CONTEXT ALIGNMENT</title><link>https://hx-nous.github.io/post/MODALITY-FREE%20GRAPH%20IN-CONTEXT%20ALIGNMENT.html</link><description># Overview
This paper proposes **MF-GIA (Modality-Free Graph In-context Alignment)**: a **modality-free** in-context learning framework on graphs for **Graph Foundation Models (GFMs)**.</description><guid isPermaLink="true">https://hx-nous.github.io/post/MODALITY-FREE%20GRAPH%20IN-CONTEXT%20ALIGNMENT.html</guid><pubDate>Wed, 11 Mar 2026 02:49:01 +0000</pubDate></item><item><title>MLDGG: Meta-Learning for Domain Generalization on Graphs</title><link>https://hx-nous.github.io/post/MLDGG-%20Meta-Learning%20for%20Domain%20Generalization%20on%20Graphs.html</link><description>## Overview (revised)

This paper proposes **MLDGG**: a method for **node-level Domain Generalization (DG) on graphs**.</description><guid isPermaLink="true">https://hx-nous.github.io/post/MLDGG-%20Meta-Learning%20for%20Domain%20Generalization%20on%20Graphs.html</guid><pubDate>Wed, 14 Jan 2026 03:51:34 +0000</pubDate></item><item><title>Graph-R1: Incentivizing the Zero-Shot Graph Learning Capability in LLMs via Explicit Reasoning</title><link>https://hx-nous.github.io/post/Graph-R1-%20Incentivizing%20the%20Zero-Shot%20Graph%20Learning%20Capability%20in%20LLMs%20via%20Explicit%20Reasoning.html</link><description>Problem the paper addresses:
The mainstream ways of combining GNNs with LLMs fall into the following categories:
1. Flatten the graph into a text sequence and feed it to the LLM; however, the lack of structural inductive bias often leads to poor results.
2. One line of work keeps the GNN as the predictor and uses the LLM to generate auxiliary signals such as synthetic labels or node descriptions; however, these methods still rely on a rigid GNN head and must be retrained for every task.
3. Delegate prediction to the LLM while incorporating structural signals from a frozen GNN through a cross-modal projection; however, training the components separately yields weak task conditioning and limited transferability.
4. Inject GNN features directly into the LLM token stream at inference time.</description><guid isPermaLink="true">https://hx-nous.github.io/post/Graph-R1-%20Incentivizing%20the%20Zero-Shot%20Graph%20Learning%20Capability%20in%20LLMs%20via%20Explicit%20Reasoning.html</guid><pubDate>Thu, 18 Dec 2025 13:42:42 +0000</pubDate></item><item><title>Contextual Structure Knowledge Transfer for Graph Neural Networks</title><link>https://hx-nous.github.io/post/Contextual%20Structure%20Knowledge%20Transfer%20for%20Graph%20Neural%20Networks.html</link><description># CS-GNN paper notes

## Problem the paper addresses
This paper focuses on a frequently overlooked yet critical problem in **graph transfer learning (source → target)**: **homophily shift**.</description><guid isPermaLink="true">https://hx-nous.github.io/post/Contextual%20Structure%20Knowledge%20Transfer%20for%20Graph%20Neural%20Networks.html</guid><pubDate>Tue, 16 Dec 2025 08:50:24 +0000</pubDate></item><item><title>Towards Graph Foundation Models: Training on Knowledge Graphs Enables Transferability to General Graphs</title><link>https://hx-nous.github.io/post/Towards%20Graph%20Foundation%20Models-%20Training%20on%20Knowledge%20Graphs%20Enables%20Transferability%20to%20General%20Graphs.html</link><description>Conditional message passing marks the evolution of GNNs from unconditional aggregation toward condition-driven reasoning.

This paper proposes a new take on graph foundation models: pretraining for inductive reasoning on just a few knowledge graphs allows direct transfer, with no fine-tuning at all, to node-, graph-, and link-level tasks on a wide range of ordinary graphs.</description><guid isPermaLink="true">https://hx-nous.github.io/post/Towards%20Graph%20Foundation%20Models-%20Training%20on%20Knowledge%20Graphs%20Enables%20Transferability%20to%20General%20Graphs.html</guid><pubDate>Wed, 10 Dec 2025 09:23:42 +0000</pubDate></item><item><title>LLaGA: Large Language and Graph Assistant</title><link>https://hx-nous.github.io/post/LLaGA-%20Large%20Language%20and%20Graph%20Assistant.html</link><description>This paper introduces a new method that helps an LLM understand graphs better without losing the LLM's generality.
## 1. Method
LLaGA proposes a framework that lets a frozen LLM exploit graph-structure information to solve a variety of graph tasks.</description><guid isPermaLink="true">https://hx-nous.github.io/post/LLaGA-%20Large%20Language%20and%20Graph%20Assistant.html</guid><pubDate>Thu, 04 Dec 2025 08:40:40 +0000</pubDate></item><item><title>GraphBridge: Towards Arbitrary Transfer Learning in GNNs</title><link>https://hx-nous.github.io/post/GraphBridge-%20Towards%20Arbitrary%20Transfer%20Learning%20in%20GNNs.html</link><description>## Brief overview:
This paper proposes two new transfer methods for graph transfer tasks built on pretrained models:
1. **GSST**: for transfer tasks where both the domain gap and the task gap are small, this simple framework already achieves strong results.</description><guid isPermaLink="true">https://hx-nous.github.io/post/GraphBridge-%20Towards%20Arbitrary%20Transfer%20Learning%20in%20GNNs.html</guid><pubDate>Fri, 28 Nov 2025 03:04:41 +0000</pubDate></item><item><title>Graph Foundation Models: A Comprehensive Survey</title><link>https://hx-nous.github.io/post/Graph%20Foundation%20Models-%20A%20Comprehensive%20Survey.html</link><description>This article is a comprehensive survey of **Graph Foundation Models (GFMs)**, exploring how the large-scale pretrained “foundation model” paradigm (akin to the large models of NLP and computer vision) can be applied to graph-structured data.</description><guid isPermaLink="true">https://hx-nous.github.io/post/Graph%20Foundation%20Models-%20A%20Comprehensive%20Survey.html</guid><pubDate>Mon, 10 Nov 2025 03:25:50 +0000</pubDate></item><item><title>GraphGPT: Graph Instruction Tuning for Large Language Models</title><link>https://hx-nous.github.io/post/GraphGPT-%20Graph%20Instruction%20Tuning%20for%20Large%20Language%20Models.html</link><description>## Main content and core method

### Goal:
The authors want to genuinely inject graph-structure knowledge into large language models (LLMs), so the model can generalize in zero-shot and cross-dataset / cross-task graph-learning settings (e.g., node classification, link prediction), rather than taking the traditional GNN route of fine-tuning on a small amount of downstream labels.</description><guid isPermaLink="true">https://hx-nous.github.io/post/GraphGPT-%20Graph%20Instruction%20Tuning%20for%20Large%20Language%20Models.html</guid><pubDate>Mon, 03 Nov 2025 12:41:56 +0000</pubDate></item><item><title>LLMs as Zero-shot Graph Learners: Alignment of GNN Representations with LLM Token Embeddings</title><link>https://hx-nous.github.io/post/LLMs%20as%20Zero-shot%20Graph%20Learners-%20Alignment%20of%20GNN%20Representations%20with%20LLM%20Token%20Embeddings.html</link><description># Paper summary

## 1. Background
In graph machine learning, graph neural networks (GNNs) generalize poorly: performance is unstable across datasets and tasks, and large amounts of labeled data are required; existing self-supervised learning (e.g., DGI) and graph prompt learning still need task-specific fine-tuning, which limits their practicality.</description><guid isPermaLink="true">https://hx-nous.github.io/post/LLMs%20as%20Zero-shot%20Graph%20Learners-%20Alignment%20of%20GNN%20Representations%20with%20LLM%20Token%20Embeddings.html</guid><pubDate>Thu, 16 Oct 2025 08:42:42 +0000</pubDate></item><item><title>GL-Fusion: Rethinking the Combination of Graph Neural Network and Large Language model (notes)</title><link>https://hx-nous.github.io/post/GL-Fusion-%20Rethinking%20the%20Combination%20of%20Graph%20Neural%20Network%20and%20Large%20Language%20model-yue-du.html</link><description># Skim notes

## This paper introduces a new architecture for combining GNNs &amp; LLMs, chiefly to avoid the flaws seen in earlier combination approaches

### Contributions

1. Structure-aware Transformer layer: embeds message passing (node-level aggregation and update) inside the Transformer layer, using a special attention mask to preserve autoregressive causality for text and permutation invariance for graph nodes.</description><guid isPermaLink="true">https://hx-nous.github.io/post/GL-Fusion-%20Rethinking%20the%20Combination%20of%20Graph%20Neural%20Network%20and%20Large%20Language%20model-yue-du.html</guid><pubDate>Tue, 07 Oct 2025 13:05:18 +0000</pubDate></item><item><title>9.26-Large Language Models Meet Graph Neural Networks: A Perspective of Graph Mining</title><link>https://hx-nous.github.io/post/9.26-Large%20Language%20Models%20Meet%20Graph%20Neural%20Networks-%20A%20Perspective%20of%20Graph%20Mining.html</link><description># This survey covers how GNNs and LLMs are being combined in graph mining

## The paper first motivates the importance of graph mining (GNNs excel at structural modeling, LLMs at semantic understanding), then lists applications of GNNs and LLMs in their respective fields.</description><guid isPermaLink="true">https://hx-nous.github.io/post/9.26-Large%20Language%20Models%20Meet%20Graph%20Neural%20Networks-%20A%20Perspective%20of%20Graph%20Mining.html</guid><pubDate>Fri, 26 Sep 2025 08:15:06 +0000</pubDate></item><item><title>MD-GNN (notes)</title><link>https://hx-nous.github.io/post/MD-GNN-yue-du.html</link><description># Presented at the 9.15 group meeting
This paper presents a multi-channel disentangled graph neural network with multiple self-constraints, aimed at alleviating label scarcity when using graph neural networks. Compared with previous multi-channel GNNs, it has three channels: a feature channel, a topology channel, and a latent channel.</description><guid isPermaLink="true">https://hx-nous.github.io/post/MD-GNN-yue-du.html</guid><pubDate>Fri, 19 Sep 2025 08:18:15 +0000</pubDate></item></channel></rss>