Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (15): 77-90. DOI: 10.3778/j.issn.1002-8331.2306-0417

• Theory, Research and Development •

Cross-Domain Recommendation Algorithm Combining Information Bottleneck and Graph Convolution

WANG Yonggui, HU Pengcheng, SHI Qiwen, ZHAO Yang, ZOU Heyu   

  1. College of Electronics and Information Engineering, Liaoning Technical University, Huludao, Liaoning 125105, China
  • Online: 2024-08-01    Published: 2024-07-30

Abstract: Cross-domain recommendation based on transfer learning can effectively learn the mapping function connecting the source domain and the target domain, but its performance is still limited by low representation quality and the negative transfer problem, so it cannot make effective recommendations for cold-start users. To address this, a cross-domain recommendation model fusing the information bottleneck and graph convolutional networks (IBGC) is proposed. A graph convolutional network is used to aggregate correlated user-user and item-item information, and an attention mechanism is used to learn user and item preferences so as to improve the quality of node feature representations. To exploit the information interaction between the two domains, overlapping users are embedded while the encoding of domain-specific information is restricted; based on the information bottleneck theory, three regularizers are designed to capture intra-domain and cross-domain user-item correlations, and the representations of overlapping users in the two domains are aligned to alleviate the negative transfer problem. Experiments are conducted on four pairs of public datasets drawn from the Amazon dataset. The results show that the model outperforms the baseline models on the three recommendation metrics MRR, HR@K, and NDCG@K; compared with the strongest baseline on the four dataset pairs, MRR improves by 34.36% on average, HR@10 by 34.94%, and NDCG@10 by 36.83%, which demonstrates the effectiveness of the IBGC model.
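For orientation, the classical information bottleneck objective on which such regularizers are typically built seeks a representation $Z$ of an input $X$ that remains predictive of a target $Y$ while discarding the rest of $X$; in generic notation (these symbols are illustrative and not necessarily the paper's own formulation):

$\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)$

where $I(\cdot\,;\cdot)$ denotes mutual information and $\beta$ trades compression against prediction. The intra-domain, cross-domain, and alignment regularizers mentioned above can be read as mutual-information constraints of this kind applied to the user and item embeddings of the source and target domains; the exact regularizers used in IBGC are defined in the full paper.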

Key words: cross-domain recommendation algorithms, user cold-start recommendation, graph convolutional neural networks, information bottleneck theory, network embedding learning, attention mechanism