Computer Engineering and Applications ›› 2023, Vol. 59 ›› Issue (16): 240-247. DOI: 10.3778/j.issn.1002-8331.2204-0068

• Network, Communication and Security •


High-Performance Kernel-Level In-Network Caching for Named Data Networking

YANG Jike, SONG Tian, LI Tianlong, YANG Yating   

  1. School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing 100081, China
  2. School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
  • Online: 2023-08-15  Published: 2023-08-15


Abstract: Named data networking (NDN) is a new information-centric network architecture, and in-network caching is one of its core functions. Existing cache modules are mainly implemented at the application level, which leads to low network-operation efficiency, poor compatibility across devices, and restricted deployment locations. Compared with an application-level cache module, a kernel-level cache module can be deployed directly and widely on general-purpose network devices, which helps drive the large-scale adoption of in-network caching and the practical deployment of NDN. However, because NDN in-network caching involves frequent per-packet cache operations, moving the caching function into the kernel can degrade kernel processing performance. To address this problem, this paper designs and implements a kernel-level caching method. The method maintains a hash table for exact cache lookup and, exploiting the hierarchical structure of NDN names, builds a trie across name components to support prefix-based fuzzy cache matching. It further parallelizes cache operations across threads by protecting the lookup table with fine-grained per-slot locks and the replacement queue with atomic operations. The multi-threaded cache module is implemented in the Linux kernel. Experimental results show that the proposed method halves cache-lookup latency compared with existing solutions and raises throughput to 6.785 Mpacket/s through multithreading.

Key words: named data networking, in-network caching, name lookup, multi-thread