Data-driven machine-learning-based inference (2022)

We are developing machine-learning algorithms that can perform sensing and learning missions in real-world environments. On-drone adaptation and continuous learning are therefore critical to maintaining algorithmic performance, since the data an algorithm encounters during real-world missions may differ substantially from the data used to pre-train it offline. However, when deep learning models such as Graph Convolutional Networks (GCNs) are trained in a distributed fashion across a network of drones modeled as a graph, the required inter-drone communication becomes a major bottleneck that limits efficiency and scalability. Moreover, weight and form-factor constraints leave each drone with very limited computational and storage resources, hindering on-device inference of GCN models. We thus strive to reduce the communication bottleneck in distributed GCN training, improving scalability and lowering storage costs while maintaining algorithmic accuracy (see the first sketch below). We also aim to accelerate GCN inference by resolving the data-processing and data-movement bottlenecks caused by irregular adjacency matrices, for which we devise an algorithm-hardware co-designed accelerator (see the second sketch below).
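To make the communication-reduction idea concrete, here is a minimal NumPy sketch in the spirit of BNS-GCN's partition parallelism with random boundary-node sampling: at each training iteration, a worker fetches features for only a random subset of the boundary nodes owned by other partitions. All sizes, variable names, the `fetch_boundary` callback, and the 1/keep_prob rescaling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy partitioned graph: this worker owns nodes 0..9 ("inner"); its edges
# also reference 5 "boundary" nodes owned by another partition.
num_inner, num_boundary, feat_dim = 10, 5, 8
A = (rng.random((num_inner, num_inner + num_boundary)) < 0.3).astype(float)
X_inner = rng.standard_normal((num_inner, feat_dim))
X_remote = rng.standard_normal((num_boundary, feat_dim))  # held by the other worker

def aggregate_with_bns(A, X_inner, fetch_boundary, keep_prob):
    """One aggregation step (the A @ X of a GCN layer) in which only a
    random subset of boundary-node features is fetched this iteration."""
    kept = rng.random(num_boundary) < keep_prob
    X_boundary = np.zeros((num_boundary, feat_dim))
    # Simulated communication: fetch only the sampled boundary features.
    # Rescaling by 1/keep_prob keeps the aggregation unbiased in
    # expectation (an illustrative choice, not necessarily the paper's).
    X_boundary[kept] = fetch_boundary(kept) / keep_prob
    return A @ np.vstack([X_inner, X_boundary])

H = aggregate_with_bns(A, X_inner,
                       fetch_boundary=lambda mask: X_remote[mask],
                       keep_prob=0.2)
print("aggregated features:", H.shape)  # only ~20% of boundary features crossed the network
```

The per-iteration communication volume shrinks from |boundary| feature vectors to roughly keep_prob × |boundary|, which is the source of the scalability and storage savings described above.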
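The inference-side bottleneck can likewise be illustrated with a short sketch. The aggregation step of a GCN layer is a sparse-dense product A @ X, and because node degrees differ from row to row, each row of the CSR adjacency triggers a gather of a different, data-dependent set of feature rows; these irregular accesses are what the co-designed accelerator targets. The following is a plain SciPy illustration under assumed toy sizes, not the accelerator's dataflow.

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)

# Toy sparse adjacency in CSR form; sizes are illustrative. Degrees vary
# from node to node (real-world graphs are usually far more skewed), so
# per-row work and memory accesses are irregular.
n, feat_dim = 1000, 16
A = sparse_random(n, n, density=0.01, format="csr", random_state=0)
X = rng.standard_normal((n, feat_dim))

def csr_aggregate(A, X):
    """Row-by-row sparse aggregation (the A @ X step of a GCN layer).
    Each row gathers a different, data-dependent set of rows of X."""
    out = np.zeros((A.shape[0], X.shape[1]))
    for i in range(A.shape[0]):
        lo, hi = A.indptr[i], A.indptr[i + 1]
        cols, vals = A.indices[lo:hi], A.data[lo:hi]
        out[i] = vals @ X[cols]  # irregular gather of X's rows
    return out

H = csr_aggregate(A, X)
degrees = np.diff(A.indptr)
print("degrees min/mean/max:", degrees.min(), degrees.mean(), degrees.max())
print("matches scipy:", np.allclose(H, A @ X))
```

The data-dependent `X[cols]` gather is what defeats the regular memory-access patterns that dense accelerators rely on, motivating the algorithm-hardware co-design direction.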

Publications

  • Cheng Wan, Youjie Li, Ang Li, Nam Sung Kim, and Yingyan Lin, “BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling”, Fifth Conference on Machine Learning and Systems (MLSys 2022).
  • Cheng Wan, Youjie Li, Cameron R. Wolfe, Anastasios Kyrillidis, Nam Sung Kim, and Yingyan Lin, “PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication”, Tenth International Conference on Learning Representations (ICLR 2022). arXiv:2203.10428v1
  • Haoran You, Tong Geng, Yongan Zhang, Ang Li, and Yingyan Lin, “GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design”, 28th IEEE International Symposium on High-Performance Computer Architecture (HPCA 2022). arXiv:2112.11594v2