The MADSys (Machine Learning, AI, and Big Data Systems) group is dedicated to the design, implementation, evaluation, and application of parallel and distributed systems. Our research spans a range of methods for accelerating data processing. Although foundational principles such as caching, batching, and overlapping remain consistent, their strategic and innovative application lets systems researchers make optimal use of diverse hardware resources across different scenarios.
Sep 10, 2024 - Our paper “Transparently Share Serverless Execution Environments Across Different Functions and Nodes” is accepted by SOSP 2024!
Sep 1, 2024 - Research Assistant Professor Yingdi Shan joins the MADSys Group. Welcome!
Sep 1, 2024 - Sixing Lin, Ruoyu Qin, Boxin Zhang, Jianfeng Li, Jingbo Shan, Jianwei Dong, Chen Lin, and Yuanyong Chen join the MADSys Group. Welcome!
Dec 20, 2023 - Professor Wu Yongwei has been elevated to CCF Fellow. Congratulations!
Sep 15, 2023 - Leping Wang and his team won first place in the “Changchengbei” cybersecurity competition. Congratulations!
Sep 1, 2023 - Jinqi Hua, Xun Sun, Shaofeng Ding, and Ziyu Zeng join the MADSys Group. Welcome!
Jul 16, 2023 - Our paper “Falcon: Fast OLTP Engine for Persistent Cache and Non-Volatile Memory” is accepted by SOSP 2023!
Jul 16, 2023 - Our paper “Partial Failure Resilient Memory Management System for (CXL-based) Distributed Shared Memory” is accepted by SOSP 2023!
Jul 7, 2023 - Our paper “Multi-objective optimization for Floating Point Mix-Precision Tuning” is accepted by ISLPED 2023!
Jun 27, 2023 - The MADSys group dined together in Beijing.
Jun 25, 2023 - Qi Chen, Feng Ren, and Shaonan Ma have received their PhD degrees, and Hanyang Mao has received a master's degree. Congratulations!
Jun 7, 2023 - Our paper “Explore Data Placement Algorithm for Balanced Recovery Load Distribution” is accepted by the 2023 USENIX Annual Technical Conference (USENIX ATC ‘23). Congratulations to Yingdi Shan!
Essentially, a decoder-only Transformer model transforms data from any modality into KVCache, positioning it as a central element in LLM serving optimizations. These optimizations include, but are not limited to, caching, scheduling, compression, and offloading. KVCache.AI is a collaborative endeavor with leading industry partners such as Approaching.AI and Moonshot AI. The project focuses on developing effective and practical techniques that enrich both academic research and open-source development.
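For readers unfamiliar with the idea, the role of the KVCache in decoder-only serving can be illustrated with a minimal NumPy sketch (the `KVCache` class and its methods here are illustrative assumptions, not an API of this project): during autoregressive decoding, each token's key and value vectors are cached so that attention at every later step reuses them instead of re-encoding the whole prefix.

```python
import numpy as np

class KVCache:
    """Per-sequence cache of key/value vectors for one attention head (illustrative)."""

    def __init__(self):
        self.keys = []    # one (d,) key vector per token seen so far
        self.values = []  # one (d,) value vector per token seen so far

    def append(self, k, v):
        # Cache the new token's key/value; earlier tokens are never recomputed.
        self.keys.append(k)
        self.values.append(v)

    def attend(self, q):
        # Scaled dot-product attention of the current query over all cached keys.
        K = np.stack(self.keys)                  # (t, d)
        V = np.stack(self.values)                # (t, d)
        scores = K @ q / np.sqrt(len(q))         # (t,)
        w = np.exp(scores - scores.max())
        w /= w.sum()                             # softmax weights over cached tokens
        return w @ V                             # (d,) attention output

# Decode three tokens: each step appends one key/value pair and
# attends over everything cached so far.
rng = np.random.default_rng(0)
cache = KVCache()
for _ in range(3):
    k, v, q = rng.standard_normal((3, 4))
    cache.append(k, v)
    out = cache.attend(q)
```

Because the cache grows linearly with sequence length, its memory footprint is exactly what serving-time techniques such as caching across requests, scheduling, compression, and offloading aim to manage.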
Prompted by advancements in modern interconnect technologies like RDMA and CXL, this project aims to revisit the implementation and application of distributed shared memory (DSM). The objective is to facilitate the development of resilient distributed applications that can tolerate partial failures, making this process as straightforward and efficient as programming concurrent applications on a single machine. RDSM is a collaborative endeavor with leading industry partners such as Alibaba and Intel, dedicated to establishing fundamental frameworks that enhance both academic research and open-source development.