Office: LGRC A335
740 N Pleasant St
Amherst, MA 01003
I am an Assistant Professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst. I am a member of the Data systems Research for Exploration, Analytics, and Modeling (DREAM) lab and of the Center for Data Science.
I lead the Data Systems group, which works on systems for machine learning and data science, data management systems, and parallel and distributed systems. We focus on performance, scalability, fault tolerance, and programming abstractions. My research areas and projects are listed here.
Before joining UMass, I was with Yahoo Research and QCRI. I received my PhD from the Technical University of Darmstadt, Germany.
|Our paper “GMorph: Accelerating Multi-DNN Inference via Model Fusion” was accepted at EuroSys’24. The paper proposes “model fusion”, a new approach that fuses multiple task-specific, pre-trained, and heterogeneous DNNs into a single multi-task model to reduce inference latency.
|GraphMini paper accepted at PACT’23. GraphMini speeds up graph pattern matching, a key step in graph mining, by up to an order of magnitude compared to GraphPi and Dryadic. It builds auxiliary graphs by proactively pruning the input graph at query execution time.
|GSplit preprint published. GSplit is a multi-GPU graph neural network training system that introduces split parallelism to reduce sampling, loading, and training overheads.
|Amazon Research Award on split-parallel graph neural network training (PI).
|NSF CNS Core Small grant on split-parallel graph neural network training (PI).
|FlexPushDownDB paper appeared at VLDB. It investigates the tradeoff between caching data at the query execution server and pushing computation to storage for analytical query workloads.
|Test-of-time award for the ZooKeeper Atomic Broadcast (Zab) paper at DSN’21.
|Our paper on scalable graph neural network training using sampling appeared in the ACM SIGOPS Operating Systems Review.
|NextDoor paper appeared at EuroSys. NextDoor pushes graph sampling to the GPU in order to significantly speed up end-to-end training time for GNNs and graph ML.
|Adobe Research Collaboration Grant on distributed data caching (PI).
|I became an ACM Senior Member.
|Our paper on finding optimal resource configurations in the cloud appeared at VLDB. We evaluate and compare several commonly used black-box optimization algorithms.
|LiveGraph paper appeared at VLDB. LiveGraph is a graph storage system that supports both transactional updates and real-time analytics.
|Facebook Systems for ML Research Award on the NextDoor project, which pushes graph sampling to the GPU for graph machine learning (PI).
|PushDownDB paper appeared at ICDE. It studies the effectiveness of pushing parts of DBMS analytics queries down to the storage layer, specifically the AWS S3 service.
|Our paper on choosing a cloud DBMS appeared at VLDB. We discuss the tradeoffs involved in using shared-nothing vs. shared-storage designs in the cloud, considering different databases.
|I gave a keynote at the DataStax 2019 Product and Engineering Summit.
Abhinav Jangda - now at Microsoft Research
DSN 2021 Test-of-Time Award for the paper “Zab: High-Performance Broadcast for Primary-Backup Systems”.
Nomination for the “Best PhD thesis of the year” award by the German, Swiss, and Austrian computer science societies and the German chapter of the ACM.
ACM Senior Member.