📊 Research

My research interests lie in distributed high-performance parallel computing and in the distributed training and inference of deep learning (DL) and large language model (LLM) workflows on state-of-the-art computing clusters, both on-premises and in the cloud.

During my Ph.D. at UCR, my research focused on developing scalable, high-performance solutions for graph processing that exploit the resources of a heterogeneous computing cluster. More specifically, we developed the distributed MultiLyra [BigData’19] system, whose scalability enables the simultaneous evaluation of batches of hundreds of iterative graph queries. BEAD [BigData’20] then extends MultiLyra to scenarios in which a batch of queries must be continuously reevaluated as the graph grows. In the shared-memory setting, we developed SimGQ [HiPC’20], an online system that optimizes the evaluation of a batch of queries by sharing the results of common sub-computations among them; it won the Best Paper Award at HiPC’20 and was later extended into SimGQ+ [JPDC’22].
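To make the batching idea concrete, here is a minimal, illustrative sketch (not the actual MultiLyra/SimGQ code) of evaluating several single-source shortest-path queries as one batch, so each edge traversal is amortized across all queries; the graph representation and function names are hypothetical:

```python
# Illustrative sketch only, not the MultiLyra/SimGQ implementation:
# batched evaluation of multiple SSSP queries, where a single pass over
# the edges updates the state of every query in the batch.
import math

def batched_sssp(vertices, edges, sources):
    """vertices: iterable of vertex ids; edges: list of (u, v, weight);
    sources: one source vertex per query in the batch."""
    q = len(sources)
    # dist[v][i] = current distance of vertex v for query i
    dist = {v: [math.inf] * q for v in vertices}
    for i, s in enumerate(sources):
        dist[s][i] = 0.0

    changed = True
    while changed:  # Bellman-Ford-style iteration to a fixed point
        changed = False
        for u, v, w in edges:  # one edge traversal serves all q queries
            for i in range(q):
                if dist[u][i] + w < dist[v][i]:
                    dist[v][i] = dist[u][i] + w
                    changed = True
    return dist

# Example: a batch of three queries over a tiny graph.
verts = [0, 1, 2, 3]
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 5.0)]
print(batched_sssp(verts, edges, sources=[0, 1, 2]))
```

In a real system the per-vertex value vectors would be distributed and updated asynchronously, but the core benefit is the same: graph-access costs are shared across the whole batch instead of being paid once per query.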

I am also interested in Computer Architecture in general. During my master's studies I worked on high-performance chip multiprocessors, focusing on their interconnection networks; my thesis was titled “Using hybrid packet-circuit switching to improve memory access in NoC-based CMPs”.

Research Interests

  • Distributed Training and Inference of Deep Learning and Large Language Models
  • Distributed Graph Analytics & AI
  • High Performance & Parallel Computing
  • Computer Architecture
  • GPU Micro-Architecture and Programming