I am a Senior ML Research Engineer at the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University. Prior to this, I served as a lecturer in Computer Science at the University of California, Riverside, and at San Diego State University.

I received my PhD in Computer Science from the University of California, Riverside, under the supervision of Dr. Rajiv Gupta, where I was a member of the RIPLE research group and worked on GRASP projects. I hold two master’s degrees: one in Computer Science from the University of California, Riverside, and one in Computer Architecture from the University of Tehran.

🔥 News

📊 Research

My research interests lie in distributed high-performance parallel computing and in the distributed training and inference of deep learning and large language model workflows on state-of-the-art computing clusters, both on premises and in the cloud.

During my Ph.D. at UCR, my research focused on developing scalable, high-performance solutions for graph processing by employing the resources available on a heterogeneous computing cluster. More specifically, we developed the distributed MultiLyra [BigData’19] system, whose scalability enables the simultaneous evaluation of batches of hundreds of iterative graph queries. BEAD [BigData’20] then extends MultiLyra to scenarios in which a batch of queries must be continuously reevaluated as the graph grows. In the shared-memory setting, we developed SimGQ [HiPC’20], an online system that optimizes the evaluation of a batch of queries by sharing the results of common sub-computations among them; it won the Best Paper Award at HiPC’20 and was extended in SimGQ+ [JPDC’22].
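The core batching idea can be sketched in a few lines. The following is a hypothetical, highly simplified illustration (not the actual MultiLyra/SimGQ code): each vertex keeps one distance value per query, so a single pass over the edges serves the entire batch of BFS-style queries instead of traversing the graph once per query.

```python
INF = float("inf")

def batched_bfs(graph, sources):
    """Evaluate a batch of BFS distance queries in one traversal.

    graph:   adjacency dict {u: [v, ...]}
    sources: list of query roots, one per query in the batch
    """
    k = len(sources)
    # One distance slot per query at every vertex.
    dist = {v: [INF] * k for v in graph}
    for q, s in enumerate(sources):
        dist[s][q] = 0
    frontier = set(sources)
    while frontier:
        nxt = set()
        for u in frontier:
            for v in graph[u]:
                changed = False
                # A single visit of edge (u, v) updates all k queries.
                for q in range(k):
                    if dist[u][q] + 1 < dist[v][q]:
                        dist[v][q] = dist[u][q] + 1
                        changed = True
                if changed:
                    nxt.add(v)
        frontier = nxt
    return dist

g = {0: [1], 1: [2], 2: [3], 3: []}
d = batched_bfs(g, [0, 1])
print(d[3])  # → [3, 2]: vertex 3 is 3 hops from source 0 and 2 hops from source 1
```

In the real systems the same amortization applies at a much larger scale: a batch of hundreds of iterative queries shares each pass over the (distributed) edge set, which is where the scalability comes from.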

I am also interested in Computer Architecture in general. During my master’s studies, I worked on high-performance chip multiprocessors with a focus on their interconnection networks; my thesis was titled “Using hybrid packet-circuit switching to improve memory access in NoC-based CMPs”.

Research Interests

  • Distributed Training and Inference of Deep Learning and Large Language Models
  • Distributed Graph Analytics & AI
  • High Performance & Parallel Computing
  • Computer Architecture
  • GPU Micro-Architecture and Programming

📝 Publications

🤝 Service

Program Committee

  • Data Analytics 2022-2024, DAC 2020, IETE 2020, JOC 2020

Sub-reviewer

  • MICRO’23, ICS’23, CGO’21, PACT’20, CGO’20, ISPASS’19, IA3 at SC’19, ASPLOS’18, PACT’17, IPDPS’14, HPIN’14

📖 Teaching

Workshops

Lecturer

  • Spring 2024 - CS 005: Introduction to Computer Programming
  • Winter 2024 - CS 005: Introduction to Computer Programming
  • Fall 2023 - CS 005: Introduction to Computer Programming
  • Fall 2023 - CS 635: Advanced Object-Oriented Programming
  • Summer 2021 - CS 153: Design of Operating Systems
  • Spring 2021 - CS/EE 147: GPU Computing and Programming