About
Brief overview: I'm currently a Ph.D. student at the University of Colorado Boulder, majoring in Computer Science, and I'm happy with that! ^__^ My research area is Networked Systems; I work as a research assistant under the supervision of Prof. Eric Keller, and I'm glad to work with the people in the Network and Security Research Group in the Computer Systems Lab. More specifically, my research focuses on cloud computing, virtualization technologies, orchestration, computer networking, and the Linux kernel. Currently, I am working on a project called Distributed Containers, in which I hope to end up with a centralized orchestration system that improves resource utilization and efficiency while largely preserving the throughput and latency of microservice applications compared to running them with popular orchestration tools such as K8s.
Experiences
Implemented a proxy server for both REST and gRPC calls, intercepting REST and gRPC requests/responses to generate stubs for mocking purposes (a minimal sketch of the recording idea is shown below)
Software Engineer Intern on the Data Manager Setup team, C360 product
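To illustrate the general idea behind that internship work, here is a minimal sketch of a recording proxy: it forwards REST calls to an upstream service and saves each request/response pair as a JSON stub that a mock server could replay later. The upstream address, port, and file layout are my own illustrative assumptions, not the actual implementation (which also handled gRPC).

```python
# Minimal sketch (assumed approach, not the original implementation): a recording
# HTTP proxy that forwards GET requests upstream and saves request/response
# pairs as JSON "stubs" for later mocking.
import json
import pathlib
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://localhost:8080"   # hypothetical real service
STUB_DIR = pathlib.Path("stubs")     # where recorded stubs are written
STUB_DIR.mkdir(exist_ok=True)

class RecordingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request to the upstream service.
        with urllib.request.urlopen(UPSTREAM + self.path) as upstream:
            status, body = upstream.status, upstream.read()
        # Persist a stub so a mock server can replay it later.
        stub = {"method": "GET", "path": self.path,
                "status": status, "body": body.decode("utf-8", "replace")}
        name = self.path.strip("/").replace("/", "_") or "root"
        (STUB_DIR / f"{name}.json").write_text(json.dumps(stub, indent=2))
        # Return the upstream response to the caller unchanged.
        self.send_response(status)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 9000), RecordingProxy).serve_forever()
```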
As a research assistant, I started working on a project called Toccoa. Generally speaking, this project is about cloud-scale, packet-level network analytics in software.
Another project, which I have been working on for roughly a year, is called Distributed Containers. Essentially, it is a manager that runs alongside K8s, monitors containers deployed on physical hosts, gathers information about them, and makes decisions, such as adjusting knobs and resource limits dynamically at runtime, to achieve higher resource utilization while preserving throughput (a minimal sketch of one piece of this idea is shown below).
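As a rough illustration of the "adjusting resource limits at runtime" part, here is a minimal sketch using the official Kubernetes Python client to patch a container's limits in a Deployment. The deployment, container, and namespace names are hypothetical placeholders; the actual manager is more involved than this.

```python
# Minimal sketch (one piece of the idea, not the actual manager): patch a
# container's CPU/memory limits at runtime via the Kubernetes API.
from kubernetes import client, config

def set_limits(deployment: str, container: str, cpu: str, memory: str,
               namespace: str = "default") -> None:
    config.load_kube_config()          # or load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    patch = {"spec": {"template": {"spec": {"containers": [{
        "name": container,
        "resources": {"limits": {"cpu": cpu, "memory": memory}},
    }]}}}}
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)

# Hypothetical usage: throttle the "web" container when the cluster is overcommitted.
# set_limits("frontend", "web", cpu="500m", memory="256Mi")
```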
Teaching Assistant for the following courses:
Computer Networking
Operating Systems & Operating Systems Lab
Computer Architecture
Introduction to Computing Systems and C Programming
Artificial Intelligence
Publications
Efficient resource management in cloud computing is a crucial problem because over-provisioning resources increases costs for cloud providers and cloud customers, while under-provisioning resources increases application latency and may violate service level agreements, which eventually makes cloud providers lose their customers and income. As a result, researchers have been striving to develop optimal resource management in cloud computing environments in different ways, such as container placement, job scheduling, and multi-resource scheduling. Machine learning techniques are extensively used in this area. In this paper, we present a comprehensive survey of projects that leveraged machine learning techniques for resource management solutions in the cloud computing environment. At the end, we provide a comparison between these works. Furthermore, we propose some future directions that will guide researchers to advance this field.
Containers are a popular mechanism used among application developers when deploying their systems on cloud platforms. Both developers and cloud providers are constantly looking to simplify container management, provisioning, and monitoring. In this paper, we present a container management layer that sits beside a container orchestrator and runs what we call Elastic Containers. Each elastic container contains multiple subcontainers that are connected to a centralized Global Cloud Manager (GCM). The GCM gathers subcontainer resource utilization information directly from inside each kernel running the subcontainers. The GCM then tries to efficiently and optimally distribute resources between the application subcontainers residing in a distributed environment.
Elastic Containers Paper
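As a rough illustration of the kind of kernel-level utilization data a manager like the GCM could collect, here is a minimal sketch that reads per-container CPU and memory usage from the cgroup v2 filesystem. The cgroup paths are hypothetical and the mechanism is my assumption of one plausible approach, not the paper's actual implementation.

```python
# Minimal sketch (assumed mechanism): read a container's CPU time and current
# memory usage from the cgroup v2 unified hierarchy.
import pathlib

CGROUP_ROOT = pathlib.Path("/sys/fs/cgroup")   # cgroup v2 mount point

def container_usage(cgroup_path: str) -> dict:
    """Return CPU time (microseconds) and current memory (bytes) for one cgroup."""
    cg = CGROUP_ROOT / cgroup_path
    cpu_stat = dict(line.split() for line in (cg / "cpu.stat").read_text().splitlines())
    return {
        "cpu_usec": int(cpu_stat["usage_usec"]),
        "memory_bytes": int((cg / "memory.current").read_text()),
    }

# Example: the subcontainer's cgroup path is hypothetical and deployment-specific.
# print(container_usage("system.slice/docker-abc123.scope"))
```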
With the increasing need for more reactive services, and the need to process large amounts of IoT data, edge clouds are emerging to enable applications to be run close to the users and/or devices. Following the trend in hyperscale clouds, applications are trending toward a microservices architecture where the application is decomposed into smaller pieces that can each run in its own container and communicate with each other over a network through well-defined APIs. This improves the development effort and deployability, but also introduces inefficiencies in communication. In this paper, we rethink the communication model and introduce the ability to create shared memory channels between containers, supporting both a pub/sub model and a streaming model. Our approach is not only applicable to edge clouds but also beneficial in core cloud environments. Local communication is made more efficient, and remote communication is efficiently supported through synchronizing shared memory regions via RDMA.
Shimmy Paper
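To give a flavor of the shared memory channel idea, here is a minimal single-host sketch in Python using named POSIX shared memory: a publisher writes one length-prefixed message into a named region, and a subscriber attaches to it by name and reads it back. The channel name and message layout are illustrative assumptions; the actual system additionally handles pub/sub, streaming, and RDMA-based synchronization, and containers would need a shared IPC namespace or mounted shm path to attach to the same region.

```python
# Minimal sketch (illustration only, not the Shimmy design): exchange one message
# through a named POSIX shared memory region instead of a socket.
from multiprocessing import shared_memory
import struct

CHANNEL = "demo_channel"   # hypothetical channel name
SIZE = 4096                # one page, enough for a single small message

# Publisher side: create the region and write one length-prefixed message.
pub = shared_memory.SharedMemory(name=CHANNEL, create=True, size=SIZE)
msg = b"sensor-reading: 42"
pub.buf[:4] = struct.pack("I", len(msg))
pub.buf[4:4 + len(msg)] = msg

# Subscriber side (normally a different process/container attaching by name).
sub = shared_memory.SharedMemory(name=CHANNEL)
(length,) = struct.unpack("I", bytes(sub.buf[:4]))
print(bytes(sub.buf[4:4 + length]))   # b'sensor-reading: 42'

sub.close()
pub.close()
pub.unlink()
```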
Projects
Here are some more recent and notable projects:
Designed a general, flexible, packet-level network analytics system on top of a programmable switch
Having different customers, transit services, and peers
Applying routing policies based on these relationships
Using route reflection, redistribution for statically routed customers, and BGP attributes for traffic manipulation
Tools: Cisco Router (IOS), GNS3
IP Routing course project
Re-implemented the paper and improved its results by applying feature engineering techniques and adding an LSTM (a generic sketch of such a model is shown below)
Advanced Operating Systems course project
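For context on the "adding an LSTM" part, here is a generic, minimal PyTorch sketch of a sequence regressor of the kind that could be added to such a re-implementation. The feature count, data, and training setup are placeholders, since the original paper and dataset are not described here.

```python
# Minimal sketch (generic placeholder, not the actual course project): an LSTM
# that reads a sequence of feature vectors and predicts one value per sequence.
import torch
from torch import nn

class SeqRegressor(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                   # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])     # predict from the last time step

# Hypothetical training loop on random data, just to show the shapes involved.
model = SeqRegressor(n_features=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 10, 4), torch.randn(64, 1)
for _ in range(5):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
```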
DevOps in the Cloud course project
Computer Networking course project
Computer Networking Lab course project
Operating Systems Lab course project
Internet Engineering course project
Database Lab course project
System Analysis & Design course project
An algorithm using a genetic approach to solve a minimization problem
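As a small illustration of the genetic approach, here is a generic sketch that minimizes a simple two-variable function with tournament selection, uniform crossover, and Gaussian mutation. The objective function and parameters are placeholders rather than the actual course problem.

```python
# Minimal sketch (generic genetic algorithm, not the original course code):
# minimize f(x, y) = x^2 + y^2 over real-valued two-gene individuals.
import random

def fitness(ind):                       # lower is better for a minimization problem
    x, y = ind
    return x * x + y * y

def tournament(pop, k=3):
    return min(random.sample(pop, k), key=fitness)

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]   # uniform crossover

def mutate(ind, sigma=0.1, rate=0.2):
    return [g + random.gauss(0, sigma) if random.random() < rate else g for g in ind]

def evolve(pop_size=50, generations=100):
    pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
    return min(pop, key=fitness)

print(evolve())   # typically close to [0, 0]
```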
Skills & Proficiency
Programming Languages (C/C++, Python, Java, Bash Script)
Linux Kernel
Software Defined Networking
DevOps
Web Programming
Project Management/Version Control
Awards & Honors
Built and deployed virtual machine live migration in a cloud environment, using OpenStack, NFS, and vSphere