Hina Gandhi, Senior Software Engineer, brings almost a decade of experience from industry leaders like Cisco, VMware, and CloudHealth Technologies. In this interview, Hina shares lessons from scaling distributed systems, leading the transition from monoliths to microservices, and navigating the evolution of engineering leadership in a hybrid world. She also highlights common pitfalls in SaaS development and explores the future of cloud computing shaped by AI. Read on for a candid look into the strategies and challenges shaping the next generation of cloud innovation.
Explore more interviews here: Srinivas Sandiri, Technology Leader in Digital Transformation — The Evolution of AI in CX, Automation vs. Human Touch, Ethical AI, Cross-Team Alignment, and Preparing the Next Generation
Your career spans some of the most innovative companies in the tech industry: Cisco, VMware, and CloudHealth Technologies. What are some defining moments that shaped your expertise in distributed systems and cloud computing?
I began my journey in distributed systems and cloud computing at CloudHealth Technologies, first as an intern and later as a full-time engineer. As one of the early engineers, I played a key role in designing and developing Microsoft Azure cloud cost management solutions, which became a major product offering and helped capture new market share. This experience gave me hands-on exposure to distributed systems and reinforced my understanding of how critical they are for high availability and scalability in cloud-based applications.
During my time at CloudHealth (later acquired by VMware), I worked on optimizing critical data processing jobs used for cloud insights reporting. The existing job was a major performance bottleneck: despite running on AWS's highest-tier EC2 instance, it took over 24 hours to complete, negatively impacting the user experience. To resolve this, I designed and implemented a distributed data processing system that divided a single large job into multiple parallel tasks, improving performance 10x, reducing costs, and making the system significantly more scalable. This project deepened my expertise in distributed data processing and real-world cloud optimization strategies.
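The fan-out pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual implementation: the function names and record shape are invented for the example, threads stand in for what would really be a fleet of distributed workers, and the real per-partition work was far heavier than a simple aggregation.

```python
from concurrent.futures import ThreadPoolExecutor

def process_partition(records):
    # Stand-in for the real per-partition work; here we just total a cost field.
    return sum(r["cost"] for r in records)

def chunk(records, n_parts):
    # Split the input into roughly equal slices, one per worker.
    size = max(1, len(records) // n_parts)
    return [records[i:i + size] for i in range(0, len(records), size)]

def process_in_parallel(records, n_parts=4):
    # Fan the partitions out to workers and merge the partial results.
    # Threads keep the sketch self-contained; a production system would run
    # each partition as an independent distributed task.
    with ThreadPoolExecutor(max_workers=n_parts) as pool:
        partials = pool.map(process_partition, chunk(records, n_parts))
        return sum(partials)

data = [{"cost": i} for i in range(1000)]
print(process_in_parallel(data))  # → 499500, same total as a sequential pass
```

The key property is that the merged result matches the single-job output while the partitions run concurrently, which is what makes this kind of split safe to roll out against an existing job.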
At VMware, working on large-scale applications further refined my skills, setting the stage for my next challenge at Cisco. There, I led an end-to-end transformation of a monolithic application into a cloud-based microservice. The original system, responsible for exporting customer vulnerability data from on-premise assets and software applications, faced severe performance issues due to noisy neighbor problems and resource contention. I redesigned the system using AWS services in a distributed architecture, improving performance 5x, reducing costs, and enabling on-demand resource allocation to handle unpredictable workloads efficiently.
Over my 9+ years in the industry, I have not only deepened my expertise in cloud computing and distributed systems but have actively applied it to solve scalability and performance challenges in real-world applications.
Many organizations struggle with the transition from monolithic to microservices architectures. Can you walk us through a real-world transformation you led, the biggest hurdles you faced, and how you measured success?
In my 9+ years of experience, two of the most impactful monolith-to-microservices transitions I worked on were at VMware and Cisco. In both cases, the monolithic applications faced severe performance issues, with job processing times exceeding 24 hours, leading to service outages and negatively affecting the user experience. To address these challenges, transitioning to microservices was essential, as it allowed us to break down large, tightly coupled jobs into smaller, independent services that enabled parallel processing, better scalability, and fault isolation.
One of the biggest hurdles in this transformation was designing an architecture that not only improved performance but also remained cost-efficient. This required careful selection of technologies and system design principles, including asynchronous processing, event-driven architecture, and a database-per-service approach to reduce latency and optimize resource utilization.

Another major challenge was ensuring data consistency and integrity, as any discrepancies between the old monolith and the new microservices-based system could raise concerns from customers. To mitigate this risk, a thorough 1:1 data validation was performed, running both systems in parallel and comparing results before fully transitioning. Additionally, performance testing played a crucial role in the process, as it was needed to ensure that each component of the new service functioned optimally. This included verifying that the chosen database could efficiently handle large-scale read and write operations and that auto-scaling mechanisms dynamically allocated resources based on fluctuating workloads.
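The 1:1 dual-run validation described above amounts to a record-level diff between the two systems' outputs. A minimal sketch follows, under stated assumptions: the function name and record shape are hypothetical, the exports are assumed to be keyed, and a production check would stream results rather than load everything into memory.

```python
def validate_parity(legacy_rows, new_rows, key="id"):
    # Index both exports by their record key.
    legacy = {row[key]: row for row in legacy_rows}
    new = {row[key]: row for row in new_rows}
    return {
        # Records only the monolith produced.
        "missing": sorted(set(legacy) - set(new)),
        # Records only the new microservice produced.
        "extra": sorted(set(new) - set(legacy)),
        # Shared records whose contents disagree.
        "mismatched": sorted(k for k in set(legacy) & set(new)
                             if legacy[k] != new[k]),
    }

report = validate_parity(
    [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}],
    [{"id": 1, "v": "a"}, {"id": 2, "v": "B"}],
)
print(report)  # → {'missing': [], 'extra': [], 'mismatched': [2]}
```

Cutover proceeds only once all three lists come back empty for a full parallel run of both systems.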
Once the project was deployed to production, measuring success became critical. The most important metrics included performance improvements, cost efficiency, scalability, and fault tolerance. Compared to the old monolithic system, processing times were reduced from over 24 hours to just a few minutes, cloud resource utilization improved, resulting in roughly 30% cost savings, and the new system handled more concurrent requests without performance degradation. Additionally, fault isolation mechanisms ensured that failures in one service did not cause system-wide outages, significantly improving overall reliability. These transformations not only enhanced system efficiency but also led to a better customer experience by enabling faster data processing, seamless scalability, and reduced operational costs.
As cloud computing continues to evolve, where do you see the biggest opportunities and challenges for enterprises looking to scale their distributed systems?
Cloud computing has evolved rapidly in recent years, leading to increased adoption of serverless architectures across the industry. Platforms like AWS Lambda and Google Cloud Functions help minimize operational overhead while improving cost efficiency. Many organizations are also embracing hybrid cloud models, allowing workloads to run both on-premises and in the cloud to balance performance and cost. However, this growing adoption brings added responsibility, particularly in cost management, where expenses can escalate due to unnoticed overuse, misconfigured autoscaling, or inefficient resource utilization. Additionally, security risks may arise from open network ports or misconfigurations that could expose sensitive data.
The “Future of Work” in tech is a hot topic. With the increasing shift toward remote and hybrid teams, how do you see engineering leadership evolving to maintain productivity, innovation, and strong team dynamics?
Hybrid work has become the new norm, and as hybrid teams continue to grow, engineering leadership must embrace flexibility, allowing team members to manage their own schedules while maintaining productivity and innovation. For fully remote teams, fostering strong team dynamics is essential. One approach is to host informal virtual gatherings, such as Friday happy hours, to encourage personal connections beyond work-related discussions. Additionally, leadership can organize in-person offsites, bringing remote team members together to align on the product roadmap, review annual OKRs, and engage in deeper conversations around customer feedback and strategic next steps.
Your work involves building scalable, high-performance applications. What are the biggest mistakes companies make when designing cloud-based SaaS solutions, and how can they avoid them?
I’ve observed companies over-engineering applications to accommodate potential future use cases that may never materialize. This often delays the delivery of scalable, high-performance solutions, forcing teams to rely on existing underperforming systems for longer than necessary. A more effective approach is to first focus on resolving current performance challenges, then iteratively evolve the solution to support future needs.
You’ve worked across different tech giants, each with its own engineering culture and technology stack. How do you adapt and innovate in different environments while ensuring a long-term technical vision?
Throughout my time at various tech giants, I have consistently adhered to the core principles of building scalable and efficient systems. While every company faces challenges with legacy or inefficient architectures, the pressure to deliver features quickly often leads to these issues being deprioritized. I believe it’s crucial to strike a balance: driving innovation to stay competitive in the market, while also modernizing and optimizing existing systems to ensure they remain robust and performant under scale.
Looking ahead, what excites you the most about the future of cloud computing and distributed systems? Are there any emerging trends or technologies that you believe will redefine the industry in the next five years?
With the rise of AI, I believe the next major trend will be the integration of AI into distributed systems. We’ll see widespread adoption of AI-driven approaches to scale resources dynamically, monitor system health in real time, leverage predictive algorithms for load balancing, and detect anomalous or malicious activity across distributed environments.