Overview
One of the world’s most influential research and advisory firms, headquartered in Cambridge, MA. The NASDAQ-listed company provides unique insights grounded in annual surveys of more than 675,000 consumers and business leaders worldwide, rigorous and objective methodologies, and the shared wisdom of its innovative clients.
As the business grows, its IT organization operates ten development teams across different regions, working on various products and solutions. With a large environment and data centers added through acquisition, the IT team has strived to understand how to meet application SLAs while minimizing cost and improving performance. Furthermore, labor-intensive operational tasks consumed the team, leaving unanswered the question of how to shape the IT infrastructure to support the next phase of business expansion.
After several months of careful study, they decided to adopt a containerized multi-cloud infrastructure solution built on Kubernetes to automate labor-intensive tasks, scale as the business grows, and optimize their cloud spending. They found heavy use of NGINX, Node.js, and Redis, among other services, across their containerized applications, and adopted a microservices-based approach using containers with Kubernetes orchestration. Their developers are now moving the first parts of the infrastructure and new services to Kubernetes. Administrators love the simplicity of setting up new clusters with a single command and of managing and balancing workloads across thousands of containers at a time. They can also control data access at a fine-grained level, eliminating the need to set up duplicate machines for security purposes.
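As an illustration of that fine-grained access control, the sketch below uses the official Kubernetes Python client to create a namespaced, read-only Role. The namespace `analytics`, role name `redis-reader`, and resource names are hypothetical, chosen only to show the mechanism.

```python
# Minimal sketch: grant read-only access to one named Redis config object
# in one namespace. Namespace, role, and resource names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # authenticate with the local kubeconfig

role = client.V1Role(
    metadata=client.V1ObjectMeta(name="redis-reader", namespace="analytics"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],                      # "" is the core API group
            resources=["configmaps", "secrets"],  # only these resource types
            resource_names=["redis-config"],      # only this named object
            verbs=["get"],                        # read-only; no write verbs
        )
    ],
)

client.RbacAuthorizationV1Api().create_namespaced_role(
    namespace="analytics", body=role
)
```

A RoleBinding would then attach this Role to a specific team’s service accounts, so access is scoped per team rather than per machine. Before they can put the solution into production, however, they need to resolve the following pain points.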
Challenge: Manage Complex Operations at Scale
- How to solve the complexity problem? Containerized services are challenging to manage at scale, and application performance depends on the health of the underlying infrastructure. That means system managers must still attend to the details of managing that infrastructure.
- How to avoid waste from over-provisioned cloud resources for their many application workloads? System managers must specify each workload’s resource requirements, but without a clear understanding of the container workloads and how an application’s containers relate to one another, most of those specifications are guesswork (see the first sketch after this list). Worse, IT teams are too busy to master a new platform.
- How to optimize the cost of supporting their applications in the cloud? The auto-scaler and scheduler in the standard Kubernetes distribution are reactive and rudimentary, responding to load only after it has changed, which results in lower performance than expected (see the second sketch after this list).
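The first sketch makes the guesswork concrete: these are the resource requests and limits that must be declared for every container. It uses the Kubernetes Python client to deploy a hypothetical NGINX service named `web`; the CPU and memory figures are exactly the numbers operators must estimate up front.

```python
# Minimal sketch: the per-container resource figures operators must estimate
# when specifying a workload. All names and numbers are hypothetical.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="nginx",
                        image="nginx:1.25",
                        # Estimated values: too high wastes cloud spend,
                        # too low risks CPU throttling or OOM kills.
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "250m", "memory": "256Mi"},
                            limits={"cpu": "500m", "memory": "512Mi"},
                        ),
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)
```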
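The second sketch shows why the stock auto-scaler is reactive: a standard HorizontalPodAutoscaler adds replicas only after observed CPU utilization has already crossed a threshold, so it trails load spikes rather than anticipating them. It targets the hypothetical `web` deployment above; the thresholds are illustrative.

```python
# Minimal sketch: the stock HorizontalPodAutoscaler reacts to measured
# utilization; it cannot scale ahead of a spike. Thresholds are illustrative.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        # Reactive trigger: new replicas are added only after average CPU
        # utilization is observed above 70%.
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```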