A Smarter, Cost-efficient Way to Provision Cloud Workloads with ProphetStor Federator.ai


The accelerated shift from running applications on-premises to running them in the cloud has taught many enterprises expensive lessons about managing pay-as-you-go pricing models. While some early missteps can be chalked up to inexperience, many large organizations continue to provision cloud resources ineffectively, costing them hundreds of thousands, sometimes millions, of dollars annually.

As cloud adoption accelerates, the problem is getting worse. According to Gartner, public cloud workloads are forecast to rise 18% this year, while Flexera’s 2020 State of the Cloud Report estimates that 35% of cloud spend is wasted. Datadog, which monitors cloud app performance for thousands of enterprises, says that nearly half of the apps it monitors use 30% or less of their allocated resources.

Manual, “gut check” approaches to allocating cloud resources are unsustainable. To avoid the performance impacts of under-provisioning, risk-averse CloudOps teams commonly over-provision resources, and over-provisioning results in over-spending.

Highly skilled CloudOps teams struggle to get a tighter grip on provisioning for three main reasons: they lack visibility into the hosted services on which their apps run, they lack the capability to predict what resources are needed, and they lack the tools to choose the most cost-optimized cluster configurations for their workloads.

CloudOps teams need a new way to manage cloud resources that is automated and intelligent and that looks at the full application stack, from the workload down to the container, the virtualized infrastructure layer, and individual hardware components, as well as across cloud instances. A modern approach to hybrid and multi-cloud provisioning must be aware of the many interrelationships between the virtual and physical components in a system and must continuously observe and react to the dynamic changes occurring throughout the system in real time.

We at ProphetStor believe that application rightsizing must be done at a granular level. When the right machine learning approaches are applied to IT operations, your CloudOps team can gain a much more fine-grained understanding of things like CPU and memory usage, network traffic, and power consumption metrics: all of the complex, interrelated dependencies that IT administrators need to understand in order to better plan and allocate cloud and data center resources.

ProphetStor Federator.ai is an AI-based solution that helps enterprises manage and optimize resources for applications on Kubernetes. Federator.ai ingests telemetry from multiple sources, including trusted application performance monitoring products such as Sysdig and Datadog, as well as standard open source tools like Prometheus. Using advanced machine learning algorithms, Federator.ai reasons across this rich telemetry and dynamically delivers application resource consumption predictions and recommendations. Federator.ai offers:

  • AI-based workload prediction for containerized applications in Kubernetes clusters and VMs in VMware clusters
  • Resource recommendations based on workload prediction, application, Kubernetes, and other related metrics
  • Automatic scaling of Kubernetes application containers, Kafka consumer groups, and NGINX Ingress upstream services
  • Automatic provisioning of CPU/memory for generic Kubernetes application controllers/namespaces
  • Multicloud cost analysis and recommendations based on workload predictions for Kubernetes clusters
  • Actual cost and potential savings based on recommendations for clusters, Kubernetes applications, and Kubernetes namespaces
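To make the rightsizing idea concrete, here is a minimal sketch of how a resource request could be derived from a workload prediction plus a safety margin. This is an illustrative assumption, not Federator.ai’s actual algorithm; the function name and headroom value are hypothetical.

```python
def recommend_request(predicted_peak: float, headroom: float = 0.2) -> float:
    """Recommend a resource request as predicted peak usage plus a buffer.

    predicted_peak: forecast peak usage (e.g., CPU millicores or MiB of memory).
    headroom: fractional margin above the forecast to absorb prediction error.
    """
    return predicted_peak * (1 + headroom)

# A container with a predicted peak of 400 CPU millicores and 20% headroom
# gets a recommended request of 480m, instead of a gut-check 1000m request
# that would leave 520m idle in every replica.
print(recommend_request(400))  # 480.0
```

The point of the prediction is that the buffer can be small and data-driven, rather than the large, static margins teams add when they are guessing.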

Federator.ai and the SUSE Rancher Apps and Marketplace

In partnership with SUSE, ProphetStor Federator.ai helps customers optimize resource usage for applications running on any SUSE Rancher-managed cluster. Through Federator.ai’s intelligent, application-aware autoscaling, containers and pods are automatically scaled to the right number of replicas based on workload demands while maintaining performance goals.
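As a rough illustration of prediction-driven autoscaling (a hedged sketch under assumed inputs, not Federator.ai’s implementation), the replica count can be computed from a forecast load and an assumed per-replica capacity:

```python
import math

def replicas_for_load(predicted_rps: float, rps_per_replica: float,
                      min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Pick a replica count that keeps predicted load within per-replica capacity.

    predicted_rps: forecast request rate (hypothetical metric for this sketch).
    rps_per_replica: sustained throughput one replica can handle.
    """
    needed = math.ceil(predicted_rps / rps_per_replica)
    # Clamp to the configured bounds, as a Kubernetes HPA does.
    return max(min_replicas, min(max_replicas, needed))

# A forecast of 2,300 requests/s with replicas that sustain 500 requests/s
# yields 5 replicas, scaled up ahead of the surge rather than after it.
print(replicas_for_load(2300, 500))  # 5
```

Unlike a purely reactive autoscaler, which scales only after utilization has already climbed, feeding a forecast into this calculation lets the scale-out happen before the demand arrives.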
 
To install, select the Federator.ai tile from the global catalog, then select the software version and the target project where Federator.ai will be installed. You can modify the default configuration options if you choose, then deploy Federator.ai to the specified SUSE Rancher-managed cluster.
After Federator.ai is deployed, you can monitor application resource usage across clusters and projects. It provides resource recommendations for application containers and namespaces so that they are neither over-provisioned nor under-provisioned. Further integration with CI/CD pipelines enables continuous optimization of resource usage whenever applications are deployed.
 
With SUSE Rancher and ProphetStor Federator.ai, you can rest assured that your Kubernetes environment and container workloads are performant and cost-optimized, because the guesswork is taken out of how best to deploy your workloads, no matter where you run them: multicloud, hybrid cloud, or on-premises. Please note that ProphetStor Federator.ai fully supports both SUSE Rancher (which includes a support subscription from SUSE) and open source Rancher project deployments.
For more information on Federator.ai, please visit prophetstor.com/federator-ai.
This blog was originally published on SUSE Blog.
