The Kubernetes native HPA mechanism yields only modest resource savings while incurring much larger consumer lag (latency). By applying machine learning to workload prediction and analysis, Federator.ai achieves lower latency with fewer resources (fewer Kafka consumers), and makes autoscaling Kafka consumers simple and straightforward.
Effective workload predictions
Federator.ai uses the message production rate of a Kafka topic, together with target KPI metrics such as the desired latency, as the key signals for autoscaling Kafka consumers. Predictions of the message production rate give a more accurate indication of the real workload facing Kafka consumers.
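To make the idea concrete, the replica decision can be sketched as a simple capacity calculation. The function and all parameter names below are hypothetical illustrations of the general approach, not Federator.ai's actual algorithm: size the consumer group so it can sustain the predicted production rate and drain the current backlog within the latency target.

```python
import math

def required_consumers(predicted_rate, per_consumer_rate, current_lag, target_latency):
    """Estimate how many consumer replicas are needed to keep up with the
    predicted production rate while draining the current lag within the
    target latency. Rates are in messages/second; lag in messages;
    latency in seconds. (Hypothetical sizing rule for illustration.)"""
    # Required throughput: sustain incoming messages plus clear the backlog in time.
    needed_rate = predicted_rate + current_lag / target_latency
    return max(1, math.ceil(needed_rate / per_consumer_rate))

# Example: 5,000 msg/s predicted, each consumer handles ~1,200 msg/s,
# and 24,000 messages of lag must be cleared within 10 seconds.
print(required_consumers(5000, 1200, 24000, 10))  # → 7
```

Because the input is a *predicted* rate rather than a lagging observed metric, replicas can be provisioned before the workload spike arrives instead of after lag has already accumulated.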
Cost-effective application deployments
Federator.ai integrates workload metrics, workload predictions, and the application KPI when deciding the right number of replicas, achieving more cost-effective application deployments.
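One way such an integration can work, sketched here purely as a hypothetical policy (the function name, parameters, and safety factor are assumptions, not Federator.ai's implementation), is to provision for the worst of the currently observed and near-future predicted workloads, with a small safety margin:

```python
def provisioning_rate(observed_rate, predicted_rates, safety=1.1):
    """Blend current and predicted workload: plan capacity for the worst
    case over the observation and the prediction horizon, padded by a
    safety factor. (Hypothetical policy for illustration.)"""
    return safety * max(observed_rate, *predicted_rates)

# Observed 3,000 msg/s now; predictions for the next intervals peak at 5,200.
print(provisioning_rate(3000, [3500, 5200, 4800]))
```

A purely reactive policy would scale only after the observed rate reached 5,200 msg/s; folding in the prediction lets replicas be added ahead of the peak without permanently over-provisioning.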
Achieving desired performance
Without guessing at, or experimenting with, the metric threshold to set in the Kubernetes native HPA, Federator.ai automatically makes better use of resources while achieving the desired performance.
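For contrast, the Kubernetes native HPA sizes replicas from a single metric threshold using its documented rule, desiredReplicas = ceil(currentReplicas × currentMetric / desiredMetric). The sketch below implements that formula to show how the outcome hinges entirely on the operator-chosen threshold:

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, desired_metric):
    """Kubernetes native HPA scaling rule:
    desiredReplicas = ceil(currentReplicas * currentMetric / desiredMetric)."""
    return math.ceil(current_replicas * current_metric / desired_metric)

# Same observed utilization (80%), two different guessed thresholds:
print(hpa_desired_replicas(4, 80, 50))  # threshold 50% → 7 replicas
print(hpa_desired_replicas(4, 80, 70))  # threshold 70% → 5 replicas
```

With identical observed load, the replica count swings from 5 to 7 depending only on the threshold guess, which is the tuning burden that a prediction-driven approach removes.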