“How can there be an extra 30% of overhead in applications like Apache Spark that other optimization solutions can’t touch?” That’s the question many Pepperdata prospects and customers ask us. They’re surprised, if not downright skeptical, to discover that Pepperdata autonomous cost optimization eliminates up to 30% (or more) of wasted capacity inside Spark applications. And that’s on top of conventional resource optimization techniques such as Cluster Autoscaling, Instance Rightsizing, Spark Dynamic Allocation, and manual application tuning (a sample Dynamic Allocation configuration appears at the end of this post).

A Long History of Customer Success

At Pepperdata we take this skepticism in stride, since we’ve been encountering it, and overcoming it, for the last decade, from leading enterprises and innovative startups alike. Over those ten years, Pepperdata Capacity Optimizer has been deployed and hardened in some of the world’s largest and most demanding compute environments running data-intensive workloads. By keeping these complex, highly sophisticated data clusters in balance, day in and day out for years, Pepperdata has pioneered and refined a unique, industry-leading set of features and expertise that reduces costs for data-intensive workloads by an average of 30-47% with no manual tuning or application code changes. By autonomously optimizing cluster CPU and memory in real time, Capacity Optimizer […]
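For readers unfamiliar with the conventional techniques mentioned above, here is a minimal sketch of what enabling Spark Dynamic Allocation looks like in application code. The settings shown are standard Apache Spark configuration properties; the specific values (executor counts, timeouts) and the application name are illustrative assumptions, not a Pepperdata recommendation.

```scala
// Minimal sketch: enabling Spark Dynamic Allocation, one of the
// conventional optimization techniques discussed above.
// All numeric values below are illustrative, not tuned recommendations.
import org.apache.spark.sql.SparkSession

object DynamicAllocationExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dynamic-allocation-example") // hypothetical app name
      // Let Spark add and remove executors based on the task backlog.
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.dynamicAllocation.minExecutors", "2")
      .config("spark.dynamicAllocation.maxExecutors", "50")
      // Executors idle longer than this are released back to the cluster.
      .config("spark.dynamicAllocation.executorIdleTimeout", "60s")
      // Track shuffle files so executors can be removed safely without
      // an external shuffle service (available in Spark 3.x).
      .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
      .getOrCreate()

    // Trivial workload so the session has something to schedule.
    println(spark.range(1000000L).count())
    spark.stop()
  }
}
```

Dynamic Allocation scales executor counts with demand, but it operates at the granularity of whole executors requested by a single application; the post’s claim is that additional capacity remains stranded inside those allocations, which is the waste Capacity Optimizer targets.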