Big data adds significant value to your organization but can also add significant cost. Buoyant Data specializes in analyzing Databricks and AWS usage to provide cost optimization and consulting.
Monitoring and alerting for data infrastructure to ensure that the data platform delivers value while staying within budget.
Guidance for Databricks and Delta Lake deployments to ensure the highest performance-to-cost ratio.
Review of already-deployed data infrastructure to squeeze faster queries and lower costs out of your current platform.
Remove those pesky hard-coded secret keys from your data applications and learn how to assume roles using built-in credential providers in AWS. This post includes examples that can be copied for both Rust and Python applications that need to access Delta tables.
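As a taste of the approach, here is a minimal sketch in Rust (assuming the deltalake crate with its s3 feature enabled; the table URI is hypothetical). No access key appears anywhere in the source: the default AWS credential provider chain resolves credentials at runtime from environment variables, a shared profile, or the instance's assumed role.

```rust
// Minimal sketch: open a Delta table on S3 without hard-coded keys.
// Assumes the `deltalake` crate (s3 feature) and tokio; the table URI
// below is hypothetical. Credentials are resolved by the default AWS
// provider chain (environment, shared profile, or instance/task role).
#[tokio::main]
async fn main() -> Result<(), deltalake::DeltaTableError> {
    let table = deltalake::open_table("s3://my-bucket/tables/events").await?;
    println!("loaded table at version {}", table.version());
    Ok(())
}
```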
Optimizing the cost of workloads running on Databricks can be daunting at first, but there is plenty of low-hanging fruit! These tips will help you save thousands of dollars annually on your big data's big bills!
Buoyant Data will be in San Francisco for Data and AI Summit from June 26th to June 29th. We'll be talking about alternative data pipelines using Rust and Python, and cost optimization in AWS. Come find us!
A developer-focused post explaining how to write to a Delta table in Rust using the Apache Arrow RecordBatch data structure.
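In outline, the write path looks roughly like this (a sketch under assumptions, not the post's exact code: it assumes the deltalake crate, which re-exports Arrow, plus tokio, and a hypothetical local table whose schema matches the two columns built here):

```rust
use std::sync::Arc;
use deltalake::arrow::array::{ArrayRef, Int32Array, StringArray};
use deltalake::arrow::datatypes::{DataType, Field, Schema};
use deltalake::arrow::record_batch::RecordBatch;
use deltalake::writer::{DeltaWriter, RecordBatchWriter};

#[tokio::main]
async fn main() -> Result<(), deltalake::DeltaTableError> {
    // Open an existing table; `./data/events` is a hypothetical path.
    let mut table = deltalake::open_table("./data/events").await?;

    // Build an Arrow RecordBatch whose schema matches the table's.
    let schema = Arc::new(Schema::new(vec![
        Field::new("id", DataType::Int32, false),
        Field::new("name", DataType::Utf8, false),
    ]));
    let batch = RecordBatch::try_new(
        schema,
        vec![
            Arc::new(Int32Array::from(vec![1, 2, 3])) as ArrayRef,
            Arc::new(StringArray::from(vec!["alpha", "beta", "gamma"])),
        ],
    )?;

    // Buffer the batch, then commit a new version to the Delta log.
    let mut writer = RecordBatchWriter::for_table(&table)?;
    writer.write(batch).await?;
    let version = writer.flush_and_commit(&mut table).await?;
    println!("committed version {version}");
    Ok(())
}
```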
A discussion of whether it is possible to have a Databricks deployment with a $0 idle cost in AWS. It is a nice idea, but not entirely achievable in practice. This post outlines the minimum possible footprint with Databricks.
An introductory post outlining what Buoyant Data can do to help organizations save on their Databricks and AWS costs, along with our preferences for the most cost-effective data platform architecture.