Databricks Pricing Model Explained: 2024 Cost Guide

Meni Shmueli

Performance Researcher & Software Architect, Intel Granulate

What Is Databricks? 

Databricks is a unified data analytics platform that allows organizations to process, analyze, and visualize large amounts of data. It was founded by the original creators of Apache Spark, a popular open-source big data processing framework. 

Databricks provides a collaborative workspace for data scientists, engineers, and business analysts to work together on data-related projects. It also offers an integrated set of tools for data processing, machine learning, and visualization, as well as a scalable cloud infrastructure for running these workloads.

How Does Databricks Pricing Work? 

Databricks charges its customers based on their usage of the platform, measured in Databricks Units (DBUs). Several pricing tiers are available, depending on the size and complexity of the workloads being run.

Free Options

Databricks offers a free trial for new users, which lasts for 14 days. During this trial period, users have access to all the features of the platform, including the ability to create clusters and run workloads. However, after the trial period, users will need to upgrade to a paid plan to continue using the platform.

For users who do not need the full features of the paid plans, Databricks also offers a Community Edition. This version of the platform is free to use and provides access to a limited set of features. It is designed for small-scale workloads and individual users who do not require the full capabilities of the paid plans.

Databricks charges its customers based on the resources they consume on the platform. The main component of this pricing model is the cost of clusters. Clusters are the compute resources that are used to run workloads on the platform. Databricks offers several tiers of clusters, ranging from a small single-node cluster to a large multi-node cluster with high-performance hardware. Pricing differs for the Standard, Premium, and Enterprise deployments. 

In addition to the cluster size, the pricing of Databricks clusters also varies depending on the compute type and the cloud service provider (CSP) being used. Databricks supports multiple CSPs, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). The pricing for clusters will vary depending on the CSP and the region in which the cluster is deployed.

Databricks also offers two types of compute: standard and high-concurrency. Standard compute is designed for traditional batch processing workloads, while high-concurrency compute is optimized for interactive workloads and real-time data processing. The pricing for these two types of compute also varies depending on the size of the cluster and the CSP being used.
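
To make the structure of the bill concrete, here is a minimal sketch of the two cost components for a single workload: the Databricks charge (DBUs consumed multiplied by the per-DBU rate for the chosen plan and compute type) and the cloud provider's charge for the underlying instances. All numbers below are hypothetical placeholders, not published prices:

```python
# Rough estimate of the two cost components for one Databricks workload.
# All rates and sizes below are hypothetical placeholders for illustration.

dbu_rate = 0.15            # $/DBU -- depends on plan (Standard/Premium/Enterprise) and compute type
dbus_per_node_hour = 0.75  # DBU consumption rate of the chosen instance type (hypothetical)
vm_hourly_rate = 0.192     # $/hour the cloud provider charges per instance (hypothetical)

nodes = 8
hours = 3.0

databricks_charge = dbus_per_node_hour * nodes * hours * dbu_rate
cloud_charge = vm_hourly_rate * nodes * hours

print(f"Databricks (DBU) charge:      ${databricks_charge:.2f}")
print(f"Cloud infrastructure charge:  ${cloud_charge:.2f}")
print(f"Estimated total:              ${databricks_charge + cloud_charge:.2f}")
```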

Learn more in our detailed guide to Databricks costs (coming soon)

Databricks Pricing Examples 

The following are pricing examples for Databricks on AWS. The products listed below include Jobs, Databricks SQL, and All-Purpose Compute for Interactive Workloads. 

| Product | Workload Type | Standard | Premium | Enterprise |
| --- | --- | --- | --- | --- |
| Jobs | Light Compute | $0.07 / DBU | $0.10 / DBU | $0.13 / DBU |
| Jobs | Compute | $0.10 / DBU | $0.15 / DBU | $0.20 / DBU |
| Jobs | Compute Photon | $0.10 / DBU | $0.15 / DBU | $0.20 / DBU |
| Databricks SQL | SQL Classic | N/A | $0.22 / DBU | $0.22 / DBU |
| Databricks SQL | SQL Pro | N/A | $0.55 / DBU | $0.55 / DBU |
| Databricks SQL | SQL Serverless | N/A | $0.70 / DBU | $0.70 / DBU |
| All-Purpose Compute | All-Purpose Compute | $0.40 / DBU | $0.55 / DBU | $0.65 / DBU |
| All-Purpose Compute | All-Purpose Compute Photon | $0.40 / DBU | $0.55 / DBU | $0.65 / DBU |

The Photon engine, the next-generation engine on the Databricks Lakehouse Platform, offers high-speed query performance at a lower total cost: because workloads complete faster, they consume fewer DBUs and instance hours overall.

Databricks SQL lets you run BI and SQL applications at scale using open APIs and formats, with your choice of tools and without lock-in. With SQL Classic and SQL Pro, users pay the underlying compute infrastructure charges separately, whereas with SQL Serverless the compute infrastructure is included in the DBU price.

All-Purpose Compute is suitable for interactive machine learning and data science workloads, as well as data engineering, BI, and data analytics. Pricing for each product varies by plan: Standard, Premium, or Enterprise.
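
As a hedged, back-of-the-envelope example of how these rates translate into a bill, consider a nightly job that consumes 200 DBUs per run and runs 30 times a month, priced at the Jobs Compute rates listed above (cloud infrastructure charges excluded). The workload size and run count are assumptions for illustration, not benchmarks:

```python
# Back-of-the-envelope DBU charge for a nightly job across plans.
# Jobs Compute rates are taken from the table above; the 200 DBU/run
# workload size and 30 runs/month are assumptions for illustration.

jobs_compute_rates = {"Standard": 0.10, "Premium": 0.15, "Enterprise": 0.20}  # $/DBU
dbus_per_run = 200
runs_per_month = 30

for plan, rate in jobs_compute_rates.items():
    monthly = dbus_per_run * runs_per_month * rate
    print(f"{plan}: ${monthly:,.2f} per month in DBU charges")
```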

Databricks Cost Optimization Best Practices 

Here are some best practices to help manage and reduce costs in Databricks.

Leverage the DBU Calculator

The Databricks Unit (DBU) calculator can be used to estimate the cost of running workloads on the Databricks platform. By estimating the cost of different configurations and workloads, users can optimize their usage of the platform to minimize costs. This allows users to adjust their cluster sizes, compute types, and other settings to ensure they are using the most cost-effective configurations for their workloads.
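
The sketch below mirrors what the DBU calculator does: estimate the DBUs a configuration will consume, then multiply by the plan's per-DBU rate. The instance DBU rates, node counts, and runtimes are hypothetical examples, not measured figures:

```python
# Compare candidate cluster configurations the way the DBU calculator does:
# estimate DBUs consumed, then multiply by the plan's per-DBU rate.
# Instance DBU rates, node counts, and runtimes below are hypothetical.

def estimate_dbu_cost(nodes, dbus_per_node_hour, hours, dbu_rate):
    """Return the estimated Databricks (DBU-only) charge for one run."""
    return nodes * dbus_per_node_hour * hours * dbu_rate

premium_rate = 0.15  # Jobs Compute, Premium plan ($/DBU)

small_cluster = estimate_dbu_cost(nodes=4, dbus_per_node_hour=0.75, hours=6.0, dbu_rate=premium_rate)
large_cluster = estimate_dbu_cost(nodes=16, dbus_per_node_hour=0.75, hours=1.5, dbu_rate=premium_rate)

print(f"4 nodes x 6h:    ${small_cluster:.2f}")
print(f"16 nodes x 1.5h: ${large_cluster:.2f}")
```

Under perfect linear scaling the DBU charge comes out the same for both configurations, so the decision often hinges on runtime requirements and the cloud provider's per-instance pricing rather than the DBU rate alone.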

Select the Appropriate Instance Type

Selecting the right instances is crucial to ensure optimal performance and cost-efficiency on the cloud platform. Different instance families are optimized for different workloads and use cases, so it’s important to choose the right instance type that best matches the workload requirements.

For example, M5 instances are designed for general-purpose workloads, while C5 instances are optimized for compute-intensive workloads. R5 instances are best suited to memory-intensive workloads, and X1 instances target large-scale, in-memory applications that need very large amounts of RAM.

By selecting the right instance type, users can achieve better performance, reduce costs, and optimize the utilization of their resources on the cloud platform.
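
As a sketch, the instance family is chosen through the node_type_id field of a cluster definition (for example, via the Databricks Clusters API on AWS). The instance types and runtime version below are illustrative assumptions; check what is available in your workspace and region:

```python
# Minimal cluster definitions differing only in instance family.
# node_type_id values are illustrative AWS instance types; the runtime
# version is an example -- use one listed in your own workspace.

general_purpose_cluster = {
    "cluster_name": "etl-general",
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "m5.xlarge",    # M5: balanced CPU/memory for general-purpose jobs
    "num_workers": 4,
}

memory_optimized_cluster = {
    "cluster_name": "joins-and-caching",
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "r5.2xlarge",   # R5: memory-optimized for cache- and shuffle-heavy workloads
    "num_workers": 4,
}
```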

Use Autoscaling

Autoscaling is a powerful feature that allows users to dynamically adjust the size of their Databricks clusters based on workload demands. It helps optimize cluster utilization and reduce costs by automatically adding or removing nodes as needed. This ensures that users have the resources they need to handle their workloads without wasting resources or overspending.

To enable autoscaling in Databricks, users can configure the cluster with either the standard or enhanced autoscaler. The standard autoscaler is a basic implementation that allows users to set minimum and maximum thresholds for the number of worker nodes in a cluster. When the workload exceeds the configured thresholds, the autoscaler will add nodes to the cluster, and when the workload decreases, it will remove nodes.

The enhanced autoscaler, on the other hand, provides more advanced features such as dynamic scaling policies that can be customized based on the specific workload requirements. Enhanced autoscaling allows users to create custom policies that are triggered by various metrics such as CPU usage, memory usage, and I/O operations. This provides more granular control over cluster scaling and ensures that resources are allocated efficiently.
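
As a minimal sketch, standard autoscaling is enabled by giving the cluster an autoscale range instead of a fixed worker count. The workspace URL, token, instance type, and worker bounds below are placeholders, and the request is only meant to show the shape of the configuration:

```python
# Sketch: create a cluster with standard autoscaling via the Databricks REST API.
# Workspace URL, token, instance type, and worker bounds are placeholders.
import requests

workspace_url = "https://<your-workspace>.cloud.databricks.com"   # placeholder
token = "<personal-access-token>"                                  # placeholder

cluster_spec = {
    "cluster_name": "autoscaling-etl",
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "m5.xlarge",
    "autoscale": {                  # used instead of a fixed "num_workers"
        "min_workers": 2,           # floor kept when the workload is light
        "max_workers": 8,           # ceiling when the workload peaks
    },
    "autotermination_minutes": 30,  # also shuts the cluster down when idle
}

resp = requests.post(
    f"{workspace_url}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {token}"},
    json=cluster_spec,
)
print(resp.json())
```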

Tag the Clusters

Cluster tagging is a feature in Databricks that allows users to apply labels to clusters, which can help optimize resource usage. Tags can be used to track usage by department, team, project, or any other criteria. By tagging clusters, users can gain greater visibility into resource usage and identify areas where optimization is possible. This can help ensure that resources are being allocated efficiently, reduce unnecessary spending, and improve overall cost management on the platform.
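
In a cluster definition, tags are attached through the custom_tags field and then flow through to the underlying cloud resources and usage reports. The tag keys and values below are examples, not a required convention:

```python
# Sketch: cluster definition with cost-attribution tags.
# Tag keys/values are examples; agree on a convention with your finance team.
tagged_cluster = {
    "cluster_name": "marketing-attribution-etl",
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "m5.xlarge",
    "num_workers": 4,
    "custom_tags": {
        "team": "data-engineering",
        "project": "attribution",
        "cost_center": "mkt-1234",
    },
}
```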

Take Advantage of Spot Instances

Spot instances are spare compute capacity in the cloud that can be purchased at a significantly discounted price. These instances are available on a first-come, first-served basis, and their pricing can vary based on supply and demand.

In Databricks, spot instances can be used to reduce costs by providing a cost-effective alternative to on-demand instances. By leveraging spot instances, users can significantly reduce their compute costs and optimize their resource utilization.

However, it’s important to balance the cost efficiency with the risk of interruptions. Since spot instances are spare capacity, they can be reclaimed by the cloud provider at any time if the demand for capacity increases. This can cause interruptions in workloads running on spot instances, potentially leading to delays or failed jobs. To mitigate this risk, users can design their workloads to be fault-tolerant, use checkpointing and replication, and monitor the instances for potential interruptions.
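
On AWS, for example, the spot/on-demand mix is controlled through the cluster's aws_attributes block; keeping the first node (the driver) on demand is a common way to limit the impact of reclaimed instances. The values below are illustrative, not a recommended setting for every workload:

```python
# Sketch: AWS cluster that runs workers on spot instances with on-demand fallback.
# Values are illustrative; review the aws_attributes options for your workspace.
spot_cluster = {
    "cluster_name": "spot-batch-etl",
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "m5.xlarge",
    "autoscale": {"min_workers": 2, "max_workers": 10},
    "aws_attributes": {
        "availability": "SPOT_WITH_FALLBACK",   # fall back to on-demand if spot capacity runs out
        "first_on_demand": 1,                    # keep the driver on an on-demand instance
        "spot_bid_price_percent": 100,           # bid up to 100% of the on-demand price
    },
}
```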
