Databricks Pricing Model Explained: 2024 Guide

Alon Berger

Product Marketing Manager, Intel Granulate

How Does Databricks Pricing Work? 

Databricks charges customers based on how much of its data analytics platform they use and on the cloud provider the platform runs on. Several pricing tiers are available, depending on the size and complexity of the workloads being run.

Free Options

Databricks offers a free trial for new users, which lasts for 14 days. During this trial period, users have access to all the features of the platform, including the ability to create clusters and run workloads. However, after the trial period, users will need to upgrade to a paid plan to continue using the platform.

For users who do not need the full features of the paid plans, Databricks also offers a Community Edition. This version of the platform is free to use and provides access to a limited set of features. It is designed for small-scale workloads and individual users who do not require the full capabilities of the paid plans.

Databricks charges its customers based on the resources they consume on the platform. The main component of this pricing model is the cost of clusters, the compute resources used to run workloads on the platform. Databricks offers several tiers of clusters, ranging from a small single-node cluster to a large multi-node cluster with high-performance hardware. Pricing also differs across the Standard, Premium, and Enterprise plans.

In addition to the cluster size, the pricing of Databricks clusters also varies depending on the compute type and the cloud service provider (CSP) being used. Databricks supports multiple CSPs, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). The pricing for clusters will vary depending on the CSP and the region in which the cluster is deployed.

Databricks also offers two types of compute: standard and high-concurrency. Standard compute is designed for traditional batch processing workloads, while high-concurrency compute is optimized for interactive workloads and real-time data processing. The pricing for these two types of compute also varies depending on the size of the cluster and the CSP being used.

What is a DBU in Databricks?

A Databricks Unit (DBU) is a unit Databricks uses to quantify processing power and time used by jobs. One DBU equates to one hour of processing time on a predefined compute resource. The cost of DBUs varies based on the type of instance and workload, with different rates for data engineering tasks, interactive analysis, and machine learning workloads.

Whether running large-scale data processing jobs or performing complex analytics, the DBU model ensures that users only pay for the compute power they utilize, making it an efficient way to manage big data projects.
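
As a rough illustration, the Databricks portion of a bill is the number of DBUs a workload consumes multiplied by the DBU rate for that workload type and plan; for classic compute, the underlying cloud VMs are billed separately by the provider. The figures in the sketch below are assumptions chosen only to show the arithmetic, not quoted prices.

```python
# Rough DBU cost illustration -- all figures are assumptions, not quoted prices.
dbu_rate = 0.15           # $/DBU for the chosen workload type and plan (assumed)
dbus_per_node_hour = 2.0  # DBUs one node of the chosen instance type emits per hour (assumed)
nodes = 8
hours = 3

dbus_consumed = dbus_per_node_hour * nodes * hours
databricks_charge = dbus_consumed * dbu_rate
print(f"{dbus_consumed:.0f} DBUs -> ${databricks_charge:.2f} Databricks charge "
      "(cloud VM costs are billed separately by the provider)")
```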

Databricks Pricing in the Cloud

Databricks Pricing on AWS

Pricing for Databricks on AWS is based on the number of DBUs consumed, with specific rates depending on the type of compute instances used. Users can choose from various instance types, each with its own pricing, to match their computational needs and budget. 

Databricks on AWS also supports features like auto-scaling, which automatically adjusts the number of instances based on workload, helping to optimize costs. Additionally, AWS services such as Amazon S3 for storage, AWS Glue for data cataloging, and Amazon Redshift for data warehousing can be integrated with Databricks.

Databricks Pricing on Azure

Databricks is available from the Azure marketplace. Pricing on Azure follows the same DBU-based model, with costs varying by the type and size of the compute resources used. Azure Databricks integrates with Azure services such as Azure Blob Storage, Azure Data Lake Storage, and Azure Synapse Analytics, enabling a seamless data analytics workflow.

The integration between Databricks and Azure offers unique features like native integration with Azure Active Directory for authentication and security, as well as optimized connectors to Azure services for efficient data processing. Pricing options include pay-as-you-go and reserved capacity.

Databricks Pricing on Google Cloud

Pricing for Databricks on Google Cloud is based on the consumption of DBUs, with the cost depending on the chosen compute resources. This integration allows customers to leverage Google Cloud’s data services such as BigQuery, Google Cloud Storage, and Google Kubernetes Engine, alongside Databricks for a unified data analytics solution.

Google Cloud’s strengths in data analytics, machine learning, and artificial intelligence complement Databricks’ capabilities, offering users a robust platform for data processing and analysis. As with the other cloud providers, Google Cloud offers a pay-as-you-go pricing model, an option for committed use discounts, and spot VMs.

Databricks Pricing Examples 

The following are pricing examples for Databricks on AWS. The products listed below include Jobs, Databricks SQL, and All-Purpose Compute for Interactive Workloads. 

| Product | Workload Type | Standard | Premium | Enterprise |
|---|---|---|---|---|
| Jobs | Light Compute | $0.07 / DBU | $0.10 / DBU | $0.13 / DBU |
| Jobs | Compute | $0.10 / DBU | $0.15 / DBU | $0.20 / DBU |
| Jobs | Compute Photon | $0.10 / DBU | $0.15 / DBU | $0.20 / DBU |
| Databricks SQL | SQL Classic | N/A | $0.22 / DBU | $0.22 / DBU |
| Databricks SQL | SQL Pro | N/A | $0.55 / DBU | $0.55 / DBU |
| Databricks SQL | SQL Serverless | N/A | $0.70 / DBU | $0.70 / DBU |
| All-Purpose Compute | All-Purpose Compute | $0.40 / DBU | $0.55 / DBU | $0.65 / DBU |
| All-Purpose Compute | All-Purpose Compute Photon | $0.40 / DBU | $0.55 / DBU | $0.65 / DBU |

The Photon engine, which is the next-generation engine on the Databricks Lakehouse Platform, offers high-speed query performance at a lower total cost. 

Databricks SQL allows you to run BI and SQL applications at scale using open APIs and formats, with your choice of tools and without lock-in. With SQL Classic and SQL Pro, users pay separately for the underlying compute infrastructure, whereas SQL Serverless includes the compute, so there are no separate infrastructure charges.

All-Purpose Compute is suitable for interactive machine learning and data science workloads, as well as data engineering, BI, and data analytics. The pricing for each product varies depending on the plan, including Standard, Premium, and Enterprise.
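
To see how the "separate infrastructure charges" distinction plays out, the sketch below compares a hypothetical SQL Pro warehouse (DBU charge plus EC2 instances billed by AWS) with SQL Serverless (a single, higher DBU rate and no separate compute bill). The DBU rates come from the table above; the per-hour DBU emission and the EC2 cost are illustrative assumptions, not published figures.

```python
# Hypothetical comparison of SQL Pro vs. SQL Serverless on AWS (Premium plan).
hours = 10
dbus_per_hour = 12          # assumed DBU emission of the chosen warehouse size
sql_pro_rate = 0.55         # $/DBU (Premium, from the table above)
sql_serverless_rate = 0.70  # $/DBU (Premium, from the table above)
ec2_cost_per_hour = 5.00    # assumed cost of the EC2 fleet backing SQL Pro

sql_pro_total = hours * (dbus_per_hour * sql_pro_rate + ec2_cost_per_hour)
serverless_total = hours * dbus_per_hour * sql_serverless_rate

print(f"SQL Pro:        ${sql_pro_total:.2f} (DBUs + separate EC2 charges)")
print(f"SQL Serverless: ${serverless_total:.2f} (DBUs only)")
```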

Databricks Cost Optimization Best Practices 

Here are some best practices to help manage and reduce costs in Databricks.

1. Leverage the DBU Calculator

The Databricks Unit (DBU) calculator can be used to estimate the cost of running workloads on the Databricks platform. By estimating the cost of different configurations and workloads, users can optimize their usage of the platform to minimize costs. This allows users to adjust their cluster sizes, compute types, and other settings to ensure they are using the most cost-effective configurations for their workloads.
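
The official calculator is the authoritative source, but the idea behind it can be sketched in a few lines: multiply the DBUs each candidate configuration emits per hour by its expected runtime and the applicable rate, then compare. The node counts, per-node DBU figures, and rate below are placeholders for illustration only.

```python
# Minimal sketch of what a DBU cost estimate does -- placeholder figures only.
def estimate_cost(nodes, dbus_per_node_hour, hours, dbu_rate):
    """Return the estimated Databricks charge for one cluster configuration."""
    return nodes * dbus_per_node_hour * hours * dbu_rate

candidates = {
    "8 small nodes": estimate_cost(nodes=8, dbus_per_node_hour=0.75, hours=4, dbu_rate=0.15),
    "4 large nodes": estimate_cost(nodes=4, dbus_per_node_hour=1.50, hours=3, dbu_rate=0.15),
}
for name, cost in sorted(candidates.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:.2f}")
```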

2. Select the Appropriate Instance Type

Selecting the right instances is crucial to ensure optimal performance and cost-efficiency on the cloud platform. Different instance families are optimized for different workloads and use cases, so it’s important to choose the right instance type that best matches the workload requirements.

For example, the M5 family instances are designed for general-purpose workloads, while the C5 instances are optimized for compute-intensive workloads. The R5 instances are best suited to workloads that are memory-intensive, and the X1 instances are best suited to memory-intensive, high-performance computing workloads.

By selecting the right instance type, users can achieve better performance, reduce costs, and optimize the utilization of their resources on the cloud platform.
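
When weighing instance types, it can help to list the node types a workspace actually offers, along with their memory and core counts. A minimal sketch using the Databricks clusters REST API is below; the workspace URL and token are placeholders for your own environment.

```python
# List the node types available in a workspace to inform instance selection.
# DATABRICKS_HOST and DATABRICKS_TOKEN are placeholders for your workspace.
import os
import requests

host = os.environ["DATABRICKS_HOST"]   # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"{host}/api/2.0/clusters/list-node-types",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

for nt in resp.json().get("node_types", []):
    print(f"{nt['node_type_id']}: {nt['memory_mb'] // 1024} GB RAM, {nt['num_cores']} cores")
```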

3. Use Autoscaling

Autoscaling is a powerful feature that allows users to dynamically adjust the size of their Databricks clusters based on workload demands. It helps optimize cluster utilization and reduce costs by automatically adding or removing nodes as needed. This ensures that users have the resources they need to handle their workloads without wasting resources or overspending.

To enable autoscaling in Databricks, users can configure the cluster with either the standard or enhanced autoscaler. The standard autoscaler is a basic implementation that allows users to set minimum and maximum thresholds for the number of worker nodes in a cluster. When the workload exceeds the configured thresholds, the autoscaler will add nodes to the cluster, and when the workload decreases, it will remove nodes.

The enhanced autoscaler, on the other hand, is more responsive to the actual workload: it scales based on signals such as task backlog and resource utilization, adding nodes quickly when work queues up and removing under-utilized nodes more aggressively when load drops. This provides more granular, workload-aware scaling and ensures that resources are allocated efficiently.
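
As a concrete illustration, autoscaling is enabled by giving the cluster an autoscale block with minimum and maximum worker counts instead of a fixed worker count. The sketch below uses the Databricks clusters REST API; the host, token, Spark version, node type, and worker bounds are example values chosen for the sketch, not recommendations.

```python
# Create an autoscaling cluster via the Databricks clusters REST API.
# Host/token are placeholders; spark_version, node_type_id, and the worker
# bounds are example values -- adjust them for your workspace and workload.
import os
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

cluster_spec = {
    "cluster_name": "nightly-etl",
    "spark_version": "14.3.x-scala2.12",                 # example runtime version
    "node_type_id": "m5.xlarge",                         # general-purpose instances
    "autoscale": {"min_workers": 2, "max_workers": 8},   # scale between 2 and 8 workers
    "autotermination_minutes": 30,                       # also avoids paying for idle clusters
}

resp = requests.post(
    f"{host}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {token}"},
    json=cluster_spec,
)
resp.raise_for_status()
print("Created cluster:", resp.json()["cluster_id"])
```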

4. Tag the Clusters

Cluster tagging is a feature in Databricks that allows users to apply labels to clusters, which can help optimize resource usage. Tags can be used to track usage by department, team, project, or any other criteria. By tagging clusters, users can gain greater visibility into resource usage and identify areas where optimization is possible. This can help ensure that resources are being allocated efficiently, reduce unnecessary spending, and improve overall cost management on the platform.
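
In practice, tags are attached through the custom_tags field of the cluster specification; on the cloud side they propagate to the underlying resources and surface in billing and usage reports, which is what makes per-team or per-project cost attribution possible. The key names and values below are arbitrary examples.

```python
# Fragment of a cluster specification with cost-attribution tags.
# Key names and values are arbitrary examples -- use whatever taxonomy
# (team, project, environment, cost center) your organization tracks.
cluster_spec = {
    "cluster_name": "marketing-analytics",
    "spark_version": "14.3.x-scala2.12",
    "node_type_id": "m5.xlarge",
    "num_workers": 4,
    "custom_tags": {
        "team": "marketing",
        "project": "attribution-model",
        "environment": "production",
    },
}
```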

5. Take Advantage of Spot Instances

Spot instances are spare compute capacity in the cloud that can be purchased at a significantly discounted price. These instances are available on a first-come, first-served basis, and their pricing can vary based on supply and demand.

In Databricks, spot instances can be used to reduce costs by providing a cost-effective alternative to on-demand instances. By leveraging spot instances, users can significantly reduce their compute costs and optimize their resource utilization.

However, it’s important to balance the cost efficiency with the risk of interruptions. Since spot instances are spare capacity, they can be reclaimed by the cloud provider at any time if the demand for capacity increases. This can cause interruptions in workloads running on spot instances, potentially leading to delays or failed jobs. To mitigate this risk, users can design their workloads to be fault-tolerant, use checkpointing and replication, and monitor the instances for potential interruptions.
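
On AWS, for example, this balance can be expressed directly in the cluster specification: the aws_attributes block below keeps the driver on on-demand capacity and falls back to on-demand instances if spot capacity is reclaimed. The field names follow the Databricks clusters API for AWS; the specific values are illustrative assumptions.

```python
# Example cluster fragment using spot instances with on-demand fallback (AWS).
# first_on_demand keeps the driver (node 0) on on-demand capacity so a spot
# reclamation cannot take out the whole cluster; the numbers are illustrative.
cluster_spec = {
    "cluster_name": "spot-etl",
    "spark_version": "14.3.x-scala2.12",
    "node_type_id": "m5.xlarge",
    "autoscale": {"min_workers": 2, "max_workers": 10},
    "aws_attributes": {
        "first_on_demand": 1,                  # driver stays on on-demand capacity
        "availability": "SPOT_WITH_FALLBACK",  # use spot, fall back to on-demand
        "spot_bid_price_percent": 100,         # bid up to 100% of the on-demand price
    },
}
```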
