
Apache Spark Architecture Simplified

Alon Berger

Product Marketing Manager, Intel Granulate

Apache Spark’s architecture is designed to handle large-scale data processing. It uses a central coordinator known as the Spark Driver and multiple distributed workers called Spark Executors. This architecture supports various cluster managers, enabling it to run in diverse environments like Hadoop YARN, Apache Mesos, Kubernetes, or standalone clusters.

The Spark Driver is responsible for maintaining the application’s context, managing distributed data, and coordinating the execution of tasks across the Spark Executors. The Spark Executors are distributed across the cluster, executing the tasks assigned by the Driver and storing the data required for processing. This architecture ensures scalability and fault tolerance, making Apache Spark suitable for big data analytics.

Key Features of Apache Spark

Here are some of the key features that contribute to Spark’s performance, ease of use, and integration capabilities (a brief code example follows the list):

  • In-memory computing: Spark’s in-memory computation capabilities significantly speed up data processing tasks by reducing the need to read and write data from disk repeatedly.
  • Fault tolerance: Spark uses Resilient Distributed Datasets (RDDs) to ensure fault tolerance, allowing it to recover data across the cluster in case of failure.
  • Scalability: It can handle large-scale data processing tasks across numerous nodes in a cluster, scaling up to petabytes of data.
  • Ease of use: It provides high-level APIs in Java, Scala, Python, and R, making it accessible to developers with different programming backgrounds.
  • Advanced analytics: Spark supports complex analytics, including machine learning, graph processing, and real-time stream processing through libraries like MLlib, GraphX, and Spark Streaming.
  • Integration: It integrates with Hadoop and other big data tools, allowing it to leverage existing data infrastructures.
  • Interactive shells: It offers interactive shells for Scala and Python, enabling developers to write and test code snippets on-the-fly.
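
To make a few of these features concrete, here is a minimal PySpark sketch (assuming a local Spark installation with the pyspark package available); it uses the high-level DataFrame API, and the filter and projection are lazy transformations that only execute when the show() action is called.

```python
from pyspark.sql import SparkSession, functions as F

# The SparkSession is the high-level entry point to Spark's APIs.
spark = SparkSession.builder.appName("feature-demo").getOrCreate()

# A small in-memory DataFrame; in practice this would be read from HDFS, S3, etc.
df = spark.createDataFrame(
    [("alice", 34), ("bob", 45), ("carol", 29)],
    ["name", "age"],
)

# Transformations are lazy: nothing runs until an action is called.
adults = df.filter(F.col("age") >= 30).select("name")

# show() is an action; it triggers execution on the executors.
adults.show()

spark.stop()
```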

Apache Spark Architecture Components

Here’s a closer look at Spark’s components.

The Spark Driver

The Spark Driver is the central coordinator that manages the Spark application. It maintains the Spark context, which is the entry point for interacting with Spark’s capabilities. The Driver is responsible for:

  • Job scheduling: Dividing the application into smaller tasks and scheduling them for execution on the Executors.
  • Task distribution: Distributing tasks to different Executors based on data locality and available resources.
  • Result aggregation: Collecting and processing the results returned by the Executors.
  • Monitoring: Tracking the progress of tasks, handling failures, and ensuring that the application completes successfully.
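
As a rough sketch of where the Driver fits in, the script below is itself the driver program: it creates the SparkSession, defines the work, and receives the aggregated result back from the Executors. The job itself is only illustrative.

```python
from pyspark.sql import SparkSession

# This script is the driver program: it runs in the Driver process.
spark = SparkSession.builder.appName("driver-demo").getOrCreate()

# The Driver builds the execution plan; the actual work is split into tasks
# and scheduled onto the Executors.
numbers = spark.sparkContext.parallelize(range(1_000_000), 8)

# sum() is an action: tasks run on the Executors and the result is
# aggregated back in the Driver.
total = numbers.sum()
print(f"Total computed by the executors: {total}")

spark.stop()
```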

The Spark Executors

Spark Executors are worker processes that run on the cluster’s nodes. Each Executor executes the tasks assigned by the Driver and reports the status of task execution back to it. Executors enable parallel processing and efficient resource utilization. Key responsibilities of Executors include (see the configuration sketch after this list):

  • Task execution: Performing the actual data processing tasks as per the instructions from the Driver.
  • Data storage: Storing data in memory or on disk, as required, to facilitate efficient processing.
  • Communication: Communicating with the Driver to provide status updates and receive new tasks. 
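
Executor resources are usually declared when the application is configured or submitted. The sketch below is illustrative only; the property values are placeholder assumptions to adapt to your cluster, and spark.executor.instances applies under managers such as YARN and Kubernetes.

```python
from pyspark.sql import SparkSession

# Illustrative executor sizing only -- the right numbers depend on your cluster.
spark = (
    SparkSession.builder
    .appName("executor-config-demo")
    .config("spark.executor.instances", "4")  # executors to request (YARN/Kubernetes)
    .config("spark.executor.cores", "2")      # CPU cores per executor
    .config("spark.executor.memory", "4g")    # heap memory per executor
    .getOrCreate()
)

# Each task of this job runs inside one of the executors configured above.
print(spark.range(1_000_000).selectExpr("sum(id)").collect())
spark.stop()
```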

The Cluster Manager

The Cluster Manager manages resources and coordinates the distributed environment in which Spark runs. It allocates resources to the Driver and Executors based on the requirements of the Spark application. The Cluster Manager ensures efficient resource utilization, load balancing, and high availability, which are critical for large-scale data processing with Apache Spark.

There are several types of cluster managers: Standalone, Hadoop YARN, Apache Mesos, and Kubernetes. They are described in more detail below.
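
In practice, the cluster manager is chosen through the application’s master URL (or the --master option of spark-submit). The sketch below shows the URL forms; the host names and ports are placeholders.

```python
from pyspark.sql import SparkSession

builder = SparkSession.builder.appName("cluster-manager-demo")

# Choose one master URL for the cluster manager in use
# (host names and ports below are placeholders):
# builder = builder.master("spark://master-host:7077")          # Standalone
# builder = builder.master("yarn")                               # Hadoop YARN
# builder = builder.master("mesos://mesos-master:5050")          # Apache Mesos
# builder = builder.master("k8s://https://k8s-apiserver:6443")   # Kubernetes
builder = builder.master("local[*]")  # local mode, handy for quick tests

spark = builder.getOrCreate()
```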

SparkContext

SparkContext is the primary entry point for a Spark application. It represents the connection to a Spark cluster and coordinates the execution of tasks on the cluster. Here are its key roles:

  • Initialization: Sets up internal services and establishes a connection to the cluster manager, acquiring resources for running the application.
  • Resource allocation: Allocates executors on worker nodes to perform parallel computations and manage their lifecycles.
  • Configuration: Allows users to configure application settings, such as the number of cores, memory per executor, and other runtime parameters.
  • Job submission: Handles the submission of Spark jobs by creating RDDs and DAGs, then distributing tasks across the cluster.
  • Context management: Maintains metadata and lineage information of RDDs to enable fault tolerance and efficient execution.
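
For code that works with the RDD API directly, the SparkContext can be created explicitly from a SparkConf; in DataFrame-based applications it is normally reached via SparkSession.sparkContext. A minimal sketch, with placeholder settings:

```python
from pyspark import SparkConf, SparkContext

# Configure the application before the context is created.
conf = (
    SparkConf()
    .setAppName("sparkcontext-demo")
    .setMaster("local[*]")               # placeholder; a real cluster URL in production
    .set("spark.executor.memory", "2g")  # example runtime parameter
)

sc = SparkContext(conf=conf)

# The context is the handle used to create RDDs and submit jobs.
rdd = sc.parallelize([1, 2, 3, 4, 5])
print(rdd.map(lambda x: x * x).collect())

sc.stop()
```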

Tips from the expert:

In my experience, here are a few ways you can get more out of your Apache Spark architecture (a configuration sketch follows the list):

  • Optimize resource allocation using dynamic scaling: Use Spark’s dynamic resource allocation feature to adjust the number of executors based on workload demands. This helps use resources efficiently and reduces costs, especially in cloud environments.
  • Optimize shuffle operations: Tuning shuffle-related configurations, such as spark.sql.shuffle.partitions, spark.shuffle.memoryFraction, and spark.reducer.maxSizeInFlight, can significantly enhance performance. Experiment with different settings based on your workload.
  • Use data caching effectively: Cache and persist intermediate datasets that are reused multiple times within a job to avoid recomputation. Use the appropriate storage levels (e.g., MEMORY_ONLY, MEMORY_AND_DISK) based on available resources and the size of the data.
  • Tune garbage collection settings: Fine-tune the JVM garbage collection settings to improve the performance of Spark applications, especially for large-scale jobs. Adjust parameters like spark.executor.memory, spark.driver.memory, and spark.memory.fraction for optimal memory usage.
  • Consider using DataFrames and Datasets over RDDs: Where possible, prefer DataFrames and Datasets instead of RDDs. They provide higher-level abstractions, optimize execution plans using the Catalyst optimizer, and are generally more performant for most data processing tasks.
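
As a rough illustration of how some of these tips map to code, the sketch below enables dynamic allocation, sets a placeholder shuffle-partition count, and persists a reused DataFrame; all values are assumptions to tune for your own workload.

```python
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuning-demo")
    # Dynamic allocation lets Spark grow and shrink the executor pool with the
    # workload (it also needs shuffle tracking or an external shuffle service).
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    # Placeholder shuffle-partition count; tune per workload.
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

# Persist an intermediate DataFrame that is reused, picking a storage level
# that matches the data size and available memory.
df = spark.range(1_000_000)
doubled = df.selectExpr("id", "id * 2 AS doubled")
doubled.persist(StorageLevel.MEMORY_AND_DISK)

print(doubled.count())  # first action computes and caches the result
print(doubled.count())  # later actions reuse the cached data
```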

Understanding Apache Spark Cluster Manager Types 

Here’s a look at the different cluster managers that can be used in Spark.

Standalone

The standalone cluster manager is Spark’s built-in cluster management system. In a standalone cluster, a master node manages the cluster’s resources and schedules tasks, while worker nodes execute the tasks.

Key features of a standalone cluster manager include:

  • Simplicity: The manager is easy to set up and use, requiring minimal configuration compared to other resource managers.
  • Flexibility: It supports dynamic allocation of resources, allowing Spark to request more executors during runtime based on the workload.
  • Fault tolerance: The master node can be configured to run in a high-availability mode with standby masters, ensuring continued operation even if the primary master fails.

This type of cluster manager is suitable for small to medium-sized clusters where simplicity is the priority. It’s useful for development and testing environments, as well as production environments with moderate resource management needs.

Hadoop YARN

Hadoop YARN (Yet Another Resource Negotiator) is a cluster manager that allows Spark to share resources with other big data applications. It is part of the Hadoop ecosystem and provides extensive resource management and job scheduling capabilities.

Key features of Hadoop YARN include:

  • Resource sharing: Enables Spark to coexist with other Hadoop applications, leveraging the same cluster resources.
  • Scalability: Can manage large-scale clusters, making it suitable for extensive production environments.
  • Isolation and security: Provides resource isolation and security features, ensuring that applications run securely and efficiently.

Apache Mesos

Apache Mesos is a flexible and scalable cluster manager that supports running multiple distributed systems on a shared pool of resources. Mesos abstracts the underlying resources, providing efficient resource isolation and sharing across different frameworks, including Spark.

Key features of Apache Mesos include:

  • Resource efficiency: Allows fine-grained resource allocation, enabling better utilization of cluster resources.
  • Multi-tenancy: Supports running various types of applications on the same cluster, promoting resource sharing and reducing infrastructure costs.
  • Scalability: Can scale to thousands of nodes, making it suitable for large, multi-tenant environments.

Kubernetes

Kubernetes is a container orchestration platform that can also serve as a cluster manager for Spark. It automates the deployment, scaling, and management of containerized applications, providing support for running Spark in a cloud-native environment. Running Spark on Kubernetes is particularly useful for adopting a microservices architecture.

Key features of Kubernetes include:

  • Containerization: Runs Spark in containers (typically built from Docker images), ensuring consistent environments and simplifying dependency management.
  • Scalability and Flexibility: Can dynamically scale applications based on demand, providing efficient resource utilization.
  • Extensibility: Offers a rich ecosystem of plugins and integrations, enhancing Spark’s capabilities in a cloud-native setup.

Related content: Read our guide to Spark security

Apache Spark Modes of Execution

Spark applications can run in cluster mode, client mode, or local mode.

Cluster Mode

In Cluster Mode, the Spark Driver runs on a node inside the cluster, launched by a resource manager such as YARN, Mesos, or Kubernetes. This mode is suitable for production environments due to its scalability, fault tolerance, and efficient resource management. It allows Spark applications to leverage the full computational power of the cluster, distributing workloads and balancing resource usage to handle large-scale data processing.

Cluster Mode ensures that if the Driver node fails, a new Driver can be launched on another node, maintaining high availability. Additionally, it supports dynamic resource allocation, which means it can request additional resources on demand, making it highly adaptable to varying workloads. This mode is optimal for long-running applications and batch processing tasks.

Client Mode

Client Mode is where the Spark Driver runs on the client machine that submits the application, while the Executors run on the worker nodes within the cluster. This configuration is particularly useful for interactive applications and development, as it allows for immediate feedback and debugging. Since the Driver is located on the client machine, developers can easily monitor application progress, log messages, and troubleshoot issues in real time.

However, Client Mode may not be as suitable for production deployments as Cluster Mode, especially for large-scale applications. The reliance on the client machine for the Driver introduces potential bottlenecks and a single point of failure. The client machine’s resources can become a limiting factor for application performance. This mode is suitable for prototyping, testing, and scenarios requiring direct interaction with the Driver.

Local Mode

In Local Mode, Spark runs in a single JVM on a single machine, simulating a complete cluster environment. Both the Driver and Executors run as threads within this JVM, allowing developers to execute Spark applications without the need for a cluster. This mode is suitable for development, testing, and debugging small-scale applications or portions of larger applications.

Local Mode simplifies the development process by eliminating the overhead associated with cluster setup and resource management. It provides an environment for experimenting with Spark’s APIs and testing individual components before deploying them to a production cluster. 
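
As a small illustration, Local Mode only requires a local[...] master URL in code; Cluster and Client Mode are normally selected when the application is submitted (for example via spark-submit’s --deploy-mode option) rather than inside the program.

```python
from pyspark.sql import SparkSession

# local[*] runs the Driver and Executors as threads in a single JVM on this
# machine, using as many worker threads as there are local CPU cores.
spark = (
    SparkSession.builder
    .appName("local-mode-demo")
    .master("local[*]")
    .getOrCreate()
)

print(spark.range(10).count())  # executes entirely inside the local JVM
spark.stop()
```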

Main Abstractions of Apache Spark 

Spark represents data and computation through two core abstractions: the Resilient Distributed Dataset (RDD) and the Directed Acyclic Graph (DAG).

Resilient Distributed Dataset (RDD)

A Resilient Distributed Dataset (RDD) is an immutable, distributed collection of objects. RDDs support parallel processing across a cluster, providing a fault-tolerant way to handle large-scale data. RDDs can be created from Hadoop InputFormats, existing data in storage systems like HDFS or Amazon S3, or by transforming other RDDs.

RDDs support two main types of operations: transformations and actions. Transformations, such as map, filter, and reduceByKey, create a new RDD from an existing one and are lazily evaluated, meaning they only compute results when an action is called. Actions, such as count, collect, and saveAsTextFile, trigger the execution of transformations and return results to the Driver or save data to an external storage system.

RDDs track the lineage of transformations that created them, which allows Spark to recompute lost data in case of node failures. They can be persisted in memory across nodes, enabling fast access to frequently used data and reducing the need for redundant computations. 
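
A minimal sketch of the transformation/action distinction, assuming an existing SparkSession named spark:

```python
# Assumes an existing SparkSession named `spark`.
sc = spark.sparkContext

lines = sc.parallelize(["spark makes big data simple", "spark is fast"])

# Transformations lazily describe a new RDD derived from an existing one.
words = lines.flatMap(lambda line: line.split())
pairs = words.map(lambda word: (word, 1))
counts = pairs.reduceByKey(lambda a, b: a + b)

# Nothing has executed yet; actions trigger the computation.
print(counts.collect())  # e.g. [('spark', 2), ('makes', 1), ...]
print(counts.count())
```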

Directed Acyclic Graph (DAG)

A Directed Acyclic Graph (DAG) represents the sequence of computations required to perform a task. When a Spark application executes, the DAG scheduler constructs a graph of stages based on the RDD transformations and actions defined in the code. Each stage comprises a set of tasks that can be executed in parallel, optimizing the overall execution plan.

The DAG scheduler aids in optimizing task execution by minimizing data shuffling and network communication. By analyzing the dependencies between RDDs, the scheduler determines the most efficient order of operations, reducing the time and resources required to complete the tasks. This helps improve performance in large-scale data processing where the overhead of shuffling data between nodes can be significant.

The DAG abstraction enhances fault tolerance by enabling Spark to recompute only the lost or failed tasks instead of restarting the entire job. This selective recomputation uses the lineage information of RDDs, ensuring that Spark can recover from failures. 
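
The lineage that the DAG scheduler works from can be inspected directly. Continuing the word-count sketch above (and assuming the same SparkSession), toDebugString() prints the chain of RDDs, including the shuffle introduced by reduceByKey, which is where the job is split into stages.

```python
# Assumes the same SparkSession `spark` as in the previous sketch.
sc = spark.sparkContext

counts = (
    sc.parallelize(["spark makes big data simple", "spark is fast"])
    .flatMap(lambda line: line.split())
    .map(lambda word: (word, 1))
    .reduceByKey(lambda a, b: a + b)   # wide dependency -> shuffle -> new stage
)

# toDebugString() returns the RDD lineage (as bytes in PySpark); stage
# boundaries fall at the shuffle dependencies shown in this output.
lineage = counts.toDebugString()
print(lineage.decode("utf-8") if isinstance(lineage, bytes) else lineage)
```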

Spark Optimization with Intel® Tiber™ App-Level Optimization

The Intel Tiber App-Level Optimization solution for Spark and PySpark workloads offers continuous, autonomous optimization to enhance performance and reduce costs. It dynamically allocates resources and optimizes JVM execution, memory usage, and crypto/compression processes, leading to faster job completion, reduced CPU and memory consumption, and lower overall costs.

App-Level Optimization integrates seamlessly with all major data storage and infrastructure platforms, making it suitable for diverse use cases, including batch/streaming data, large-scale data science, SQL analytics, and machine learning. The solution’s dynamic scaling and optimization techniques address inefficiencies in traditional Spark autoscaling, ensuring better resource utilization and significant cost reductions without requiring R&D efforts or code changes.

Learn more about Intel Tiber for Big Data Optimization
