
Elasticsearch: Use Cases, Architecture, and 6 Best Practices

Omer Mesika

Director of Solution Engineering, Intel Granulate

What Is Elasticsearch? 

Elasticsearch is an open source distributed search and analytics engine designed for handling large volumes of data. It is built on top of Apache Lucene and is supported by Elastic. Elasticsearch is used for storing, searching, and analyzing structured and unstructured data in near real-time.

One of the key features of Elasticsearch is its scalability, which allows it to handle large datasets across multiple nodes in a cluster. This makes it a popular choice for enterprise search, log analytics, and monitoring applications.

Elasticsearch provides a RESTful API for interacting with the search engine and supports a wide range of query types, including full-text search, phrase search, and aggregations. It also includes a variety of search and analysis features such as faceting, filtering, sorting, and highlighting.
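As a rough sketch, the query types mentioned above can be expressed as JSON request bodies sent to the `_search` endpoint. The index and field names below (`articles`, `title`, `category`) are hypothetical, chosen only for illustration:

```python
import json

# Hypothetical request bodies for three common Elasticsearch query types.
# Each would be POSTed as JSON to /<index>/_search.

# Full-text search: analyzed match on a text field.
full_text = {"query": {"match": {"title": "distributed search"}}}

# Phrase search: terms must appear adjacent and in order.
phrase = {"query": {"match_phrase": {"title": "distributed search"}}}

# Aggregation: bucket documents by an exact-value field, skip the hits.
aggregation = {
    "size": 0,  # return only aggregation results, no matching documents
    "aggs": {"by_category": {"terms": {"field": "category.keyword"}}},
}

print(json.dumps(full_text))
```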

In addition to search and analytics, Elasticsearch also supports various use cases such as application search, security analytics, and business analytics. It has a large and active community of users and developers, with many plugins and integrations available to extend its functionality.

4 Common Elasticsearch Use Cases

Elasticsearch is often used alongside Logstash and Kibana in the ELK technology stack. Here are some of the popular use cases for Elasticsearch:

  1. Observability: Used to monitor and understand complex systems. Elasticsearch is a popular choice for observability due to its real-time search and analysis capabilities. It allows for collecting and analyzing data from different sources such as logs, metrics, and traces, and provides visualizations and alerts to help identify and troubleshoot issues quickly. Elasticsearch can be integrated with other tools such as Kibana, Beats, and Logstash to provide a complete observability solution.
  2. Full-text search: Supports a variety of search queries, including fuzzy search, phrase search, and autocomplete. Elasticsearch can be used for different types of applications, such as e-commerce websites, document management systems, and social networks, to provide fast and accurate search results.
  3. Real-time log analytics: Enables organizations to monitor their systems for errors, security issues, and other anomalies. By collecting and analyzing logs from different sources in real-time, Elasticsearch provides valuable insights into system performance and helps to identify and troubleshoot issues quickly. Elasticsearch can be integrated with tools such as Logstash and Beats to simplify the log collection and analysis process.
  4. Security analytics: Used to detect and investigate security threats in real-time. It can analyze different types of data such as network traffic, user behavior, and system logs to identify anomalies and threats. Elasticsearch can be integrated with other security tools such as Suricata, Zeek, and Snort to provide a comprehensive security solution.

Related content: Read our guide to Elasticsearch security (coming soon)


Elasticsearch Architecture 

Here are the core components of Elasticsearch:


Cluster

An Elasticsearch cluster is a group of one or more Elasticsearch nodes that work together to store, index, and search data. The cluster provides horizontal scalability, fault-tolerance, and high availability by distributing data across multiple nodes. Clusters are typically used for storing and analyzing large volumes of data, such as log files or application metrics.


Node

In Elasticsearch, a node is a single server that stores data and participates in the cluster’s search and indexing operations. Each node in an Elasticsearch cluster is assigned a unique identifier, and nodes communicate with each other to coordinate the cluster’s operations.

There are three main types of nodes in an Elasticsearch cluster:

  • Master node: Responsible for coordinating administrative tasks in the cluster, such as creating or deleting indices, managing the cluster’s state, and assigning shards to data nodes. Each cluster must have at least one master node, and additional master-eligible nodes can be added for redundancy.
  • Data node: Responsible for storing and indexing data in the cluster. Each data node stores a portion of the cluster’s data, and the cluster’s overall storage capacity scales with the number of data nodes in the cluster.
  • Client node: Used to route search and indexing requests to the appropriate data nodes in the cluster. Client nodes (called coordinating-only nodes in recent Elasticsearch versions) do not store data but provide a lightweight interface for interacting with the cluster, improving the performance of search and indexing operations.
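The node types above map onto configuration in each node’s elasticsearch.yml. The snippet below is an illustrative sketch, not a recommended layout; recent Elasticsearch versions use the `node.roles` setting, while older releases used boolean flags such as `node.master` and `node.data`:

```yaml
# elasticsearch.yml sketches, one per node type (illustrative only).

# Master-eligible node:
node.roles: [ master ]

# Data node (on a different machine):
# node.roles: [ data ]

# Coordinating-only ("client") node — an empty list means no master or data role:
# node.roles: [ ]
```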


Ports 9200 and 9300

Ports 9200 and 9300 are network ports used by Elasticsearch for client-server communication and node-to-node communication, respectively.

Port 9200 is the default HTTP port used for RESTful API requests to Elasticsearch. Clients, such as Kibana or Logstash, use port 9200 to send requests to Elasticsearch for indexing and searching data. Port 9200 also serves as a default port for Elasticsearch’s built-in HTTP-based monitoring APIs.

Port 9300 is the default port used by Elasticsearch for inter-node communication. Nodes use port 9300 to communicate with each other to share data, replicate shards, and coordinate cluster operations. Port 9300 is used for efficient communication using Elasticsearch’s proprietary protocol, rather than HTTP.


Shards

In Elasticsearch, a shard is a unit of data that represents a subset of a larger index. Each shard is a self-contained index that can be stored on a single node or distributed across multiple nodes in a cluster for horizontal scalability. Sharding allows Elasticsearch to split large datasets into smaller pieces and distribute them across multiple nodes, enabling fast search and analysis of large volumes of data.


Replicas

In Elasticsearch, a replica is a copy of a primary shard that is stored on a separate node in the cluster. Replicas provide redundancy and high availability, allowing Elasticsearch to continue serving requests in case of node failures or network issues. Replicas are used to distribute the search and indexing load across the cluster and to improve query response times.

While a shard is a unit of data that represents a subset of a larger index, a replica is a copy of a shard that is stored on a separate node for redundancy. Each shard can have one or more replicas, and the total number of shards and replicas determines the amount of data that can be stored in the cluster and the level of fault tolerance.
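Shard and replica counts are set when an index is created. The sketch below builds the settings body for such a request; the index name `logs-2024` and the counts are illustrative, not recommendations:

```python
import json

# Hypothetical index settings controlling sharding and replication.
# Sent as: PUT /logs-2024 with this JSON body.
settings = {
    "settings": {
        "number_of_shards": 3,    # primary shards; fixed at index creation
        "number_of_replicas": 1,  # one copy of each primary, on a different node
    }
}

# Total shard copies in the cluster = 3 primaries * (1 primary + 1 replica) = 6.
print(json.dumps(settings))
```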


Analyzers

In Elasticsearch, the standard and simple analyzers are two built-in analyzers that can be used for text analysis during indexing and searching.

The standard analyzer is the default analyzer used in Elasticsearch. It splits text into tokens based on the Unicode text segmentation algorithm and lowercases them; stop-word removal is available but disabled by default, and it does not perform stemming.

The simple analyzer is a basic analyzer that divides text into terms whenever it encounters a character that is not a letter, lowercasing each term, without any additional processing. The simple analyzer is useful for indexing and searching data that does not require sophisticated text analysis, such as log files or system metrics.
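The simple analyzer’s behavior can be approximated in a few lines. This is only a rough sketch for intuition (the real analyzer uses Unicode letter classes, while this regex is ASCII-only):

```python
import re

def simple_analyze(text):
    """Rough approximation of Elasticsearch's simple analyzer:
    split on any non-letter character and lowercase each token."""
    return [t.lower() for t in re.split(r"[^a-zA-Z]+", text) if t]

tokens = simple_analyze("Error-42 in Node_7!")
# -> ['error', 'in', 'node']
```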


Documents

In Elasticsearch, documents are the basic units of information that are stored and indexed. A document can be any type of data, such as text, numbers, or structured data, and is represented in JSON format. Elasticsearch retrieves documents based on search queries, which can match specific fields or values within the document.


JSON REST API

The JSON REST API is a core component of Elasticsearch that allows clients to interact with Elasticsearch using HTTP requests in JSON format. The REST API provides a simple and flexible interface for performing a wide range of operations, including indexing and searching data, managing indices and clusters, and configuring settings and mappings. The JSON format allows for easy parsing and serialization of data, making it easy to integrate Elasticsearch with a wide range of programming languages and tools.
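For example, indexing a document is a single HTTP call against the `_doc` endpoint. The sketch below only constructs the request; the index name, document ID, and fields are hypothetical, and actually sending it over HTTP is left to whatever client is in use:

```python
import json

def build_index_request(index, doc_id, document):
    """Sketch of the REST call that indexes a JSON document.
    Returns (method, path, body) for a PUT /<index>/_doc/<id> request."""
    return ("PUT", f"/{index}/_doc/{doc_id}", json.dumps(document))

method, path, body = build_index_request(
    "articles", 1, {"title": "Intro to Elasticsearch", "views": 10}
)
# method == "PUT", path == "/articles/_doc/1"
```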

Learn more in our detailed guide to Elasticsearch architecture (coming soon)


Elasticsearch on the Cloud

Elasticsearch can be run on a variety of cloud platforms, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Running Elasticsearch on the cloud can provide many benefits, including:

  • Scalability: With cloud-based Elasticsearch, you can easily scale your cluster up or down to meet changing data needs without having to worry about hardware constraints.
  • High availability: Cloud providers offer robust infrastructure and availability guarantees, so you can ensure that your Elasticsearch cluster is always up and running.
  • Ease of management: Many cloud providers offer managed Elasticsearch services that handle tasks such as software updates, backups, and security, freeing up your team to focus on other tasks.
  • Cost savings: Cloud-based Elasticsearch can be more cost-effective than running your own infrastructure, as you only pay for what you use and can easily scale up or down to control costs.

When running Elasticsearch on the cloud, it’s important to consider factors such as data security, network latency, and backup and recovery options. It’s also important to choose the right cloud provider and plan for the long-term growth of your data needs.

Elasticsearch on Kubernetes

Kubernetes was originally intended for managing stateless workloads, but it can also be used to run stateful workloads such as databases, message queues, and search engines. Elasticsearch clusters are stateful, meaning they require persistent storage and stable network identities, which are challenging to manage in traditional container environments. Kubernetes provides several features, such as PersistentVolumes and StatefulSets, that make it easier to deploy, scale, and manage stateful workloads.

PersistentVolumes are used to provide persistent storage to Kubernetes workloads. They are independent of the pods and can be attached and detached dynamically, allowing for the data to be preserved across pod restarts.

StatefulSets are used to manage stateful workloads that require stable network identities and ordered deployment. They also support dynamic scaling, making it straightforward to deploy and manage stateful workloads in Kubernetes.

Deploying an Elasticsearch cluster on Kubernetes makes it possible to simplify the process of configuring, scaling, and managing the cluster. Elasticsearch can be scaled horizontally using Kubernetes StatefulSets, allowing for easy scaling of the search and analytics infrastructure. Kubernetes provides a single platform for managing both the infrastructure and the applications.
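Putting the pieces above together, a data-node deployment might look like the following sketch. It is illustrative and untested: the names, image tag, and sizes are placeholders, and a real cluster would also need discovery, security, and resource settings that are omitted here:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-data
spec:
  serviceName: es-data          # headless Service providing stable network identities
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
          ports:
            - containerPort: 9200   # REST API
            - containerPort: 9300   # inter-node transport
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:             # one PersistentVolume per pod, preserved across restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```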

Learn more in our detailed guide to Elasticsearch on Kubernetes (coming soon)

Elasticsearch Performance Issues and Problems

Elasticsearch is a powerful and versatile search and analytics engine, but like any complex system, it can experience performance issues and problems. Here are some common Elasticsearch performance issues:

  • Memory usage: Elasticsearch requires a significant amount of memory to perform efficiently. If Elasticsearch is running out of memory, it may slow down or even crash.
  • Disk usage: Elasticsearch stores data on disk, and if the disk becomes full or is slow, Elasticsearch performance may suffer. 
  • Query performance: Elasticsearch provides a powerful query language, but some queries can be expensive and impact performance.
  • Indexing performance: Elasticsearch indexes data as it is added, and if indexing is slow, it can impact the overall performance of Elasticsearch.
  • Hardware limitations: Elasticsearch performance is heavily dependent on hardware, and if hardware is inadequate, Elasticsearch performance may suffer. 
  • Network issues: Elasticsearch performance can also be impacted by network issues, such as latency or packet loss.

6 Best Practices for Optimizing Elasticsearch Performance 

1. Freezing Indices

Elasticsearch stores data in shards, which can be resource-intensive to query. One way to improve overall performance is to “freeze” old or infrequently accessed indices. Freezing makes an index read-only and releases the heap memory its shards would otherwise hold; frozen shards are loaded and searched on demand, trading slower queries against those indices for lower resource use across the cluster. Frozen indices can still be queried, but updates and new writes are not allowed. Note that in recent Elasticsearch versions the freeze API is deprecated in favor of the frozen data tier and searchable snapshots.

2. Provisioning Capacity

Properly provisioning capacity is critical to Elasticsearch performance. This includes ensuring that there are enough resources available, such as CPU, memory, and storage, to handle the expected query and indexing workload. Provisioning capacity should be based on the expected query and indexing throughput, and should be monitored and adjusted as needed.
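A key part of capacity provisioning is JVM heap sizing, configured in the jvm.options file. The values below are placeholders, not recommendations; common guidance is to set the minimum and maximum heap to the same value, at or below roughly 50% of available RAM, and below the compressed-oops threshold (about 31 GB):

```
# jvm.options sketch (illustrative values)
-Xms8g
-Xmx8g
```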


3. Organizing Index Data

The way data is organized in Elasticsearch can have a significant impact on performance. To optimize performance, it is important to organize index data in a way that reflects the query patterns. For example, if queries often search for documents based on a date range, it may be beneficial to organize the data by date. This can be done by creating multiple indices and using an index alias to provide a single endpoint for querying.
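As a sketch of the date-based pattern described above, one index per day can be created and grouped under a single alias. The naming scheme (`logs-YYYY.MM.DD`) and the alias name are hypothetical choices for illustration:

```python
from datetime import date

def daily_index_name(day: date) -> str:
    """Hypothetical time-based naming scheme: one index per day."""
    return f"logs-{day:%Y.%m.%d}"

# Body for POST /_aliases: attach a daily index to the "logs" alias,
# so queries can always target /logs/_search regardless of the date.
alias_actions = {
    "actions": [
        {"add": {"index": daily_index_name(date(2024, 5, 1)), "alias": "logs"}}
    ]
}
```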

4. Keeping Mapping Updates to a Minimum

Mapping updates, which define the schema for an index, can be resource-intensive and impact query performance. To minimize the impact of mapping updates, it is important to avoid frequent changes to the mapping. Instead, create a mapping that reflects the expected data schema and make changes only when necessary.
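Defining an explicit mapping up front might look like the following sketch; the field names are illustrative. Declaring the schema once at index creation avoids incremental mapping updates later:

```python
# Hypothetical explicit mapping, sent once as the body of PUT /<index>.
mapping = {
    "mappings": {
        "properties": {
            "timestamp": {"type": "date"},
            "message": {"type": "text"},     # analyzed for full-text search
            "level": {"type": "keyword"},    # exact-match filtering and aggregations
        }
    }
}
```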

5. Optimizing Thread Pools

Thread pools are used to execute queries and indexing requests in Elasticsearch. To optimize performance, it is important to ensure that thread pools are properly configured and sized. Thread pools should be sized based on the expected query and indexing throughput, and should be monitored and adjusted as needed. Additionally, it is important to ensure that the proper type of thread pool is used for each task, such as search or indexing.
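Thread pool sizes and queue lengths are set in elasticsearch.yml. The values below are illustrative only; appropriate numbers depend on hardware and workload:

```yaml
# elasticsearch.yml sketch (illustrative values, not recommendations).
# Fixed pools take a thread count and a bounded queue.
thread_pool:
  search:
    size: 16
    queue_size: 1000
  write:
    size: 8
    queue_size: 500
```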
