How @twitter keeps its Search systems up and stable at scale



5322 views • Backend System Design



Managing massive Search clusters (we are talking hundreds of terabytes here) is no joke, especially at Twitter's scale.

To manage them efficiently, Twitter built a set of tools. Here's a quick gist of how they work 🧵👇

Twitter uses Elasticsearch (ES) to power the search of tweets, users, and DMs. Elasticsearch gives them the necessary speed, performance, and horizontal scalability.

Given this massive adoption, they needed to ensure the efficiency and stability of these clusters and provide a standardized way of accessing them.

Elasticsearch Proxy

The Twitter team built a simple proxy for Elasticsearch that transparently sits in front of the Elasticsearch cluster.

The proxy is an extremely simple and lightweight TCP- and HTTP-based relay that captures, in a standard way, all the critical metrics: cluster health, latency, and success and failure rates. With the proxy in place, Twitter can also

  • throttle abusive clients
  • apply security practices
  • route requests to a specific node
  • authenticate clients
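As a rough illustration of the relay's role (the names and structure below are my own assumptions, not Twitter's actual implementation), a proxy can time every request it forwards and record success/failure counts before handing back the response:

```python
import time
from collections import defaultdict

class MetricsProxy:
    """Minimal sketch of a relay that forwards requests to a backend
    (here an injected callable standing in for the ES cluster) and
    records latency and success/failure counts for every call."""

    def __init__(self, backend):
        self.backend = backend          # callable: request -> response
        self.latencies = []             # per-request latency in seconds
        self.counts = defaultdict(int)  # "success" / "failure" totals

    def forward(self, request):
        start = time.monotonic()
        try:
            response = self.backend(request)
            self.counts["success"] += 1
            return response
        except Exception:
            self.counts["failure"] += 1
            raise
        finally:
            # latency is recorded whether the call succeeded or failed
            self.latencies.append(time.monotonic() - start)


proxy = MetricsProxy(backend=lambda req: {"hits": []})
proxy.forward({"query": "hello"})
print(proxy.counts["success"])  # 1
```

Because every client goes through the same relay, metrics, auth, and throttling only have to be implemented once, instead of in every service that talks to ES.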

Ingestion Service

ES performance degrades when there is a massive surge in traffic. We typically see

  • increased indexing latencies
  • increased query latencies

But ingesting massive amounts of data (tweets) in bursts is a common use case for Twitter, hence they tweaked the ingestion…

The write requests that come to the ES proxy are sent to Kafka. Consumers read from Kafka and relay them to the ES cluster.

Doing it asynchronously allows us to

  • do batch writes
  • retry if ES is down
  • consume at a comfortable pace
  • slow down if ES is overwhelmed
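A sketch of the consumer side, under my own assumptions (an in-memory list stands in for the Kafka topic, and an injected `bulk_index` callable stands in for the ES bulk API): the consumer drains the queue in batches and retries with exponential backoff when the cluster rejects a write:

```python
import time

def relay(messages, bulk_index, batch_size=2, max_retries=3, backoff=0.01):
    """Drain `messages` (stand-in for a Kafka topic) in batches and push
    each batch to `bulk_index` (stand-in for the ES bulk API), retrying
    with exponential backoff when the cluster rejects a write."""
    for i in range(0, len(messages), batch_size):
        batch = messages[i:i + batch_size]
        for attempt in range(max_retries):
            try:
                bulk_index(batch)       # batched write to ES
                break
            except ConnectionError:
                # ES is down or overwhelmed: back off, then retry
                time.sleep(backoff * 2 ** attempt)
        else:
            raise RuntimeError("giving up after retries")


indexed = []
failures = iter([True, False])  # first attempt fails, the retry succeeds

def flaky_es(batch):
    if next(failures, False):
        raise ConnectionError("cluster overwhelmed")
    indexed.append(batch)

relay(["t1", "t2", "t3"], flaky_es)
print(indexed)  # [['t1', 't2'], ['t3']]
```

The key point is the decoupling: the producer (the proxy) only ever writes to Kafka at full speed, while the consumer decides the batch size and pace at which ES actually receives the writes.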

Backfill Service

Twitter has a constant need to ingest hundreds of terabytes of data into its Elasticsearch clusters.

Doing massive ingestion through MapReduce jobs directly against ES would take down the entire cluster, and doing it through Kafka makes it unnecessarily granular;

hence, a backfill service …

The backfill indexing requests are dumped on an HDFS.

The requests are partitioned and read using distributed jobs and indexed in Elasticsearch.

A separate orchestrator computes the number of workers required to consume the indexing requests.
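The sizing step can be sketched as a simple capacity calculation (the formula and the numbers are illustrative assumptions, not Twitter's actual heuristics): the orchestrator divides the pending request volume by what one worker can index within the time budget:

```python
import math

def workers_needed(pending_requests, per_worker_rate, time_budget_s):
    """Illustrative sizing: how many workers are needed to consume
    `pending_requests` indexing requests when each worker indexes
    `per_worker_rate` requests per second and the backfill must
    finish within `time_budget_s` seconds."""
    capacity_per_worker = per_worker_rate * time_budget_s
    return math.ceil(pending_requests / capacity_per_worker)

# e.g. 500M requests, 2,000 req/s per worker, a 6-hour budget
print(workers_needed(500_000_000, 2_000, 6 * 3600))  # 12
```

Because the requests sit partitioned on HDFS, the orchestrator can scale the worker count up or down without re-sharding the input.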


Arpit Bhayani
