Using a NewSQL DBMS to Improve Data Freshness and Execute Analytical Queries in Minutes

Why we chose TiDB, a NewSQL database

  • It should handle large amounts of data and frequent updates. We have a very large data volume, and each of our customers’ orders is updated five to six times. Our new database must be robust and flexible.
  • It should support multi-dimensional queries. Solutions like HBase offered very limited support for them.
  • It should retain data for a long analytical period.
  • It should keep data fresh for analytics.
  • It should avoid single-machine performance bottlenecks, including single-node failures and the risks they bring.
  • It should support high queries per second (QPS) and query response times in milliseconds.
  • It should offer suitable metrics for multiple application scenarios.
  • It should support distributed transactions with strong consistency.
  • It should be reasonably inexpensive to switch to.
  • It should provide an engineering system for analytics and calculation, without relying on our original stored procedures.
  • It should support highly concurrent reads, writes, and updates.
  • It should support online maintenance, so that a single-node failure wouldn’t impact the application.
  • It should integrate with our existing technology and analyze data in minutes.
  • It should support building a large, wide table with 100+ columns, plus multi-dimensional query analytics on that table.
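The wide-table requirement in the last bullet can be illustrated with ordinary MySQL-compatible DDL, which TiDB accepts. This is a hypothetical sketch: the table name, column names, and indexes below are assumptions for illustration, not the team’s actual schema.

```sql
-- Hypothetical sketch of a denormalized "wide" order table.
-- A production version would carry 100+ columns covering every
-- dimension the analytical queries filter or group on.
CREATE TABLE order_wide (
    order_id      BIGINT       NOT NULL,
    customer_id   BIGINT       NOT NULL,
    warehouse_id  INT          NOT NULL,
    order_status  TINYINT      NOT NULL,   -- updated 5-6 times per order
    order_amount  DECIMAL(12,2),
    created_at    DATETIME     NOT NULL,
    updated_at    DATETIME     NOT NULL,
    -- ... 100+ further dimension and metric columns ...
    PRIMARY KEY (order_id),
    KEY idx_customer_time (customer_id, created_at),
    KEY idx_status_time (order_status, updated_at)
);
```

The secondary indexes are what make the multi-dimensional queries in the requirement practical; which dimensions deserve indexes depends on the actual query patterns.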

How we use TiDB

Using TiDB in our mission-critical system

Our original Oracle-based architecture
Our new architecture after migration to TiDB
  • Our storage capacity increased. The system’s data storage period more than tripled.
  • Our database can scale out horizontally. The operations and maintenance staff can add or remove computing and storage nodes at any time. This has little impact on the applications.
  • TiDB meets our high-performance OLTP application needs. Some queries’ performance is slightly reduced, but they still meet our application requirements.
  • The pressure on a single node was gone. OLTP and OLAP workloads are separated and no longer interfere with each other.
  • TiDB supports analytics for more dimensions.
  • The new architecture is clearer than the original, and the system is easier to maintain and scale.

Using TiDB to build a large, wide table

Building a large, wide table
  • 137 nodes in total: 32 TiDB nodes, 102 TiKV nodes, 306 TiKV instances, and 3 Placement Driver (PD) nodes
  • 150,000+ QPS on average; 350,000+ QPS at peak
  • 70%+ CPU usage
  • 75%+ disk load
  • 47 TB of stored data
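At this scale, the multi-dimensional analytics the wide table exists for are ordinary SQL: filter on several dimensions, aggregate a metric. The query below is a hypothetical sketch against an assumed wide order table `order_wide` (names and values are illustrative, not from the source):

```sql
-- Hypothetical multi-dimensional query over a wide order table:
-- filter on time, customer, and warehouse dimensions, then aggregate.
SELECT warehouse_id,
       order_status,
       COUNT(*)          AS orders,
       SUM(order_amount) AS revenue
FROM   order_wide
WHERE  created_at >= '2021-11-11 00:00:00'
  AND  created_at <  '2021-11-12 00:00:00'
  AND  customer_id IN (1001, 1002, 1003)
GROUP BY warehouse_id, order_status;
```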
Service port status
The TiDB cluster
Cluster overview
The PD cluster

Using TiDB in the HTAP scenario

  • Queries needed to run faster. Besides meeting the data analytical period requirement, the application team also needed to learn about updates more quickly.
  • Downstream systems needed to subscribe to change information rather than just actively pulling it.
  • During big sales campaigns, TiKV was under a lot of pressure, so we needed to separate computing and storage.
  • Our clusters were too large to manage, and it was difficult to troubleshoot cluster issues.
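Separating analytical reads from TiKV is configured per table in TiDB. The statements below use real TiDB syntax, though the table name and replica count are illustrative assumptions:

```sql
-- Ask TiDB to maintain 2 columnar TiFlash replicas of the wide table,
-- so OLAP queries can read from TiFlash while OLTP stays on TiKV.
ALTER TABLE order_wide SET TIFLASH REPLICA 2;

-- Optionally restrict a session's reads to TiFlash (plus TiDB-internal
-- tables), which keeps heavy analytics off the TiKV row store.
SET SESSION tidb_isolation_read_engines = 'tiflash,tidb';
```

For the subscription requirement, TiCDC can publish row changes to downstream systems (for example, a Kafka sink) instead of having them poll the database.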
Our new architecture with TiFlash and TiCDC
Commands per second during our big sales promotion
Query duration during our big sales promotion
TiFlash CPU load

PingCAP is the team behind TiDB, an open-source, MySQL-compatible NewSQL database. Official website: GitHub:
