TiDB Blog.

My Top 10 Feature Requests for TiDB

Morgan Tocker

I’m a TiDB developer. Historically, that has meant that I work on certain features at the request of the product team or customers. Usually this overlaps with what features I think should be worked on. But not always.

I recently left PingCAP, and so today I thought it might be interesting to go over my feature request list.

Ground Rules: I am excluding PD, Tools, TiKV, TiFlash. Why? I’m focusing on my core expertise (tidb-server). I will also exclude overly broad features like “MySQL 8.0 support”, “improve performance” or “improve JSON support”. For brevity, I will also exclude experimental features, of which there are too many but that is a topic for another post.

Okay, onto the list!

My Top 10

#10 GIS and FULLTEXT support. There are some migration cases that depend on these features, and workarounds are very difficult. TiDB aims for MySQL compatibility, and I actually think you are somewhat excused for not implementing Stored Procedures, but on GIS and FULLTEXT I’m not so sure. They should really be there.

TiDB issues: #6347 and #1793.

#9 Background task scheduling framework. There are a lot of tasks that run inside the TiDB server, and they each run basically whenever they want with no coordination. I expect that in the future there will be even more background tasks as DDL becomes multi-node (see item #2).

The lack of task coordination likely contributes to jittery response times, something TiDB suffers from much more than MySQL. A scheduling framework for the server to execute background tasks based on priority would be most welcome. This framework can be used by ANALYZE TABLE, regenerating plan cache, privilege cache, sysvar cache and DDL.

#8 CREATE TABLE AS SELECT. I use this syntax all the time in MySQL, so it is very annoying that it’s missing from TiDB. But why is it only in 8th place on my list? The workaround is easy, as it can be converted into two statements: CREATE TABLE LIKE and then INSERT SELECT.
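To illustrate the workaround, here is how a CTAS statement can be rewritten into two statements (a sketch; the `orders` and `orders_2021` tables are hypothetical):

```sql
-- Not supported by TiDB today:
-- CREATE TABLE orders_2021 AS
--   SELECT * FROM orders WHERE created_at < '2022-01-01';

-- Workaround: create the table first, then populate it.
CREATE TABLE orders_2021 LIKE orders;
INSERT INTO orders_2021
  SELECT * FROM orders WHERE created_at < '2022-01-01';
```

Note that `CREATE TABLE ... LIKE` only covers the case where the new table has the same structure as the source; a CTAS with expressions or joins requires writing the `CREATE TABLE` definition by hand.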

Another reason TiDB needs to support this is that it would make it much easier to port tests from the MySQL test suite.

TiDB issues: #4754

#7 SKIP LOCKED. I love this feature from MySQL 8.0. Hot rows are the bane of distributed systems, and the SKIP LOCKED modifier allows you to non-deterministically read rows from a table while skipping the locked ones.
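The classic use case is a job queue, where each worker claims whichever pending row is not already locked instead of blocking on a hot row (a sketch against a hypothetical `jobs` table):

```sql
BEGIN;
SELECT id, payload
  FROM jobs
 WHERE status = 'pending'
 ORDER BY id
 LIMIT 1
   FOR UPDATE SKIP LOCKED;  -- rows locked by other workers are skipped, not waited on

-- Mark the row returned above as running, then COMMIT.
-- Each concurrent worker claims a different row with no lock contention.
```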

TiDB issues: #18207

#6 A better CHECK[SUM] TABLE. With good reason, database administrators tend to be pedantic about checking that source and replicas are correctly in sync, or, when using a non-transactional engine, that indexes and data are consistent. In the MySQL world this might mean some combination of CHECK TABLE, CHECKSUM TABLE or pt-table-checksum.

In TiDB, the syntax is called ADMIN CHECKSUM TABLE, and the intention is much the same. But the statement is poorly documented as to what it actually checks, and I have worked on several customer corruption issues where it returns no errors.

The recommended way to check that TiFlash is consistent with TiKV is to SELECT COUNT(*) from the table and compare row counts, using a hint to read from each engine in turn. But this is not sufficient, since it doesn’t account for the myriad of bugs that can be introduced by encoding issues. In fact, the lack of such a feature makes me concerned about using TiFlash.
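The count-comparison approach looks something like this (a sketch using TiDB’s READ_FROM_STORAGE optimizer hint against a hypothetical table `t`; check your version’s documentation for hint support):

```sql
-- Count from row storage (TiKV):
SELECT /*+ READ_FROM_STORAGE(TIKV[t]) */ COUNT(*) FROM t;

-- The same count from columnar storage (TiFlash):
SELECT /*+ READ_FROM_STORAGE(TIFLASH[t]) */ COUNT(*) FROM t;

-- Equal row counts do not prove the row contents match,
-- which is exactly the weakness described above.
```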

TiDB issues: #28483

TiFlash issues: #3258

#5 Fix the configuration story. Configuration in TiDB is a complete mess.

Settings might be configurable via either system variables, or via the TiDB configuration file. In some cases they are configurable by both methods, but the name of the setting differs between the two locations (the config file is in toml and uses a hierarchy of sections).

The values of system variables are saved to an internal table when the cluster bootstraps, so if the default value changes, you will be left with the original value after an upgrade. So knowing what value a user has for a variable always requires checking, even if they assure you they’ve not changed it.

The previous point wouldn’t be so bad on its own, but some system variables are also hidden and don’t show up in SHOW [GLOBAL] VARIABLES. Some of the historical examples of hidden variables also had a massive impact on performance (such as async commit). So you could be comparing two clusters with SHOW [GLOBAL] VARIABLES and have no idea why they performed so differently.

As well as hidden variables, there are also noop variables. These variables do nothing, and are added to fool applications that require certain options to be present for MySQL compatibility. For example, many connectors change the value of query_cache_type on connecting. There is no easy way to know which variables are noops other than reading the source code.
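To make the mess concrete: a noop variable silently accepts writes while changing nothing, and a real setting can go by two different names depending on where you set it (the specific settings below are examples from older releases; exact names vary by version):

```sql
-- A noop variable: settable for compatibility, but it does nothing.
SET SESSION query_cache_type = 'ON';
SELECT @@query_cache_type;  -- reads back, yet no query cache exists

-- A real setting, exposed as the system variable tidb_mem_quota_query
-- but historically spelled mem-quota-query under [performance] in the
-- toml config file. Two names, one setting.
SET SESSION tidb_mem_quota_query = 1073741824;
```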

On top of all this, documentation is missing for about 50 system variables.

I did try my best to improve the situation while I was working for PingCAP, but it’s a difficult problem, since it often requires breaking changes to fix. I really hope there is someone to carry the torch and keep it going.

#4 Fix upgrade/downgrade. More wood behind fewer arrows please.

There is no official upgrade policy for TiDB, and downgrade is not supported. So the TiDB developers need to plan for a scenario where a user could perform a rolling upgrade from TiDB 2.1 to 6.1.

A rolling upgrade is also effectively a downgrade anyway, since TiDB needs to account for both old and new versions of TiDB servers in the cluster. This makes updating metadata basically impossible, since the communication format(s) must be understood by the earliest releases of TiDB.

What would be much easier for server developers is to know that upgrade is limited to a certain version range: for example, to upgrade from TiDB 2.1 to TiDB 6.1 you need to step through at least TiDB 4.0. Smaller jumps are obviously easier on the developer, but the exact rules matter less than having some limit, rather than the unlimited range that exists today.

At this point you might be thinking: I’m not a developer, why should I care? Well, with TiDB releasing 5.0, 5.1, 5.2, 5.3, 5.4, 6.0 and 6.1 over basically the last year, do you really expect that each of those upgrade paths is tested as thoroughly as it should be? Also consider that developer resources are finite, and some of the cleanup that can’t currently be done could improve performance or usability.

#3 DDL fast import. Currently adding indexes in TiDB is much slower than MySQL.

Adding an index requires TiDB to first read the values from TiKV, compute the index entries, and then batch-write them back to TiKV. It needs to work this way because the TiKV server where the index region is located might be different from the TiKV server where the data row is located.

This feature request is for the batch-write phase of adding indexes to accept direct ingestion of a RocksDB SST file. The process should work similarly to how TiDB Lightning works today, and should help simplify some of the ingestion steps on the TiKV server side.

#2 Multi-node DDL. Currently TiDB only uses one node to process DDL changes.

This relates to item #9 in that there should be a framework for scheduling background tasks across the TiDB cluster, with all nodes potentially able to participate in DDL changes.

There are really two different use cases for DDL:

  1. As you are building the cluster, you might want DDL to run as fast as possible and use all of the cluster resources so you can finish the task as quickly as possible.
  2. In a production environment, you are more concerned about the impact of DDL on user-facing queries.

An advanced scenario for the “background task scheduling framework” is that I should be able to use tags to assign background work to only certain TiDB nodes. If I have a large cluster, I will assign it to nodes that I don’t place behind the load balancer, which should cater to use case #2 in a nice user-configurable way.

#1 Foreign key support. Why? I think for years we’ve been brainwashed by MySQL marketing into believing that FOREIGN KEYS add overhead, but if you consider optimization scenarios like join elimination, they can actually improve query performance. We don’t level the same criticism at UNIQUE INDEXES, but by this definition they add overhead too.
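Join elimination is worth spelling out. With an enforced foreign key, every child row is guaranteed a matching parent row, so an inner join that reads no parent columns can be dropped entirely by an optimizer that supports this rewrite (a sketch with a hypothetical schema):

```sql
CREATE TABLE customers (id BIGINT PRIMARY KEY);
CREATE TABLE orders (
  id          BIGINT PRIMARY KEY,
  customer_id BIGINT NOT NULL,
  FOREIGN KEY (customer_id) REFERENCES customers (id)
);

-- With the FK enforced, the join below cannot filter out any rows of
-- `orders`, so the optimizer can skip reading `customers` altogether.
SELECT o.id
  FROM orders o
  JOIN customers c ON c.id = o.customer_id;
```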

The benefit of TiDB over sharded MySQL is that you get a complete view of the database, so why can’t you set constraints on your data and take advantage of that?

TiDB issues: #18209

Coming up with a list is hard

#1 might surprise some people, but it’s the most requested feature in GitHub issues ordered by thumbs up. In my opinion it’s time for MySQL users to stop bashing foreign keys. Also, those who are worried about insert performance don’t have to use them.

If you count items #9, #3 and #2 - they all relate to DDL. DDL is nice in TiDB because it’s always online, but that doesn’t mean that there are no shortcomings, and it can be painful to operate at scale. As I said I’ll skip features that are experimental, so I did not mention concurrent DDL (many DDL operations can run at once) and multi schema change (multiple changes can be chained together in one statement).

I didn’t even get into JSON. I should have, but in the ground rules I explained why I would be skipping it. I would really like to see JSON support improved, but I can’t pin it on a specific feature. I also excluded “fix OOM issues” for many of the same reasons.

I would have also liked to say something like “support for Time Appliances Project (TAP) or alternatives to TSO”, but I skipped that because it’s not only tidb-server related.


Yes, I have no shortage of my own feature requests! But I also want to say that I don’t envy those who have to decide what features should be worked on next. From the outside, sometimes it can look like priorities are always in the wrong order, but it’s not usually that straightforward.

Sometimes there can be constraints on engineers with specific skill sets, or there might be an order that features are ideally worked on so they can consider other priorities such as refactoring. It’s not an easy problem to solve.