
GreatSQL Optimization Tips: Converting Between Max-Value Subqueries and Window Functions

Introduction

During recent SQL optimization work I came across two cases: in one, a max-value subquery was rewritten as a window function; in the other, a window function was rewritten as a max-value subquery. I found them interesting, so I am recording and sharing them here.

SQL Cases

The two SQL statements:

SQL1:

SELECT imei, c1
  FROM (SELECT imei,
               c1,
               row_number() OVER(PARTITION BY imei ORDER BY statistic_time DESC) AS rn
          FROM t1)
 WHERE rn = 1;

SQL2:

 SELECT *
   FROM t1 a
  WHERE to_char(statistic_time, 'yyyymmdd') =
        (SELECT MAX(to_char(statistic_time, 'yyyymmdd'))
           FROM t1 b
          WHERE a.c2 = b.c2
                )
    AND a.imei = 'a';

These two statements were not actually encountered in the same system; a single test table is used here to illustrate both problems.

The test table and test data are as follows:

CREATE TABLE t1(
imei VARCHAR(100),
statistic_time datetime,
c1 INT,
c2 INT,
PRIMARY KEY(imei,statistic_time)
);

SET sql_mode = ORACLE;
DELIMITER //
CREATE OR REPLACE PROCEDURE p1 IS
BEGIN
  FOR i IN 1..100000 LOOP
    INSERT INTO t1 VALUES('a', TO_DATE('2025-01-01','yyyy-mm-dd') + (1/24/60/60)*i, TRUNC(RAND()*200000), TRUNC(RAND()*1000));
  END LOOP;
  FOR i IN 1..100000 LOOP
    INSERT INTO t1 VALUES('b', TO_DATE('2025-01-01','yyyy-mm-dd') + (1/24/60/60)*i, TRUNC(RAND()*200000), TRUNC(RAND()*1000));
  END LOOP;
  FOR i IN 1..100000 LOOP
    INSERT INTO t1 VALUES('c', TO_DATE('2025-01-01','yyyy-mm-dd') + (1/24/60/60)*i, TRUNC(RAND()*20000), TRUNC(RAND()*1000));
  END LOOP;
END;
//
DELIMITER ;

CALL p1;

The data and index statistics are as follows:

greatsql> SHOW INDEX FROM T1;
+-------+------------+----------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name    | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | Visible | Expression |
+-------+------------+----------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| t1    |          0 | PRIMARY  |            1 | imei           | A         |           2 |     NULL |   NULL |      | BTREE      |         |               | YES     | NULL       |
| t1    |          0 | PRIMARY  |            2 | statistic_time | A         |      298697 |     NULL |   NULL |      | BTREE      |         |               | YES     | NULL       |
| t1    |          1 | idx_c1   |            1 | c1             | A         |      132721 |     NULL |   NULL | YES  | BTREE      |         |               | YES     | NULL       |
| t1    |          1 | idx_c2   |            1 | c2             | A         |        1015 |     NULL |   NULL | YES  | BTREE      |         |               | YES     | NULL       |
+-------+------------+----------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
4 rows in set (0.00 sec)

Table t1 now holds 300,000 rows; column imei has 3 distinct values, column c1 about 130,000 distinct values, and column c2 about 1,000 distinct values.

The test data merely simulates the distribution characteristics of the real business data; in the real business scenario the data volume is in the tens of millions of rows.

SQL1 Execution Analysis and Optimization

SQL1 execution plan:

greatsql> EXPLAIN ANALYZE
    -> SELECT imei, c1
    ->   FROM (SELECT imei,
    ->                c1,
    ->                row_number() OVER(PARTITION BY imei ORDER BY statistic_time DESC) AS rn
    ->           FROM t1)
    ->  WHERE rn = 1\G
*************************** 1. row ***************************
EXPLAIN: -> Index lookup on alias_temp_101454818022785039 using <auto_key0> (rn=1)  (cost=0.35..3.50 rows=10) (actual time=1327.190..1327.192 rows=3 loops=1)
    -> Materialize  (cost=0.00..0.00 rows=0) (actual time=1327.172..1327.172 rows=300000 loops=1)
        -> Window aggregate: row_number() OVER (PARTITION BY t1.imei ORDER BY t1.statistic_time desc )   (actual time=345.111..918.642 rows=300000 loops=1)
            -> Sort: t1.imei, t1.statistic_time DESC  (cost=31248.70 rows=298697) (actual time=345.091..374.978 rows=300000 loops=1)
                -> Index scan on t1 using idx_c1  (cost=31248.70 rows=298697) (actual time=0.133..96.587 rows=300000 loops=1)

1 row in set (1.36 sec)

The plan shows a covering scan of index idx_c1 on t1, followed by a sort on t1.imei, t1.statistic_time, then window aggregation and materialization. The materialized result set holds 300,000 rows, while the final result set is only 3 rows. With just 300,000 test rows this takes 1,327 ms; at 100 times the data volume, the cost would be prohibitive.

How can we make this SQL faster?

The result set is only 3 rows, yet the entire index is scanned. The core idea of SQL optimization is to reduce I/O, which means scanning as little data as possible. The grouping and sorting columns imei and statistic_time are covered by the composite primary key, and the window function here simply fetches, for each value of the leading column imei, the row with the maximum statistic_time; fetching a maximum through an index does not require scanning the whole index. So we can exploit the ordering of the index to reduce the number of data pages scanned.

The SQL is rewritten as follows:

SELECT t1.imei, t1.c1
  FROM t1
  JOIN (SELECT imei, MAX(statistic_time) statistic_time
          FROM t1
         GROUP BY imei) t11
    ON t1.imei = t11.imei
   AND t1.statistic_time = t11.statistic_time;

First take the per-imei maximum of statistic_time from t1; this result set is very small. Then join it back to the large table t1.
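The equivalence of the two forms can be sketched with a small, self-contained example. The following uses Python's sqlite3 as a stand-in engine (SQLite 3.25+ supports window functions, and its semantics match this query shape), with made-up rows, to check that the row_number() form and the group-max join return the same result:

```python
import sqlite3

# Same schema as the article's test table; the data here is invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1(
  imei TEXT,
  statistic_time TEXT,
  c1 INTEGER,
  c2 INTEGER,
  PRIMARY KEY(imei, statistic_time)
);
INSERT INTO t1 VALUES
  ('a','2025-01-01 00:00:01',10,1),
  ('a','2025-01-01 00:00:02',20,2),
  ('b','2025-01-01 00:00:01',30,3),
  ('b','2025-01-01 00:00:03',40,4),
  ('c','2025-01-01 00:00:05',50,5);
""")

# Original form: number rows per imei, newest first, keep rank 1.
window_form = conn.execute("""
SELECT imei, c1
  FROM (SELECT imei, c1,
               row_number() OVER (PARTITION BY imei
                                  ORDER BY statistic_time DESC) AS rn
          FROM t1) AS ranked
 WHERE rn = 1
 ORDER BY imei
""").fetchall()

# Rewritten form: take MAX(statistic_time) per imei first, then join back.
join_form = conn.execute("""
SELECT t1.imei, t1.c1
  FROM t1
  JOIN (SELECT imei, MAX(statistic_time) AS statistic_time
          FROM t1
         GROUP BY imei) t11
    ON t1.imei = t11.imei
   AND t1.statistic_time = t11.statistic_time
 ORDER BY t1.imei
""").fetchall()

print(window_form == join_form)  # both yield the latest row per imei
```

SQLite is used here only to demonstrate equivalence, not performance; in GreatSQL the join form wins because the per-imei maximum can come straight off the (imei, statistic_time) primary key.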

The execution plan is now:

EXPLAIN: -> Nested loop inner join  (cost=6.14 rows=3) (actual time=0.300..0.313 rows=3 loops=1)
    -> Filter: (t11.statistic_time is not null)  (cost=4.15..2.84 rows=3) (actual time=0.280..0.281 rows=3 loops=1)
        -> Table scan on t11  (cost=5.65..7.34 rows=3) (actual time=0.274..0.275 rows=3 loops=1)
            -> Materialize  (cost=4.80..4.80 rows=3) (actual time=0.270..0.270 rows=3 loops=1)
                -> Covering index skip scan for grouping on t1 using PRIMARY  (cost=4.50 rows=3) (actual time=0.118..0.226 rows=3 loops=1)
    -> Single-row index lookup on t1 using PRIMARY (imei=t11.imei, statistic_time=t11.statistic_time)  (cost=1.03 rows=1) (actual time=0.009..0.009 rows=1 loops=3)

1 row in set (0.00 sec)

The plan shows that the rewritten SQL uses an index skip scan for the grouped maximum; the result set is only 3 rows, so materializing this small result set costs very little. The join back to the large table then runs only 3 primary-key lookups (loops=3). The whole SQL finishes in 0.313 ms, compared with the previous 1,327 ms: a performance improvement of more than 4,000x.

SQL2 Execution Analysis and Optimization

SQL2 execution plan:

greatsql> explain analyze
    ->  SELECT *
    ->    FROM t1 a
    ->   WHERE to_char(statistic_time, 'yyyymmdd') =
    ->         (SELECT MAX(to_char(statistic_time, 'yyyymmdd'))
    ->            FROM t1 b
    ->           WHERE a.c2 = b.c2
    ->                 )
    ->     AND a.imei = 'a'\G
*************************** 1. row ***************************
EXPLAIN: -> Filter: (to_char(a.statistic_time,'yyyymmdd') = (select #2))  (cost=15037.65 rows=149898) (actual time=33379.186..38313.772 rows=13601 loops=1)
    -> Index lookup on a using PRIMARY (imei='a')  (cost=15037.65 rows=149898) (actual time=0.221..82.639 rows=100000 loops=1)
    -> Select #2 (subquery in condition; dependent)
        -> Aggregate: max(to_char(b.statistic_time,'yyyymmdd'))  (cost=77.27 rows=1) (actual time=0.381..0.381 rows=1 loops=100000)
            -> Covering index lookup on b using idx_c2 (c2=a.c2)  (cost=46.74 rows=305) (actual time=0.047..0.187 rows=301 loops=100000)

1 row in set, 1 warning (38.33 sec)

The plan shows that the condition imei='a' returns a fairly large result set: 100,000 rows in this test, one third of the table. The subquery takes the maximum statistic_time per value of c2, and the selectivity of c2 is not very good: each probe returns 301 rows and takes about 0.187 ms, and it loops 100,000 times, so the SQL takes 38.33 s in total.
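The cost pattern in this plan can be sketched in plain Python with invented data: the correlated subquery recomputes the per-group maximum once per outer row, while a single grouped pass computes each maximum exactly once. The names and the probe counter here are illustrative, not GreatSQL internals:

```python
# Invented data: (statistic_time, c2) pairs, 100 timestamps x 5 groups.
rows = [(ts, c2) for ts in range(100) for c2 in range(5)]

probes = 0  # counts inner-scan row visits, a rough stand-in for I/O

def group_max_correlated(target_c2):
    """One full scan per call, like the dependent subquery in the plan."""
    global probes
    best = None
    for ts, c2 in rows:
        probes += 1
        if c2 == target_c2 and (best is None or ts > best):
            best = ts
    return best

# Correlated style: the inner max is recomputed for every outer row.
correlated = [r for r in rows if r[0] == group_max_correlated(r[1])]

# Single-pass alternative: build each group's max once, then filter once.
group_max = {}
for ts, c2 in rows:
    if c2 not in group_max or ts > group_max[c2]:
        group_max[c2] = ts
single_pass = [r for r in rows if r[0] == group_max[r[1]]]

print(sorted(correlated) == sorted(single_pass))  # same result set
print(probes)  # len(rows)**2 inner visits vs ~2*len(rows) for one pass
```

With 500 rows the correlated style already makes 250,000 inner visits; this quadratic blow-up is exactly why loops=100000 dominates the 38.33 s runtime above.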

How can we make this SQL faster?

The same optimization principle applies: reduce I/O, which here means avoiding executing the subquery so many times. The rank() window function achieves this: a window function needs only a single scan of the table to produce the ranking within each group, and filtering for rank 1 yields exactly what the max-value subquery returns. The difference between rank() and row_number() is that rank() assigns the same rank to ties; since the combination of c2 and statistic_time is not unique here, rank() must be used for the rewrite to be equivalent to the max-value subquery.
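The tie behavior that makes rank() the correct choice can be checked with a small sqlite3 sketch (SQLite 3.25+ for window functions; the data is made up): two rows tied on the day value both receive rank 1, whereas row_number() arbitrarily breaks the tie, which would drop one of the rows the max-value subquery returns.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(c2 INTEGER, day TEXT)")
# Two rows tie on day='20250102' within the same c2 group.
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, "20250102"), (1, "20250102"), (1, "20250101")])

rows = conn.execute("""
SELECT day,
       row_number() OVER (PARTITION BY c2 ORDER BY day DESC) AS row_num,
       rank()       OVER (PARTITION BY c2 ORDER BY day DESC) AS rnk
  FROM t
 ORDER BY row_num
""").fetchall()

# rank() gives both tied rows rank 1 (and the next row rank 3), so
# filtering rnk = 1 keeps both rows, matching the MAX() subquery;
# filtering row_num = 1 would keep only one of them.
tied_first = [r for r in rows if r[2] == 1]
print(tied_first)
```

This mirrors SQL2: to_char(statistic_time, 'yyyymmdd') truncates to day granularity, so many rows share the maximum day value and all of them must be returned.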

The SQL is rewritten as follows:

 SELECT t11.imei, t11.statistic_time, t11.c1, t11.c2 
   FROM (SELECT t1.*,
                rank() OVER(PARTITION BY c2 ORDER BY to_char(statistic_time, 'yyyymmdd') desc) rn
           FROM t1 
           WHERE imei='a'
          ) t11 
  WHERE t11.rn = 1;

The execution plan after the rewrite:

EXPLAIN: -> Index lookup on t11 using <auto_key0> (rn=1)  (cost=0.35..3.50 rows=10) (actual time=374.428..378.893 rows=13601 loops=1)
    -> Materialize  (cost=0.00..0.00 rows=0) (actual time=374.409..374.409 rows=100000 loops=1)
        -> Window aggregate: rank() OVER (PARTITION BY t1.c2 ORDER BY to_char(t1.statistic_time,'yyyymmdd') desc )   (actual time=127.944..221.134 rows=100000 loops=1)
            -> Sort: t1.c2, to_char(t1.statistic_time,'yyyymmdd') DESC  (cost=15046.54 rows=149348) (actual time=127.896..137.878 rows=100000 loops=1)
                -> Index lookup on t1 using PRIMARY (imei='a')  (cost=15046.54 rows=149348) (actual time=0.159..46.607 rows=100000 loops=1)

1 row in set (0.40 sec)

The plan shows that the rewritten SQL scans t1 just once with the filter applied, then sorts, window-aggregates, and materializes. The whole SQL takes 0.40 s, compared with the previous 38.33 s: a performance improvement of nearly 100x.

Summary

SQL optimization should not be bound by rigid rules, and there is no need to memorize which form is faster. What matters is understanding the execution plan, knowing where the SQL spends its time, and knowing which technique will reduce the I/O. In short, both principles and techniques are indispensable.


Enjoy GreatSQL :)

## About GreatSQL

GreatSQL is a domestically developed open-source database suitable for financial-grade applications, featuring high performance, high reliability, high usability, and high security. It can serve as an optional replacement for MySQL or Percona Server in production environments, and it is completely free and compatible with MySQL and Percona Server.

Related links: GreatSQL Community Gitee GitHub Bilibili

GreatSQL Community:

Community blog call-for-posts (with rewards), details: https://greatsql.cn/thread-100-1-1.html


Technical discussion group:

WeChat: scan the QR code to add the GreatSQL Community assistant as a WeChat contact, then send a verification message to join the group.

