SnailJob Performance Benchmark Report

  • Report date: 2025-08-25
  • Version: 1.7.2
  • Author: rpei

Test Objectives

The goal of this benchmark is to verify the maximum number of scheduled jobs that a single SnailJob server node can sustain under stable conditions, and to evaluate overall system performance under high-concurrency job scheduling.

Test Environment

🔹 Database

  • Type: Alibaba Cloud RDS MySQL 8.0
  • Instance class: mysql.n2.xlarge.1 (8 vCPU, 16 GB RAM)
  • Storage: 100 GB, InnoDB engine
  • Version: MySQL_InnoDB_8.0_Default

🔹 Application Deployment

  • Server: Alibaba Cloud ECS g6.4xlarge
  • SnailJob Server: single instance (4 vCPU, 8 GB RAM)
  • SnailJob Client: 16 instances (1 vCPU, 1 GB RAM each)

Server Configuration

Pekko configuration (snail-job-server-starter/src/main/resources/snailjob.conf)

pekko {
  actor {
    common-log-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 16
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    common-scan-task-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 64
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    netty-receive-request-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 128
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    retry-task-executor-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 32
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    retry-task-executor-call-client-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 32
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    retry-task-executor-result-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 32
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    job-task-prepare-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 128
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    job-task-executor-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 160
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    job-task-executor-call-client-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 160
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    job-task-executor-result-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 160
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    workflow-task-prepare-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 4
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }


    workflow-task-executor-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 4
        core-pool-size-factor = 1.0
        core-pool-size-max = 512
      }
      throughput = 10
    }
  }
}

System configuration file (snail-job-server-starter/src/main/resources/application.yml)

server:
  port: 8080
  servlet:
    context-path: /snail-job


spring:
  main:
    banner-mode: off
  profiles:
    active: dev
  datasource:
    name: snail_job
    ## mysql
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://snailjob-mysql-svc:3306/snail_job?useSSL=false&characterEncoding=utf8&useUnicode=true
    username: root
    password: Ab1234567
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      connection-timeout: 30000
      minimum-idle: 16
      maximum-pool-size: 256
      auto-commit: true
      idle-timeout: 30000
      pool-name: snail_job
      max-lifetime: 1800000
  web:
    resources:
      static-locations: classpath:admin/


mybatis-plus:
  typeAliasesPackage: com.aizuda.snailjob.template.datasource.persistence.po
  global-config:
    db-config:
      where-strategy: NOT_EMPTY
      capital-mode: false
      logic-delete-value: 1
      logic-not-delete-value: 0
  configuration:
    map-underscore-to-camel-case: true
    cache-enabled: true
logging:
  config: /usr/snailjob/config/logback.xml
snail-job:
  retry-pull-page-size: 2000 # batch size for each pull of retry data
  job-pull-page-size: 2000 # batch size for each pull of job data
  server-port: 17888  # server port
  log-storage: 7 # log retention (unit: days)
  rpc-type: grpc
  summary-day: 0
  server-rpc:
    keep-alive-time: 45s                # keep-alive interval: 45 s
    keep-alive-timeout: 15s             # keep-alive timeout: 15 s
    permit-keep-alive-time: 30s         # permitted keep-alive interval: 30 s
    dispatcher-tp:                      # dispatch thread pool
      core-pool-size: 100
      maximum-pool-size: 100


  client-rpc:
    keep-alive-time: 45s                # keep-alive interval: 45 s
    keep-alive-timeout: 15s             # keep-alive timeout: 15 s
    client-tp:                          # client thread pool
      core-pool-size: 100
      maximum-pool-size: 100

Test Scenario

  • Execution period of each scheduled job: 60 seconds
  • Average execution time per job: 200 milliseconds
  • Objective: measure the number of jobs a single SnailJob Server node can schedule stably
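The scenario parameters above imply a steady-state load that can be estimated with Little's Law. The sketch below plugs in the 30,000-job figure reported in the results section; the variable names are illustrative, not part of SnailJob.

```python
# Back-of-the-envelope load estimate for the benchmark scenario.
# Figures come straight from the report: 30,000 jobs, a 60 s trigger
# period, and 200 ms average execution time per run.

jobs = 30_000
period_s = 60          # each job fires once every 60 s
exec_s = 0.2           # average execution time per run, in seconds

trigger_rate = jobs / period_s          # dispatches per second
concurrent = trigger_rate * exec_s      # Little's Law: L = lambda * W

print(f"dispatch rate : {trigger_rate:.0f} triggers/s")   # → 500 triggers/s
print(f"avg in-flight : {concurrent:.0f} running tasks")  # → 100 running tasks
```

So at the reported capacity the server dispatches roughly 500 triggers per second, with only about 100 task executions in flight at any instant, which is consistent with the modest client-side resource usage observed.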

Test Results

In a single-node (4C/8G) deployment, SnailJob Server stably handled 30,000 scheduled jobs, with every job firing on its 60-second schedule. Database CPU utilization stayed at only about 20%, leaving ample headroom and indicating good scalability. By scaling server nodes horizontally, the system can in theory comfortably support 100,000+ scheduled jobs, covering the vast majority of enterprise workloads. In addition, the SnailJob Pro edition introduces a Redis cache layer and offloads logs to MongoDB-based storage, further improving scheduling capacity and stability.

Resource Consumption (screenshots cannot be shared due to company confidentiality; only the benchmark result figures are provided here)

  • SnailJob server CPU utilization: 71% average, 82% peak
  • SnailJob server memory utilization: 32%
  • Database instance IOPS utilization: 40% peak (5 s sampling interval), 50% peak (30 s sampling interval)
  • Database instance CPU utilization: 20%
  • Database instance memory utilization: 55%

Summary

SnailJob's performance bottleneck lies mainly in database storage: scheduling generates a large volume of task-batch and log writes, which puts significant pressure on database IOPS. When deploying SnailJob, we therefore recommend:

  • Deploy the database on a dedicated instance, rather than sharing it with other business services;
  • Prefer high-performance disks to improve write throughput;
  • Enable asynchronous disk flushing to further reduce database write latency.
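One common way to enable asynchronous disk flushing on MySQL is to relax InnoDB's per-commit flush policy. The fragment below is an illustrative my.cnf sketch, not a setting taken from this benchmark, and it trades a small durability window for lower write latency.

```ini
# Illustrative my.cnf fragment (an assumption, not from this benchmark):
# relax redo-log and binlog flushing to cut per-commit fsync pressure.
[mysqld]
# Flush the InnoDB redo log to disk once per second instead of on every
# commit (up to ~1 s of committed transactions may be lost on a crash).
innodb_flush_log_at_trx_commit = 2
# Let the OS decide when to sync the binlog rather than syncing each commit.
sync_binlog = 0
```

On managed services such as Alibaba Cloud RDS, these parameters are changed through the console's parameter settings and may be restricted; weigh the durability trade-off against your recovery requirements before applying them.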