bug: limit-req: conf_type/conf_version suffix makes per-consumer rate limits effectively per-route #12946

Description

@falvaradorodriguez

Current Behavior

When the limit-req plugin is configured only at the Consumer level, the effective rate limit is still scoped per route, not per consumer.

Internally, limit-req always appends ctx.conf_type and ctx.conf_version to the generated key. At runtime, ctx.conf_type includes both route and consumer (e.g. route&consumer), and conf_version includes the route config ID. As a result, a separate Redis counter is created for each route, even though the plugin is not configured on the routes themselves.

This can be observed in logs such as:

conf type: route&consumer
conf version: <route_version_id>&<consumer_version>
limit key: <base_key>route&consumer<conf_version>

Because of this behavior, a consumer configured with N req/sec can effectively make N req/sec per route, which bypasses the expected global per-consumer rate limit.

There is currently no configuration option in limit-req to prevent the conf_type/conf_version suffix from being added, making it impossible to implement a true global per-consumer leaky-bucket rate limit using limit-req.

Expected Behavior

When the limit-req plugin is configured at the Consumer level, the rate limit should apply globally to that consumer, regardless of which route is being accessed.

In this scenario, all requests made by the same consumer should share a single rate-limit bucket, so that a consumer configured with N req/sec can only make N req/sec in total, even when calling multiple routes.

More generally, users should be able to control the scope of the rate-limit key (for example, per route vs. per consumer), or at least have a way to prevent route-specific information from being implicitly included in the rate-limit key when the plugin is defined on a consumer.

This would allow limit-req to be used for true global per-consumer rate limiting, without requiring custom plugin overrides or switching to a different rate-limiting plugin.
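For illustration only, such control could look like an extra field on the consumer-level plugin config. The scope field below does not exist in limit-req today; it is purely hypothetical and only sketches the kind of option being requested:

# HYPOTHETICAL: "scope" is NOT an existing limit-req option; it is shown only
# to illustrate the requested behavior (one shared bucket per consumer)
curl -X PUT http://127.0.0.1:9180/apisix/admin/consumers/test-consumer \
  -H 'X-API-KEY: <ADMIN_API_KEY>' \
  -H 'Content-Type: application/json' \
  -d '{
    "username": "test-consumer",
    "plugins": {
      "limit-req": {
        "rate": 2,
        "burst": 1,
        "key": "remote_addr",
        "key_type": "var",
        "policy": "redis",
        "redis_host": "redis",
        "scope": "consumer"
      }
    }
  }'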

Error Logs

The following logs were captured by adding debug prints in limit-req.lua while sending requests from the same consumer to different routes.

  key = key .. ctx.conf_type .. ctx.conf_version
  core.log.error("limit key: ", key)
  core.log.error("conf type: ", ctx.conf_type)
  core.log.error("conf version: ", ctx.conf_version)

Even though limit-req is configured only on the Consumer, the plugin reports a combined configuration context (route&consumer), and the generated key includes route-specific information:

2026/01/27 13:11:59 [error] 204#204: *53475 [lua] limit-req.lua:162: phase_func(): conf version: 2008&3, client: 192.168.65.1, server: _, request: "GET /test HTTP/1.1", host: "localhost:9080"

2026/01/27 13:12:00 [error] 194#194: *53481 [lua] limit-req.lua:160: phase_func(): limit key: 192.168.65.1route&consumer2008&3, client: 192.168.65.1, server: _, request: "GET /test HTTP/1.1", host: "localhost:9080"

2026/01/27 13:12:00 [error] 194#194: *53481 [lua] limit-req.lua:161: phase_func(): conf type: route&consumer, client: 192.168.65.1, server: _, request: "GET /test HTTP/1.1", host: "localhost:9080"

Redis keys:

127.0.0.1:6379> keys *
1) "limit_req:192.168.65.1route&consumer2008&3excess"
2) "limit_req:192.168.65.1route&consumer2031&3last"
3) "limit_req:192.168.65.1route&consumer2031&3excess"
4) "limit_req:192.168.65.1route&consumer2008&3last"

These logs show that the rate-limit key is effectively namespaced by both the route and the consumer, resulting in separate rate-limit counters per route.

Steps to Reproduce

  1. Start APISIX 3.12 and Redis

Run APISIX 3.12 and a Redis instance (locally or via Docker).
Ensure Redis is reachable from APISIX; it is used as the shared counter store because the plugin is configured with "policy": "redis".
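A minimal sketch of the Redis part, assuming APISIX runs in Docker on a user-defined network named apisix-net (the network name and image tag are assumptions; any Redis reachable from APISIX works):

# start Redis on the same Docker network as APISIX
docker network create apisix-net 2>/dev/null || true
docker run -d --name redis --network apisix-net redis:7

# sanity check: ping Redis from another container on the same network
docker run --rm --network apisix-net redis:7 redis-cli -h redis ping
# expected output: PONG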

  2. Create a Consumer with key-auth and limit-req

Configure limit-req only at the Consumer level:

curl -X PUT http://127.0.0.1:9180/apisix/admin/consumers/test-consumer \
  -H 'X-API-KEY: <ADMIN_API_KEY>' \
  -H 'Content-Type: application/json' \
  -d '{
    "username": "test-consumer",
    "plugins": {
      "key-auth": {
        "key": "test-api-key"
      },
      "limit-req": {
        "rate": 2,
        "burst": 1,
        "key": "remote_addr",
        "key_type": "var",
        "policy": "redis",
        "redis_host": "redis",
        "redis_port": 6379,
        "redis_database": 0
      }
    }
  }'
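
Optionally, confirm that limit-req is attached only to the consumer (and not to any route) before continuing:

curl http://127.0.0.1:9180/apisix/admin/consumers/test-consumer \
  -H 'X-API-KEY: <ADMIN_API_KEY>'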

  3. Create two routes using key-auth

Route A:

curl -X PUT http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H 'X-API-KEY: <ADMIN_API_KEY>' \
  -H 'Content-Type: application/json' \
  -d '{
    "uri": "/route-a",
    "plugins": {
      "key-auth": {}
    },
    "upstream": {
      "type": "roundrobin",
      "nodes": {
        "httpbin.org:80": 1
      }
    }
  }'

Route B:

curl -X PUT http://127.0.0.1:9180/apisix/admin/routes/2 \
  -H 'X-API-KEY: <ADMIN_API_KEY>' \
  -H 'Content-Type: application/json' \
  -d '{
    "uri": "/route-b",
    "plugins": {
      "key-auth": {}
    },
    "upstream": {
      "type": "roundrobin",
      "nodes": {
        "httpbin.org:80": 1
      }
    }
  }'

  4. Send requests to both routes using the same API key

curl http://127.0.0.1:9080/route-a -H "apikey: test-api-key"
curl http://127.0.0.1:9080/route-b -H "apikey: test-api-key"

Repeat the requests rapidly (more than 2 requests per second in total).
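
For example, a quick loop like the following (a convenience sketch; any client that exceeds 2 req/s in total will do) alternates between the two routes. With a true per-consumer limit, most of these requests should be rejected (503 by default); instead, each route is throttled independently:

for i in $(seq 1 5); do
  curl -s -o /dev/null -w "route-a: %{http_code}\n" -H "apikey: test-api-key" http://127.0.0.1:9080/route-a
  curl -s -o /dev/null -w "route-b: %{http_code}\n" -H "apikey: test-api-key" http://127.0.0.1:9080/route-b
done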

  5. Observe the behavior

  • Requests to /route-a and /route-b are rate-limited independently.
  • The consumer can exceed the configured rate by distributing requests across routes.
  • Redis shows separate rate-limit keys per route, even though limit-req is not configured on the routes (see the check below).
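
To see the per-route split from the Redis side, list the rate-limit keys (via docker exec here, assuming the Redis container from step 1 is named redis):

docker exec -it redis redis-cli keys 'limit_req:*'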

Environment

  • APISIX version (run apisix version): 3.12.0
  • Operating system (run uname -a): Linux (Docker-based setup)
