
Rate Limits

The FluxiQ NPC API enforces rate limits to ensure stability and availability for all customers.

Limits by Endpoint

Each endpoint has specific limits based on its nature and impact on the system:

| Endpoint | Method | Limit | Window | Description |
| --- | --- | --- | --- | --- |
| /boletos | POST | 100 req | 1 minute | Boleto creation |
| /boletos | GET | 1,000 req | 1 minute | Boleto listing |
| /boletos/:nosso_numero | GET | 1,000 req | 1 minute | Boleto query |
| /boletos/:nosso_numero | PATCH | 100 req | 1 minute | Boleto update |
| /boletos/:nosso_numero | DELETE | 100 req | 1 minute | Boleto cancellation |
| /boletos/:nosso_numero/pay | POST | 100 req | 1 minute | Mark as paid |
| /settlements | GET | 1,000 req | 1 minute | Settlement listing |
| /settlements/:id | GET | 1,000 req | 1 minute | Settlement query |
| /settlements/trigger | POST | 10 req | 1 hour | Manual processing trigger |
| /health | GET | 1,000 req | 1 minute | Health check |

Trigger Endpoint

The POST /settlements/trigger endpoint has a very restrictive limit (10 req/hour) because it starts intensive processing. Use it only when necessary.
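
Because this budget is so small, it can help to guard calls on the client side as well, so your application never burns the hourly allowance by accident. A minimal sketch (the `CallGuard` class and the commented-out trigger call are illustrative, not part of the API):

```javascript
// Client-side guard for a low-limit endpoint: refuse to call more than
// `maxCalls` times per `windowMs`, tracked with an in-memory timestamp list.
class CallGuard {
  constructor(maxCalls = 10, windowMs = 60 * 60 * 1000) {
    this.maxCalls = maxCalls;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  canCall(now = Date.now()) {
    // Drop timestamps that fell out of the window, then check capacity.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    return this.timestamps.length < this.maxCalls;
  }

  record(now = Date.now()) {
    this.timestamps.push(now);
  }
}

// Usage: only fire the expensive trigger when local budget remains.
const guard = new CallGuard(10, 60 * 60 * 1000);
if (guard.canCall()) {
  guard.record();
  // await client.request("POST", "/settlements/trigger");
}
```

This does not replace the server-side limit; it only prevents your own process from exhausting it unintentionally.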


Rate Limit Headers

All responses include headers that report the current state of your limit:

| Header | Type | Description |
| --- | --- | --- |
| X-RateLimit-Limit | integer | Maximum number of allowed requests |
| X-RateLimit-Remaining | integer | Number of remaining requests in the current window |
| X-RateLimit-Reset | integer | Unix timestamp of when the limit will reset |

Header Example

http
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1706968800

Interpretation:

  • Limit of 100 requests per minute
  • 87 requests remaining
  • Limit resets at 1706968800 (Unix timestamp)
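
Since the reset value is a Unix timestamp in seconds, you can derive how long to pause before the window reopens. A small helper sketch (the function name is our own):

```javascript
// Seconds to wait until the rate-limit window resets, given the
// X-RateLimit-Reset header value (a Unix timestamp in seconds).
function secondsUntilReset(resetUnixSeconds, nowMs = Date.now()) {
  const waitMs = resetUnixSeconds * 1000 - nowMs;
  return Math.max(0, Math.ceil(waitMs / 1000));
}

// Typical use with a fetch response:
// const wait = secondsUntilReset(parseInt(response.headers.get("X-RateLimit-Reset")));
```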

Rate Limit Exceeded Response

When the limit is exceeded, the API returns status 429 Too Many Requests:

http
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1706968800
Retry-After: 45
json
{
  "success": false,
  "error": {
    "code": "RATE_LIMITED",
    "message": "Request limit exceeded",
    "details": "Wait 45 seconds before making new requests"
  },
  "meta": {
    "request_id": "req_abc123def456"
  }
}

The Retry-After header indicates how many seconds you should wait before trying again.


Retry Implementation

JavaScript with Exponential Backoff

javascript
class PixConnectClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.baseUrl = "https://api.pixconnect.com.br/api/v1/central";
    this.maxRetries = 5;
    this.baseDelay = 1000; // 1 second
  }

  async request(method, path, body = null, retryCount = 0) {
    const url = `${this.baseUrl}${path}`;

    const options = {
      method,
      headers: {
        "X-API-Key": this.apiKey,
        "Content-Type": "application/json",
      },
    };

    if (body) {
      options.body = JSON.stringify(body);
    }

    try {
      const response = await fetch(url, options);

      // Monitor rate limit headers
      const remaining = response.headers.get("X-RateLimit-Remaining");
      const limit = response.headers.get("X-RateLimit-Limit");

      if (remaining !== null) {
        console.log(`Rate limit: ${remaining}/${limit} remaining`);

        // Alert when few requests remain
        if (parseInt(remaining) < parseInt(limit) * 0.1) {
          console.warn("Warning: Near request limit!");
        }
      }

      // Handle rate limit
      if (response.status === 429) {
        if (retryCount >= this.maxRetries) {
          throw new Error("Retry limit exceeded");
        }

        const retryAfter = response.headers.get("Retry-After");
        const delay = retryAfter
          ? parseInt(retryAfter) * 1000
          : this.baseDelay * Math.pow(2, retryCount);

        console.log(`Rate limited. Waiting ${delay/1000}s (attempt ${retryCount + 1}/${this.maxRetries})`);

        await this.sleep(delay);
        return this.request(method, path, body, retryCount + 1);
      }

      // Handle server errors with retry
      if (response.status >= 500) {
        if (retryCount >= this.maxRetries) {
          const data = await response.json();
          throw new Error(`Server error: ${data.error.message}`);
        }

        const delay = this.baseDelay * Math.pow(2, retryCount);
        console.log(`Server error. Waiting ${delay/1000}s (attempt ${retryCount + 1}/${this.maxRetries})`);

        await this.sleep(delay);
        return this.request(method, path, body, retryCount + 1);
      }

      return response.json();
    } catch (error) {
      if (error.name === "TypeError" && retryCount < this.maxRetries) {
        // Network error - retry
        const delay = this.baseDelay * Math.pow(2, retryCount);
        console.log(`Network error. Waiting ${delay/1000}s`);

        await this.sleep(delay);
        return this.request(method, path, body, retryCount + 1);
      }
      throw error;
    }
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }

  // Convenience methods
  async createBoleto(boleto) {
    return this.request("POST", "/boletos", { boleto });
  }

  async getBoleto(nossoNumero) {
    return this.request("GET", `/boletos/${nossoNumero}`);
  }

  async listBoletos(params = {}) {
    const query = new URLSearchParams(params).toString();
    return this.request("GET", `/boletos?${query}`);
  }
}

// Usage
const client = new PixConnectClient(process.env.PIXCONNECT_API_KEY);

const result = await client.createBoleto({
  nosso_numero: "12345678901",
  amount_cents: 15000,
  due_date: "2026-03-15",
  payer_document: "12345678901",
  payer_name: "Joao Silva",
  beneficiary_ispb: "02992335",
  beneficiary_name: "Empresa XYZ"
});

Python with Exponential Backoff

python
import requests
import time
import os
from typing import Optional, Dict, Any

class PixConnectClient:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "https://api.pixconnect.com.br/api/v1/central"
        self.max_retries = 5
        self.base_delay = 1.0  # 1 second
        self.session = requests.Session()
        self.session.headers.update({
            "X-API-Key": api_key,
            "Content-Type": "application/json"
        })

    def request(
        self,
        method: str,
        path: str,
        json: Optional[Dict] = None,
        retry_count: int = 0
    ) -> Dict[str, Any]:
        url = f"{self.base_url}{path}"

        try:
            response = self.session.request(method, url, json=json)

            # Monitor rate limit headers
            remaining = response.headers.get("X-RateLimit-Remaining")
            limit = response.headers.get("X-RateLimit-Limit")

            if remaining is not None:
                print(f"Rate limit: {remaining}/{limit} remaining")

                # Alert when few requests remain
                if int(remaining) < int(limit) * 0.1:
                    print("Warning: Near request limit!")

            # Handle rate limit
            if response.status_code == 429:
                if retry_count >= self.max_retries:
                    raise Exception("Retry limit exceeded")

                retry_after = response.headers.get("Retry-After")
                delay = (
                    float(retry_after) if retry_after
                    else self.base_delay * (2 ** retry_count)
                )

                print(f"Rate limited. Waiting {delay}s (attempt {retry_count + 1}/{self.max_retries})")
                time.sleep(delay)
                return self.request(method, path, json, retry_count + 1)

            # Handle server errors with retry
            if response.status_code >= 500:
                if retry_count >= self.max_retries:
                    data = response.json()
                    raise Exception(f"Server error: {data['error']['message']}")

                delay = self.base_delay * (2 ** retry_count)
                print(f"Server error. Waiting {delay}s (attempt {retry_count + 1}/{self.max_retries})")
                time.sleep(delay)
                return self.request(method, path, json, retry_count + 1)

            return response.json()

        except requests.exceptions.RequestException as e:
            if retry_count < self.max_retries:
                delay = self.base_delay * (2 ** retry_count)
                print(f"Network error. Waiting {delay}s")
                time.sleep(delay)
                return self.request(method, path, json, retry_count + 1)
            raise

    # Convenience methods
    def create_boleto(self, boleto: Dict) -> Dict:
        return self.request("POST", "/boletos", {"boleto": boleto})

    def get_boleto(self, nosso_numero: str) -> Dict:
        return self.request("GET", f"/boletos/{nosso_numero}")

    def list_boletos(self, **params) -> Dict:
        from urllib.parse import urlencode  # stdlib; encodes values safely
        query = urlencode(params)
        path = f"/boletos?{query}" if query else "/boletos"
        return self.request("GET", path)


# Usage
client = PixConnectClient(os.environ["PIXCONNECT_API_KEY"])

result = client.create_boleto({
    "nosso_numero": "12345678901",
    "amount_cents": 15000,
    "due_date": "2026-03-15",
    "payer_document": "12345678901",
    "payer_name": "Joao Silva",
    "beneficiary_ispb": "02992335",
    "beneficiary_name": "Empresa XYZ"
})

Best Practices

1. Monitor Headers

Always check the X-RateLimit-Remaining header and proactively adjust your request rate:

javascript
if (remaining < limit * 0.2) {
  // Less than 20% remaining - slow down
  await sleep(1000);
}

2. Implement Exponential Backoff

When receiving 429 or 5xx errors, use exponential backoff:

Attempt 1: 1 second
Attempt 2: 2 seconds
Attempt 3: 4 seconds
Attempt 4: 8 seconds
Attempt 5: 16 seconds
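
These delays follow delay = baseDelay × 2^attempt. A one-line sketch that reproduces the schedule above:

```javascript
// Exponential backoff delay in milliseconds for a given retry attempt
// (attempt 0 = first retry), starting from a 1-second base delay.
const baseDelay = 1000;
const backoffDelay = attempt => baseDelay * Math.pow(2, attempt);

// First five retries: 1000, 2000, 4000, 8000, 16000 ms.
const delays = [0, 1, 2, 3, 4].map(backoffDelay);
```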

Jitter

Add a small random value (jitter) to the delay to prevent multiple clients from retrying at the same time:

javascript
const jitter = Math.random() * 1000;
const delay = baseDelay * Math.pow(2, retryCount) + jitter;
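
A common variant is "full jitter", where the entire delay is drawn uniformly between zero and the exponential cap rather than adding a small offset; this spreads simultaneous retries out even further. A sketch (not part of the client above):

```javascript
// "Full jitter" backoff: pick a uniformly random delay in
// [0, baseDelay * 2^attempt] instead of cap-plus-small-jitter.
function fullJitterDelay(attempt, baseDelay = 1000) {
  const cap = baseDelay * Math.pow(2, attempt);
  return Math.random() * cap;
}
```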

3. Circuit Breaker Pattern

For high-availability systems, implement a circuit breaker:

javascript
class CircuitBreaker {
  constructor(threshold = 5, timeout = 30000) {
    this.failures = 0;
    this.threshold = threshold;
    this.timeout = timeout;
    this.state = "CLOSED"; // CLOSED, OPEN, HALF_OPEN
    this.lastFailure = null;
  }

  async execute(fn) {
    if (this.state === "OPEN") {
      if (Date.now() - this.lastFailure > this.timeout) {
        this.state = "HALF_OPEN";
      } else {
        throw new Error("Circuit breaker is OPEN");
      }
    }

    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  onSuccess() {
    this.failures = 0;
    this.state = "CLOSED";
  }

  onFailure() {
    this.failures++;
    this.lastFailure = Date.now();
    if (this.failures >= this.threshold) {
      this.state = "OPEN";
    }
  }
}

// Usage
const breaker = new CircuitBreaker();

try {
  const result = await breaker.execute(() =>
    client.createBoleto(boletoData)
  );
} catch (error) {
  if (error.message === "Circuit breaker is OPEN") {
    // Service temporarily unavailable
    // Use fallback or notify user
  }
}

4. Batch Requests

When possible, group operations to reduce the number of requests:

javascript
// Avoid: multiple individual requests
for (const id of boletoIds) {
  await client.getBoleto(id); // Many requests!
}

// Prefer: use listing with filters
const result = await client.listBoletos({
  nosso_numero: boletoIds.join(","),
  limit: 100
});
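
If you have more IDs than one listing call can carry (the snippet above assumes a comma-separated nosso_numero filter with a 100-item page), you can split the IDs into fixed-size chunks and issue one request per chunk instead of one per ID. A sketch:

```javascript
// Split an array into chunks of at most `size` elements.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// One listing request per 100 IDs instead of one request per ID:
// for (const ids of chunk(boletoIds, 100)) {
//   await client.listBoletos({ nosso_numero: ids.join(","), limit: 100 });
// }
```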

5. Read Caching

Implement caching for frequent GET requests:

javascript
const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function getBoletoWithCache(nossoNumero) {
  const cacheKey = `boleto:${nossoNumero}`;
  const cached = cache.get(cacheKey);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const result = await client.getBoleto(nossoNumero);
  cache.set(cacheKey, {
    data: result.data,
    timestamp: Date.now()
  });

  return result.data;
}

Limits by Environment

Limits vary according to the environment:

| Environment | Multiplier | POST /boletos | GET endpoints | POST /settlements/trigger |
| --- | --- | --- | --- | --- |
| Production | 1x | 100/min | 1,000/min | 10/hour |
| Sandbox | 0.1x | 10/min | 100/min | 1/hour |
| Local | Unlimited | - | - | - |
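
If your client targets multiple environments, it can be convenient to keep these figures in one lookup so throttling logic and alerts use the right numbers. A sketch using the values from the table (the key names are our own; "Unlimited" is represented as Infinity):

```javascript
// Per-environment request limits, taken from the table above.
const RATE_LIMITS = {
  production: { postBoletos: 100, getEndpoints: 1000, settlementsTrigger: 10 },
  sandbox:    { postBoletos: 10,  getEndpoints: 100,  settlementsTrigger: 1 },
  local:      { postBoletos: Infinity, getEndpoints: Infinity, settlementsTrigger: Infinity },
};

function limitsFor(environment) {
  const limits = RATE_LIMITS[environment];
  if (!limits) throw new Error(`Unknown environment: ${environment}`);
  return limits;
}
```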

Test Environment

The sandbox environment has reduced limits to prevent excessive use during development. For load testing, contact support.


Requesting Limit Increase

If current limits don't meet your demand, you can request an increase:

Requirements

  1. Active production account for at least 30 days
  2. Usage history demonstrating the need
  3. A justified use case

Request Process

  1. Access the FluxiQ Portal
  2. Navigate to Settings > API > Rate Limits
  3. Click Request Increase
  4. Fill out the form with:
    • Endpoint(s) that need increase
    • Desired limit
    • Technical justification
    • Expected transaction volume

Response SLA

| Account Type | Response Time |
| --- | --- |
| Enterprise | 1 business day |
| Business | 3 business days |
| Standard | 5 business days |

Tip

Before requesting an increase, review your implementations to ensure you are using the best practices described on this page. Often, code optimizations can resolve rate limit issues.


Next Steps

FluxiQ NPC API Documentation