Rate Limits
The FluxiQ NPC API implements request limits to ensure stability and availability for all customers.
Limits by Endpoint
Each endpoint has specific limits based on its nature and impact on the system:
| Endpoint | Method | Limit | Window | Description |
|---|---|---|---|---|
| POST /boletos | POST | 100 req | 1 minute | Boleto creation |
| GET /boletos | GET | 1,000 req | 1 minute | Boleto listing |
| GET /boletos/:nosso_numero | GET | 1,000 req | 1 minute | Boleto query |
| PATCH /boletos/:nosso_numero | PATCH | 100 req | 1 minute | Boleto update |
| DELETE /boletos/:nosso_numero | DELETE | 100 req | 1 minute | Boleto cancellation |
| POST /boletos/:nosso_numero/pay | POST | 100 req | 1 minute | Mark as paid |
| GET /settlements | GET | 1,000 req | 1 minute | Settlement listing |
| GET /settlements/:id | GET | 1,000 req | 1 minute | Settlement query |
| POST /settlements/trigger | POST | 10 req | 1 hour | Manual processing trigger |
| GET /health | GET | 1,000 req | 1 minute | Health check |
Trigger Endpoint
The POST /settlements/trigger endpoint has a very restrictive limit (10 req/hour) as it triggers intensive processing. Use only when necessary.
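These per-endpoint budgets can also be enforced client-side, before a request is ever sent. The sketch below is a minimal token-bucket throttle in Python; the `TokenBucket` class and the per-endpoint wiring are illustrative, not part of any official SDK.

```python
import time

class TokenBucket:
    """Client-side throttle: allows at most `limit` calls per `window` seconds."""

    def __init__(self, limit: int, window: float):
        self.capacity = limit
        self.tokens = float(limit)
        self.refill_rate = limit / window  # tokens replenished per second
        self.last = time.monotonic()

    def acquire(self) -> None:
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.refill_rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            # Sleep just long enough for one token to accumulate
            time.sleep((1 - self.tokens) / self.refill_rate)

# One bucket per endpoint, mirroring the table above
buckets = {
    "POST /boletos": TokenBucket(100, 60),
    "GET /boletos": TokenBucket(1000, 60),
    "POST /settlements/trigger": TokenBucket(10, 3600),
}

buckets["POST /boletos"].acquire()  # returns immediately while tokens remain
```

Calling `acquire()` before each request keeps the client under the published limit instead of reacting to 429 responses after the fact.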
Rate Limit Headers
All responses include headers that report the current state of your limit:
| Header | Type | Description |
|---|---|---|
| X-RateLimit-Limit | integer | Maximum number of allowed requests |
| X-RateLimit-Remaining | integer | Number of remaining requests in the current window |
| X-RateLimit-Reset | integer | Unix timestamp of when the limit will reset |
Header Example
```http
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1706968800
```

Interpretation:
- Limit of 100 requests per minute
- 87 requests remaining
- Limit resets at 1706968800 (Unix timestamp)
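The reset timestamp is most useful once converted into a wait duration. A small Python sketch (the helper name `seconds_until_reset` is ours, not part of the API):

```python
import time

def seconds_until_reset(reset_at: int, now: float = None) -> float:
    """Seconds to wait until the rate limit window resets (never negative)."""
    if now is None:
        now = time.time()
    return max(0.0, reset_at - now)

# If the current time were 1706968790, the client should wait 10 seconds
wait = seconds_until_reset(1706968800, now=1706968790)  # 10.0
```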
Rate Limit Exceeded Response
When the limit is exceeded, the API returns status 429 Too Many Requests:
```http
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1706968800
Retry-After: 45
```

```json
{
  "success": false,
  "error": {
    "code": "RATE_LIMITED",
    "message": "Request limit exceeded",
    "details": "Wait 45 seconds before making new requests"
  },
  "meta": {
    "request_id": "req_abc123def456"
  }
}
```

The Retry-After header indicates how many seconds you should wait before trying again.
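In code, a server-provided Retry-After value should take precedence over a locally computed backoff. A minimal sketch of that choice (the `retry_delay` helper is illustrative):

```python
from typing import Optional

def retry_delay(retry_after: Optional[str], attempt: int, base_delay: float = 1.0) -> float:
    """Prefer the server's Retry-After header; otherwise fall back to exponential backoff."""
    if retry_after is not None:
        return float(retry_after)
    return base_delay * (2 ** attempt)

retry_delay("45", attempt=0)   # 45.0 - the server said exactly how long to wait
retry_delay(None, attempt=3)   # 8.0  - exponential fallback
```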
Retry Implementation
JavaScript with Exponential Backoff
```javascript
class PixConnectClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.baseUrl = "https://api.pixconnect.com.br/api/v1/central";
    this.maxRetries = 5;
    this.baseDelay = 1000; // 1 second
  }

  async request(method, path, body = null, retryCount = 0) {
    const url = `${this.baseUrl}${path}`;
    const options = {
      method,
      headers: {
        "X-API-Key": this.apiKey,
        "Content-Type": "application/json",
      },
    };
    if (body) {
      options.body = JSON.stringify(body);
    }

    try {
      const response = await fetch(url, options);

      // Monitor rate limit headers
      const remaining = response.headers.get("X-RateLimit-Remaining");
      const limit = response.headers.get("X-RateLimit-Limit");
      if (remaining !== null) {
        console.log(`Rate limit: ${remaining}/${limit} remaining`);
        // Alert when few requests remain
        if (parseInt(remaining) < parseInt(limit) * 0.1) {
          console.warn("Warning: Near request limit!");
        }
      }

      // Handle rate limit
      if (response.status === 429) {
        if (retryCount >= this.maxRetries) {
          throw new Error("Retry limit exceeded");
        }
        const retryAfter = response.headers.get("Retry-After");
        const delay = retryAfter
          ? parseInt(retryAfter) * 1000
          : this.baseDelay * Math.pow(2, retryCount);
        console.log(`Rate limited. Waiting ${delay / 1000}s (attempt ${retryCount + 1}/${this.maxRetries})`);
        await this.sleep(delay);
        return this.request(method, path, body, retryCount + 1);
      }

      // Handle server errors with retry
      if (response.status >= 500) {
        if (retryCount >= this.maxRetries) {
          const data = await response.json();
          throw new Error(`Server error: ${data.error.message}`);
        }
        const delay = this.baseDelay * Math.pow(2, retryCount);
        console.log(`Server error. Waiting ${delay / 1000}s (attempt ${retryCount + 1}/${this.maxRetries})`);
        await this.sleep(delay);
        return this.request(method, path, body, retryCount + 1);
      }

      return response.json();
    } catch (error) {
      if (error.name === "TypeError" && retryCount < this.maxRetries) {
        // Network error - retry
        const delay = this.baseDelay * Math.pow(2, retryCount);
        console.log(`Network error. Waiting ${delay / 1000}s`);
        await this.sleep(delay);
        return this.request(method, path, body, retryCount + 1);
      }
      throw error;
    }
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }

  // Convenience methods
  async createBoleto(boleto) {
    return this.request("POST", "/boletos", { boleto });
  }

  async getBoleto(nossoNumero) {
    return this.request("GET", `/boletos/${nossoNumero}`);
  }

  async listBoletos(params = {}) {
    const query = new URLSearchParams(params).toString();
    return this.request("GET", `/boletos?${query}`);
  }
}

// Usage
const client = new PixConnectClient(process.env.PIXCONNECT_API_KEY);
const result = await client.createBoleto({
  nosso_numero: "12345678901",
  amount_cents: 15000,
  due_date: "2026-03-15",
  payer_document: "12345678901",
  payer_name: "Joao Silva",
  beneficiary_ispb: "02992335",
  beneficiary_name: "Empresa XYZ"
});
```

Python with Exponential Backoff
```python
import os
import time
from typing import Any, Dict, Optional

import requests

class PixConnectClient:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "https://api.pixconnect.com.br/api/v1/central"
        self.max_retries = 5
        self.base_delay = 1.0  # 1 second
        self.session = requests.Session()
        self.session.headers.update({
            "X-API-Key": api_key,
            "Content-Type": "application/json"
        })

    def request(
        self,
        method: str,
        path: str,
        json: Optional[Dict] = None,
        retry_count: int = 0
    ) -> Dict[str, Any]:
        url = f"{self.base_url}{path}"
        try:
            response = self.session.request(method, url, json=json)

            # Monitor rate limit headers
            remaining = response.headers.get("X-RateLimit-Remaining")
            limit = response.headers.get("X-RateLimit-Limit")
            if remaining is not None:
                print(f"Rate limit: {remaining}/{limit} remaining")
                # Alert when few requests remain
                if int(remaining) < int(limit) * 0.1:
                    print("Warning: Near request limit!")

            # Handle rate limit
            if response.status_code == 429:
                if retry_count >= self.max_retries:
                    raise Exception("Retry limit exceeded")
                retry_after = response.headers.get("Retry-After")
                delay = (
                    float(retry_after) if retry_after
                    else self.base_delay * (2 ** retry_count)
                )
                print(f"Rate limited. Waiting {delay}s (attempt {retry_count + 1}/{self.max_retries})")
                time.sleep(delay)
                return self.request(method, path, json, retry_count + 1)

            # Handle server errors with retry
            if response.status_code >= 500:
                if retry_count >= self.max_retries:
                    data = response.json()
                    raise Exception(f"Server error: {data['error']['message']}")
                delay = self.base_delay * (2 ** retry_count)
                print(f"Server error. Waiting {delay}s (attempt {retry_count + 1}/{self.max_retries})")
                time.sleep(delay)
                return self.request(method, path, json, retry_count + 1)

            return response.json()
        except requests.exceptions.RequestException:
            if retry_count < self.max_retries:
                delay = self.base_delay * (2 ** retry_count)
                print(f"Network error. Waiting {delay}s")
                time.sleep(delay)
                return self.request(method, path, json, retry_count + 1)
            raise

    # Convenience methods
    def create_boleto(self, boleto: Dict) -> Dict:
        return self.request("POST", "/boletos", {"boleto": boleto})

    def get_boleto(self, nosso_numero: str) -> Dict:
        return self.request("GET", f"/boletos/{nosso_numero}")

    def list_boletos(self, **params) -> Dict:
        query = "&".join(f"{k}={v}" for k, v in params.items())
        path = f"/boletos?{query}" if query else "/boletos"
        return self.request("GET", path)

# Usage
client = PixConnectClient(os.environ["PIXCONNECT_API_KEY"])
result = client.create_boleto({
    "nosso_numero": "12345678901",
    "amount_cents": 15000,
    "due_date": "2026-03-15",
    "payer_document": "12345678901",
    "payer_name": "Joao Silva",
    "beneficiary_ispb": "02992335",
    "beneficiary_name": "Empresa XYZ"
})
```

Best Practices
1. Monitor Headers
Always check the X-RateLimit-Remaining header and proactively adjust your request rate:
```javascript
if (remaining < limit * 0.2) {
  // Less than 20% remaining - slow down
  await sleep(1000);
}
```

2. Implement Exponential Backoff
When receiving 429 or 5xx errors, use exponential backoff:
```text
Attempt 1: 1 second
Attempt 2: 2 seconds
Attempt 3: 4 seconds
Attempt 4: 8 seconds
Attempt 5: 16 seconds
```
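This schedule is simply the base delay doubled on each attempt. A quick sanity check in Python, assuming the 1-second base delay used in the clients above:

```python
base_delay = 1.0  # seconds, matching the clients above

# Delay before attempts 1 through 5
delays = [base_delay * (2 ** attempt) for attempt in range(5)]
# [1.0, 2.0, 4.0, 8.0, 16.0]
```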
Jitter
Add a small random value (jitter) to the delay to prevent multiple clients from retrying at the same time:
```javascript
const jitter = Math.random() * 1000;
const delay = baseDelay * Math.pow(2, retryCount) + jitter;
```

3. Circuit Breaker Pattern
For high-availability systems, implement a circuit breaker:
```javascript
class CircuitBreaker {
  constructor(threshold = 5, timeout = 30000) {
    this.failures = 0;
    this.threshold = threshold;
    this.timeout = timeout;
    this.state = "CLOSED"; // CLOSED, OPEN, HALF_OPEN
    this.lastFailure = null;
  }

  async execute(fn) {
    if (this.state === "OPEN") {
      if (Date.now() - this.lastFailure > this.timeout) {
        this.state = "HALF_OPEN";
      } else {
        throw new Error("Circuit breaker is OPEN");
      }
    }
    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  onSuccess() {
    this.failures = 0;
    this.state = "CLOSED";
  }

  onFailure() {
    this.failures++;
    this.lastFailure = Date.now();
    if (this.failures >= this.threshold) {
      this.state = "OPEN";
    }
  }
}

// Usage
const breaker = new CircuitBreaker();
try {
  const result = await breaker.execute(() =>
    client.createBoleto(boletoData)
  );
} catch (error) {
  if (error.message === "Circuit breaker is OPEN") {
    // Service temporarily unavailable
    // Use fallback or notify user
  }
}
```

4. Batch Requests
When possible, group operations to reduce the number of requests:
```javascript
// Avoid: multiple individual requests
for (const id of boletoIds) {
  await client.getBoleto(id); // Many requests!
}

// Prefer: use listing with filters
const result = await client.listBoletos({
  nosso_numero: boletoIds.join(","),
  limit: 100
});
```

5. Read Caching
Implement caching for frequent GET requests:
```javascript
const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function getBoletoWithCache(nossoNumero) {
  const cacheKey = `boleto:${nossoNumero}`;
  const cached = cache.get(cacheKey);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }
  const result = await client.getBoleto(nossoNumero);
  cache.set(cacheKey, {
    data: result.data,
    timestamp: Date.now()
  });
  return result.data;
}
```

Limits by Environment
Limits vary according to the environment:
| Environment | Multiplier | POST /boletos | GET endpoints | POST /settlements/trigger |
|---|---|---|---|---|
| Production | 1x | 100/min | 1,000/min | 10/hour |
| Sandbox | 0.1x | 10/min | 100/min | 1/hour |
| Local | Unlimited | - | - | - |
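The sandbox numbers follow directly from the 0.1x multiplier. A hypothetical helper showing the arithmetic (the dictionary and function names are illustrative, not part of the API):

```python
import math

# Production limits from the endpoint table above
PRODUCTION_LIMITS = {
    "POST /boletos": 100,             # per minute
    "GET endpoints": 1000,            # per minute
    "POST /settlements/trigger": 10,  # per hour
}

def sandbox_limit(endpoint: str, multiplier: float = 0.1) -> int:
    """Sandbox limit = production limit scaled by the environment multiplier."""
    return max(1, math.floor(PRODUCTION_LIMITS[endpoint] * multiplier))

sandbox_limit("POST /boletos")  # 10, matching the table
```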
Test Environment
The sandbox environment has reduced limits to prevent excessive use during development. For load testing, contact support.
Requesting Limit Increase
If current limits don't meet your demand, you can request an increase:
Requirements
- An active production account for at least 30 days
- A usage history demonstrating the need
- A justified use case
Request Process
1. Access the FluxiQ Portal
2. Navigate to Settings > API > Rate Limits
3. Click Request Increase
4. Fill out the form with:
   - Endpoint(s) that need an increase
   - Desired limit
   - Technical justification
   - Expected transaction volume
Response SLA
| Account Type | Response Time |
|---|---|
| Enterprise | 1 business day |
| Business | 3 business days |
| Standard | 5 business days |
Tip
Before requesting an increase, review your implementations to ensure you are using the best practices described on this page. Often, code optimizations can resolve rate limit issues.
Next Steps
- Error Codes - Understand all possible errors
- Authentication - Configure API Key
- Environments - Test in sandbox