Redis Connection Pooling for Azure: Surviving the 256 Connection Limit


January 3, 2026
Stefan Mentović
redis · azure · bullmq · connection-pooling · performance

Learn how to handle Azure Redis Basic tier's 256 connection limit with BullMQ queues. Production-ready patterns for connection pooling and TLS configuration.

#Redis Connection Pooling for Azure: Surviving the 256 Connection Limit

You've deployed your Node.js application to Azure with a handful of BullMQ queues. Everything works perfectly in development. Then production traffic hits, and suddenly you're seeing ECONNREFUSED errors. A quick check of your Azure Redis metrics reveals the problem: you've maxed out all 256 connections on your Basic C0 tier.

Each BullMQ worker creates three separate Redis connections: one for event subscriptions, one for regular operations, and one for blocking commands. With just 10 queues running 5 workers each, you've already burned through 150 connections before your application even starts serving traffic.

The solution isn't necessarily upgrading to a more expensive Redis tier. With proper connection pooling, you can support dozens of queues on a Basic tier instance while maintaining reliability and performance.

#Understanding the Problem

#Azure Redis Connection Limits

Azure Redis pricing tiers come with hard connection limits that can catch you off guard. Understanding these limits is critical before you deploy. According to the Azure Redis server configuration documentation, the maxclients property varies by tier:

Basic tier is the entry point for Azure Redis. The C0 size offers 256 max connections at around $16/month - attractive for cost-conscious deployments, but that connection limit requires careful planning. The C1 size bumps this to 1,000 connections, making it viable for small production workloads. Larger sizes scale up significantly: C2 offers 2,000, C3 offers 5,000, and C6 offers up to 20,000 connections.

Standard tier adds high availability with automatic failover and an SLA. Connection limits mirror the Basic tier (C0 at 256, C1 at 1,000, etc.), but you get a replicated configuration running on two VMs. The C1 size with 1,000 connections is the sweet spot for production workloads that need reliability without breaking the bank.

Premium tier is where connection limits become less of a concern. The P1 size offers 7,500 connections, P2 offers 15,000, and P3 offers 30,000 connections. Premium also includes clustering support, data persistence, and VNet integration. This is where you graduate when connection pooling optimizations are no longer enough.
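The limits above are easy to encode as a lookup that deployment scripts can use to sanity-check a plan before provisioning. A minimal sketch, using the figures quoted above (verify them against current Azure documentation before relying on them):

```typescript
// Max client connections per Azure Redis tier/size, per the figures above.
const MAX_CLIENTS: Record<string, number> = {
	'basic-c0': 256,
	'basic-c1': 1000,
	'basic-c2': 2000,
	'basic-c3': 5000,
	'basic-c6': 20000,
	'premium-p1': 7500,
	'premium-p2': 15000,
	'premium-p3': 30000,
};

/** Returns true if the planned connection count fits within the tier limit. */
function fitsTier(tier: string, plannedConnections: number): boolean {
	const limit = MAX_CLIENTS[tier];
	if (limit === undefined) throw new Error(`Unknown tier: ${tier}`);
	return plannedConnections <= limit;
}
```

Run this against your *peak* connection estimate (including deployment overhead, covered below), not your steady-state number.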

Cost vs Connections Trade-off: Don't choose a tier based solely on connection count. As noted in the Azure Redis overview, Basic tier has no SLA and is intended for development/test workloads. Standard tier costs more but adds high availability - a failed Basic tier instance means complete downtime until Azure provisions a replacement.

#Azure Redis Network Configurations

How you expose your Redis instance affects both security and connectivity. Azure offers three main network configurations, as detailed in the Azure Redis network isolation options documentation:

Public endpoint with firewall rules is the simplest setup. Your Redis instance has a public IP, but you restrict access using firewall rules that whitelist specific IP addresses or Azure services. This works for development and simple production setups, but has limitations - Container Apps and many Azure services use dynamic IPs that are difficult to whitelist reliably.

# Connection string for public endpoint
REDIS_URL="rediss://:your-access-key@your-cache.redis.cache.windows.net:6380"

Private endpoint connects Redis to your Virtual Network using Azure Private Link. Traffic never leaves the Azure backbone, and Redis gets a private IP address within your VNet. According to the documentation, private endpoints are supported on all tiers (Basic, Standard, Premium, Enterprise) and are the recommended approach. Your Container Apps environment must be deployed in the same VNet (or a peered VNet) to reach the private endpoint.

# Connection string for private endpoint (same format, but resolves to private IP)
REDIS_URL="rediss://:your-access-key@your-cache.redis.cache.windows.net:6380"

VNet injection (Premium tier only) deploys Redis directly into a subnet in your VNet. As noted in the VNet configuration guide, this gives you complete network isolation and the ability to use network security groups for fine-grained access control. However, Microsoft now recommends using Private Link instead, as it simplifies network architecture and is available on all tiers.

Recommendation: Per Microsoft's guidance, use private endpoints for production workloads. They're supported on all tiers, simplify NSG rule management, and provide better security than VNet injection. The public endpoint with firewall rules is acceptable for development only.
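A quick way to confirm a private endpoint is actually in effect is to resolve the cache hostname from inside the VNet and check that it lands in a private address range. A rough sketch (the hostname is a placeholder):

```typescript
import { lookup } from 'node:dns/promises';

/** RFC 1918 private IPv4 ranges: 10/8, 172.16/12, 192.168/16. */
function isPrivateIPv4(address: string): boolean {
	const octets = address.split('.').map(Number);
	if (octets.length !== 4 || octets.some((n) => Number.isNaN(n))) return false;
	const [a, b] = octets;
	return a === 10 || (a === 172 && b >= 16 && b <= 31) || (a === 192 && b === 168);
}

/** Throws if the cache hostname resolves to a public IP. */
async function verifyPrivateEndpoint(host: string): Promise<void> {
	const { address } = await lookup(host, { family: 4 });
	if (!isPrivateIPv4(address)) {
		throw new Error(`${host} resolved to public IP ${address}; private endpoint not in effect`);
	}
}

// Example (placeholder hostname):
// await verifyPrivateEndpoint('your-cache.redis.cache.windows.net');
```

Running this as a startup check catches the common misconfiguration where the app is deployed outside the VNet and silently falls back to the public endpoint.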

#The Container Apps Connection Trap

Here's a scenario that catches many teams: you deploy your app to Azure Container Apps, everything works fine, then you push an update. Suddenly your Redis connections spike and you hit the 256 limit.

The problem: As explained in the Container Apps revision modes documentation, Container Apps supports multiple revision modes. In multiple revision mode, old revisions stay active during deployments while traffic gradually shifts to the new revision. During this transition period, both revisions are connected to Redis - effectively doubling your connection count.

It gets worse with multiple replicas. If you're running 3 replicas for high availability, a deployment means:

  • 3 old replicas still running (during traffic shift)
  • 3 new replicas starting up
  • That's 6 instances competing for Redis connections

With 8 queues × 3 connections per worker × 6 instances = 144 connections just for workers. Add producers, schedulers, and application connections, and you're past 200 connections before the old revision even terminates.
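The arithmetic above generalizes to a small estimator you can run against your own numbers. A sketch - the factor of 2 models old and new revisions running side by side, and the per-worker figure of 3 matches the BullMQ connection model covered later in this article:

```typescript
interface DeploymentPlan {
	queues: number;
	workersPerQueue: number;
	connectionsPerWorker: number; // 3 for a BullMQ Worker
	replicas: number;
	extraConnections: number; // producers, caching, sessions, etc.
}

/**
 * Estimate peak Redis connections during a rolling deployment,
 * when old and new revisions run simultaneously (2x the replicas).
 */
function peakDeployConnections(p: DeploymentPlan): number {
	const perReplica = p.queues * p.workersPerQueue * p.connectionsPerWorker;
	return perReplica * p.replicas * 2 + p.extraConnections;
}
```

Plugging in the scenario above (8 queues, 3 connections each, 3 replicas) reproduces the 144 worker connections before any producer or application traffic is counted.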

Solutions:

  1. Use single revision mode: According to the revision modes documentation, single revision mode automatically provisions new revisions, shifts traffic once ready, and deprovisions old revisions. This is the default and recommended for most workloads.
  2. Configure traffic splitting carefully: If using multiple revision mode for blue-green deployments, use traffic splitting to shift 100% to the new revision immediately rather than gradually.
  3. Account for deployment overhead: If you need 100 connections in steady state, budget for 200+ during deployments when running multiple replicas.
  4. Implement connection pooling: The techniques in this article reduce per-instance connection counts.
# Azure CLI: Ensure single revision mode (default)
az containerapp revision set-mode \
  --name your-app \
  --resource-group your-rg \
  --mode single

# Or if using multiple mode, shift traffic immediately
az containerapp ingress traffic set \
  --name your-app \
  --resource-group your-rg \
  --revision-weight latest=100

#How BullMQ Uses Connections

According to the BullMQ documentation on connections, each queue component creates its own Redis client:

Queue (Producer)

  • 1 connection for adding jobs

Worker (Consumer)

  • 1 connection for processing jobs
  • 1 connection for blocking commands (BRPOPLPUSH)
  • 1 connection for event subscriptions

QueueScheduler (BullMQ v1.x only - later major versions removed it and handle delayed jobs in the Worker itself)

  • 2 connections for delayed job scheduling

This means a single Worker consuming from one queue uses 3 connections. Scale this across multiple queues and workers, and you quickly hit limits:

// Example scenario
const queues = 8; // email, notifications, analytics, etc.
const workersPerQueue = 3; // for parallel processing
const connectionsPerWorker = 3;

// Total connections: 8 × 3 × 3 = 72 connections
// Just for workers, not counting schedulers or producers!

Add in your application's own Redis usage for caching, sessions, or rate limiting, and you're approaching the 256 limit fast.

#Implementing Connection Pooling

The key to surviving connection limits is sharing Redis connections across queues. Instead of letting each queue create its own connections, we create singleton instances and reuse them.

#Basic Connection Configuration

First, establish a centralized configuration module that handles Redis connections:

import { Redis } from 'ioredis';

// Singleton connection instances
let sharedQueueConnection: Redis | null = null;

/**
 * Get Redis connection string from environment
 */
function getRedisUrl(): string {
	const redisUrl = process.env.REDIS_URL;
	if (!redisUrl) {
		throw new Error('REDIS_URL environment variable is required');
	}
	return redisUrl;
}

/**
 * Detect if using Azure Redis (requires TLS)
 */
function isAzureRedis(): boolean {
	return getRedisUrl().startsWith('rediss://');
}

/**
 * Get shared Redis connection for queue producers
 * All queues share this single connection
 */
export function getQueueConnection(): Redis {
	if (!sharedQueueConnection) {
		const redisUrl = getRedisUrl();
		sharedQueueConnection = new Redis(redisUrl, {
			maxRetriesPerRequest: null,
			enableReadyCheck: false,
			// Azure Redis requires TLS configuration
			...(isAzureRedis() && {
				tls: {
					rejectUnauthorized: true,
				},
			}),
		});

		sharedQueueConnection.on('error', (err: Error) => {
			console.error('[Redis Queue Connection Error]', err.message);
		});
	}

	return sharedQueueConnection;
}

This singleton pattern ensures all queue producers share a single Redis connection. The maxRetriesPerRequest: null option is required by BullMQ to prevent timeout errors during job processing.

#Configuring Azure Redis with TLS

Azure Redis requires TLS connections using the rediss:// protocol (note the double 's'). The connection string format looks like:

# Azure Redis connection string format
REDIS_URL="rediss://:password@your-cache.redis.cache.windows.net:6380"

The TLS configuration is critical. Without it, you'll encounter connection errors:

// TLS configuration for Azure Redis
const redisOptions = {
	tls: {
		// Verify the server certificate
		rejectUnauthorized: true,
	},
	maxRetriesPerRequest: null,
	enableReadyCheck: false,
};

The rejectUnauthorized: true setting ensures certificate validation, which is important for production security. Never disable this in production environments.

#Worker Configuration with Shared Connections

Workers need a different approach since BullMQ's Worker constructor creates its own connections internally. Instead of passing a connection instance, we pass a configuration object:

/**
 * Get Redis configuration for workers
 * Workers create their own connections but share config
 *
 * Note: ioredis connection options take host/port fields rather than
 * a URL string, so the connection string is parsed here.
 */
export function getRedisConfig() {
	const url = new URL(getRedisUrl());
	const azureRedis = isAzureRedis();

	return {
		connection: {
			host: url.hostname,
			port: Number(url.port) || 6379,
			username: url.username || undefined,
			password: url.password ? decodeURIComponent(url.password) : undefined,
			...(azureRedis && {
				tls: {
					rejectUnauthorized: true,
				},
			}),
			maxRetriesPerRequest: null,
			enableReadyCheck: false,
		},
	};
}

Using this configuration, create workers that share the same connection settings:

import { Worker, Job } from 'bullmq';
import { getRedisConfig, getQueueConnection } from './redis-config';

// Processor function
async function processEmailJob(job: Job) {
	const { to, subject, body } = job.data;
	// Send email logic here
	return { sent: true, messageId: 'msg-123' };
}

// Create worker with shared config
const emailWorker = new Worker('email-queue', processEmailJob, getRedisConfig());

emailWorker.on('completed', (job) => {
	console.log(`Job ${job.id} completed`);
});

emailWorker.on('failed', (job, err) => {
	console.error(`Job ${job?.id} failed:`, err.message);
});

#Creating Queue Producers

Queue producers benefit the most from connection pooling since they typically don't need multiple connections:

import { Queue } from 'bullmq';
import { getQueueConnection } from './redis-config';

// Create queues with shared connection
const emailQueue = new Queue('email-queue', {
	connection: getQueueConnection(),
});

const notificationQueue = new Queue('notification-queue', {
	connection: getQueueConnection(),
});

const analyticsQueue = new Queue('analytics-queue', {
	connection: getQueueConnection(),
});

// All three queues now share ONE Redis connection
// Instead of creating three separate connections

This reduces connection count from 3 to 1 for producers, a 66% reduction.
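To keep this pattern from drifting as the codebase grows, one option (a hypothetical helper, not part of the original setup) is a lazy registry so every call site gets a cached Queue instance instead of constructing its own:

```typescript
/** Generic lazy registry: create each named resource once and reuse it. */
function createRegistry<T>(factory: (name: string) => T): (name: string) => T {
	const cache = new Map<string, T>();
	return (name: string): T => {
		let item = cache.get(name);
		if (item === undefined) {
			item = factory(name);
			cache.set(name, item);
		}
		return item;
	};
}

// Usage with BullMQ (assumes getQueueConnection from the config module above):
// const getQueue = createRegistry(
//   (name) => new Queue(name, { connection: getQueueConnection() }),
// );
// getQueue('email-queue').add('send-email', { to: 'user@example.com' });
```

Because the factory runs at most once per queue name, no code path can accidentally create a second Queue instance (and thus a second connection) for the same queue.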

#Production-Ready Implementation

#Graceful Shutdown

Always implement graceful shutdown to properly close Redis connections when your application terminates:

/**
 * Close all shared Redis connections
 */
export async function closeRedisConnections(): Promise<void> {
	const connections: Array<Promise<void>> = [];

	if (sharedQueueConnection) {
		connections.push(
			sharedQueueConnection.quit().then(() => {
				sharedQueueConnection = null;
			}),
		);
	}

	await Promise.all(connections);
}

// Register shutdown handlers
async function gracefulShutdown(signal: string) {
	console.log(`Received ${signal}, starting graceful shutdown...`);

	try {
		// Close workers first
		await emailWorker.close();
		await notificationWorker.close();

		// Then close queue connections
		await emailQueue.close();
		await notificationQueue.close();

		// Finally close shared Redis connections
		await closeRedisConnections();

		console.log('Graceful shutdown completed');
		process.exit(0);
	} catch (error) {
		console.error('Error during shutdown:', error);
		process.exit(1);
	}
}

process.on('SIGTERM', () => gracefulShutdown('SIGTERM'));
process.on('SIGINT', () => gracefulShutdown('SIGINT'));

This ensures all jobs complete processing and connections close cleanly, preventing data loss.
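One refinement worth adding (an assumption on my part, not shown above): bound the shutdown with a timeout so a hung connection cannot keep the process alive until the orchestrator resorts to SIGKILL:

```typescript
/** Race a promise against a timeout; rejects if it doesn't settle in time. */
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
	return Promise.race([
		promise,
		new Promise<T>((_, reject) =>
			setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms),
		),
	]);
}

// In gracefulShutdown, wrap the cleanup so a stuck quit() can't hang forever:
// await withTimeout(closeRedisConnections(), 10_000);
```

If the timeout fires, the catch branch of gracefulShutdown still exits with a non-zero code, so the orchestrator restarts the instance rather than waiting indefinitely.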

#Health Checks and Monitoring

Implement health checks to monitor Redis connection status:

import { getQueueConnection } from './redis-config';

/**
 * Health check endpoint
 */
export async function checkRedisHealth(): Promise<{
	healthy: boolean;
	latency?: number;
	error?: string;
}> {
	try {
		const connection = getQueueConnection();
		const start = Date.now();

		// Simple PING command to test connection
		await connection.ping();

		const latency = Date.now() - start;

		return {
			healthy: true,
			latency,
		};
	} catch (error) {
		return {
			healthy: false,
			error: error instanceof Error ? error.message : 'Unknown error',
		};
	}
}

// Express health check endpoint example (assumes an existing Express app instance)
app.get('/health/redis', async (req, res) => {
	const health = await checkRedisHealth();

	if (health.healthy) {
		res.json({
			status: 'healthy',
			latency: health.latency,
		});
	} else {
		res.status(503).json({
			status: 'unhealthy',
			error: health.error,
		});
	}
});

#Monitoring Connection Count

Track active connections using Azure Redis metrics or the Redis INFO command:

/**
 * Get current Redis connection count
 */
export async function getConnectionCount(): Promise<number> {
	try {
		const connection = getQueueConnection();
		const info = await connection.info('clients');

		// Parse connected_clients from INFO output
		const match = info.match(/connected_clients:(\d+)/);
		return match ? parseInt(match[1], 10) : 0;
	} catch (error) {
		console.error('Failed to get connection count:', error);
		return -1;
	}
}

// Log connection count periodically
setInterval(async () => {
	const count = await getConnectionCount();
	console.log(`Active Redis connections: ${count}/256`);

	// Alert if approaching limit
	if (count > 230) {
		console.warn('WARNING: Approaching Redis connection limit!');
	}
}, 60000); // Check every minute

#Error Handling and Retry Logic

Implement robust error handling for connection failures:

import { getQueueConnection } from './redis-config';

/**
 * Execute Redis command with retry logic
 */
async function executeWithRetry<T>(operation: () => Promise<T>, maxRetries = 3, delayMs = 1000): Promise<T> {
	let lastError: Error;

	for (let attempt = 1; attempt <= maxRetries; attempt++) {
		try {
			return await operation();
		} catch (error) {
			lastError = error instanceof Error ? error : new Error('Unknown error');

			if (attempt < maxRetries) {
				console.warn(`Redis operation failed (attempt ${attempt}/${maxRetries}):`, lastError.message);
				await new Promise((resolve) => setTimeout(resolve, delayMs * attempt));
			}
		}
	}

	throw lastError!;
}

// Usage example
const job = await executeWithRetry(async () => {
	return await emailQueue.add('send-email', {
		to: 'user@example.com',
		subject: 'Welcome',
		body: 'Thanks for signing up!',
	});
});

#When to Upgrade vs Optimize

#Optimize First If:

  1. Connection count is under control: You're using less than 200 connections consistently
  2. Latency is acceptable: P95 latency under 50ms for queue operations
  3. Traffic is predictable: You can forecast scaling needs
  4. Budget is constrained: Basic tier fits your budget constraints

Optimization strategies:

  • Implement connection pooling (as shown above)
  • Reduce worker concurrency per queue
  • Consolidate similar queues
  • Use job batching to reduce operations
  • Schedule non-critical jobs during off-peak hours
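Job batching in particular maps directly onto BullMQ's addBulk, which enqueues many jobs in a single round trip instead of one add call per job. A sketch (the chunk size and queue are illustrative):

```typescript
/** Split an array into chunks of at most `size` items. */
function chunk<T>(items: T[], size: number): T[][] {
	const out: T[][] = [];
	for (let i = 0; i < items.length; i += size) {
		out.push(items.slice(i, i + size));
	}
	return out;
}

// Enqueue notifications in batches of 500 instead of one add() per recipient
// (assumes a BullMQ queue created as in the producer setup above):
// for (const batch of chunk(recipients, 500)) {
//   await notificationQueue.addBulk(
//     batch.map((to) => ({ name: 'send-email', data: { to } })),
//   );
// }
```

Fewer round trips means each shared connection is held for less time, which matters most when many producers funnel through one pooled connection.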

#Upgrade When:

  1. Hitting connection limits regularly: Consistently over 90% of connection limit
  2. Need high availability: Basic tier offers no SLA or redundancy
  3. Scaling horizontally: Multiple application instances need more connections
  4. Performance is critical: Need clustering or data persistence features

Upgrade path:

  • Basic C0 → Standard C1: Adds HA and 1,000 connections for a modest cost increase
  • Standard C1 → Standard C2: More memory, 2,000 connections
  • Consider Premium tier: Clustering, persistence, VNet support

According to Azure Redis planning FAQ, connection pooling can extend the viability of lower tiers significantly.

#Real-World Results

After implementing connection pooling in a production system running 8 BullMQ queues (with plans to scale to 12):

Before optimization:

  • 8 queues × 4 workers × 3 connections = 96 worker connections
  • 8 queue producers × 1 connection = 8 producer connections
  • Application cache/sessions = ~30 connections
  • Total: ~134 connections (52% capacity)

After optimization:

  • 8 queues × 4 workers × 3 connections = 96 worker connections (unchanged)
  • All queue producers sharing 1 connection = 1 connection
  • Application cache using connection pool = ~10 connections
  • Total: ~107 connections (42% capacity)

The 20% reduction in connections provided headroom for scaling to 12 queues without hitting limits. Additionally, connection reuse improved job throughput by reducing connection establishment overhead.

#Key Takeaways

  • Azure Redis Basic C0 tier has a hard limit of 256 connections that requires careful planning
  • BullMQ creates 3 connections per worker (subscriber, regular, blocking)
  • Connection pooling can reduce producer connections from N to 1 for N queues
  • Azure Redis requires TLS configuration with rediss:// protocol
  • Implement graceful shutdown to prevent connection leaks
  • Monitor connection count to catch issues before hitting limits
  • Consider Standard tier for high availability, not just more connections
  • Optimization through connection pooling should be your first step before upgrading

Want to optimize other aspects of your Azure architecture? Check out our guide on Azure Container Apps performance tuning or learn about building resilient distributed systems.
