Multi-Stage Docker Builds for pnpm Monorepos: Production-Ready Patterns
Master multi-stage Docker builds for pnpm monorepos with Turborepo filtering, init containers for migrations, and Alpine runtime optimization.
Your Docker image is 2GB, builds take 15 minutes, and you're deploying unused dependencies to production. Sound familiar?
Multi-stage Docker builds solve these problems by separating build dependencies from runtime needs, but implementing them for pnpm monorepos requires careful orchestration. Get it wrong and you'll face broken builds, missing dependencies, or bloated images.
In this guide, we'll explore production-tested patterns for building minimal, secure Docker images for pnpm monorepos using a three-stage architecture: Builder, Migrator, and Runtime.
Why Multi-Stage Builds Matter
Multi-stage builds aren't just about image size—they fundamentally improve your deployment pipeline:
Security: Final images contain only production dependencies and runtime code. No build tools, no test utilities, no source files that could leak sensitive information.
Performance: Smaller images mean faster deployments. A 200MB production image downloads and starts significantly faster than a 2GB development image.
Build Caching: Docker caches each stage independently. Change your application code, and only the final stage rebuilds—dependency installation is cached.
Clear Separation: Each stage has a single responsibility, making Dockerfiles easier to understand and maintain.
In practice, multi-stage builds can reduce final image size by 90% or more compared to single-stage builds; the Alpine comparison later in this guide shows where most of that saving comes from.
The Three-Stage Pattern
Our production pattern uses three distinct stages:
- Builder: Installs all dependencies, builds TypeScript/frontend code
- Migrator: Runs database migrations as an init container
- Runtime: Minimal Alpine image with only production dependencies
This separation maps cleanly onto orchestrators like Kubernetes: migrations run as init containers to completion before the main application starts.
Stage 1: Builder - Reproducible Dependencies with Corepack
The builder stage is where all compilation happens. Start with a clean, reproducible environment:
FROM node:22-alpine AS builder
# Enable pnpm via Corepack (built into Node.js)
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
WORKDIR /app
Why Corepack? Instead of npm install -g pnpm, use Corepack for reproducible pnpm versions. Corepack is built into Node.js 16+ and respects the packageManager field in package.json:
{
  "packageManager": "pnpm@9.15.0"
}
This ensures every developer and CI environment uses the exact same pnpm version—no version drift, no "works on my machine" issues.
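You don't have to edit the field by hand: Corepack can pin the version and write package.json for you in one step.
# Pin pnpm and update the packageManager field in package.json
corepack use pnpm@9.15.0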
Maximizing Layer Cache Efficiency
Copy dependency manifests before source code. This leverages Docker's layer caching:
# Copy only package manifests first
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/api/package.json ./apps/api/
COPY packages/shared/package.json ./packages/shared/
COPY packages/types/package.json ./packages/types/
# Install dependencies (cached until manifests change)
RUN pnpm install --frozen-lockfile
The --frozen-lockfile flag ensures reproducible builds: the install fails if pnpm-lock.yaml is out of sync with package.json, preventing accidental dependency updates in production builds.
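As an alternative to copying every workspace manifest by hand, pnpm's fetch command can warm the store from the lockfile alone; this sketch follows the pattern from pnpm's own documentation:
# Warm the store using only the lockfile
COPY pnpm-lock.yaml ./
RUN pnpm fetch
# Then copy everything and resolve offline against the warmed store
COPY . .
RUN pnpm install -r --offline
Because only pnpm-lock.yaml invalidates the fetch layer, adding or renaming workspace packages no longer busts the dependency cache.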
Building with Workspace Filtering
After dependencies are installed, copy the source code and build with pnpm's workspace filtering:
# Copy source code
COPY apps/api ./apps/api
COPY packages ./packages
# Build only the API and its dependencies
RUN pnpm --filter api-app... run build
The triple-dot notation (api-app...) tells pnpm to build the target workspace and all its dependencies. This is critical in monorepos—you need shared packages built before the main application.
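The direction of the dots matters: trailing dots select the package plus its dependencies, while leading dots select the package plus its dependents.
# api-app and everything it depends on (what a build needs)
pnpm --filter api-app... run build
# api-app and everything that depends on it (useful for impact testing)
pnpm --filter ...api-app run test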
Build Verification
Always verify build outputs exist before moving to the next stage:
# Verify build artifacts
RUN ls -la apps/api/dist && \
test -f apps/api/dist/index.js || \
(echo "❌ Build failed: dist/index.js missing" && exit 1)
This catches build failures early. Without verification, you might not discover missing files until runtime—in production.
Stage 2: Migrator - Database Migrations as Init Containers
Database migrations need to run before your application starts, but they shouldn't be part of the main application container. Enter the migrator stage:
FROM node:22-alpine AS migrator
WORKDIR /app
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
# Copy manifests and install dependencies
COPY --from=builder /app/package.json /app/pnpm-lock.yaml /app/pnpm-workspace.yaml ./
COPY --from=builder /app/apps/api/package.json ./apps/api/
RUN pnpm install --frozen-lockfile
# Copy migration files and configuration
COPY --from=builder /app/apps/api/dist ./apps/api/dist
COPY --from=builder /app/apps/api/drizzle ./apps/api/drizzle
COPY --from=builder /app/apps/api/drizzle.config.ts ./apps/api/
WORKDIR /app/apps/api
# Run migrations and exit
CMD ["pnpm", "run", "migrate:prod"]
Using Init Containers in Kubernetes
In Kubernetes, deploy the migrator as an init container. Because the migrator is its own build target, build and push it as a separate tag (for example, docker build --target migrator -t your-registry/api-migrator .):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  template:
    spec:
      initContainers:
        - name: migrator
          # Built and pushed separately via --target migrator
          image: your-registry/api-migrator:latest
          command: ['pnpm', 'run', 'migrate:prod']
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-secrets
                  key: connection-string
      containers:
        - name: api
          image: your-registry/api:latest
The init container runs to completion before the main container starts. If migrations fail, the pod never reaches a running state—preventing broken deployments.
Azure Container Apps Init Containers
Azure Container Apps also support init containers:
properties:
  template:
    initContainers:
      - name: migrator
        image: your-registry/api-migrator:latest
        command:
          - pnpm
          - run
          - migrate:prod
        env:
          - name: DATABASE_URL
            secretRef: database-connection-string
This pattern works across orchestrators — the migration stage is a portable, reusable component.
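For local development without an orchestrator, Docker Compose can approximate the same ordering; a minimal sketch, assuming a Compose version that supports the service_completed_successfully condition:
services:
  migrator:
    # Runs the migrations once and exits
    image: your-registry/api-migrator:latest
    environment:
      DATABASE_URL: ${DATABASE_URL}
  api:
    image: your-registry/api:latest
    depends_on:
      # api starts only after the migrator exits cleanly
      migrator:
        condition: service_completed_successfully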
Stage 3: Runtime - Alpine Linux for Minimal Images
The final runtime stage contains only what's necessary to run your application:
FROM node:22-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV PORT=4000
# Enable pnpm for production dependency installation
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
# Copy manifests
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/api/package.json ./apps/api/
COPY packages/types/package.json ./packages/types/
# Install production dependencies only
RUN pnpm install --prod --frozen-lockfile
# Copy compiled application from builder
COPY --from=builder /app/apps/api/dist ./apps/api/dist
# Verify artifacts in runtime image
RUN test -f apps/api/dist/index.js || \
(echo "❌ Runtime verification failed" && exit 1)
Why Alpine Linux?
Alpine images are 10-20x smaller than full Debian-based images:
- node:22 (Debian): ~1.1GB
- node:22-alpine: ~130MB
Alpine uses musl libc instead of glibc, which can cause compatibility issues with native modules. Test thoroughly if your application uses native dependencies.
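Native modules that compile via node-gyp also need a toolchain, which belongs in the builder stage only; a typical addition, assuming your modules build cleanly against musl:
# Builder stage only: toolchain for compiling native modules under musl
RUN apk add --no-cache python3 make g++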
Installing System Dependencies
If you need system packages (e.g., for Puppeteer, Sharp, or Canvas), install them efficiently:
RUN apk add --no-cache \
chromium \
nss \
freetype \
harfbuzz \
ca-certificates \
ttf-freefont
The --no-cache flag prevents apk from storing the package index, reducing image size by 2-5MB.
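If the Chromium above is for Puppeteer, you typically also skip its bundled browser download and point it at the system binary; the variable names shown are the current ones (older Puppeteer releases used PUPPETEER_SKIP_CHROMIUM_DOWNLOAD):
# Use the apk-installed Chromium instead of downloading a bundled copy
ENV PUPPETEER_SKIP_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser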
Security: Non-Root User
Never run containers as root in production:
RUN addgroup --system --gid 1001 nodejs && \
adduser --system --uid 1001 appuser && \
chown -R appuser:nodejs /app
USER appuser
This follows the principle of least privilege. If your application is compromised, the attacker has limited permissions within the container.
Health Checks
Add health checks for orchestrator integration:
EXPOSE 4000
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
CMD wget -qO- http://localhost:4000/health || exit 1
CMD ["node", "apps/api/dist/index.js"]
Docker itself honors the HEALTHCHECK instruction to decide when a container is healthy; Kubernetes ignores it and relies on its own liveness and readiness probes, and Azure Container Apps likewise defines probes in its own configuration.
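An equivalent readiness probe on the Kubernetes container spec, assuming the same /health endpoint, is a short sketch:
readinessProbe:
  httpGet:
    path: /health
    port: 4000
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3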
Complete Example: Next.js Application
Here's a complete multi-stage build for a Next.js application with public environment variables:
# =========================================================
# Stage 1: Builder
# =========================================================
FROM node:22-alpine AS builder
# Build-time arguments for Next.js public variables
ARG NEXT_PUBLIC_API_URL
ARG NEXT_PUBLIC_STRIPE_KEY
ENV NEXT_PUBLIC_API_URL=$NEXT_PUBLIC_API_URL
ENV NEXT_PUBLIC_STRIPE_KEY=$NEXT_PUBLIC_STRIPE_KEY
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
WORKDIR /app
# Copy manifests
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml turbo.json ./
COPY apps/web/package.json ./apps/web/
COPY packages/ui/package.json ./packages/ui/
# Install dependencies
RUN pnpm install --frozen-lockfile
# Copy source and build
COPY . .
RUN pnpm turbo build --filter=web-app...
# =========================================================
# Stage 2: Migrator
# =========================================================
FROM node:22-alpine AS migrator
WORKDIR /app
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
COPY --from=builder /app/package.json /app/pnpm-lock.yaml /app/pnpm-workspace.yaml ./
COPY --from=builder /app/apps/web/package.json ./apps/web/
RUN pnpm install --frozen-lockfile
COPY --from=builder /app/apps/web/drizzle.config.ts ./apps/web/
COPY --from=builder /app/apps/web/drizzle ./apps/web/drizzle
COPY --from=builder /app/packages ./packages
WORKDIR /app/apps/web
CMD ["pnpm", "run", "db:migrate"]
# =========================================================
# Stage 3: Runtime
# =========================================================
FROM node:22-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
# Create non-root user
RUN addgroup --system --gid 1001 nodejs && \
adduser --system --uid 1001 nextjs
USER nextjs
# Copy Next.js standalone output
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/public ./apps/web/public
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
CMD wget -qO- http://localhost:3000 || exit 1
CMD ["node", "apps/web/server.js"]
Next.js Standalone Output: Enable in next.config.js:
module.exports = {
  output: 'standalone',
};
This creates a minimal server bundle with only required dependencies—perfect for Docker images.
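One monorepo caveat: standalone output traces files from the project directory by default, so dependencies hoisted to the workspace root can be missed. Widening the tracing root fixes this (the option is top-level in recent Next.js versions; older releases nest it under experimental):
const path = require('path');

module.exports = {
  output: 'standalone',
  // Trace from the monorepo root so hoisted node_modules are included
  outputFileTracingRoot: path.join(__dirname, '../../'),
};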
Layer Caching Strategies for Fast CI/CD
Optimize build order to maximize cache hits:
1. Dependency Caching
Always copy package.json files before source code:
# ✅ GOOD: Manifests first
COPY package.json pnpm-lock.yaml ./
RUN pnpm install
# Later...
COPY src ./src
# ❌ BAD: Source code invalidates dependency cache
COPY . .
RUN pnpm install
2. BuildKit Cache Mounts
Use BuildKit's cache mounts for pnpm store:
# syntax=docker/dockerfile:1
FROM node:22-alpine AS builder
RUN --mount=type=cache,id=pnpm,target=/pnpm/store \
pnpm install --frozen-lockfile
This persists pnpm's store across builds, dramatically reducing install time.
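The cache target must match pnpm's store location. When PNPM_HOME is set (as in the builder stage above), pnpm defaults its store to $PNPM_HOME/store, i.e. /pnpm/store here; you can verify inside the image if in doubt:
# Prints the resolved store directory (under /pnpm/store here)
RUN pnpm store path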
3. Multi-Platform Caching
If you build for multiple architectures (amd64, arm64) in a single buildx invocation, one registry cache reference covers both platforms; separate per-architecture cache keys only make sense when each platform builds in its own CI job:
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --cache-from type=registry,ref=myapp:buildcache \
  --cache-to type=registry,ref=myapp:buildcache,mode=max \
  --push \
  -t myapp:latest .
Advanced: Parallel Stage Builds
Modern Docker BuildKit can build stages in parallel:
FROM builder AS api-build
RUN pnpm --filter api-app... run build
FROM builder AS web-build
RUN pnpm --filter web-app... run build
FROM runner AS final
COPY --from=api-build /app/apps/api/dist ./apps/api/dist
COPY --from=web-build /app/apps/web/.next ./apps/web/.next
BuildKit detects the independent stages and builds them concurrently—reducing overall build time.
Error Handling and Debugging
Build Verification
Add explicit checks after critical steps:
RUN pnpm run build && \
test -d dist || (echo "Build output missing" && exit 1) && \
test -f dist/index.js || (echo "Entry point missing" && exit 1)
Debugging Failed Builds
Build to a specific stage for debugging:
docker build --target builder -t debug-builder .
docker run -it debug-builder sh
This lets you inspect the builder stage without running subsequent stages.
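BuildKit also collapses step output by default, which hides the details of a failing command; plain progress output (and, when needed, a cold cache) restores the full logs:
docker build --target builder --progress=plain --no-cache -t debug-builder .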
Common Pitfalls
Missing workspace dependencies: Always use --filter package... (with triple dots) to include dependencies:
# ✅ CORRECT: Includes dependencies
pnpm --filter web-app... run build
# ❌ WRONG: Only builds web-app
pnpm --filter web-app run build
NODE_ENV in builder: Don't set NODE_ENV=production in the builder stage—it prevents devDependencies from installing:
# ❌ BAD: Can't build without TypeScript
FROM node:22-alpine AS builder
ENV NODE_ENV=production
RUN pnpm install # Skips devDependencies!
# ✅ GOOD: Set in runtime stage only
FROM node:22-alpine AS builder
RUN pnpm install
FROM node:22-alpine AS runner
ENV NODE_ENV=production
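If the builder stage genuinely needs NODE_ENV=production at build time, pnpm can still be forced to install devDependencies:
# Override the NODE_ENV-based default and install devDependencies too
RUN pnpm install --frozen-lockfile --prod=false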
Key Takeaways
- Use three-stage pattern: Builder for compilation, Migrator for init containers, Runtime for execution
- Enable Corepack for reproducible pnpm versions across environments
- Leverage --frozen-lockfile to prevent unexpected dependency changes
- Use --filter package... with triple dots to include workspace dependencies
- Deploy migrations as init containers for safe, ordered startup
- Alpine Linux reduces image size by 90%+, but test native dependencies carefully
- Verify build artifacts at stage boundaries to catch failures early
- Structure Dockerfile for maximum layer cache efficiency
- Never run production containers as root—create dedicated users
- Add health checks for orchestrator integration
Ready to optimize your Docker builds? Check out the official pnpm Docker guide and Docker multi-stage build documentation for more advanced patterns.
Multi-stage builds transform Docker from a deployment tool into a build system—embracing them unlocks faster CI/CD, smaller images, and more secure production deployments.