Docker Compose for Production - Building a Production-Ready Configuration

Advanced | 90 min read | 2025.12.02

Docker Compose is useful not only for development environments; it can also serve small- to medium-scale production deployments effectively. In this article we build a production-ready configuration that accounts for security, performance, and operability, while clarifying how it differs from the development setup.

What you will learn in this article

  1. Separating development and production configuration
  2. Optimizing images with multi-stage builds
  3. Secrets management and security
  4. Health checks and automatic recovery
  5. Log management and monitoring
  6. Backup and recovery strategy

Project structure

project/
├── docker/
│   ├── app/
│   │   └── Dockerfile
│   ├── nginx/
│   │   ├── Dockerfile
│   │   └── nginx.conf
│   └── postgres/
│       └── init.sql
├── docker-compose.yml           # Base configuration
├── docker-compose.override.yml  # Development (loaded automatically)
├── docker-compose.prod.yml      # Production
├── docker-compose.staging.yml   # Staging
├── .env.example                 # Environment variable template
└── src/
    └── ...

Separating development and production environments

Base configuration (docker-compose.yml)

# docker-compose.yml - Shared configuration
services:
  app:
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - backend

  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - backend
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    networks:
      - backend

networks:
  backend:
    driver: bridge

volumes:
  postgres_data:

Development configuration (docker-compose.override.yml)

# docker-compose.override.yml - Development environment
# Loaded automatically by docker compose up
services:
  app:
    build:
      target: development
    volumes:
      - .:/app
      - /app/node_modules  # keep the container's node_modules (not hidden by the bind mount)
    environment:
      - NODE_ENV=development
      - DEBUG=app:*
    ports:
      - "3000:3000"
      - "9229:9229"  # Para debugger
    command: npm run dev

  db:
    environment:
      - POSTGRES_USER=dev
      - POSTGRES_PASSWORD=dev
      - POSTGRES_DB=app_dev
    ports:
      - "5432:5432"

  redis:
    ports:
      - "6379:6379"

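With this split, the development stack starts with a single command, while production must name its files explicitly so the override file is never picked up by accident. A quick sketch of both invocations:

# Development: docker-compose.override.yml is merged in automatically
docker compose up -d

# Production: list the files explicitly; later files override earlier ones
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
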
Production configuration (docker-compose.prod.yml)

# docker-compose.prod.yml - Production environment
services:
  app:
    build:
      target: production
    environment:
      - NODE_ENV=production
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 1G
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "5"
    secrets:
      - db_password
      - app_secret

  nginx:
    image: nginx:1.25-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./docker/nginx/ssl:/etc/nginx/ssl:ro
    depends_on:
      - app
    networks:
      - backend
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 256M

  db:
    environment:
      - POSTGRES_USER_FILE=/run/secrets/db_user
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
      - POSTGRES_DB=app_production
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./docker/postgres/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4G
    secrets:
      - db_user
      - db_password

  redis:
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M

secrets:
  db_user:
    file: ./secrets/db_user.txt
  db_password:
    file: ./secrets/db_password.txt
  app_secret:
    file: ./secrets/app_secret.txt

volumes:
  postgres_data:
  redis_data:

Multi-stage builds

Node.js application

# docker/app/Dockerfile
# ========== Base Stage ==========
FROM node:20-alpine AS base
WORKDIR /app
RUN apk add --no-cache libc6-compat

# ========== Dependencies Stage ==========
FROM base AS deps
COPY package.json package-lock.json ./
# Install production deps first, stash them, then install everything for the build/dev stages
RUN npm ci --omit=dev && \
    cp -R node_modules /tmp/prod_modules && \
    npm ci

# ========== Development Stage ==========
FROM base AS development
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ENV NODE_ENV=development
EXPOSE 3000 9229
CMD ["npm", "run", "dev"]

# ========== Builder Stage ==========
FROM base AS builder
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# ========== Production Stage ==========
FROM base AS production

# Create a non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

# Production dependencies only
COPY --from=deps /tmp/prod_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./

# Change file ownership
RUN chown -R nextjs:nodejs /app
USER nextjs

ENV NODE_ENV=production
ENV PORT=3000
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1

CMD ["node", "dist/main.js"]

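Each stage can also be built on its own, which is handy in CI or when debugging a single stage. A sketch (the image tags are placeholders):

docker build --target development -f docker/app/Dockerfile -t app:dev .
docker build --target production -f docker/app/Dockerfile -t app:prod .
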
Image size comparison

Effect of the multi-stage build

Stage                            Size      Reduction
Development (all dependencies)   1.2 GB    -
Build stage                      1.5 GB    -
Production (final)               180 MB    85%

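To reproduce a comparison like this for your own images, docker itself reports the sizes, and docker history breaks a final image down by layer (the app:prod tag below comes from the build sketch above):

docker image ls app
docker history app:prod
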
Secrets management

Docker Secrets (recommended)

# Create the secret files
mkdir -p secrets
echo "app_user" > secrets/db_user.txt
echo "$(openssl rand -base64 32)" > secrets/db_password.txt
echo "$(openssl rand -base64 64)" > secrets/app_secret.txt

# Restrict the file permissions
chmod 600 secrets/*

# docker-compose.prod.yml
services:
  app:
    secrets:
      - db_password
      - app_secret
    environment:
      # Secrets are read from files
      - DATABASE_PASSWORD_FILE=/run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
  app_secret:
    file: ./secrets/app_secret.txt

// Reading secrets in the application
import { readFileSync } from 'fs';

function getSecret(name: string): string {
  const filePath = `/run/secrets/${name}`;
  try {
    return readFileSync(filePath, 'utf8').trim();
  } catch {
    // In development, fall back to an environment variable
    return process.env[name.toUpperCase()] || '';
  }
}

const dbPassword = getSecret('db_password');

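To confirm a secret is actually mounted into the running container, you can check for the file directly, without printing its value:

docker compose -f docker-compose.yml -f docker-compose.prod.yml exec app \
  test -f /run/secrets/db_password && echo "secret mounted"
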
Secure environment variable management

# .env.example (template)
NODE_ENV=production
DB_HOST=db
DB_PORT=5432
DB_NAME=app_production
# Secrets are not listed here; they are read from files

# .env.prod (production - add to .gitignore)
NODE_ENV=production
DB_HOST=db
DB_PORT=5432
DB_NAME=app_production

# docker-compose.prod.yml
services:
  app:
    env_file:
      - .env.prod

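To see exactly which values from .env.prod end up in the rendered configuration, docker compose can print the merged result; a quick sketch:

docker compose --env-file .env.prod \
  -f docker-compose.yml -f docker-compose.prod.yml config | grep -A 5 environment
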
Security configuration

Nginx reverse proxy

# docker/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Log format
    log_format main_json escape=json '{'
        '"time": "$time_iso8601",'
        '"remote_addr": "$remote_addr",'
        '"method": "$request_method",'
        '"uri": "$request_uri",'
        '"status": $status,'
        '"body_bytes_sent": $body_bytes_sent,'
        '"request_time": $request_time,'
        '"upstream_response_time": "$upstream_response_time",'
        '"user_agent": "$http_user_agent"'
    '}';

    access_log /var/log/nginx/access.log main_json;

    # Performance settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline';" always;

    # Hide server information
    server_tokens off;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml application/json application/javascript application/xml;

    # Rate limiting
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

    upstream app {
        server app:3000;
        keepalive 32;
    }

    server {
        listen 80;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        http2 on;
        server_name example.com;

        ssl_certificate /etc/nginx/ssl/cert.pem;
        ssl_certificate_key /etc/nginx/ssl/key.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
        ssl_prefer_server_ciphers off;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 1d;

        # HSTS
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

        location / {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;
        }

        location /api/ {
            limit_req zone=api burst=20 nodelay;
            proxy_pass http://app;
            # ... other settings
        }

        location /api/auth/login {
            limit_req zone=login burst=5 nodelay;
            proxy_pass http://app;
        }

        # Static files (this directory must also be mounted into the nginx container)
        location /static/ {
            alias /app/static/;
            expires 30d;
            add_header Cache-Control "public, immutable";
        }

        # Health check
        location /health {
            access_log off;
            proxy_pass http://app;
        }
    }
}

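Before reloading nginx in production, it is worth validating the configuration inside the running container; nginx only applies a reload if the test passes:

# Test the configuration, then reload without dropping connections
docker compose -f docker-compose.yml -f docker-compose.prod.yml exec nginx nginx -t
docker compose -f docker-compose.yml -f docker-compose.prod.yml exec nginx nginx -s reload
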
Container security hardening

# docker-compose.prod.yml
services:
  app:
    # Read-only root filesystem
    read_only: true
    tmpfs:
      - /tmp
      - /app/tmp

    # Capability restrictions
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE  # add only what is needed (only required for ports below 1024)

    # Security options
    security_opt:
      - no-new-privileges:true

    # Non-root user (also set in the Dockerfile)
    user: "1001:1001"

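A quick way to verify these restrictions actually apply to the running container is docker inspect (the container name is an assumption; check docker ps for yours):

docker inspect --format 'ReadOnly: {{.HostConfig.ReadonlyRootfs}} CapDrop: {{.HostConfig.CapDrop}}' project-app-1
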
Health checks and automatic recovery

Health check configuration

# docker-compose.prod.yml
services:
  app:
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d app_production"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s

  redis:
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3

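Docker tracks the result of each health check per container; the current status and the recent probe output can be read with docker inspect (the container name is an assumption):

docker inspect --format '{{.State.Health.Status}}' project-app-1
docker inspect --format '{{json .State.Health.Log}}' project-app-1
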
Health check endpoint in the application

// src/health.ts
import { Router } from 'express';
import { Pool } from 'pg';
import Redis from 'ioredis';

const router = Router();
const pool = new Pool();
const redis = new Redis();

interface HealthStatus {
  status: 'healthy' | 'unhealthy';
  timestamp: string;
  checks: {
    database: { status: string; latency?: number };
    redis: { status: string; latency?: number };
    memory: { used: number; total: number };
  };
}

router.get('/health', async (req, res) => {
  const health: HealthStatus = {
    status: 'healthy',
    timestamp: new Date().toISOString(),
    checks: {
      database: { status: 'unknown' },
      redis: { status: 'unknown' },
      memory: {
        used: process.memoryUsage().heapUsed,
        total: process.memoryUsage().heapTotal,
      },
    },
  };

  try {
    // Database check
    const dbStart = Date.now();
    await pool.query('SELECT 1');
    health.checks.database = {
      status: 'healthy',
      latency: Date.now() - dbStart,
    };
  } catch {
    health.checks.database = { status: 'unhealthy' };
    health.status = 'unhealthy';
  }

  try {
    // Redis check
    const redisStart = Date.now();
    await redis.ping();
    health.checks.redis = {
      status: 'healthy',
      latency: Date.now() - redisStart,
    };
  } catch {
    health.checks.redis = { status: 'unhealthy' };
    health.status = 'unhealthy';
  }

  const statusCode = health.status === 'healthy' ? 200 : 503;
  res.status(statusCode).json(health);
});

// Liveness probe (is the process alive?)
router.get('/health/live', (req, res) => {
  res.status(200).json({ status: 'alive' });
});

// Readiness probe (can the service take traffic?)
router.get('/health/ready', async (req, res) => {
  try {
    await pool.query('SELECT 1');
    await redis.ping();
    res.status(200).json({ status: 'ready' });
  } catch {
    res.status(503).json({ status: 'not ready' });
  }
});

export default router;

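During development the three endpoints can be exercised directly; only /health returns 503 when a dependency is down, while /health/live keeps answering 200 as long as the process runs:

curl -s http://localhost:3000/health
curl -i http://localhost:3000/health/live
curl -i http://localhost:3000/health/ready
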
Log management

Structured logging

// src/logger.ts
import pino from 'pino';

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  formatters: {
    level: (label) => ({ level: label }),
  },
  timestamp: () => `,"timestamp":"${new Date().toISOString()}"`,
  base: {
    service: 'app',
    version: process.env.APP_VERSION || '1.0.0',
  },
});

export default logger;

Log aggregation with Fluentd

# docker-compose.prod.yml
services:
  app:
    logging:
      driver: "fluentd"
      options:
        fluentd-address: "localhost:24224"
        tag: "app.{{.Name}}"
        fluentd-async: "true"  # async so the app is not blocked if fluentd is down

  fluentd:
    image: fluent/fluentd:v1.16-debian
    volumes:
      - ./docker/fluentd/fluent.conf:/fluentd/etc/fluent.conf:ro
      - fluentd_logs:/fluentd/log
    ports:
      - "24224:24224"
    networks:
      - backend

volumes:
  fluentd_logs:

# docker/fluentd/fluent.conf
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<filter app.**>
  @type parser
  key_name log
  reserve_data true
  <parse>
    @type json
  </parse>
</filter>

<match app.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
  logstash_prefix app
  <buffer>
    @type file
    path /fluentd/log/buffer
    flush_interval 5s
  </buffer>
</match>

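Note that the stock fluent/fluentd image does not ship the elasticsearch output plugin; a small custom image installs it. A sketch, building the Dockerfile from stdin:

docker build -t fluentd-es:v1.16 - <<'EOF'
FROM fluent/fluentd:v1.16-debian
USER root
RUN fluent-gem install fluent-plugin-elasticsearch --no-document
USER fluent
EOF
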
Backup and recovery

Database backup script

#!/bin/bash
# scripts/backup-db.sh

set -euo pipefail

BACKUP_DIR="/backups/postgres"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/backup_${DATE}.sql.gz"
RETENTION_DAYS=7

# Create the backup directory
mkdir -p "${BACKUP_DIR}"

# Run the backup
docker compose -f docker-compose.prod.yml exec -T db \
  pg_dump -U postgres -d app_production | gzip > "${BACKUP_FILE}"

# Delete backups older than the retention window
find "${BACKUP_DIR}" -name "backup_*.sql.gz" -mtime +${RETENTION_DAYS} -delete

# Verify the backup is a valid gzip archive
if gzip -t "${BACKUP_FILE}"; then
  echo "Backup successful: ${BACKUP_FILE}"
  echo "Size: $(du -h "${BACKUP_FILE}" | cut -f1)"
else
  echo "Backup verification failed!"
  exit 1
fi

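To run the backup on a schedule, one option is a cron entry on the host (the path and schedule below are assumptions):

# crontab entry: run the backup every day at 02:00 and log the output
0 2 * * * /opt/project/scripts/backup-db.sh >> /var/log/db-backup.log 2>&1
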
Restore script

#!/bin/bash
# scripts/restore-db.sh

set -euo pipefail

# Default to empty so 'set -u' does not abort before the usage message
BACKUP_FILE="${1:-}"

if [ -z "${BACKUP_FILE}" ]; then
  echo "Usage: $0 <backup_file>"
  exit 1
fi

echo "Restoring from: ${BACKUP_FILE}"
echo "WARNING: This will overwrite the current database!"
read -p "Continue? (y/N): " confirm

if [ "${confirm}" != "y" ]; then
  echo "Aborted."
  exit 0
fi

# Run the restore
gunzip -c "${BACKUP_FILE}" | docker compose -f docker-compose.prod.yml exec -T db \
  psql -U postgres -d app_production

echo "Restore completed."

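Usage is the script plus a backup path; the file name below is an example of what backup-db.sh produces:

ls -lh /backups/postgres/
./scripts/restore-db.sh /backups/postgres/backup_20250101_020000.sql.gz
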
Deployment commands

Deployment script

#!/bin/bash
# scripts/deploy.sh

set -euo pipefail

echo "=== Starting deployment ==="

# Validate the configuration files
docker compose -f docker-compose.yml -f docker-compose.prod.yml config --quiet

# Build the latest images
echo "Building images..."
docker compose -f docker-compose.yml -f docker-compose.prod.yml build --pull

# Back up the database
echo "Creating database backup..."
./scripts/backup-db.sh

# Recreate the app service (a true rolling update needs multiple replicas)
echo "Updating app service..."
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d --no-deps app

# Wait for the app to come up
echo "Waiting for health check..."
sleep 10

# Verify the health check
if curl -sf http://localhost/health > /dev/null; then
  echo "=== Deployment successful ==="
else
  echo "=== Health check failed! Rolling back... ==="
  docker compose -f docker-compose.yml -f docker-compose.prod.yml rollback
  exit 1
fi

# Remove unused images
docker image prune -f

echo "=== Deployment completed ==="

Standardizing operations with a Makefile

# Makefile
.PHONY: dev prod build deploy logs backup restore clean

# Development environment
dev:
	docker compose up -d

dev-logs:
	docker compose logs -f

# Production environment
prod:
	docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

prod-logs:
	docker compose -f docker-compose.yml -f docker-compose.prod.yml logs -f

# Build
build:
	docker compose -f docker-compose.yml -f docker-compose.prod.yml build --no-cache

# Deploy
deploy:
	./scripts/deploy.sh

# Backup
backup:
	./scripts/backup-db.sh

restore:
	./scripts/restore-db.sh $(FILE)

# Cleanup
clean:
	docker compose down -v --remove-orphans
	docker image prune -af
	docker volume prune -f

Quick reference of operational commands

# Start
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

# Stop
docker compose -f docker-compose.yml -f docker-compose.prod.yml down

# View logs
docker compose -f docker-compose.yml -f docker-compose.prod.yml logs -f app

# Scale out
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d --scale app=3

# Open a shell in the container
docker compose -f docker-compose.yml -f docker-compose.prod.yml exec app sh

# Resource usage
docker stats

# Validate the configuration
docker compose -f docker-compose.yml -f docker-compose.prod.yml config

Summary

Here are the key points for running Docker Compose in production.

Security

  1. Secrets management: use Docker Secrets
  2. Non-root user: no elevated privileges inside the container
  3. Least privilege: grant only the permissions that are actually needed
  4. Image optimization: shrink the attack surface with multi-stage builds

Reliability

  1. Health checks: configure them for every service
  2. Restart policy: automatic recovery on failure
  3. Resource limits: protection against the OOM killer
  4. Backups: regular automated backups

Operability

  1. Log management: structured logs and log aggregation
  2. Monitoring: metrics collection
  3. Deployment automation: scripted, repeatable deploys
  4. Configuration separation: one configuration file per environment

With the right configuration, Docker Compose can serve production environments effectively.
