Serverless architecture is a modern approach that lets you build scalable applications while reducing the burden of infrastructure management. In this article, we walk through serverless design patterns and implementation best practices.
Serverless Fundamentals
Architecture Comparison
flowchart TB
  subgraph Traditional["Traditional Server Architecture"]
    LB1["Load Balancer"]
    LB1 --> S1["Server (EC2)<br/>Running 24/7"]
    LB1 --> S2["Server (EC2)<br/>Running 24/7"]
    LB1 --> S3["Server (EC2)<br/>Running 24/7"]
S1 & S2 & S3 --> DB1["DB"]
end
  subgraph Serverless["Serverless Architecture"]
    API["API Gateway"]
    API --> L1["Lambda Function<br/>(On demand)"]
    API --> L2["Lambda Function<br/>(On demand)"]
    API --> L3["Lambda Function<br/>(On demand)"]
L1 & L2 & L3 --> DB2["DynamoDB / Aurora<br/>(Serverless)"]
end
FaaS/BaaS Classification
| Category | Example Services | Characteristics |
|---|---|---|
| FaaS | Lambda, Cloud Functions | Event-driven function execution |
| Edge Functions | Cloudflare Workers, Vercel Edge | Execution at the CDN edge |
| Container-based | AWS Fargate, Cloud Run | Container-based execution |
| BaaS | Firebase, Supabase | Managed backend functionality |
Platform Comparison
Key Service Characteristics
| Feature | Lambda | Workers | Vercel | Cloud Run |
|---|---|---|---|---|
| Runtime | Node/Python, etc. | V8 isolates | Node/Edge | Containers |
| Cold start | 100ms–1s | <5ms | <50ms | 1–5s |
| Memory limit | 10 GB | 128 MB | 1 GB | 32 GB |
| Execution time limit | 15 min | 30 s CPU / unlimited duration | 10 s / 300 s | 60 min |
| Location | Regional | Global | Global | Regional |
| Pricing | Execution time | Requests | Execution time | vCPU-seconds |
AWS Lambda
Basic Lambda Function
// handler.ts - AWS Lambda Handler
import { APIGatewayProxyEvent, APIGatewayProxyResult, Context } from 'aws-lambda';
// Basic handler
export const hello = async (
event: APIGatewayProxyEvent,
context: Context
): Promise<APIGatewayProxyResult> => {
console.log('Event:', JSON.stringify(event, null, 2));
return {
statusCode: 200,
headers: {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
},
body: JSON.stringify({
message: 'Hello from Lambda!',
requestId: context.awsRequestId,
}),
};
};
// RESTful API handler
export const users = async (
event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
const method = event.httpMethod;
const userId = event.pathParameters?.id;
try {
switch (method) {
case 'GET':
if (userId) {
return await getUser(userId);
}
return await listUsers(event.queryStringParameters);
case 'POST':
const body = JSON.parse(event.body || '{}');
return await createUser(body);
case 'PUT':
if (!userId) return badRequest('User ID required');
return await updateUser(userId, JSON.parse(event.body || '{}'));
case 'DELETE':
if (!userId) return badRequest('User ID required');
return await deleteUser(userId);
default:
return {
statusCode: 405,
body: JSON.stringify({ error: 'Method not allowed' }),
};
}
} catch (error) {
console.error('Error:', error);
return {
statusCode: 500,
body: JSON.stringify({ error: 'Internal server error' }),
};
}
};
// Helper function
function badRequest(message: string): APIGatewayProxyResult {
return {
statusCode: 400,
body: JSON.stringify({ error: message }),
};
}
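The handler above delegates to helpers such as getUser, listUsers, and createUser that are not shown. As a point of reference, here is a minimal sketch of what getUser could look like with the DynamoDB DocumentClient, assuming a users table keyed on id and a TABLE_NAME environment variable (both illustrative, not part of the original code):
// users-repo.ts - hypothetical getUser helper for the handler above
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand } from '@aws-sdk/lib-dynamodb';
import { APIGatewayProxyResult } from 'aws-lambda';
// Module-scope client so warm invocations reuse the connection
const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));
export async function getUser(userId: string): Promise<APIGatewayProxyResult> {
  const { Item } = await docClient.send(new GetCommand({
    TableName: process.env.TABLE_NAME, // assumed to be set by the deployment template
    Key: { id: userId },
  }));
  if (!Item) {
    return { statusCode: 404, body: JSON.stringify({ error: 'User not found' }) };
  }
  return { statusCode: 200, body: JSON.stringify(Item) };
}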
Event Source Integration
// event-handlers.ts
// SQS event handler
import { SQSEvent, SQSBatchResponse } from 'aws-lambda';
export const sqsHandler = async (event: SQSEvent): Promise<SQSBatchResponse> => {
const batchItemFailures: { itemIdentifier: string }[] = [];
for (const record of event.Records) {
try {
const body = JSON.parse(record.body);
await processMessage(body);
} catch (error) {
console.error(`Failed to process message: ${record.messageId}`, error);
batchItemFailures.push({ itemIdentifier: record.messageId });
}
}
return { batchItemFailures };
};
// DynamoDB Streams handler
import { DynamoDBStreamEvent } from 'aws-lambda';
export const dynamoHandler = async (event: DynamoDBStreamEvent): Promise<void> => {
for (const record of event.Records) {
console.log('Event Type:', record.eventName);
console.log('DynamoDB Record:', JSON.stringify(record.dynamodb, null, 2));
switch (record.eventName) {
case 'INSERT':
await handleInsert(record.dynamodb?.NewImage);
break;
case 'MODIFY':
await handleModify(record.dynamodb?.OldImage, record.dynamodb?.NewImage);
break;
case 'REMOVE':
await handleRemove(record.dynamodb?.OldImage);
break;
}
}
};
// S3 event handler
import { S3Event } from 'aws-lambda';
export const s3Handler = async (event: S3Event): Promise<void> => {
for (const record of event.Records) {
const bucket = record.s3.bucket.name;
const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
console.log(`Processing: ${bucket}/${key}`);
    // Process the file
await processS3Object(bucket, key);
}
};
// Scheduled execution (EventBridge)
import { ScheduledEvent } from 'aws-lambda';
export const scheduledHandler = async (event: ScheduledEvent): Promise<void> => {
console.log('Scheduled event:', event);
  // Periodically executed tasks
await runDailyReport();
await cleanupExpiredData();
};
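One caveat for the SQS handler above: returning batchItemFailures only takes effect when ReportBatchItemFailures is enabled on the event source mapping. A hedged CDK sketch of that wiring (the helper name and batch size are illustrative):
// sqs-wiring.ts - enable partial batch responses on the SQS event source
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';
export function wireQueue(fn: lambda.Function, queue: sqs.Queue): void {
  fn.addEventSource(new SqsEventSource(queue, {
    batchSize: 10,
    reportBatchItemFailures: true, // without this, returning failures has no effect
  }));
}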
Deployment with SAM/CDK
# template.yaml (AWS SAM)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Globals:
Function:
Timeout: 30
Runtime: nodejs20.x
MemorySize: 256
Environment:
Variables:
TABLE_NAME: !Ref UsersTable
NODE_OPTIONS: --enable-source-maps
Resources:
ApiGateway:
Type: AWS::Serverless::Api
Properties:
StageName: prod
Cors:
AllowOrigin: "'*'"
AllowMethods: "'GET,POST,PUT,DELETE,OPTIONS'"
AllowHeaders: "'Content-Type,Authorization'"
UsersFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: dist/
Handler: handler.users
Events:
GetUsers:
Type: Api
Properties:
RestApiId: !Ref ApiGateway
Path: /users
Method: GET
CreateUser:
Type: Api
Properties:
RestApiId: !Ref ApiGateway
Path: /users
Method: POST
GetUser:
Type: Api
Properties:
RestApiId: !Ref ApiGateway
Path: /users/{id}
Method: GET
Policies:
- DynamoDBCrudPolicy:
TableName: !Ref UsersTable
UsersTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: users
BillingMode: PAY_PER_REQUEST
AttributeDefinitions:
- AttributeName: id
AttributeType: S
KeySchema:
- AttributeName: id
KeyType: HASH
Outputs:
ApiEndpoint:
Value: !Sub "https://${ApiGateway}.execute-api.${AWS::Region}.amazonaws.com/prod"
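Since the heading also mentions CDK, here is a rough CDK v2 equivalent of the same function, API, and table, written as a sketch rather than a drop-in replacement: construct IDs, the dist/ path, and the removal policy are assumptions to adjust for your project.
// stack.ts - approximate CDK v2 counterpart of the SAM template above
import { Stack, StackProps, Duration, RemovalPolicy } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
export class ServerlessApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // On-demand DynamoDB table, like BillingMode: PAY_PER_REQUEST in SAM
    const usersTable = new dynamodb.Table(this, 'UsersTable', {
      partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
      removalPolicy: RemovalPolicy.DESTROY, // prefer RETAIN in production
    });
    // Lambda bundled from dist/, exporting handler.users
    const usersFunction = new lambda.Function(this, 'UsersFunction', {
      runtime: lambda.Runtime.NODEJS_20_X,
      code: lambda.Code.fromAsset('dist'),
      handler: 'handler.users',
      memorySize: 256,
      timeout: Duration.seconds(30),
      environment: { TABLE_NAME: usersTable.tableName },
    });
    usersTable.grantReadWriteData(usersFunction);
    // REST API with Lambda proxy integration and permissive CORS
    const api = new apigateway.RestApi(this, 'ApiGateway', {
      defaultCorsPreflightOptions: {
        allowOrigins: apigateway.Cors.ALL_ORIGINS,
        allowMethods: apigateway.Cors.ALL_METHODS,
      },
    });
    const users = api.root.addResource('users');
    users.addMethod('GET', new apigateway.LambdaIntegration(usersFunction));
    users.addMethod('POST', new apigateway.LambdaIntegration(usersFunction));
    users.addResource('{id}').addMethod('GET', new apigateway.LambdaIntegration(usersFunction));
  }
}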
Cloudflare Workers
Running at the Edge
// worker.ts - Cloudflare Workers
export interface Env {
KV: KVNamespace;
DB: D1Database;
BUCKET: R2Bucket;
API_KEY: string;
}
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
const url = new URL(request.url);
    // Routing
if (url.pathname === '/api/users') {
return handleUsers(request, env);
}
if (url.pathname.startsWith('/api/cache')) {
return handleCache(request, env);
}
if (url.pathname.startsWith('/api/storage')) {
return handleStorage(request, env);
}
return new Response('Not Found', { status: 404 });
},
// Scheduled Worker (Cron Triggers)
async scheduled(event: ScheduledEvent, env: Env, ctx: ExecutionContext): Promise<void> {
console.log(`Cron triggered at ${event.scheduledTime}`);
await cleanupOldData(env);
},
};
// Using KV storage
async function handleCache(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
const key = url.searchParams.get('key');
if (!key) {
return new Response('Key required', { status: 400 });
}
if (request.method === 'GET') {
const value = await env.KV.get(key);
if (!value) {
return new Response('Not found', { status: 404 });
}
return new Response(value);
}
if (request.method === 'PUT') {
const value = await request.text();
await env.KV.put(key, value, {
      expirationTtl: 3600, // 1 hour
});
return new Response('OK');
}
return new Response('Method not allowed', { status: 405 });
}
// D1 Database (SQLite)
async function handleUsers(request: Request, env: Env): Promise<Response> {
if (request.method === 'GET') {
const { results } = await env.DB.prepare(
'SELECT * FROM users ORDER BY created_at DESC LIMIT 100'
).all();
return Response.json(results);
}
if (request.method === 'POST') {
const body = await request.json<{ name: string; email: string }>();
const result = await env.DB.prepare(
'INSERT INTO users (name, email) VALUES (?, ?) RETURNING *'
)
.bind(body.name, body.email)
.first();
return Response.json(result, { status: 201 });
}
return new Response('Method not allowed', { status: 405 });
}
// R2 Storage
async function handleStorage(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
const key = url.pathname.replace('/api/storage/', '');
if (request.method === 'GET') {
const object = await env.BUCKET.get(key);
if (!object) {
return new Response('Not found', { status: 404 });
}
return new Response(object.body, {
headers: {
'Content-Type': object.httpMetadata?.contentType || 'application/octet-stream',
'Cache-Control': 'public, max-age=31536000',
},
});
}
if (request.method === 'PUT') {
await env.BUCKET.put(key, request.body, {
httpMetadata: {
contentType: request.headers.get('Content-Type') || 'application/octet-stream',
},
});
return new Response('OK');
}
return new Response('Method not allowed', { status: 405 });
}
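Workers can also cache full HTTP responses at the edge with the Cache API, complementing KV, D1, and R2. A small optional sketch (withEdgeCache is a hypothetical helper, not part of the worker above):
// edge-cache.ts - wrap a GET handler with the Workers Cache API
export async function withEdgeCache(
  request: Request,
  ctx: ExecutionContext,
  handler: () => Promise<Response>
): Promise<Response> {
  if (request.method !== 'GET') return handler();
  const cache = caches.default;
  const cached = await cache.match(request);
  if (cached) return cached;
  const response = await handler();
  // Store a copy asynchronously so the client is not kept waiting
  ctx.waitUntil(cache.put(request, response.clone()));
  return response;
}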
wrangler.toml Configuration
# wrangler.toml
name = "my-worker"
main = "src/worker.ts"
compatibility_date = "2024-01-01"
[triggers]
crons = ["0 * * * *"] # Run every hour
[[kv_namespaces]]
binding = "KV"
id = "abc123"
[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "def456"
[[r2_buckets]]
binding = "BUCKET"
bucket_name = "my-bucket"
[vars]
ENVIRONMENT = "production"
[env.staging]
name = "my-worker-staging"
vars = { ENVIRONMENT = "staging" }
Vercel Functions
Edge Functions
// app/api/hello/route.ts (Next.js App Router)
import { NextRequest, NextResponse } from 'next/server';
export const runtime = 'edge'; // Run as an Edge Function
export async function GET(request: NextRequest) {
const { searchParams } = new URL(request.url);
const name = searchParams.get('name') || 'World';
return NextResponse.json({
message: `Hello, ${name}!`,
region: process.env.VERCEL_REGION,
timestamp: new Date().toISOString(),
});
}
export async function POST(request: NextRequest) {
const body = await request.json();
  // Processing inside the Edge Function
const result = processData(body);
return NextResponse.json(result);
}
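Edge Functions are a natural fit for request-level personalization close to the user. As an illustration, a sketch that reads the x-vercel-ip-country header Vercel adds to incoming requests (the route path is hypothetical):
// app/api/geo/route.ts - simple geo-aware response at the edge
import { NextRequest, NextResponse } from 'next/server';
export const runtime = 'edge';
export async function GET(request: NextRequest) {
  // Vercel populates this header with the caller's country code
  const country = request.headers.get('x-vercel-ip-country') ?? 'unknown';
  return NextResponse.json({ country, greeting: country === 'BR' ? 'Olá!' : 'Hello!' });
}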
Serverless Functions
// api/users/[id].ts (Pages Router)
import type { NextApiRequest, NextApiResponse } from 'next';
export default async function handler(
req: NextApiRequest,
res: NextApiResponse
) {
const { id } = req.query;
switch (req.method) {
case 'GET':
const user = await getUser(id as string);
if (!user) {
return res.status(404).json({ error: 'User not found' });
}
return res.json(user);
case 'PUT':
const updated = await updateUser(id as string, req.body);
return res.json(updated);
case 'DELETE':
await deleteUser(id as string);
return res.status(204).end();
default:
res.setHeader('Allow', ['GET', 'PUT', 'DELETE']);
return res.status(405).end();
}
}
// Vercel Blob Storage
import { put, del } from '@vercel/blob';
export async function uploadFile(file: File) {
const blob = await put(file.name, file, {
access: 'public',
});
return blob.url;
}
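As a usage example, a hypothetical App Router route that accepts a multipart upload and stores it via @vercel/blob; it assumes the Node.js runtime and a configured BLOB_READ_WRITE_TOKEN:
// app/api/upload/route.ts - store an uploaded file in Vercel Blob
import { NextRequest, NextResponse } from 'next/server';
import { put } from '@vercel/blob';
export async function POST(request: NextRequest) {
  const form = await request.formData();
  const file = form.get('file') as File | null;
  if (!file) {
    return NextResponse.json({ error: 'file is required' }, { status: 400 });
  }
  const blob = await put(file.name, file, { access: 'public' });
  return NextResponse.json({ url: blob.url }, { status: 201 });
}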
Design Patterns
Event-Driven Architecture
flowchart TB
EventSource["Event Source<br/>(API Gateway / S3 / DynamoDB Streams)"]
EventBus["Event Bus<br/>(EventBridge)"]
EventSource --> EventBus
EventBus --> OrderLambda["Lambda<br/>Order Process"]
EventBus --> EmailLambda["Lambda<br/>Email Notify"]
EventBus --> AnalyticsLambda["Lambda<br/>Analytics Process"]
OrderLambda --> DynamoDB["DynamoDB"]
EmailLambda --> SES["SES"]
AnalyticsLambda --> S3["S3"]
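The diagram assumes producers put domain events onto an EventBridge bus. A minimal sketch of that publishing side, assuming an EVENT_BUS_NAME environment variable and an app.orders source name (both illustrative):
// publish-event.ts - publish a domain event to EventBridge
import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';
const eventBridge = new EventBridgeClient({});
export async function publishOrderCreated(order: { id: string; total: number }): Promise<void> {
  await eventBridge.send(new PutEventsCommand({
    Entries: [{
      EventBusName: process.env.EVENT_BUS_NAME,
      Source: 'app.orders',
      DetailType: 'ORDER_CREATED',
      Detail: JSON.stringify(order),
    }],
  }));
}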
Fan-out Pattern
// SNS → SQS → Lambda (Fan-out)
// Publisher Lambda
import { SNSClient, PublishCommand } from '@aws-sdk/client-sns';
const sns = new SNSClient({});
export async function publishOrderEvent(order: Order) {
await sns.send(new PublishCommand({
TopicArn: process.env.ORDER_TOPIC_ARN,
Message: JSON.stringify({
eventType: 'ORDER_CREATED',
order,
timestamp: new Date().toISOString(),
}),
MessageAttributes: {
eventType: {
DataType: 'String',
StringValue: 'ORDER_CREATED',
},
},
}));
}
// Consumer Lambda (receives from SQS)
import { SQSEvent } from 'aws-lambda';
export async function processOrderNotification(event: SQSEvent) {
for (const record of event.Records) {
const snsMessage = JSON.parse(record.body);
const orderEvent = JSON.parse(snsMessage.Message);
    // Send the confirmation email
await sendOrderConfirmationEmail(orderEvent.order);
}
}
export async function processOrderAnalytics(event: SQSEvent) {
for (const record of event.Records) {
const snsMessage = JSON.parse(record.body);
const orderEvent = JSON.parse(snsMessage.Message);
    // Save analytics data
await saveAnalyticsData(orderEvent);
}
}
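The consumers above assume each SQS queue is subscribed to the SNS topic. A hedged CDK sketch of that fan-out wiring, with construct IDs and the filter policy chosen for illustration:
// fanout-infra.ts - SNS topic fanned out to two SQS queues feeding Lambdas
import { Construct } from 'constructs';
import * as sns from 'aws-cdk-lib/aws-sns';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import * as subs from 'aws-cdk-lib/aws-sns-subscriptions';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';
export function wireFanout(
  scope: Construct,
  emailFn: lambda.Function,
  analyticsFn: lambda.Function
): sns.Topic {
  const orderTopic = new sns.Topic(scope, 'OrderTopic');
  const emailQueue = new sqs.Queue(scope, 'EmailQueue');
  const analyticsQueue = new sqs.Queue(scope, 'AnalyticsQueue');
  // Each queue receives its own copy of every ORDER_CREATED message
  const filterPolicy = {
    eventType: sns.SubscriptionFilter.stringFilter({ allowlist: ['ORDER_CREATED'] }),
  };
  orderTopic.addSubscription(new subs.SqsSubscription(emailQueue, { filterPolicy }));
  orderTopic.addSubscription(new subs.SqsSubscription(analyticsQueue, { filterPolicy }));
  emailFn.addEventSource(new SqsEventSource(emailQueue));
  analyticsFn.addEventSource(new SqsEventSource(analyticsQueue));
  return orderTopic;
}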
Saga Pattern
// Step Functions Saga Pattern
// State machine definition (ASL)
const orderSagaDefinition = {
Comment: 'Order Processing Saga',
StartAt: 'ReserveInventory',
States: {
ReserveInventory: {
Type: 'Task',
Resource: '${ReserveInventoryFunctionArn}',
Next: 'ProcessPayment',
Catch: [{
ErrorEquals: ['States.ALL'],
Next: 'InventoryReservationFailed',
}],
},
ProcessPayment: {
Type: 'Task',
Resource: '${ProcessPaymentFunctionArn}',
Next: 'ShipOrder',
Catch: [{
ErrorEquals: ['States.ALL'],
Next: 'PaymentFailed',
}],
},
ShipOrder: {
Type: 'Task',
Resource: '${ShipOrderFunctionArn}',
End: true,
Catch: [{
ErrorEquals: ['States.ALL'],
Next: 'ShippingFailed',
}],
},
    // Compensating transactions
InventoryReservationFailed: {
Type: 'Fail',
Cause: 'Failed to reserve inventory',
},
PaymentFailed: {
Type: 'Task',
Resource: '${ReleaseInventoryFunctionArn}',
Next: 'PaymentFailedState',
},
PaymentFailedState: {
Type: 'Fail',
Cause: 'Payment processing failed',
},
ShippingFailed: {
Type: 'Task',
Resource: '${RefundPaymentFunctionArn}',
Next: 'ReleaseInventoryAfterShipFail',
},
ReleaseInventoryAfterShipFail: {
Type: 'Task',
Resource: '${ReleaseInventoryFunctionArn}',
Next: 'ShippingFailedState',
},
ShippingFailedState: {
Type: 'Fail',
Cause: 'Shipping failed',
},
},
};
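Each state references a task Lambda, and throwing from a task is what routes execution into the corresponding Catch branch. A hypothetical ProcessPayment handler, with chargeCustomer standing in for the real payment provider call:
// process-payment.ts - illustrative Step Functions task for the saga above
interface OrderState {
  orderId: string;
  amount: number;
  paymentId?: string;
}
// Placeholder for the actual payment provider integration
declare function chargeCustomer(
  orderId: string,
  amount: number
): Promise<{ success: boolean; id: string }>;
export const handler = async (state: OrderState): Promise<OrderState> => {
  const payment = await chargeCustomer(state.orderId, state.amount);
  if (!payment.success) {
    // Any thrown error matches ErrorEquals: ['States.ALL'] and triggers compensation
    throw new Error(`Payment declined for order ${state.orderId}`);
  }
  return { ...state, paymentId: payment.id };
};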
Cold Start Strategies
Provisioned Concurrency
# template.yaml
Resources:
MyFunction:
Type: AWS::Serverless::Function
Properties:
Handler: handler.main
ProvisionedConcurrencyConfig:
ProvisionedConcurrentExecutions: 5
AutoPublishAlias: live
Connection Pool Optimization
// db.ts - initialize outside the Lambda handler
import { Pool } from 'pg';
// Create the pool at module scope (reused across warm invocations)
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
  max: 1, // a single connection per Lambda container is recommended
idleTimeoutMillis: 120000,
connectionTimeoutMillis: 10000,
});
export async function query<T>(sql: string, params?: any[]): Promise<T[]> {
const client = await pool.connect();
try {
const result = await client.query(sql, params);
return result.rows;
} finally {
client.release();
}
}
// handler.ts
import { query } from './db';
export const handler = async (event: any) => {
  // The connection pool is reused on warm starts
const users = await query<User>('SELECT * FROM users');
return {
statusCode: 200,
body: JSON.stringify(users),
};
};
Summary
When designed appropriately, serverless architecture can deliver high scalability and cost efficiency.
Selection Guide
| Use Case | Recommended Service |
|---|---|
| API backend | Lambda + API Gateway |
| Low global latency | Cloudflare Workers |
| Web applications | Vercel / Next.js |
| Long-running processing | Fargate / Cloud Run |
| Event processing | Lambda + EventBridge |
Best Practices
- Separate function responsibilities: single-responsibility principle
- Cold start strategies: optimization plus Provisioned Concurrency
- Asynchronous processing: loose coupling through queues
- Monitoring: use CloudWatch, Datadog, or similar (see the sketch below)
- Cost management: tune execution time and memory
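To make the monitoring point concrete, a minimal structured-logging sketch; in practice a library such as AWS Lambda Powertools covers this and more:
// logger.ts - one JSON object per log line, easy to query in CloudWatch Logs Insights
type LogLevel = 'INFO' | 'WARN' | 'ERROR';
export function log(level: LogLevel, message: string, extra: Record<string, unknown> = {}): void {
  console.log(JSON.stringify({
    level,
    message,
    timestamp: new Date().toISOString(),
    ...extra,
  }));
}
// Usage inside a handler:
// log('INFO', 'order processed', { orderId: order.id, durationMs });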
Serverless is not a universal solution, but it offers significant benefits for the right use cases.
Reference Links
- AWS Lambda Documentation
- Cloudflare Workers Documentation
- Vercel Serverless Functions
- Serverless Framework