Serverless architecture is a modern approach to building scalable applications while reducing the infrastructure management burden. This article covers serverless design patterns and implementation best practices.
Basic Concepts of Serverless
Architecture Comparison
Traditional Server Architecture:
flowchart TB
LB["Load Balancer"]
LB --> S1["Server (EC2)<br/>24/7"]
LB --> S2["Server (EC2)<br/>24/7"]
LB --> S3["Server (EC2)<br/>24/7"]
S1 --> DB["DB"]
S2 --> DB
S3 --> DB
Serverless Architecture:
flowchart TB
AG["API Gateway"]
AG --> L1["Lambda Function<br/>(on-demand)"]
AG --> L2["Lambda Function<br/>(on-demand)"]
AG --> L3["Lambda Function<br/>(on-demand)"]
L1 --> DDB["DynamoDB / Aurora<br/>(Serverless)"]
L2 --> DDB
L3 --> DDB
FaaS/BaaS Classification
| Category | Example Services | Features |
|---|---|---|
| FaaS | Lambda, Cloud Functions | Event-driven function execution |
| Edge Functions | Cloudflare Workers, Vercel Edge | Execution at CDN edge |
| Container-based | AWS Fargate, Cloud Run | Container-based execution |
| BaaS | Firebase, Supabase | Provides backend functionality |
Platform Comparison
Major Service Characteristics
| Feature | Lambda | Workers | Vercel | Cloud Run |
|---|---|---|---|---|
| Runtime | Node/Py | V8 | Node/Edge | Container |
| Cold start | 100ms-1s | <5ms | <50ms | 1-5s |
| Memory limit | 10GB | 128MB | 1GB | 32GB |
| Execution limit | 15 min | 30s/unltd | 10s/300s | 60 min |
| Location | Regional | Global | Global | Regional |
| Pricing | Duration | Requests | Duration | vCPU-sec |
AWS Lambda
Basic Lambda Function
// handler.ts - AWS Lambda Handler
import { APIGatewayProxyEvent, APIGatewayProxyResult, Context } from 'aws-lambda';
// Basic handler
export const hello = async (
event: APIGatewayProxyEvent,
context: Context
): Promise<APIGatewayProxyResult> => {
console.log('Event:', JSON.stringify(event, null, 2));
return {
statusCode: 200,
headers: {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
},
body: JSON.stringify({
message: 'Hello from Lambda!',
requestId: context.awsRequestId,
}),
};
};
// RESTful API handler
export const users = async (
event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
const method = event.httpMethod;
const userId = event.pathParameters?.id;
try {
switch (method) {
case 'GET':
if (userId) {
return await getUser(userId);
}
return await listUsers(event.queryStringParameters);
case 'POST':
const body = JSON.parse(event.body || '{}');
return await createUser(body);
case 'PUT':
if (!userId) return badRequest('User ID required');
return await updateUser(userId, JSON.parse(event.body || '{}'));
case 'DELETE':
if (!userId) return badRequest('User ID required');
return await deleteUser(userId);
default:
return {
statusCode: 405,
body: JSON.stringify({ error: 'Method not allowed' }),
};
}
} catch (error) {
console.error('Error:', error);
return {
statusCode: 500,
body: JSON.stringify({ error: 'Internal server error' }),
};
}
};
// Helper function
function badRequest(message: string): APIGatewayProxyResult {
return {
statusCode: 400,
body: JSON.stringify({ error: message }),
};
}
Event Source Integration
// event-handlers.ts
// SQS event handler
import { SQSEvent, SQSBatchResponse } from 'aws-lambda';
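// Note: partial batch responses require ReportBatchItemFailures to be enabled on the
// SQS event source mapping; without it, the returned batchItemFailures list is ignored.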
export const sqsHandler = async (event: SQSEvent): Promise<SQSBatchResponse> => {
const batchItemFailures: { itemIdentifier: string }[] = [];
for (const record of event.Records) {
try {
const body = JSON.parse(record.body);
await processMessage(body);
} catch (error) {
console.error(`Failed to process message: ${record.messageId}`, error);
batchItemFailures.push({ itemIdentifier: record.messageId });
}
}
return { batchItemFailures };
};
// DynamoDB Streams handler
import { DynamoDBStreamEvent } from 'aws-lambda';
export const dynamoHandler = async (event: DynamoDBStreamEvent): Promise<void> => {
for (const record of event.Records) {
console.log('Event Type:', record.eventName);
console.log('DynamoDB Record:', JSON.stringify(record.dynamodb, null, 2));
switch (record.eventName) {
case 'INSERT':
await handleInsert(record.dynamodb?.NewImage);
break;
case 'MODIFY':
await handleModify(record.dynamodb?.OldImage, record.dynamodb?.NewImage);
break;
case 'REMOVE':
await handleRemove(record.dynamodb?.OldImage);
break;
}
}
};
// S3 event handler
import { S3Event } from 'aws-lambda';
export const s3Handler = async (event: S3Event): Promise<void> => {
for (const record of event.Records) {
const bucket = record.s3.bucket.name;
const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
console.log(`Processing: ${bucket}/${key}`);
// File processing
await processS3Object(bucket, key);
}
};
// Scheduled execution (EventBridge)
import { ScheduledEvent } from 'aws-lambda';
export const scheduledHandler = async (event: ScheduledEvent): Promise<void> => {
console.log('Scheduled event:', event);
// Scheduled tasks
await runDailyReport();
await cleanupExpiredData();
};
Deployment with SAM/CDK
# template.yaml (AWS SAM)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Globals:
Function:
Timeout: 30
Runtime: nodejs20.x
MemorySize: 256
Environment:
Variables:
TABLE_NAME: !Ref UsersTable
NODE_OPTIONS: --enable-source-maps
Resources:
ApiGateway:
Type: AWS::Serverless::Api
Properties:
StageName: prod
Cors:
AllowOrigin: "'*'"
AllowMethods: "'GET,POST,PUT,DELETE,OPTIONS'"
AllowHeaders: "'Content-Type,Authorization'"
UsersFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: dist/
Handler: handler.users
Events:
GetUsers:
Type: Api
Properties:
RestApiId: !Ref ApiGateway
Path: /users
Method: GET
CreateUser:
Type: Api
Properties:
RestApiId: !Ref ApiGateway
Path: /users
Method: POST
GetUser:
Type: Api
Properties:
RestApiId: !Ref ApiGateway
Path: /users/{id}
Method: GET
Policies:
- DynamoDBCrudPolicy:
TableName: !Ref UsersTable
UsersTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: users
BillingMode: PAY_PER_REQUEST
AttributeDefinitions:
- AttributeName: id
AttributeType: S
KeySchema:
- AttributeName: id
KeyType: HASH
Outputs:
ApiEndpoint:
Value: !Sub "https://${ApiGateway}.execute-api.${AWS::Region}.amazonaws.com/prod"
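The template above covers the SAM half of the heading. As a rough CDK equivalent, here is a minimal sketch assuming AWS CDK v2 (`aws-cdk-lib`) with the `NodejsFunction` construct; the stack name and entry path are placeholders rather than part of the original setup.
// cdk-stack.ts - approximate CDK v2 equivalent of the SAM template (sketch)
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import { Runtime } from 'aws-cdk-lib/aws-lambda';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';

export class UsersStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // DynamoDB table with on-demand billing, matching UsersTable in the SAM template
    const table = new dynamodb.Table(this, 'UsersTable', {
      partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    // Bundled Node.js 20 function, matching the Globals in the SAM template
    const usersFn = new NodejsFunction(this, 'UsersFunction', {
      entry: 'src/handler.ts', // placeholder path
      handler: 'users',
      runtime: Runtime.NODEJS_20_X,
      memorySize: 256,
      timeout: cdk.Duration.seconds(30),
      environment: { TABLE_NAME: table.tableName },
    });
    table.grantReadWriteData(usersFn); // same scope as DynamoDBCrudPolicy

    // REST API that proxies all routes to the function (coarser than the per-route SAM events)
    new apigateway.LambdaRestApi(this, 'UsersApi', { handler: usersFn });
  }
}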
Cloudflare Workers
Execution at the Edge
// worker.ts - Cloudflare Workers
export interface Env {
KV: KVNamespace;
DB: D1Database;
BUCKET: R2Bucket;
API_KEY: string;
}
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
const url = new URL(request.url);
// Routing
if (url.pathname === '/api/users') {
return handleUsers(request, env);
}
if (url.pathname.startsWith('/api/cache')) {
return handleCache(request, env);
}
if (url.pathname.startsWith('/api/storage')) {
return handleStorage(request, env);
}
return new Response('Not Found', { status: 404 });
},
// Scheduled Worker (Cron Triggers)
async scheduled(event: ScheduledEvent, env: Env, ctx: ExecutionContext): Promise<void> {
console.log(`Cron triggered at ${event.scheduledTime}`);
await cleanupOldData(env);
},
};
// Using KV Storage
async function handleCache(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
const key = url.searchParams.get('key');
if (!key) {
return new Response('Key required', { status: 400 });
}
if (request.method === 'GET') {
const value = await env.KV.get(key);
if (!value) {
return new Response('Not found', { status: 404 });
}
return new Response(value);
}
if (request.method === 'PUT') {
const value = await request.text();
await env.KV.put(key, value, {
expirationTtl: 3600, // 1 hour
});
return new Response('OK');
}
return new Response('Method not allowed', { status: 405 });
}
// D1 Database (SQLite)
async function handleUsers(request: Request, env: Env): Promise<Response> {
if (request.method === 'GET') {
const { results } = await env.DB.prepare(
'SELECT * FROM users ORDER BY created_at DESC LIMIT 100'
).all();
return Response.json(results);
}
if (request.method === 'POST') {
const body = await request.json<{ name: string; email: string }>();
const result = await env.DB.prepare(
'INSERT INTO users (name, email) VALUES (?, ?) RETURNING *'
)
.bind(body.name, body.email)
.first();
return Response.json(result, { status: 201 });
}
return new Response('Method not allowed', { status: 405 });
}
// R2 Storage
async function handleStorage(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
const key = url.pathname.replace('/api/storage/', '');
if (request.method === 'GET') {
const object = await env.BUCKET.get(key);
if (!object) {
return new Response('Not found', { status: 404 });
}
return new Response(object.body, {
headers: {
'Content-Type': object.httpMetadata?.contentType || 'application/octet-stream',
'Cache-Control': 'public, max-age=31536000',
},
});
}
if (request.method === 'PUT') {
await env.BUCKET.put(key, request.body, {
httpMetadata: {
contentType: request.headers.get('Content-Type') || 'application/octet-stream',
},
});
return new Response('OK');
}
return new Response('Method not allowed', { status: 405 });
}
wrangler.toml Configuration
# wrangler.toml
name = "my-worker"
main = "src/worker.ts"
compatibility_date = "2024-01-01"
[triggers]
crons = ["0 * * * *"] # Hourly execution
[[kv_namespaces]]
binding = "KV"
id = "abc123"
[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "def456"
[[r2_buckets]]
binding = "BUCKET"
bucket_name = "my-bucket"
[vars]
ENVIRONMENT = "production"
[env.staging]
name = "my-worker-staging"
vars = { ENVIRONMENT = "staging" }
Vercel Functions
Edge Functions
// app/api/hello/route.ts (Next.js App Router)
import { NextRequest, NextResponse } from 'next/server';
export const runtime = 'edge'; // Execute as Edge Function
export async function GET(request: NextRequest) {
const { searchParams } = new URL(request.url);
const name = searchParams.get('name') || 'World';
return NextResponse.json({
message: `Hello, ${name}!`,
region: process.env.VERCEL_REGION,
timestamp: new Date().toISOString(),
});
}
export async function POST(request: NextRequest) {
const body = await request.json();
// Processing in Edge Function
const result = processData(body);
return NextResponse.json(result);
}
Serverless Functions
// pages/api/users/[id].ts (Next.js Pages Router)
import type { NextApiRequest, NextApiResponse } from 'next';
export default async function handler(
req: NextApiRequest,
res: NextApiResponse
) {
const { id } = req.query;
switch (req.method) {
case 'GET':
const user = await getUser(id as string);
if (!user) {
return res.status(404).json({ error: 'User not found' });
}
return res.json(user);
case 'PUT':
const updated = await updateUser(id as string, req.body);
return res.json(updated);
case 'DELETE':
await deleteUser(id as string);
return res.status(204).end();
default:
res.setHeader('Allow', ['GET', 'PUT', 'DELETE']);
return res.status(405).end();
}
}
// Vercel Blob Storage
import { put } from '@vercel/blob';
export async function uploadFile(file: File) {
const blob = await put(file.name, file, {
access: 'public',
});
return blob.url;
}
Design Patterns
Event-Driven Architecture
flowchart TB
ES["Event Source<br/>(API Gateway / S3 / DynamoDB Streams)"]
EB["Event Bus<br/>(EventBridge)"]
ES --> EB
EB --> L1["Lambda<br/>Order Process"]
EB --> L2["Lambda<br/>Email Notify"]
EB --> L3["Lambda<br/>Analytics Process"]
L1 --> DDB["DynamoDB"]
L2 --> SES["SES"]
L3 --> S3["S3"]
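The diagram routes events through EventBridge. A minimal publisher sketch using the AWS SDK v3 follows; the bus name, source, and detail-type values are illustrative assumptions:
// eventbridge-publisher.ts - put a domain event on a custom bus (sketch)
import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

const eventBridge = new EventBridgeClient({});

export async function publishOrderCreated(order: { id: string; total: number }) {
  // Rules on the bus (order processing, email notify, analytics) match on Source / DetailType
  await eventBridge.send(new PutEventsCommand({
    Entries: [
      {
        EventBusName: process.env.EVENT_BUS_NAME, // assumed custom bus
        Source: 'app.orders',
        DetailType: 'OrderCreated',
        Detail: JSON.stringify(order),
      },
    ],
  }));
}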
Fan-out Pattern
// SNS → SQS → Lambda (Fan-out)
// Publisher Lambda
import { SNSClient, PublishCommand } from '@aws-sdk/client-sns';
const sns = new SNSClient({});
export async function publishOrderEvent(order: Order) {
await sns.send(new PublishCommand({
TopicArn: process.env.ORDER_TOPIC_ARN,
Message: JSON.stringify({
eventType: 'ORDER_CREATED',
order,
timestamp: new Date().toISOString(),
}),
MessageAttributes: {
eventType: {
DataType: 'String',
StringValue: 'ORDER_CREATED',
},
},
}));
}
// Consumer Lambda (receiving from SQS)
import { SQSEvent } from 'aws-lambda';
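// Note: the double JSON.parse below assumes raw message delivery is disabled on the
// SNS -> SQS subscription, so each SQS body carries the SNS envelope with the payload in Message.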
export async function processOrderNotification(event: SQSEvent) {
for (const record of event.Records) {
const snsMessage = JSON.parse(record.body);
const orderEvent = JSON.parse(snsMessage.Message);
// Send email
await sendOrderConfirmationEmail(orderEvent.order);
}
}
export async function processOrderAnalytics(event: SQSEvent) {
for (const record of event.Records) {
const snsMessage = JSON.parse(record.body);
const orderEvent = JSON.parse(snsMessage.Message);
// Save analytics data
await saveAnalyticsData(orderEvent);
}
}
Saga Pattern
// Step Functions Saga Pattern
// State Machine definition (ASL)
const orderSagaDefinition = {
Comment: 'Order Processing Saga',
StartAt: 'ReserveInventory',
States: {
ReserveInventory: {
Type: 'Task',
Resource: '${ReserveInventoryFunctionArn}',
Next: 'ProcessPayment',
Catch: [{
ErrorEquals: ['States.ALL'],
Next: 'InventoryReservationFailed',
}],
},
ProcessPayment: {
Type: 'Task',
Resource: '${ProcessPaymentFunctionArn}',
Next: 'ShipOrder',
Catch: [{
ErrorEquals: ['States.ALL'],
Next: 'PaymentFailed',
}],
},
ShipOrder: {
Type: 'Task',
Resource: '${ShipOrderFunctionArn}',
End: true,
Catch: [{
ErrorEquals: ['States.ALL'],
Next: 'ShippingFailed',
}],
},
// Compensating transactions
InventoryReservationFailed: {
Type: 'Fail',
Cause: 'Failed to reserve inventory',
},
PaymentFailed: {
Type: 'Task',
Resource: '${ReleaseInventoryFunctionArn}',
Next: 'PaymentFailedState',
},
PaymentFailedState: {
Type: 'Fail',
Cause: 'Payment processing failed',
},
ShippingFailed: {
Type: 'Task',
Resource: '${RefundPaymentFunctionArn}',
Next: 'ReleaseInventoryAfterShipFail',
},
ReleaseInventoryAfterShipFail: {
Type: 'Task',
Resource: '${ReleaseInventoryFunctionArn}',
Next: 'ShippingFailedState',
},
ShippingFailedState: {
Type: 'Fail',
Cause: 'Shipping failed',
},
},
};
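The compensating states above point at ordinary Lambda functions. Here is a minimal sketch of a release-inventory handler, assuming an inventory table keyed by productId with a reserved counter; the table name, key schema, and input shape are illustrative, not taken from the state machine.
// release-inventory.ts - compensating transaction handler (sketch)
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, UpdateCommand } from '@aws-sdk/lib-dynamodb';

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));

interface ReleaseInventoryInput {
  orderId: string;
  items: { productId: string; quantity: number }[];
}

export const handler = async (input: ReleaseInventoryInput) => {
  // Undo the reservation made by ReserveInventory for each line item
  for (const item of input.items) {
    await doc.send(new UpdateCommand({
      TableName: process.env.INVENTORY_TABLE, // assumption: set via environment
      Key: { productId: item.productId },
      UpdateExpression: 'ADD reserved :minus',
      ExpressionAttributeValues: { ':minus': -item.quantity },
    }));
  }
  return { orderId: input.orderId, released: true };
};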
Cold Start Mitigation
Provisioned Concurrency
# template.yaml
Resources:
MyFunction:
Type: AWS::Serverless::Function
Properties:
Handler: handler.main
ProvisionedConcurrencyConfig:
ProvisionedConcurrentExecutions: 5
AutoPublishAlias: live
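Beyond Provisioned Concurrency, trimming the cold-start path itself helps. A sketch of lazily creating a heavy SDK client only on the code paths that need it; the S3 client here is just a stand-in for any heavy dependency:
// lazy-client.ts - keep heavy initialization off the cold-start path (sketch)
import type { S3Client } from '@aws-sdk/client-s3';

let s3: S3Client | undefined;

// Created on first use and reused by warm invocations of the same execution environment
export async function getS3Client(): Promise<S3Client> {
  if (!s3) {
    // Dynamic import can also keep the module out of the bundle evaluated at cold start
    const { S3Client } = await import('@aws-sdk/client-s3');
    s3 = new S3Client({});
  }
  return s3;
}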
Connection Pool Optimization
// db.ts - Initialize outside Lambda
import { Pool } from 'pg';
// Create the pool in module scope so it is reused across warm invocations
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 1, // one connection per execution environment; each environment handles a single request at a time
idleTimeoutMillis: 120000,
connectionTimeoutMillis: 10000,
});
export async function query<T>(sql: string, params?: any[]): Promise<T[]> {
const client = await pool.connect();
try {
const result = await client.query(sql, params);
return result.rows;
} finally {
client.release();
}
}
// handler.ts
import { query } from './db';
export const handler = async (event: any) => {
// Connection pool is reused
const users = await query<User>('SELECT * FROM users');
return {
statusCode: 200,
body: JSON.stringify(users),
};
};
Summary
Serverless architecture can achieve high scalability and cost efficiency when designed properly.
Selection Guidelines
| Use Case | Recommended Service |
|---|---|
| API backend | Lambda + API Gateway |
| Global low latency | Cloudflare Workers |
| Web app | Vercel / Next.js |
| Long-running processes | Fargate / Cloud Run |
| Event processing | Lambda + EventBridge |
Best Practices
- Separate function responsibilities: follow the single responsibility principle
- Cold start mitigation: keep deployment packages lightweight and use Provisioned Concurrency where needed
- Asynchronous processing: decouple components via queues and event buses
- Monitoring: use CloudWatch, Datadog, or similar tools (see the logging sketch below)
- Cost management: tune execution time and memory allocation
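For the monitoring point, emitting one JSON object per log line makes CloudWatch Logs Insights or Datadog queries much simpler. A minimal sketch; the field names are a convention, not a required schema:
// logger.ts - structured JSON logging (sketch)
export function logEvent(
  level: 'info' | 'warn' | 'error',
  message: string,
  data: Record<string, unknown> = {}
) {
  // One JSON object per line is easy to filter and aggregate downstream
  console.log(JSON.stringify({
    level,
    message,
    timestamp: new Date().toISOString(),
    ...data,
  }));
}

// Usage inside a handler:
// logEvent('info', 'order processed', { orderId: '123', durationMs: 42 });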
Serverless is not a silver bullet, but it can be highly effective for appropriate use cases.
Reference Links
- AWS Lambda Documentation
- Cloudflare Workers Documentation
- Vercel Serverless Functions
- Serverless Framework