Complete performance testing suite for comparing AWS Lambda, Azure Functions, and Google Cloud Functions.
## Table of Contents

- Overview
- Quick Start
- Test Scenarios
- Installation
- Configuration
- Running Tests
- Analyzing Results
- Understanding Metrics
- Troubleshooting
## Overview

This performance testing suite provides:

- **6 Test Scenarios:** Cold Start, Sustained Load, Spike, Stress, Concurrency, Endurance
- **3 Testing Tools:** K6, JMeter, Gatling
- **Automated Analysis:** Python-based result analysis and comparison
- **Comprehensive Reports:** Markdown, HTML, and text reports
- **Platform Comparison:** Side-by-side AWS vs Azure vs GCP metrics
## Quick Start

### 1. Install Tools

```bash
# Install K6 (Ubuntu/Debian)
sudo gpg -k
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg \
--keyserver hkp://keyserver.ubuntu.com:80 \
--recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" | \
sudo tee /etc/apt/sources.list.d/k6.list
sudo apt-get update
sudo apt-get install k6
# Install Python dependencies (optional, for analysis)
pip install pandas matplotlib seaborn
```

### 2. Set Function URLs

```bash
export AWS_FUNCTION_URL="https://your-aws-lambda-url"
export AZURE_FUNCTION_URL="https://your-app.azurewebsites.net/api/AzureHttpTrigger"
export GCP_FUNCTION_URL="https://region-project.cloudfunctions.net/your-function"
```

### 3. Run Tests

```bash
# Run all tests (takes 2-3 hours)
./run-performance-tests.sh
# Run specific test
TEST_TYPE=sustained ./run-performance-tests.sh
# Quick test (10 minutes)
TEST_DURATION=10 TEST_VUS=50 ./run-performance-tests.sh
```

## Test Scenarios

### 1. Cold Start Test

- **Purpose:** Measure function initialization time
- **Duration:** 30 minutes
- **Pattern:** Burst → 15-minute wait → Burst
- **Metrics:** Cold start latency per platform
```bash
TEST_TYPE=cold_start ./run-performance-tests.sh
```
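The burst → wait → burst shape maps naturally onto k6 `scenarios`. Below is a minimal sketch of the idea, not the bundled `k6-multicloud-test.js`; the scenario names, VU counts, and the single AWS URL are illustrative assumptions:

```javascript
// Two bursts separated by an idle gap so instances can scale back down.
import http from 'k6/http';

export const options = {
  scenarios: {
    first_burst: {
      executor: 'per-vu-iterations',
      vus: 50,
      iterations: 1,      // one request per VU to hit cold instances
    },
    second_burst: {
      executor: 'per-vu-iterations',
      vus: 50,
      iterations: 1,
      startTime: '15m',   // the wait lets the platforms spin instances down
    },
  },
};

export default function () {
  // Hit each platform the same way; only AWS is shown here.
  http.get(`${__ENV.AWS_FUNCTION_URL}?name=coldstart`);
}
```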
### 2. Sustained Load Test

- **Purpose:** Test consistent performance under steady load
- **Duration:** 60 minutes (configurable)
- **Pattern:** Ramp up → Steady → Ramp down
- **Metrics:** Response time, throughput, error rate
```bash
TEST_TYPE=sustained TEST_DURATION=60 ./run-performance-tests.sh
```
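In k6 terms this is a classic `stages` ramp. A minimal sketch with assumed durations and VU targets; the bundled script may use different values:

```javascript
// Ramp up → hold steady → ramp down over 60 minutes (illustrative values).
import http from 'k6/http';

export const options = {
  stages: [
    { duration: '5m', target: 100 },   // ramp up to 100 virtual users
    { duration: '50m', target: 100 },  // hold steady load
    { duration: '5m', target: 0 },     // ramp down
  ],
};

export default function () {
  http.get(`${__ENV.AWS_FUNCTION_URL}?name=sustained`);
}
```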
### 3. Spike Test

- **Purpose:** Test behavior under sudden traffic spikes
- **Duration:** 20 minutes
- **Pattern:** Baseline → Spike → Peak → Drop → Recovery
- **Metrics:** Response time during spike, recovery time
```bash
TEST_TYPE=spike ./run-performance-tests.sh
```
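The same `stages` mechanism expresses the spike shape; again a sketch with assumed values rather than the bundled configuration:

```javascript
// Baseline → sudden spike → peak → drop → recovery over 20 minutes (illustrative values).
import http from 'k6/http';

export const options = {
  stages: [
    { duration: '5m', target: 20 },    // baseline
    { duration: '1m', target: 500 },   // sudden spike
    { duration: '5m', target: 500 },   // hold at peak
    { duration: '1m', target: 20 },    // drop back
    { duration: '8m', target: 20 },    // watch recovery
  ],
};

export default function () {
  http.get(`${__ENV.AWS_FUNCTION_URL}?name=spike`);
}
```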
### 4. Stress Test

- **Purpose:** Find breaking point and resource limits
- **Duration:** 30 minutes
- **Pattern:** Gradual increase until failure
- **Metrics:** Maximum throughput, breaking point
```bash
TEST_TYPE=stress ./run-performance-tests.sh
```
### 5. Concurrency Test

- **Purpose:** Test maximum concurrent executions
- **Duration:** 15 minutes
- **Pattern:** 1000 concurrent requests, repeated
- **Metrics:** Concurrent execution handling
```bash
TEST_TYPE=concurrency ./run-performance-tests.sh
```
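One way to express this in k6 is the `constant-vus` executor, which keeps a fixed number of virtual users looping requests. A sketch under that assumption; the bundled script may drive concurrency differently:

```javascript
// 1000 virtual users issuing requests concurrently for the whole window (illustrative values).
import http from 'k6/http';

export const options = {
  scenarios: {
    concurrency: {
      executor: 'constant-vus',
      vus: 1000,
      duration: '15m',
    },
  },
};

export default function () {
  http.get(`${__ENV.AWS_FUNCTION_URL}?name=concurrency`);
}
```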
### 6. Endurance Test

- **Purpose:** Check for memory leaks and degradation
- **Duration:** 4-8 hours
- **Pattern:** Moderate constant load
- **Metrics:** Performance over time, memory usage
```bash
TEST_TYPE=endurance TEST_DURATION=240 ./run-performance-tests.sh
```

## Installation

### K6

**macOS:**

```bash
brew install k6
```

**Windows:**

```bash
choco install k6
```

**Linux (Ubuntu/Debian):** See the Quick Start section above.

### JMeter

```bash
wget https://dlcdn.apache.org//jmeter/binaries/apache-jmeter-5.6.3.tgz
tar -xzf apache-jmeter-5.6.3.tgz
export PATH=$PATH:$(pwd)/apache-jmeter-5.6.3/bin
```

### Gatling

```bash
wget https://repo1.maven.org/maven2/io/gatling/highcharts/gatling-charts-highcharts-bundle/3.10.3/gatling-charts-highcharts-bundle-3.10.3.zip
unzip gatling-charts-highcharts-bundle-3.10.3.zip
```

### Python Dependencies (optional, for analysis)

```bash
pip install pandas matplotlib seaborn numpy
```

## Configuration

### Environment Variables

```bash
# Required
export AWS_FUNCTION_URL="..."
export AZURE_FUNCTION_URL="..."
export GCP_FUNCTION_URL="..."
# Optional
export TEST_DURATION=60 # Test duration in minutes
export TEST_VUS=100 # Number of virtual users
export TEST_TYPE=all     # Test type to run
```

Edit `run-performance-tests.sh` to customize the defaults:

```bash
TEST_DURATION="${TEST_DURATION:-60}"
TEST_VUS="${TEST_VUS:-100}"
TEST_TYPE="${TEST_TYPE:-all}"
```

Modify `k6-multicloud-test.js` for advanced configuration:

```javascript
export const options = {
thresholds: {
http_req_duration: ['p(95)<500', 'p(99)<1000'],
http_req_failed: ['rate<0.01'],
},
// ... more options
};
```

## Running Tests

### Full Test Suite

```bash
./run-performance-tests.sh
```

This runs:
- Cold Start Test (30 min)
- Sustained Load Test (60 min)
- Spike Test (20 min)
- Stress Test (30 min)
- Concurrency Test (15 min)
- Analysis & Report Generation

**Total Time:** ~2.5-3 hours
### Individual Tests

```bash
# Just cold start
TEST_TYPE=cold_start ./run-performance-tests.sh
# Just sustained load with custom duration
TEST_TYPE=sustained TEST_DURATION=30 ./run-performance-tests.sh
```

### Quick Test

```bash
TEST_DURATION=10 TEST_VUS=50 TEST_TYPE=sustained ./run-performance-tests.sh
```

### Production Simulation

```bash
# Simulate production traffic patterns
TEST_DURATION=120 TEST_VUS=200 TEST_TYPE=sustained ./run-performance-tests.sh
```

## Analyzing Results

The test suite automatically generates:

```
results/
├── k6/                      # Raw K6 results (JSON)
├── analysis/                # Analysis reports
│   ├── cold_start_analysis.txt
│   ├── sustained_load_analysis.txt
│   ├── quick-stats.txt
│   └── recommendations.txt
└── reports/                 # Summary reports
    ├── summary.md
    └── platform_comparison.md
```
### Manual Analysis

```bash
# Run analysis on existing results
python3 analyze-results.py
# View quick stats
cat results/analysis/quick-stats.txt
# View recommendations
cat results/analysis/recommendations.txt
```

### Key Reports

```bash
# View summary
cat results/reports/summary.md
# View platform comparison
cat results/reports/platform_comparison.md
# View detailed cold start analysis
cat results/analysis/cold_start_analysis.txt
```

## Understanding Metrics

### Response Time Metrics

- **Average (Mean):** Average response time across all requests
- **Median (P50):** 50% of requests are faster than this value
- **P95:** 95% of requests are faster than this value (key SLA metric)
- **P99:** 99% of requests are faster than this value
- **Max:** Slowest request in the test
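If you want to sanity-check the analysis script's numbers, these percentiles can be recomputed directly from K6's raw results. The Node.js sketch below assumes the raw metrics were captured as NDJSON with `k6 run --out json=<file>`; the file path is a placeholder:

```javascript
// percentiles.js - recompute latency percentiles from k6 NDJSON output.
// Assumes `k6 run --out json=...` was used; the path below is a placeholder.
const fs = require('fs');

const durations = fs.readFileSync('results/k6/sustained.json', 'utf8')
  .split('\n')
  .filter((line) => line.trim())
  .map((line) => JSON.parse(line))
  .filter((p) => p.type === 'Point' && p.metric === 'http_req_duration')
  .map((p) => p.data.value)
  .sort((a, b) => a - b);

if (durations.length === 0) throw new Error('no http_req_duration points found');

// Nearest-rank percentile over the sorted samples.
const pct = (p) =>
  durations[Math.min(durations.length - 1, Math.floor((p / 100) * durations.length))];

console.log(`requests: ${durations.length}`);
console.log(`p50: ${pct(50).toFixed(1)}ms  p95: ${pct(95).toFixed(1)}ms  p99: ${pct(99).toFixed(1)}ms`);
```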
### Performance Thresholds

✅ **GOOD:**
- P95 < 500ms
- P99 < 1000ms
- Error rate < 0.1%
- Success rate > 99.9%

⚠️ **ACCEPTABLE:**
- P95 < 1000ms
- P99 < 2000ms
- Error rate < 1%
- Success rate > 99%

❌ **POOR:**
- P95 > 1000ms
- P99 > 2000ms
- Error rate > 1%
- Success rate < 99%
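The GOOD band can also be enforced automatically by k6 so a run fails when it drifts out of range. This is an `options` fragment in the style of the Configuration snippet above; the `platform` tag name is an assumption about how the test script tags requests:

```javascript
// Encode the "GOOD" band as k6 thresholds, split per platform via tag filters.
// The `platform` tag is an assumption; adjust it to match the test script's tagging.
export const options = {
  thresholds: {
    'http_req_duration{platform:aws}':   ['p(95)<500', 'p(99)<1000'],
    'http_req_duration{platform:azure}': ['p(95)<500', 'p(99)<1000'],
    'http_req_duration{platform:gcp}':   ['p(95)<500', 'p(99)<1000'],
    http_req_failed: ['rate<0.001'],     // error rate below 0.1%
  },
};
```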
### Typical Platform Characteristics

**AWS Lambda:**
- Cold Start: 100-500ms typical
- Warm Response: 10-50ms
- Concurrency: 1000 default

**Azure Functions:**
- Cold Start: 200-800ms typical
- Warm Response: 15-60ms
- Concurrency: 200 per instance

**Google Cloud Functions:**
- Cold Start: 150-600ms typical
- Warm Response: 10-40ms
- Concurrency: 1000 per instance
### Example: Cold Start Comparison

```
AWS:   Average: 245.32ms  ✅ Fastest
Azure: Average: 523.12ms
GCP:   Average: 312.45ms

Winner: AWS (lowest average cold start)
```
### Example: Sustained Load Comparison

| Platform | Avg  | P95   | P99   | Error Rate        |
|----------|------|-------|-------|-------------------|
| AWS      | 45ms | 123ms | 245ms | 0.02% ✅ Best P95 |
| Azure    | 67ms | 189ms | 334ms | 0.05%             |
| GCP      | 52ms | 145ms | 278ms | 0.03%             |

Winner: AWS (best overall performance)
### Cost-Performance Comparison

Calculate value per dollar:

```
Cost per 1M requests:
- AWS:   $0.20
- Azure: $0.20
- GCP:   $0.40

Performance (P95):
- AWS:   123ms → Cost efficiency: 615 req/$ per ms
- Azure: 189ms → Cost efficiency: 473 req/$ per ms
- GCP:   145ms → Cost efficiency: 345 req/$ per ms

Winner: AWS (best cost-performance ratio)
```
## Troubleshooting

### Functions Not Responding

```bash
# Test URLs manually
curl "https://your-function-url?name=test"
# Check for CORS issues
curl -v "https://your-function-url?name=test"
```

### K6 Not Found

```bash
# Verify installation
which k6
# Reinstall if needed
sudo apt-get install --reinstall k6
```

### High Error Rates

- **AWS:** Increase the concurrency limit:

  ```bash
  aws lambda put-function-concurrency \
    --function-name YourFunction \
    --reserved-concurrent-executions 1000
  ```

- **Azure:** Scale up to a Premium plan
- **GCP:** Increase max instances

Check function logs:

```bash
# AWS
aws logs tail /aws/lambda/YourFunction --follow
# Azure
az webapp log tail --name YourApp --resource-group YourRG
# GCP
gcloud functions logs read YourFunction --limit=50
```

### Tests Take Too Long

```bash
# Run shorter tests
TEST_DURATION=5 TEST_VUS=10 ./run-performance-tests.sh
# Run single test type
TEST_TYPE=sustained TEST_DURATION=10 ./run-performance-tests.sh
```

### Debug Mode

```bash
# Enable verbose output
k6 run --verbose k6-multicloud-test.js
# Save detailed logs
k6 run --log-output=file=debug.log k6-multicloud-test.js
```

## Best Practices

### Warm Up Functions

```bash
# Send warm-up requests before tests
for i in {1..10}; do
curl "$AWS_FUNCTION_URL?name=warmup" &
curl "$AZURE_FUNCTION_URL?name=warmup" &
curl "$GCP_FUNCTION_URL?name=warmup" &
done
wait
```

### Manage Costs

- Set billing alerts
- Monitor during tests
- Clean up after tests

### Test From Multiple Regions

```bash
# Run tests from different locations
# Use K6 Cloud or distributed testing
```

### Compare Against a Baseline

```bash
# Always run baseline test before code changes
./run-performance-tests.sh
mv results results-baseline
# Make changes, then compare
./run-performance-tests.sh
./compare-results.sh results-baseline results
```

## Next Steps

After running tests:

1. **Review Results**
   - Check `results/reports/summary.md`
   - Read `results/analysis/recommendations.txt`

2. **Optimize Based on Findings**
   - Address high cold start times
   - Fix error-prone endpoints
   - Optimize slow operations

3. **Implement Monitoring**
   - Set up CloudWatch / Application Insights / Cloud Monitoring
   - Create alerts for key metrics
   - Track trends over time

4. **Run Regular Tests**
   - Schedule weekly performance tests
   - Test before major releases
   - Compare against the baseline
## Contributing

Contributions welcome! Please:
- Test your changes
- Update documentation
- Follow existing patterns
- Submit a pull request

## License

MIT License - feel free to use and modify.