Multi-Cloud Serverless Performance Testing Guide

Complete performance testing suite for comparing AWS Lambda, Azure Functions, and Google Cloud Functions.

🎯 Overview

This performance testing suite provides:

✅ 6 Test Scenarios: Cold Start, Sustained Load, Spike, Stress, Concurrency, Endurance
✅ 3 Testing Tools: K6, JMeter, Gatling
✅ Automated Analysis: Python-based result analysis and comparison
✅ Comprehensive Reports: Markdown, HTML, and text reports
✅ Platform Comparison: Side-by-side AWS vs Azure vs GCP metrics

🚀 Quick Start

1. Prerequisites

# Install K6 (Ubuntu/Debian)
sudo gpg -k
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg \
  --keyserver hkp://keyserver.ubuntu.com:80 \
  --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" | \
  sudo tee /etc/apt/sources.list.d/k6.list
sudo apt-get update
sudo apt-get install k6

# Install Python dependencies (optional, for analysis)
pip install pandas matplotlib seaborn

2. Configure Function URLs

export AWS_FUNCTION_URL="https://your-aws-lambda-url"
export AZURE_FUNCTION_URL="https://your-app.azurewebsites.net/api/AzureHttpTrigger"
export GCP_FUNCTION_URL="https://region-project.cloudfunctions.net/your-function"
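
Exported this way, the URLs are visible to the K6 script, since k6 passes system environment variables through to scripts via its __ENV object by default. A minimal sketch of the lookup, assuming the script uses the same variable names as the exports above:

const TARGETS = {
  aws:   __ENV.AWS_FUNCTION_URL,
  azure: __ENV.AZURE_FUNCTION_URL,
  gcp:   __ENV.GCP_FUNCTION_URL,
};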

3. Run Tests

# Run all tests (takes 2-3 hours)
./run-performance-tests.sh

# Run specific test
TEST_TYPE=sustained ./run-performance-tests.sh

# Quick test (10 minutes)
TEST_DURATION=10 TEST_VUS=50 ./run-performance-tests.sh

📊 Test Scenarios

1. Cold Start Test

Purpose: Measure function initialization time
Duration: 30 minutes
Pattern: Burst → 15 min wait → Burst
Metrics: Cold start latency per platform

TEST_TYPE=cold_start ./run-performance-tests.sh
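
For reference, a burst, idle, burst shape maps naturally onto k6's ramping-vus executor. The sketch below is a minimal, self-contained example with illustrative stage values, not the exact configuration used by k6-multicloud-test.js:

import http from 'k6/http';

export const options = {
  scenarios: {
    cold_start: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '1m', target: 50 },   // first burst warms instances
        { duration: '15m', target: 0 },   // idle so platforms scale back to zero
        { duration: '1m', target: 50 },   // second burst hits cold instances
      ],
    },
  },
};

export default function () {
  http.get(`${__ENV.AWS_FUNCTION_URL}?name=cold`); // repeat for Azure/GCP
}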

2. Sustained Load Test

Purpose: Test consistent performance under steady load
Duration: 60 minutes (configurable)
Pattern: Ramp up → Steady → Ramp down
Metrics: Response time, throughput, error rate

TEST_TYPE=sustained TEST_DURATION=60 ./run-performance-tests.sh

3. Spike Test

Purpose: Test behavior under sudden traffic spikes
Duration: 20 minutes
Pattern: Baseline → Spike → Peak → Drop → Recovery
Metrics: Response time during spike, recovery time

TEST_TYPE=spike ./run-performance-tests.sh
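
The baseline, spike, peak, drop, recovery shape can likewise be expressed as a k6 stages list; the durations and VU targets below are illustrative assumptions only:

import http from 'k6/http';

export const options = {
  stages: [
    { duration: '2m', target: 20 },    // baseline traffic
    { duration: '30s', target: 500 },  // sudden spike
    { duration: '3m', target: 500 },   // hold at peak
    { duration: '30s', target: 20 },   // drop back to baseline
    { duration: '5m', target: 20 },    // recovery / stabilization
  ],
};

export default function () {
  http.get(`${__ENV.AWS_FUNCTION_URL}?name=spike`); // repeat for Azure/GCP
}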

4. Stress Test

Purpose: Find breaking point and resource limits
Duration: 30 minutes
Pattern: Gradual increase until failure
Metrics: Maximum throughput, breaking point

TEST_TYPE=stress ./run-performance-tests.sh

5. Concurrency Test

Purpose: Test maximum concurrent executions
Duration: 15 minutes
Pattern: 1000 concurrent requests, repeated
Metrics: Concurrent execution handling

TEST_TYPE=concurrency ./run-performance-tests.sh

6. Endurance Test

Purpose: Check for memory leaks and degradation
Duration: 4-8 hours
Pattern: Moderate constant load
Metrics: Performance over time, memory usage

TEST_TYPE=endurance TEST_DURATION=240 ./run-performance-tests.sh

💻 Installation

K6 (Primary Tool)

macOS:

brew install k6

Windows:

choco install k6

Linux (Ubuntu/Debian): See Quick Start section above

JMeter (Optional)

wget https://dlcdn.apache.org/jmeter/binaries/apache-jmeter-5.6.3.tgz
tar -xzf apache-jmeter-5.6.3.tgz
export PATH=$PATH:$(pwd)/apache-jmeter-5.6.3/bin

Gatling (Optional)

wget https://repo1.maven.org/maven2/io/gatling/highcharts/gatling-charts-highcharts-bundle/3.10.3/gatling-charts-highcharts-bundle-3.10.3.zip
unzip gatling-charts-highcharts-bundle-3.10.3.zip

Python Analysis Tools

pip install pandas matplotlib seaborn numpy

βš™οΈ Configuration

Environment Variables

# Required
export AWS_FUNCTION_URL="..."
export AZURE_FUNCTION_URL="..."
export GCP_FUNCTION_URL="..."

# Optional
export TEST_DURATION=60        # Test duration in minutes
export TEST_VUS=100           # Number of virtual users
export TEST_TYPE=all          # Test type to run

Test Parameters

Edit run-performance-tests.sh to customize:

TEST_DURATION="${TEST_DURATION:-60}"
TEST_VUS="${TEST_VUS:-100}"
TEST_TYPE="${TEST_TYPE:-all}"

K6 Options

Modify k6-multicloud-test.js for advanced configuration:

export const options = {
  thresholds: {
    http_req_duration: ['p(95)<500', 'p(99)<1000'],
    http_req_failed: ['rate<0.01'],
  },
  // ... more options
};
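
Since one script exercises all three platforms, it can also help to tag each request with its platform and attach per-platform limits to the tagged sub-metric. This is standard k6 functionality, but the tag name and thresholds below are assumptions rather than what k6-multicloud-test.js necessarily does:

import http from 'k6/http';

export const options = {
  thresholds: {
    // separate limits per platform via sub-metric thresholds on a tag
    'http_req_duration{platform:aws}':   ['p(95)<500'],
    'http_req_duration{platform:azure}': ['p(95)<500'],
    'http_req_duration{platform:gcp}':   ['p(95)<500'],
  },
};

export default function () {
  http.get(`${__ENV.AWS_FUNCTION_URL}?name=test`, { tags: { platform: 'aws' } });
  http.get(`${__ENV.AZURE_FUNCTION_URL}?name=test`, { tags: { platform: 'azure' } });
  http.get(`${__ENV.GCP_FUNCTION_URL}?name=test`, { tags: { platform: 'gcp' } });
}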

πŸƒ Running Tests

Full Test Suite

./run-performance-tests.sh

This runs:

  1. Cold Start Test (30 min)
  2. Sustained Load Test (60 min)
  3. Spike Test (20 min)
  4. Stress Test (30 min)
  5. Concurrency Test (15 min)
  6. Analysis & Report Generation

Total Time: ~2.5-3 hours

Individual Tests

# Just cold start
TEST_TYPE=cold_start ./run-performance-tests.sh

# Just sustained load with custom duration
TEST_TYPE=sustained TEST_DURATION=30 ./run-performance-tests.sh

Quick Performance Check (10 minutes)

TEST_DURATION=10 TEST_VUS=50 TEST_TYPE=sustained ./run-performance-tests.sh

Production Load Simulation

# Simulate production traffic patterns
TEST_DURATION=120 TEST_VUS=200 TEST_TYPE=sustained ./run-performance-tests.sh

📈 Analyzing Results

Automatic Analysis

The test suite automatically generates:

results/
├── k6/                     # Raw K6 results (JSON)
├── analysis/               # Analysis reports
│   ├── cold_start_analysis.txt
│   ├── sustained_load_analysis.txt
│   ├── quick-stats.txt
│   └── recommendations.txt
└── reports/                # Summary reports
    ├── summary.md
    └── platform_comparison.md

Manual Analysis

# Run analysis on existing results
python3 analyze-results.py

# View quick stats
cat results/analysis/quick-stats.txt

# View recommendations
cat results/analysis/recommendations.txt
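
For an ad-hoc check without the Python tooling, the raw K6 output is newline-delimited JSON when produced with k6's --out json option: each request timing appears as a Point entry for the http_req_duration metric. A minimal Node.js sketch (the file name below is a placeholder; point it at an actual file under results/k6/):

// quick-p95.js: rough P95 from a k6 --out json results file
const fs = require('fs');

const durations = [];
for (const line of fs.readFileSync('results/k6/sustained.json', 'utf8').split('\n')) {
  if (!line.trim()) continue;
  const entry = JSON.parse(line);
  if (entry.type === 'Point' && entry.metric === 'http_req_duration') {
    durations.push(entry.data.value); // duration in milliseconds
  }
}
if (!durations.length) throw new Error('no http_req_duration points found');
durations.sort((a, b) => a - b);
const p95 = durations[Math.floor(durations.length * 0.95)];
console.log(`requests: ${durations.length}, p95: ${p95.toFixed(2)} ms`);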

Viewing Results

# View summary
cat results/reports/summary.md

# View platform comparison
cat results/reports/platform_comparison.md

# View detailed cold start analysis
cat results/analysis/cold_start_analysis.txt

📊 Understanding Metrics

Response Time Metrics

  • Average (Mean): Mean response time across all requests
  • Median (P50): 50% of requests complete faster than this value
  • P95: 95% of requests complete faster than this value (key SLA metric)
  • P99: 99% of requests complete faster than this value
  • Max: Slowest single request in the test

Success Criteria

βœ… GOOD:
  - P95 < 500ms
  - P99 < 1000ms
  - Error rate < 0.1%
  - Success rate > 99.9%

⚠️  ACCEPTABLE:
  - P95 < 1000ms
  - P99 < 2000ms
  - Error rate < 1%
  - Success rate > 99%

❌ POOR:
  - P95 > 1000ms
  - P99 > 2000ms
  - Error rate > 1%
  - Success rate < 99%

Platform-Specific Metrics

AWS Lambda:

  • Cold Start: 100-500ms typical
  • Warm Response: 10-50ms
  • Concurrency: 1000 default

Azure Functions:

  • Cold Start: 200-800ms typical
  • Warm Response: 15-60ms
  • Concurrency: 200 per instance

Google Cloud Functions:

  • Cold Start: 150-600ms typical
  • Warm Response: 10-40ms
  • Concurrency: 1000 per instance

πŸ” Interpreting Results

Cold Start Analysis

AWS:      Average: 245.32ms  ← Fastest
Azure:    Average: 523.12ms
GCP:      Average: 312.45ms

Winner: AWS (lowest average cold start)

Sustained Load Analysis

Platform  | Avg    | P95    | P99    | Error Rate
----------|--------|--------|--------|------------
AWS       | 45ms   | 123ms  | 245ms  | 0.02%  ← Best P95
Azure     | 67ms   | 189ms  | 334ms  | 0.05%
GCP       | 52ms   | 145ms  | 278ms  | 0.03%

Winner: AWS (best overall performance)

Cost-Performance Analysis

Calculate value per dollar:

Cost per 1M requests:
- AWS: $0.20
- Azure: $0.20
- GCP: $0.40

Performance (P95):
- AWS: 123ms → Cost efficiency: 615 req/$ per ms
- Azure: 189ms → Cost efficiency: 473 req/$ per ms
- GCP: 145ms → Cost efficiency: 345 req/$ per ms

Winner: AWS (best cost-performance ratio)
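
One simple way to fold cost and latency into a single score is requests per dollar divided by P95 latency, so that cheaper and faster both score higher. This is an assumed scoring formula for illustration; it reproduces the ranking above, though not the exact efficiency figures:

// Assumed cost-performance score: (requests per dollar) / (P95 in ms).
// Prices and latencies are the illustrative values from this section.
const platforms = [
  { name: 'AWS',   costPerMillion: 0.20, p95Ms: 123 },
  { name: 'Azure', costPerMillion: 0.20, p95Ms: 189 },
  { name: 'GCP',   costPerMillion: 0.40, p95Ms: 145 },
];

for (const p of platforms) {
  const requestsPerDollar = 1e6 / p.costPerMillion;
  const score = requestsPerDollar / p.p95Ms;
  console.log(`${p.name}: ${Math.round(score)} (higher is better)`);
}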

🔧 Troubleshooting

Common Issues

1. "Function URL not accessible"

# Test URLs manually
curl "https://your-function-url?name=test"

# Check for CORS issues
curl -v "https://your-function-url?name=test"

2. "K6 command not found"

# Verify installation
which k6

# Reinstall if needed
sudo apt-get install --reinstall k6

3. "Rate limiting / Throttling"

  • AWS: Increase concurrency limit

    aws lambda put-function-concurrency \
      --function-name YourFunction \
      --reserved-concurrent-executions 1000
  • Azure: Scale up to Premium plan

  • GCP: Increase max instances

4. "High error rates"

Check function logs:

# AWS
aws logs tail /aws/lambda/YourFunction --follow

# Azure
az webapp log tail --name YourApp --resource-group YourRG

# GCP
gcloud functions logs read YourFunction --limit=50

5. "Tests taking too long"

# Run shorter tests
TEST_DURATION=5 TEST_VUS=10 ./run-performance-tests.sh

# Run single test type
TEST_TYPE=sustained TEST_DURATION=10 ./run-performance-tests.sh

Debug Mode

# Enable verbose output
k6 run --verbose k6-multicloud-test.js

# Save detailed logs
k6 run --log-output=file=debug.log k6-multicloud-test.js

πŸ“ Best Practices

1. Warm Up Functions

# Send warm-up requests before tests
for i in {1..10}; do
  curl "$AWS_FUNCTION_URL?name=warmup" &
  curl "$AZURE_FUNCTION_URL?name=warmup" &
  curl "$GCP_FUNCTION_URL?name=warmup" &
done
wait

2. Monitor Cloud Costs

  • Set billing alerts
  • Monitor during tests
  • Clean up after tests

3. Test from Multiple Regions

# Run tests from different locations
# Use K6 Cloud or distributed testing

4. Baseline Before Changes

# Always run baseline test before code changes
./run-performance-tests.sh
mv results results-baseline

# Make changes, then compare
./run-performance-tests.sh
./compare-results.sh results-baseline results

🎯 Next Steps

After running tests:

  1. Review Results

    • Check results/reports/summary.md
    • Read results/analysis/recommendations.txt
  2. Optimize Based on Findings

    • Address high cold start times
    • Fix error-prone endpoints
    • Optimize slow operations
  3. Implement Monitoring

    • Set up CloudWatch/Application Insights/Cloud Monitoring
    • Create alerts for key metrics
    • Track trends over time
  4. Run Regular Tests

    • Schedule weekly performance tests
    • Test before major releases
    • Compare against baseline

🤝 Contributing

Contributions welcome! Please:

  1. Test your changes
  2. Update documentation
  3. Follow existing patterns
  4. Submit pull request

📄 License

MIT License - feel free to use and modify
