This repository was archived by the owner on Oct 24, 2025. It is now read-only.

k6 Load Testing Configuration (e.g. k6.ts)

This document describes the k6 load testing configuration used in the Stroppy project. The configuration provides a flexible way to run load tests against your database.

Overview

The k6 configuration script is a TypeScript file that defines how load tests should be executed. It integrates with the Stroppy benchmarking framework to generate and execute database queries under load.

Configuration Options

Environment Variables

The script reads configuration from environment variables, primarily the context environment variable, which contains a serialized StepContext object.

Main Configuration Parameters

| Parameter | Environment Variable | Description | Default |
| --- | --- | --- | --- |
| Setup Timeout | STEP_RUN_CONTEXT.config.k6Executor.k6SetupTimeout | Maximum time allowed for test setup | 1 second |
| Step Rate | STEP_RUN_CONTEXT.config.k6Executor.k6Rate | Number of iterations per second | 1 |
| Step Duration | STEP_RUN_CONTEXT.config.k6Executor.k6Duration | How long the test should run (in seconds) | 1 second |
| Pre-allocated VUs | STEP_RUN_CONTEXT.config.k6Executor.k6Vus | Initial number of virtual users | 1 |
| Max VUs | STEP_RUN_CONTEXT.config.k6Executor.k6MaxVus | Maximum number of virtual users | 1 |
| Error Threshold | Hardcoded | Percentage of errors that will abort the test | 50% |
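For reference, here is a minimal sketch of how these parameters could map onto a standard k6 options block, assuming the constant-arrival-rate executor (the scenario name stroppy and the inlined default values are illustrative, not taken from the actual script):

```typescript
// Illustrative mapping of the table's parameters onto k6 options.
// The executor name and option keys are k6's own; the surrounding
// k6Executor object stands in for the deserialized StepContext values.
const k6Executor = { k6SetupTimeout: 1, k6Rate: 1, k6Duration: 1, k6Vus: 1, k6MaxVus: 1 };

export const options = {
    setupTimeout: `${k6Executor.k6SetupTimeout}s`,  // Setup Timeout
    scenarios: {
        stroppy: {
            executor: 'constant-arrival-rate',      // fixed iterations per second
            rate: k6Executor.k6Rate,                // Step Rate
            timeUnit: '1s',
            duration: `${k6Executor.k6Duration}s`,  // Step Duration
            preAllocatedVUs: k6Executor.k6Vus,      // Pre-allocated VUs
            maxVUs: k6Executor.k6MaxVus,            // Max VUs
        },
    },
};
```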

Test Lifecycle

The test follows this lifecycle:

  1. Setup: Initializes the test environment and generates the query queue
  2. Execution: Runs the actual load test
  3. Teardown: Cleans up resources
  4. Summary: Generates test results and metrics
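The four stages above correspond to k6's standard script lifecycle hooks. A minimal sketch, with illustrative bodies (the queue contents are hypothetical):

```typescript
// Sketch of the four lifecycle stages as k6's standard hooks.
// Function names are k6's; the bodies here are placeholders.
export function setup(): { queue: string[] } {
    // 1. Setup: initialize the test environment and generate the query queue
    return { queue: ["query-1", "query-2"] }; // hypothetical queue contents
}

export default function iteration(data: { queue: string[] }): void {
    // 2. Execution: k6 calls this once per iteration for the test duration
}

export function teardown(data: { queue: string[] }): void {
    // 3. Teardown: clean up resources
}

export function handleSummary(data: object): Record<string, string> {
    // 4. Summary: generate test results and metrics
    return { stdout: JSON.stringify(data) };
}
```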

Example Xk6Instance, StepContext, Metrics, and Summary

1. Xk6 Instance

The instance object is an instance of the Xk6Instance Go type provided by the xk6 plugin, which is responsible for executing Go functions from k6.ts. You can't customize it.

type Serialized<T> = string; // protojson serialization of T, needed for type safety

interface Xk6Instance {
    setup(config: Serialized<StepContext>): Error | undefined; // initialize resources and set up drivers

    generateQueue(): Serialized<DriverQueriesList>; // generate the query queue from the StepContext

    runQuery(query: Serialized<DriverQuery>): Error | undefined; // execute a query

    teardown(): Error | undefined; // clean up resources
}

const instance: Xk6Instance = newstess.new(); // Initialize Xk6Instance

2. Setup StepContext

The STEP_RUN_CONTEXT variable is a serialized StepContext object, which is passed to the setup function of the instance object.

The StepContext contains all the information about the test, such as the RunConfig, BenchmarkDescriptor, and StepDescriptor.

const err = instance.setup(StepContext.toJsonString(STEP_RUN_CONTEXT));
if (err !== undefined) {
    throw err;
}

3. Metrics Collection

The example script includes a set of metrics to track the performance of the test. You can customize these as needed in your own script.

const setupTimeCounter = new Counter("setup_time"); // Time taken to initialize the test
const respTimeTrend = new Trend("resp_time"); // Response time trend
const requestCounter = new Counter("total_requests"); // Request count
const errorCounter = new Counter("total_errors"); // Error count
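A per-iteration body that feeds these metrics might look like the sketch below. The Counter and Trend stand-ins are defined locally only to keep the example self-contained (a real k6 script would import them from k6/metrics), and runIteration is a hypothetical helper:

```typescript
// Local stand-ins for k6's metric classes, for a self-contained sketch.
// In a real script: import { Counter, Trend } from "k6/metrics";
class Counter { total = 0; add(v: number) { this.total += v; } }
class Trend { samples: number[] = []; add(v: number) { this.samples.push(v); } }

const respTimeTrend = new Trend();
const requestCounter = new Counter();
const errorCounter = new Counter();

// Hypothetical per-iteration body: time one query and record the metrics.
function runIteration(runQuery: () => Error | undefined): void {
    const start = Date.now();
    const err = runQuery();
    respTimeTrend.add(Date.now() - start); // response time in ms
    requestCounter.add(1);                 // every attempt counts as a request
    if (err !== undefined) {
        errorCounter.add(1);               // failed queries feed the error threshold
    }
}
```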

4. Summary Report

The example script uses the handleSummary function to generate a summary report after test completion. The report includes:

  • Test configuration details
  • Duration of all test stages
  • Total requests processed
  • Error count and rate
  • Requests per second (RPS) metrics
  • Response time statistics (min, max, avg, percentiles)

interface CounterMeter {
    values: { count: number, rate: number }
}

interface TrendMeter {
    values: {
        avg: number
        min: number
        med: number
        max: number
        p90: number
        p95: number
    }
}

class Summary {
    setup_data: Context
    metrics: {
        data_received: CounterMeter
        total_requests: CounterMeter
        resp_time: TrendMeter
        iteration_duration: TrendMeter
        dropped_iterations?: CounterMeter
        iterations: CounterMeter
        total_errors: CounterMeter
        setup_time: CounterMeter
        data_sent: CounterMeter
    }
    state: {
        isStdOutTTY: boolean
        isStdErrTTY: boolean
        testRunDurationMs: number
    }
}

// Summary function that will create a summary file with metrics.
export function handleSummary(summaryData: Summary) {
}
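As an illustration, a helper along these lines could derive the report's headline numbers (RPS, error rate) from the Summary shape above. The deriveReport helper and the trimmed SummaryLike type are hypothetical; the field names come from the interfaces documented here:

```typescript
// Hypothetical helper deriving report numbers from the Summary shape.
// Only the fields actually used are typed.
interface SummaryLike {
    metrics: {
        total_requests: { values: { count: number; rate: number } };
        total_errors: { values: { count: number; rate: number } };
        resp_time: { values: { avg: number; min: number; max: number; p95: number } };
    };
    state: { testRunDurationMs: number };
}

function deriveReport(s: SummaryLike) {
    const requests = s.metrics.total_requests.values.count;
    const errors = s.metrics.total_errors.values.count;
    return {
        rps: requests / (s.state.testRunDurationMs / 1000),  // requests per second
        errorRate: requests > 0 ? errors / requests : 0,     // fraction of failed requests
        avgRespMs: s.metrics.resp_time.values.avg,           // average response time
    };
}
```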

Best Practices

Performance Optimization

  1. Start Small: Begin with a low number of VUs and gradually increase.
  2. Monitor Resources: Keep an eye on system resources during tests.
  3. Set Realistic Durations: Ensure test durations are long enough to get meaningful results.
  4. Analyze Results: Pay attention to both success rates and response times.
  5. Error Handling: Implement proper error handling for your specific use case.

Error Handling

The script handles the following error scenarios:

  • Setup failures
  • Query execution errors
  • Timeout handling
  • Error threshold violations (test will abort if error rate exceeds threshold)
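The document's script hardcodes the 50% abort threshold. One common way to express such an abort in a k6 options block is a threshold with abortOnFail; the metric name errors below is hypothetical, and the expression assumes a Rate metric:

```typescript
// Sketch of a k6 threshold that aborts the run on excessive errors.
// "errors" is a hypothetical Rate metric name; the threshold syntax
// and abortOnFail flag are standard k6 options.
export const options = {
    thresholds: {
        errors: [{ threshold: "rate<0.5", abortOnFail: true }],
    },
};
```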

Troubleshooting

  • Test Failing During Setup: Check the setup timeout value and increase if necessary.
  • High Error Rates: Verify your database connection settings and query parameters.
  • Performance Issues: Monitor system resources and adjust VU count accordingly.
  • Unexpected Behavior: Check the k6 logs and console output for detailed error messages.