Posted on 25 May 2023, updated on 9 June 2023.
Today we will see the benefits of using k6 in Kubernetes for load testing, how to write load tests in k6, and the limitations to consider. We already talked about Gatling; let's see what k6 has to offer in comparison. Let's dive into this flexible, open-source solution.
What am I going to talk about?
Load testing is an important practice to ensure that an application or system can handle high traffic and usage. When it comes to load testing in Kubernetes, k6 is a popular tool that can be used to simulate large amounts of virtual users to test the system's performance under load.
It is well-suited to this cloud environment since it integrates very well with Grafana and also has a Kubernetes operator that simplifies deployment.
By deploying k6 as a Kubernetes operator, users can run distributed load tests that simulate a large number of virtual users hitting the system simultaneously. Additionally, Kubernetes provides automatic scheduling for k6 pods. It ensures that the load testing can continue even as the number of virtual users being simulated increases.
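To make the parallelism idea concrete, here is a rough plain-JavaScript sketch of how a total VU count could be split evenly across runner pods. This is a hypothetical helper for illustration only, not the operator's actual code:

```javascript
// Simplified sketch (not the k6-operator's real logic) of splitting a total
// number of virtual users across `parallelism` runner pods: each pod gets an
// even share, with the remainder spread over the first pods.
function splitVUs(totalVUs, parallelism) {
  const base = Math.floor(totalVUs / parallelism);
  const remainder = totalVUs % parallelism;
  return Array.from({ length: parallelism }, (_, i) =>
    base + (i < remainder ? 1 : 0)
  );
}

console.log(splitVUs(400, 4)); // [100, 100, 100, 100]
console.log(splitVUs(10, 4));  // [3, 3, 2, 2]
```

The point is simply that adding more runner pods does not change the total load, only how it is distributed.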
Overall, using it on Kubernetes is an effective way to perform load testing that can help identify and address any performance issues before they impact real users.
Today, we will see how to use k6 on Kubernetes to perform load testing. We will also see how to use Prometheus and Grafana to visualize the results of the load test. Make sure you are clear with Kubernetes before starting.
Prerequisite
For this tutorial, here is what you will need for the setup:
- A Kubernetes cluster
- Clone of the repo
- Clone of the xk6 repo using Prometheus
Deep dive on k6
A diagram is worth a thousand words:
- k6 uses the operator pattern, allowing us to declare resources of kind `K6`
- The operator schedules Kubernetes jobs
- Once done, the jobs push their results to the Prometheus Pushgateway
- Which is then scraped by Prometheus
Writing the test:
First of all, tests are written in JavaScript, so you need to know the basics of vanilla JavaScript to write them. The syntax is fairly simple.
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  stages: [
    { target: 400, duration: '30s' },
    { target: 0, duration: '30s' },
  ],
};

export default function () {
  const result = http.get('https://test-api.k6.io/public/crocodiles/');
  check(result, {
    'http response status code is 200': (r) => r.status === 200,
  });
}
Here we are testing k6's public demo API. Note that `target` in `stages` refers to virtual users, not requests per second: we ramp up to 400 virtual users over 30 seconds, then ramp back down to 0 over another 30 seconds.
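To see what the ramp looks like over time, here is a small plain-JavaScript sketch (runnable outside k6) of how the VU count evolves across stages. It mirrors k6's linear ramping behaviour but is an illustrative approximation, not k6's implementation; durations are in seconds for simplicity:

```javascript
// Illustrative sketch of k6's stage ramping: within each stage, the VU count
// is interpolated linearly from the previous target to the stage's target.
function vusAt(t, stages, startVUs = 0) {
  let prevTarget = startVUs;
  let elapsed = 0;
  for (const { target, duration } of stages) {
    if (t <= elapsed + duration) {
      const progress = (t - elapsed) / duration;
      return Math.round(prevTarget + (target - prevTarget) * progress);
    }
    elapsed += duration;
    prevTarget = target;
  }
  return prevTarget; // after the last stage, stay at the final target
}

const stages = [
  { target: 400, duration: 30 },
  { target: 0, duration: 30 },
];

console.log(vusAt(15, stages)); // 200: halfway up the ramp
console.log(vusAt(30, stages)); // 400: the peak
console.log(vusAt(45, stages)); // 200: halfway back down
```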
Applying the K6 resources
Let's take a look at the resources:
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample-with-extensions
spec:
  # The parallelism is the number of pods that will be created
  parallelism: 4
  # The script is the configmap that contains the test
  script:
    configMap:
      name: crocodile-stress-test
      file: test.js
  # The arguments are passed to the k6 command
  arguments: -o xk6-prometheus-rw
  runner:
    # The runner image contains the k6 binary built with the xk6-prometheus-rw extension
    image: sylvainpadok/k6-kube-prom:1.0
    env:
      # Make sure to change K6_PROMETHEUS_RW_SERVER_URL to your Prometheus service URL
      - name: K6_PROMETHEUS_RW_SERVER_URL
        value: http://prometheus-kube-prometheus-prometheus:9090/api/v1/write
Once you've applied the manifests, you can check the status of the runner pods with:
kubectl get pod
You can check the results of the test using the Grafana dashboards provided in the repository; they are fairly self-explanatory.
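If you would rather inspect the raw metrics, you can query Prometheus directly. This is a sketch that assumes the Prometheus service name from the manifest above; the remote-write output prefixes k6 metric names with `k6_`:

```shell
# Port-forward Prometheus locally (service name taken from the manifest above;
# adjust it to match your installation)
kubectl port-forward svc/prometheus-kube-prometheus-prometheus 9090:9090 &

# Query a k6 metric; metrics from the remote-write output are prefixed "k6_"
curl -s 'http://localhost:9090/api/v1/query?query=k6_http_reqs_total'
```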
Limitations
One of the limitations of using k6 with Kubernetes is that it can be a heavy setup process. To use k6 with Kubernetes, you will need a cluster and a strong understanding of Kubernetes, which can be time-consuming and require specialized expertise.
However, if you're looking for a lighter solution, k6 also offers a software-as-a-service (SaaS) offering that may be a better fit for your needs. Also, if you're planning on running it on your own infrastructure, make sure to use a dedicated cluster; otherwise, the results will be biased by whatever else is running there.
Using the Prometheus Pushgateway was not the optimal choice, since we may lose some information due to scraping delays, causing out-of-order metrics. It probably would have been better to use InfluxDB, which has first-class support for pushed data.
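For reference, k6 ships a built-in output for InfluxDB v1, so results can be pushed straight to the database without a scrape cycle. A minimal sketch (the hostname and database name `k6` are placeholders; InfluxDB v2 requires the separate xk6-output-influxdb extension):

```shell
# Push results directly to an InfluxDB v1 instance; "-o" is short for "--out"
k6 run -o influxdb=http://influxdb:8086/k6 test.js
```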
Another factor to consider is that to achieve maximum input/output (IO) performance, it's optimal to have nodes running on dedicated servers. This is because k6 generates a significant amount of load, which can place a heavy strain on the resources of a shared server environment.
For example, NAT Gateways provide limitations to the IO of a network. If you're unable to dedicate nodes to k6, you may experience performance limitations.
Furthermore, multi-region testing can be challenging, since cloud providers usually don't offer multi-region clusters. You'll need multiple clusters in order to do this; each region will have its own network and latency characteristics, which can impact the performance of your tests.
As a result, it's important to carefully plan and execute multi-region testing to ensure that your results are accurate and meaningful. Properly synchronizing and aggregating the results of multi-region tests is also critical.
Conclusion
In conclusion, using k6 with Kubernetes can offer a fully open-source solution for load testing your applications. This can be a great advantage for organizations that are committed to open-source technology and want to avoid vendor lock-in or proprietary software.
k6 offers a robust set of reporting and visualization tools that can help you gain valuable insights into your application's performance and identify areas for improvement. With k6's built-in metrics and support for multiple TSDBs, you can easily monitor and analyze key performance indicators such as response time, error rate, and throughput.
Overall, k6 on Kubernetes offers a flexible, scalable, and open-source solution for load testing your applications. Its powerful features and integrations make it a popular choice among developers and DevOps teams.