Kubernetes-hosted tests

Hi there,

Nice project! I have a question about spawning a Kubernetes test within a pipeline. Currently I have cobbled together a pipeline that uses Makisu to build my images and then pushes them to the same Kubernetes cluster (microk8s) the build was performed in. I then used keel to trigger a step in which a pod spun up two containers, rabbitmq and minio, alongside the container of the image under test. Once the test finished, the original build container set the replica count of the supporting pods to 0 in order to bring down the test environment.

From reading the docs it appears I am set to go with Kaniko and the builder container itself. What approach would you favor for setting up a namespace with pods etc. as a downstream test of the produced image, inside exactly the same cluster, via the Kubernetes API?

My fairly disorganized pipeline docs: studio-go-runner/ci.md at master · leaf-ai/studio-go-runner · GitHub.

Many Thanks
Karl

Thanks!

You should also be able to use makisu without requiring a privileged container; there isn't an example since, at the time of testing, makisu was very young and kaniko (with its rough edges) worked better. We'll try to create a makisu example when we have time, or feel free to contribute one.

I would set up one or more "setup" tasks that create and push the images, then other "setup" tasks that spin up the k8s environment (create a "temporary" namespace, with a unique id so it is unique to the run) and the various deployments/statefulsets etc. For this you can continue to use keel, or create the various deployments/daemonsets yourself by calling the k8s API, whichever you prefer. The k8s API credentials can be saved in an agola secret and exposed as a variable to the tasks (you could provide a full kubeconfig file, saving it base64 encoded in a secret).
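For illustration, a rough sketch of such a namespace-creation step using client-go (untested; the `KUBECONFIG_FILE` and `RUN_ID` environment variables and the `purpose=ci-test` label are placeholders you'd wire up from your agola variables, and the `Create` signature shown assumes a recent client-go that takes a context):

```go
// createtestns.go: create a per-run "temporary" namespace, assuming the
// kubeconfig has been exposed to the task via an agola secret/variable.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG_FILE and RUN_ID are hypothetical variables injected by the
	// pipeline (e.g. the base64-decoded kubeconfig written to disk in an
	// earlier step, and the run's unique identifier).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG_FILE"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{
			Name: fmt.Sprintf("ci-%s", os.Getenv("RUN_ID")),
			// Label the namespace so a cleanup job can find it later.
			Labels: map[string]string{"purpose": "ci-test"},
		},
	}
	if _, err := client.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created namespace", ns.Name)
}
```

The deployments/statefulsets for rabbitmq, minio and the image under test would then be created into that namespace by the same or a following setup task.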

Then create additional tasks, depending on those "setup" tasks, that execute the tests.

As for the final "cleanup", we have thought about it many times. Currently you can create a "cleanup" task that is called at the end of the run regardless of its final result. In your case you can just delete the k8s namespace and k8s will automatically delete all of its resources. To do this you should make the cleanup task depend on every other task with both on_success and on_failure conditions. This isn't a very clean approach and I have opened a proposal to improve it here:
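That cleanup step itself can be as simple as deleting the per-run namespace. A minimal sketch, using the same placeholder variables as above:

```go
// cleanup.go: delete the per-run namespace; Kubernetes then garbage-collects
// everything inside it (deployments, statefulsets, pods, services, ...).
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG_FILE"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same naming convention as the setup task used when creating the namespace.
	name := fmt.Sprintf("ci-%s", os.Getenv("RUN_ID"))
	if err := client.CoreV1().Namespaces().Delete(context.TODO(), name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("deleted namespace", name)
}
```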

Another approach would be to externally delete old k8s namespaces (i.e. a job that deletes namespaces older than N hours, or, for example, protects in-use ones using mechanisms like etcd locks).
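Sketching that second approach, an out-of-band garbage collector (run, say, as a Kubernetes CronJob with a suitable service account) could look roughly like this; the `purpose=ci-test` label and the 4 hour cutoff are made-up values you would adapt:

```go
// nsgc.go: out-of-band garbage collector that deletes CI namespaces older
// than a configurable age, identified by a label set at creation time.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

const maxAge = 4 * time.Hour // "N hours"; tune to your longest expected run

func main() {
	// Running inside the cluster, so use the pod's service account.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.TODO()
	// "purpose=ci-test" is the assumed marker label set by the setup task.
	list, err := client.CoreV1().Namespaces().List(ctx, metav1.ListOptions{LabelSelector: "purpose=ci-test"})
	if err != nil {
		panic(err)
	}
	for _, ns := range list.Items {
		if time.Since(ns.CreationTimestamp.Time) > maxAge {
			if err := client.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
				fmt.Println("failed to delete", ns.Name, ":", err)
				continue
			}
			fmt.Println("deleted stale namespace", ns.Name)
		}
	}
}
```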

Let us know what you think about this or propose alternative approaches.

My sense is that using the Kubernetes API might be the best approach for initiating the tests. Adding keel.sh complicates the solution, so I am looking for ways to avoid it. I am looking to use agola to reduce the variety of technologies a consumer of open source needs in order to successfully use CI/CD.

I would then have a step that waits for the test driver pod inside the test namespace to exit, scrapes the logs from that pod, and incorporates them into the agola task output. Deleting the test namespace would be an additional final step.
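Roughly, I imagine that wait-and-scrape step looking something like the following (completely unprototyped; the `test-driver` pod name and the `KUBECONFIG_FILE`/`RUN_ID` variables are placeholders, and the client-go calls assume a recent version that takes a context):

```go
// waitandscrape.go: block until the test driver pod reaches a terminal phase,
// copy its logs to this task's stdout (so they land in the agola output), and
// exit non-zero if the tests failed.
package main

import (
	"context"
	"fmt"
	"io"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG_FILE"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.TODO()
	ns := fmt.Sprintf("ci-%s", os.Getenv("RUN_ID"))
	podName := "test-driver" // hypothetical name of the test driver pod

	// Poll until the pod succeeds or fails; a watch would also work.
	var phase corev1.PodPhase
	for {
		pod, err := client.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		phase = pod.Status.Phase
		if phase == corev1.PodSucceeded || phase == corev1.PodFailed {
			break
		}
		time.Sleep(10 * time.Second)
	}

	// Stream the test driver's logs into this task's output.
	stream, err := client.CoreV1().Pods(ns).GetLogs(podName, &corev1.PodLogOptions{}).Stream(ctx)
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		panic(err)
	}

	if phase == corev1.PodFailed {
		os.Exit(1) // propagate the test result to the pipeline
	}
}
```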

As time permits I’ll try to prototype this approach.

Thanks again,
Karl
