
How to run an end-to-end test in Kubernetes?

Avanish Pandey

November 7, 2022


An ever-increasing number of components that used to be part of Kubernetes are now being developed outside of Kubernetes. For example, storage drivers used to be compiled into Kubernetes binaries, then were moved into stand-alone FlexVolume binaries on the host, and now are delivered as Container Storage Interface (CSI) drivers that get deployed in pods inside the Kubernetes cluster itself. This poses a challenge for developers who work on such components: how can end-to-end (E2E) testing on a Kubernetes cluster be done for such external components? The E2E framework that is used for testing Kubernetes itself has all the necessary functionality. However, trying to use it outside of Kubernetes was difficult and only possible by carefully selecting the right versions of a large number of dependencies. E2E testing has become a lot simpler in Kubernetes 1.13.

This blog post summarizes the changes that went into Kubernetes 1.13. For CSI driver developers, it also covers the ongoing effort to make the storage tests available for testing third-party CSI drivers.

How to use them will be shown based on two Intel CSI drivers:

Open Infrastructure Manager (OIM)

PMEM-CSI

Testing those drivers was the main motivation behind most of these enhancements.

E2E overview

E2E testing consists of several phases:

Implementing a test suite. This is the main focus of this blog post. The Kubernetes E2E framework is written in Go. It relies on Ginkgo for managing tests and Gomega for assertions. These tools support "behavior driven development", which describes expected behavior in "specs". In this blog post, "test" is used to reference an individual Ginkgo It spec. Tests interact with the Kubernetes cluster using client-go. A minimal spec sketch follows this list.

Bringing up a test cluster. Tools like kubetest can help here.

Running an E2E test suite against that cluster. Ginkgo test suites can be run with the ginkgo tool or as a normal Go test with go test. Without any parameters, a Kubernetes E2E test suite connects to the default cluster based on environment variables like KUBECONFIG, exactly like kubectl. Kubetest also knows how to run the Kubernetes E2E suite.
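
For illustration, here is a minimal sketch of what such a Ginkgo spec looks like; the driver name and the final check are made up for the example:

    // A minimal sketch of a Ginkgo spec in the style used by the
    // Kubernetes E2E framework. Names here are illustrative only.
    package e2e

    import (
        "github.com/onsi/ginkgo"
        "github.com/onsi/gomega"
    )

    var _ = ginkgo.Describe("my-driver [example]", func() {
        ginkgo.It("should provision a volume", func() {
            // A real test would talk to the cluster via client-go here,
            // e.g. create a PVC and wait for it to become bound.
            volumeBound := true // placeholder for the real check
            gomega.Expect(volumeBound).To(gomega.BeTrue())
        })
    })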

E2E framework enhancements in Kubernetes 1.13

All of the following enhancements follow the same basic pattern: they make the E2E framework more useful and easier to use outside of Kubernetes, without changing the behavior of the original Kubernetes e2e.test binary.

Splitting out provider support

The main reason why using the E2E framework from Kubernetes <= 1.12 was difficult was its dependencies on provider-specific SDKs, which pulled in a large number of packages. Just getting it compiled was non-trivial.

Many of these packages are only needed for certain tests. For example, testing the mounting of a pre-provisioned volume must first provision such a volume the same way an administrator would, by talking directly to a specific storage backend via some non-Kubernetes API.

There is an effort to remove cloud provider-specific tests from core Kubernetes. The approach taken in PR #68483 can be seen as an incremental step towards that goal: instead of ripping out the code immediately and breaking all tests that depend on it, all cloud provider-specific code was moved into optional packages under test/e2e/framework/providers. The E2E framework then accesses it via an interface that gets implemented separately by each vendor package.

The author of an E2E test suite decides which of these packages get imported into the test suite. The vendor support is then activated via the --provider command line flag. The Kubernetes e2e.test binary in 1.13 and 1.14 still contains support for the same providers as in 1.12. It is also okay to include no packages, which means that only the generic providers will be available (an import sketch follows the list below):

"skeleton": bunch is gotten to by means of the Kubernetes API and that’s it

"neighborhood": like "skeleton", yet furthermore the contents in Kubernetes/Kubernetes/group can recover logs through ssh after a test suite is run

External files

Tests may have to read additional files at runtime, like .yaml manifests. But the Kubernetes e2e.test binary is supposed to be usable and entirely stand-alone, because that simplifies shipping and running it. The solution in the Kubernetes build system is to link all files under test/e2e/testing-manifests into the binary with go-bindata. The E2E framework used to have a hard dependency on the output of go-bindata; now bindata support is optional. When accessing a file via the testfiles package, files will be retrieved from different sources (see the sketch after this list):

relative to the directory specified with the --repo-root parameter

zero or more bindata chunks
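
A sketch of wiring this up, assuming the 1.13-era package layout (the exact API may vary between releases):

    package e2e

    import (
        "k8s.io/kubernetes/test/e2e/framework"
        "k8s.io/kubernetes/test/e2e/framework/testfiles"
    )

    // setupFileSources must run after flag parsing so that
    // framework.TestContext.RepoRoot has its final value.
    func setupFileSources() {
        // Look up files relative to the --repo-root directory.
        testfiles.AddFileSource(testfiles.RootFileSource{Root: framework.TestContext.RepoRoot})
        // Optionally, a testfiles.BindataFileSource with go-bindata output
        // could be registered here as an additional source.
    }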

Test parameters

The e2e.test binary takes additional parameters which control test execution. In 2016, an effort was started to replace all E2E command line parameters with a Viper configuration file. But that effort stalled, which left developers without clear guidance on how they should handle test-specific parameters.

The approach in v1.12 was to add all flags to the central test/e2e/framework/test_context.go, which does not work for tests developed independently from the framework. Since PR #69105 the recommendation has been to use the normal flag package to define parameters in the test’s own source code. Flag names must be hierarchical, with dots separating the different levels, for example my.test.parameter, and must be unique. Uniqueness is enforced by the flag package, which panics when a flag is registered a second time. The new config package simplifies the definition of multiple options, which are stored in a single struct.

To summarize, this is how parameters are handled now (a minimal example follows this list):

The init code in test packages defines tests and parameters. The actual parameter values are not available yet, so test definitions cannot use them.

The init code of the test suite parses parameters and (optionally) the configuration file. The tests run and can now use the parameter values.
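
A minimal example with the standard flag package (the parameter name is the one from the text; the variable and its default are made up):

    package mytest

    import "flag"

    // Registered at init time; registering the same name twice panics.
    var myTestParameter = flag.String("my.test.parameter", "some-default",
        "example of a hierarchical, dot-separated test parameter")

    // Note: *myTestParameter only holds its final value after the test
    // suite has called flag.Parse(), so only test bodies may dereference
    // it, not the test definitions themselves.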

However, it was recently pointed out that it would be desirable, and would have been possible, to not expose test settings as command line flags and only set them via a configuration file. There is an open bug and a pending PR about this.

Viper support has been enhanced. Like the provider support, it is completely optional. It gets pulled into an e2e.test binary by importing the viperconfig package and calling it after parsing the normal command line flags. This has been implemented such that all variables which can be set via command line flags are also set when the flag appears in a Viper config file. For example, the Kubernetes v1.13 e2e.test binary accepts --viper-config=/tmp/my-config.yaml, and that file will set my.test.parameter to value when it has this content:

    my:
      test:
        parameter: value

In older Kubernetes releases, that option could only load a file from the current directory, the suffix had to be left out, and only a few parameters actually could be set this way. Beware that one limitation of Viper still exists: it works by matching config file entries against known flags, without warning about unknown config file entries, thus leaving typos undetected. A better config file parser for Kubernetes is still work in progress.

Creating items from .yaml manifests

In Kubernetes 1.12, there was some support for loading individual items from a .yaml file, but creating that item then had to be done with hand-written code. Now the framework has new methods for loading a .yaml file that has multiple items, patching those items (for example, setting the namespace created for the current test), and creating them. This is currently used to deploy CSI drivers anew for each test from exactly the same .yaml files that are also used for deployment via kubectl. If the CSI driver supports running under different names, then tests are completely independent and can run in parallel. A sketch of such a deployment step is shown below.
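A hedged sketch based on the framework's CreateFromManifests helper as it appeared around 1.13/1.14 (the exact signature may differ between releases, and the manifest path is hypothetical):

    package e2e

    import "k8s.io/kubernetes/test/e2e/framework"

    // deployDriver deploys a CSI driver from its regular deployment file,
    // patched into the namespace of the current test, and returns a
    // cleanup function to be deferred by the caller.
    func deployDriver(f *framework.Framework) func() {
        cleanup, err := f.CreateFromManifests(func(item interface{}) error {
            // Items can be modified here before they are created, for
            // example renaming the driver so that parallel tests do not
            // collide. Returning nil keeps the default patching.
            return nil
        }, "deploy/kubernetes/my-csi-driver.yaml") // hypothetical path
        framework.ExpectNoError(err, "deploying CSI driver")
        return cleanup
    }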

However, redeploying a driver slows down test execution, and it does not cover concurrent operations against the driver. A more realistic test scenario is to deploy a driver once when bringing up the test cluster, and then run all tests against that deployment. Eventually, Kubernetes E2E testing will move to that model once it is clearer how test cluster bring-up can be extended such that it also includes installing additional entities like CSI drivers.

Upcoming enhancements in Kubernetes 1.14

Reusing storage tests

Being able to use the framework outside of Kubernetes enables building a custom test suite. But a test suite without tests is still useless. Several of the existing tests, in particular for storage, can also be applied to out-of-tree components. Thanks to the work done by Masaki Kimura, storage tests in Kubernetes 1.13 are defined such that they can be instantiated multiple times for different drivers.

But history has a tendency to repeat itself. As with providers, the package defining these tests also pulled in driver definitions for all in-tree storage backends, which in turn pulled in more additional packages than were needed. This has been fixed for the upcoming Kubernetes 1.14.

Skipping unsupported tests

Some of the storage tests depend on features of the cluster (like running on a host that supports XFS) or of the driver (like supporting block volumes). These conditions are checked while the test runs, leading to skipped tests when they are not satisfied. The good news is that this records an explanation of why the test did not run.
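
For example, such a runtime check might look like this sketch (the capability flag is made up; Skipf is the framework's skip helper):

    package e2e

    import "k8s.io/kubernetes/test/e2e/framework"

    // skipUnlessBlockVolumeSupport records an explanation and skips the
    // current test when the (hypothetical) capability flag is false.
    func skipUnlessBlockVolumeSupport(name string, supported bool) {
        if !supported {
            framework.Skipf("driver %q does not support block volumes", name)
        }
    }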

Starting a test is slow, in particular when it must first deploy the CSI driver, but also in other scenarios. Creating the namespace for a test has been measured at 5 seconds on a fast cluster, and it generates a lot of noisy test output. It would have been possible to address that by skipping the definition of unsupported tests, but then reporting why a test isn’t even part of the test suite becomes tricky. This approach has been dropped in favor of reorganizing the storage test suite such that it first checks conditions before doing the more expensive test setup steps.

More readable test definitions

The same PR also rewrites the tests to operate like conventional Ginkgo tests, with test cases and their local variables in a single function.

Testing external drivers

Building a custom E2E test suite is still quite a bit of work. The e2e.test binary that will get distributed in the Kubernetes 1.14 test archive will be able to test already-installed storage drivers without rebuilding the test suite. See this README for further instructions.
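
Based on that mechanism, an invocation could look roughly like this (the External Storage focus and the storage.testdriver flag follow the 1.14 external storage test support; the driver definition file path is hypothetical):

    e2e.test -ginkgo.v \
        -ginkgo.focus='External.Storage' \
        -storage.testdriver=/path/to/test-driver.yaml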

E2E test suite HOWTO

Test suite initialization

The first step is to set up the necessary boilerplate code that defines the test suite. In Kubernetes E2E, this is done in the e2e.go and e2e_test.go files. It could also be done in a single e2e_test.go file.

Kubernetes imports all of the various providers, in-tree tests, Viper configuration support, and bindata file lookup in e2e_test.go. e2e.go controls the actual execution, including some cluster preparations and metrics collection.

A simpler starting point are the e2e_[test].go files from PMEM-CSI. They don’t use any providers, no Viper, no bindata, and import just the storage tests.
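
A minimal single-file sketch in that spirit might look as follows (function names match the 1.13/1.14 framework and may differ in other releases; the imported test package is just an example):

    package e2e

    import (
        "flag"
        "os"
        "testing"

        "github.com/onsi/ginkgo"
        "github.com/onsi/gomega"

        "k8s.io/kubernetes/test/e2e/framework"

        // Importing a test package registers its Ginkgo specs.
        _ "k8s.io/kubernetes/test/e2e/storage" // example; import your own tests
    )

    func init() {
        // Register the framework's standard command line flags.
        framework.RegisterCommonFlags()
        framework.RegisterClusterFlags()
    }

    func TestMain(m *testing.M) {
        flag.Parse()
        framework.AfterReadingAllFlags(&framework.TestContext)
        os.Exit(m.Run())
    }

    func TestE2E(t *testing.T) {
        gomega.RegisterFailHandler(ginkgo.Fail)
        ginkgo.RunSpecs(t, "My Driver E2E Suite")
    }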

Like PMEM-CSI, OIM drops all of the extra features, but is more complex because it integrates a custom cluster startup directly into the test suite, which was useful in this case because some additional components have to run on the host side. By running them directly in the E2E binary, interactive debugging with dlv becomes easier.

Both CSI drivers follow the Kubernetes example and use the test/e2e directory for their test suites, but any other directory and other file names would also work.

Adding E2E storage tests

Tests are defined by packages that get imported into a test suite. The only thing specific to E2E tests is that they instantiate a framework.Framework pointer (usually called f) with framework.NewDefaultFramework.

This variable gets initialized anew in a BeforeEach for each test and freed in an AfterEach. It has an f.ClientSet and f.Namespace at runtime (and only at runtime!) which can be used by a test, as in the sketch below.
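
A sketch of that pattern (using the pre-1.18 client-go call style without a context argument; newer releases require one):

    package e2e

    import (
        "github.com/onsi/ginkgo"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/kubernetes/test/e2e/framework"
    )

    var _ = ginkgo.Describe("sanity [sketch]", func() {
        f := framework.NewDefaultFramework("my-test")

        ginkgo.It("can list pods in the per-test namespace", func() {
            // f.ClientSet and f.Namespace are only valid while the test runs.
            pods, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).List(metav1.ListOptions{})
            framework.ExpectNoError(err)
            framework.Logf("found %d pods", len(pods.Items))
        })
    })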

The PMEM-CSI storage test imports the Kubernetes storage test suite and sets up one instance of the provisioning tests for a PMEM-CSI driver, which must already be installed in the test cluster. The storage test suite changes the storage class to run tests with different filesystem types. Because of this requirement, the storage class is created from a .yaml file.

Explaining all the various utility methods available in the framework is out of scope for this blog post. Reading existing tests and the source code of the framework is a good way to get started.

Vendoring

Vendoring Kubernetes code is still not trivial, even after eliminating many of the unnecessary dependencies. k8s.io/kubernetes is not meant to be included in other projects and does not define its dependencies in a way that is understood by tools like dep. The other k8s.io packages are meant to be included, but don’t follow semantic versioning yet or don’t tag any releases (k8s.io/kube-openapi, k8s.io/utils).

PMEM-CSI uses dep. Its Gopkg.toml file is a good starting point. It enables pruning (not enabled in dep by default) and locks certain projects onto versions that are compatible with the Kubernetes version that is used. When dep doesn’t pick a compatible version, checking Kubernetes’ Godeps.json helps to determine which revision might be the right one.

Compiling and running the test suite

go test ./test/e2e -args -help is the fastest way to check that the test suite compiles.

Once it does compile and a cluster has been set up, the command go test -timeout=0 -v ./test/e2e -ginkgo.v runs all tests. In order to run tests in parallel, use the ginkgo -p ./test/e2e command instead.

 
