Travis CI for end-to-end testing

I have been modifying .travis.yml (in the travis-e2e branch) to run dgraphassigner and dgraphloader on the Freebase data. I have got things into a working state, but I have a few doubts.

The idea here is to (see the .travis.yml sketch after this list):

  1. Load data from Freebase into Dgraph.
  2. Run the Dgraph server with -memprofile and -cpuprofile flags.
  3. Run the throughput test from the benchmark repository for a minute.
  4. Maybe upload the memory and CPU profiles to S3.
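Roughly, the .travis.yml would look like this. The binary names and the -memprofile/-cpuprofile flags come from the steps above; the loader flags, data file names, the throughputtest invocation and the S3 bucket are placeholders, not the actual config:

```yaml
language: go
go:
  - 1.7

before_script:
  # 1. Load the Freebase data into Dgraph (loader flags and file names are illustrative).
  - ./dgraphassigner --rdfgzips data/rdf-films.gz
  - ./dgraphloader --rdfgzips data/rdf-films.gz
  # 2. Start the server with profiling enabled and give it a moment to come up.
  - ./dgraph -memprofile mem.prof -cpuprofile cpu.prof &
  - sleep 5

script:
  # 3. Run the throughput test from the benchmark repository for a minute.
  - ./throughputtest.sh

after_success:
  # 4. Upload the memory and cpu profiles to S3 (bucket name is a placeholder).
  - aws s3 cp cpu.prof s3://our-profiles-bucket/$TRAVIS_BUILD_NUMBER/
  - aws s3 cp mem.prof s3://our-profiles-bucket/$TRAVIS_BUILD_NUMBER/
```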

Travis has a 50-minute limit on each job, and if we try to load the rdf-films and names data, we exceed that limit on their 2-core, 8 GB RAM machines. See Travis CI - Test and Deploy Your Code with Confidence.

So can we isolate a smaller dataset? If we can, then maybe we can run the full test even on feature branches, that is, if we can bring it under 10-15 minutes. Otherwise, maybe we run it only on master/release branches. Any advice on this, @minions?
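If we do restrict the full run, Travis can whitelist branches for the job, something like this (the release-branch pattern is just an example):

```yaml
# Build only on master and release branches; feature branches skip this job.
branches:
  only:
    - master
    - /^release.*/
```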


Sounds good. I have isolated the data required by the queries in the throughputtest script and shared it on Slack. It comes to 85 MB, compared to rdf-films and names, which are 180 MB combined. So this should roughly halve the running time, bringing it to around 30 minutes for the assigner and loader combined (hopefully).


@jchiu mentioned an interesting point about end-to-end testing in our weekly meeting on Friday. He said that an end-to-end test should be fast and just check that all the parts are connected, which sounds good. Our present end-to-end test, which loads up data, is overkill and takes more than 20 minutes.

We should probably just do some mutations and queries. Actually, we could just run the mutation mentioned on our Wiki and check the response of the corresponding query. That would keep our E2E test within 5 minutes and still make sure the parts are connected.
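A minimal sketch of that quick check, assuming the server is up on its default HTTP port and takes mutations and queries on a /query endpoint; the <alice> triple and the query below are made up for illustration, the real pair would come from the Wiki:

```yaml
script:
  # Start the server and give it a few seconds to come up.
  - ./dgraph &
  - sleep 5
  # Fire one mutation over HTTP, then query it back; grep fails the build if the value is missing.
  - curl -s -X POST localhost:8080/query -d 'mutation { set { <alice> <name> "Alice" . } }'
  - curl -s -X POST localhost:8080/query -d '{ me(_xid_:alice) { name } }' | grep -q Alice
```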

The assigner and the loader can run when code is pushed to a release branch. Thoughts, @minions?

To run the mutations, you'd still have to load the data, because we could be making breaking changes to the way data is stored. I noticed that the frequency at which we push to master is pretty low, given that we squash our commits. So, IMO, we could still run the full 20-minute test on master.

We could, though, have the 5-minute test run on every commit on every feature branch. That way, we still have something that tests whether the feature/fix under development would break our system.
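One way to wire that up is to branch on Travis's built-in $TRAVIS_BRANCH and $TRAVIS_PULL_REQUEST variables in the script section (the script names below are placeholders):

```yaml
script:
  # Quick ~5 minute connectivity test on every branch and every pull request.
  - ./contrib/quick-e2e-test.sh
  # Full loader + throughput test only for direct pushes to master.
  - if [ "$TRAVIS_BRANCH" = "master" ] && [ "$TRAVIS_PULL_REQUEST" = "false" ]; then
      ./contrib/full-e2e-test.sh;
    fi
```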

Also, regarding the full 20-minute test: once we start uploading the CPU and memory profiles and analyzing them, things will become more interesting. We'd be able to detect which PR caused a performance regression and fix it before it goes into a release. Note that if we only have this information for a release branch, it becomes much harder to pinpoint the commit in master that caused the regression.
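For the upload part, Travis's artifacts addon can push files to S3 without us scripting it; the bucket and the ARTIFACTS_KEY/ARTIFACTS_SECRET credentials would sit in the repository's Travis settings. Roughly (the region, paths and target path below are placeholders):

```yaml
addons:
  artifacts:
    # Bucket and credentials come from ARTIFACTS_BUCKET / ARTIFACTS_KEY / ARTIFACTS_SECRET
    # set in the Travis repository settings, so no secrets live in the repo.
    s3_region: us-east-1
    paths:
      - cpu.prof
      - mem.prof
    target_paths:
      - profiles/$TRAVIS_BRANCH/$TRAVIS_BUILD_NUMBER
```

Once the profiles are in S3, running go tool pprof -top on the dgraph binary with the downloaded cpu.prof from two different builds should be enough to spot which change caused a regression.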


Makes sense. I guess we can just have a basic query and mutation test for feature branches. The next step would be to upload the CPU and memory profiles for the full test. By the way, the full test actually runs for more than 30 minutes; sometimes Travis kills the process (even though stw_ram_limit is set to 3000), and that's when it finishes in 20 minutes. I have to figure out what causes that too. Still, I suppose we could do with an even smaller dataset.

