Archive Test Results

Context:

I am not a big fan of archiving test results, because eventually they rot and nobody really has the time to go back and look at them; there is not much value in it. Also, with Continuous Test Automation, every execution might produce an archive, which can be overkill. And the pace at which technology changes does not really justify the old, archaic practice of documenting every little bit of evidence that you validated a certain functionality. With Agile, trust among team members is of utmost importance; if something was not tested, it will surface anyway, and it is not hard to find who the slacker is, if I may say so.

With that context, there might still be cases where we want to archive certain files, results, or reports, so we are not ruling out the concept altogether.

Use Cases for Archiving:

Some of the use cases for archiving are as follows:

  • Audit compliance that requires saving screenshots or results
  • Regulatory compliance under acts like SOX (Sarbanes-Oxley)
  • Financial and healthcare firms have to record a lot of artifacts and configuration items, and yes, test results [within a certain scope] are part of those configuration items.
  • Maybe I am testing on production, where I don’t have the luxury of tearing down environments as I do in non-prod, and I have limited time to execute but still want to analyse the results in detail

So, as we see, there are use cases where we might have to archive test execution results. I will leave that decision to your discretion. This page shows how we can archive results using the behave/Python framework we have been discussing so far.

Agenda:

  • The shutil module
  • The after_all global hook
  • The ARCHIVE environment variable
  • Understanding the intricacies of the code

1) shutil module:

shutil is a standard-library Python module that makes it simple to archive a directory.

If we need to do something more, like excluding specific files, renaming them, or applying other transformations before archiving, use the ‘zipfile’ Python module instead.
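As a quick illustration, shutil.make_archive zips a whole directory in one call (the “results” directory name here is just an assumption for the example):

    import shutil

    # Zip the entire "results" directory into results_archive.zip.
    # make_archive(base_name, format, root_dir) returns the path of
    # the archive it created; the ".zip" extension is added for us.
    archive_path = shutil.make_archive("results_archive", "zip", "results")
    print(archive_path)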


2) after_all:

We will discuss the concept of behave hooks on a separate page; at this point all we need to understand is that behave provides scenario hooks and global hooks. Hooks are a way to execute a certain block of code before/after an event [we can also define custom hooks that fire when an event occurs].
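For orientation, behave discovers these hook functions by name in the environment.py file; a bare skeleton looks like this:

    # environment.py -- behave picks these functions up by name

    def before_scenario(context, scenario):
        # scenario hook: runs before every scenario
        pass

    def after_scenario(context, scenario):
        # scenario hook: runs after every scenario
        pass

    def before_all(context):
        # global hook: runs once before the entire run
        pass

    def after_all(context):
        # global hook: runs once after the entire run
        pass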

The after_all hook is a global hook that executes as part of the global teardown. For our purposes on this page, it lets us run a certain piece of code (archiving results) after all behave scenarios have executed and after the results file has been generated.

Let’s place this code in the environment.py file as follows:
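A minimal sketch of that code, assuming the execution results are written to a “results” directory (the source directory and the timestamped archive name are assumptions for this example), could look like this:

    # environment.py
    import os
    import shutil
    from datetime import datetime

    def after_all(context):
        # Only archive when the run was started with ARCHIVE=yes
        if os.getenv("ARCHIVE", "").lower() == "yes":
            archive_dir = "failed_scenarios_screenshots"
            # Create the target directory if it doesn't already exist
            os.makedirs(archive_dir, exist_ok=True)
            # Timestamp the archive name so repeated runs don't overwrite
            # each other (a design choice, not required)
            stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            base_name = os.path.join(archive_dir, "results_" + stamp)
            # Zip the "results" directory (assumed location of the results)
            shutil.make_archive(base_name, "zip", "results")

At run time the variable is set on the command line, e.g. ARCHIVE=yes behave.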

Explanation:

  • We read the ARCHIVE environment variable; if it is set to ‘yes’, we enter the block. [The value is set at run time; scroll down to see how.]
  • We create the “failed_scenarios_screenshots” directory if it doesn’t already exist.
  • Then we ask shutil to archive the files for us.


Output:

Now if you run any behave scenario in your project with ARCHIVE=yes set, a failed_scenarios_screenshots folder will be created and your execution results will be zipped and placed inside it. A screenshot after executing a sample scenario looks like this:

[Screenshot: archive_failed_screenshots]