Quick Performance Metrics

Background

When a test automation script fails, there can be many causes for the failure. From a very broad perspective, either there is a “real” bug in the application or the test automation script itself is buggy. Let’s assume for a moment that the test automation script was written correctly. Can we now say that there must be a bug in the application code if the script fails?

The answer is, of course, that you cannot come to a conclusion that quickly. We have to debug through the stack trace, right?

One of the common causes of failure (even when there is no faulty test automation code) is the performance of the application itself. Yes, we may have baked in explicit waits (or gone fancier with fluent waits), and in some circumstances we may even handle AJAX very well. Even then, if the application doesn’t respond within those timeouts, the script fails. Worse, if the application’s performance / response time is erratic, the script sometimes passes and sometimes fails (yes, “flaky tests” is another term for this).
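To make that concrete, here is a minimal sketch of such an explicit wait using the watir-webdriver API (the page element and the 30-second timeout are hypothetical, purely for illustration):

    require 'watir-webdriver'

    browser = Watir::Browser.new :chrome
    browser.goto 'http://www.seleniumframework.com'

    # Give the (hypothetical) link up to 30 seconds to appear. If the
    # application responds slower than this timeout, the script fails
    # even though the automation code itself is correct.
    Watir::Wait.until(30) { browser.link(text: 'Practice').present? }
    browser.link(text: 'Practice').click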

In this section, we are going to see how to capture some non-functional metrics for the application that might help us get closer to fixing performance issues. If you are a seasoned performance tester, you know how difficult performance testing is: first, there might not be baseline metrics; second, we might not have the expected performance targets; and even if both exist, how do we hook monitors into all layers of the application, and better still, how do we correlate the data and suggest performance tuning? That is a much, much longer discussion. For this section, we will show you an extremely simple way to measure some performance metrics of the application while a test automation script using Selenium executes in the browser.

Navigation Timing – W3C Spec

This specification defines an interface for web applications to access timing information related to navigation and elements. A web browser that implements these standards follows the spec.

The spec further breaks down into the “PerformanceTiming” and “PerformanceNavigation” interfaces and the “window.performance” attribute: the first two define the contracts, and the third is the attribute through which a page accesses them. There is a lot of information to digest if we go into the details, and I will leave it to your discretion whether to delve deeper; the full content is in the W3C Navigation Timing specification (https://www.w3.org/TR/navigation-timing/).

For now, I would like to focus on the diagram below, which shows the processing model: how the browser processes information coming in from a server (the URL that we hit in the address bar), what events are fired, and, more importantly, in what sequence.

There are 23 steps in this diagram, and the explanation of each is in the link provided above. The intent of showing this diagram is to point out that many events get fired at various points before, for example, the browser declares the page load done.

[Figure: Navigation Timing processing model]
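To make the event sequence concrete, here is a small sketch that reads a few of those timestamps from the browser through execute_script and computes the intervals between them. The attribute names come from the PerformanceTiming interface; @browser is assumed to be a Watir/Selenium browser instance that has finished loading a page.

    # Fetch selected Navigation Timing attributes (epoch milliseconds)
    # as a plain object so the driver can serialize it back to Ruby.
    timing = @browser.execute_script(<<-SCRIPT)
      var t = window.performance.timing;
      return {
        navigationStart: t.navigationStart,
        requestStart:    t.requestStart,
        responseStart:   t.responseStart,
        domLoading:      t.domLoading,
        domComplete:     t.domComplete,
        loadEventEnd:    t.loadEventEnd
      };
    SCRIPT

    # Intervals are simple subtractions between event timestamps
    puts "overall load:   #{timing['loadEventEnd'] - timing['navigationStart']} ms"
    puts "server wait:    #{timing['responseStart'] - timing['requestStart']} ms"
    puts "dom processing: #{timing['domComplete'] - timing['domLoading']} ms"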

Manual Way

Almost every browser these days ships with “developer tools”: plugins/extensions built into the browser to monitor the browser and the DOM, debug JavaScript, inspect CSS, and so on. If we look at Chrome Developer Tools when we launch the application, we can see the timeline of the requests that go between the client (browser) and the server (the URL).

In Chrome, when you click the “3 lines” menu at the top right and then “More tools – Developer tools”, an interface launches at the bottom of the Chrome browser (sometimes it docks to the side instead).

 

  1. If you observe carefully, there is a “Network” tab. Click it.
  2. Now reload the URL (www.seleniumframework.com) by hitting F5.
  3. As shown below, you can see the calls that are being made.
  4. For example, in the image below, the first GET call took about 822 ms and 15.8 KB was downloaded as part of that call.
  5. You can also notice that calls are NOT necessarily made sequentially (look at the Timeline column).

There are lots of columns, but I highlighted the one that can give you some insight into response time. As you can see, the “Timeline” column shows the time (in milliseconds or seconds) that each call took. This is a good measure to start with when evaluating the website/server response time.

[Screenshot: Chrome Developer Tools – Network tab]

So I would suggest starting with this kind of analysis when you notice issues. Narrow down to the call (row) that is taking long to respond; that might help narrow down the problem. Not always, but it is at least a place to start.
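If you want the same per-request view programmatically, the Resource Timing API (a sibling spec of Navigation Timing, supported in most modern browsers) exposes one entry per network call. A hedged sketch, again assuming a Watir/Selenium @browser instance:

    # One entry per network call: name is the URL, duration is the
    # total time for that call in milliseconds -- roughly what the
    # Timeline column in the Network tab visualizes.
    entries = @browser.execute_script(
      "return window.performance.getEntriesByType('resource')
              .map(function(e) { return { name: e.name, duration: e.duration }; });"
    )

    entries.each do |e|
      puts format('%8.1f ms  %s', e['duration'], e['name'])
    end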

Now let’s see: when we execute a script that runs through many pages, how can we collect these metrics, and how do we aggregate them? Luckily, there is a way to do this.

Automated Way

The metrics defined by the Navigation Timing interface (described at the top of the page) can be captured in an automated way. Read on.

  • Add the ‘watir-webdriver-performance’ gem to the Gemfile
  • Require the module in env.rb
  • Output the metrics after execution of the script

Add gem

Add the line gem ‘watir-webdriver-performance’ to your project’s Gemfile, then run bundle install or bundle update to install the gem in your environment.

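The relevant line in the Gemfile (your Gemfile will contain other gems as well):

    # Gemfile
    gem 'watir-webdriver-performance'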

Require module

Add require ‘watir-webdriver-performance’ to the env.rb file of your project.

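In env.rb (features/support/env.rb in a standard Cucumber layout):

    # features/support/env.rb
    require 'watir-webdriver'
    require 'watir-webdriver-performance'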

Collect results

Since we need to output the performance metrics somewhere, I put the call in the After scenario block in hooks.rb. Note that @browser is an instance variable here, so the metrics are collected per @browser instance, and we will get the aggregated metrics only for the last scenario executed (I am assuming we execute one scenario at a time). If you would like to collect metrics across all scenarios, you need to figure out a way to share one browser instance across scenarios; one way is to declare the browser as a global variable, i.e. $browser.

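A minimal sketch of the hooks.rb block, assuming @browser is created in a Before hook elsewhere in the project:

    # features/support/hooks.rb
    After do |scenario|
      # Print the aggregated Navigation Timing metrics that the
      # watir-webdriver-performance gem collected for this browser
      puts @browser.performance
    end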

Execution Output

Now execute a scenario using RubyMine or the command line. I executed a scenario and got output along the lines shown below.
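The gem returns a Ruby hash; an abridged, purely illustrative example of what gets printed (only the two metrics discussed below are shown, and your values will differ):

    {
      :summary => {
        :response_time  => 941,  # milliseconds
        :dom_processing => 69,   # milliseconds
        # ... more summary metrics
      }
      # ... more top-level keys
    }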

JSON output

If you wish to have JSON output, replace the puts statement with puts @browser.performance.to_json (depending on the Ruby version you have, to_json may already be available). If not, install the ‘json’ gem, require ‘json’ in env.rb, and use puts JSON.generate(@browser.performance).
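A sketch of the JSON variant of the same After block:

    # features/support/hooks.rb -- JSON variant
    require 'json'

    After do |scenario|
      # to_json is mixed into Hash once the json library is loaded
      puts @browser.performance.to_json
    end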

Metrics to notice

Some important metrics to watch for are:

@browser.performance[:summary][:response_time] 

In the output above, this metric was 941, i.e. 941 milliseconds.

@browser.performance[:summary][:dom_processing]

DOM processing took 69 milliseconds in the output I got above.

Similarly, there are more metrics for your perusal in the hash that gets output.
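To scan them all in one go, you can iterate over the summary hash (a small sketch; I am assuming all summary values are reported in milliseconds, as the two above are):

    # Print every aggregated metric in the summary
    @browser.performance[:summary].each do |metric, value|
      puts "#{metric}: #{value} ms"
    end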