Thursday, April 23, 2015

Performance environment scaling and results extrapolation

In this article we shall discuss the points to consider when we receive a new or existing environment for performance testing.

Rule 1 - The configurations of the production and test environments should match proportionally
e.g. - if production has 4 web, 2 app and 2 DB servers,
the test environment might be 2 web, 1 app and 1 DB, but not 1 web, 2 app, 1 DB, etc.

Rule 2 - JVM heap memory should match
e.g. - if production has 2 JVMs with a 4 GB heap each,
the test environment may have 1 JVM with a 4 GB heap, but not 1 JVM with a 3 GB heap.

Rule 3 - The log settings of test and prod should match

Rule 4 - Check the connection limits of all servers (the limits should be set with respect to the performance tests we run on that environment)

Depending on the answers to the above and some more information specific to that environment, we do the scaling.

Results extrapolation:-
Rule 2 is the most important when extrapolating: the memory available on the server is not what matters, the heap memory allocated to the JVM is, and that is what we compare with production - not the total memory of the server.
If you have only 1 JVM and have to extrapolate to a production environment with 4 JVMs, that is not practical and the results may not be accurate.
For accuracy we need more than 1 JVM, so that we can test with 1 JVM and then with 2 JVMs, map the results to see the pattern, and through curve fitting derive a formula that can be used to extrapolate.
Note:- if you need more accuracy, you also need more samples.

Apart from Rule 2, we also consider all the other rules mentioned above when extrapolating the results.
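As a sketch of that curve-fitting step (Node.js, with invented throughput numbers - a straight line is only a first approximation, and more samples justify a better curve), a least-squares fit over per-JVM measurements gives a formula we can extrapolate to the production JVM count:

```javascript
// Hypothetical measurements: throughput (req/s) observed with 1, 2 and 3 JVMs.
const samples = [[1, 210], [2, 400], [3, 575]];

// Ordinary least-squares fit: throughput = a * jvms + b
function fitLine(points) {
  const n = points.length;
  const sx = points.reduce((s, [x]) => s + x, 0);
  const sy = points.reduce((s, [, y]) => s + y, 0);
  const sxy = points.reduce((s, [x, y]) => s + x * y, 0);
  const sxx = points.reduce((s, [x]) => s + x * x, 0);
  const a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  const b = (sy - a * sx) / n;
  return { a, b };
}

const { a, b } = fitLine(samples);
// Extrapolate to the production topology (4 JVMs in the example above).
const predicted = a * 4 + b;
console.log(predicted);
```

With real data you would plot the residuals first; if the pattern is clearly non-linear, fit a different curve before trusting the extrapolated number.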

Friday, February 6, 2015

Hybrid performance testing using Noise Generation in LoadRunner 12.02

In a real-world scenario, 20-30% of users are idle (the figure may be higher or lower for some apps), and we include them when taking the volumetrics. Those idle users mostly open the website and then remain idle, or close the session without doing anything - we call that load NOISE.

With LoadRunner 12.0, HP has provided us the option to generate NOISE alongside normal testing. We shall see the purpose of that noise and how we can build a hybrid test scenario using the Noise Generation option in LoadRunner. With that, we have two approaches to web performance testing...
  1. Create relatively complex tests with transactions, complicated flows, parameterization, checkpoints etc. Such tests, of course, require a complex script and can produce large amounts of interesting data.
  2. Create a heavy load on the server by having the same URL accessed simultaneously by a large number of real or virtual users utilizing the power of load testing. The test typically consists of very few lines of code, and won’t be able to produce any sophisticated data. It can only determine the response times and whether the server has crashed or not. We call this “noise generation”, not out of disrespect, but to illustrate that such a test would create a massive and not quite coherent load on the server application.
Traditionally, HP LoadRunner has provided the means for the first approach, so that users can test complex business flows and perform a detailed analysis of the resulting data. However, in some cases it's enough to execute such flows for a relatively small fraction of vusers, while the rest just generate load on the server to simulate stress conditions for the more "sophisticated" vusers.

HP introduced the “Noise Generation” feature in LoadRunner for precisely this reason. It lets you implement a hybrid testing model:  run large amounts of reasonably-priced vusers, which will generate a ‘primitive’ load by accessing a single URL, and alongside them, a smaller number of ‘proper’ vusers, which will gather the ‘interesting’ data.

Here are the main characteristics of the ‘noise’ vusers:
  1. They’re 10 times cheaper than the regular ones! If you purchase a license for, let’s say, 100 Web vusers, you’ll be able to run 100 regular vusers (as before), or 50 regular and 500 noise ones (which you’re getting for the price of 50 regular), or 10 regular and 900 noise – any combination, as long as the number of full-scale vusers plus the number of noise vusers divided by 10 doesn’t exceed the number of purchased licenses. (Note:- With the trial version, HP gives a 50-vuser license, which means a 500 noise-user load by default)
  2. No VuGen scripting is needed in order to build a noise vuser script. Just input the number of vusers and the URL you’d like to put under stress, and LoadRunner will take care of everything else behind the scenes.
  3. A noise vuser works by simply sending an HTTP GET request to the specified URL, once per test iteration.
  4. Although noise vusers can run alongside a script based on any protocol, the main goal of this feature is to support Web-based protocols, such as Web HTTP, Mobile HTTP, Flex, SAP Web etc. Nevertheless, if your application under test has a Web interface of any sort, you can apply a load to it using the Noise Generation feature, even if the main testing is being done using a different technology.
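The licensing arithmetic from point 1 can be sketched as a quick check (the helper name is mine; this is not a LoadRunner API):

```javascript
// A mix of vusers fits a Web vuser license if:
//   regular + noise / 10 <= licensed
function fitsLicense(regular, noise, licensed) {
  return regular + noise / 10 <= licensed;
}

console.log(fitsLicense(100, 0, 100));   // all-regular, as before
console.log(fitsLicense(50, 500, 100));  // 50 + 50 = 100, fits
console.log(fitsLicense(10, 900, 100));  // 10 + 90 = 100, fits
console.log(fitsLicense(60, 500, 100));  // 60 + 50 = 110, does not fit
```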
 So, how does this feature work? Very simply! As mentioned above, no recording or VuGen scripting is needed. When you’re defining your testing scenario in the Controller, just press the familiar “Add group” button, and you’ll see the new “Noise Generator” option:
Noise Generator option

Select the ‘Use a Noise Generator’ checkbox, and type the URL that will be the target of the noise generation. Select the Load Generator machine as usual (unless you’re working in percentage mode), the number of vusers (remember that you can use 10 of these vusers for the same price as a regular vuser) and the group name (you don’t have to provide the latter; the Controller will auto-generate it based on the URL’s domain):
Use a noise generator

Groups.png

Note that the “noise_” prefix is automatically added to the noise group names, in order to distinguish them.

The ‘Details’ dialog will reflect, in the script type, the fact that it’s a noise group we’re talking about, and the ‘View Script’ button will be disabled:
Group Information.png

Since the “noise” script is rather primitive, there’s not much you can do in the way of configuring it. However, one thing is particularly important. If you want to get meaningful data from the script execution, make sure that either the “action as a transaction” or the “step as a transaction” option (which is the same thing in this case, since the script’s main action contains exactly one step) is turned on in the Run-time settings. That way you’ll be able to obtain accurate measurements at the end of the script run:
Run-time Settings

And that’s it! You’re ready to run your hybrid Web+noise test, and the ‘noise’ groups will run just like any other. 

How to calculate Virtual User "footprint"

Originally authored by Mark Tomlinson on HP.com; I felt it's very useful for everyone.

One of the most common questions we get about LoadRunner Virtual Users relates to the resources required to execute the scripts on the Load Generator.  One advantage of the maturity of LoadRunner is that we have supported so many different drivers, protocols and environments over the past two decades.  We've learned so much about how to give a more detailed response and really advise users on how much Load Generator capacity will be required to be successful.  You might imagine that the answer isn't black & white, or even close to a one-sentence answer.  Here are some simple ideas that can help you determine how to configure your Load Generators.

For Memory: each protocol has different parts that affect how much memory is required, so there is no single answer across all virtual users - Web is different from RDP, which is different from SAP Click & Script, which is different from RMI-Java.  Some vuser types have external drivers (like ICA, RDP or SAP), so the guidelines don't include the footprint for the external executable driver. The Click & Script vuser types can really confuse you, because they seem like new versions of old protocols... but that's not actually true - the C&S protocols are a completely new architecture. More than anything, every vuser's memory footprint is GREATLY impacted by the following factors:
  • the size of the driver library (this is fairly static)
  • the # of lines of code included in the recording (varies greatly by customer)
  • the # and size of parameters included in the recording (varies greatly by customer and script type)

For CPU: of course, each driver has slight differences in CPU overhead, but for the most part they are all very efficient (and - yes, we will continue to improve Click & Script to be better!!). The amount of CPU used on a LoadRunner load generator will vary by the following factors:
  • iteration and pacing, which controls how “active” the vuser is (varies greatly by customer)
  • stress testing scenarios usually use more CPU, as opposed to real-world testing which has slower vusers (but more of them)
  • customized code or extra script processing (like string manipulation, or calculations) will chew up more CPU

For Disk: the main thing here is logging - the more you increase the detail and amount of logging, the more disk will be consumed. External parameter files (written to or read from individual vuser threads) will really hammer the local disk. Some vusers with external driver executables will have additional logging of their own, or caching of content.

For Network: the result of multiple virtual users running on a single load generator is a concentration of all those vusers' network traffic on a single NIC. The payload of network API calls varies greatly for each different application. Stress testing (e.g. fast iteration pacing, no think times) could easily result in over-utilization of NIC bandwidth.

When it comes to calculating your virtual user footprint, it's actually quite easy.  But first, let me tell you that not everyone needs to do extensive calculations of virtual user resource utilization.  This is important *only* when you have a very high number of virtual users or a very limited number of Load Generators.  The basic approach is to run a preliminary test with just 1 script, while you measure the resource utilization on the Load Generator directly.  You are specifically interested in the mmdrv.exe process on the Load Generator, which is LoadRunner's primary multi-threaded driver.  Measuring the private bytes reserved by this process for 1, 2, 3, 4, 5 and then 10 virtual users will give you some clue as to how much memory is required by each additional virtual user.  Simultaneously you should monitor CPU, Network and Disk just to determine if there are any excessive utilizations.

mmdrv.jpg

It is important to note that you should be gathering information about the performance of your script running on the Load Generator - using the same run-time settings that you will use during the full scenario run.  If you are stress testing with very little think time or delay, then you'll want to use those same settings.
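The measurement procedure above boils down to simple arithmetic. Here is a sketch (Node.js, with invented mmdrv.exe private-bytes readings) that estimates the per-vuser memory increment and the fixed driver overhead:

```javascript
// Hypothetical private-bytes readings (MB) for mmdrv.exe at each vuser count.
const readings = [[1, 62], [2, 68], [3, 74], [4, 80], [5, 86], [10, 116]];

// Average memory added per extra vuser across the sampled range.
const [firstN, firstMB] = readings[0];
const [lastN, lastMB] = readings[readings.length - 1];
const perVuserMB = (lastMB - firstMB) / (lastN - firstN);

// Baseline (driver + libraries): what the process costs before extra vusers.
const baselineMB = firstMB - perVuserMB * firstN;

console.log(perVuserMB); // MB per additional vuser
console.log(baselineMB); // fixed mmdrv.exe overhead
// Projected memory for, say, 100 vusers on one generator:
console.log(baselineMB + perVuserMB * 100);
```

If the readings are not roughly linear, trust the measured points over any projection and re-test at higher counts.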

Source:- http://h30499.www3.hp.com/t5/HP-LoadRunner-and-Performance/How-To-Understand-and-calculate-Virtual-User-quot-footprint-quot/ba-p/2407591#.VNPsedLF_ng

How to correlate a value with dynamic boundaries in VuGen

If you are reading this post, it seems the left and right boundaries are eating your time. Let's check how to deal with those dynamic boundaries using the text flags of VuGen.


You can use one of the following flags after the text, preceded by a forward slash:
/IC to ignore the case.
/DIG to interpret the pound sign (#) as a wildcard for a single digit.
/ALNUM to interpret the caret sign (^) as a wildcard for a single US-ASCII alphanumeric character. There are three syntaxes: ALNUMIC to ignore case, ALNUMLC to match only lower case, and ALNUMUC to match only upper case.

e.g.: email\u003d\"notification+mkuw__3d@facebookmail.com
In the above server response, if you have to capture the highlighted part (notification+mkuw), the left and right boundaries contain some alphanumeric characters; if those characters are dynamic, you have to follow the approach below.

web_reg_save_param("account_notify",
        "LB/DIG=email\\u00#d\\\"",
        "RB/DIG=__#d@facebookmail.com",
        "Ord=1",
        "Search=Body",
        LAST);

and for instance, let's say that "u003d" in the left boundary is itself dynamic; then our approach is as follows.

web_reg_save_param("account_notify",
        "LB/ALNUMIC=email\\^^^^^\\\"",
        "RB/DIG=__#d@facebookmail.com",
        "Ord=1",
        "Search=Body",
        LAST);

ALNUMIC means the boundary characters are alphanumeric, and IC tells it to ignore the case (upper or lower).
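To illustrate what the /DIG boundaries above actually capture, here is a small Node.js sketch (not VuGen code) that emulates '#' as a single-digit wildcard with a regular expression:

```javascript
// Emulate LB/DIG and RB/DIG: '#' in a boundary matches exactly one digit.
function captureWithDigBoundaries(body, lb, rb) {
  // Escape regex metacharacters first, then turn '#' into \d.
  const esc = s => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&').replace(/#/g, '\\d');
  const m = body.match(new RegExp(esc(lb) + '(.*?)' + esc(rb)));
  return m ? m[1] : null;
}

// Sample server response from the post above.
const body = 'email\\u003d\\"notification+mkuw__3d@facebookmail.com';
const value = captureWithDigBoundaries(body, 'email\\u00#d\\"', '__#d@facebookmail.com');
console.log(value);
```

This is only a mental model of the flag; VuGen does the real matching internally.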

Friday, January 23, 2015

How to calculate doc download time in LoadRunner

Let's say we have a document in one of the pages we record in VuGen, and we have to capture the time it takes to download only that file. Use the steps below after the step which downloads the file.
Note that RecContentType=application/pdf for the step.

web_url("BCAction.do",
        "URL={url}//BCAction.do?choice=display&ctno=1234567890",
        "Resource=1",
        "RecContentType=application/pdf",
        "Referer=",
        "Snapshot=t24.inf",
        LAST);

 int time;

 time = web_get_int_property(HTTP_INFO_DOWNLOAD_TIME);
 lr_output_message("The time in ms to download the PDF is: %d", time);


Some more information about web_get_int_property
The web_get_int_property function returns specific information about the previous HTTP request.
The meaning of the return value depends on the HttpInfoType argument. HttpInfoType can be any of the following options. The first constant in each pair (HTTP_*) is for C; the second (object.HTTP_*) is for object-oriented languages (e.g. JavaScript).

HTTP_INFO_RETURN_CODE or object.HTTP_INFO_RETURN_CODE
The return code in HTTP response header.

HTTP_INFO_DOWNLOAD_SIZE or object.HTTP_INFO_DOWNLOAD_SIZE
The size (in bytes) of the last download, including the header, body, and communications overhead (for example, NTLM negotiation).

HTTP_INFO_DOWNLOAD_TIME or object.HTTP_INFO_DOWNLOAD_TIME
The time (in milliseconds) of the last download.

HTTP_INFO_TOTAL_REQUEST_STAT or object.HTTP_INFO_TOTAL_REQUEST_STAT
Returns the accumulated size of all headers and bodies since the first time web_get_int_property was issued with HTTP_INFO_TOTAL_REQUEST_STAT.

HTTP_INFO_TOTAL_RESPONSE_STAT or object.HTTP_INFO_TOTAL_RESPONSE_STAT
Returns the accumulated size, including header and body, of all responses since the first time web_get_int_property was issued with HTTP_INFO_TOTAL_RESPONSE_STAT.

This function is supported for all Web scripts, and for WAP scripts running in HTTP mode only. It is not supported for WAP scripts running in Wireless Session Protocol (WSP) replay mode.
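For example, HTTP_INFO_DOWNLOAD_SIZE and HTTP_INFO_DOWNLOAD_TIME can be combined into an effective download rate. A sketch of the arithmetic (the values are invented, and this runs outside LoadRunner):

```javascript
// Hypothetical values as returned by web_get_int_property:
const downloadSizeBytes = 524288; // HTTP_INFO_DOWNLOAD_SIZE
const downloadTimeMs = 2000;      // HTTP_INFO_DOWNLOAD_TIME

// Effective rate in KB per second.
const kbPerSec = (downloadSizeBytes / 1024) / (downloadTimeMs / 1000);
console.log(kbPerSec);
```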

Friday, October 17, 2014

Using continue on error in VuGen

In the run-time settings we see the option "Continue on Error", which is used when you want to continue executing the script even after encountering a failure/error.

In some cases you don't want to enable Continue on Error globally in the run-time settings, but do want it for a specific part of the script; the steps below can help you do that.

In the example below, you want to capture a dynamic value, and that dynamic value can appear intermittently - it comes in one iteration but not the next. So I have used "Notfound=Warning", and because the script would fail if the value is not found, I have placed lr_continue_on_error calls before and after that step.

// Set Continue On Error
    lr_continue_on_error(1);

// Save the error message if any
    web_reg_save_param("error_message",
        "LB=Message0 >> ",
        "RB=\r\n",              
        "Search=Body",
        "IgnoreRedirections=Yes",
        "Notfound=Warning",
        LAST);

    web_url("name",
               "..........",
               LAST);

// Turn off Continue On Error
    lr_continue_on_error(0);

Saturday, August 9, 2014

How to use JavaScript in your HP LoadRunner scripts

This post was written by Boris Kozorovitzky, from the HP LoadRunner R&D team.

JavaScript is rapidly becoming the driving force behind the Internet. Any website, small or large, has some parts written with JavaScript to enhance the user experience. Browser vendors constantly improve the speed in which JavaScript is parsed and executed in their browsers. This increase in speed is now to the point that it’s become one of the key benchmarks for every browser version. When we performance test a website, a web application, or even a REST service we can be sure that JavaScript plays some role in the business process we are about to test.

HP LoadRunner uses C as the native language for scripting HTTP/HTML scripts, but it allows integration and use of JavaScript code with relative ease. To learn more about this integration, consult the Using the LoadRunner JavaScript Engine chapter in the HP LoadRunner User Guide. 


Why would we want to use JavaScript in our script?
There are four main reasons:
  • JavaScript often offers a more intuitive coding experience, and many tasks are easier to implement in it than in C.
  • The JavaScript regular expression library simplifies the challenge of working with regular expressions.
  • Numerous JavaScript libraries that assist with any coding task (e.g. string manipulation) are available.
  • Client-side logic is often implemented in JavaScript. Inserting snippets of the original JavaScript code means you don’t have to translate the JavaScript client logic into C code.
This article explores the following scenario: We are testing a web server application which is used for authentication. Our goal is to receive an authentication token from the server and use the token in all future communication with the server. The authentication process is as follows:
  1. Send an HTTP GET request to the “/challenge” REST API to get a challenge string.
  2. Perform some calculation on that string and generate a unique password using a proprietary encryption algorithm.
  3. Send a GET request to the “/token” REST API passing the password generated in step 2 as a query parameter.
(If you are unfamiliar with some of the terms in this process, you should read my previous post about load testing a REST API using HP LoadRunner's web_custom_request).

For this scenario we run a simple web server which serves static values (using Node.js). The server code is available as an attachment to this article (web_js_example_server.zip).

Getting the challenge string
Our first goal is to obtain the challenge string from the server. We type the following step into our script to make the required REST API call:

    web_custom_request("challenge",
        "URL=http://localhost:3000/challenge",
        "Method=GET",
        "RecContentType=application/json",
          LAST);

We want to capture the server response in a parameter. To this end, we add a web_reg_save_param step before our request and save the response body into a parameter named “challenge”:

    web_reg_save_param("challenge","LB=","RB=","Search=Body",LAST);

Calculating the password                                                                                               
Now that we have the challenge string we can use some JavaScript manipulations to generate the password.
First we need to enable JavaScript for our script. To do this, open the Run-Time Settings (F4) and go to Internet Protocol > Preferences > Set advanced options.  Click on Options…, and in the dialog that opens (scroll all the way down), set Enable running JavaScript code to Yes (as in the screenshot below) and then click OK on both dialogs.

p1.jpg

Next we need to create a new JavaScript file that contains our code. Right-click the Extra Files node in the Solution Explorer and select Create new file… Name the new file calcPassword.js and press OK.
We know that the string returned by the server is a JSON object which contains an array with some strings. The last member of that array is a number which tells us the index of the string we want to extract from the array and encrypt using the proprietary algorithm. We add a new JavaScript function that extracts the value from the array:

function getPassword(stringData){
    var data = JSON.parse(stringData);
    var index = data[data.length - 1];
    var value = data[index];
}

The required value is stored in the “value” variable, and now we have to apply the encryption function. Luckily for us, the server code is written in JavaScript since it runs on Node.js, and it already has a module called crypto.js which implements the proprietary cryptographic algorithm. Import the crypto.js file into the script by right-clicking the Extra Files node in the Solution Explorer and selecting the Add files to script… option. Select the file crypto.js from our server folder (you would typically ask the R&D team for this file). Edit the file so that it contains only the function we need (this step is not mandatory and is performed for clarity; we could use the original file):

   function encrypt(data) {
    return data.toString().split("").reverse().join("");
  }

(I don’t recommend using this function as a real-life encryption method, as it only reverses the input string.)
Now we can update our getPassword function to call the encrypt function:

function getPassword(stringData){
    var data = JSON.parse(stringData);
    var index = data[data.length - 1];
    var value = data[index];
    return encrypt(value);
}
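As a quick Node.js sanity check of the two functions above: the challenge string below is made up, but it is chosen so the password comes out as "damhA", matching the token request shown at the end of this post:

```javascript
function encrypt(data) {
  return data.toString().split('').reverse().join('');
}

function getPassword(stringData) {
  const data = JSON.parse(stringData);
  const index = data[data.length - 1]; // last element picks the array slot
  return encrypt(data[index]);
}

// Made-up challenge: the trailing 1 selects "Ahmad", which encrypt() reverses.
const challenge = '["foo","Ahmad","bar",1]';
console.log(getPassword(challenge)); // damhA
```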

Now we just add the web_js_run step to run our JavaScript code:

    web_js_run(
        "Code=getPassword(LR.getParam('challenge'));",
        "ResultParam=password",
        SOURCES,
        "File=crypto.js", ENDITEM,
        "File=calcPassword.js", ENDITEM,
        LAST);

The parameters are straightforward, but you can always consult the product documentation if you are not sure. In this case we call the getPassword function defined in the JavaScript file. We want to pass it the parameter we got from the REST API call, so we use the LR.getParam utility function (implemented in JavaScript) to pass that parameter as a string to the getPassword function. The result of the function is stored in the parameter defined by the ResultParam argument (“password” in our case). Finally, we have the source files that contain our JavaScript code added as “File=…” arguments, each followed by the ENDITEM constant.

Obtaining the token
Our final step is obtaining the token from the server. We could just use the password parameter to create the URI, but that would be a mistake because we need to encode the URI first. We can do it easily using another web_js_run call (since JavaScript natively supports URI encoding). This time the call is quite simple:

    web_js_run(
        "Code='http:/'+'/localhost:3000/token?password='+encodeURI(LR.getParam('password'));",
        "ResultParam=uri",
        LAST);

(Note that we split up the ‘//’ string in the ‘Code’ parameters into two separate sub-strings.  If we were to pass it in as a single string “//”, LoadRunner would report error -26312, “Using "//" line comments in argument of type 'code' is illegal/unsupported”)

Now we add the step that makes the final call with the calculated URI to the appropriate REST API:

        web_custom_request("get token",
        "URL={uri}",
        "Method=GET",
        "RecContentType=application/json",
        LAST);

The result from the server is as expected:

Action.c(25): t=1987ms: 38-byte response body for "http://localhost:3000/token?password=damhA" (RelFrameId=1, Internal ID=4)
Action.c(25):     "4f6dfbce-412f-4c6c-8bac-7d7d66a6b822"

Conclusion
Enhancing your HTTP/HTML script with JavaScript code is easy and useful. There are numerous scenarios where this capability can be employed to shorten scripting time and create a much more robust script. The HP LoadRunner User Guide is key to understanding the full potential of this feature, as it covers the built-in function library that links the JavaScript code to your test script's code.