Thursday, December 10, 2015

Error -27796: Failed to connect to server "XXX": [10060] Connection timed out [MsgId: MERR-27796]


If you are facing the above error in VuGen, you might have already tried the solution below, but with nothing positive:

Run-time settings > Preferences > Options: under HTTP properties you will find an option "HTTP-request connect timeout"; set it to 999 or any number higher than the default 120.

The other scenario you might be facing is that the script runs successfully in VuGen but fails in Performance Center (PC).

Solution:-
Instead of using the "obtain the proxy settings from the default browser" option, copy the proxy URL from Internet Options in IE and add that proxy manually.

If you selected NO PROXY, then that might be the issue and you have to get the proxy details.

Thursday, May 28, 2015

Using string manipulation functions in VuGen/LoadRunner

String Manipulation Functions

String manipulation functions allow you to manipulate or compare strings. These functions begin with the str prefix. The available functions are:

Function Name - Description
strcat - Concatenates two strings.
strchr - Returns a pointer to the first occurrence of a character in a string.
strcmp - Compares two strings to determine the alphabetic order.
strcpy - Copies one string to another.
strdup - Duplicates a string.
stricmp - Performs a case-insensitive comparison of two strings.
strlen - Returns the length of a string.
strlwr - Converts a string to lower case.
strncat - Concatenates n characters from one string to another.
strncmp - Compares the first n characters of two strings.
strncpy - Copies the first n characters of one string to another.
strnicmp - Performs a case-insensitive comparison of the first n characters of two strings.
strrchr - Finds the last occurrence of a character in a string.
strset - Fills a string with a specific character.
strspn - Returns the length of the leading characters in a string that are contained in a specified string.
strstr - Returns the first occurrence of one string in another.
strtok - Returns a token from a string delimited by specified characters.
strupr - Converts a string to upper case.

For instance, consider the function strstr and how we use it.

Sample text:- 6172171084 (Service - RPC) Status: Success, Effective Immediately

If you want to capture the above text and check whether a telephone number and the word "Success" appear in that phrase, then we use the strstr function.

Using web_reg_save_param("Confirmation", ...), pull that phrase into a parameter called Confirmation and then use it to validate that the mobile number and the success message appear in the server response.

strstr

char *strstr(const char *string1, const char *string2);

Returns the first occurrence of one string in another.

string1 - The string that is searched.
string2 - The string that is searched for in the first string.

This is how we do it:

// If the success message is not displayed, fail the transaction and exit the iteration
        if (strstr(lr_eval_string("{Confirmation}"), "Success") == NULL)
        {
            lr_error_message("***** Success Message not displayed ******");
            lr_end_transaction("X_RPC_14_SubmitOrder", LR_FAIL);
            lr_exit(LR_EXIT_ITERATION_AND_CONTINUE, LR_FAIL);
        }

        // Dynamic validation for the mobile number
        if (strstr(lr_eval_string("{Confirmation}"), lr_eval_string("{mobile}")) == NULL)
        {
            lr_error_message("***** Dynamic Validation Failed for mobile number *****");
            lr_end_transaction("X_RPC_14_SubmitOrder", LR_FAIL);
            lr_exit(LR_EXIT_ITERATION_AND_CONTINUE, LR_FAIL);
        }


Example 1

This example uses strstr to search for the word "dog" in the string, str.
After strstr returns the address, position, the code then calculates the word's place in str by subtracting the address of the start of the string from position. This is the offset of the word "dog", in bytes.
    int offset;
    char * position;
    char * str = "The quick brown dog jumps over the lazy fox";
    char * search_str = "dog";
    position = (char *)strstr(str, search_str);
    // strstr has returned the address. Now calculate the offset from the beginning of str
    offset = (int)(position - str + 1);
    lr_output_message ("The string \"%s\" was found at position %d", search_str, offset);
Output:
Action.c(14): The string "dog" was found at position 17

Example 2

The next example shows how strstr can be used to parse file name suffixes. The code checks the equality of the following two pointers:
  • The pointer returned by strstr after searching for the substring is either null on failure, or the address of the first character of the file suffix on success. Adding 4 moves the pointer to the end of the string.
  • The start of the string (path) plus its length, len. This is also the end of the string.
If these two pointers are equal, then strstr has found the file suffix. If not, strstr has returned NULL.
    char * path = "c:\\tmp\\xxx\\foo.ocx";
    int len = strlen(path), filetype;
    // Check the suffix of the file name to determine its type
    if ((char *)(strstr(path, ".dll") + 4) == (path + len))
        filetype = 1;
    else if ((char *)(strstr(path, ".exe") + 4) == (path + len))
        filetype = 2;
    else if ((char *)(strstr(path, ".tlb") + 4) == (path + len))
        filetype = 3;
    else if ((char *)(strstr(path, ".ocx") + 4) == (path + len))
        filetype = 4;
    else
        filetype = 5;
    lr_output_message ("type = %d", filetype);

Output:
Action.c(18): type = 4

Thursday, April 23, 2015

Different performance tests including Peak performance and stability

Peak performance test:-
This test is done to analyze the application's peak performance. To find the peak we slowly ramp up the users and watch the response times and TPS as the user count grows. At a certain point TPS flattens out while response times keep increasing; in some cases TPS even decreases, driving response times up further as users increase.
The point where TPS starts to flatten is the peak performance point, and that is how we run a peak performance test.


Break point test :- 
We do the break point test along the same lines as peak performance, but we don't stop the test once we reach the peak point; we continue to increase users and see when the application breaks. That means we watch the errors-per-second graph along with the vusers, transaction response time and transactions-per-second graphs.
As we increase load on the application beyond peak performance, the application cannot handle it anymore; at some point it starts throwing errors, you see a spike in errors per second, and that's the break point for that application under the given test conditions.

Typical reasons for the application to reach breakpoint:- 
1. Queue lengths keep growing and the wait time for new requests in those queues reaches the maximum wait time, so requests error out
2. No Connections available to take new requests
3. Threads are busy and are waiting for other system resources
4. Concurrency of the objects being instantiated reached its limit
5. DB connections are full
6. JVM heap memory isn't sufficient.


Stability test:-
We have to know the peak performance point before we can run stability. Once the peak performance test is done and we know the maximum number of users the app supports, we run the stability test for an hour or two at the peak point, or at 5-20% more load than the peak point. Note that we assume the break point doesn't come immediately after the peak performance point.
e.g. if peak performance is at 90 users and the break point is at 100 users, then we can run stability at 90 or 95 users.

As part of performance testing, other tests we do are Regression, Load, Stress, Soak/Endurance.

Performance environment scaling and results extrapolation

In this article we discuss the points to consider when we get a new or existing environment for performance testing.

Rule 1 - The configurations of the production and test environments should match proportionally
e.g. if production has 4 web, 2 app and 2 DB servers,
the test environment might have 2 web, 1 app and 1 DB, but not 1 web, 2 app and 1 DB, etc.

Rule 2 - JVM heap memory should match
Production: 2 JVMs with a 4 GB heap each
Test: 1 JVM with a 4 GB heap is acceptable, but not 1 JVM with a 3 GB heap

Rule 3 - The log settings of test and prod should match

Rule 4 - Check for the connection limits of all servers (limit should be w.r.to the performance tests we perform on that env)

Depending on the answers to the above and some more information specific to that environment, we do the scaling.

Results extrapolation:-
Rule 2 is the most important when extrapolating: the total memory available on the server is not what matters, the allocated heap memory is, and that is what we compare with production - not the server's total memory.
If you have only 1 JVM and must extrapolate to a production environment with 4 JVMs, that is not practical and the results may not be accurate.
For accuracy we need more than 1 JVM, so that we can test with 1 JVM and then with 2 JVMs, map the results to see the pattern, and through curve fitting derive a formula that can be used to extrapolate.
Note:- if you need more accuracy, you also need more samples.

Apart from Rule 2, we also consider all the rules mentioned above to extrapolate the results.

Friday, February 6, 2015

Hybrid performance testing using Noise Generation in LoadRunner 12.02

In a real-world scenario, 20-30% of users will be idle (the figure may be higher or lower for some apps), and we include them while taking the volumetrics. Those idle users mostly open the website and then remain idle, or close the session without doing anything; we call them NOISE.

With LoadRunner 12.0, HP has provided the option to generate NOISE alongside normal testing. We shall see the purpose of that noise and how to build a hybrid test scenario using the Noise Generation option in LoadRunner. With that, we have 2 approaches to web performance testing:
  1. Create relatively complex tests with transactions, complicated flows, parameterization, checkpoints etc. Such tests, of course, require a complex script and can produce large amounts of interesting data.
  2. Create a heavy load on the server by having the same URL accessed simultaneously by a large number of real or virtual users utilizing the power of load testing. The test typically consists of very few lines of code, and won’t be able to produce any sophisticated data. It can only determine the response times and whether the server has crashed or not. We call this “noise generation”, not out of disrespect, but to illustrate that such a test would create a massive and not quite coherent load on the server application.
Traditionally HP LoadRunner has been providing the means for enabling the first approach, so that users can test complex business flows and perform a detailed analysis of the resulting data. However, in some cases it’s enough to execute such flows for a relatively small fraction of vusers, while the rest can just generate load on the server in order to simulate stress conditions for the more “sophisticated” vusers.

HP introduced the “Noise Generation” feature in LoadRunner for precisely this reason. It lets you implement a hybrid testing model:  run large amounts of reasonably-priced vusers, which will generate a ‘primitive’ load by accessing a single URL, and alongside them, a smaller number of ‘proper’ vusers, which will gather the ‘interesting’ data.

Here are the main characteristics of the ‘noise’ vusers:
  1. They’re 10 times cheaper than the regular ones! If you purchase a license for, let’s say, 100 Web vusers, you’ll be able to run 100 regular vusers (as before), or 50 regular and 500 noise ones (which you’re getting for the price of 50 regular), or 10 regular and 900 noise – any combination, as long as the number of full-scale vusers plus the number of noise vusers divided by 10 doesn’t exceed the number of purchased licenses. (Note:- with the trial version HP gives a 50-vuser license, which means a 500-noise-vuser load by default)
  2. No VuGen scripting is needed in order to build a noise vuser script. Just input the number of vusers and the URL you’d like to put under stress, and LoadRunner will take care of everything else behind the scenes.
  3. A noise vuser works by simply sending an HTTP GET request to the specified URL, once per test iteration.
  4. Although noise vusers can run alongside a script based on any protocol, the main goal of this feature is to support Web-based protocols, such as Web HTTP, Mobile HTTP, Flex, SAP Web etc. Nevertheless, if your application under test has a Web interface of any sort, you can apply a load to it using the Noise Generation feature, even if the main testing is being done using a different technology.
 So, how does this feature work? Very simply! As mentioned above, no recording or VuGen scripting is needed. When you’re defining your testing scenario in the Controller, just press the familiar “Add group” button, and you’ll see the new “Noise Generator” option:
Noise Generator option

Select the ‘Use a Noise Generator’ checkbox, and type the URL, which will be the target of the noise generation.  Select the Load Generator machine as usual (unless you’re working in the percentage mode), the number of vusers (remember that you can use 10 of these vusers for the same price as a regular vuser) and the group name (although you don’t have to do the latter, the Controller will auto-generate it based on the URL’s domain):
Use a noise generator

Groups.png

Note that the “noise_” prefix is automatically added to the noise group names, in order to distinguish them.

The ‘Details’ dialog will reflect the fact that it’s a noise group we’re talking about in the script type and the ’View Script’ button will be disabled:
Group Information.png

Since the “noise” script is rather primitive, there’s not much you can do in the way of configuring it. However, one thing is particularly important. If you want to get meaningful data from the script execution, make sure that either the “action as a transaction” or the “step as a transaction” option (which is the same thing in this case, since the script’s main action contains exactly one step) is turned on in the Run-time settings. That way you’ll be able to obtain accurate measurements at the end of the script run:
Run-time Settings

And that’s it! You’re ready to run your hybrid Web+noise test, and the ‘noise’ groups will run just like any other. 

How to calculate Virtual User "footprint"

Originally authored by Mark Tomlinson on HP.com; I felt it's very useful for everyone.

One of the most common questions we get about LoadRunner Virtual Users relates to the resources required to execute the scripts on the Load Generator. One advantage of the maturity of LoadRunner is that we have supported so many different drivers, protocols and environments over the past 2 decades. We've learned a great deal about how to give a more detailed response and really advise users on how much Load Generator resource will be required to be successful. You might imagine that the answer isn't black & white, or even close to a one-sentence answer. Here are some simple ideas that can help you determine how to configure your Load Generators.

For Memory: each protocol has different parts that affect how much memory is required, so there is no single answer across all virtual users - Web is different from RDP, which is different from SAP Click & Script, which is different from RMI-Java. Some vuser types have external drivers (like ICA, RDP or SAP), so the guidelines don't include the footprint for the external executable driver. The Click & Script vuser types can really confuse you, because they seem like new versions of old protocols... but that's not actually true - the C&S protocols are a completely new architecture. More than anything, every vuser’s memory footprint is GREATLY impacted by the following factors:
  • the size of the driver library (this is fairly static)
  • the # of lines of code included in the recording (varies greatly by customer)
  • the # and size of parameters included in the recording (varies greatly by customer and script type)

For CPU: of course, each driver has slight differences in CPU overhead, but for the most part they are all very efficient (and - yes, we will continue to improve Click & Script to be better!!) the amount of CPU used on a LoadRunner load generator will vary by the following factors:
  • iteration and pacing, which controls how “active” the vuser is (varies greatly by customer)
  • stress testing scenarios usually use more CPU, as opposed to real-world testing which has slower vusers (but more of them)
  • customized code or extra script processing (like string manipulation, or calculations) will chew up more CPU

For Disk: the main thing here is logging - the more you increase the detail and amount of logging, the more disk will be consumed. External parameter files (written to or read from individual vuser threads) will really hammer the local disk. Some vusers with external driver executables will have additional logging of their own, or caching of content.

For Network: the result of multiple virtual users running on a single load generator is a concentration of all those vusers' network traffic on a single NIC. The payload of network API calls varies greatly for each and every different application. Stress testing (e.g. fast iteration pacing, no think times) could easily result in over-utilization of NIC bandwidth.

When it comes to calculating your virtual user footprint, it's actually quite easy. But first, let me tell you that not everyone needs to do extensive calculations of virtual user resource utilization. This is important *only* when you have a very high number of virtual users or a very limited number of Load Generators. The basic approach is to run a preliminary test with just 1 script while you measure the resource utilization on the Load Generator directly. You are specifically interested in the mmdrv.exe process on the Load Generator, which is LoadRunner's primary multi-threaded driver. Measuring the private bytes reserved by this process for 1, 2, 3, 4, 5 and then 10 virtual users will give you a clue as to how much memory each additional virtual user requires. Simultaneously you should monitor CPU, Network and Disk just to determine whether there is any excessive utilization.

mmdrv.jpg

It is important to note that you should be gathering information about the performance of your script running on the Load Generator - using the same run-time settings that you will use during the full scenario run.  If you are stress testing with very little think time or delay, then you'll want to use those same settings.

Source:- http://h30499.www3.hp.com/t5/HP-LoadRunner-and-Performance/How-To-Understand-and-calculate-Virtual-User-quot-footprint-quot/ba-p/2407591#.VNPsedLF_ng

How to correlate a value with dynamic boundaries in VuGen

If you are checking this post, it seems the left and right boundaries are killing your time. Let's check how to deal with those dynamic boundaries using the text flags of VuGen.


You can use one of the following flags after the text, preceded by a forward slash:
/IC to ignore the case.
/DIG to interpret the pound sign (#) as a wildcard for a single digit.
/ALNUM to interpret the caret sign (^) as a wildcard for a single US-ASCII alphanumeric character. There are three syntaxes: ALNUMIC to ignore case, ALNUMLC to match only lower case, and ALNUMUC to match only upper case.

eg: email\u003d\"notification+mkuw__3d@facebookmail.com
In the above server response, if you have to capture the part between the boundaries (notification+mkuw), the left and right boundaries contain alphanumeric characters, and if those characters are dynamic you have to follow the approach below.

web_reg_save_param("account_notify",
        "LB/DIG=email\\u00#d\\\"",
        "RB/DIG=__#d@facebookmail.com",
        "Ord=1",
        "Search=Body",
        LAST);

and for instance, let's say the "u003d" in the left boundary is itself dynamic; then the approach below applies.

web_reg_save_param("account_notify",
        "LB/ALNUMIC=email\\^^^^^\\\"",
        "RB/DIG=__#d@facebookmail.com",
        "Ord=1",
        "Search=Body",
        LAST);

ALNUMIC means the boundary characters are alphanumeric, and the IC suffix tells VuGen to ignore the case (upper or lower).

Friday, January 23, 2015

How to calculate doc download time in LoadRunner

Let's say we have a document in one of the pages we record in VuGen and we have to capture the time it takes to download only that file. Use the steps below after the step that downloads the file.
Note that RecContentType=application/pdf for that step.

web_url("BCAction.do",
        "URL={url}//BCAction.do?choice=display&ctno=1234567890",
        "Resource=1",
        "RecContentType=application/pdf",
        "Referer=",
        "Snapshot=t24.inf",
        LAST);

    int time;

    time = web_get_int_property(HTTP_INFO_DOWNLOAD_TIME);
    lr_output_message("The time in ms to download the PDF is: %d", time);


Some more information about web_get_int_property
The web_get_int_property function returns specific information about the previous HTTP request.
The meaning of the return value depends on the HttpInfoType argument. HttpInfoType can be any of the following options. The first constant in each pair (HTTP_*) is for C; the second (object.HTTP_*) is for object-oriented languages (JavaScript).

HTTP_INFO_RETURN_CODE or object.HTTP_INFO_RETURN_CODE
The return code in HTTP response header.

HTTP_INFO_DOWNLOAD_SIZE or object.HTTP_INFO_DOWNLOAD_SIZE
The size (in bytes) of the last download, including the header, body, and communications overhead (for example, NTLM negotiation).

HTTP_INFO_DOWNLOAD_TIME or object.HTTP_INFO_DOWNLOAD_TIME
The time (in milliseconds) of the last download.

HTTP_INFO_TOTAL_REQUEST_STAT or object.HTTP_INFO_TOTAL_REQUEST_STAT
Returns the accumulated size of all headers and bodies since the first time web_get_int_property was issued with HTTP_INFO_TOTAL_REQUEST_STAT.

HTTP_INFO_TOTAL_RESPONSE_STAT or object.HTTP_INFO_TOTAL_RESPONSE_STAT
Returns the accumulated size, including header and body, of all responses since the first time web_get_int_property was issued with HTTP_INFO_TOTAL_RESPONSE_STAT

This function is supported for all Web scripts, and for WAP scripts running in HTTP mode only. It is not supported for WAP scripts running in Wireless Session Protocol (WSP) replay mode.