Wednesday, November 17, 2010

Wasted Time in LoadRunner

Wasted Time:-
If your guess is the Replay log, yes: wasted time appears in the Replay log of a VuGen script, as shown below.
Action.c(21): Notify: Transaction "Home" ended with "Pass" status (Duration: 20.1061 Wasted Time: 0.0010)


Here is what HP says about Wasted Time.


Wasted time is time spent on activities whose purpose is to support test analysis but that would never be performed by a browser user; for example, time spent keeping transaction statistics for later reporting. Wasted time is calculated internally by LoadRunner. Your script can also add wasted time with lr_wasted_time.


Sometimes, you may enter activities in a script that you do not want reported as part of the transaction statistics. Generally, these are activities related to record keeping, logging, or custom analysis. If you enhance the script with steps whose durations should not be included in the test statistics, you can track the time used by these steps with lr_start_timer and lr_end_timer. Then use lr_wasted_time to add this user-determined time to the internally generated wasted time.


You can retrieve the total wasted time (both that generated by LoadRunner automatically and that added with lr_wasted_time) with the function lr_get_transaction_wasted_time, or with lr_get_trans_instance_wasted_time, as appropriate.
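
To make this concrete, here is a minimal VuGen (C) sketch of the pattern described above; the URL and log message are placeholders, while the lr_* calls are the standard API mentioned in HP's text. Note that lr_end_timer returns seconds, whereas lr_wasted_time expects milliseconds.

Action()
{
    merc_timer_handle_t timer;   // handle returned by lr_start_timer
    double bookkeeping;          // seconds spent on our own record keeping

    lr_start_transaction("Home");

    // The business step actually being measured (placeholder URL).
    web_url("Home", "URL=http://myserver/home", LAST);

    // Custom record keeping that should NOT count toward "Home":
    timer = lr_start_timer();
    lr_output_message("Writing custom statistics...");
    bookkeeping = lr_end_timer(timer);   // elapsed time, in seconds

    // Add it to the internally generated wasted time (milliseconds).
    lr_wasted_time((long)(bookkeeping * 1000));

    // Total wasted time so far: LoadRunner's own overhead plus ours.
    lr_output_message("Duration: %f s, wasted: %f s",
                      lr_get_transaction_duration("Home"),
                      lr_get_transaction_wasted_time("Home"));

    lr_end_transaction("Home", LR_AUTO);
    return 0;
}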

When VuGen creates the log file, output.txt, and when online testing generates the Vuser log files, no adjustment is made for the wasted time; the actual elapsed transaction times are reported. The lr_get_transaction_duration function also returns the actual elapsed transaction time. This information may be useful in analyzing and developing test scripts.


However, in the on-line graphs in the LoadRunner Controller and the transaction response time graphs in the LoadRunner Analysis, the transaction times are reported after subtracting the wasted time. This is the time most pertinent to understanding the system being tested.
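
For the Replay log line shown at the top of this post, for example, Analysis would report the Home transaction as 20.1061 - 0.0010 = 20.1051 seconds.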

Monday, November 15, 2010

HP LoadRunner 11.0 Features

HP LoadRunner for the Windows operating system Software version: 11.00
Publication date: October 2010
What's New
Protocols
Ajax TruClient - An advanced protocol for modern JavaScript-based applications (including Ajax) that emulates user activity within a web browser. Scripts are developed interactively in Mozilla Firefox.
Silverlight - A new protocol for Silverlight-based applications emulating user activity at the transport level. Allows generating high-level scripts by automatically importing and configuring the WSDL files used by the application.
Java over HTTP - A new protocol designed to record Java-based applications and applets. It produces a Java language script using web functions. This protocol is distinguished from other Java protocols in that it can record and replay Java remote calls over HTTP.
Citrix - The Citrix protocol now supports Citrix Online Plugin versions 11.2 and 12.0, and adds support for Citrix XenApp Server 5.0.
Oracle NCA - NCA Java object property support now provides automated creation and registration within a script of a query-answer table of communication between client-side Java objects and the Oracle NCA server.
SAPGUI - Added support for SAPGUI for Windows Client version 7.20.
Service Test - The LoadRunner Controller can run scripts created in HP Service Test 11.00, HP's solution for creating and running automated tests for SOA and headless technologies. Refer to the Service Test documentation for details of creating Service Test scripts for a load testing scenario.

Features
Data Format Extension (DFE) - Enhanced data format capabilities for the Web (HTTP/HTML) protocol family. Allows converting raw HTTP traffic into a maintainable, structured XML format and enables correlations by XPath (see the sketch after this feature list).
Correlation Studio - Web (HTTP/HTML) automatic correlation mechanism has been enhanced to search for possible correlations in the larger scope of snapshot data created during code generation including data formatted by DFE.
Snapshot View - New snapshot view for Web (HTTP/HTML) protocol steps allows viewing complete HTTP traffic in both raw and DFE generated formats.
VuGen - HP ALM Integration - Enhanced integration with the HP Application Lifecycle Management platform, which also serves the Quality Center and Performance Center editions.
Windows Support - Added support for Windows 7 and Windows Server 2008. See below for limitations.
Analysis Reports - Enhanced Analysis reports are more customizable. Analysis data can be exported to a variety of formats, including Word, Excel, PDF, and HTML. New report templates allow saving report definitions and generating reports based on a template.
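
As a rough illustration of the XPath correlation that DFE enables, here is a hypothetical VuGen (C) snippet using web_reg_save_param_xpath, the XPath-based correlation function available in LR 11; the parameter name, query, and URL are placeholders:

    // Register an XPath query to run against the next server response
    // after DFE has converted it to structured XML (placeholders):
    web_reg_save_param_xpath("ParamName=session_id",
                             "QueryString=//session/id",
                             "NotFound=warning",
                             LAST);

    web_url("Login", "URL=http://myserver/login", LAST);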

LIMITATIONS:-
VuGen

SAP (Click and Script) recording. During recording, if you use a keyboard option instead of a UI element (for example, pressing Enter instead of clicking the log-on button), the step may not be recorded. In general, when recording your script, it is recommended to use UI elements rather than keyboard options.
Citrix snapshots. Black snapshots may appear during recording or replay when using Citrix Presentation Server 4.0 and 4.5 (before Rollup Pack 3).
Possible workaround: On the Citrix server, select Start Menu > Settings > Control Panel > Administrative Tools > Terminal Services Configuration > Server Settings > Licensing and change the setting Per User or Per Device to the alternative setting (i.e., if it is set to Per User, change it to Per Device, and vice versa).

Recording Window Size and XenApp Plugin for Hosted Applications 11. The recording window size option does not work properly with the XenApp Plugin for Hosted Applications 11. The size of the client window is applied, but the server screen resolution is not. This is a Citrix Client bug and will be fixed in future Citrix Client versions.

Workaround: When recording, set the window size equal to the local screen resolution. When replaying/load testing, set the VuGen or Load Generator's screen resolution to equal the resolution used when the script was recorded.
-->For the following protocols, replay is supported but recording is no longer supported: Siebel DB2, Siebel MSSQL, Siebel Oracle, and Voice XML.
-->In Test Result reports, you can only use the Export to HTML file utility for scripts replayed in version 9.50 and later. To generate an HTML report for scripts created with earlier versions, run the script again in version 9.50 (or later) of the product.
-->If you terminate the Protocol Advisor immediately after the detection process starts, it may leave a zombie process of the detected application, causing all subsequent recordings and detection sessions to fail. Workaround: Manually terminate the zombie process.
-->The new Web snapshot model is backward compatible with previous versions of LoadRunner; however, some snapshot data may be missing. If this occurs, regenerate the script.
-->The JSON Data Format Extension does not work in UNIX environments.
-->Recording a network-based application on Windows 7. To record a network-based application (including Web), IPv6 communication must be disabled. Follow the instructions in http://www.addictivetips.com/windows-tips/how-to-disable-ipv6-in-windows-7/ to disable IPv6 in Windows 7.
-->For protocols using XML, replay fails to create a request when a parameterized input argument contains the ampersand (&) character.

Correlation Studio
In VuGen, when correlating a large amount of snapshot data, in some cases the parameter is not created and the data is not correlated.
Scan for correlation will work on a script that was regenerated and replayed with the early access version.

Note:- Please share your experiences with LR 11.0

Source:- Vietnamesetestingboard

Friday, October 29, 2010

Cloud testing solutions with LoadRunner

HP has long been an industry leader in performance testing. In addition, we have strong cloud offerings of our own. We understand performance testing in the cloud, and we offer a range of options to help your IT organization make sure that cloud services are delivering the performance and scalability your business requires.


HP LoadRunner: the No. 1 performance testing software
At the core of our offerings is HP LoadRunner software, a comprehensive testing solution for predicting system behavior and performance that is currently in use by thousands of businesses around the world.
HP LoadRunner can:
• Record scripts at the interface level via clicks on screens and automatically capture valuable scripting information to create succinct, visually intuitive, self-explanatory scripts—and reduce scripting and maintenance time by 80 percent
• Emulate thousands of concurrent users, mimicking real users, so that you can apply production workloads to almost any application platform or environment, on-premises or in the cloud
• Stress applications end to end and gather data to identify scalability issues and quickly isolate performance bottlenecks
• Use non-intrusive, real-time performance monitors to obtain and display performance data from every tier, server, and system component during the load test
• Provide a single view of end-user, system-level, and code-level performance data, so that you can drill down deeper and identify the root cause of the problems
• Support performance testing for a wide range of application environments and protocols, including web, service-oriented architecture (SOA) and web services, asynchronous JavaScript + XML (Ajax), Remote Desktop Protocol (RDP), database, terminal, Citrix, Java™, .NET, and all major enterprise resource planning (ERP) and customer relationship management (CRM) applications, including PeopleSoft, Oracle®, SAP, and Siebel
• Support a combination of on-premises testing and cloud testing
Key features and benefits
HP LoadRunner, the industry’s best-selling load testing software, now available in the cloud, makes performance testing more accessible to all businesses. This on-demand software solution allows you to use a flexible pay-as-you-go approach for performance testing of mission-critical applications and websites.
HP LoadRunner in the cloud allows your business to:
• Save money by using affordable, hourly rates—you just need a credit card to pay on demand
• Protect the integrity of applications and website performance by making testing more ubiquitous
• Increase agility for quicker response to unplanned, ad hoc situations with an on-demand, pre-installed performance testing application

Failover testing
In the cloud, failover testing must be done virtually, not physically. You can no longer walk behind a machine and disconnect a cable. In the cloud, you’re going to have to virtually disable a network interface controller or a storage adapter to do failover testing for disaster recovery.
Elasticity and scalability
If you’re stress testing, you might find that you can keep adding load and never exhaust any one resource, because the cloud keeps adding resources to the system under test. This might give you the impression that the application under test (AUT) scales quite well. So be aware of false positives. You also need to be mindful of false negatives that could arise if the application isn’t configured to use the elastic nature of the cloud.
Application tuning
Even though the cloud is an elastic environment that can add physical resources, it is not a replacement for actually tuning the application code logic. Efficiency is a key part of computing in the cloud. You pay for a less efficient application that consumes more cycles, makes too many round trips to the client, and moves too much data back and forth between the application and a database service. You pay for inefficiency transactionally as a direct cost, and you pay for it in terms of wasted power and cooling.

Cross-cloud alignment bottlenecks

If you’re running different parts of the application under test in different cloud vendor environments, you need to consider cross-cloud alignment bottlenecks and the latency that they bring. You need to be able to align web services and align latencies.

Alignment is particularly important when you’re looking at a web service or application server that is running a web service that talks to a database service, which may be in the same cloud or a different cloud. If the database layer in the cloud has to make extra round trips to collect data sets, you’re going to encounter the associated costs. In on-premises testing, you can take the database layer for granted. That’s not the case with the cloud. And keep in mind that when you make calls that jump across cloud vendors, you might end up paying double for throughput—because you’re paying for cloud-in at one vendor and cloud-out at another vendor (for example, the same gigabyte of traffic may be billed once as ingress by one vendor and again as egress by the other). Cloud-to-cloud communication can actually be very costly.

What’s new about cloud performance?

The characteristics of the cloud create a new set of considerations for the entire lifecycle of performance testing, from test planning to results analysis and reporting.

Test planning:-
When it comes to test planning, user stories and success criteria are very similar. You still must be concerned with defining objectives for end-user response times, numbers of users, and the ability to meet the requirements of service-level agreements (SLAs). But the way your test bed is designed and constructed can be very different.


Let’s walk through some of these concerns:
• Test bed issues— Your test bed may need to be designed to scale up a number of nodes, scale out a number of nodes, and scale across multiple cloud vendors. What’s more, it might all be located off premises, so you’re not setting up the physical test bed in your local test lab; instead it’s out there in the cloud.
• Response times—If you’re used to testing response times with wide-area network (WAN) emulation, your issues are going to be compounded by the cloud. When your application is hosted in the cloud on external resources, latencies are beyond your control. That means you’re going to have to do extra work to combine the calculation of real WAN latency with your emulated WAN latencies (for example, if the real path to the cloud already adds 40 ms and you emulate another 80 ms for a branch-office profile, your scripts effectively see about 120 ms).
• End-user experience—Growing numbers of companies are hosting web services in the cloud. Testing the performance and scalability of these cloud-based services is important, but that alone is not enough. To understand the exact user experience, you must test end-to-end response times for the client applications that use these remote web services. This is especially important when the end-user client itself, such as a rich Internet application (RIA) that accesses the web service, is hosted on a web server in the cloud.
• International requirements— You may need to consider certain international requirements and issues, including currency calculations and units-of-measure calculations. These can compound performance issues, since they require additional background processing or real-time computations and translation.
• Green computing— As we noted earlier, cloud vendors have different approaches to optimizing system efficiency. So you can expect green computing criteria to differ from cloud vendor to cloud vendor.
• Test data regulations— You might have test data regulations that limit your ability to put test data out into the cloud. This is an important issue. It suggests you need to pay attention to privacy issues and compliance issues surrounding test data management.

Test development and construction
Scripts that worked on-premises might not work at all when you move an application to the cloud. There may be different security requirements, firewalls, network routing, and permissions and access. For instance, the directory services that you are using for the application might change, or be completely unavailable in the cloud.

Authentication type might change from internal certificates or keys to simple user name and password combinations. This means that when you take a script that worked fine on-premises and move it to the cloud, you might need to either rebuild it or update it to make it work properly in a cloud environment. This can happen even if the application under test is still hosted from your own company’s Internet connection, via your Internet service provider (ISP).

Also keep in mind that in the cloud, the test and development environment is more dynamic. Because you’re running on a virtualized platform, the test or development image might suddenly “move” to a different execution location: in the middle of recording a script against one host and one ID, your image may move to another data center. This can happen automatically, behind the scenes, under the control of the cloud service provider. Whether you’re on virtual IP or physical IP, your test bed, and even your development environment, will be more dynamic. Because it’s remotely hosted on a virtual platform, the AUT is not exactly the same physical, solid, local application that you had before.
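
As a minimal, hypothetical sketch of such an update, suppose the cloud deployment now requires explicit basic authentication where the on-premises build did not; web_set_user is the standard VuGen call, and the credentials and host below are placeholders:

Action()
{
    // Cloud deployment requires explicit credentials (placeholders);
    // the on-premises build authenticated transparently.
    web_set_user("cloud_tester",           // user name (placeholder)
                 "s3cret",                 // password (placeholder)
                 "app.example.com:443");   // host:port these credentials apply to

    web_url("Home", "URL=https://app.example.com/home", LAST);
    return 0;
}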

Budgetary considerations
The cloud application under test should be included in your budget for testing. There’s a real-time cost associated with your test environment now, in terms of bandwidth, system utilization, storage utilization, and actual utilization for the AUT. If the application is running on third-party infrastructure, there are charges associated with renting those machines. This means you have to budget differently from how you would budget if you were buying your own machines and running your tests locally.

One other consideration: You might want to virtualize costly or impossible services from the cloud. This could be the case if you have a third-party web service that is available only for production use, such as a shipping lookup or a geographic lookup service. You have to pay a transaction cost to the third-party vendor for a service like this, and the contract might preclude you from using the service for anything other than production uses. So you might want to virtualize, or sub out, those costly third-party calls to other web services that are running in the cloud.


Test execution:-
When it’s time to execute the test, you are likely to find that it is extremely difficult to monitor the application under test from on-premises. Many monitoring solutions require specially opened ports, and there could be firewall implications that affect your ability to monitor the AUT.
Another consideration is the need to remotely monitor usage or instances of the cloud. While you are running your test, some cloud vendors will give you information about how many bytes per second you are moving. So there is actually a type of external monitoring of the cloud infrastructure itself. This allows you to see what’s going on beneath the AUT, just as you would with a physical machine in your own data center.

Another good technique in test execution is to use a baseline transaction, or something that is very generic, such as a ping, that hits each node in the architecture. This baseline transaction gives you an idea of latency. This is a “canary in the coal mine” idea. Because the cloud AUT environment is so dynamic, you want to have at least one baseline transaction for relative comparisons. If you can’t have true visibility into certain factors that affect performance—such as bandwidth limitations, the distance and latencies between zones within a cloud vendor, the place where a VM is actually running, and latencies due to movement in the cloud—you at least want to be able to make relative comparisons.
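
Here is a minimal sketch of one such canary transaction, assuming a lightweight /ping endpoint; the node URL is a placeholder, and in practice you would repeat the pattern for each node in the architecture:

Action()
{
    // Lightweight "canary" request against one node, wrapped in its own
    // transaction so every run yields a comparable latency baseline.
    lr_start_transaction("Baseline_Node1");
    web_url("ping_node1",
            "URL=http://app-node-1.example.com/ping",   // placeholder node
            LAST);
    lr_end_transaction("Baseline_Node1", LR_AUTO);
    return 0;
}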

As we noted earlier, your AUT can move around within the cloud infrastructure, because of the elasticity of the cloud. When resources can scale up and scale out dynamically, and when your AUT can suddenly move, best practices call for some type of dynamic monitoring.

In addition, we recommend real-time investigation and optimization. When the AUT is up and running and a bottleneck occurs, it’s important to have diagnostics, to have a profiler, to have the ability to drill down and find out what query is running. In the cloud, the AUT has an on-off switch, so you might find that at the end of your testing, the AUT turns off and the virtual machine is gone—completely. And the next time you run the same test you might be in a slightly different zone or using a slightly different set of resources. So it’s really important to be able to investigate the cause of bottlenecks while you are running your test.

Test results analysis and reporting
Cloud “weather” can affect the accuracy of your test results. Dynamics such as movement in the cloud or changes in the cloud topology are important. It’s also important to use trending across cloud vendors to compare costs and benefits. Sometimes performance is better with one cloud vendor or another, depending on your application; the location of your users; and the type of operating systems, kernels, and platforms you’re using. It’s OK to test them to identify the best fit for your application.


Another consideration: Large test results stored in the cloud can increase costs, because a large set of test results can be many gigabytes. So it makes sense to think about downloading your test results onto a local drive and archiving them locally. And as we mentioned earlier, root-cause investigation can be difficult when the cloud application under test is turned off, is not running, or is simply gone. So how you manage the cloud AUT while the test is running can make a big difference in terms of your analysis capabilities.

Thursday, September 23, 2010

Testing Cloud & On-Premises Applications



Testing cloud and on-premises applications:-
In this scenario, a customer who has a complex system of business services and applications running both in the cloud and on-premises will use a testing solution from the cloud to design, develop, and execute performance testing of the application from the cloud—generating load on the cloud systems and the on-premises systems simultaneously. A complete testing solution hosted in the cloud is ideal for this situation, especially when combined with testing services and consulting assistance that are also hosted in the cloud.

Testing Cloud applications & services




Testing cloud applications and services:-
In this scenario, a customer who has a business service or application running in the cloud will use a testing solution from the cloud to design, develop, and execute performance testing of the application from the cloud—on the cloud and for the cloud. All of the testing assets and results can be maintained in virtual storage in the cloud, or they could be downloaded from the cloud and saved locally on the customer’s machine.