Wednesday, November 17, 2010

Wasted Time in LoadRunner

Wasted Time:-
If your guess is the Replay log, yes: we see it in the Replay log of a VuGen script, as shown below.
Action.c(21): Notify: Transaction "Home" ended with "Pass" status (Duration: 20.1061 Wasted Time: 0.0010)


Here is what HP says about Wasted Time.


Wasted time is time spent on activities whose purpose is to support test analysis, but would never be performed by a browser user, for example, time spent keeping transaction statistics for later reporting. Wasted time is calculated internally by LoadRunner. Your script can also add wasted time with lr_wasted_time.


Sometimes, you may enter activities in a script that you do not want reported as part of the transaction statistics. Generally, these are activities related to record keeping, logging, or custom analysis. If you enhance the script with steps whose durations should not be included in the test statistics, you can track the time used by these steps with lr_start_timer and lr_end_timer. Then, the function lr_wasted_time is used to add this user-determined time to the internally generated wasted time.


You can retrieve the total wasted time (both that generated by LoadRunner automatically and that added with lr_wasted_time) with the function lr_get_transaction_wasted_time, or with lr_get_trans_instance_wasted_time, as appropriate.
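
To make this concrete, here is a minimal sketch of how these functions fit together. The transaction name "Demo" and the bookkeeping step are hypothetical; the pattern simply follows the function descriptions above:

Action()
{
    merc_timer_handle_t timer;
    double waste;

    lr_start_transaction("Demo");

    /* A record-keeping step whose duration should not be
       included in the transaction statistics */
    timer = lr_start_timer();
    lr_output_message("Writing custom statistics...");
    waste = lr_end_timer(timer);            /* elapsed time, in seconds */

    /* Add the bookkeeping time (in milliseconds) to the
       internally generated wasted time */
    lr_wasted_time((long)(waste * 1000));

    /* Duration still reports the actual elapsed time; the wasted
       time is subtracted only in Controller/Analysis graphs */
    lr_output_message("Duration: %f, Wasted: %f",
        lr_get_transaction_duration("Demo"),
        lr_get_transaction_wasted_time("Demo"));

    lr_end_transaction("Demo", LR_AUTO);
    return 0;
}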

When VuGen creates the log file, output.txt, and when online testing generates the Vuser log files, no adjustment is made for the wasted time: the actual elapsed transaction times are reported. The lr_get_transaction_duration function also returns the actual elapsed transaction time. This information may be useful in analyzing and developing test scripts.


However, in the on-line graphs in the LoadRunner Controller and the transaction response time graphs in the LoadRunner Analysis, the transaction times are reported after subtracting the wasted time. This is the time most pertinent to understanding the system being tested.

Monday, November 15, 2010

HP LoadRunner 11.0 Features

HP LoadRunner for the Windows operating system Software version: 11.00
Publication date: October 2010
What's New
Protocols
Ajax TruClient - An advanced protocol for modern JavaScript-based applications (including Ajax), emulating user activity within a web browser. Scripts are developed interactively in Mozilla Firefox.
Silverlight - A new protocol for Silverlight based applications emulating user activity at the transport level. Allows generating high level scripts by automatically importing and configuring WSDL files used by the application.
Java over HTTP - A new protocol designed to record java-based applications and applets. It produces a Java language script using web functions. This protocol is distinguished from other Java protocols in that it can record and replay Java remote calls over HTTP.
Citrix
The Citrix Protocol now supports Citrix Online Plugin versions 11.2 and 12.0.
Added support for Citrix XenApp Server 5.0
Oracle NCA - NCA Java object property support now provides automated creation and registration within a script of a query-answer table of communication between client-side Java objects and the Oracle NCA server.
SAPGUI - Added support for SAPGUI for Windows Client version 7.20.
Service Test - The LoadRunner Controller can run scripts created in HP Service Test 11.00, HP's solution for creating and running automated tests for SOA and headless technologies. Refer to the Service Test documentation for details of creating Service Test scripts for a load testing scenario.

Features
Data Format Extension (DFE) - Enhanced data format capabilities for the Web (HTTP/HTML) protocol family. Allows converting raw HTTP traffic into a maintainable, structured XML format and enables correlation via XPath.
Correlation Studio - Web (HTTP/HTML) automatic correlation mechanism has been enhanced to search for possible correlations in the larger scope of snapshot data created during code generation including data formatted by DFE.
Snapshot View - New snapshot view for Web (HTTP/HTML) protocol steps allows viewing complete HTTP traffic in both raw and DFE generated formats.
VuGen - HP ALM Integration - Enhanced integration with the HP Application Lifecycle Management platform, which also serves the Quality Center and Performance Center editions.
Windows Support - Added support for Windows 7 and Windows Server 2008. See below for limitations.
Analysis Reports - Enhanced Analysis reports are more customizable. Analysis data can be exported to a variety of formats, including Word, Excel, PDF, and HTML. New report templates allow saving report definitions and generating reports based on a template.

LIMITATIONS:-
VuGen

SAP (Click and Script) recording. During recording, if you use a keyboard option instead of a UI element (for example, pressing Enter instead of clicking the log on button), the step may not be recorded. In general, when recording your script, it is recommended to use UI elements rather than keyboard options.
Citrix snapshots. Black snapshots may appear during recording or replay when using Citrix Presentation Server 4.0 and 4.5 (before Rollup Pack 3).
Possible workaround: On the Citrix server, select Start Menu > Settings > Control Panel > Administrative Tools > Terminal Services Configuration > Server Settings > Licensing and change the setting Per User or Per Device to the alternative setting (i.e., if it is set to Per User, change it to Per Device, and vice versa).

Recording Window Size and XenApp Plugin for Hosted Applications 11. The recording window size option does not work properly with the XenApp Plugin for Hosted Applications 11. The size of the client window is applied, but the server screen resolution is not. This is a Citrix Client bug and will be fixed in future Citrix Client versions.

Workaround: When recording, set the window size equal to the local screen resolution. When replaying/load testing, set the VuGen or Load Generator's screen resolution to equal the resolution used when the script was recorded.
-->For the following protocols, replay is supported but recording is no longer supported: Siebel DB2, Siebel MSSQL, Siebel Oracle, and Voice XML.
-->In Test Result reports, you can only use the Export to HTML file utility for scripts replayed in version 9.50 and later. To generate an HTML report for scripts created with earlier versions, run the script again in the 9.50 version (or later) of the product.
-->If you terminate the Protocol Advisor immediately after the detection process starts, the Protocol Advisor may leave a zombie process of the detected application, causing all recordings and detection sessions that follow to fail. Workaround: Manually terminate the zombie process.
-->The new Web snapshot model is backward compatible with previous versions of LoadRunner; however, some snapshot data may be missing. If this occurs, regenerate the script.
-->The JSON Data Format Extension does not work in UNIX environments.
-->Recording a network-based application on Windows 7. To record a network-based application (including Web), IPv6 communication must be disabled. Follow the instructions in http://www.addictivetips.com/windows-tips/how-to-disable-ipv6-in-windows-7/ to disable IPv6 in Windows 7.
-->For protocols using XML, replay fails to create a request when a parameterized input argument contains the ampersand (&) character. A possible workaround sketch follows this list.
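
One way around the ampersand limitation is to escape each '&' as the XML entity "&amp;" before the value reaches the XML argument. A minimal sketch, assuming a parameter named p_Value (hypothetical); drop it into the Action before the XML step and use {p_ValueEscaped} there:

    char fixed[1024] = "";
    char *src = lr_eval_string("{p_Value}");   /* raw parameter value */
    int i = 0;

    /* Copy the value, replacing each '&' with "&amp;" */
    while (*src && i < (int)sizeof(fixed) - 6) {
        if (*src == '&') {
            strcpy(&fixed[i], "&amp;");
            i += 5;
        } else {
            fixed[i++] = *src;
        }
        src++;
    }
    fixed[i] = '\0';

    lr_save_string(fixed, "p_ValueEscaped");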

Correlation Studio
In VuGen, when correlating large amounts of snapshot data, in some cases the parameter is not created and the data is not correlated.
Scan for correlation will work on a script that has been regenerated and replayed with the early access version.

Note:- Please share your experiences on LR 11.0

Source:- Vietnamesetestingboard

Friday, October 29, 2010

Cloud testing solutions with LoadRunner

HP has long been an industry leader in performance testing. In addition, we have strong cloud offerings of our own. We understand performance testing in the cloud, and we offer a range of options to help your IT organization make sure that cloud services are delivering the performance and scalability your business requires.


HP LoadRunner: the No. 1 performance testing software
At the core of our offerings is HP LoadRunner software, a comprehensive testing solution for predicting system behavior and performance that is currently in use by thousands of businesses around the world.
HP LoadRunner can:
• Record scripts at the interface level via clicks on screens and automatically capture valuable scripting information to create succinct, visually intuitive, self-explanatory scripts—and reduce scripting and maintenance time by 80 percent
• Emulate thousands of concurrent users, mimicking real users, so that you can apply production workloads to almost any application platform or environment, on-premises or in the cloud
• Stress applications end to end and gather data to identify scalability issues and quickly isolate performance bottlenecks
• Use non-intrusive, real-time performance monitors to obtain and display performance data from every tier, server, and system component during the load test
• Provide a single view of end-user, system-level, and code-level performance data, so that you can drill down deeper and identify the root cause of the problems
• Support performance testing for a wide range of application environments and protocols, including web, service-oriented architecture (SOA) and web services, asynchronous JavaScript + XML (Ajax), Remote Desktop Protocol (RDP), database, terminal, Citrix, Java™, .NET, and all major enterprise resource planning (ERP) and customer relationship management (CRM) applications, including PeopleSoft, Oracle®, SAP, and Siebel
• Support a combination of on-premises testing and cloud testing
Key features and benefits
HP LoadRunner, the industry’s best-selling load testing software, now available in the cloud, makes performance testing more accessible to all businesses. This on-demand software solution allows you to use a flexible pay-as-you-go approach for performance testing of mission-critical applications and websites.
HP LoadRunner in the cloud allows your business to:
• Save money by using affordable, hourly rates—you just need a credit card to pay on demand
• Protect the integrity of applications and website performance by making testing more ubiquitous
• Increase agility for quicker response to unplanned, ad hoc situations with an on-demand, pre-installed performance testing application

Failover testing
In the cloud, failover testing must be done virtually, not physically. You can no longer walk behind a machine and disconnect a cable. In the cloud, you’re going to have to virtually disable a network interface controller or a storage adapter to do failover testing for disaster recovery.
Elasticity and scalability
If you’re stress testing, you might find that you can keep adding load and never exhaust any one resource, because the cloud keeps adding resources to the system under test. This might give you the impression that the AUT scales quite well. So be aware of false positives. You also need to be mindful of false negatives that could arise if the application isn’t configured to use the elastic nature of the cloud.
Application tuning
Even though the cloud is an elastic environment that can add physical resources, it is not a replacement for actually tuning the application code logic. Efficiency is a key part of computing in the cloud. You pay for a less efficient application that burns more cycles, makes too many round trips to the client, and moves too much data back and forth to a database service. You pay for inefficiency transactionally as a direct cost, and you pay for it in terms of wasted power and cooling.

Cross-cloud alignment bottlenecks

If you’re running different parts of the application under test in different cloud vendor environments, you need to consider cross-cloud alignment bottlenecks and the latency that they bring. You need to be able to align web services and align latencies.

Alignment is particularly important when you’re looking at a web service or application server that is running a web service that is talking to a database service that may be in the same cloud or a different cloud. If the database layer in the cloud has to make extra round trips to collect data sets, you’re going to encounter the associated costs. In on-premises testing, you can take the database layer for granted. That’s not the case with the cloud. And keep in mind that when you make calls that jump across cloud vendors, you might end up paying double for throughput—because you’re paying for cloud-in at one vendor and cloud-out at another vendor. Cloud-to-cloud communication can actually be very costly.

What’s new about cloud performance?

The characteristics of the cloud create a new set of considerations for the entire lifecycle of performance testing, from test planning to results analysis and reporting.

Test planning:-
When it comes to test planning, user stories and success criteria are very similar. You still must be concerned with defining objectives for end-user response times, numbers of users, and the ability to meet the requirements of service-level agreements (SLAs). But the way your test bed is designed and constructed can be very different.


Let’s walk through some of these concerns:
• Test bed issues— Your test bed may need to be designed to scale up a number of nodes, scale out a number of nodes, and scale across multiple cloud vendors. What’s more, it might all be located off premises, so you’re not setting up the physical test bed in your local test lab; instead it’s out there in the cloud.
• Response times—If you’re used to testing response times with wide-area network (WAN) emulation, your issues are going to be compounded by the cloud. When your application is hosted in the cloud on external resources, latencies are beyond your control. That means you’re going to have to do extra work to combine the calculation of real WAN latency with emulated WAN latencies (see the sketch after this list).
• End-user experience—Growing numbers of companies are hosting web services in the cloud. Testing the performance and scalability of these cloud-based services is important, but that alone is not enough. To understand the exact user experience, you must test end-to-end response times for the client applications that use these remote web services. This is very important when it comes to hosting an end-user client on a web server in the cloud, such as a rich Internet application (RIA) that is accessing the web service.
• International requirements— You may need to consider certain international requirements and issues, including currency calculations and units-of-measure calculations. These can compound performance issues that require additional background processing or real-time computations and translation.
• Green computing— As we noted earlier, cloud vendors have different approaches to optimizing system efficiency. So you can expect green computing criteria to differ from cloud vendor to cloud vendor.
• Test data regulations— You might have test data regulations that limit your ability to put test data out into the cloud. This is an important issue. It suggests you need to pay attention to privacy issues and compliance issues surrounding test data management.
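
As noted in the response-times item above, one simple way to combine real and emulated latencies is to treat them as a fixed deduction from the end-user response-time budget. A minimal sketch with hypothetical numbers:

#include <stdio.h>

int main(void)
{
    /* Hypothetical values for one end-user profile */
    double real_wan_ms  = 80.0;   /* measured latency, test site to cloud  */
    double emulated_ms  = 120.0;  /* WAN-emulated last-mile latency        */
    double target_rt_ms = 500.0;  /* SLA target for end-user response time */

    /* Time budget left for server-side processing */
    double server_budget_ms = target_rt_ms - (real_wan_ms + emulated_ms);

    printf("Server-side budget: %.0f ms\n", server_budget_ms);  /* 300 ms */
    return 0;
}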

Test development and construction:-
Scripts that worked on-premises might not work at all when you move an application to the cloud. There may be different security requirements, firewalls, network routing, and permissions and access. For instance, the directory services that you are using for the application might change, or be completely unavailable out in the cloud.

Authentication type might change from internal certificates or keys to simple user name and password combinations. This means that when you take a script that worked fine on-premises and move it to the cloud, you might need to either rebuild it or update it to make it work properly in a cloud environment. This can happen even if the application under test is still hosted from your own company’s Internet connection, via your Internet service provider (ISP).

Also keep in mind that in the cloud, the test and development environment is more dynamic. Because you’re running on a virtualized platform, the test or development image might suddenly “move” to a different execution location: in the middle of recording a script you are developing against one host and one ID, your image moves to another data center. This can happen automatically, behind the scenes, under the control of the cloud service provider. Whether you’re on virtual IP or physical IP, your test bed, and even your development environment, will be more dynamic. Because it’s remotely hosted and on a virtual platform, the AUT is not exactly the same physical, solid, local application that you had before.

Budgetary considerations
The cloud application under test should be included in your budget for testing. There’s a real-time cost associated with your test environment now, in terms of bandwidth, system utilization, storage utilization, and actual utilization for the AUT. If the application is running on third-party infrastructure, there are charges associated with renting those machines. This means you have to budget differently from how you would budget if you were buying your own machines and running your tests locally.

One other consideration: You might want to virtualize costly or impossible services from the cloud. This could be the case if you have a third-party web service that is available only for production use, such as a shipping lookup or a geographic lookup service. You have to pay a transaction cost to the third-party vendor for a service like this, and the contract might preclude you from using the service for anything other than production uses. So you might want to virtualize, or sub out, those costly third-party calls to other web services that are running in the cloud.


Test execution:-
When it’s time to execute the test, you are likely to find that it is extremely difficult to monitor the application under test from on-premises. Many monitoring solutions require specialized opened ports, and there could be firewall implications that affect your ability to monitor the AUT.
Another consideration is the need to remotely monitor usage or instances of the cloud. While you are running your test, some cloud vendors will give you information about how many bytes per second you are moving. So there is actually a type of external monitoring of the cloud infrastructure itself. This allows you to see what’s going on beneath the AUT, just as you would with a physical machine in your own data center.

Another good technique in test execution is to use a baseline transaction, or something that is very generic, such as a ping, that hits each node in the architecture. This baseline transaction gives you an idea of latency. This is a “canary in the coal mine” idea. Because the cloud AUT environment is so dynamic, you want to have at least one baseline transaction for relative comparisons. If you can’t have true visibility into certain factors that affect performance—such as bandwidth limitations, the distance and latencies between zones within a cloud vendor, the place where a VM is actually running, and latencies due to movement in the cloud—you at least want to be able to make relative comparisons.
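
A minimal sketch of such a baseline transaction in VuGen, assuming each node exposes some lightweight static resource (the URL and names here are hypothetical):

Action()
{
    lr_start_transaction("Baseline_Ping");

    /* Hit a trivial, static resource on one node so the measured
       time is dominated by network latency, not server work */
    web_url("health_check",
        "URL=http://app-node-1.example.com/health",
        "Resource=0",
        LAST);

    lr_end_transaction("Baseline_Ping", LR_AUTO);
    return 0;
}

Repeating the same transaction against each node in the architecture gives you the relative latency picture described above.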

As we noted earlier, your AUT can move around within the cloud infrastructure, because of the elasticity of the cloud. When resources can scale up and scale out dynamically, and when your AUT can suddenly move, best practices call for some type of dynamic monitoring.

In addition, we recommend real-time investigation and optimization. When the AUT is up and running and a bottleneck occurs, it’s important to have diagnostics, to have a profiler, to have the ability to drill down and find out what query is running. In the cloud, the AUT has an on-off switch, so you might find that at the end of your testing, the AUT turns off and the virtual machine is gone—completely. And the next time you run the same test you might be in a slightly different zone or using a slightly different set of resources. So it’s really important to be able to investigate the cause of bottlenecks while you are running your test.

Test results analysis and reporting
Cloud “weather” can affect the accuracy of your test results. Dynamics such as movement in the cloud or changes in the cloud topology are important. It’s also important to use trending across cloud vendors to compare costs and benefits. Sometimes performance is better with one cloud vendor or another, depending on your application; the location of your users; and the type of operating systems, kernels, and platforms you’re using. It’s OK to test them to identify the best fit for your application.


Another consideration: Large test results stored in the cloud can increase costs, because a large set of test results can be many gigabytes. So it makes sense to think about downloading your test results onto a local drive and archiving them locally. And as we mentioned earlier, root-cause investigation can be difficult when the cloud application under test is turned off, is not running, or is simply gone. So how you manage the cloud AUT while the test is running can make a big difference in terms of your analysis capabilities.

Thursday, September 23, 2010

Testing Cloud & on premises applications



Testing cloud and on-premises applications:-
In this scenario, a customer who has a complex system of business services and applications running both in the cloud and on-premises will use a testing solution from the cloud to design, develop, and execute performance testing of the application from the cloud—generating load on the cloud systems and the on-premises systems simultaneously. A complete testing solution hosted in the cloud is ideal for this situation, especially when combined with testing services and consulting assistance that are also hosted in the cloud.

Testing Cloud applications & services




Testing cloud applications and services:-
In this scenario, a customer who has a business service or application running in the cloud will use a testing solution from the cloud to design, develop, and execute performance testing of the application from the cloud—on the cloud and for the cloud. All of the testing assets and results can be maintained in virtual storage in the cloud, or they could be downloaded from the cloud and saved locally on the customer’s machine.

Testing on-premises applications from the CLOUD


Testing on-premises applications from the cloud:-
In this scenario, a customer who has an on-premises application will make the application accessible through the firewall thru the Internet, or perhaps it is an e-business application that is normally hosted with Internet access. The customer will use a testing solution like HP LoadRunner software from the cloud to design, develop, and execute performance testing of the application from the cloud. All of the testing assets and results can be downloaded from the cloud and saved locally on the customer’s machine.

Cloud Computing for Performance testers Part2...

Accessibility: -
Accessibility refers to the ability to access the interfaces and the system under test instantly. The minute you fire up a virtual machine, it’s available. You can access that virtual machine from anywhere, and you can access it via application programming interfaces (APIs). But it’s remotely running, so you don’t have physical access to the machine that supports it.

For developers and performance testers, cloud accessibility is a two-way street. It brings new ways for systems to automatically connect themselves, but it takes away your ability to physically access the underlying systems. You can no longer touch the machine. Accessibility is now in a different mode. The system is hosted on the Internet and governed by a remote set of administrators that you don't have control over. There is a whole new way of accessing the system, and this changes how you test it.

Efficiency:-
When it comes to measuring the efficiency of systems, things are very different in the cloud. When systems are located on your premises, you have a direct view into power and cooling issues, temperature deltas, and the characteristics of new system architectures. In the cloud, everything is hosted remotely. Each cloud vendor has a different architecture and a different approach to optimizing the efficiency of your particular solution. So if you're testing for efficiency, lots of things are going to change from one cloud vendor to the next.


Global delivery:-
The cloud is ubiquitous. Inherently, it is everywhere, instantly. When you put an application in the cloud, you can get to it from virtually anywhere you can access the Internet. This global characteristic of the cloud raises special testing concerns. Your performance criterion for end-user response time is key, of course, but you also have to think about other issues—currency calculations, metrics, and all the different functions that make an application available globally in different languages, in different cultures, in different currency systems, and in different units of measure.

Immediacy:-
Immediacy has the biggest impact on quality assurance (QA). Immediacy enables elasticity, or the ability to expand or contract almost instantly. But beyond that dynamic, we are talking about the immediacy with which the application under test (AUT) becomes available. That means it can be developed faster, and it can be turned on and turned off like a light switch.

The speed at which we can deploy an application into a test environment creates unprecedented challenges for performance testing. Deployment time is just shy of instantaneous. You used to have perhaps three to four months to build the system under test, build the test bed, do the test planning, and identify the test requirements. Now you might have just minutes to complete the same tasks. The ease and speed at which you can create the system under test means there is now less time for testing.

Thursday, September 9, 2010

Cloud Computing for Performance testers - Part1

Before we get into the importance of cloud computing in the performance testing area, let's touch briefly on the characteristics of cloud computing...
Elasticity:-
When it comes to performance testing in the cloud, the characteristic of greatest concern is elasticity, or scalability.

From a test perspective, this is the familiar scale-up and scale-out story. There is no real difference here in terms of how the cloud is elastic, but there can be significant differences in the ability of the architecture to adapt to changing conditions.

When it comes to testing scale-up and scale-out capabilities, the cloud architecture might be clustered in a manner to automatically add more resources, such as CPU and memory capacity. It could add more nodes to the cluster. And it could automatically adapt to the load you put on the system. That is the elastic responsiveness of the cloud, which can be configured for an application. Then there is the flip side of cloud elasticity—the ability of the architecture to contract. In the cloud, you can actually have a system that scales down automatically when fewer resources are needed. This means you have to incorporate a test run that not only scales up when you need more capacity but scales down when you reduce the load on the system.

For example, you might shed memory and drop a few nodes from a cluster to reduce the cost of operating a service in the cloud.

Wednesday, September 8, 2010

Insight into "CLOUD COMPUTING"

At the most fundamental level, the cloud is a highly organized set of computing systems. It is composed of familiar resources in the world of computing—such as servers, networks, and storage.

If you’re testing a cloud-based application on a set of blade servers or a database server with storage behind it, you encounter the same familiar technologies you would find in on-site testing—such as microprocessors, memory, and networking.

In this sense, the cloud operates on the same premise as everyday computing. Servers connect to networks and networks connect to storage systems. The result is a platform that runs software applications. It just so happens that those applications are either running in a virtual machine or are running web services on a platform that is running on a virtual machine. The platform could be running a website or it could be running a collection of applications on top of virtual machines. Viewed from this level, the cloud doesn't change much—until you consider that these things are accessed via the Internet.

But the cloud has other interesting aspects for developers and performance testers. The access to the Internet—the bytes that go in and the bytes that come out—is billed on demand and has some instant-on capabilities. When consuming a cloud-based service or cloud-based application, we have unique operational models—such as billing on demand, deploying a virtual machine on demand, and creating a web service on demand.

If your code is ready to go, you can now have an application up and running in minutes and be billing customer transactions almost instantly.

With its on-demand and instant-on capabilities, the cloud offers inherent scalability. That’s implied by the very architecture of the cloud.

With its use of virtualization and web services, the cloud is both elastic and expansive. It will automatically expand itself to meet demand. This characteristic has far-reaching implications for performance testing.

Let’s take a closer look at the characteristics of the cloud that bring a new set of concerns for performance testing….to be continued in my next post. (Source:- www.hp.com)

Myths on "CLOUD COMPUTING"

The concept of the cloud has come to be associated with many things that actually aren’t the cloud, although they are closely connected to it. Let’s walk through some cloud definitions that can be potentially misleading, such as:

• “The cloud is just search engines and e-commerce sites.” Sure, the cloud is often associated with Internet search engines such as Google and Yahoo! and e-commerce sites such as Amazon. But these services pre-date today’s “cloud hype,” and we’ve been searching online for decades.

• “The cloud is a bunch of huge data centers located somewhere in the distance.” The leading players in Internet search are investing in enormous data centers, which summons up images of grid or matrix computing. But big data centers have been around for nearly 50 years.

• “The cloud is anything on the Internet, in a world of its own.” The cloud is sometimes seen as an Internet-only world that lives in its own space, in isolation from all the companies and individuals who use it every day. But we can access the Internet from our televisions, computers, cell phones, and cars, so the Internet is already everywhere.

• “The cloud is a reference to virtualized resources.” Virtualization is a term that is commonly associated with the cloud. There is a view that “if it’s in a virtual machine (VM), it’s in the cloud.” But then, some companies run very large private virtualized data centers that are not connected to the public Internet in any way.

• “The cloud is services.” Now this gets really confusing, because there are so many types of services associated with the cloud and the Internet. These include software-as-a-service (SaaS), storage-as-a-service, and applications-as-a-service, along with consulting services (uTest, mTurk), financial services (online banking), and even food services (remember WebVan?).

• “The cloud is network compute clusters.” These would-be high-performance compute clusters that deliver massive amounts of computing power over the Internet are often associated with references to the cloud, or grid computing systems.

While all of these concepts are closely associated with the cloud, none of them actually is the cloud. So let’s look at terms that come closer to describing the cloud in accurate terms in the next post. (Source:- www.hp.com)

Tuesday, September 7, 2010

CPU Utilization (Per User/Total/Server)

1) CPU utilization of the user process + CPU utilization of the system state => total CPU utilization at any given point in time.

2) Server-centric CPU utilization (user) = total CPU usage by worker threads inside the application server context. (A portion of CPU utilization (user) should technically be a part of server-centric CPU utilization.)

3) CPU utilization (user) = time spent by the CPU executing the user program. (Time consumed by the kernel working on the job on behalf of the user program does NOT fall into this category.)

Hence, server-centric CPU utilization (user) can be greater or less than the CPU utilization (user) of the entire box, because the boundaries of these two metrics are different. (For example, 100% of server-centric CPU utilization (user) might correspond to only 10% of CPU utilization (user).)
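
A tiny worked example of these relationships, using hypothetical numbers:

#include <stdio.h>

int main(void)
{
    /* Hypothetical sample values for one measurement interval */
    double cpu_user        = 40.0; /* % CPU spent in user-mode code     */
    double cpu_system      = 15.0; /* % CPU spent in the system state   */
    double app_server_user =  4.0; /* user-mode % consumed by the app
                                      server's worker threads           */

    /* 1) Total CPU utilization at this point in time */
    double cpu_total = cpu_user + cpu_system;              /* 55% */

    /* 2) Server-centric user CPU as a share of box-wide user CPU */
    double share = app_server_user / cpu_user * 100.0;     /* 10% */

    printf("Total CPU: %.1f%%\n", cpu_total);
    printf("Server-centric share of user CPU: %.1f%%\n", share);
    return 0;
}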

Tuesday, July 27, 2010

Stress Testing Vs Load Testing

Load testing examines the entire environment and database while measuring response times, whereas stress testing focuses on identified transactions, pushing them to a level intended to break the transactions or the system.
System stress testing, also known simply as stress testing, loads concurrent users over and beyond the level that the system can handle, so that it breaks at the weakest link within the entire system. During stress testing, transactions may be selectively stressed.
Another way of looking at it: stress testing exercises the system under heavy load with fewer resources in hand, whereas load testing means testing the system under full load with full resources in hand.

Monday, July 26, 2010

Hits per sec & CPU

As the hits per second increase....
CPU:- Increases with the increase in hits per second.

Relation between Hits Per sec, CPU Usage & Response times

As the hits per second increase....

Response times:-
1. If the application is robust enough to handle high user load, we don't see any change in the response times with respect to hits per second.
2. If the application is not robust enough and has performance bottlenecks, then response times gradually increase with the increase in hits per second.
3. Finally, any network delay that occurs intermittently will increase the response times of all transactions at that moment, and as a result hits per second and throughput decrease.

Relation Between Hits Per Sec & Throughput

As the hits per second increase....

Throughput:- Increases with increasing hits on the web server over a certain range; generally it also depends on the available bandwidth. Beyond a certain point the throughput becomes constant, and if hits per second keep increasing past that point, the throughput decreases.

So whenever we derive a relation between hits per second and throughput, within those threshold limits they are directly proportional to each other, as the sketch below illustrates...
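
A small sketch of that relation, with hypothetical numbers: throughput grows linearly with hits per second until the available bandwidth caps it.

#include <stdio.h>

/* Throughput demanded by the hit rate, capped by bandwidth */
double throughput_bps(double hits_per_sec,
                      double avg_response_bytes,
                      double bandwidth_bps)
{
    double demand = hits_per_sec * avg_response_bytes * 8.0; /* bits/sec */
    return demand < bandwidth_bps ? demand : bandwidth_bps;
}

int main(void)
{
    /* 50 hits/sec of 20 KB responses on a 10 Mbps link:
       still in the proportional range (8.2 Mbps < 10 Mbps) */
    printf("%.0f bps\n", throughput_bps(50.0, 20.0 * 1024, 10e6));
    return 0;
}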