LoadRunner Introduction

Performance Monitoring - overview and terminology
Performance monitoring ensures that you have up-to-date information about how your application is operating under load. It helps to identify bottlenecks and verify whether the application meets its performance objectives, by collecting metrics that characterize the application's behavior under different workload conditions (load, stress, or single-user operation). These metrics should then be correlated with those defined in the performance objectives. Examples of such metrics are: response time, throughput, and resource utilization (e.g. CPU, memory, disk I/O, network bandwidth).
Without a good understanding of these metrics, it is very difficult to draw the right conclusions and/or pinpoint the bottleneck when analyzing performance results.
Performance Terminology
Quantitative aspects of performance testing are gathered during the monitoring phase. Let's take a closer look at the main terms used in performance monitoring.
Two of the most important measures of system behavior are bandwidth and throughput. Bandwidth is a measure of capacity: the maximum rate at which the system can complete work. Throughput is the rate at which work requests are actually completed.
Throughput can vary depending on the number of users applied to the system under test. It is usually measured in requests per second. In some systems, throughput may go down when there are many concurrent users, while in other systems it remains constant under pressure but latency begins to suffer, usually due to queuing. How busy the various resources of a computer system get is known as their utilization.
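As a rough illustration of how these measures relate (the numbers are invented for the example), a resource's utilization can be estimated as throughput multiplied by average service time: if a disk handles 40 requests per second and each request needs 15 ms of service time on average, the disk is busy 40 x 0.015 = 0.6 of the time, i.e. about 60% utilized. As utilization approaches 100%, requests start to queue and latency grows.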
The key measures of the time it takes to perform specific tasks are queue time, service time and response time:
1. Service time measures how long it takes to process a specific customer work request. When a work request arrives at a busy resource and cannot be serviced immediately, the request is queued.
2. Queue time is the delay a request experiences while it waits in a queue before being serviced.
3. Response time is the most important metric. It can be measured at the server or at the client. Latency measured at the server is the time the server takes to complete the execution of the request; it does not include the client-to-server latency, i.e. the additional time needed for the request and the response to cross the network. Latency measured at the client includes the request queue time, the time taken by the server to complete the execution of the request, and the network latency.
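To make the distinction concrete, consider a purely illustrative breakdown: a request waits 20 ms in the server's queue, takes 70 ms of server execution time, and spends 60 ms crossing the network in both directions. The latency measured at the server is then 70 ms, while the response time observed at the client is 20 + 70 + 60 = 150 ms; comparing the two shows how much of the user-perceived delay is spent queuing and in the network.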
[Best Practices] Performance Monitoring - Guidelines
1. Start from a standard sampling interval. If the problem is more specific, or if you are able to pinpoint a suspected bottleneck, shorten the sampling interval.
2. Based on the sampling interval, decide on the entire monitoring session length. Sampling at frequent intervals should only be done for shorter runs.
3. Try to balance the number of objects you are monitoring and the sampling frequency, in order to keep the collected data within manageable limits.
4. Pick only monitors that are relevant to the nature of the application under test, so that the testing scenario is covered comprehensively while avoiding the redundancy of deploying similar monitors under different names.
5. Too many deployed counters may overburden the analysis and add performance overhead of their own.
6. Make sure the correct system configuration (for example, virtual memory size) is not overlooked. Although this is not exactly a part of the monitoring discipline, it may greatly affect the results of the test.
7. Decide on a policy towards remote machines. Either run the monitoring service on each remote machine for the duration of the run, collecting results locally and transferring them to the administrator in bulk at the end of the run, or continuously gather the metrics and stream them over the network to the administrator. Choose a policy based on the application under test and the defined performance objectives.
8. When setting thresholds, consider any "generic" recommendations set by hardware and/or operating system vendors (for example, average CPU usage should stay below 80% over a period of time, or disk queue length should be less than 2) as relevant for any test and application. It is nevertheless always worth correlating the monitoring results and load test response times with other metrics.
9. Choose the parameters that monitor the activity most relevant to the application and its performance objectives. Having too much data can overburden the analysis process.
10. Monitoring goals can be achieved not only by using built-in system or application objects and counters, but also by watching application-specific logs, scripts, XML files, etc. (a sketch of reporting such an application-specific value from a script appears after this list).
11. It may be a good idea to have a small number of basic monitors constantly running (for example, in HP SiteScope), with more detailed monitoring defined for the load testing scenario during test execution.
12. Measure metrics not only under load, but also for some period before and after the load test, to establish a "local baseline" and to verify that the application under test returns to that baseline once the load test is complete.
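Guideline 10 can be put into practice directly from a virtual user script. The following minimal sketch, written as a LoadRunner C Vuser action, publishes an application-specific value as a custom data point so that it can be graphed alongside the built-in monitors; the metric name and the hard-coded value are invented purely for illustration.

Action()
{
    double pending_orders;

    /* In a real script this value would come from the application
       under test, e.g. parsed from a server response or read from an
       application log; it is hard-coded here only to keep the sketch
       self-contained. */
    pending_orders = 42.0;

    /* lr_user_data_point() records a custom metric that the Controller
       collects and the Analysis tool can graph together with the
       standard monitors. */
    lr_user_data_point("pending_orders", pending_orders);

    return 0;
}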

[Best Practices] Performance Monitoring - Factors Affecting Performance
Although software development constantly strives for improvement, no application will ever be completely perfect. An application's performance, in turn, can only be judged against its performance objectives.
Performance problems affect all types of systems, regardless of whether they are client/server or web application systems. It is imperative to understand the factors affecting system performance before embarking on the task of handling them.
Generally speaking, the factors affecting performance may be divided into two large categories: project management oriented and technical.
Project Management Factors Affecting Performance
In the modern Software Development Life Cycle (SDLC), the main phases are subject to time constraints in order to address ever-growing competition. This causes the following project management issues to arise:
1. Shorter coding time in development may lead to a lower quality product due to a lack of concentration on performance
2. Information missed because of the rapid approach may invalidate the performance objectives
3. Inconsistent internal designs may be observed after product deployment, for example, too much cluttering of objects and convoluted screen navigation sequences
4. Higher probability of violating coding standards, resulting in unoptimized code that may consume too many resources
5. Module reuse in future projects may not be possible due to the project-specific design
6. Modules may not be designed for scalability.
7. The system may collapse due to a sudden increase in user load.
Technical Factors Affecting Performance
While project management related issues have a great impact on the output, technical problems may severely affect the application's overall performance. The problems may stem from the selection of the technology platform, which may be designed for a specific purpose and not perform well under different conditions. Usually, however, the technical problems arise from the developer's negligence regarding performance. A common practice among many developers is not to optimize the code at the development stage. Such code may unnecessarily consume scarce system resources such as memory and processor time, and this coding practice may lead to severe performance bottlenecks such as:
1) memory leaks (see the example after this list)
2) array bound errors
3) inefficient buffering
4) too many processing cycles
5) a large number of HTTP transactions
6) too many file transfers between memory and disk
7) inefficient session state management
8) thread contention under maximum concurrent load
9) poor architecture sizing for peak load
10) inefficient SQL statements
11) lack of proper indexing on the database tables
12) inappropriate configuration of the servers
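To make the first item above concrete, the fragment below is a generic C sketch, not taken from any particular application, of a request handler that leaks memory on every call, together with the corrected version.

#include <stdlib.h>
#include <string.h>

/* Leaky version: the buffer allocated for each request is never
   released, so memory usage grows with every call until the process
   degrades or fails under load. */
void handle_request_leaky(const char *payload)
{
    char *buffer = malloc(strlen(payload) + 1);
    if (buffer == NULL)
        return;
    strcpy(buffer, payload);
    /* ... process the request ... */
    /* missing free(buffer); */
}

/* Corrected version: every allocation is matched by a free(), so
   memory usage stays flat no matter how many requests arrive. */
void handle_request_fixed(const char *payload)
{
    char *buffer = malloc(strlen(payload) + 1);
    if (buffer == NULL)
        return;
    strcpy(buffer, payload);
    /* ... process the request ... */
    free(buffer);
}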

How does the LoadRunner tool work?
LoadRunner works by creating virtual users who take the place of real users operating client software, such as Internet Explorer sending requests using the HTTP protocol to IIS or Apache web servers.
Requests from many virtual user clients are generated by "Load Generators" in order to create a load on the various servers under test.
These load generator agents are started and stopped by the "Controller" program.
The Controller controls load test runs based on "Scenarios" invoking compiled "Scripts" and associated "Run-time Settings".
Scripts are crafted using the "Virtual User Generator" ("VuGen"); it generates C-language script code to be executed by virtual users by capturing network traffic between Internet application clients and servers.
With Java clients, VuGen captures calls by hooking within the client JVM.
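To give a feel for what the generated code looks like, here is a minimal hand-written sketch in the style of a VuGen web (HTTP/HTML) protocol script; the URL and transaction name are invented for the example.

Action()
{
    /* Group the request into a named transaction so the Controller
       and Analysis report its response time separately. */
    lr_start_transaction("home_page");

    /* A typical recorded step: fetch a page over HTTP; the argument
       list ends with the LAST keyword. */
    web_url("home_page",
        "URL=http://www.example.com/",
        "Resource=0",
        "RecContentType=text/html",
        "Mode=HTML",
        LAST);

    lr_end_transaction("home_page", LR_AUTO);

    /* Simulated user "think time" between steps. */
    lr_think_time(5);

    return 0;
}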
During runs, the status of each machine is monitored by the Controller.
At the end of each run, the Controller combines its monitoring logs with logs obtained from the load generators, and makes them available to the "Analysis" program, which can then create run-result reports and graphs for Microsoft Word, Crystal Reports, or HTML pages viewable in a web browser.
Each HTML report page generated by Analysis includes a link to results in a text file which Microsoft Excel can open to perform additional analysis.

Components of LoadRunner:
1. The Virtual User Generator captures end-user business processes and creates an automated performance testing script, also known as a virtual user script.
2. The Controller organizes, drives, manages, and monitors the load test.
3. The Load Generators create the load by running virtual users.
4. The Analysis helps you view, dissect, and compare the performance results.
5.The Launcher provides a single point of access for all of the LoadRunner components.