'''[[Mercury Interactive]]'s WinRunner''' is an automated [[regression testing]] tool that allows a user to record and play back test scripts. The software implements a proprietary Test Script Language that allows customization and parameterization of user input.


{{compu-soft-stub}}

==WinRunner (Advanced)==
1. There are basically two types of recording (see the sketch after this list):
§ Context sensitive recording
§ Analog recording
Context sensitive recording records the operations you perform on your application by identifying GUI objects.
Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.
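As a rough, hedged illustration of the difference (the window, object, and input below are made up), context sensitive recording produces object-level TSL statements, whereas analog recording produces coordinate-level statements:

 # Context sensitive recording: objects are identified by their logical names
 set_window("Flight Reservation", 5);
 button_press("Insert Order");

 # Analog recording: only raw coordinates, clicks, and keystrokes are captured
 move_locator_abs(250, 340);
 click("Left");
 type("John Smith");
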
2. A WinRunner script can be run in three basic modes:
(a) Verify
(b) Debug
(c) Update
Verify mode - Verify mode is used to check your application. WinRunner compares the current response of your application to its expected response. Any discrepancies between the current and expected responses are captured and saved as verification results.
Debug mode - Debug mode is used to identify bugs in a test script. Running a test in Debug mode is the same as running a test in Verify mode, except that debug results are always saved in the debug folder.
Update mode - Update mode is used to update the expected results of a test or to create a new expected results folder.

3. There are three types of checkpoints available in WinRunner (example statements follow the list):
§ GUI checkpoints - A GUI checkpoint helps you to identify changes in the look and behaviour of GUI objects in your application. The results of a GUI checkpoint are displayed in the GUI Checkpoint Results dialog box, which you open from the Test Results window.
§ Bitmap checkpoints - A bitmap checkpoint compares expected and actual bitmaps in your application. In the Test Results window you can view pictures of the expected and actual results. If a mismatch is detected by a bitmap checkpoint during a test run in Verify or Debug mode, the expected, actual, and difference bitmaps are displayed. For a mismatch during a test run in Update mode, only the expected bitmaps are displayed.
§ Database checkpoints - A database checkpoint helps you to identify changes in the contents and structure of databases in your application. The results of a database checkpoint are displayed in the Database Checkpoint Results dialog box, which you open from the Test Results window.
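In the test script these checkpoints appear as TSL statements such as the following (a hedged sketch; the object names, checklist files, and expected-results names are hypothetical):

 # GUI checkpoint on a single object
 obj_check_gui("Insert Order", "list1.ckl", "gui1", 1);

 # Bitmap checkpoint on a whole window, with a 5-second timeout
 win_check_bitmap("Flight Reservation", "Img1", 5);

 # Database checkpoint created from a saved checklist
 db_check("list1.cdl", "dbvf1");
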
4. We can edit the data in the Edit Check dialog box, which we open from the relevant Checkpoint Results dialog box. To do so, highlight the Content check and click the Edit Expected Value button.
5. Synchronization Point - Synchronization points enable you to solve anticipated timing problems between the test and your application. For example, if you create a test that opens a database application, you can add a synchronization point that causes the test to wait until the database records are loaded on the screen.
Functions and arguments for synchronization:
Stopping or pausing a test: You can stop or pause a test that is waiting at a synchronization statement by using the PAUSE or STOP softkeys.
Recording in Analog mode: When recording a test in Analog mode, you should press the SYNCHRONIZE BITMAP OF OBJECT/WINDOW or the SYNCHRONIZE BITMAP OF SCREEN AREA softkey to create a bitmap synchronization point. This prevents WinRunner from recording extraneous mouse movements. If you are programming a test, you can use the Analog TSL function wait_window to wait for a bitmap.
Data-driven testing: In order to use bitmap synchronization points in data-driven tests, you must parameterize the statements in your test script that contain them.
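In a context sensitive script, synchronization points typically appear as wait statements like these (a hedged sketch; the object names, property values, and timeouts are illustrative):

 # Wait up to 10 seconds for the OK button to become enabled
 obj_wait_info("OK", "enabled", 1, 10);

 # Wait up to 10 seconds for a previously captured bitmap of the status area to reappear
 obj_wait_bitmap("Status", "Img2", 10);
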

6. Initially in WinRunner we have four types of files:
§ Script File - a file which contains all the script of the test being used for testing the project in question.
§ GUI (Graphical User Interface) File - a file used to save the information that WinRunner has previously learnt about an individual GUI object, a window, or all the GUI objects within a window.
§ Checklist File - a file used to compare the behavior of a new version of your object or window with the behavior of a previous version.
§ Result File - a file that carries the information about errors and the executed script, line by line.
7. A GUI file contains the information that WinRunner has previously learnt about an individual GUI object, a window, or all the GUI objects within a window. Its contents include the class, logical name, ID, etc.
8. When a custom object is close to a similar standard class, the functionality of the custom object (for example, a custom button) is made dependent on that standard class. This is known as class mapping.
9. Yes, any script can be executed from another script by using the "call" statement. The call statement invokes the named script from the main script, given the exact path of the called script.
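A minimal sketch of such a call (the path is hypothetical; any parameters declared by the called test go inside the parentheses):

 # Invoke another test by its full path
 call "c:\\tests\\login" ();
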
10. tl_step() - divides a test script into sections and inserts a status message in the test results for the previous section.
report_msg() - the report_msg function inserts a message or expression into the test report.
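A hedged illustration of how these typically appear in a script (the step name and messages are made up):

 # tl_step(step_name, status, description): status 0 marks the step as passed,
 # any non-zero value marks it as failed
 tl_step("insert_order", 0, "Order inserted successfully");

 # Write a free-form message into the test report
 report_msg("Finished processing the order table");
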
11.) The startup file contains startup commands which are executed only once. The purpose of the startup file is to run once at the beginning of a session, so that WinRunner does not have to run those commands again and again.
12.) A checklist file is used to compare the behavior of a new version of your object or window with the behavior of a previous version. The checklist file is stored in the main script folder.
13.) GUI Map File per Test mode - every time you create a new test, WinRunner automatically creates a new GUI map file for the test. Whenever you save the test, WinRunner saves the corresponding GUI map file. The GUI map file is saved in the same folder as the test. Moving a test to a new location also moves the GUI map file associated with the test.
Global GUI Map File mode - enables you to create a GUI map file for your entire application, or for each window in your application. Multiple tests can reference a common GUI map file.
14. The GUI file can hold a maximum of 500 items; it can hold more, but those additional items may not function properly.
15. When using the Global GUI Map File mode, the GUI map file has to be loaded before invoking the application.
In GUI Map File per Test mode, WinRunner creates a new GUI map file whenever you create a new test. WinRunner saves the test's GUI map file whenever you save the test. When you open the test, WinRunner automatically loads the GUI map file associated with the test.
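In Global GUI Map File mode the map is usually loaded explicitly before the test drives the application, for example (a hedged sketch; the path is hypothetical):

 # Load a shared GUI map file at the start of the run
 GUI_load("c:\\gui_maps\\flight.gui");

 # ... test statements ...

 # Unload it when finished (GUI_close_all would unload every loaded map)
 GUI_close("c:\\gui_maps\\flight.gui");
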
16.) When we load a compiled module, its functions are automatically compiled and remain in memory, so we can call them directly from within any test. A compiled module is associated with the main test using the load or reload command.
The syntax is reload(module_name, module_type, module_appearance).
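A sketch of loading a compiled module from a test (the module path is hypothetical; the two numeric flags mark the module as a user or system module and control whether its window is shown):

 # Load the compiled module so its functions stay in memory
 load("c:\\modules\\flight_utils", 1, 1);

 # reload takes the same arguments and recompiles the module if it has changed
 reload("c:\\modules\\flight_utils", 1, 1);
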
17.) The GUI Spy lets you view the properties of any GUI object on your desktop, to see how WinRunner identifies it. You use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all the properties of an object, or only the selected set of properties that WinRunner learns.
18.) In the case of non-data-driven tests, the testing process is performed in three steps:
Creating a test
Running a test
Analyzing test results
When we create a data-driven test, we perform an extra two-part step between creating the test and running it: converting the test to a data-driven test and creating a corresponding data table.
For Notepad:

 # Data driven testing (Notepad)
 static infile = "c:\\Training_7\\Tajinder\\WR\\notepad.txt";
 file_close(infile);                          # make sure the file is not already open
 file_open(infile, FO_MODE_READ);
 while (file_getline(infile, line) == E_OK)
 {
     split(line, myarray, ",");               # one comma-separated record per line
 }
 file_close(infile);
 # Data driven testing (Excel)
 static excel = "c:\\Training_7\\Tajinder\\WR\\excel.xls";
 if (win_exists("Flight Reservation") == E_OK)
 {
     report_msg("Login Passed");
     set_window("Flight Reservation");
 }
 ddt_open(excel, DDT_MODE_READ);
 ddt_get_row_count(excel, num);
 for (i = 1; i <= num; i++)
 {
     ddt_set_row(excel, i);
     agent = ddt_val(excel, "AGENT");
     if (agent == user)                       # "user" is assumed to hold the agent name entered at login
     {
         dof = ddt_val(excel, "DATEOFFLIGHT");
         ffr = ddt_val(excel, "FLYFROM");
         fto = ddt_val(excel, "FLYTO");
         fno = ddt_val(excel, "FLIGHTNO");
         nam = ddt_val(excel, "NAME");
         tic = ddt_val(excel, "TICKETS");
         cls = ddt_val(excel, "CLASS");
     }
 }
 ddt_close(excel);
 # Data driven testing (Access database)
 db_connect("mysession", "DSN=sampledsn");
 db_execute_query("mysession", "select * from sampledb", num);
 for (i = 0; i < num; i++)
 {
     db_get_row("mysession", i, line);
     split(line, myarray, "\t");              # columns in each row are tab-separated
 }
 db_disconnect("mysession");
 # Data driven testing (Oracle database)
 db_connect("mysession", "DSN=tajdsn;uid=scott;pwd=tiger");
 db_execute_query("mysession", "select * from onlinecatalog", num);
 for (i = 0; i < num; i++)
 {
     db_get_row("mysession", i, r1);
     split(r1, myarray, "\t");
 }
 db_disconnect("mysession");
19.) User-defined functions are convenient when you want to perform the same
operation several times in a test script. Instead of repeating the code, you can
write a single function that performs the operation. This makes your test scripts
modular, more readable, and easier to debug and maintain.
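For instance, a minimal sketch of a user-defined function in TSL (the function, window, and object names are made up for illustration):

 # Declared "public" so that other tests and compiled modules can call it
 public function open_order(in order_num)
 {
     set_window("Flight Reservation", 5);
     menu_select_item("File;Open Order...");
     set_window("Open Order", 5);
     button_set("Order No.", ON);
     edit_set("Edit", order_num);
     button_press("OK");
     return(E_OK);
 }
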

To add a function to the Function Generator:
1. Open the Function Generator. (Choose Create > Insert Function > From Function Generator, click the Insert Function from Function Generator button on the User toolbar, or press the INSERT FUNCTION FROM FUNCTION GENERATOR softkey.)
2. In the Category box, click function table.
3. In the Function Name box, click generator_add_function.
4. Click Args. The Function Generator expands.
5. In the Function Generator, define the function_name, description, and
arg_number arguments:
6. For the function’s first argument, define the following arguments: arg_name,
arg_type, and default_value (if relevant).
7. Click Paste to paste the TSL statement into your test script.
8. Click Close to close the Function Generator.
20.) To change the logical name of a GUI object:
Assume that the GUI Map Editor window is open.
Select the window/object to be modified.
Click the Modify button.
A small Modify window will appear.
Change the logical name and press the OK button.
The operation is done.
21.) Windows often have varying labels. For example, the main window in a text
application might display a file name as well as the application name in the title
bar.
If WinRunner cannot recognize a window because its name changed after
WinRunner learned it, the Run wizard opens and prompts you to identify the
window in question. Once you identify the window, WinRunner realizes the
window has a varying label, and it modifies the window’s physical description
accordingly.
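One way to handle a varying label manually is to use a regular expression in the window's physical description in the GUI map; in WinRunner a property value beginning with "!" is treated as a regular expression. A hedged sketch for a hypothetical Notepad-style window:

 # Physical description matching "Untitled - Notepad", "readme.txt - Notepad", etc.
 {
     class: window,
     label: "!.*- Notepad"
 }
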
22.) There are two types of standard database checkpoints: Default and Custom. We can use a default check to check the entire contents of a result set, or a custom check to check the partial contents, the number of rows, and the number of columns of a result set. Information about which result set properties to check is saved in a checklist. WinRunner captures the current information about the database and saves this information as expected results. A database checkpoint is automatically inserted into the test script. This checkpoint appears in your test script as a db_check statement.
23.) User-defined functions are convenient when you want to perform the same
operation several times in a test script. Instead of repeating the code, you can
write a single function that performs the operation. This makes your test scripts
modular, more readable, and easier to debug and maintain.
24.) Batch mode determines whether WinRunner suppresses messages during a test run so that a test can run unattended. WinRunner also saves all the expected and actual results of a test run in batch mode in one folder, and displays them in one Test Results window.
1. Choose Settings > General Options. The General Options dialog box opens.
2. Click the Run tab.
3. Select the Run in batch mode check box.
4. Click OK to close the General Options dialog box.
For more information on setting the batch option, refer to the WinRunner documentation for the General Options dialog box.
25. To check a single broken link:
1. Choose Create > GUI Checkpoint > For Object/Window.
The WinRunner window is minimized to an icon, the mouse pointer turns into a pointing hand, and a help window opens.
2. Double-click a link on your Web page. The Check GUI dialog box opens, and the object is highlighted.
3. In the Objects column, make sure that the link is selected.
The Properties column indicates the properties available for you to check.
4. In the Properties column, select the BrokenLink check box.
5. Click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it. A combo box opens.
Select Valid or NotValid. Valid indicates that the link is active, and NotValid indicates that the link is broken.
6. Click OK to close the Check GUI dialog box.
WinRunner captures the object information and stores it in the test's expected results folder. The WinRunner window is restored and a checkpoint appears in your test script as an obj_check_gui or win_check_gui statement.
26. The script is stored in a folder that has the same name as the main test and is located in the same root directory.
27) The functionality of the GUI map file merge tool is that we can merge multiple GUI map files into a single GUI map file. In the merging process we should have one GUI map file as a target file. The target GUI map file can be an existing file or a new (empty) file. You can work with this tool in either automatic or manual mode. Once you merge GUI map files, you must also change the GUI map file mode, and modify your tests or your startup test to load the appropriate GUI map files.

28.) WinRunner enables you to monitor variables in a test script to help you debug your tests. You define the variables you want to monitor in a Watch List. As the test runs, you can view the values that are assigned to the variables.
29.) New features in WinRunner 7.6 include:
§ Significantly increase power and flexibility of tests without any programming
§ Use multiple verification types to ensure sound functionality
§ Verify data integrity in your back-end database
§ View, store and verify at a glance every attribute of tested objects
§ Maintain tests and build reusable scripts
§ Test multiple environments with a single application
§ Simplify creation of test scripts
§ Automatically identify discrepancies in data
§ Validate applications across browsers
§ Automatically recover tested applications from a crash
§ Leverage investments in other testing products

30.) The main test file contains minimal script; it mostly consists of call statements, whereas compiled module files contain the user-defined functions called from the main test. The commands in these files are loaded only once, by the startup file, which stores information about them.
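A hedged sketch of such a startup test, loading the shared GUI map and compiled modules once per session (all paths are hypothetical):

 # Startup test: executed once when WinRunner starts
 GUI_load("c:\\gui_maps\\flight.gui");
 load("c:\\modules\\flight_utils", 1, 1);     # compiled module with user-defined functions
 load("c:\\modules\\db_utils", 1, 1);
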
31) The getenv(environment_variable) function reads the current value of an environment variable.
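For example (a sketch; the variable name is arbitrary):

 # Read the TEMP environment variable and write it to the test report
 tmp_dir = getenv("TEMP");
 report_msg(tmp_dir);
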




==WinRunner (Basic)==
1) SDLC stands for Software Development Life Cycle. The SDLC is further divided into seven initial steps:
a) Initiate the project - The client identifies their business requirements.
b) Define the system - The marketing people of the software development team take the requirements from the client. The following information is recorded in the client's requirements:
§ Program function (what the program must do)
§ The form, format, data types and units for input
§ How exceptions, errors and deviations must be handled
c) Design the system - The system architecture team designs the system and writes the functional design document.
d) Build the system - The system specification and design documents are given to the development and test teams. The development team then codes the modules as per the requirements shown in the design document.
e) Test the system - The test team develops the test plan as per the requirements. The developed software is installed on the test platforms after unit testing is done by the developers. The test team then tests the software as per their test plan steps.
f) Deploy the system - Once the software is tested and certified, it is installed on the production platform. Demos are given to the client.
g) Support the system - After the software is in production, the maintenance phase of the lifecycle begins. At this point, the two teams resume their individual roles. The development team works with the development documentation staff to modify and enhance the application, whereas the test team works with the test documentation staff to verify and validate the changes and enhancements to the application software.
2) Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.
Testing involves operation of a system or application under controlled conditions and evaluating the results. The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should.
3.) The role of a QA tester calls for a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.
4.) To develop a test plan there are certain documents that must be referred to, such as the BRD, SSD, FSD, etc.
5) Automation tools save a large amount of time compared to manual testing. They are also easy to install and maintain.
6) A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The following are some of the items that might be included in a test plan, depending on the particular project:
· Title
· Identification of software including version/release numbers
· Revision history of document including authors, dates, approvals
· Table of Contents
· Purpose of document, intended audience
· Objective of testing effort
· Software product overview
· Relevant related document list, such as requirements, design documents, other test plans, etc.
· Relevant standards or legal requirements
· Traceability requirements
· Relevant naming conventions and identifier conventions
· Overall software project organization and personnel/contact-info/responsibilities
· Test organization and personnel/contact-info/responsibilities
· Assumptions and dependencies
· Project risk analysis
· Testing priorities and focus
· Scope and limitations of testing
· Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
· Outline of data input equivalence classes, boundary value analysis, error classes
· Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
· Test environment validity analysis - differences between the test and production systems and their impact on test validity.
· Test environment setup and configuration issues
· Software migration processes
· Software CM processes
· Test data setup requirements
· Database setup requirements
· Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
· Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
· Test automation - justification and overview
· Test tools to be used, including versions, patches, etc.
· Test script/test code maintenance processes and version control
· Problem tracking and resolution - tools and processes
· Project test metrics to be used
· Reporting requirements and testing deliverables
· Software entrance and exit criteria
· Initial sanity testing period and criteria
· Test suspension and restart criteria
· Personnel allocation
· Personnel pre-training needs
· Test site/location
· Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
· Relevant proprietary, classified, security, and licensing issues.
· Open issues
· Appendix - glossary, acronyms, etc.
7). A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
Test case contents are
§ Test case ID
§ Test Case Name
§ Test Case Description
§ Test Case Steps to be taken
§ Expected Result
§ Actual Result
§ Status
8) System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system
Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it
Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
9) Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
10) The testing lifecycle contains three main phases:
§ Pre-testing phase (e.g. test plans, test cases)
§ Testing phase (e.g. defect cycle)
§ Post-testing phase (e.g. status of testing)
11) Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing.
Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, and applications that run in web pages and on the server side. Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols.
12) What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time, database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
· Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
· What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
· Will down time for server and content maintenance/upgrades be allowed? how much?
· What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be tested?
· How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
· What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
· Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
· Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
· How will internal and external links be validated and updated? how often?
· Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
· How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
· How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
13) Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?
14). Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
15) The test metrics consist of the following:
§ Total tests
§ Tests run
§ Tests passed
§ Tests failed
§ Tests deferred
§ Tests passed the first time
16) Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
17)The essential fields of Test case are
§ Test Case ID
§ Test Case Name
§ Test Case Description
§ Test Case Procedure
§ Expected Result
§ Actual Result
§ Status
18) A defect is an irregularity found during testing and recorded as a bug; its record in the defect cycle contains the following fields:
§ Defect ID
§ Brief summary
§ Project/Version
§ Subject/Module
§ Status (NEW/OPENED/FIXED/CLOSED/REJECTED/RE-OPENED/PENDING)
§ Date (the date the defect was filed or reported to the server)
§ Assigned to
§ Priority level (1, 2, 3, 4)
§ Detected by (the tester's own name)
§ Severity level: 1 (very high), 2 (high), 3 (medium), 4 (low), 5 (very low)
20)
22) White box testing is of two types:
Unit Testing--the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses
Integration Testing--integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems
23) Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.
It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch.
If the code is well-commented and well-documented this makes changes easier for the developers.
Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.
Balance the effort put into setting up automated testing with the expected effort required to re-do them to deal with changes.
Try to design some flexibility into automated test scripts.
Focus initial automated testing on application aspects that are most likely to remain unchanged.
Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans)
25). SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors
==External links==

Revision as of 19:24, 14 October 2005

Mercury Interactive's WinRunner is an automated regression testing tool that allows a user to record and play back test scripts. The software implements a proprietary Test Script Language that allows customization and parameterization of user input.

Winrunner(Advance) 1.There are basically two types of recording;- § Context sensitive recording § Analog recording Context sensitive recording-----Records the operations you perform on your application by

                                                 identifying GUI  objects. 
Analog recording------In  Analog recording ,analog records keyboard input,mouse 
       clicks,and the precise x-and y-coordinates traveled by the mouse  
       pointer across the screen.

2. The Winrunner script can be run in three basic mode (a) VERIFY (b)DEBUG (c)UPDATE VerifyMode-Verify mode is used to check your application.Winrunner compares the current response of your application to its expected response .Any discrepancies between the current and expected responses are captured and saved as verification results. DebugMode---Debug mode is used to identify bugs in a test script.Running a test in a debug mode is the same as running a test in verify mode ,except that debug results are always saved in the ddebug folder. UpdateMode---Update mode is used to update the expected results of a test or to create a new expected results folder.

3. There are three types of Checkpoints avaible in Winrunner

        Those are;-

§ GUI checkpoints-A GUI checkpoint helps you to identify changes in the look and behaviour of GUI objects of your application.The results of a GUI checkpoint are displayed in the GUI checkpoint results dialog box that you open from the test results window. Bitmap checkpoints-- A bitmap checkpoint compares expected and actual bitmaps in your application. In the Test Results window you can view pictures of the expected and actual results. If a mismatch is detected by a bitmap checkpoint during a test run in Verification or Debug mode, the expected, actual, and difference bitmaps are displayed. For a mismatch during a test run in Update mode, only the expected bitmaps are displayed. Database checkpoints-- A database checkpoint helps you to identify changes in the contents and structure of databases in your application. The results of a database checkpoint are displayed in the Database Checkpoint Results dialog box that you open from the Test Results window. 4. We can edit the data in the Edit Check dialog box, which we open from the diffrent Checkpoint Results dialog box. To do so, highlight the Content check, and click the Edit Expected Value button. 5.Synchronization Point-- Synchronization points enable you to solve anticipated timing problems between the test and your application. For example, if you create a test that opens a database application, you can add a synchronization point that causes the test to wait until the database records are loaded on the screen. Functions & Arguments for synchronization Stopping or pausing a test: You can stop or pause a test that is waiting for a synchronization statement by using the pause or stop softkeys. Recording in Analog mode: When recording a test in Analog mode, you should press the SYNCHRONIZE BITMAP OF OBJECT/WINDOW or the SYNCHRONIZE BITMAP OF SCREEN AREA softkey to create a bitmap synchronization point. This prevents WinRunner from recording extraneous mouse movements. If you are programming a test, you can use the Analog TSL function wait_window to wait for a bitmap. Data-driven testing: In order to use bitmap synchronization points in datadriven tests, you must parameterize the statements in your test script that contain them.

6. Intially in Winrunner we have four types of files

    Those are :-

§ Script File- is a file which contains all the script of the test which is being used for testing of some undertaken project. § GUI(Graphic User Interface) File-Gui file is used to save the information of the previously learnt an individual GUI object, a window, or all the GUI objects within a window by WR. § CheckList File---Checklist file contains the behavior of a new version of your object or window with the behavior of a previous version. § Result File---Result file carries the information of the errors and executed script by lines of execuetion. 7.Gui file is a file whichcontains the information of the previously learnt an individual GUI object, window, or all the GUI objects within a window by WR.,The contents are class,logicname,ID etc. 8. When custom object is close to a similar standard class, then the functionality of a

     custom test button is dependent on a standard class. This is known as a class 
     mapping. 

9.Yes ,the anyscript can be executed from another script,by using “call” method.,The call method calls the mentioned script in the main script by giving the exact path of the called script. 10.tl_setp() --divides a test script into sections and inserts a status message in the test results for the previous section.

  report_msg( )--The report_msg function inserts a message or expression into the test report.

11.)The startup file contains the startup commands which gets executed only at once. The purpose of startup file is to run once during a session to avoid the win runner from unnecessarily running them again and again. 12.) Checklist file contains the behavior of a new version of your object or window with the behavior of a previous version.Checklist file is stored in a main script Folder. 13.)Gui MAp File per test . Every time you create a new test, WinRunner automatically creates a new GUImap file for the test. Whenever you save the test, WinRunner saves the corresponding GUI map file. The GUI map file is saved in the same folder as the test. Moving a test to a new location also moves the GUI map file associated with the test.

   Global Gui Map File-- mode enables you to create a GUI map file for your

entire application, or for each window in your application. Multiple tests can reference a common GUI map file. 14. The GUI File can hold maximum 500 hundred Items ,it can hold more but those will not function properly. 15. When using the Global GUI Map File the GUI file has to be loaded before

invoking the application.    

In the GUI Map File per Test mode, WinRunner creates a new GUI map file

whenever you create a new test. WinRunner saves the test’s GUI map file whenever 
you save the test. When you open the test, WinRunner automatically loads the GUI 
map file associated with the test.

16.) When we load a compiled module, its functions are automatically compiled and

     remain in memory. we can call them directly from within any test. A compiled 
     module is associated with the main test file using the reload command. 

The syntax used for it is reload(ModuleName,Module Type,Module Appearance). 17.)GUI Spy is a spy which view the properties of any GUI object on your desktop, to see how WinRunner identifies it. we use the Spy

     pointer to point to an object, and the GUI Spy displays the properties and their values 
     in the GUI Spy dialog box. You can choose to view all the properties of an object, or 
     only the selected set of properties that WinRunner learns.

18).In case of non-data tests ,the testing process is performed in three steps:

             Creating a test
             Running a test

Analyzing test results. When we create a data-driven test,we perform an extra two-part step between creating the test and running it: converting the test to a data-driven test and creating a corresponding data table.

For Notepad
  1. Data DRiven testing(Notepad)

static infile="c:\\Training_7\\Tajinder\\WR\\notepad.txt"; file_close(infile); file_open(infile,FO_MODE_READ); while(file_getline(infile,line) == E_OK) {

  split(line,myarray,",");

}file_close(infile);

  1. Data DRiven testing(Excel)

static excel="c:\\Training_7\\Tajinder\\WR\\excel.xls";

if (win_exists("Flight Reservation") == E_OK) { report_msg("Login Passed"); set_window("Flight Reservation"); }

ddt_close(excel); ddt_open(excel,DDT_MODE_READ); ddt_get_row_count(excel,num); for(i=1;i<=num;i++) { ddt_set_row(excel,i);

	 agent = ddt_val(excel,"AGENT");

if (agent == user) { dof = ddt_val(excel,"DATEOFFLIGHT"); ffr = ddt_val(excel,"FLYFROM"); fto = ddt_val(excel,"FLYTO"); fno = ddt_val(excel,"FLIGHTNO"); nam = ddt_val(excel,"NAME"); tic = ddt_val(excel,"TICKETS"); cls = ddt_val(excel,"CLASS");

  1. Data Driven testing (Access Database)

db_connect("mysession","DSN=sampledsn"); db_execute_query("mysession","select * from sampledb",num); for (i=0;i<num;i++) { db_get_row("mysession",i,line); split(line,myarray,"\t");

  1. Database Driven testing (Oracle Database)

db_connect("mysession","DSN=tajdsn;uid=scott;pwd=tiger"); db_execute_query("mysession","select * from onlinecatalog",num); for(i=0;i<=num;i++) {

db_get_row("mysession",i,r1);
split(r1,myarray,"\t");

19.) User-defined functions are convenient when you want to perform the same operation several times in a test script. Instead of repeating the code, you can write a single function that performs the operation. This makes your test scripts modular, more readable, and easier to debug and maintain.

    To add a function to the Function Generator:

1. Open the Function Generator. (Choose Create > Insert Function > From Function Generator, click the Insert Function from Function Generator button on the User toolbar, or press the INSERT FUNCTION FROM FUNCTION GENERATOR softkey.)

     2.   In the Category box, click function table.
     3.   In the Function Name box, click generator_add_function.
     4.   Click Args. The Function Generator expands.
     5.   In the Function Generator, define the function_name, description, and  
           arg_number arguments:
     6.   For the function’s first argument, define the following arguments: arg_name,     
           arg_type, and default_value (if relevant).
     7.   Click Paste to paste the TSL statement into your test script.
     8.   Click Close to close the Function Generator.

20.) To change the Logical name of the GUI

      Assume that the GUI Map Editor window is Open
      Select the Window/Object to be modified
      Click “On Modify Button”
    Small “Modify Window” will appear
    Change the Logical Name and press OK Button

The operation will be done. 21.) Windows often have varying labels. For example, the main window in a text application might display a file name as well as the application name in the title bar. If WinRunner cannot recognize a window because its name changed after WinRunner learned it, the Run wizard opens and prompts you to identify the window in question. Once you identify the window, WinRunner realizes the window has a varying label, and it modifies the window’s physical description accordingly. 22.) There are two types of standard database checkpoints: Default and Custom we can use a default check, to check the entire contents of a result set, or we can use a custom check, to check the partial contents, the number of rows, and the number of columns of a result set. Information about which result set properties to check is saved in a checklist. WinRunner captures the current information about the database and saves this information as expected results. A database checkpoint is automatically inserted into the test script. This checkpoint appears in your test script as a db_check statement. 23.) User-defined functions are convenient when you want to perform the same operation several times in a test script. Instead of repeating the code, you can write a single function that performs the operation. This makes your test scripts modular, more readable, and easier to debug and maintain. 24.) Batch Mode determines whether WinRunner suppresses messages during a test run so that a test can run unattended. WinRunner also saves all the expected and actual results of a test run in batch mode in one folder, and displays them in one Test Results window. 1 Choose Settings > General Options. The General Options dialog box opens.

   2 Click the Run tab.
   3 Select the Run in batch mode check box.
   4 Click OK to close the General Options dialog box.
      For more information on setting the batch option in the General Options dialog

25. To check a single broken link: 1.Choose Create > GUI Checkpoint > For Object/Window. The WinRunner window is minimized to an icon, the mouse pointer turns into a pointing hand, and a help window opens. 2.Double-click a link on your Web page. The Check GUI dialog box opens, and the object is highlighted. 3. In the Objects column, make sure that the link is selected. The Properties column indicates the properties available for you to check. 4. In the Properties column, select the BrokenLink check box. 5. In the Properties column, select the BrokenLink check box. Click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it. A combo box opens. Select Valid or NotValid. Valid indicates that the link is active, and NotValid indicates that the link is broken. 6.Click OK to close the Check GUI dialog box. WinRunner captures the object information and stores it in the test’s expected results folder. The WinRunner window is restored and a checkpoint appears in your test script as an obj_check_gui or win_check_gui statement. 26. The Script gets stored in a folder that has the same name as main file and in the same root directory. 27) The fuctionality of GUI Merge file is that ,we can merge multiple GUI file into a single GUI file.In a merging process we should have one GUI map file as a target file. The target GUI map file can be an existing file or a new (empty) file. You can work with this tool in either automatic or manual mode. Once you merge GUI map files, you must also change the GUI map file mode,

     and modify your tests or your startup test to load the appropriate GUI map files.

28.) WinRunner enables you to monitor variables in a test script. to help you debug your tests. You define the variables you want to monitor in a Watch List. As the test runs,

     you can view the values that are assigned to the variables.

29.)New in WR 7.6 are:- § Significantly increase power and flexibility of tests without any programming § Use multiple verification types to ensure sound functionality § Verify data integrity in your back-end database § View, store and verify at a glance every attribute of tested objects § Maintain tests and build reusable scripts § Test multiple environments with a single application § Simplify creation of test scripts § Automatically identify discrepancies in data § Validate applications across browsers § Automatically recover tested applications from a crash § Leverage investments in other testing products

30.) The main test file contains minimum script in its file ,the script in main is mostly crowded with the call statements, where as the compiled module files are the files which cointains the user defined functions called in the main test .The commands in these files are loaded only at once by startup file.which stores the information of it. 31) The getenv (environment_variable) function reads the current value of an

     environment variable



WinRunner-(Basic) 1)SDLC stands for Software Development Life Cycle. The SDLC further devided into seven intial steps. a) Initiate the Project-The clients identifies their bussiness requirements. b) Define the Sys tem-The Marketing people of the Software Development team ,takes the requirements from the client.The following information is recorded in the clients requirements. § Program Function (What the program must do) § The form,format,data types and units for input. § How exceptions,errors and deviations must be handled. c) Design the system-The system Architecture team design the system and write the functional Design Document. d) Build the system-The system specification & design Document are given to the development and test team .Then the development team code the modules as per requirement shown in the design document e) Test the system-The test team develop the test plan as per the requirement.The Developed Software is installed on the test platforms after the unit testing done by developers.The test team then test the software as per their test plan steps. f) Deploy the system-Once the software is tested and certified ,the software is installed on the production platform .The demos are given to client. g) Support the system-After the software is in production the maintence phase of lifecycle begins.at this point ,the two teams resumes their individual roles .The development team works with the development document staff to modify and enhace the application ,where as the test team works with the test documentation staff to verify and validate the changes and enhacements to the application software. 2) Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. Testing involves operation of a system or application under controlled conditions and evaluating the results. The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. 3.)The role of a QA Tester is to 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited. 4.)To develop a test plan there are certain set of documents required to refer,those are, BRD,SSD,FSD etc. 5)Automation tools save a large slots of time as compared to Manual during testing.These are easy to install and maintain etc. 6) A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The following are some of the items that might be included in a test plan, depending on the particular project: · Title · Identification of software including version/release numbers · Revision history of document including authors, dates, approvals · Table of Contents · Purpose of document, intended audience · Objective of testing effort · Software product overview · Relevant related document list, such as requirements, design documents, other test plans, etc. 
· Relevant standards or legal requirements · Traceability requirements · Relevant naming conventions and identifier conventions · Overall software project organization and personnel/contact-info/responsibilties · Test organization and personnel/contact-info/responsibilities · Assumptions and dependencies · Project risk analysis · Testing priorities and focus · Scope and limitations of testing · Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable · Outline of data input equivalence classes, boundary value analysis, error classes · Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems · Test environment validity analysis - differences between the test and production systems and their impact on test validity. · Test environment setup and configuration issues · Software migration processes · Software CM processes · Test data setup requirements · Database setup requirements · Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs · Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs · Test automation - justification and overview · Test tools to be used, including versions, patches, etc. · Test script/test code maintenance processes and version control · Problem tracking and resolution - tools and processes · Project test metrics to be used · Reporting requirements and testing deliverables · Software entrance and exit criteria · Initial sanity testing period and criteria · Test suspension and restart criteria · Personnel allocation · Personnel pre-training needs · Test site/location · Outside test organizations to be utilized and their purpose, responsibilties, deliverables, contact persons, and coordination issues · Relevant proprietary, classified, security, and licensing issues. · Open issues · Appendix - glossary, acronyms, etc. 7). A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

  Test case contents are

§ Test case ID § Test Case Name § Test Case Description § Test Case Steps to be taken § Expected Result § Actual Result § Status 8) System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems. End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans. Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails. Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc. 9) Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality. White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions. 10)The testing lifecycle contains three main modules

      Those are:-

§ Pre-Testing Plan(e.g Test Plans,test cases) § Testing Phase(e.g defect cycle) § Port Testing (e.g status of testing) 11) Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages and applications that run on the server side Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. · 12) What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times). What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)? · Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they by using? Are they intra- organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)? · What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)? · Will down time for server and content maintenance/upgrades be allowed? how much? · What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be tested? · How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing? · What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.? · Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers? · Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?? · How will internal and external links be validated and updated? how often? · Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing? · How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing? · How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested? 13) Which functionality is most important to the project's intended purpose? Which functionality is most visible to the user? Which functionality has the largest safety impact? Which functionality has the largest financial impact on users? Which aspects of the application are most important to the customer? 
13) Which functionality is most important to the project's intended purpose? Which functionality is most visible to the user? Which functionality has the largest safety impact? Which functionality has the largest financial impact on users? Which aspects of the application are most important to the customer? Which aspects of the application can be tested early in the development cycle? Which parts of the code are most complex, and thus most subject to errors? Which parts of the application were developed in rush or panic mode? Which aspects of similar/related previous projects caused problems? Which aspects of similar/related previous projects had large maintenance expenses? Which parts of the requirements and design are unclear or poorly thought out? What do the developers think are the highest-risk aspects of the application? What kinds of problems would cause the worst publicity? What kinds of problems would cause the most customer service complaints? What kinds of tests could easily cover multiple functionalities? Which tests will have the best high-risk-coverage to time-required ratio?

14) Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
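One common way to automate such re-testing is to compare current results against results saved from an earlier, known-good run. The Python sketch below is only an illustration of that idea; the function under test and the baseline file name are invented for this example.

 # Golden-file regression check: re-run after every fix and compare to the saved baseline.
 import json
 from pathlib import Path

 def feature_under_test(x):
     """Hypothetical piece of functionality whose behaviour must not regress."""
     return {"input": x, "squared": x * x}

 BASELINE = Path("expected_results.json")   # assumed name for the saved expected results

 actual = [feature_under_test(n) for n in range(5)]

 if not BASELINE.exists():
     # First run: record the expected results (comparable to an update-mode run).
     BASELINE.write_text(json.dumps(actual, indent=2))
     print("Baseline created.")
 else:
     expected = json.loads(BASELINE.read_text())
     print("PASS" if actual == expected else "FAIL - regression detected")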

15) The test metrics consist of the following (a small tally sketch appears after this list):
§ Total tests
§ Tests run
§ Tests passed
§ Tests failed
§ Tests deferred
§ Tests passed the first time
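Purely as an illustration of how these counts relate, the outcome data in the Python sketch below is invented, not output from any real test run.

 # Tally the basic test metrics from a list of (test name, status) outcomes.
 from collections import Counter

 outcomes = [
     ("login_valid_user",   "passed"),
     ("login_invalid_user", "failed"),
     ("search_basic",       "passed"),
     ("report_export",      "deferred"),
     ("logout",             "passed"),
 ]

 counts = Counter(status for _, status in outcomes)
 tests_run = counts["passed"] + counts["failed"]   # deferred tests were not run
 # "Tests passed the first time" would additionally need per-run history, omitted here.

 print(f"Total tests:    {len(outcomes)}")
 print(f"Tests run:      {tests_run}")
 print(f"Tests passed:   {counts['passed']}")
 print(f"Tests failed:   {counts['failed']}")
 print(f"Tests deferred: {counts['deferred']}")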
16) Performance testing - a term often used interchangeably with 'stress' and 'load' testing. Ideally, 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or in QA or test plans.

17) The essential fields of a test case are (a minimal sketch follows this list):
§ Test Case ID
§ Test Case Name
§ Test Case Description
§ Test Case Procedure
§ Expected Result
§ Actual Result
§ Status
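The fields above can be pictured as a simple record. The Python sketch below is hypothetical; the field values are invented examples rather than content from this article.

 # A test case record carrying the essential fields listed above.
 from dataclasses import dataclass

 @dataclass
 class TestCase:
     case_id: str
     name: str
     description: str
     procedure: str
     expected_result: str
     actual_result: str = ""
     status: str = "NOT RUN"   # e.g. NOT RUN / PASSED / FAILED

 # Invented example values, purely for illustration.
 tc = TestCase(
     case_id="TC-001",
     name="Login with valid credentials",
     description="Verify that a registered user can log in.",
     procedure="Open the login page; enter a valid user/password; click Login.",
     expected_result="User is taken to the home page.",
 )
 tc.actual_result = "User is taken to the home page."
 tc.status = "PASSED" if tc.actual_result == tc.expected_result else "FAILED"
 print(tc.case_id, tc.status)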

18) A defect is an irregularity found during testing and reported as a bug; the defect report passes through a cycle and records certain fields (a minimal sketch follows this list). Those are:
§ Defect ID
§ Brief Summary
§ Project/Version
§ Subject/Module
§ Status (NEW/OPENED/FIXED/CLOSED/REJECTED/RE-OPEN/PENDING)
§ Date (the date the defect is filed or reported to the server)
§ Assigned To
§ Priority Level (1, 2, 3, 4)
§ Detected By (own name)
§ Severity Level: 1 (very high rate of bugs), 2 (high), 3 (medium), 4 (low), 5 (very low)
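As a rough, hypothetical sketch of the fields and status cycle above, the Python example below invents one plausible set of status transitions (real defect workflows vary between teams) and uses made-up example values.

 # Defect record with the fields above, plus a simple status-transition check.
 from dataclasses import dataclass

 # One plausible reading of the defect life cycle; real workflows differ.
 ALLOWED_TRANSITIONS = {
     "NEW":      {"OPENED", "REJECTED"},
     "OPENED":   {"FIXED", "PENDING", "REJECTED"},
     "PENDING":  {"OPENED"},
     "FIXED":    {"CLOSED", "RE-OPEN"},
     "RE-OPEN":  {"FIXED"},
     "REJECTED": {"RE-OPEN", "CLOSED"},
     "CLOSED":   set(),
 }

 @dataclass
 class Defect:
     defect_id: str
     brief_summary: str
     project_version: str
     subject_module: str
     assigned_to: str
     detected_by: str
     priority: int          # 1 (highest) to 4 (lowest)
     severity: int          # 1 (very high rate of bugs) to 5 (very low)
     status: str = "NEW"

     def move_to(self, new_status):
         if new_status not in ALLOWED_TRANSITIONS[self.status]:
             raise ValueError(f"cannot move {self.status} -> {new_status}")
         self.status = new_status

 # Invented example values, purely for illustration.
 bug = Defect("DEF-042", "Crash on empty search string", "1.2", "Search",
              assigned_to="dev1", detected_by="tester1", priority=2, severity=1)
 bug.move_to("OPENED")
 bug.move_to("FIXED")
 bug.move_to("CLOSED")
 print(bug.defect_id, bug.status)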

22) White box testing is of two types:

§ Unit testing - the most 'micro' scale of testing; used to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; it may require developing test driver modules or test harnesses. A minimal unit-test sketch appears after this list.
§ Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
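A minimal illustration of unit testing a single function in isolation, using Python's unittest module; the function being tested is invented for this sketch.

 # Unit test for one small function, run independently of the rest of the application.
 import unittest

 def apply_discount(price, percent):
     """Hypothetical code module under test: price after a percentage discount."""
     if not 0 <= percent <= 100:
         raise ValueError("percent must be between 0 and 100")
     return round(price * (1 - percent / 100), 2)

 class ApplyDiscountTest(unittest.TestCase):
     def test_typical_discount(self):
         self.assertEqual(apply_discount(200.0, 10), 180.0)

     def test_zero_discount_leaves_price_unchanged(self):
         self.assertEqual(apply_discount(99.99, 0), 99.99)

     def test_invalid_percent_is_rejected(self):
         with self.assertRaises(ValueError):
             apply_discount(50.0, 150)

 if __name__ == "__main__":
     unittest.main()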

23) Work with the project's stakeholders early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance if possible. Other useful practices include:
§ It's helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch.
§ If the code is well-commented and well-documented, changes are easier for the developers.
§ Use rapid prototyping whenever possible to help customers feel sure of their requirements and to minimize changes.
§ The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
§ Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
§ Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
§ Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.
§ Balance the effort put into setting up automated testing against the expected effort required to redo the tests to deal with changes.
§ Try to design some flexibility into automated test scripts.
§ Focus initial automated testing on application aspects that are most likely to remain unchanged.
§ Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
§ Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or to set up only higher-level, generic-type test plans).

25) SEI = 'Software Engineering Institute' at Carnegie Mellon University; initiated by the U.S. Defense Department to help improve software development processes. CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It is a model of five levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.

External links