Database testing

From Wikipedia, the free encyclopedia

Databases, the collections of interconnected files on a server that store information, may not all deal with the same type of data; that is, databases may be heterogeneous. As a result, many kinds of implementation and integration errors may occur in large database systems, which negatively affect the system's performance, reliability, consistency and security. Thus, it is important to test database systems in order to obtain a system that satisfies the ACID properties (atomicity, consistency, isolation, and durability) of a database management system.[1]

Purposes

Database testing usually consists of a layered process, including the user interface (UI) layer, the business layer, the data access layer and the database itself. The UI layer deals with the interface design of the database,[citation needed] while the business layer includes databases supporting business strategies. The most critical layer is the data access layer, which deals with databases directly during the communication process. Database testing mainly takes place at this layer and involves testing strategies such as quality control and quality assurance of the product databases.[2] Testing at these different layers is frequently used to maintain consistency of database systems, most commonly seen in the following examples:

  • Data is critical from a business point of view. Companies whose business depends on data storage, such as Google or Symantec, need a durable and consistent database system. If database operations such as insert, delete, and update are performed without first testing the database for consistency, the company risks a crash of the entire system.
  • Some companies have different types of databases, and also different goals and missions. In order to achieve a level of functionality to meet said goals, they need to test their database system.
  • The current approach, in which developers themselves formally test the databases, may not be sufficient. It is not sufficiently effective because communication gaps between database developers and testers tend to slow the testing process down; a separate database testing team seems advisable.
  • Database testing mainly deals with finding errors in the databases so as to eliminate them. This will improve the quality of the database or web-based system.
  • Database testing should be distinguished from strategies to deal with other problems such as database crashes, broken insertions, deletions or updates. Here, database refactoring is an evolutionary technique that may apply.

Types of testing and processes

Black box and white box testing in database testing

The figure indicates the areas of testing involved during different database testing methods, such as black-box testing and white-box testing.

Black Box Testing in database testing

Black box testing involves testing interfaces and the integration of the database, which includes:

  1. Mapping of data (including metadata)
  2. Verifying incoming data
  3. Verifying outgoing data from query functions
  4. Applying techniques such as cause-effect graphing, equivalence partitioning and boundary-value analysis

With the help of these techniques, the functionality of the database can be tested thoroughly.
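As a sketch of how such a black box test might look in practice, boundary-value analysis can be applied at the public insert interface. The table, constraint, and helper function below are hypothetical, using Python's built-in sqlite3 module:

```python
import sqlite3

# Hypothetical schema for illustration: an "accounts" table whose
# balance must be non-negative, enforced by a CHECK constraint.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY,"
    " balance INTEGER CHECK (balance >= 0))"
)

def insert_account(balance):
    """Public interface under test: returns True if the row was accepted."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO accounts (balance) VALUES (?)", (balance,))
        return True
    except sqlite3.IntegrityError:
        return False

# Boundary-value analysis around the constraint balance >= 0:
assert insert_account(-1) is False   # just below the boundary: rejected
assert insert_account(0) is True     # on the boundary: accepted
assert insert_account(1) is True     # just above the boundary: accepted

# Verifying outgoing data: only the two valid rows may be present.
rows = conn.execute("SELECT balance FROM accounts ORDER BY balance").fetchall()
assert rows == [(0,), (1,)]
```

The test exercises only the interface (insert and query); it needs no knowledge of how the constraint is implemented internally, which is what makes it a black box test.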

Pros and cons of black box testing include: Test case generation in black box testing is fairly simple, and generation is completely independent of software development, so it can be done at an early stage of development. As a consequence, the programmer has better knowledge of how to design the database application and spends less time debugging. The cost of developing black box test cases is lower than that of white box test cases. The major drawback of black box testing is that it is unknown how much of the program is being tested; also, certain errors cannot be detected.[3]

White Box Testing in database testing

White box testing deals mainly with the internal structure of the database; the specification details remain hidden from the user.

  1. It involves the testing of database triggers and logical views which are going to support database refactoring.
  2. It performs module testing of database functions, triggers, views, SQL queries etc.
  3. It validates database tables, data models, database schema etc.
  4. It checks rules of Referential integrity.
  5. It selects default table values to check on database consistency.
  6. It applies techniques such as condition coverage, decision coverage, statement coverage and cyclomatic complexity.

The main advantage of white box testing in database testing is that coding errors are detected, so internal bugs in the database can be eliminated. The limitation of white box testing is that SQL statements are not covered.
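A minimal white box sketch, assuming a hypothetical orders table with an audit trigger; the test exercises the trigger's internal logic directly (Python with sqlite3):

```python
import sqlite3

# Hypothetical schema for illustration: every status change on "orders"
# must be recorded in "audit_log" by a trigger. White box testing targets
# the trigger body itself, not just the external interface.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE audit_log (order_id INTEGER, old_status TEXT, new_status TEXT);
    CREATE TRIGGER trg_audit AFTER UPDATE OF status ON orders
    BEGIN
        INSERT INTO audit_log VALUES (OLD.id, OLD.status, NEW.status);
    END;
""")

conn.execute("INSERT INTO orders VALUES (1, 'new')")
conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")

# Statement coverage of the trigger body: the INSERT inside it must have fired
# exactly once, with the correct old and new values.
log = conn.execute("SELECT * FROM audit_log").fetchall()
assert log == [(1, 'new', 'shipped')]
```

Because the test asserts on the audit table written by the trigger, it verifies internal behavior that a purely interface-level test would not observe.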

The WHODATE approach for database testing

WHODATE approach for SQL statement transformation

While generating test cases for database testing, the semantics of the SQL statements need to be reflected in the test cases. For that purpose, a technique called WHite bOx Database Application TEchnique (WHODATE) is used. As shown in the figure, SQL statements are independently converted into general-purpose language (GPL) statements, followed by traditional white box testing to generate test cases that include the SQL semantics.[4]
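A hand-worked illustration of the idea (not the published WHODATE tool): the WHERE clause of a hypothetical SQL statement is rewritten as an equivalent procedural predicate, so conventional branch and condition coverage can be measured on it with ordinary white box tooling:

```python
# Hypothetical SQL statement under test:
#   SELECT * FROM employees WHERE salary > 50000 AND dept = 'R&D'
#
# Its WHERE clause translated into GPL (here, Python) statements; each
# condition becomes an explicit branch that coverage tools can track.

def where_predicate(row):
    """Procedural translation of the WHERE clause."""
    if not (row["salary"] > 50000):   # branch 1: salary condition
        return False
    if not (row["dept"] == "R&D"):    # branch 2: department condition
        return False
    return True

# Test cases derived for condition coverage: each condition is made
# true and false at least once.
assert where_predicate({"salary": 60000, "dept": "R&D"}) is True
assert where_predicate({"salary": 40000, "dept": "R&D"}) is False
assert where_predicate({"salary": 60000, "dept": "Sales"}) is False
```

The same test inputs can then be replayed against the real SQL statement, giving test cases that reflect its semantics.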

Four stages of database testing

  • Set fixture
  • Test run
  • Outcome verification
  • Tear down

A set fixture describes the initial state of the database before entering the testing. After setting fixtures, database behavior is tested for defined test cases. Depending on the outcome, test cases are either modified or kept as is. The "tear down" stage either results in terminating testing or continuing with other test cases.[5]

For successful database testing, the following workflow is commonly executed by each single test:

  1. Clean up the database: If the testable data is already present in the database, the database needs to be emptied.
  2. Set up Fixture: A tool like PHPUnit will then iterate over fixtures and do insertions into the database.
  3. Run test, verify outcome and then tear down: After the database has been emptied and the fixtures loaded, the test is run and the output is verified. If the output is as expected, the tear down process follows; otherwise testing is repeated.[citation needed]
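The workflow above can be sketched with Python's unittest in place of PHPUnit; the table and fixture data are hypothetical:

```python
import sqlite3
import unittest

class UserTableTest(unittest.TestCase):
    def setUp(self):
        # Clean up: a fresh in-memory database guarantees an empty initial state.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        # Set up fixture: iterate over fixture rows and insert them.
        fixtures = [(1, "alice"), (2, "bob")]
        self.conn.executemany("INSERT INTO users VALUES (?, ?)", fixtures)

    def test_delete_user(self):
        # Run test: exercise a delete operation against the fixture data.
        self.conn.execute("DELETE FROM users WHERE name = ?", ("alice",))
        # Verify outcome: exactly one row must remain.
        count = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, 1)

    def tearDown(self):
        # Tear down: release the database so the next test starts clean.
        self.conn.close()

# Run the cycle once programmatically.
result = unittest.TestResult()
UserTableTest("test_delete_user").run(result)
assert result.wasSuccessful()
```

Because setUp and tearDown bracket every test method, each test sees the same known fixture state regardless of what earlier tests did.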

Some problems in database testing

  1. The setup for database testing is costly and complex to maintain because database systems are constantly changing with expected insert, delete and update operations.
  2. Extra overhead is involved in order to determine the state of the database transactions.
  3. After cleaning up the database, new test cases have to be designed.[citation needed]
  4. An SQL generator is needed to transform SQL statements in order to include the SQL semantic into database test cases.

Basic techniques

  • SQL Query Analyzer is a helpful tool when using Microsoft SQL Server.[citation needed]
  • One commonly used function,[vague] create_input_dialog["label"], is used to validate the output with user inputs.
  • The design of forms for automated database testing, both front-end and back-end, is helpful to database maintenance workers.
  • Data load testing:
    • For data load testing, knowledge about source database and destination database is required.
    • Workers check the compatibility between source database and destination database using the DTS package.
    • When updating the source database, workers make sure to compare it with the target database.
    • Database load testing measures the capacity of the database server to handle queries as well as the response time of database server and client.[6]
  • In database testing, issues such as atomicity, consistency, isolation, durability, integrity, execution of triggers, and recovery are often considered.
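The database load testing mentioned above can be sketched as follows; the table name, row counts and query mix are illustrative only (Python with sqlite3):

```python
import sqlite3
import time

# Populate a hypothetical "events" table with a batch of rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO events (payload) VALUES (?)",
    [("x" * 100,) for _ in range(10_000)],
)

# Fire a batch of queries and record the response time of each.
timings = []
for i in range(100):
    start = time.perf_counter()
    conn.execute("SELECT COUNT(*) FROM events WHERE id > ?", (i * 50,)).fetchone()
    timings.append(time.perf_counter() - start)

avg_ms = 1000 * sum(timings) / len(timings)
worst_ms = 1000 * max(timings)
print(f"avg {avg_ms:.3f} ms, worst {worst_ms:.3f} ms per query")
```

A real load test would issue the queries concurrently from multiple clients against the production database server; the sketch only shows the measurement pattern of capacity versus response time.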

References

  1. ^ Korth, Henry (2010). Database System Concepts. McGraw-Hill. ISBN 978-0-07-352332-3.
  2. ^ Ambler, Scott (2003). Agile Database Techniques: Effective Strategies for the Agile Software Developer. Wiley. ISBN 978-0-471-20283-7.
  3. ^ Pressman, Roger (1994). Software Engineering: A Practitioner's Approach. McGraw-Hill Education. ISBN 978-0-07-707732-7.
  4. ^ Zhang, Yanchun (1999). Cooperative databases and applications '99: the proceedings of the Second International Symposium on Cooperative Database Systems for Advanced Applications (CODAS '99), Wollongong, Australia, March 27–28, 1999. Springer. ISBN 978-981-4021-64-7. 
  5. ^ Kan, Stephen. Metrics & Models in Software Quality Engineering. Pearson Education. ISBN 978-81-297-0175-6. 
  6. ^ "InfoWorld". InfoWorld Media Group, Inc. 15 Jan 1996. 
