Informal Methods (Validation and Verification)

From Wikipedia, the free encyclopedia
For more on validation and verification, see Verification and Validation.

Informal methods of validation and verification are among the most frequently used in modeling and simulation. They are called informal because they are more qualitative than quantitative.[2] Whereas many methods of validation or verification rely on numerical results, informal methods tend to rely on the opinions of experts to draw a conclusion. While numerical results are not the primary focus, this does not mean that they are ignored entirely. There are several reasons why an informal method might be chosen. In some cases, informal methods offer the convenience of quick testing to see if a model can be validated. In other instances, informal methods are simply the best available option. In all cases, it is important to note that informal does not mean less rigorous as a testing method. These methods should be performed with the same discipline and structure that one would expect of formal methods; when executed in such a way, solid conclusions can be drawn.[3]

In modeling and simulation, verification techniques are used to analyze the state of the model. Verification is completed by different methods, with the focus of comparing different aspects of the executable model with the conceptual model. Validation methods, on the other hand, are those by which a model, either conceptual or executable, is compared with the situation it is trying to model. Both are means by which the model can be analyzed to help find defects in the modeling methods being used, or potential misrepresentations of the real-life situation.



Inspection

Inspection is a verification method used to determine how closely the executable model matches the conceptual model. Teams of experts, developers, and testers thoroughly scan the content (algorithms, programming code, documents, equations) of the original conceptual model and compare it with the appropriate counterpart to verify how closely the executable model matches.[2] One of the main purposes of this method of verification is to identify any original goals that have been overlooked. By performing an inspection of the model, the team can not only see what issues might have been overlooked, but also catch potential flaws that could become an issue later in the project.[1]

Depending on the resources available, the members of the inspection team may or may not be part of the model production team. Preferably they are separate groups; when they come from the same group, issues can be overlooked, since the group members have already spent time looking at the project from a production point of view. Inspections are also flexible in that they may be ad hoc or highly structured, with members of an inspection team assigned specific roles, such as moderator, reader, and recorder, and specific procedural steps used in the inspection. The inspectors' goal is to find and document discrepancies between the conceptual model and the executable model.[2][4]

Examples of Inspection

  • Consider the following example from [Schach, 1996].

The team inspecting a simulation design might include a moderator; a recorder; a reader from the simulation design team who will explain the design process and answer questions about the design; a representative of the developer who will be translating the design into an executable form; subject-matter experts (SMEs) familiar with the requirements of the application; and the V&V agent.

  • Overview—The simulation design team prepares a synopsis of the design. This and related documentation (e.g., problem definition and objectives, M&S requirements, inspection agenda) are distributed to all members of the inspection team.

  • Preparation—The inspection team members individually review all the documentation provided. The success of the inspection rests heavily on the conscientiousness of the team members in their preparation.

  • Inspection—The moderator plans and chairs the inspection meeting. The reader presents the product and leads the team through the inspection process. The inspection team can be aided during the fault-finding process by a checklist of queries. The objective is to identify problems, not to correct them. At the end of the inspection the recorder prepares a report of the problems detected and submits it to the design team.

  • Rework—The design team addresses each problem identified in the report, documenting all responses and corrections.

  • Follow-up—The moderator ensures that all faults and problems have been resolved satisfactorily. All changes should be examined carefully to ensure that no new problems have been introduced as a result of a correction.[5]

Face Validation



One of the benefits of face validation is that it can be used effectively during a real-time virtual simulation where interaction between the user and the simulation is a priority, because these types of models require input and interaction from the user. The best way to validate that the model meets its criteria is to have users who have experienced the modeled situation in real life confirm that the model accurately represents the situation they are familiar with. Users who are familiar with the situation will notice needed corrections that a developer might never have known existed. While this type of validation is most effective and appropriate for virtual simulations, it is also used to validate models when only a short amount of time is scheduled for testing, or when it is difficult to produce quantitative results that can be analyzed. While quantitative results should be preferred, a solid account of validation from professionals is also acceptable.[2]

Examples of Face Validation

  • The accuracy of a flight simulator's response to control inputs can be evaluated by having an experienced pilot fly the simulator through a range of maneuvers.[2]
  • Analyzing the accuracy of a poker bot simulator's response to user input to verify that the A.I. is reacting in a logical manner.
  • Having a soldier test a model that simulates a battle situation.
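
Although face validation is qualitative, teams often still record SME feedback in a lightweight, structured way. The sketch below is a hypothetical illustration (the 1-5 rating scale and the 4.0 pass threshold are assumptions, not a standard); it simply aggregates plausibility ratings such as those a pilot panel might give a flight simulator:

```python
# Hypothetical sketch of recording face-validation feedback in a structured
# way. The 1-5 rating scale and the 4.0 pass threshold are assumptions for
# illustration; face validation itself remains a qualitative judgment.

def face_validation_verdict(ratings, threshold=4.0):
    """Aggregate SME plausibility ratings (1-5) into (mean, passed)."""
    if not ratings:
        raise ValueError("face validation requires at least one SME rating")
    mean = sum(ratings) / len(ratings)
    return mean, mean >= threshold

# Three experienced pilots rate how realistic the simulator feels.
mean, passed = face_validation_verdict([5, 4, 4])
print(round(mean, 2), passed)   # 4.33 True
```

A record like this does not replace the experts' narrative feedback; it only makes the overall verdict easy to report alongside it.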



Audit

An audit is a verification technique performed throughout the development life cycle of a new model or simulation, or during modifications made to legacy models and simulations. An audit is a staff function that serves as the "eyes and ears of management", and is used to establish how well a model matches the guidelines that were set in place. If an audit trail is in place, any error in the model should be traceable back to its original source, making it easier to find and correct. An audit is conducted through meetings and by following the audit trail to check for issues.[6]

Examples of Audit

  • The most familiar application of an audit is a citizen being "audited". While this has no direct application to the modeling and simulation methods discussed, it illustrates the general process.
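
The audit-trail idea described above can be sketched in code. This is a hypothetical illustration (the record fields, authors, and version numbers are invented for the example): an append-only log of changes lets an auditor trace a defect back to the change that introduced it.

```python
# Hypothetical sketch of an audit trail: an append-only log of changes that
# lets an auditor trace a defect back to the change that introduced it.
# The record fields and version numbers are assumptions for illustration.

audit_trail = [
    {"version": 1, "author": "dev_a", "change": "initial queue model"},
    {"version": 2, "author": "dev_b", "change": "changed arrival-rate units"},
    {"version": 3, "author": "dev_a", "change": "added priority classes"},
]

def trace_defect(trail, first_bad_version):
    """Return the change record in which a defect first appeared."""
    for record in trail:
        if record["version"] == first_bad_version:
            return record
    return None

# Testing shows the defect first appears in version 2, so the auditor
# inspects that change and its author.
suspect = trace_defect(audit_trail, 2)
print(suspect["author"], "-", suspect["change"])
```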



Walkthrough

Walkthroughs are the most time-consuming and the most formal of the informal methods, but they are also the most effective at identifying issues with the model. A walkthrough is a scheduled meeting with the author or authors responsible for the model or documents to be reviewed. In addition to the authors, there is usually a group of senior technical staff, and possibly business staff, who analyze the model, as well as a facilitator who leads the meeting. Prior to the official meeting, the authors review the document or model for any cosmetic errors. Once reviewed, it is passed on to the meeting audience so they have a chance to examine it thoroughly for inconsistencies before the meeting. The audience gathers any questions or concerns they might have, based on their expertise in the field and their knowledge of the system. At the meeting, the author presents the document to the audience, explaining the methods and findings outlined. The facilitator is responsible for fielding questions from the audience and presenting them in a non-threatening way. In addition to structuring the meeting, the facilitator takes notes on issues that remain, to be distributed and reanalyzed later.[1][4]

Examples of Walkthrough

  • Authors of a paper or book sitting down to review the content prior to submitting it for publication.
  • A software development team reviewing a product before the final product is sent for approval by the customer.



Review

A review is similar to a walkthrough or inspection, except that the review team also includes management. A review is an overview of the whole model process, including coverage of the guidelines and specifications, with the goal of assuring management that the simulation development is being carried out in line with all concept objectives. Because its focus goes beyond a purely technical review, it is considered a high-level method. As in the walkthrough process, documents should be submitted prior to the meeting. The V&V agent should also prepare a set of indicators to measure, such as those listed below.

Review Indicators

  • appropriateness of the problem definition and M&S requirements
  • adequacy of all underlying assumptions
  • adherence to standards
  • modeling methodology
  • quality of simulation representations
  • model structure
  • model consistency
  • model completeness
  • documentation

Key points are highlighted by the V&V agent. The events of the meeting, including potential problems and recommendations, are recorded as a result of the review. From these results, actions are taken to address the points made: deficiencies are corrected, and recommendations are taken into consideration.[4][7]
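
As a hypothetical illustration of how a V&V agent might track these indicators (the data structure and boolean pass/fail status are assumptions, not part of any standard), each indicator can be assessed during the meeting and the open deficiencies collected for the review report:

```python
# Hypothetical sketch of tracking review-indicator assessments so that
# deficiencies can be followed up. Indicator names come from the list above;
# the boolean pass/fail status is an assumption for illustration.

INDICATORS = [
    "problem definition and M&S requirements",
    "underlying assumptions",
    "adherence to standards",
    "modeling methodology",
    "quality of simulation representations",
    "model structure",
    "model consistency",
    "model completeness",
    "documentation",
]

def open_deficiencies(assessments):
    """Return the indicators marked deficient during the review."""
    return [name for name, ok in assessments.items() if not ok]

assessments = {name: True for name in INDICATORS}
assessments["documentation"] = False     # the recorder notes a gap
print(open_deficiencies(assessments))    # ['documentation']
```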

Desk Checking


While not the strongest technique for validating and verifying, desk checking can be useful. It is the only technique in which the main responsibility for verification is placed on the author of the model. Desk checking consists of the author carefully stepping through the model in an attempt to catch any inconsistencies. The author thoroughly reads all original documents, notes, and goals and tries to verify that the completed product accurately and completely models everything it set out to do. This is also the time when any incompleteness should be caught and corrected. While the responsibility rests on the author, that does not mean reaching out to other experts for help is out of the question. Desk checking is clearly the least formal of the informal methods discussed, but it is often a good first line of defense in catching errors and a first attempt to verify and validate the model.[1][8]

Examples of Desk Checking

  • Any programmer who develops software participates in the informal method of verification known as desk checking. Debugging software as it is being developed is a form of desk checking. The developer sets breakpoints or checks the output from the model to verify that it matches the algorithms developed in the conceptual model.
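
A common form of desk checking is embedding assertions that compare the executable model's output against values worked out by hand from the conceptual model. The sketch below is illustrative only (the M/M/1 queue formula stands in as an example model and is not taken from the text above):

```python
# A sketch of desk checking via assertions, assuming a simple M/M/1 queue
# model as the artifact being checked (an illustrative example model, not
# one discussed in the text). The author compares the executable model's
# output against values computed by hand from the conceptual model.

def mm1_mean_number_in_system(arrival_rate, service_rate):
    """Mean number in system for an M/M/1 queue: L = rho / (1 - rho)."""
    rho = arrival_rate / service_rate
    assert 0 < rho < 1, "model is only valid for utilization below 1"
    return rho / (1 - rho)

# Hand-computed check values from the conceptual model:
assert mm1_mean_number_in_system(1.0, 2.0) == 1.0             # rho = 0.5 -> L = 1
assert abs(mm1_mean_number_in_system(4.0, 5.0) - 4.0) < 1e-9  # rho = 0.8 -> L = 4
print("desk check passed")
```

If an assertion fires, the author knows exactly which conceptual-model value the executable model failed to reproduce.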

Turing Test


The Turing test is an informal validation method developed by the English mathematician Alan Turing in the 1950s, and at its root it is a specialized form of face validation: all humans can be regarded as "experts" at judging how other humans will respond in a given situation. This method is best suited to models that attempt to reproduce human behavior, and a model relying so heavily on such a complex subject can clearly cause issues. Instead of trying to computationally account for all the factors that affect human decisions and the high variance among different people, this validation method focuses on how the model's output appears to humans who do not know which source the output comes from: other humans, or the model. The Turing test compares the model's output with expected human behavior in the situation being modeled, asking whether observers can tell the two apart at a rate better than chance.[2]

"When applied to the validation of human behavior models, the model is said to pass the Turing test and thus to be valid if expert observers cannot reliably distinguish between model-generated and human-generated behavior. Because the characteristic of the system-generated behavior being assessed is the degree to which it is indistinguishable from human-generated behavior, this test is clearly directly relevant to the assessment of the realism of algorithmically generated behavior, perhaps even more so than to intelligence as Turing originally proposed."[2]

Examples of Turing Test

  • Cleverbot is an interesting example. Cleverbot is an application that interacts with people by responding to questions and learning from replies. Testing of Cleverbot is best done with a Turing test: interacting with Cleverbot lets users judge whether they can tell that it is merely code responding to them, or whether they believe they are conversing with another human.
  • Poker strategy algorithms have been developed to a degree where a user cannot tell a difference between a beginner player and the poker-bot. Although basic poker strategy is not highly complex, taking it to the next level to completely encompass an advanced strategy has not been reached.
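
The pass criterion described above, that judges cannot reliably distinguish model-generated from human-generated behavior, can be sketched as a scoring procedure. This is an illustrative sketch only: the 10-point margin around 50% is an assumed threshold, not a standard, and a real trial would use a proper statistical test over many judges.

```python
# Sketch of scoring a Turing-test trial for a human-behavior model. Judges
# label each output as "human" or "model"; the model is taken to pass when
# judges cannot beat chance. The 10-point margin around 50% is an assumed
# illustration, not a standard criterion.

def judge_accuracy(true_labels, guesses):
    """Fraction of outputs whose source the judges identified correctly."""
    correct = sum(t == g for t, g in zip(true_labels, guesses))
    return correct / len(true_labels)

def passes_turing_test(true_labels, guesses, margin=0.10):
    """Model passes if judge accuracy is within `margin` of chance (50%)."""
    return abs(judge_accuracy(true_labels, guesses) - 0.5) <= margin

truth   = ["human", "model", "model", "human", "model", "human"]
guesses = ["human", "human", "model", "model", "model", "model"]  # 3 of 6 right

print(judge_accuracy(truth, guesses))      # 0.5 -- judges are at chance
print(passes_turing_test(truth, guesses))  # True
```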


  1. ^ a b c d Everett, Gerald D.; McLeod, Raymond, Jr. (2007). Software Testing: Testing Across the Entire Software Development Life Cycle. John Wiley & Sons. pp. 80–99.
  2. ^ a b c d e f g Sokolowski, John; Banks, Catherine, eds. (2010). Modeling and Simulation Fundamentals: Theoretical Underpinnings and Practical Domains. Wiley. pp. 340–345. ISBN 978-0-470-48674-0.
  3. ^ Balci, Osman (1997). "Verification, Validation and Accreditation of Simulation Models". Proceedings of the 1997 Winter Simulation Conference.
  4. ^ a b c Adrion, W. Richards; Branstad, Martha A.; Cherniavsky, John C. (1982). "Validation, Verification, and Testing of Computer Software". Computing Surveys, Vol. 14, No. 2, June 1982.
  5. ^ Schach, S.R. (1996). Software Engineering (3rd ed.). Irwin, Homewood, IL.
  6. ^ Perry, W. (1995). Effective Methods for Software Testing. John Wiley & Sons, NY.
  7. ^ "Verification and Validation". Department of Defense. Retrieved 2006.
  8. ^ Funes, Ana; Dasso, Aristides, eds. (2007). Verification, Validation and Testing in Software Engineering. IGI. pp. 150–170.