
Extreme Validation

Leandro Caniglia

March 24, 2007

After the introduction of the SUnit framework, development teams started to recognize heavy unit testing as a crucial practice for quality assurance. Several years later, SUnit based Validation was conceived. It appeared as a novel and apparently modest enhancement of that framework. Originally intended to factor out repetitive code for checking the correctness of UI inputs, SUnit based Validation turned out to be a surprisingly fertile concept. This document summarizes the experience of using the SUnit Validation framework in the PetroVR simulation tool-suite for the petroleum industry for more than two years.

What is SUnit based Validation?


It is a sub-framework of SUnit intended to group together classes and methods whose main and only purpose is to diagnose the system's health.

    Object
        TestCase (testSelector)
            SomeTestCase
                #testThisCode
                #testThatCode
            SomeOtherTestCase
            Validator (object aspect)        "subhierarchy"
                SomeValidator
                    #validateThisAspect
                    #validateThatAspect
                SomeOtherValidator

Because validation tests are built under the SUnit framework, they inherit some important properties from its philosophy. They are reliable and concise, highly focused on few particular aspects, provide meaningful feedback, and react early in the lifetime of defects.

Leandro Caniglia leads the development team at Caesar Systems. He can be contacted at lcaniglia at caesarsystems dot com.

Validation tests have a short life (as test-cases do). They are created on demand, run, and discarded once their results are examined. Also, validation suites, like test-suites, are automatically created by collecting all the selectors that begin with 'validate'. Exceptions associated with validation failures are instances of ValidationFailureException. They are collected as ValidationResults.
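That life cycle can be sketched as follows. The messages #validate and #hasPassed appear later in this document; the class name AccountValidator, the target object, and the #failures enumeration are illustrative assumptions, not part of the published framework:

    | result |
    "Create a validator on demand, run it, examine the results, discard it."
    result := anAccount validate.
    result hasPassed ifFalse: [
        result failures do: [:failure |        "hypothetical accessor"
            Transcript showCr: failure description]]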

How do Validators differ from Test Cases?


Validation tests (a.k.a. validators) are descendants of test cases, but there are still important differences between the two frameworks. As a result, the complementary action of both mechanisms greatly improves the system's Quality Assurance. Both test-cases and validators execute critical diagnoses of the system. However, while test cases live in the realm of the development environment, validators belong in the application system.

    Classes and methods (source code)    Unidirectional    SUnit (test-cases)         static
    Smalltalk image (application)        Bidirectional     Validation (validators)    dynamic

Validators are used differently than test cases. Test cases are explicitly run by programmers; validators instead are activated dynamically as (relevant) modifications are introduced by end-users of the system. Test cases run on objects created as examples for a snippet of source code to get activated on them. Validators verify real-life objects of the running application. While test cases are focused on source code, validators are focused on the objects of the application model.

                     Test Case                  Validator
    Running space    Development                Runtime
    Objects          Artificial (examples)      Actual (application)
    Purpose          Code correctness           Objects' health
    Activated by     Programmers                System requests
    Advice to        Programmers                End users

The traditional and the new approaches


In the traditional approach the application processes UI inputs by validating them before incorporating new data into the system.

    [Figure: flowchart of the traditional approach. The UI waits for new input and validates the input data (where: in the UI or in the model?). If the validation passes, the input is accepted; if not, failures are reported.]
When first introduced by Andrés Valloud in 2004, the SUnit based Validation framework brilliantly identified and addressed chronic shortcomings in how input data was usually validated. Valloud found that validation code was poorly structured and erratically located between the UI and the model. If the UI sent #isValid messages to the model, the model could only answer with Booleans, and the UI had to figure out what could have gone wrong. The consequence was that many systems were forced to duplicate validation code, pop up insufficiently clear warning messages, and overload the UI, the model or both with code that would otherwise belong in a more appropriate place. Worse than that, all validation results were lost once the user closed the warning dialog.

By clearly assigning the responsibility for validations to validator objects, the framework solved all those known and recurring issues, simplified the code, and improved the end-user experience at the same time. Once all the validation routines are owned by the validators, the UI just uses the validation services to accept or reject new inputs. Moreover, in case of failures, the UI is provided with full-fledged validation results ready to be used for reporting purposes.

Extreme Validation
Once the SUnit based Validation framework is understood and implemented, it becomes apparent that validators are the natural place to specify all the rules every (relevant) object in the system must honor. The nice thing is that validators explicitly state constraints and rules that would otherwise remain in the heads of developers. For instance, if Object A cannot be removed without modifying Object B accordingly, then that rule will get specified in a validator. Also, if Object A cannot be renamed without updating Object B, then there should be a validator that ensures that the state of Object B does not conflict with the name of Object A.

Extreme validation is a pattern. It establishes that every time some input modifies any relevant object, not only that object but the whole model must be re-validated.

    [Figure: flowchart of the extreme validation pattern. Inputs are processed using the validator services to accept new data (UI validation). Then, if the model has changed, the whole model is validated and the failures are collected (extreme validation).]

Extreme validation guarantees that the system's health is ubiquitously and permanently monitored, so that end-users are informed in case some object gets broken (i.e. some rule is not honored). Since validators know their target objects and the broken rule, they are able to provide complete and meaningful feedback. Specifically, the validator detecting the failure will say which object has broken what rule, even when the modification that originated the side effect was not in the invalid object.
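A minimal sketch of the pattern in the input-processing layer. Only #validate and #hasPassed are framework services described in this document; #rejectWith:, #incorporate:, #collectFailuresFrom: and #model are illustrative names:

    acceptInput: anInput
        | result |
        "First, the regular UI validation of the new data..."
        result := anInput validate.
        result hasPassed ifFalse: [^self rejectWith: result].
        self model incorporate: anInput.
        "...then extreme validation: re-validate the whole model,
        not just the modified object, and collect the failures."
        self collectFailuresFrom: self model validate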

How rules get broken


Most systems work hard to ensure that invalid data is always rejected in the UI. Then, if the UI is mature enough, how can a rule suddenly get dishonored? In other words, is it worth using the extreme validation pattern? Is it not enough to validate just new input data? In our experience extreme validation has been crucial. Moreover, UI validation alone has not been sufficient; and even if it had been, it would have forced end-users to follow stricter sequences of steps than necessary or convenient, which could be considered unfriendly behavior.

UI validation is, of course, necessary. However, there are several reasons that make it insufficient. If the UI mistakenly accepts an invalid input, then some rule will be broken. Since it is unrealistic to assume that a system is bug free, it follows that under usual conditions UI validations cannot ensure that all rules are honored.

Consider the case of named objects as an example. These are objects whose name acts as a key for fast access. If an object references another by its name, then should the referenced object be renamed, the reference has to be updated accordingly. If there are several references, then chances are that, because of a bug, one of those references fails to be updated. The picture below illustrates the case of an object named 'foo' with two indirect references to its name. The user renames the object to 'bar'. Because of some bug one of the indirect references is updated but the other one is not. A massive validation would discover the abnormality, e.g. when the outdated object is validated the validator finds that the referenced name cannot be resolved.

    [Figure: an object is renamed from 'foo' to 'bar'; one indirect reference is updated to the String 'bar', but the other still holds the old name 'foo' and is now outdated (invalid), producing a validation failure.]

Since a UI validation would typically be restricted to making sure that the new name is acceptable, it will not discover the problem; only a massive re-validation of the model will detect the outdated object. When an object changes, its changes could break rules "far away" from the object's neighborhood. Since the UI is focused on the object being edited, the validations it performs are typically circumscribed to that object and the objects directly related to it. In consequence, the only way to make sure no "remote" rule is broken is through a massive validation of the system.

As another example consider a system that supports multiple and concurrent sessions in a teamwork environment. After a transaction the state of a local session would be influenced by changes in remote sessions. The impact of those changes on the local session cannot (and should not) be validated in the UI. The extreme validation pattern is robust because it prescribes a massive validation after each transaction. It is, at least, naïve to think that UI restricted validations would suffice, because there are so many causes that might take the system to an invalid state.

Massive validations are also useful when the system loads data from disk or any other remote source. Assume your application is able to save its model to a file on disk. When the user opens the application and files in a model, a massive validation would ensure that the objects just activated in memory are sound. This practice is especially useful when the application loads old models that were saved with earlier versions of the software. The table below identifies typical cases where the extreme validation pattern has given very good results in real life.

    Apply extreme validation to    So that you
    User inputs                    Ensure the model's health while relaxing strict sequences of steps
    Imperfect code                 Detect the side effects of bugs everywhere in the model as early as possible
    Teamwork                       Make sure that the model remains valid after every transaction
    File-in                        Deal with models loaded from file, especially after software upgrades

Performance
The responsiveness of an application could suffer some penalty when the entire model is frequently revalidated. There are several things that can be done to alleviate the overhead. Let's recall that the concept of massive validation we have been using refers to the relevant objects in the image, not to all the objects in the application database.

When considering performance issues, also take into account one important distinction between validators and test cases. Systems with thousands of classes will usually spend hours running all their SUnit tests. However, when a model is validated the number of classes does not matter; only the objects currently in the image count, as they are what we want to validate. Therefore, only things that are actually being used are tested. Unlike test cases, validations are inherently simple and fast.

Our experience has shown that the main cause of performance penalty comes from the way suites are generated in SUnit. In particular we were able to identify two main sources of slowness: (1) the use of #match: to look for the appropriate selectors ('test' and 'validate'), and (2) the number of selectors scanned along the search. Problem (1) is easily solved by using #beginsWith: instead. The second issue can be solved by restricting the search to the Validator hierarchy (there is no need to go up to Object).

Another useful technique we have employed to keep good responsiveness in the GUI is to defer the massive validation as much as possible. By default, when the user is looking at one screen, only failures of objects in that screen can be shown. That way, the system can keep a massive validation pending, and execute it only when the user goes to the screen that shows all validation failures. Other recommendations include (a) the use of visual feedback (progress bars) when large validations are run, (b) making sure that every object is validated only once, and (c) requesting massive validations only when something has changed.
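The two fixes to suite generation can be sketched as follows; this is a hypothetical rewrite of the selector search, not the framework's actual code:

    validationSelectors
        "Collect selectors with #beginsWith: (not #match:) and scan
        only the Validator hierarchy instead of walking up to Object."
        | selectors cls |
        selectors := Set new.
        cls := self class.
        [cls == Validator superclass] whileFalse: [
            selectors addAll:
                (cls selectors select: [:s | s beginsWith: 'validate']).
            cls := cls superclass].
        ^selectors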

The cost of writing validations


People used to SUnit know the cost of writing test cases. Programmers have to develop particular skills to write tests not tied to a particular implementation, tests that do not take too long to run, with coverage for the model and the GUI, etc. While the benefits of SUnit are huge, the investment is also considerable. Writing validators is not a problem because validators are far simpler than test cases. The workload required to fully implement the validator of a class is proportional to the number of instance variables of the class. Typically validators express only one rule for every instance variable, plus a few special rules that combine two or more of them. Since validations can be delegated, the implementation of a validator is usually just a short list of sentences that look like

    self aspect: <variable name>; valueValidate

where <variable name> stands for the name of some instance variable or aspect of the object being validated. Basic aspects like Booleans, Strings, Characters and the like are usually tested for nil, as in

    self aspect: #name; valueIsDefined

or for simple constraints, as in

    self aspect: #price; valueIsPositive
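Putting this together, the validator of a hypothetical class with name, price and two date variables could consist of little more than the following. The domain class and the date rule are invented for illustration; #aspect:, #valueIsDefined, #valueIsPositive and #failBecause: are the framework messages used throughout this document:

    validateName
        self aspect: #name; valueIsDefined

    validatePrice
        self aspect: #price; valueIsPositive

    validateDates
        "One of the few special rules combining two instance variables."
        self object startDate <= self object endDate
            ifFalse: [self failBecause: 'the start date follows the end date']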

Debugging with Validators


Imagine a system where every relevant object can be validated at any time. Broken objects are the manifestation of bugs or abnormal circumstances. By integrating validation menus into debuggers and inspectors, the developer is provided with a wonderful tool just one click away in the conventional environment.

Usually, one runs all the validations that objects provide. However, being able to select a particular validator to run can sometimes be helpful.

By turning developers into users of validation tools, the framework evolves naturally and improves at a much greater pace.

Validators and Test Cases


Test-cases test the source code and validators test live objects. The combined action of both frameworks greatly contributes to the Quality Assurance of the system. An interesting application of validators is to use them in test-cases. Imagine you want to test some complicated functionality. Think of some command whose result cannot be easily tested. For instance, merging two structures can be complicated because of the complexity of the resulting structure. In those cases, one can simply write a unit test like this one:

    testMerge
        | structureA structureB merger result |
        structureA := self someStructure.
        structureB := self someOtherStructure.
        merger := StructureMerger new.
        merger merge: structureA with: structureB.
        result := merger result.
        self
            assert: result validate hasPassed
            description: 'The merge did not work'

Validators do not replace test-cases; in fact, they can be used to enhance the scope of a test without adding complexity.

Validating the source code


Incorporating code from several developers into the official version of the software is one of the problems addressed by SUnit. The idea is that only code that passes all tests is acceptable. With validators we can extend that premise by requiring that only validated methods and classes are acceptable. Validating the source code is the key concept involved here. Let's see a couple of simple examples. The first one detects returns sent from error handlers, like in

    [self doSomething] on: Error do: [:ex | ^nil]

    validateOnError
        self parseTree messageSendsDo: [:msg |
            msg selectorNode value = #on:do: ifTrue: [
                msg arguments second allNodesDo: [:node |
                    node isReturn ifTrue: [
                        self failBecause: self prettyPrint ,
                            ' should not return from an error handler']]]]

The following example detects methods that send super with another selector, like in

    someMethod
        self doSomething.
        super someOtherMethod

    validateSuperSend
        | selector |
        selector := self selector.
        self parseTree messageSendsDo: [:msg |
            (msg receiver isSuper and: [msg selector ~= selector]) ifTrue: [
                self failBecause: self prettyPrint ,
                    ' sends a super message with another selector']]

Other themes for code validators could include checks ensuring that:

    - No file is opened in one method and closed in another.
    - No method uses magic constants.
    - The model never opens dialogs.

Specialized validators for the source code of the GUI could be added, following similar ideas. For instance, one could add a validation to check that all events triggered by the GUI are handled by the GUI rather than by its model. More incisive validators could check that some piece of code conforms to a given design or programming pattern. This is an especially interesting chapter of validators. Although design patterns have become very popular as theoretical instruments, they are rarely honored in actual implementations. An appropriate validation suite could ensure that the fundamental principles of a given pattern are present in the code.

Validators and File-In


Validators can be used when importing external data into a system. The idea consists in separating the file-in from the reification. In the first step external data is read into an intermediary structure in the Smalltalk image. This intermediary representation is validated.

    [Figure: external data is filed in to an internal representation of the data, which is validated before inclusion. On validation failure the data is rejected and the user informed; on success the internal representation is reified into imported objects, and a massive validation follows.]

If some validation fails, then the reader can provide meaningful information as feedback. If the validation result is successful, then the internal representation of the data can be seamlessly transformed into domain objects. In that case, a massive validation of the whole system would ensure the consistency of the complete model. The validation framework fits perfectly in a transaction-oriented approach (i.e. all data is accepted or rejected). Note also the general character of the pattern: external data can be anything from a remote database to Smalltalk code, including any kind of files, active links (e.g. to spreadsheets), scripts containing user commands, etc.
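The two-step import can be sketched like this. Only #validate and #hasPassed come from the framework; ImportReader, #rejectImportWith:, #reify: and #model are illustrative names for this sketch:

    importFrom: aStream
        | intermediate result |
        "Step 1: file-in into an intermediary representation."
        intermediate := ImportReader readFrom: aStream.
        "Step 2: validate before including; reject everything on failure."
        result := intermediate validate.
        result hasPassed ifFalse: [^self rejectImportWith: result].
        "Step 3: reify, then massively validate the whole model."
        self reify: intermediate.
        self model validate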

Final remarks
The SUnit based Validation framework has been extensively used and further developed for two years now in the PetroVR tool-suite by Caesar Petroleum Systems. The system is a business simulation tool that has been in the Oil and Gas industry for over ten years. The benefits of validations have been numerous and some of them crucial for that application.

The consistent use of the extreme validation pattern has allowed us to implement a UI which is much more flexible than conventional ones. In this system the user is allowed to perform actions that have the potential to cause a problem. PetroVR allows these actions instead of forbidding them in advance. The reason behind this behavior is a general principle of its GUI: PetroVR allows some inconsistent states because it considers them temporary. Chances are that the user will solve them later in the session. That's why there are validations everywhere. Thanks to validators, the system is able to give instantaneous feedback, decorated with error icons and visible explanations over a yellow background. Moreover, validation feedback is permanent, i.e., it doesn't go away after the user "accepts" the action.

Validators have shown us that instead of trying to forbid a suspicious action, we can be much more flexible, while providing permanent and visible feedback about possible inconsistencies. In general, instead of popping up warnings, displaying "are you sure?" confirmation dialogs, or forcing the user to make a decision immediately, we pursue flexibility, very late binding, and instantaneous and permanently available validations everywhere in the system.

Acknowledgment
The author wants to express his gratitude to Andrés Valloud, the creator of the SUnit based Validation framework, who generously shared his ideas and implementation and was always interested in discussing further developments and applications.
