Composition in PyModel

Composition fundamentals
Scenario control
Validation
Passive testing


Composition fundamentals

PyModel can use *composition* to combine models into a new model called the *product*. The PyModel analyzer pma and the tester pmt both accept argument lists that name modules containing models of any kind: model programs, FSMs, or test suites. All of the models named in the argument list are composed into a product. It is this product that the analyzer uses to generate graphics, and that the tester uses to generate traces.

Composition is often used for *scenario control*: limiting the behaviors of a model program to a scenario of interest, by composing the model program with an FSM that represents the scenario. Composition is also used for *validation*: confirming that a model program can (or cannot) execute particular traces, by composing it with a test suite that contains those traces. An extension of validation is *passive testing*: automatically checking log files (or other traces collected from instrumented systems) against a model.

Composition combines models by synchronizing shared actions and interleaving unshared actions.

*Shared actions* are actions in different models that have the same name. A shared action can execute in the product only when it is enabled in all of the composed models. This has the effect of synchronizing the models on that action -- whenever that action occurs, all of the models execute it together. Therefore, composing models with shared actions often restricts behavior, which makes it good for scenario control.

An *unshared action* is an action whose name occurs in only one model. An unshared action can execute whenever it is enabled in that one model. This has the effect of interleaving --- each model executes its unshared actions on its own schedule, unaffected by the others. Therefore, composing models with unshared actions often allows more behaviors, which makes it useful for building up complex models from simpler ones.

When synchronizing on shared actions, PyModel matches both the action names and the arguments. If arguments are present, all must match (according to Python ==). So, to select particular argument values, use those argument values in the scenario FSM. In the Stack sample, the FSM in StackOneScenario.py provides an example. The FSM generated by this command in the test_graphics script demonstrates the effect:

  pma.py Stack StackOneScenario -m 6 -o StackSynchronized

However, if an action in a scenario (FSM or TestSuite) has no arguments (its argument list is empty), it matches any action with the same name, with any arguments. So, to synchronize on action names but allow any argument values, write a scenario FSM with no action arguments. The Socket sample provides an example, in the FSM in NoBlockScenario.py. The FSM generated by this command in the test_graphics.py script demonstrates the effect:

  pma.py -o NoBlockComposeFSM Socket NoBlockScenario

Actions in a model program usually obtain their arguments from the domains for that model program. However, if an action is designated an observable action (it appears in the model program's observables attribute), it obtains its argument values (if any) from actions with the same name in the scenario machines (FSMs or TestSuites) with which the model program is composed. Therefore, composed scenarios can act as parameter generators for observable actions.
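To make the matching rule concrete, here is a minimal sketch in plain Python. It is only an illustration of the rule stated above, not PyModel's implementation; the (name, args) pair representation and the function name synchronizes are invented for the example.

  # Illustration only: each action is represented here as a (name, args) pair,
  # for example ('push', (1,)).  A scenario action synchronizes with a model
  # action when the names are the same and either the scenario supplies no
  # arguments (which matches any arguments) or the arguments compare equal
  # with Python ==.
  def synchronizes(model_action, scenario_action):
      model_name, model_args = model_action
      scenario_name, scenario_args = scenario_action
      if model_name != scenario_name:
          return False                 # different names: not a shared action
      if not scenario_args:
          return True                  # empty argument list matches anything
      return model_args == scenario_args   # otherwise all arguments must match

  assert synchronizes(('push', (1,)), ('push', (1,)))      # names and args match
  assert not synchronizes(('push', (2,)), ('push', (1,)))  # arguments differ
  assert synchronizes(('push', (2,)), ('push', ()))        # empty args match any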
Examples that use scenarios as parameter generators for model programs appear in Stack/test_scenarios.py and Stack/test_scenario_graphics.py.

If an observable action has arguments, but no scenarios provide argument values, the observable action is not enabled; it never executes.

Sometimes you may wish to prevent unshared actions (actions that occur in only one model) from interleaving. To suppress execution of these actions, make them shared by adding them to the vocabulary of the scenario: include them in its actions attribute. They need not ever be executed in the scenario. An example occurs in the WebApplication sample, in the FSM in OneUserNoIntScenario.py (compare to the similar FSM in OneUserScenario.py, which does not include as many shared actions). The FSM generated by this command in the test_graphics.py script demonstrates the effect:

  pma.py OneUserNoIntScenario WebModel

Here is an example of interleaving. In the PowerSwitch sample, the first two commands in the test_graphics script generate FSMs from the model program in PowerSwitch.py and the FSM in SpeedControl.py, and this command generates the FSM from their product, which shows interleaving:

  pma.py SpeedControl PowerSwitch -o PowerSpeed


Scenario control

Here are some examples of restriction used for scenario control.

In the WebApplication sample, this command in the test_graphics script generates an FSM from the model alone:

  pma.py WebModel -m 50

Compare it to the FSM generated from the product of the model and the scenario FSM in OneUserScenario.py by this command:

  pma.py OneUserScenario WebModel

In the Socket sample, this command in the test_graphics script generates an FSM from the model alone (but with restricted domains):

  pma.py -o SocketFSMA Socket SocketSendA

Compare it to the FSM generated from the product of the model (with domain restriction) and the scenario FSM in NoBlockScenario.py by this command:

  pma.py -o NoBlockComposeFSMA Socket NoBlockScenario SocketSendA


Validation

Here are some examples of validation.

In the PowerSwitch sample, this command in the test_scenarios script composes the model program in PowerSwitch.py with the test suite in Scenarios.py:

  pmt.py Scenarios PowerSwitch

In the Scenarios test suite, the first two traces are allowed by the model program PowerSwitch, so the tester executes both traces completely, reaching the accepting state each time. The last two traces in the test suite are forbidden by PowerSwitch, so the tester executes only part of the third trace and none of the fourth, in both cases ending in a non-accepting state.

In the WebApplication sample, the trace in the TestIntSuccess.py test suite is allowed by the model in WebModel.py, and the trace in TestIntWrong.py is forbidden. The test_validation script executes this command to check TestIntSuccess:

  pma.py -o TestIntSuccessComposeFSM TestIntSuccess WebModel

It generates an FSM that contains all the steps of the TestIntSuccess trace and reaches its accepting state. This command checks TestIntWrong:

  pma.py -o TestIntWrongComposeFSM TestIntWrong WebModel

It generates an FSM that contains only the first few steps of the TestIntWrong trace and does not reach the accepting state.


Passive testing

*Passive testing* is automatically checking log files (or other traces collected from instrumented systems) against a model. To perform passive testing, designate all the actions in the trace as observable actions. Express the trace (or traces) as a PyModel TestSuite, then compose the model with that TestSuite.
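Before composing, a log must be expressed as a trace of actions. The sketch below shows one way to parse a simple log into (action name, arguments) pairs; the whitespace-separated log format, the function name parse_log, and the action names are all invented for the example. The resulting trace would then be written into a TestSuite module in the format used by the PyModel samples.

  # Illustration only: turn log lines like 'Send 17' into (name, args) pairs.
  def parse_log(lines):
      trace = []
      for line in lines:
          name, *fields = line.split()
          # convert numeric fields to ints, leave the rest as strings
          args = tuple(int(f) if f.isdigit() else f for f in fields)
          trace.append((name, args))
      return trace

  log = ["PowerOn", "Send 17", "PowerOff"]
  print(parse_log(log))
  # prints [('PowerOn', ()), ('Send', (17,)), ('PowerOff', ())]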
Runs in the product that do not reach an accepting state are test failures. Passive testing does not require a test harness. All that is required is to express the log files or traces as PyModel TestSuites.

Revised Nov 2012