| ::Preconditions | <ul><li>::Every public method</li><li>::Every public method in COMPONENT-NAME</li><li>::All public methods that modify data</li></ul> | ::We will use if-statements at the beginning of public methods to validate each argument value. This helps document assumptions and catch invalid values before they can cause faults (a minimal sketch appears after this table). |
| ::Assertions | <ul><li>::Every private method</li><li>::Every private method in COMPONENT-NAME</li><li>::All private methods that modify data</li></ul> | ::Assertions will be used to validate all arguments to private methods. Since these methods are only called from our other methods, arguments passed to them should always be valid, unless our code is defective. Assertions will also be used to test class invariants and some postconditions (see the sketch after this table). |
| ::Static analysis | <ul><li>::Strict compiler warnings</li><li>::Automated style checking</li><li>::XML validation</li><li>::Detect common errors</li></ul> | ::We will use source code analysis tools to automatically detect errors. Style checkers will help make all of our code consistent with our coding standards. XML validation ensures that each XML document conforms to its DTD. Lint-like tools help detect common programming errors. Examples: [lint](http://www.freebsd.org/cgi/man.cgi?query=lint), [lclint/splint](http://www.splint.org/), [jlint](http://artho.com/jlint/), [checkstyle](http://sourceforge.net/projects/checkstyle/), [Jcsc](http://sourceforge.net/projects/jcsc), [PyLint](https://www.pylint.org/), [PyChecker](http://pychecker.sourceforge.net/), [Tidy](http://www.html-tidy.org/) |
| ::Buddy review | <ul><li>::All changes to release branches</li><li>::All changes to COMPONENT-NAME</li><li>::All changes</li></ul> | ::Whenever changes must be made to code on a release branch (e.g., to prepare a maintenance release), the change will be reviewed by another developer before it is committed. The goal is to make sure that fixes do not introduce new defects. |
| ::Review meetings | <ul><li>::Weekly</li><li>::Once before release</li><li>::Every source file</li></ul> | ::We will hold review meetings where developers perform formal inspections of selected code or documents. We will spend a small, predetermined amount of time on reviews and maximize its value by carefully selecting the documents to review. During reviews we will use and maintain a variety of checklists. |
| ::Unit testing | <ul><li>::100% of public methods, and 75% of statements</li><li>::100% of public methods</li><li>::75% of statements</li></ul> | ::We will develop and maintain a unit test suite using the JUnit framework. We will consider the boundary conditions for each argument and test both sides of each boundary (see the JUnit sketch after this table). Tests must be run and passed before each commit, and they will also be run by the testing team. Each public method will have at least one test, and the overall test suite will exercise at least 75% of all executable statements in the system. |
| ::Manual system testing | <ul><li>::100% of UI screens and fields</li><li>::100% of specified requirements</li></ul> | ::The QA team will author and maintain a detailed written suite of manual tests to test the entire system through the user interface. This suite will be detailed enough that any tester can repeatably carry out the tests from the test suite document and other associated documents. |
| ::Automated system testing | <ul><li>::100% of UI screens and fields</li><li>::100% of specified requirements</li></ul> | ::The QA team will use a system test automation tool to author and maintain a suite of test scripts to test the entire system through the user interface. |
| ::Regression testing | <ul><li>::Run all unit tests before each commit</li><li>::Run all unit tests nightly</li><li>::Add new unit test when verifying fixes</li></ul> | ::We will adopt a policy of frequently re-running all automated tests, including those that have previously passed. This will help catch regressions (bugs that we thought were fixed, but that appear again). Each verified fix will leave behind a new unit test (see the example after this table). |
| ::Load, stress, and capacity testing | <ul><li>::Simple load testing</li><li>::Detailed analysis of each scalability parameter</li></ul> | ::We will use a load testing tool and/or custom scripts to simulate heavy usage of the system (see the load-driver sketch after this table). Load will be defined by scalability parameters such as the number of concurrent users, the number of transactions per second, or the number or size of data items stored or processed. We will verify that the system can handle loads within its capacity without crashing, producing incorrect results, mixing up results for distinct users, or corrupting data. We will also verify that when capacity limits are exceeded, the system safely rejects, ignores, or defers requests that it cannot handle. |
| ::Beta testing | <ul><li>::4 current customers</li><li>::40 members of our developer network</li><li>::1000 members of the public</li></ul> | ::We will involve outsiders in a beta test, or early access, program. We will give beta testers directions to focus on specific features of the system, and we will actively follow up with them to encourage them to report issues. |
| ::Instrumentation and monitoring | <ul><li>::Monitor our ASP servers</li><li>::Remotely monitor customer servers</li></ul> | ::As part of our SLA, we will monitor the behavior of servers to automatically detect service outages or performance degradation (a probe sketch appears after this table). We have policies and procedures in place for failure notification, escalation, and correction. |
| ::Field failure reports | <ul><li>::Prompt users to report failures</li><li>::Automatically report failures</li></ul> | ::We want to understand each post-deployment system failure and actively take steps to correct the defect. The system has built-in capabilities for gathering detailed information about each system failure (e.g., error message, stack traceback, operating system version). This information will be transmitted back to us so that we can analyze it and act on it (see the reporter sketch after this table). |
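
The sketches referenced above follow. Each is an illustrative, hedged example rather than actual project code. First, the precondition style: if-statement argument checks at the top of a public method. The `Account` class and its `applyDeposit` method are hypothetical.

```java
/** Hypothetical class used to illustrate precondition checks. */
public class Account {
    private long balanceInCents;

    /** Deposits the given amount, which must be positive. */
    public void applyDeposit(long amountInCents) {
        // Precondition: validate every argument before doing any work.
        if (amountInCents <= 0) {
            throw new IllegalArgumentException(
                    "amountInCents must be positive, got: " + amountInCents);
        }
        balanceInCents += amountInCents;
    }

    public long getBalanceInCents() {
        return balanceInCents;
    }
}
```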
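Next, assertions on a private method plus a class-invariant check, per the assertions row. The JVM must run with `-ea` for assertions to fire during development and testing; the `OrderBook` class is hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical class illustrating assertions on private methods. */
public class OrderBook {
    private final List<Long> prices = new ArrayList<>();

    public void add(long price) {
        // Public method: a real if-check, per the preconditions row.
        if (price <= 0) {
            throw new IllegalArgumentException("price must be positive");
        }
        insertSorted(price);
        // Class invariant: the list must stay sorted.
        assert isSorted() : "invariant violated: prices not sorted";
    }

    // Private method: only our own code calls it, so an assert suffices.
    private void insertSorted(long price) {
        assert price > 0 : "defect in caller: non-positive price " + price;
        int i = 0;
        while (i < prices.size() && prices.get(i) < price) {
            i++;
        }
        prices.add(i, price);
    }

    private boolean isSorted() {
        for (int i = 1; i < prices.size(); i++) {
            if (prices.get(i - 1) > prices.get(i)) {
                return false;
            }
        }
        return true;
    }
}
```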
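A JUnit sketch of the boundary-testing rule from the unit testing row, written against the hypothetical `Account` class above and assuming JUnit 4. Zero is the boundary for the deposit amount, so both sides of it are tested.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

/** Boundary tests for the hypothetical Account.applyDeposit method. */
public class AccountTest {

    @Test
    public void depositOfOneCentIsAccepted() {
        Account account = new Account();
        account.applyDeposit(1); // just inside the boundary
        assertEquals(1, account.getBalanceInCents());
    }

    @Test(expected = IllegalArgumentException.class)
    public void depositOfZeroIsRejected() {
        new Account().applyDeposit(0); // just outside the boundary
    }
}
```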
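For the regression testing row's "add new unit test when verifying fixes" bullet, a sketch of the kind of test a verified fix leaves behind. The bug number and scenario are hypothetical.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

/** Regression test added while verifying a (hypothetical) fix. */
public class AccountRegressionTest {

    // Named after the bug it guards against; re-run before every commit
    // and in the nightly build, so the fix cannot silently regress.
    @Test
    public void bug1234_consecutiveDepositsAreBothApplied() {
        Account account = new Account();
        account.applyDeposit(100);
        account.applyDeposit(250);
        assertEquals(350, account.getBalanceInCents());
    }
}
```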
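A minimal custom load-driver sketch for the load, stress, and capacity testing row. The thread counts and the `performTransaction` stub are hypothetical stand-ins for the real scalability parameters and system entry point; a real driver would also record latencies and verify results.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/** Simulates many concurrent users issuing transactions. */
public class SimpleLoadTest {
    private static final int CONCURRENT_USERS = 50;       // hypothetical
    private static final int TRANSACTIONS_PER_USER = 100; // hypothetical

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(CONCURRENT_USERS);
        CountDownLatch done = new CountDownLatch(CONCURRENT_USERS);
        AtomicInteger failures = new AtomicInteger();

        for (int user = 0; user < CONCURRENT_USERS; user++) {
            pool.execute(() -> {
                try {
                    for (int t = 0; t < TRANSACTIONS_PER_USER; t++) {
                        if (!performTransaction()) {
                            failures.incrementAndGet();
                        }
                    }
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("failed transactions: " + failures.get());
    }

    // Hypothetical stub; a real driver would call the system under test.
    private static boolean performTransaction() {
        return true;
    }
}
```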
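For the instrumentation and monitoring row, a bare-bones availability probe sketch. The endpoint URL, thresholds, and polling interval are hypothetical; a real probe would feed into the notification and escalation procedures mentioned above.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

/** Polls a health endpoint and raises an alert on failure or slowness. */
public class ServerProbe {
    public static void main(String[] args) throws Exception {
        URL health = new URL("https://example.com/health"); // hypothetical endpoint
        while (true) {
            long start = System.currentTimeMillis();
            try {
                HttpURLConnection conn = (HttpURLConnection) health.openConnection();
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);
                int status = conn.getResponseCode();
                long elapsed = System.currentTimeMillis() - start;
                if (status != 200 || elapsed > 2000) {
                    alert("degraded: status=" + status + " latency=" + elapsed + "ms");
                }
            } catch (IOException e) {
                alert("outage: " + e.getMessage()); // probable service outage
            }
            Thread.sleep(60_000); // poll once a minute (hypothetical interval)
        }
    }

    // Stub; real escalation would notify the on-call engineer.
    private static void alert(String message) {
        System.err.println(message);
    }
}
```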
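Finally, a sketch of a built-in failure-gathering hook for the field failure reports row: a default uncaught-exception handler that collects the error message, stack traceback, and operating system version. The transmission step is hypothetical and only prints the report here; a real implementation would send it back to us (with user consent).

```java
import java.io.PrintWriter;
import java.io.StringWriter;

/** Installs a process-wide hook that assembles a failure report. */
public class FailureReporter {

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
            StringWriter trace = new StringWriter();
            error.printStackTrace(new PrintWriter(trace));

            StringBuilder report = new StringBuilder()
                    .append("error: ").append(error).append('\n')
                    .append("thread: ").append(thread.getName()).append('\n')
                    .append("os: ").append(System.getProperty("os.name"))
                    .append(' ').append(System.getProperty("os.version")).append('\n')
                    .append("stack trace:\n").append(trace);

            // Hypothetical transmission step; a real implementation would
            // POST the report to our servers for analysis.
            System.err.println(report);
        });
    }
}
```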
### QA Strategy Evaluation
*TODO: Use the following table to evaluate how well your QA Strategy will