From 1cf23c7ad6b8d31febdd8f7d96e5fd8bf68c8381 Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Tue, 24 Nov 2015 10:57:52 -0800
Subject: [PATCH 01/14] [Documentation] Rename development cycle

...as it will no longer be the index of the process category as
information about testing is added.
---
 docs/src/process/{index.md => cycle.md} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename docs/src/process/{index.md => cycle.md} (100%)

diff --git a/docs/src/process/index.md b/docs/src/process/cycle.md
similarity index 100%
rename from docs/src/process/index.md
rename to docs/src/process/cycle.md

From eb942b0bf78b16ab437cd79cbe5487ff86d0938d Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Tue, 24 Nov 2015 13:08:59 -0800
Subject: [PATCH 02/14] [Documentation] Intermediary commit

Begin adding test plan, procedures. WTD-1573.
---
 docs/src/process/index.md              | 13 +++++++++++++
 docs/src/process/testing/plan.md       |  0
 docs/src/process/testing/procedures.md |  6 ++++++
 3 files changed, 19 insertions(+)
 create mode 100644 docs/src/process/index.md
 create mode 100644 docs/src/process/testing/plan.md
 create mode 100644 docs/src/process/testing/procedures.md

diff --git a/docs/src/process/index.md b/docs/src/process/index.md
new file mode 100644
index 0000000000..61a22ed6a0
--- /dev/null
+++ b/docs/src/process/index.md
@@ -0,0 +1,13 @@
+# Development Process
+
+The process used to develop Open MCT Web is described in the following
+documents:
+
+* [Development Cycle](cycle.md): Describes how and when specific
+  process points are repeated during development.
+* Testing is described in two documents:
+  * The [Test Plan](testing/plan.md) summarizes the approaches used
+    to test Open MCT Web.
+  * The [Test Procedures](testing/procedures.md) document what
+    specific tests are performed to verify correctness, and how
+    they should be carried out.
diff --git a/docs/src/process/testing/plan.md b/docs/src/process/testing/plan.md
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/docs/src/process/testing/procedures.md b/docs/src/process/testing/procedures.md
new file mode 100644
index 0000000000..b9d159a032
--- /dev/null
+++ b/docs/src/process/testing/procedures.md
@@ -0,0 +1,6 @@
+# Test Procedures
+
+## Template
+
+Test procedures should use the following template:
+

From 7a4be9e67ea06dc9bb5abcde5fbf3bcb4eb4f88d Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Tue, 24 Nov 2015 15:56:02 -0800
Subject: [PATCH 03/14] [Documentation] Add test procedure template

---
 docs/src/process/testing/procedures.md | 73 +++++++++++++++++++++++++-
 1 file changed, 71 insertions(+), 2 deletions(-)

diff --git a/docs/src/process/testing/procedures.md b/docs/src/process/testing/procedures.md
index b9d159a032..4bcd39d4a6 100644
--- a/docs/src/process/testing/procedures.md
+++ b/docs/src/process/testing/procedures.md
@@ -1,6 +1,75 @@
 # Test Procedures
 
-## Template
+## Introduction
 
-Test procedures should use the following template:
+This document is intended to be used:
+* By testers, to verify that Open MCT Web behaves as specified.
+* By the development team, to document new test cases and to provide
+  guidance on how to author these.
+
+## Writing Procedures
+
+### Template
+
+Procedures for individual tests should use the following template,
+adapted from <https://swehb.nasa.gov/display/7150/SWE-114>.
+
+Property       | Value
+---------------|---------------------------------------------------------------
+Test ID        |
+Relevant reqs. |
+Prerequisites  |
+Test input     |
+Instructions   |
+Expectation    |
+Eval. criteria |
+
+For multi-line descriptions, use an asterisk or similar indicator to refer
+to a longer-form description below.
+
+#### Example Procedure - Edit a Layout
+
+Property       | Value
+---------------|---------------------------------------------------------------
+Test ID        | MCT-TEST-000X - Edit a layout
+Relevant reqs. | MCT-EDIT-000Y
+Prerequisites  | Create a layout, as in MCT-TEST-000Z
+Test input     | Domain object database XYZ
+Instructions   | See below *
+Expectation    | Change to editing context †
+Eval. criteria | Visual insepction
+
+* Follow the following steps:
+
+1. Verify that the created layout is currently navigated-to,
+   as in MCT-TEST-00ZZ.
+2. Click the Edit button, identified by a pencil icon and the text "Edit"
+   displayed on hover.
+
+† Right-hand viewing area should be surrounded by a dashed
+blue border when a domain object is being edited.
+
+### Guidelines
+
+Test procedures should be written assuming minimal prior knowledge of the
+application: Non-standard terms should only be used when they are documented
+in [the glossary](#glossary), and shorthands used for user actions should
+be accompanied by useful references to test procedures describing those
+actions (when available) or descriptions in user documentation.
+
+Test cases should be narrow in scope; if a list of steps is excessively
+long (or must be written vaguely to be kept short) it should be broken
+down into multiple tests which reference one another.
+
+All requirements satisfied by Open MCT Web should be verifiable using
+one or more test procedures.
+
+## Glossary
+
+This section will contain terms used in test procedures. This may link to
+a common glossary, to avoid replication of content.
+
+## Procedures
+
+This section will contain specific test procedures.
From 91997ced013c99811b299f1d749337e98f0adac5 Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Tue, 24 Nov 2015 17:08:59 -0800
Subject: [PATCH 04/14] [Documentation] Define test levels

---
 docs/src/process/testing/plan.md | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/docs/src/process/testing/plan.md b/docs/src/process/testing/plan.md
index e69de29bb2..8bedd01172 100644
--- a/docs/src/process/testing/plan.md
+++ b/docs/src/process/testing/plan.md
@@ -0,0 +1,19 @@
+# Test Plan
+
+## Test Levels
+
+Testing occurs regularly during development, with varying levels of
+completeness.
+
+In order of decreasing frequency and increasing completeness, these
+are:
+
+1. __Pre-merge testing__: Performed before changes are integrated
+   into the software.
+2. __Partial acceptance testing__: A subset of acceptance testing
+   performed at regular intervals.
+3. __Acceptance testing__: Performed before a new release is considered
+   stable.
+
+Each level of testing is inclusive of the levels which proceed it.
+

From 730878938e68e6f16ac30364567914ab8f9a0ec5 Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Fri, 27 Nov 2015 10:39:25 -0800
Subject: [PATCH 05/14] [Documentation] Document pre-merge testing

---
 CONTRIBUTING.md                  |  1 +
 docs/src/process/testing/plan.md | 45 ++++++++++++++++++++++++--------
 2 files changed, 35 insertions(+), 11 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index fcd1504ef0..7e8e15eae3 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -292,6 +292,7 @@ checklist.)
 2. Unit tests included and/or updated with changes?
 3. Command line build passes?
 4. Expect to pass code review?
+5. Changes have been smoke-tested?
 
 ### Reviewer Checklist
 
diff --git a/docs/src/process/testing/plan.md b/docs/src/process/testing/plan.md
index 8bedd01172..5433a5829a 100644
--- a/docs/src/process/testing/plan.md
+++ b/docs/src/process/testing/plan.md
@@ -2,18 +2,41 @@
 
 ## Test Levels
 
-Testing occurs regularly during development, with varying levels of
-completeness.
+Testing for Open MCT Web includes:
 
-In order of decreasing frequency and increasing completeness, these
-are:
+* _Smoke testing_: Brief, informal testing to verify that no major issues
+  or regressions are present in the software, or in specific features of
+  the software.
+* _Unit testing_: Automated verification of the performance of individual
+  software components.
+* _User testing_: Testing with a representative user base to verify
+  that the application behaves usably and as specified.
+* _Long-duration testing_: Testing which takes place over a long period
+  of time to detect issues which are not readily noticeable during
+  shorter test periods.
 
-1. __Pre-merge testing__: Performed before changes are integrated
-   into the software.
-2. __Partial acceptance testing__: A subset of acceptance testing
-   performed at regular intervals.
-3. __Acceptance testing__: Performed before a new release is considered
-   stable.
+### Smoke Testing
 
-Each level of testing is inclusive of the levels which proceed it.
+### Unit Testing
 
+### User Testing
+
+## Test Performance
+
+Tests are performed at various levels of frequency.
+
+### Per-merge Testing
+
+Before changes are merged, the author of the changes must perform:
+
+* _Smoke testing_ (both generally, and for areas which interact with
+  the new changes.)
+* _Unit testing_ (as part of the automated build step.)
+
+Changes are not merged until the author has affirmed that both
+forms of testing have been performed successfully; this is documented
+by the [Author Checklist](https://github.com/nasa/openmctweb/blob/master/CONTRIBUTING.md#author-checklist).
+
+### Per-sprint Testing
+
+### Per-release Testing

From 3ac1710d83b0540c7c492267acd6b1ee9990da5a Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Fri, 27 Nov 2015 10:58:13 -0800
Subject: [PATCH 06/14] [Documentation] Document evaluation criteria

...for per-release and per-sprint testing. WTD-1573
---
 docs/src/process/testing/plan.md | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/docs/src/process/testing/plan.md b/docs/src/process/testing/plan.md
index 5433a5829a..d46e152eaf 100644
--- a/docs/src/process/testing/plan.md
+++ b/docs/src/process/testing/plan.md
@@ -25,6 +25,11 @@ Testing for Open MCT Web includes:
 
 Tests are performed at various levels of frequency.
 
+* _Per-merge_: Performed before any new changes are integrated into
+  the software.
+* _Per-sprint_: Performed at the end of every [sprint](../cycle.md).
+* _Per-release: Performed at the end of every [release](../cycle.md).
+
 ### Per-merge Testing
 
 Before changes are merged, the author of the changes must perform:
@@ -39,4 +44,27 @@ by the [Author Checklist](https://github.com/nasa/openmctweb/blob/master/CONTRIB
 
 ### Per-sprint Testing
 
+Before a sprint is closed, the development team must additionally
+perform:
+
+* _User testing_ (both generally, and for areas which interact with
+  changes made during the sprint.)
+* _Long-duration testing_ (specifically, for 24 hours.)
+
+Issues are reported as a product of both forms of testing.
+
+A sprint is not closed until both categories have been performed on
+the latest snapshot of the software, _and_ no issues labelled as
+["blocker"](https://github.com/nasa/openmctweb/blob/master/CONTRIBUTING.md#issue-reporting)
+remain open.
+
 ### Per-release Testing
+
+As [per-sprint testing](#per-sprint-testing), except that _user testing_
+should be comprehensive, with less focus on changes from the specific
+sprint or release.
+
+A release is not closed until both categories have been performed on
+the latest snapshot of the software, _and_ no issues labelled as
+["blocker" or "critical"](https://github.com/nasa/openmctweb/blob/master/CONTRIBUTING.md#issue-reporting)
+remain open.

From ab075e9ad8ffce531e0ed7f4549c842dafb4b3c6 Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Fri, 27 Nov 2015 11:08:47 -0800
Subject: [PATCH 07/14] [Documentation] Complete test plan

---
 docs/src/process/testing/plan.md | 40 ++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/docs/src/process/testing/plan.md b/docs/src/process/testing/plan.md
index d46e152eaf..a1c586d5c5 100644
--- a/docs/src/process/testing/plan.md
+++ b/docs/src/process/testing/plan.md
@@ -17,10 +17,50 @@ Testing for Open MCT Web includes:
 
 ### Smoke Testing
 
+Manual, non-rigorous testing of the software and/or specific features
+of interest. Verifies that the software runs and that basic functionality
+is present.
+
 ### Unit Testing
 
+Unit tests are automated tests which exercise individual software
+components. Tests are subject to code review along with the actual
+implementation, to ensure that tests are applicable and useful.
+
+Examples of useful tests:
+
+* Tests which replicate bugs (or their root causes) to verify their resolution.
+* Tests which reflect details from software specifications.
+* Tests which exercise edge or corner cases among inputs.
+* Tests which verify expected interactions with other components in the system.
+
+During automated testing, code coverage metrics will be reported.
+Line coverage must remain at or above 80%.
+
 ### User Testing
 
+User testing is performed at scheduled times involving target users
+of the software or reasonable representatives, along with members of
+the development team exercising known use cases. Users test the
+software directly; the software should be configured as similarly to
+its planned production configuration as is feasible without introducing
+other risks (e.g. damage to data in a production instance.)
+
+User testing will focus on the following activities:
+
+* Verifying issues resolved since the last test session.
+* Checking for regressions in areas related to recent changes.
+* Using major or important features of the software,
+  as determined by the user.
+* General "trying to break things."
+
+Desired outcomes of user testing are:
+
+* Identified software defects.
+* Areas for usability improvement.
+* Feature requests (particularly missed requirements.)
+* Recorded issue verification.
+
 ## Test Performance
 
 Tests are performed at various levels of frequency.

From 55fc60ec8236ec73efd5bec7aa4e53898aa21e63 Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Fri, 27 Nov 2015 11:15:13 -0800
Subject: [PATCH 08/14] [Documentation] Describe long-duration testing

---
 docs/src/process/testing/plan.md | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/docs/src/process/testing/plan.md b/docs/src/process/testing/plan.md
index a1c586d5c5..a8ba9d5b45 100644
--- a/docs/src/process/testing/plan.md
+++ b/docs/src/process/testing/plan.md
@@ -54,6 +54,10 @@ User testing will focus on the following activities:
   as determined by the user.
 * General "trying to break things."
 
+During user testing, users will
+[report issues](https://github.com/nasa/openmctweb/blob/master/CONTRIBUTING.md#issue-reporting)
+as they are encountered.
+
 Desired outcomes of user testing are:
 
 * Identified software defects.
@@ -61,6 +65,21 @@ Desired outcomes of user testing are:
 * Feature requests (particularly missed requirements.)
 * Recorded issue verification.
 
+### Long-duration Testing
+
+Long-duration testing occurs over a twenty-four hour period. The
+software is run in one or more stressing cases representative of expected
+usage. After twenty-four hours, the software is evaluated for:
+
+* Performance metrics: Have memory usage or CPU utilization increased
+  during this time period in unexpected or undesirable ways?
+* Subjective usability: Does the software behave in the same way it did
+  at the start of the test? Is it as responsive?
+
+Any defects or unexpected behavior identified during testing should be
+[reported as issues](https://github.com/nasa/openmctweb/blob/master/CONTRIBUTING.md#issue-reporting)
+and reviewed for severity.
+
 ## Test Performance
 
 Tests are performed at various levels of frequency.

From bd4590ad9da381e5615094fa0ebfbfc44b4e8cbf Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Fri, 27 Nov 2015 11:22:11 -0800
Subject: [PATCH 09/14] [Documentation] Update terminology

Update terminology in Development Cycle to reflect descriptions in
Test Plan.
---
 docs/src/process/cycle.md        | 25 +++++++++++++++----------
 docs/src/process/testing/plan.md |  3 +++
 2 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/docs/src/process/cycle.md b/docs/src/process/cycle.md
index 4a39513a91..e872044078 100644
--- a/docs/src/process/cycle.md
+++ b/docs/src/process/cycle.md
@@ -77,7 +77,7 @@ for the subsequent sprint.
 |:-----:|:-------------------------:|:------:|:---:|:----------------------------:|:-----------:|
 | __1__ | Sprint plan               | Tag-up |     |                              |             |
 | __2__ |                           | Tag-up |     |                              | Code freeze |
-| __3__ | Sprint acceptance testing | Triage |     | _Sprint acceptance testing*_ | Ship        |
+| __3__ | Per-sprint testing        | Triage |     | _Per-sprint testing*_        | Ship        |
 
 * If necessary.
 
@@ -90,8 +90,8 @@ emphasis on testing.
 | Week  | Mon                       | Tue    | Wed | Thu                          | Fri         |
 |-------:|:-------------------------:|:------:|:---:|:----------------------------:|:-----------:|
 | __1__  | Sprint plan               | Tag-up |     |                              | Code freeze |
-| __2__  | Acceptance testing        | Triage |     |                              |             |
-| __3__  | _Acceptance testing*_     | Triage |     | _Acceptance testing*_        | Ship        |
+| __2__  | Per-release testing       | Triage |     |                              |             |
+| __3__  | _Per-release testing*_    | Triage |     | _Per-release testing*_       | Ship        |
 
 * If necessary.
 
@@ -113,7 +113,8 @@ emphasis on testing.
   (and until the end of the sprint) the only changes that should be
   merged into the master branch should directly address issues
   needed to pass acceptance testing.
-* __Acceptance Testing.__ Structured testing with predefined
+* [__Per-release Testing.__](testing/plan.md#per-release-testing)
+  Structured testing with predefined
   success criteria. No release should ship without passing
   acceptance tests. Time is allocated in each sprint for
   subsequent rounds of acceptance testing if issues are identified during a
@@ -122,23 +123,27 @@ emphasis on testing.
   and should be flexible enough to allow changes to plans (e.g.
   deferring delivery of some feature in order to ensure stability
   of other features.) Baseline testing includes:
-  * __Testathon.__ Multi-user testing, involving as many users as
+  * [__Testathon.__](testing/plan.md#user-testing)
+    Multi-user testing, involving as many users as
     is feasible, plus development team. Open-ended; should verify
     completed work from this sprint, test exploratorily for
     regressions, et cetera.
-  * __24-Hour Test.__ A test to verify that the software remains
+  * [__Long-Duration Test.__](testing/plan.md#long-duration-testing) A
+    test to verify that the software remains
    stable after running for longer durations. May include some
    combination of automated testing and user verification (e.g.
    checking to verify that software remains subjectively
    responsive at conclusion of test.)
- * __Automated Testing.__ Automated testing integrated into the + * [__Unit Testing.__](testing/plan.md#unit-testing) + Automated testing integrated into the build. (These tests are verified to pass more often than once per sprint, as they run before any merge to master, but still - play an important role in acceptance testing.) -* __Sprint Acceptance Testing.__ Subset of Acceptance Testing + play an important role in per-release testing.) +* [__Per-sprint Testing.__](testing/plan.md#per-sprint-testing) + Subset of Pre-release Testing which should be performed before shipping at the end of any sprint. Time is allocated for a second round of - Sprint Acceptance Testing if the first round is not passed. + Pre-release Testing if the first round is not passed. * __Triage.__ Team reviews issues from acceptance testing and uses success criteria to determine whether or not they should block release, then formulates a plan to address these issues before diff --git a/docs/src/process/testing/plan.md b/docs/src/process/testing/plan.md index a8ba9d5b45..e9ddf829e1 100644 --- a/docs/src/process/testing/plan.md +++ b/docs/src/process/testing/plan.md @@ -123,6 +123,9 @@ As [per-sprint testing](#per-sprint-testing), except that _user testing_ should be comprehensive, with less focus on changes from the specific sprint or release. +Per-release testing should also include any acceptance testing steps +agreed upon with recipients of the software. 
 
 A release is not closed until both categories have been performed on
 the latest snapshot of the software, _and_ no issues labelled as
 ["blocker" or "critical"](https://github.com/nasa/openmctweb/blob/master/CONTRIBUTING.md#issue-reporting)
 remain open.

From 1731b985fcf76409f90d98854ccff40e7ebb25db Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Fri, 27 Nov 2015 11:30:55 -0800
Subject: [PATCH 10/14] [Documentation] Fix broken markdown tag

---
 docs/src/process/testing/plan.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/src/process/testing/plan.md b/docs/src/process/testing/plan.md
index e9ddf829e1..6b13bdb8e1 100644
--- a/docs/src/process/testing/plan.md
+++ b/docs/src/process/testing/plan.md
@@ -87,7 +87,7 @@ Tests are performed at various levels of frequency.
 * _Per-merge_: Performed before any new changes are integrated into
   the software.
 * _Per-sprint_: Performed at the end of every [sprint](../cycle.md).
-* _Per-release: Performed at the end of every [release](../cycle.md).
+* _Per-release_: Performed at the end of every [release](../cycle.md).

From 92f5d5f19056d9e5d1d17e1fe88ce39f92a61f23 Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Fri, 27 Nov 2015 12:00:25 -0800
Subject: [PATCH 11/14] [Documentation] Replace redundant docs with link

---
 docs/src/process/testing/plan.md | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/docs/src/process/testing/plan.md b/docs/src/process/testing/plan.md
index 6b13bdb8e1..181ba143f4 100644
--- a/docs/src/process/testing/plan.md
+++ b/docs/src/process/testing/plan.md
@@ -27,15 +27,9 @@ Unit tests are automated tests which exercise individual software
 components. Tests are subject to code review along with the actual
 implementation, to ensure that tests are applicable and useful.
 
-Examples of useful tests:
-
-* Tests which replicate bugs (or their root causes) to verify their resolution.
-* Tests which reflect details from software specifications.
-* Tests which exercise edge or corner cases among inputs.
-* Tests which verify expected interactions with other components in the system.
-
-During automated testing, code coverage metrics will be reported.
-Line coverage must remain at or above 80%.
+Unit tests should meet
+[test standards](https://github.com/nasa/openmctweb/blob/master/CONTRIBUTING.md#test-standards)
+as described in the contributing guide.
 
 ### User Testing
 

From b25576aed8d1e95607270514569e68f57b6c5311 Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Fri, 4 Dec 2015 12:20:05 -0800
Subject: [PATCH 12/14] [Documentation] Sketch initial test procedures

Sketch initial test procedures; add explanation on the difference
between sprint and release testing.
---
 docs/src/process/testing/plan.md       |  9 +--
 docs/src/process/testing/procedures.md | 98 +++++++++++++++++++++++++-
 2 files changed, 101 insertions(+), 6 deletions(-)

diff --git a/docs/src/process/testing/plan.md b/docs/src/process/testing/plan.md
index 181ba143f4..fead5f5a50 100644
--- a/docs/src/process/testing/plan.md
+++ b/docs/src/process/testing/plan.md
@@ -100,9 +100,10 @@ Before a sprint is closed, the development team must additionally
 perform:
 
-* _User testing_ (both generally, and for areas which interact with
-  changes made during the sprint.)
-* _Long-duration testing_ (specifically, for 24 hours.)
+* A relevant subset of [_user testing_](procedures.md#user-test-procedures)
+  identified by the acting [project manager](../cycle.md#roles).
+* [_Long-duration testing_](procedures.md#long-duration-testing)
+  (specifically, for 24 hours.)
 
 Issues are reported as a product of both forms of testing.
 
 A sprint is not closed until both categories have been performed on
 
 ### Per-release Testing
 
 As [per-sprint testing](#per-sprint-testing), except that _user testing_
-should be comprehensive, with less focus on changes from the specific
+should cover all test cases, with less focus on changes from the specific
 sprint or release.

diff --git a/docs/src/process/testing/procedures.md b/docs/src/process/testing/procedures.md
index 4bcd39d4a6..60529d4fa7 100644
--- a/docs/src/process/testing/procedures.md
+++ b/docs/src/process/testing/procedures.md
@@ -38,7 +38,7 @@ Prerequisites  | Create a layout, as in MCT-TEST-000Z
 Test input     | Domain object database XYZ
 Instructions   | See below *
 Expectation    | Change to editing context †
-Eval. criteria | Visual insepction
+Eval. criteria | Visual inspection
 
 * Follow the following steps:
 
@@ -72,4 +72,98 @@ a common glossary, to avoid replication of content.
 
 ## Procedures
 
-This section will contain specific test procedures.
+This section will contain specific test procedures. Presently, procedures
+are placeholders describing general patterns for setting up and conducting
+testing.
+
+### User Testing Setup
+
+This procedure describes a general pattern for setting up for user
+testing. Specific deployments should customize this pattern with
+relevant data and any additional steps necessary.
+
+Property       | Value
+---------------|---------------------------------------------------------------
+Test ID        | MCT-TEST-SETUP0 - User Testing Setup
+Relevant reqs. | TBD
+Prerequisites  | Build of relevant components
+Test input     | Exemplary database; exemplary telemetry data set
+Instructions   | See below
+Expectation    | Able to load application in a web browser (Google Chrome)
+Eval. criteria | Visual inspection
+
+Instructions:
+
+1. Start telemetry server.
+2. Start ElasticSearch.
+3. Restore database snapshot to ElasticSearch.
+4. Start telemetry playback.
+5. Start HTTP server for client sources.
+
+### User Test Procedures
+
+Specific user test cases have not yet been authored. In their absence,
+user testing is conducted by:
+
+* Reviewing the text of issues from the issue tracker to understand the
+  desired behavior, and exercising this behavior in the running application.
+  (For instance, by following steps to reproduce from the original issue.)
+  * Issues which appear to be resolved should be marked as such with comments
+    on the original issue (e.g. "verified during user testing MM/DD/YYYY".)
+  * Issues which appear not to have been resolved should be reopened with an
+    explanation of what unexpected behavior has been observed.
+  * In cases where an issue appears resolved as-worded but other related
+    undesirable behavior is observed during testing, a new issue should be
+    opened, and linked to from a comment in the original issues.
+* General usage of new features and/or existing features which have undergone
+  recent changes. Defects or problems with usability should be documented
+  by filing issues in the issue tracker.
+* Open-ended testing to discover defects, identify usability issues, and
+  generate feature requests.
+
+### Long-Duration Testing
+
+The purpose of long-duration testing is to identify performance issues
+and/or other defects which are sensitive to the amount of time the
+application is kept running. (Memory leaks, for instance.)
+
+Property       | Value
+---------------|---------------------------------------------------------------
+Test ID        | MCT-TEST-LDT0 - Long-duration Testing
+Relevant reqs. | TBD
+Prerequisites  | MCT-TEST-SETUP0
+Test input     | (As for test setup.)
+Instructions   | See "Instructions" below *
+Expectation    | See "Expectations" below †
+Eval. criteria | Visual inspection
+
+* Instructions:
+
+1. Start `top` or a similar tool to measure CPU usage and memory utilization.
+2. Open several user-created displays (as many as would be realistically
+   opened during actual usage in a stressing case) in some combination of
+   separate tabs and windows (approximately as many tabs-per-window as
+   total windows.)
+3. Ensure that playback data is set to run continuously for at least 24 hours
+   (e.g. on a loop.)
+4. Record CPU usage and memory utilization.
+5. In at least one tab, try some general user interface gestures and make
+   notes about the subjective experience of using the application. (Particularly,
+   the degree of responsiveness.)
+6. Leave client displays open for 24 hours.
+7. Record CPU usage and memory utilization again.
+8. Make additional notes about the subjective experience of using the
+   application (again, particularly responsiveness.)
+9. Check logs for any unexpected warnings or errors.
+
+&daggers; Expectations:
+
+* At the end of the test, CPU usage and memory usage should both be similar
+  to their levels at the start of the test.
+* At the end of the test, subjective usage of the application should not
+  be observably different from the way it was at the start of the test.
+  (In particular, responsiveness should not decrease.)
+* Logs should not contain any unexpected warnings or errors ("expected"
+  warnings or errors are those that have been documented and prioritized
+  as known issues, or those that are explained by transient conditions
+  external to the software, such as network outages.)
\ No newline at end of file

From e0608ddee076b7bda0c8ea4a2f45fc3449f14252 Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Fri, 4 Dec 2015 13:25:20 -0800
Subject: [PATCH 13/14] [Documentation] Simplify Author Checklist

...as discussed in https://github.com/nasa/openmctweb/pull/349
---
 CONTRIBUTING.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 7e8e15eae3..e726007800 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -291,8 +291,7 @@ checklist.)
 1. Changes address original issue?
 2. Unit tests included and/or updated with changes?
 3. Command line build passes?
-4. Expect to pass code review?
-5. Changes have been smoke-tested?
+4. Changes have been smoke-tested?
 
 ### Reviewer Checklist

From 989c937ce1d1d5e9b9d79a8c4397688783286a20 Mon Sep 17 00:00:00 2001
From: Victor Woeltjen
Date: Fri, 4 Dec 2015 13:29:19 -0800
Subject: [PATCH 14/14] [Documentation] Fix typo in HTML entity

---
 docs/src/process/testing/procedures.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/src/process/testing/procedures.md b/docs/src/process/testing/procedures.md
index 60529d4fa7..b33f88c6d1 100644
--- a/docs/src/process/testing/procedures.md
+++ b/docs/src/process/testing/procedures.md
@@ -156,7 +156,7 @@ Eval. criteria | Visual inspection
    application (again, particularly responsiveness.)
 9. Check logs for any unexpected warnings or errors.
 
-&daggers; Expectations:
+† Expectations:
 
 * At the end of the test, CPU usage and memory usage should both be similar
   to their levels at the start of the test.
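The long-duration procedure in patch 12 above (MCT-TEST-LDT0) records CPU usage and memory utilization by hand, using `top`, at the start and end of the 24-hour window. As a rough illustration only — this script is not part of the patch series, and the helper names and the 1.5x growth threshold are hypothetical — the two samples could be automated along these lines, assuming a POSIX `ps` is available:

```python
import subprocess
import time


def sample(pid):
    """Return (%cpu, rss_in_kb) for `pid`, as reported by POSIX `ps`."""
    out = subprocess.check_output(
        ["ps", "-o", "%cpu=,rss=", "-p", str(pid)], text=True
    )
    cpu, rss = out.split()
    return float(cpu), int(rss)


def long_duration_check(pid, duration_s, max_rss_growth=1.5):
    """Sample `pid` at the start and end of the test window and flag
    resident-set growth beyond `max_rss_growth` (an assumed threshold)."""
    start = sample(pid)
    time.sleep(duration_s)  # 24 * 60 * 60 for the real 24-hour test
    end = sample(pid)
    return {
        "start": start,
        "end": end,
        "ok": end[1] <= start[1] * max_rss_growth,
    }
```

For the real test, `pid` would be the browser or server process under test and `duration_s` would be 86400; the subjective-responsiveness notes and log review from the written procedure still have to be done by hand.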