[Documentation] Sketch initial test procedures
Sketch initial test procedures; add an explanation of the difference between sprint and release testing.
commit b25576aed8 (parent 92f5d5f190)
@@ -100,9 +100,10 @@ by the [Author Checklist](https://github.com/nasa/openmctweb/blob/master/CONTRIB
Before a sprint is closed, the development team must additionally
perform:

* _User testing_ (both generally, and for areas which interact with
changes made during the sprint.)
* _Long-duration testing_ (specifically, for 24 hours.)
* A relevant subset of [_user testing_](procedures.md#user-test-procedures)
identified by the acting [project manager](../cycle.md#roles).
* [_Long-duration testing_](procedures.md#long-duration-testing)
(specifically, for 24 hours.)

Issues are reported as a product of both forms of testing.

@@ -114,7 +115,7 @@ remain open.
### Per-release Testing

As [per-sprint testing](#per-sprint-testing), except that _user testing_
should be comprehensive, with less focus on changes from the specific
should cover all test cases, with less focus on changes from the specific
sprint or release.

Per-release testing should also include any acceptance testing steps

@@ -38,7 +38,7 @@ Prerequisites | Create a layout, as in MCT-TEST-000Z
Test input | Domain object database XYZ
Instructions | See below *
Expectation | Change to editing context †
Eval. criteria | Visual insepction
Eval. criteria | Visual inspection

* Perform the following steps:

@@ -72,4 +72,98 @@ a common glossary, to avoid replication of content.

## Procedures

This section will contain specific test procedures.
This section will contain specific test procedures. Presently, procedures
are placeholders describing general patterns for setting up and conducting
testing.

### User Testing Setup

This procedure describes a general pattern for setting up user
testing. Specific deployments should customize this pattern with
relevant data and any additional steps necessary.

Property | Value
---------------|---------------------------------------------------------------
Test ID | MCT-TEST-SETUP0 - User Testing Setup
Relevant reqs. | TBD
Prerequisites | Build of relevant components
Test input | Exemplary database; exemplary telemetry data set
Instructions | See below
Expectation | Able to load application in a web browser (Google Chrome)
Eval. criteria | Visual inspection

Instructions:

1. Start telemetry server.
2. Start ElasticSearch.
3. Restore database snapshot to ElasticSearch.
4. Start telemetry playback.
5. Start HTTP server for client sources.
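
The steps above could be scripted for repeatability. The sketch below is only
illustrative: the local ElasticSearch instance, the snapshot repository name,
the port, and the telemetry server and playback script names are assumptions
for a particular deployment, not part of the procedure itself.

```sh
#!/bin/sh
# Hypothetical helper for MCT-TEST-SETUP0. Script names, the snapshot
# repository, and ports are placeholders; substitute deployment values.
set -e

ES_URL="http://localhost:9200"       # assumed local ElasticSearch instance
SNAPSHOT_REPO="test_backups"         # assumed snapshot repository name
SNAPSHOT_NAME="exemplary_database"   # assumed exemplary database snapshot
CLIENT_DIR="$HOME/openmctweb"        # assumed checkout of the client sources

./start-telemetry-server.sh &        # 1. Start telemetry server (placeholder).

elasticsearch -d                     # 2. Start ElasticSearch as a daemon.
sleep 10                             #    Crude wait for it to come up.

# 3. Restore the exemplary database snapshot to ElasticSearch.
curl -XPOST "$ES_URL/_snapshot/$SNAPSHOT_REPO/$SNAPSHOT_NAME/_restore"

./start-telemetry-playback.sh &      # 4. Start telemetry playback (placeholder).

# 5. Serve the client sources over HTTP (port is arbitrary; Python 2 shown,
#    use `python3 -m http.server 8080` with Python 3).
cd "$CLIENT_DIR" && python -m SimpleHTTPServer 8080
```

The application should then load in Google Chrome at the served address, per
the expectation in the table above.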

### User Test Procedures

Specific user test cases have not yet been authored. In their absence,
user testing is conducted by:

* Reviewing the text of issues from the issue tracker to understand the
desired behavior, and exercising this behavior in the running application.
(For instance, by following steps to reproduce from the original issue.)
* Issues which appear to be resolved should be marked as such with comments
on the original issue (e.g. "verified during user testing MM/DD/YYYY".)
* Issues which appear not to have been resolved should be reopened with an
explanation of what unexpected behavior has been observed.
* In cases where an issue appears resolved as-worded but other related
undesirable behavior is observed during testing, a new issue should be
opened and linked to from a comment in the original issue.
* General usage of new features and/or existing features which have undergone
recent changes. Defects or problems with usability should be documented
by filing issues in the issue tracker.
* Open-ended testing to discover defects, identify usability issues, and
generate feature requests.

### Long-Duration Testing

The purpose of long-duration testing is to identify performance issues
and/or other defects which are sensitive to the amount of time the
application is kept running. (Memory leaks, for instance.)

Property | Value
---------------|---------------------------------------------------------------
Test ID | MCT-TEST-LDT0 - Long-duration Testing
Relevant reqs. | TBD
Prerequisites | MCT-TEST-SETUP0
Test input | (As for test setup.)
Instructions | See "Instructions" below *
Expectation | See "Expectations" below †
Eval. criteria | Visual inspection

* Instructions:

1. Start `top` or a similar tool to measure CPU usage and memory utilization.
2. Open several user-created displays (as many as would be realistically
opened during actual usage in a stressing case) in some combination of
separate tabs and windows (approximately as many tabs-per-window as
total windows.)
3. Ensure that playback data is set to run continuously for at least 24 hours
(e.g. on a loop.)
4. Record CPU usage and memory utilization.
5. In at least one tab, try some general user interface gestures and make
notes about the subjective experience of using the application. (Particularly,
the degree of responsiveness.)
6. Leave client displays open for 24 hours.
7. Record CPU usage and memory utilization again.
8. Make additional notes about the subjective experience of using the
application (again, particularly responsiveness.)
9. Check logs for any unexpected warnings or errors.
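
For steps 1, 4, and 7, CPU and memory figures can be captured consistently by
sampling at a fixed interval rather than reading `top` by hand. The sketch
below is one way to do this on a Linux-like system; the browser process name,
sampling interval, and log file name are assumptions, not part of the
procedure.

```sh
#!/bin/sh
# Append one line per sample to a log: timestamp, total %CPU, and total
# resident memory (KiB) across all matching browser processes.
PROCESS_NAME="chrome"     # assumed browser process name
INTERVAL_SECONDS=300      # sample every five minutes
LOG_FILE="ldt-resources.log"

while true; do
    ps -C "$PROCESS_NAME" -o %cpu=,rss= \
        | awk -v ts="$(date '+%Y-%m-%dT%H:%M:%S')" \
              '{ cpu += $1; rss += $2 } END { print ts, cpu, rss }' \
        >> "$LOG_FILE"
    sleep "$INTERVAL_SECONDS"
done
```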

† Expectations:

* At the end of the test, CPU usage and memory usage should both be similar
to their levels at the start of the test.
* At the end of the test, subjective usage of the application should not
be observably different from the way it was at the start of the test.
(In particular, responsiveness should not decrease.)
* Logs should not contain any unexpected warnings or errors ("expected"
warnings or errors are those that have been documented and prioritized
as known issues, or those that are explained by transient conditions
external to the software, such as network outages.)
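
The first expectation can be checked roughly against the resource log produced
by the sampling sketch above, by comparing the earliest and latest samples.
The 20% growth threshold below is an arbitrary illustration, not a documented
acceptance criterion.

```sh
#!/bin/sh
# Compare resident memory between the first and last samples in the log
# written by the sampling script above (fields: timestamp, %CPU, RSS in KiB).
LOG_FILE="ldt-resources.log"

awk '
    NR == 1 { start_rss = $3 }              # baseline memory from first sample
    END {
        end_rss = $3                        # field values of the last sample
        growth = start_rss > 0 ? (end_rss - start_rss) / start_rss : 0
        printf "start %d KiB, end %d KiB, growth %.1f%%\n", start_rss, end_rss, growth * 100
        if (growth > 0.20)
            print "WARNING: memory use grew noticeably over the test"
    }
' "$LOG_FILE"
```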