Merge pull request #349 from nasa/open1573

[Documentation] Document test process

Commit: dd0f9ab74e
@@ -291,7 +291,7 @@ checklist.)

 1. Changes address original issue?
 2. Unit tests included and/or updated with changes?
 3. Command line build passes?
-4. Expect to pass code review?
+4. Changes have been smoke-tested?

 ### Reviewer Checklist
docs/src/process/cycle.md (new file, 161 lines)
@@ -0,0 +1,161 @@
# Development Cycle

Development of Open MCT Web occurs on an iterative cycle of sprints and releases.

* A _sprint_ is three weeks in duration, and represents a set of improvements that can be completed and tested by the development team. Software at the end of the sprint is "semi-stable"; it will have undergone reduced testing and may carry defects or usability issues of lower severity, particularly if there are workarounds.
* A _release_ occurs every four sprints. Releases are stable, and will have undergone full acceptance testing to ensure that the software behaves correctly and usably.
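As a quick illustration of this cadence (the dates below are arbitrary examples, not a real plan), the boundaries of each sprint in a release follow directly from the release start date:

```typescript
// Illustrative only: three-week sprints, four sprints per release,
// so a full release cycle spans twelve weeks.
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;
const SPRINT_WEEKS = 3;
const SPRINTS_PER_RELEASE = 4;

function sprintBoundaries(releaseStart: Date): { start: Date; end: Date }[] {
    return Array.from({ length: SPRINTS_PER_RELEASE }, (_, i) => ({
        start: new Date(releaseStart.getTime() + i * SPRINT_WEEKS * WEEK_MS),
        end: new Date(releaseStart.getTime() + (i + 1) * SPRINT_WEEKS * WEEK_MS)
    }));
}

// Example: a release starting on an arbitrary Monday.
console.log(sprintBoundaries(new Date("2015-10-05")));
```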
## Roles

The sprint process assumes the presence of a __project manager.__ The project manager is responsible for making tactical decisions about what development work will be performed, and for coordinating with stakeholders to arrive at higher-level strategic decisions about desired functionality and characteristics of the software, major external milestones, and so forth.

In the absence of a dedicated project manager, this role may be rotated among members of the development team on a per-sprint basis.

Responsibilities of the project manager include:

* Maintaining (with the agreement of stakeholders) a "road map" of work planned for future releases/sprints; this should be higher-level, usually expressed as "themes", with just enough specificity to gauge the feasibility of plans, relate work back to milestones, and identify longer-term dependencies.
* Determining (with assistance from the rest of the team) which issues to work on in a given sprint and how they shall be assigned.
* Pre-planning subsequent sprints to ensure that all members of the team always have a clear direction.
* Scheduling and/or ensuring adherence to [process points](#process-points).
* Responding to changes within the sprint (shifting priorities, new issues) and re-allocating work for the sprint as needed.
## Sprint Calendar

Certain [process points](#process-points) are regularly scheduled in the sprint cycle.

### Sprints by Release

Allocation of work among sprints should be planned relative to release goals and milestones. As a general guideline, higher-risk work (large new features which may carry new defects, major refactoring, design changes with uncertain effects on usability) should be allocated to earlier sprints, allowing for time in later sprints to ensure stability.

| Sprint | Focus                                                  |
|:------:|:-------------------------------------------------------|
| __1__  | Prototyping, design, experimentation.                  |
| __2__  | New features, refinements, enhancements.               |
| __3__  | Feature completion, low-risk enhancements, bug fixing. |
| __4__  | Stability & quality assurance.                         |
### Sprints 1-3

The first three sprints of a release are primarily centered around development work, with regular acceptance testing in the third week. During this third week, the top priority should be passing acceptance testing (e.g. by resolving any blockers found); any resources not needed for this effort should be used to begin work for the subsequent sprint.

| Week  | Mon                | Tue    | Wed | Thu                   | Fri         |
|:-----:|:------------------:|:------:|:---:|:---------------------:|:-----------:|
| __1__ | Sprint plan        | Tag-up |     |                       |             |
| __2__ |                    | Tag-up |     |                       | Code freeze |
| __3__ | Per-sprint testing | Triage |     | _Per-sprint testing*_ | Ship        |

* If necessary.
### Sprint 4

The software must be stable at the end of the fourth sprint; because of this, the fourth sprint is scheduled differently, with a heightened emphasis on testing.

| Week  | Mon                    | Tue    | Wed | Thu                    | Fri         |
|:-----:|:----------------------:|:------:|:---:|:----------------------:|:-----------:|
| __1__ | Sprint plan            | Tag-up |     |                        | Code freeze |
| __2__ | Per-release testing    | Triage |     |                        |             |
| __3__ | _Per-release testing*_ | Triage |     | _Per-release testing*_ | Ship        |

* If necessary.
## Process Points

* __Sprint plan.__ The project manager allocates issues based on theme(s) for the sprint, then reviews with the team. Each team member should have roughly two weeks of work allocated (to allow time in the third week for testing of completed work.)
    * The project manager should also sketch out the subsequent sprint so that the team may begin work on it during the third week, since testing and blocker resolution are unlikely to require all available resources.
* __Tag-up.__ Check-in and status update among the development team. The plan for the sprint may be amended as needed.
* __Code freeze.__ Any new work from this sprint (features, bug fixes, enhancements) must be integrated by the end of the second week of the sprint. After code freeze (and until the end of the sprint), the only changes merged into the master branch should be those which directly address issues needed to pass acceptance testing.
* [__Per-release Testing.__](testing/plan.md#per-release-testing) Structured testing with predefined success criteria. No release should ship without passing acceptance tests. Time is allocated in each sprint for subsequent rounds of acceptance testing if issues are identified during a prior round. Specific details of acceptance testing need to be agreed upon with relevant stakeholders and delivery recipients, and should be flexible enough to allow changes to plans (e.g. deferring delivery of some feature in order to ensure stability of other features.) Baseline testing includes:
    * [__Testathon.__](testing/plan.md#user-testing) Multi-user testing, involving as many users as is feasible, plus the development team. Open-ended; should verify completed work from this sprint, test exploratorily for regressions, et cetera.
    * [__Long-Duration Test.__](testing/plan.md#long-duration-testing) A test to verify that the software remains stable after running for longer durations. May include some combination of automated testing and user verification (e.g. checking that the software remains subjectively responsive at the conclusion of the test.)
    * [__Unit Testing.__](testing/plan.md#unit-testing) Automated testing integrated into the build. (These tests are verified to pass more often than once per sprint, as they run before any merge to master, but they still play an important role in per-release testing.)
* [__Per-sprint Testing.__](testing/plan.md#per-sprint-testing) Subset of per-release testing which should be performed before shipping at the end of any sprint. Time is allocated for a second round of per-sprint testing if the first round is not passed.
* __Triage.__ The team reviews issues from acceptance testing and uses success criteria to determine whether or not they should block release, then formulates a plan to address these issues before the next round of acceptance testing. The focus here should be on ensuring the software passes that testing in order to ship on time; it may be preferable, for example, to disable malfunctioning components and fix them in a subsequent sprint.
* __Ship.__ Tag a code snapshot that has passed acceptance testing and deploy that version. (This applies only if acceptance testing has passed by this point; if it has not, the team will need to make ad hoc decisions with stakeholders, e.g. "extend the sprint" or "defer shipment until the end of the next sprint.")
@@ -1,156 +1,13 @@
(Removed: the previous contents of this file, the "Development Cycle" document now relocated to docs/src/process/cycle.md above, which used earlier names for the testing process points: "Acceptance Testing", "Sprint Acceptance Testing", "24-Hour Test", and "Automated Testing".)
# Development Process

The process used to develop Open MCT Web is described in the following documents:

* [Development Cycle](cycle.md): Describes how and when specific process points are repeated during development.
* Testing is described in two documents:
    * The [Test Plan](testing/plan.md) summarizes the approaches used to test Open MCT Web.
    * The [Test Procedures](testing/procedures.md) document what specific tests are performed to verify correctness, and how they should be carried out.
docs/src/process/testing/plan.md (new file, 127 lines)
@@ -0,0 +1,127 @@
# Test Plan

## Test Levels

Testing for Open MCT Web includes:

* _Smoke testing_: Brief, informal testing to verify that no major issues or regressions are present in the software, or in specific features of the software.
* _Unit testing_: Automated verification of the performance of individual software components.
* _User testing_: Testing with a representative user base to verify that the application behaves usably and as specified.
* _Long-duration testing_: Testing which takes place over a long period of time to detect issues which are not readily noticeable during shorter test periods.

### Smoke Testing

Manual, non-rigorous testing of the software and/or specific features of interest. Verifies that the software runs and that basic functionality is present.
### Unit Testing

Unit tests are automated tests which exercise individual software components. Tests are subject to code review along with the actual implementation, to ensure that tests are applicable and useful.

Unit tests should meet [test standards](https://github.com/nasa/openmctweb/blob/master/CONTRIBUTING.md#test-standards) as described in the contributing guide.
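For illustration only, a Jasmine-style unit test might look like the sketch below; the `TelemetryFormatter` component and its behavior are hypothetical, not taken from the Open MCT Web codebase:

```typescript
// Hypothetical example of a unit test; names are illustrative.
describe("TelemetryFormatter", () => {
    let formatter: { format(value: number): string };

    beforeEach(() => {
        // Stand-in for the real component that would be under test.
        formatter = { format: (value: number) => value.toFixed(2) };
    });

    it("formats numeric values to two decimal places", () => {
        expect(formatter.format(3.14159)).toEqual("3.14");
    });

    it("pads whole numbers with trailing zeroes", () => {
        expect(formatter.format(3)).toEqual("3.00");
    });
});
```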
### User Testing

User testing is performed at scheduled times involving target users of the software or reasonable representatives, along with members of the development team exercising known use cases. Users test the software directly; the software should be configured as similarly to its planned production configuration as is feasible without introducing other risks (e.g. damage to data in a production instance.)

User testing will focus on the following activities:

* Verifying issues resolved since the last test session.
* Checking for regressions in areas related to recent changes.
* Using major or important features of the software, as determined by the user.
* General "trying to break things."

During user testing, users will [report issues](https://github.com/nasa/openmctweb/blob/master/CONTRIBUTING.md#issue-reporting) as they are encountered.

Desired outcomes of user testing are:

* Identified software defects.
* Areas for usability improvement.
* Feature requests (particularly missed requirements.)
* Recorded issue verification.
### Long-duration Testing

Long-duration testing occurs over a twenty-four hour period. The software is run in one or more stressing cases representative of expected usage. After twenty-four hours, the software is evaluated for:

* Performance metrics: Have memory usage or CPU utilization increased during this time period in unexpected or undesirable ways?
* Subjective usability: Does the software behave in the same way it did at the start of the test? Is it as responsive?

Any defects or unexpected behavior identified during testing should be [reported as issues](https://github.com/nasa/openmctweb/blob/master/CONTRIBUTING.md#issue-reporting) and reviewed for severity.
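To make "unexpected or undesirable" concrete, it can help to compare start-of-test and end-of-test measurements against an agreed tolerance. A minimal sketch, assuming a purely illustrative 10% threshold (no such threshold is defined by this plan):

```typescript
// Illustrative check only; real evaluation remains a judgment call.
interface Sample {
    cpuPercent: number;
    memoryMB: number;
}

function withinTolerance(start: Sample, end: Sample, tolerance = 0.10): boolean {
    const cpuOk = end.cpuPercent <= start.cpuPercent * (1 + tolerance);
    const memoryOk = end.memoryMB <= start.memoryMB * (1 + tolerance);
    return cpuOk && memoryOk;
}

// Example: CPU and memory both grew by less than 10%.
console.log(withinTolerance(
    { cpuPercent: 12, memoryMB: 480 },
    { cpuPercent: 13, memoryMB: 520 }
)); // true
```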
## Test Performance

Tests are performed at varying frequencies:

* _Per-merge_: Performed before any new changes are integrated into the software.
* _Per-sprint_: Performed at the end of every [sprint](../cycle.md).
* _Per-release_: Performed at the end of every [release](../cycle.md).

### Per-merge Testing

Before changes are merged, the author of the changes must perform:

* _Smoke testing_ (both generally, and for areas which interact with the new changes.)
* _Unit testing_ (as part of the automated build step.)

Changes are not merged until the author has affirmed that both forms of testing have been performed successfully; this is documented by the [Author Checklist](https://github.com/nasa/openmctweb/blob/master/CONTRIBUTING.md#author-checklist).
### Per-sprint Testing

Before a sprint is closed, the development team must additionally perform:

* A relevant subset of [_user testing_](procedures.md#user-test-procedures) identified by the acting [project manager](../cycle.md#roles).
* [_Long-duration testing_](procedures.md#long-duration-testing) (specifically, for 24 hours.)

Issues are reported as a product of both forms of testing.

A sprint is not closed until both categories have been performed on the latest snapshot of the software, _and_ no issues labelled as ["blocker"](https://github.com/nasa/openmctweb/blob/master/CONTRIBUTING.md#issue-reporting) remain open.
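The "no open blockers" gate lends itself to a quick scripted check against the issue tracker. A sketch using the public GitHub issues API (assuming Node 18+ for the global `fetch`; unauthenticated requests are rate-limited, and the repository path is the one used by the links in this document):

```typescript
// Lists open issues labelled "blocker"; a sprint should not be closed
// while this prints a non-empty list.
const url =
    "https://api.github.com/repos/nasa/openmctweb/issues?labels=blocker&state=open";

async function openBlockers(): Promise<void> {
    const response = await fetch(url, {
        headers: { Accept: "application/vnd.github+json" }
    });
    const issues: { number: number; title: string }[] = await response.json();
    console.log(`${issues.length} open blocker(s)`);
    for (const issue of issues) {
        console.log(`#${issue.number}: ${issue.title}`);
    }
}

openBlockers();
```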
### Per-release Testing

As with [per-sprint testing](#per-sprint-testing), except that _user testing_ should cover all test cases, with less focus on changes from the specific sprint or release.

Per-release testing should also include any acceptance testing steps agreed upon with recipients of the software.

A release is not closed until both categories have been performed on the latest snapshot of the software, _and_ no issues labelled as ["blocker" or "critical"](https://github.com/nasa/openmctweb/blob/master/CONTRIBUTING.md#issue-reporting) remain open.
docs/src/process/testing/procedures.md (new file, 169 lines)
@@ -0,0 +1,169 @@
# Test Procedures

## Introduction

This document is intended to be used:

* By testers, to verify that Open MCT Web behaves as specified.
* By the development team, to document new test cases and to provide guidance on how to author these.
## Writing Procedures

### Template

Procedures for individual tests should use the following template, adapted from <https://swehb.nasa.gov/display/7150/SWE-114>.

Property       | Value
---------------|---------------------------------------------------------------
Test ID        |
Relevant reqs. |
Prerequisites  |
Test input     |
Instructions   |
Expectation    |
Eval. criteria |

For multi-line descriptions, use an asterisk or similar indicator to refer to a longer-form description below.
#### Example Procedure - Edit a Layout

Property       | Value
---------------|---------------------------------------------------------------
Test ID        | MCT-TEST-000X - Edit a layout
Relevant reqs. | MCT-EDIT-000Y
Prerequisites  | Create a layout, as in MCT-TEST-000Z
Test input     | Domain object database XYZ
Instructions   | See below *
Expectation    | Change to editing context †
Eval. criteria | Visual inspection

* Follow these steps:

1. Verify that the created layout is currently navigated-to, as in MCT-TEST-00ZZ.
2. Click the Edit button, identified by a pencil icon and the text "Edit" displayed on hover.

† The right-hand viewing area should be surrounded by a dashed blue border when a domain object is being edited.
### Guidelines

Test procedures should be written assuming minimal prior knowledge of the application: non-standard terms should only be used when they are documented in [the glossary](#glossary), and shorthands used for user actions should be accompanied by useful references to test procedures describing those actions (when available) or descriptions in user documentation.

Test cases should be narrow in scope; if a list of steps is excessively long (or must be written vaguely to be kept short), it should be broken down into multiple tests which reference one another.

All requirements satisfied by Open MCT Web should be verifiable using one or more test procedures.

## Glossary

This section will contain terms used in test procedures. This may link to a common glossary, to avoid replication of content.
## Procedures

This section will contain specific test procedures. Presently, procedures are placeholders describing general patterns for setting up and conducting testing.
### User Testing Setup

These procedures describe a general pattern for setting up for user testing. Specific deployments should customize this pattern with relevant data and any additional steps necessary.

Property       | Value
---------------|---------------------------------------------------------------
Test ID        | MCT-TEST-SETUP0 - User Testing Setup
Relevant reqs. | TBD
Prerequisites  | Build of relevant components
Test input     | Exemplary database; exemplary telemetry data set
Instructions   | See below
Expectation    | Able to load application in a web browser (Google Chrome)
Eval. criteria | Visual inspection

Instructions:

1. Start the telemetry server.
2. Start ElasticSearch.
3. Restore the database snapshot to ElasticSearch.
4. Start telemetry playback.
5. Start the HTTP server for client sources.
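Before testing begins, it can be worth confirming that the services started above are actually reachable. A minimal sketch, assuming Node 18+ for the global `fetch`; the URLs and ports are placeholders that depend on the specific deployment:

```typescript
// Placeholder endpoints; substitute the deployment's actual hosts and ports.
const services = [
    { name: "ElasticSearch", url: "http://localhost:9200/" },
    { name: "Client HTTP server", url: "http://localhost:8080/" }
];

async function checkServices(): Promise<void> {
    for (const { name, url } of services) {
        try {
            const response = await fetch(url);
            console.log(`${name}: HTTP ${response.status}`);
        } catch (err) {
            console.error(`${name}: unreachable (${err})`);
        }
    }
}

checkServices();
```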
### User Test Procedures

Specific user test cases have not yet been authored. In their absence, user testing is conducted by:

* Reviewing the text of issues from the issue tracker to understand the desired behavior, and exercising this behavior in the running application (for instance, by following steps to reproduce from the original issue.)
    * Issues which appear to be resolved should be marked as such with comments on the original issue (e.g. "verified during user testing MM/DD/YYYY".)
    * Issues which appear not to have been resolved should be reopened with an explanation of what unexpected behavior has been observed.
    * In cases where an issue appears resolved as-worded but other related undesirable behavior is observed during testing, a new issue should be opened, and linked to from a comment on the original issue.
* General usage of new features and/or existing features which have undergone recent changes. Defects or problems with usability should be documented by filing issues in the issue tracker.
* Open-ended testing to discover defects, identify usability issues, and generate feature requests.
### Long-Duration Testing

The purpose of long-duration testing is to identify performance issues and/or other defects which are sensitive to the amount of time the application is kept running (memory leaks, for instance.)

Property       | Value
---------------|---------------------------------------------------------------
Test ID        | MCT-TEST-LDT0 - Long-duration Testing
Relevant reqs. | TBD
Prerequisites  | MCT-TEST-SETUP0
Test input     | (As for test setup.)
Instructions   | See "Instructions" below *
Expectation    | See "Expectations" below †
Eval. criteria | Visual inspection
* Instructions:

1. Start `top` or a similar tool to measure CPU usage and memory utilization.
2. Open several user-created displays (as many as would realistically be opened during actual usage in a stressing case) in some combination of separate tabs and windows (approximately as many tabs per window as total windows.)
3. Ensure that playback data is set to run continuously for at least 24 hours (e.g. on a loop.)
4. Record CPU usage and memory utilization. (A sketch for sampling heap usage from within the browser appears after this list.)
5. In at least one tab, try some general user interface gestures and make notes about the subjective experience of using the application (particularly, the degree of responsiveness.)
6. Leave client displays open for 24 hours.
7. Record CPU usage and memory utilization again.
8. Make additional notes about the subjective experience of using the application (again, particularly responsiveness.)
9. Check logs for any unexpected warnings or errors.
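To supplement `top` with measurements from inside the browser itself, a small snippet can be pasted into the Chrome DevTools console of one open tab to sample JS heap usage over the course of the test. This sketch relies on `performance.memory`, a non-standard, Chrome-only API (which suits the Chrome-based setup above); the five-minute sampling interval is arbitrary:

```typescript
// Records one JS heap sample every five minutes; inspect heapSamples
// at the end of the test to compare usage at start and end.
const heapSamples: { time: string; usedMB: number }[] = [];

setInterval(() => {
    const memory = (performance as any).memory;
    if (memory) {
        const sample = {
            time: new Date().toISOString(),
            usedMB: Math.round(memory.usedJSHeapSize / (1024 * 1024))
        };
        heapSamples.push(sample);
        console.log(sample);
    }
}, 5 * 60 * 1000);
```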
† Expectations:

* At the end of the test, CPU usage and memory usage should both be similar to their levels at the start of the test.
* At the end of the test, subjective usage of the application should not be observably different from the way it was at the start of the test. (In particular, responsiveness should not decrease.)
* Logs should not contain any unexpected warnings or errors ("expected" warnings or errors are those that have been documented and prioritized as known issues, or those that are explained by transient conditions external to the software, such as network outages.)