# Testing

Open MCT Testing is iterating and improving at a rapid pace. This document serves to capture and index existing testing documentation, and to house documentation which has no other obvious location as our testing evolves.
## General Testing Process

Documentation located [here](./docs/src/process/testing/plan.md)
## Unit Testing

Unit testing is an essential part of our test strategy and complements our e2e testing strategy.
#### Unit Test Guidelines

* Unit test specs should reside alongside the source code they test, not in a separate directory.
* Unit test specs for plugins should be defined at the plugin level. Start with one test spec per plugin named `pluginSpec.js`, and as this test spec grows too big, break it up into multiple test specs that logically group related tests.
* Unit tests for API or for utility functions and classes may be defined at a per-source-file level.
* Wherever possible, only use and mock the public API, builtin functions, and UI in your test specs. Do not directly invoke any private functions; i.e., only call or mock functions and objects exposed by `openmct.*` (e.g. `openmct.telemetry`, `openmct.objectViews`, etc.) and builtin browser functions (`fetch`, `requestAnimationFrame`, `setTimeout`, etc.).
* Where builtin functions have been mocked, be sure to clear them between tests.
* Test at an appropriate level of isolation, e.g.:
  * If you're testing a view, you do not need to test the whole application UI; you can just fetch the view provider using the public API and render the view into an element that you have created.
  * You do not need to test that the view switcher works; there should be separate tests for that.
  * You do not need to test that telemetry providers work; you can mock `openmct.telemetry.request()` to feed test data to the view.
  * Use your best judgement when deciding on appropriate scope.
* Automated tests for plugins should start by actually installing the plugin being tested, and then test that installing the plugin adds the desired features and behavior to Open MCT, observing the above rules.
* All variables used in a test spec, including any instances of the Open MCT API, should be declared inside of an appropriate block scope (not at the root level of the source file), and should be initialized in the relevant `beforeEach` block. `beforeEach` is preferable to `beforeAll` to avoid leaking state between tests.
* An `afterEach` or `afterAll` should be used to do any cleanup necessary to prevent leakage of state between test specs. This can happen when functions on `window` are wrapped, or when the URL is changed. [A convenience function](https://github.com/nasa/openmct/blob/master/src/utils/testing.js#L59) is provided for resetting the URL and clearing builtin spies between tests (see the sketch following this list).
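Putting these guidelines together, a minimal plugin spec might look like the sketch below. This is not a verbatim Open MCT example: it assumes the `createOpenMct` and `resetApplicationState` helpers exported from the convenience module linked above, and the plugin name, object type, and expectations are hypothetical.

```js
// A minimal sketch of a pluginSpec.js, assuming helpers from src/utils/testing.js.
// `ExamplePlugin` and the 'example' object type are hypothetical placeholders.
import { createOpenMct, resetApplicationState } from 'utils/testing';

describe('The Example plugin', () => {
  let openmct;

  beforeEach((done) => {
    // Declare and initialize everything inside beforeEach, not at file scope
    openmct = createOpenMct();
    openmct.install(openmct.plugins.ExamplePlugin());

    // Only mock the public API: feed canned data instead of real telemetry providers
    spyOn(openmct.telemetry, 'request').and.returnValue(Promise.resolve([]));

    openmct.on('start', done);
    openmct.startHeadless();
  });

  afterEach(() => {
    // Reset the URL and clear builtin spies so state does not leak between specs
    return resetApplicationState(openmct);
  });

  it('provides a view for example objects', () => {
    const domainObject = {
      identifier: { namespace: '', key: 'test-object' },
      type: 'example'
    };
    // Fetch applicable view providers through the public API rather than importing them directly
    const applicableViews = openmct.objectViews.get(domainObject, [domainObject]);

    expect(applicableViews.length).toBeGreaterThan(0);
  });
});
```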
#### Unit Test Examples

* [Example of an automated test spec for an object view plugin](https://github.com/nasa/openmct/blob/master/src/plugins/telemetryTable/pluginSpec.js)
* [Example of an automated test spec for API](https://github.com/nasa/openmct/blob/master/src/api/time/TimeAPISpec.js)
#### Unit Testing Execution

The unit tests can be executed in one of two ways:

* `npm run test`, which runs the entire suite against headless Chrome
* `npm run test:debug`, for debugging the tests in real time in an active Chrome session
## e2e, performance, and visual testing

Documentation located [here](./e2e/README.md)
## Code Coverage

It's up to the individual developer as to whether they want to add line coverage in the form of a unit test or e2e test.

Line code coverage is generated by our unit tests and e2e tests, then combined by [Codecov.io Flags](https://docs.codecov.com/docs/flags), and finally reported in GitHub PRs by Codecov.io's PR bot. This workflow gives a comprehensive (if flawed) view of line coverage.
### Karma-istanbul

Line coverage is generated by our `karma-coverage-istanbul-reporter` package as defined in our `karma.conf.js` file:
```js
coverageIstanbulReporter: {
  fixWebpackSourcePaths: true,
  skipFilesWithNoCoverage: true,
  dir: 'coverage/unit', // Sets coverage file to be consumed by codecov.io
  reports: ['lcovonly']
},
```
Once the file is generated, it can be published to codecov with:
```json
"cov:unit:publish": "codecov --disable=gcov -f ./coverage/unit/lcov.info -F unit",
```
### e2e

The e2e line coverage is a bit more complex than the karma implementation. This is the general sequence of events (summarized in the sketch after the list):
1. Each e2e suite will start webpack with the `npm run start:coverage` command with config `webpack.coverage.mjs` and the `babel-plugin-istanbul` plugin to generate code coverage during e2e test execution using our custom [baseFixture](./baseFixtures.js).
1. During test case execution, each e2e shard will generate its piece of the larger coverage suite. **This coverage file is not merged**. The raw coverage file is stored in a `.nyc_report` directory.
1. [nyc](https://github.com/istanbuljs/nyc) converts this directory into an `lcov` file with the following command: `npm run cov:e2e:report`
1. Most of the tests focus on chrome/ubuntu at a single resolution. This coverage is published to codecov with `npm run cov:e2e:ci:publish`.
1. The rest of our coverage only appears when run against a persistent datastore (couchdb), non-ubuntu machines, and non-chrome browsers with the `npm run cov:e2e:full:publish` command. Since this happens about once a day, we have leveraged codecov.io's carryforward flag to report on lines covered outside of each commit on an individual PR.
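For reference, a rough local approximation of that sequence, assuming the npm scripts referenced above are present in `package.json`, looks like:

```sh
# Rough local approximation of the CI coverage flow described above.
npm run start:coverage &    # serve the app instrumented with babel-plugin-istanbul
npx playwright test --config=e2e/playwright-ci.config.js --project=chrome
npm run cov:e2e:report      # use nyc to convert the raw .nyc_report output to lcov
npm run cov:e2e:ci:publish  # upload the chrome/ubuntu coverage to codecov.io
```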
### Limitations in our code coverage reporting

Our code coverage implementation has some known limitations:

- [Variability](https://github.com/nasa/openmct/issues/5811)
- [Accuracy](https://github.com/nasa/openmct/issues/7015)
- [Vue instrumentation gaps](https://github.com/nasa/openmct/issues/4973)
## Troubleshooting CI

The following is an evolving guide to troubleshooting CI and PR issues.
### GitHub Checks failing

There are a few reasons that your GitHub PR could be failing beyond simple failed tests.

* Required checks. We're leveraging required checks in GitHub so that we can quickly and precisely control what becomes an informational failure vs a hard requirement. The only way to determine the difference between a required and an informational check is to look for the `(Required)` emblem next to the step details in GitHub Checks.
* Not all required checks are run per commit. You may need to manually trigger additional GitHub Checks with a `pr:<label>` label added to your PR.
### Flaky tests

[CircleCI's test insights feature](https://circleci.com/blog/introducing-test-insights-with-flaky-test-detection/) collects historical data about the individual test results for both unit and e2e tests. Note: only a 14-day window of flake data is available.
### Local=Pass and CI=Fail

Although rare, it is possible that your test can pass locally but fail in CI.
### Reset your workspace

It's possible that you're running with dependencies or a local environment which is out of sync with the branch you're working on. Make sure to execute the following:
```sh
nvm use
npm run clean
npm install
```
#### Run tests in the same container as CI

In extreme cases, tests can fail due to the constraints of running within a container. To execute tests in exactly the same way as they are run in CircleCI:
```sh
# Replace {X.X.X} with the current Playwright version
# from our package.json or CircleCI configuration file
docker run --rm --network host --cpus="2" -v $(pwd):/work/ -w /work/ -it mcr.microsoft.com/playwright:v{X.X.X}-focal /bin/bash
npm install
```
At this point, you're running inside the same container and with 2 CPU cores. You can run the unit tests:

```sh
npm run test
```
or e2e tests:

```sh
npx playwright test --config=e2e/playwright-ci.config.js --project=chrome --grep <the testcase name>
```