Commit c173257 (parent 2e97c9c): update README.md

README.md: 71 additions & 33 deletions
[...] unwanted side effects. It is expected that before merging a pull request [...] intended changes of `fprettify` defaults, or because of bug fixes, the expected test results can be updated.

### How to add a unit test

Can the new feature be reasonably covered by small code snippets (< 10 lines)?
- **Yes**: add a test by starting from the following skeleton, and by adding the code to the file `fprettify/tests/unittests.py`:

```python
def test_something(self):
    ...
```

Then run `./run_tests.py -s unittests` and check in the output that the newly added unit test passes.
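
For orientation, a filled-in version of the skeleton could look like the sketch below. This is not taken from the real test suite: the argument order of `assert_fprettify_result` is assumed here (extra command-line options, unformatted input, expected output); only the fact that the expected result is the third argument is stated in the test-failure section further below, and the Fortran snippet and its formatted form are purely illustrative.

```python
def test_assignment_whitespace(self):
    # Hypothetical example, not part of the actual fprettify test suite.
    # Assumed argument order: (extra CLI options, unformatted input, expected output);
    # the expected output is the third argument, which you would update if
    # fprettify defaults change intentionally.
    self.assert_fprettify_result(
        [],                                       # no extra fprettify options
        "program demo\nx=1\nend program\n",       # unformatted input (illustrative)
        "program demo\n   x = 1\nend program\n",  # expected formatted output (illustrative)
    )
```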

- **No**: add a test based on an example Fortran source file: add the Fortran file to `examples/in`, and the reformatted `fprettify` output to `examples/out`. If the test requires non-default `fprettify` options, specify these options as an annotation `! fprettify:` followed by the command-line arguments at the beginning of the Fortran file (a sketch of this annotation format follows below). Then you'll need to manually remove `fortran_tests/test_code/examples` to make sure that the test configuration will be updated with the changes from `examples`.
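
To make the annotation format concrete, the sketch below shows how such a first line could be split into command-line arguments. The helper is hypothetical (it is not part of fprettify or its test harness), and the options shown are just examples of real fprettify flags.

```python
import shlex

def options_from_annotation(first_line):
    """Hypothetical helper (not part of fprettify or its test harness): split a
    leading '! fprettify:' annotation into a list of command-line arguments."""
    marker = "! fprettify:"
    line = first_line.strip()
    if line.startswith(marker):
        return shlex.split(line[len(marker):])
    return []

# First line of a hypothetical examples/in/some_example.f90:
print(options_from_annotation("! fprettify: --indent 2 --whitespace 1"))
# prints: ['--indent', '2', '--whitespace', '1']
```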

Then run `./run_tests.py -s builtin`, and check that the output mentions the newly added example with `checksum new ok`. Check that a new line containing the checksum for this example has been added to the file `fortran_tests/test_results/expected_results`, and commit this change along with your example. Rerun `./run_tests.py -s builtin` and check that the output mentions the newly added example with `checksum ok`.
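
The `checksum ...` messages refer to SHA256 digests of the formatted output recorded in `fortran_tests/test_results/expected_results`. The snippet below is only a sketch of the idea, not the harness implementation; how the digests are actually computed and stored is an assumption here.

```python
import hashlib

def sha256_of_formatted_output(path):
    """Illustrative only: SHA256 hex digest of a formatted Fortran file."""
    with open(path, "rb") as handle:
        return hashlib.sha256(handle.read()).hexdigest()

# Conceptually: the digest of the freshly formatted example is compared with the
# digest recorded for it in fortran_tests/test_results/expected_results.
# A matching digest is reported as "checksum ok", a mismatch as "checksum FAIL",
# and an example without a recorded digest as "checksum new ok".
```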

### How to add integration tests

This is a mechanism for adding external code bases (such as entire git repositories containing Fortran code) as test cases. In order to add a new code base as an integration test suite, add a new section to [testsuites.config](fortran_tests/testsuites.config), adhering to the following format (an illustrative example section follows the list of suites below):

```INI
[...] # arbitrary unique section name identifying test code
obtain: ... # Python command to obtain test code base
path: ... # relative path pointing to test code location
suite: ... # which suite this test code should belong to
```

For `suite`, you should pick one of the following test suites:
- `regular`: for small code bases (executed for every pull request)
- `cron`: for larger code bases (executed nightly)
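
As a concrete illustration, the hypothetical section below (the section name, `obtain` command and `path` are placeholders, not a real entry in [testsuites.config](fortran_tests/testsuites.config)) parses cleanly with Python's `configparser`, the usual reader for INI-style files of this shape; whether the test harness itself uses `configparser` is not stated here.

```python
import configparser

# Hypothetical section; project name, obtain command and path are placeholders.
EXAMPLE = """
[my_fortran_project]
obtain: clone('https://example.org/my_fortran_project.git')
path: my_fortran_project
suite: regular
"""

parser = configparser.ConfigParser()
parser.read_string(EXAMPLE)
section = parser["my_fortran_project"]
print(section["obtain"], section["path"], section["suite"], sep="\n")
```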

### How to locally run all unit and integration tests

- unit tests: `./run_tests.py -s unittests`
- builtin examples integration tests: `./run_tests.py -s builtin`
- `regular` integration test suite: `./run_tests.py -s regular`
- `cron` integration test suite (optional, takes a long time to execute): `./run_tests.py -s cron`
- `custom`: a dedicated test suite for quick testing; it shouldn't be committed.

### How to locally run selected unit or integration tests

- unit tests: run
  `python -m unittest -v fprettify.tests.unittests.FprettifyUnitTestCase.test_xxx`
  (replacing `test_xxx` with the actual name of the test method)
- integration tests: run
  - a specific suite (`unittests`, `builtin`, `regular`, `cron` or `custom`):
    `./run_tests.py -s ...`
  - tests belonging to a config section (see [testsuites.config](fortran_tests/testsuites.config)):
    `./run_tests.py -n ...`

### How to deal with test failures

Test failures are always due to `fprettify`-formatted code being different from the expected result. To examine what has changed, proceed as follows:
- Unit tests: failures should be rather easy to understand, because the test output shows the diff of the actual vs. expected result.
- Integration tests: we don't store the expected version of the Fortran code; instead, we compare SHA256 checksums of the actual vs. expected result. The test output shows the diff of the actual result vs. the *previous* version of the code (that is, the version before `fprettify` was applied). Thus, in order to obtain the diff of the actual vs. the *expected* result, execute the following steps:

1. Run `./run_tests.py -s` followed by the name of the failed test suite. Check the test output for lines mentioning test failures, such as:
   `Test top-level-dir/subdir/file.f (fprettify.tests.fortrantests.FprettifyIntegrationTestCase) ... checksum FAIL`.
2. Check out the reference version of `fprettify` for which the test passes (normally, the `develop` branch).
3. Run the integration test(s) via `./run_tests.py -n top-level-dir` (replacing `top-level-dir` with the actual directory mentioned in the test output).
4. Check out the version of `fprettify` for which the test failed and run the integration tests again.
5. The `diff` shown in the test output now shows the exact changes that caused the test to fail.

If you decide to accept the changes as new test references, proceed as follows:
- Unit tests: update the expected test result within the respective test method (the third argument to `self.assert_fprettify_result`).
- Integration tests: run `./run_tests.py ... -r` and commit the updated `fortran_tests/test_results/expected_results`. Then run `./run_tests.py ...` and check that the tests pass now.

[![Code Climate](https://codeclimate.com/github/pseewald/fprettify/badges/gpa.svg)](https://codeclimate.com/github/pseewald/fprettify)
