`fprettify` comes with **unit tests**, which typically test the expected formatting of smaller code snippets. These tests are entirely self-contained, insofar as the Fortran code, the `fprettify` options and the expected formatting results are all set within the respective test method. `fprettify` also lets you configure **integration tests** to test the expected formatting of external Fortran code. **Unit tests** are relevant when adding new features to `fprettify` that can easily be tested with small code snippets. **Integration tests** are relevant when an entire Fortran module or program is needed to test a specific feature, or when an external repository relying on `fprettify` should be checked regularly for invariance under `fprettify` formatting.
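
To make the distinction concrete, here is a minimal sketch of what a self-contained unit test can look like. It assumes that `fprettify.reformat_ffile` accepts file-like objects; the test name, the Fortran snippet and the expected result are illustrative, not taken from the actual test suite:

```python
# Minimal sketch of a self-contained unit test (illustrative only):
# the Fortran code, the options and the expected result all live
# inside the test method itself.
import io
import unittest

import fprettify


class ExampleFormattingTest(unittest.TestCase):
    def test_indent_of_program_body(self):
        before = "program p\ninteger :: i\nend program\n"
        # assumed expected result: body indented by the default 3 spaces
        after = "program p\n   integer :: i\nend program\n"
        outfile = io.StringIO()
        fprettify.reformat_ffile(io.StringIO(before), outfile)
        self.assertEqual(outfile.getvalue(), after)


if __name__ == "__main__":
    unittest.main()
```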

### How to locally run all unit and integration tests

1. `regular` integration test suite: `./run_tests.py -s regular`
2. `cron` integration test suite (optional, takes a long time to execute): `./run_tests.py -s cron`

### How to debug test failures

Unit test failures should be rather easy to understand because the test output shows the diff of the actual vs. expected result. For integration tests, we don't store the Fortran code (as it's usually external to this repository), and the result is verified by comparing the SHA256 checksums of the actual vs. expected result. The test output shows the diff of the actual result vs. the previously tested version of the code. Thus, in order to obtain the diff of the actual vs. expected result, the following steps need to be executed (a sketch of this workflow follows the list):

1. Run `./run_tests.py -s` followed by the name of the failed test suite. Check
   the test output for lines mentioning test failures.
2. Check out a version of `fprettify` for which the test passes.
3. Run the integration test(s) via `./run_tests.py -n top-level-dir` (replacing
   `top-level-dir` with the actual directory mentioned in the test output).
4. Now the `diff` shown in the test output refers to the expected result.
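
As a sketch, a debugging session for a failure in the `regular` suite could look like this. The commit hash is a placeholder for whatever version of `fprettify` last passed the test, `top-level-dir` stands for the directory named in your actual test output, and the final re-run is one plausible reading of step 4:

```sh
# step 1: reproduce the failure and note the directory it mentions
./run_tests.py -s regular
# step 2: check out a version of fprettify for which the test passes
git checkout <last-good-commit>
# step 3: re-run just that test code; this records the expected result
./run_tests.py -n top-level-dir
# step 4: back on your version, the diff now shows actual vs. expected
git checkout -
./run_tests.py -n top-level-dir
```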

### How to add integration tests

This is a mechanism to add external code bases (such as entire git repositories
containing Fortran code) as test cases. In order to add a new code base as an
integration test suite, add a new section to
[testsuites.config](fortran_tests/testsuites.config), adhering to the following
format:

```INI
[...] # arbitrary unique section name identifying test code
obtain: ... # Python command to obtain test code base
path: ... # relative path pointing to test code location
suite: ... # which suite this test code should belong to
```

For `suite`, you should pick one of the following test suites:

- `regular`: for small code bases (executed for every pull request)
- `cron`: for large code bases (optional, takes a long time to execute)
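
For illustration, a complete entry could look as follows. The section name, repository URL and the exact form of the `obtain` command are hypothetical assumptions made for this example; consult the existing entries in [testsuites.config](fortran_tests/testsuites.config) for the authoritative conventions:

```INI
[example-fortran]
# hypothetical entry: clones an external repository as test code
obtain: subprocess.check_call(["git", "clone", "--depth", "1", "https://github.com/example/fortran-code.git"])
path: fortran-code
suite: regular
```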

This testing mechanism allows you to easily check `fprettify` against any Fortran project of your choice. If testing fails, please submit an issue!