140 testing mechanism #188
@@ -153,4 +153,120 @@ A = [-1, 10, 0, &
## Contributing / Testing

The testing mechanism allows you to easily test fprettify with any Fortran project of your choice. Simply clone or copy your entire project into `fortran_tests/before` and run `python setup.py test`. The directory `fortran_tests/after` contains the test output (reformatted Fortran files). If testing fails, please submit an issue!
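As a sketch, this workflow might look as follows from the root of an fprettify checkout (the project URL and directory name are placeholders):

```shell
# copy or clone the project to be tested into fortran_tests/before
git clone https://github.com/user/your-fortran-project.git fortran_tests/before/your-fortran-project

# run the test mechanism; reformatted files end up in fortran_tests/after
python setup.py test
```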
When contributing new features by opening a pull request, testing is essential to verify that the new features behave as intended and that there are no unwanted side effects. Before a pull request is merged, it is expected that:
1. one or more unit tests are added which test the formatting of small Fortran code snippets, covering all relevant aspects of the added features.
2. if the changes lead to failures of existing tests, these failures are carefully examined. The expected test results may only be updated if the failures are due to intended changes of `fprettify` defaults, or due to bug fixes.
### How to add a unit test

Can the new feature be reasonably covered by small code snippets (< 10 lines)?

- **Yes**: add a test to the file `fprettify/tests/unittests.py`, starting from the following skeleton:
```python
def test_something(self):
    """short description"""

    in_str = "Some Fortran code"
    out_str = "Same Fortran code after fprettify formatting"

    # selected fprettify command line arguments, as documented in "fprettify.py -h":
    opt = ["arg 1", "value for arg 1", "arg2", ...]

    # helper function checking that fprettify output is equal to out_str:
    self.assert_fprettify_result(opt, in_str, out_str)
```
Then run `./run_tests.py -s unittests` and check in the output that the newly added unit test passes.
- **No**: add a test by adding an example Fortran source file: add the Fortran file to `examples/in`, and the reformatted `fprettify` output to `examples/out`. If the test requires non-default `fprettify` options, specify these options as an annotation `! fprettify:` followed by the command-line arguments at the beginning of the Fortran file. Then manually remove `fortran_tests/test_code/examples` to make sure that the test configuration is updated with the changes from `examples`.
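For illustration, an example file requiring custom options could begin with such an annotation on its first line (the module name here is hypothetical; the option string matches the `--case 1 1 1 1` annotation used by the existing examples):

```fortran
! fprettify: --case 1 1 1 1
module case_example
   implicit none
   integer, parameter :: n = 1
end module
```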
Then run `./run_tests.py -s builtin` and check that the output mentions the newly added example with `checksum new ok`. Check that a new line containing the checksum for this example has been added to the file `fortran_tests/test_results/expected_results`, and commit this change along with your example. Rerun `./run_tests.py -s builtin` and check that the output mentions the newly added example with `checksum ok`.
### How to add integration tests

This mechanism adds external code bases (such as entire git repositories containing Fortran code) as test cases. To add a new code base as an integration test suite, add a new section to [testsuites.config](fortran_tests/testsuites.config), adhering to the following format:

```ini
[...] # arbitrary unique section name identifying test code
obtain: ... # Python command to obtain test code base
path: ... # relative path pointing to test code location
suite: ... # which suite this test code should belong to
```
For `suite`, pick one of the following test suites:
- `regular`: for small code bases (executed for every pull request)
- `cron`: for larger code bases (executed nightly)
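A hypothetical entry could look like the following; the section name, URL, and path are placeholders, and the exact form of the `obtain` command should be copied from existing entries in [testsuites.config](fortran_tests/testsuites.config):

```ini
[my-fortran-project]
obtain: ... # e.g. a Python command that clones https://github.com/user/my-fortran-project.git
path: my-fortran-project
suite: regular
```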
### How to locally run all unit and integration tests

- unit tests: `./run_tests.py -s unittests`
- builtin examples integration tests: `./run_tests.py -s builtin`
- `regular` integration test suite: `./run_tests.py -s regular`
- `cron` integration test suite (optional, takes a long time to execute): `./run_tests.py -s cron`
- `custom`: a dedicated test suite for quick testing; it shouldn't be committed.
### How to locally run selected unit or integration tests

- unit tests: run `python -m unittest -v fprettify.tests.unittests.FprettifyUnitTestCase.test_xxx` (replacing `test_xxx` with the actual name of the test method)
- integration tests: run
  - a specific suite (`unittests`, `builtin`, `regular`, `cron` or `custom`): `./run_tests.py -s ...`
  - tests belonging to a config section (see [testsuites.config](fortran_tests/testsuites.config)): `./run_tests.py -n ...`
### How to deal with test failures

Test failures are always due to fprettify-formatted code differing from the expected result. To examine what has changed, proceed as follows:
- Unit tests: failures should be fairly easy to understand because the test output shows the diff of the actual vs. expected result.
- Integration tests: we don't store the expected version of the Fortran code; instead we compare SHA256 checksums of the actual vs. expected result. The test output shows the diff of the actual result vs. the *previous* version of the code (that is, the version before `fprettify` was applied). Thus, to obtain the diff of the actual vs. the *expected* result, the following steps need to be executed:
1. Run `./run_tests.py -s` followed by the name of the failed test suite. Check the test output for lines mentioning test failures such as: `Test top-level-dir/subdir/file.f (fprettify.tests.fortrantests.FprettifyIntegrationTestCase) ... checksum FAIL`.
2. Check out the reference version of `fprettify` for which the test passes (normally, the `develop` branch).
3. Run the integration test(s) via `./run_tests.py -n top-level-dir` (replacing `top-level-dir` with the actual directory mentioned in the test output).
4. Check out the version of `fprettify` for which the test failed and run the integration tests again.
5. The `diff` shown in the test output now shows the exact changes which caused the test to fail.
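The checksum comparison used by the integration tests can be sketched as follows; the function names here are illustrative and not fprettify's actual implementation:

```python
import hashlib

def result_checksum(formatted_code: str) -> str:
    """Return the SHA256 hex digest of formatted source code."""
    return hashlib.sha256(formatted_code.encode()).hexdigest()

def matches_expected(formatted_code: str, expected_digest: str) -> bool:
    """Compare the actual result's checksum against the stored expected one."""
    return result_checksum(formatted_code) == expected_digest

# An unchanged result matches its recorded checksum; any reformatting
# difference (e.g. "endif" vs. "end if") produces a different digest.
recorded = result_checksum("end if\n")
print(matches_expected("end if\n", recorded))   # True
print(matches_expected("endif\n", recorded))    # False
```

Because only digests are stored in `fortran_tests/test_results/expected_results`, a failure tells you *that* the output changed but not *how*, which is why the steps above are needed to reconstruct the actual diff.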
If you decide to accept the changes as new test references, proceed as follows:
- Unit tests: update the expected test result within the respective test method (third argument to `self.assert_fprettify_result`).
- Integration tests: run `./run_tests.py ... -r` and commit the updated `fortran_tests/test_results/expected_results`. Then run `./run_tests.py ...` and check that the tests pass now.
This file was deleted.
This file was deleted.
```diff
@@ -1,4 +1,4 @@
 ! fprettify: --case 1 1 1 1
 MODULE exAmple
 IMPLICIT NONE
 PRIVATE
```
```diff
@@ -1,4 +1,4 @@
 ! fprettify: --case 1 1 1 1
 module exAmple
 implicit none
 private
```
```diff
@@ -33,35 +33,35 @@ module exAmple
 character(len=*), parameter :: c = 'INTEGER, "PARAMETER"' !should not change case in string
 character(len=*), parameter :: d = "INTEGER, 'PARAMETER" !should not change case in string

-integer(kind=INT64), parameter :: l64 = 2_INT64
-real(kind=REAL64), parameter :: r64a = 2._REAL64
-real(kind=REAL64), parameter :: r64b = 2.0_REAL64
-real(kind=REAL64), parameter :: r64c = .0_REAL64
-real(kind=REAL64), parameter :: r64a = 2.E3_REAL64
-real(kind=REAL64), parameter :: r64b = 2.0E3_REAL64
-real(kind=REAL64), parameter :: r64c = .0E3_REAL64
+integer(kind=int64), parameter :: l64 = 2_int64
+real(kind=real64), parameter :: r64a = 2._real64
+real(kind=real64), parameter :: r64b = 2.0_real64
+real(kind=real64), parameter :: r64c = .0_real64
+real(kind=real64), parameter :: r64a = 2.e3_real64
+real(kind=real64), parameter :: r64b = 2.0e3_real64
+real(kind=real64), parameter :: r64c = .0e3_real64

 integer, parameter :: dp = selected_real_kind(15, 307)
 type test_type
-real(kind=dp) :: r = 1.0D-3
+real(kind=dp) :: r = 1.0d-3
 integer :: i
 end type test_type

 contains

 subroutine test_routine( &
 r, i, j, k, l)
-use ISO_FORTRAN_ENV, only: INT64
+use iso_fortran_env, only: int64
 integer, intent(in) :: r, i, j, k
 integer, intent(out) :: l

-integer(kind=INT64) :: l64
+integer(kind=int64) :: l64

 l = test_function(r, i, j, k)

-l64 = 2_INT64
-if (l .eq. 2) l = max(l64, 2_INT64)
-if (l .eq. 2) l = max(l64, 2_INT64)
+l64 = 2_int64
+if (l .eq. 2) l = max(l64, 2_int64)
+if (l .eq. 2) l = max(l64, 2_int64)
 if (l .eq. 2) l = max

 end &

@@ -84,7 +84,7 @@ pure function test_function(r, i, j, &
 l = 0
 else
 l = 1
-endif
+end if
```
Contributor:
> I didn't check the actual output, but should more than the case change? There are more below.

Author (Collaborator):
> This test case uses default formatting, and on top of that it lowercases keywords, as specified by a special annotation at the top of the file. This is in line with how all unit tests are written - all options are tested on top of default options. FYI: this file has changed because only with the changes in this PR are we able to test files with custom options; see also my comment #49 (comment)

Contributor:
> That's what I was thinking had happened. Sorry, I should have added that I just wanted to double check that's the reason. I was surprised the splitting of the
```diff
 end function

 end module

@@ -107,12 +107,12 @@ program example_prog
 !***************************!
 ! example 1.1
 r = 1; i = -2; j = 3; k = 4; l = 5
-r2 = 0.0_DP; r3 = 1.0_DP; r4 = 2.0_DP; r5 = 3.0_DP; r6 = 4.0_DP
-r1 = -(r2**i*(r3 + r5*(-r4) - r6)) - 2.E+2
+r2 = 0.0_dp; r3 = 1.0_dp; r4 = 2.0_dp; r5 = 3.0_dp; r6 = 4.0_dp
+r1 = -(r2**i*(r3 + r5*(-r4) - r6)) - 2.e+2
 if (r .eq. 2 .and. r <= 5) i = 3
 write (*, *) (merge(3, 1, i <= 2))
 write (*, *) test_function(r, i, j, k)
-t%r = 4.0_DP
+t%r = 4.0_dp
 t%i = str_function("t % i = ")

 ! example 1.2

@@ -164,13 +164,13 @@ program example_prog
 do k = 1, 3
 if (k == 1) l = l + 1
 end do
-enddo
-endif
-enddo do_label
+end do
+end if
+end do do_label
 case (2)
 l = i + j + k
 end select
-enddo
+end do

 ! example 2.2
 do m = 1, 2

@@ -182,13 +182,13 @@ program example_prog
 do my_integer = 1, 1
 do j = 1, 2
 write (*, *) test_function(m, r, k, l) + i
-enddo
-enddo
-enddo
-enddo
-enddo
-enddo
-enddo
+end do
+end do
+end do
+end do
+end do
+end do
+end do

 ! 3) auto alignment for linebreaks !
 !************************************!

@@ -249,17 +249,17 @@ program example_prog
 l = l + 1
 ! unindented comment
 ! indented comment
-end do; enddo
+end do; end do
 elseif (.not. j == 4) then
 my_integer = 4
 else
 write (*, *) " hello"
-endif
-enddo
+end if
+end do
 case (2)
 l = i + j + k
 end select
-enddo
+end do

 ! example 4.2
 if ( &

@@ -279,7 +279,7 @@ program example_prog
 end & ! comment
 ! comment
 do
-endif
+end if

 ! example 4.3
 arr = [1, (/3, 4, &

@@ -294,11 +294,11 @@ program example_prog
 endif = 5
 else if (endif == 3) then
 write (*, *) endif
-endif
+end if

 ! example 4.5
 do i = 1, 2; if (.true.) then
 write (*, *) "hello"
-endif; enddo
+end if; end do

 end program
```
Reviewer:
> 'examinated' -> 'examined'