28 commits
e99c881
 Tests: change directory structure of automatic test detection
pseewald Sep 14, 2022
fae7d60
new testing mechanism working now
pseewald Sep 26, 2022
b9c5e7d
Unit tests: use fprettify as a module instead of relying on subprocesses
pseewald Oct 3, 2022
ce9d619
finish core work on test reorganization
pseewald Oct 24, 2022
e1d7aa7
suppress exceptions from test output (closes #134)
pseewald Oct 25, 2022
1576631
polishing test suites
pseewald May 4, 2024
1303fe9
Splitting test class into unit and integration tests
pseewald May 4, 2024
02728b2
minor cleanups and test fixes
pseewald May 10, 2024
bb668c5
More test fixes
pseewald May 10, 2024
74cb5ad
More test fixes
pseewald May 11, 2024
72bc6ce
Updating test results
pseewald May 11, 2024
24e71d9
Setting limit to number of lines per file (to speedup cron tests)
pseewald May 11, 2024
7bfac51
Allow to set options as annotations within Fortran file
pseewald Aug 3, 2024
2856e8e
cosmetic
pseewald Aug 3, 2024
3063fdd
Fix bug related to first line
pseewald Aug 3, 2024
eec327e
Fix regex
pseewald Aug 4, 2024
39a7f00
Update cron test results
pseewald Aug 4, 2024
15b6ec2
Clean up file
pseewald Sep 15, 2024
65237e8
Explaining test mechanism in README.md
pseewald Sep 15, 2024
2e97c9c
Further working on README.md for tests
pseewald Sep 15, 2024
c173257
update README.md
pseewald Feb 6, 2025
f23c7e6
minor change
pseewald Feb 6, 2025
c1cbf90
removed todo
pseewald Feb 6, 2025
7f487e6
Merge branch 'master' into 140-testing-mechanism
pseewald Nov 9, 2025
bb37a80
Adapt test workflows to new test mechanism
pseewald Nov 9, 2025
c7ddfff
fix issues with gh actions test workflow
pseewald Nov 9, 2025
a26d994
simplify cli as suggested by @max-models
pseewald Nov 11, 2025
6120962
 fix typos (as suggested by @dbroemmel)
pseewald Nov 11, 2025
45 changes: 12 additions & 33 deletions .github/workflows/test.yml
@@ -8,59 +8,38 @@ on:
workflow_dispatch:

jobs:
resources:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4

- name: Create resource cache
id: cache
uses: actions/cache@v4
with:
path: ./fortran_tests/before/*/
key: resources-${{ github.event_name }}

- name: Prepare tests (default)
if: ${{ steps.cache.outputs.cache-hit != 'true' }}
run: |
.travis/prep_regular.sh

- name: Prepare tests (schedule)
if: ${{ steps.cache.outputs.cache-hit != 'true' && github.event_name == 'schedule' }}
run: |
.travis/prep_cron.sh

pip:
needs:
- resources
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest]
python: ["3.7", "3.8", "3.9", "3.10", "3.11-dev"]
python: ["3.10", "3.11", "3.12"]

steps:
- name: Checkout code
uses: actions/checkout@v4

- name: Load resources
uses: actions/cache@v4
with:
path: ./fortran_tests/before/*/
key: resources-${{ github.event_name }}

- uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python }}
cache: pip

- name: Install project & dependencies
run: pip install .[dev]

- name: Run tests
- name: Run unit and short integration tests
run: |
coverage run -p --source=fprettify run_tests.py -s unittests -s builtin -s regular

- name: Run long integration tests
if: ${{ github.event_name == 'schedule' }}
run: |
coverage run --source=fprettify setup.py test
coverage run -p --source=fprettify run_tests.py -s cron

- name: Combine coverage files
run: coverage combine

- name: Coverage upload
run: coveralls --service=github
118 changes: 117 additions & 1 deletion README.md
@@ -153,4 +153,120 @@ A = [-1, 10, 0, &

## Contributing / Testing

The testing mechanism allows you to easily test fprettify with any Fortran project of your choice. Simply clone or copy your entire project into `fortran_tests/before` and run `python setup.py test`. The directory `fortran_tests/after` contains the test output (reformatted Fortran files). If testing fails, please submit an issue!
When contributing new features by opening a pull request, testing is essential
to verify that the new features behave as intended, and that there are no
unwanted side effects. It is expected that before merging a pull request:
1. one or more unit tests are added which test formatting of small Fortran code
snippets, covering all relevant aspects of the added features.
2. if the changes lead to failures of existing tests, these test failures
should be carefully examined. Only if the test failures are due to
intended changes of `fprettify` defaults, or because of bug fixes, the
expected test results can be updated.


### How to add a unit test

Can the new feature be reasonably covered by small code snippets (< 10 lines)?
- **Yes**: add a test to the file `fprettify/tests/unittests.py`, starting from the following skeleton:

```python
def test_something(self):
    """short description"""

    instring = "Some Fortran code"
    outstring = "Same Fortran code after fprettify formatting"

    # selected fprettify command line arguments, as documented in "fprettify.py -h":
    opt = ["arg 1", "value for arg 1", "arg2", ...]

    # helper function checking that fprettify output is equal to "outstring":
    self.assert_fprettify_result(opt, instring, outstring)
```

Then run `./run_tests.py -s unittests` and check in the output that the newly added unit test passes (a filled-in example of this skeleton is shown after this list).


- **No**: add a test based on an example Fortran source file: add the Fortran file
to `examples/in`, and the reformatted `fprettify` output to `examples/out`.
If the test requires non-default `fprettify` options, specify them in an
annotation at the beginning of the Fortran file: `! fprettify:` followed by
the command-line arguments. Then manually remove
`fortran_tests/test_code/examples` to make sure that the test configuration
is updated with the changes from `examples`.

Then run `./run_tests.py -s builtin`, and check that the output mentions the
newly added example with `checksum new ok`. Check that a new line containing
the checksum for this example has been added to the file
`fortran_tests/test_results/expected_results`, and commit this change along
with your example. Rerun `./run_tests.py -s builtin` and check that the
output mentions the newly added example with `checksum ok`.
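
For reference, here is a filled-in version of the unit-test skeleton from the **Yes** branch above. This is a minimal sketch: the test name, the Fortran snippet, and the exact expected output are illustrative, and the `--indent` option should be checked against `fprettify.py -h` before relying on it.

```python
def test_if_block_indentation(self):
    """hypothetical example: indent the body of an IF block by 2 spaces"""

    instring = ("if (x > 0) then\n"
                "y = 1\n"
                "end if\n")
    outstring = ("if (x > 0) then\n"
                 "  y = 1\n"
                 "end if\n")

    # "--indent 2" requests an indentation width of 2 characters
    opt = ["--indent", "2"]

    # the expected output string should be verified by running the test once
    self.assert_fprettify_result(opt, instring, outstring)
```

As with any new unit test, run `./run_tests.py -s unittests` afterwards and confirm that it passes.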


### How to add integration tests

This is a mechanism to add external code bases (such as entire git repositories
containing Fortran code) as test cases. To add a new code base as an
integration test, add a new section to
[testsuites.config](fortran_tests/testsuites.config), adhering to the following
format:

```INI
[...] # arbitrary unique section name identifying test code
obtain: ... # Python command to obtain test code base
path: ... # relative path pointing to test code location
suite: ... # which suite this test code should belong to
```

For `suite`, you should pick one of the following test suites:
- `regular`: for small code bases (executed for every pull request)
- `cron`: for larger code bases (executed nightly)


### How to locally run all unit and integration tests

- unit tests: `./run_tests.py -s unittests`
- builtin examples integration tests: `./run_tests.py -s builtin`
- `regular` integration test suite: `./run_tests.py -s regular`
- `cron` integration test suite (optional, takes a long time to execute): `./run_tests.py -s cron`
- `custom` test suite for quick local testing (its test code shouldn't be committed): `./run_tests.py -s custom`


### How to locally run selected unit or integration tests

- unit tests: run
`python -m unittest -v fprettify.tests.unittests.FprettifyUnitTestCase.test_xxx`
(replacing `test_xxx` with the actual name of the test method)
- integration tests: run
- a specific suite (`unittests`, `builtin`, `regular`, `cron` or `custom`):
`./run_tests.py -s ...`
- tests belonging to a config section (see [testsuites.config](fortran_tests/testsuites.config)):
`./run_tests.py -n ...`


### How to deal with test failures

Test failures are always due to fprettify-formatted code being different than
expected. To examine what has changed, proceed as follows:
- Unit tests: failures should be rather easy to understand because the test
output shows the diff of the actual vs. expected result.
- Integration tests: we don't store the expected version of the Fortran code;
instead, we compare SHA256 checksums of the actual vs. expected result (a
conceptual sketch of this comparison is shown after the steps below). The
test output shows the diff of the actual result vs. the *previous* version of
the code (that is, the version before `fprettify` was applied). Thus, in
order to obtain the diff of the actual vs. the *expected* result, the
following steps need to be executed:

1. Run `./run_tests.py -s` followed by the name of the failed test suite. Check
the test output for lines mentioning test failures such as:
`Test top-level-dir/subdir/file.f (fprettify.tests.fortrantests.FprettifyIntegrationTestCase) ... checksum FAIL`.
2. Check out the reference version of `fprettify` for which the test passes (normally, `develop` branch).
3. Run the integration test(s) via `./run_tests.py -n top-level-dir` (replacing
`top-level-dir` with the actual directory mentioned in the test output).
4. Check out the version of `fprettify` for which the test failed and run the integration tests again.
5. Now the `diff` shown in the test output shows the exact changes which caused the test to fail.
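
The checksum comparison mentioned above is conceptually simple; the following sketch only illustrates the idea and is not fprettify's actual implementation (the file path and the `expected` value are hypothetical):

```python
import hashlib

def sha256_of_file(path):
    """Return the SHA256 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# An integration test passes when the checksum of the freshly formatted file
# matches the checksum recorded in fortran_tests/test_results/expected_results.
actual = sha256_of_file("fortran_tests/test_code/project/some_file.f90")  # hypothetical path
expected = "..."  # looked up in expected_results
assert actual == expected, "checksum FAIL"
```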

If you decide to accept the changes as new test references, proceed as follows:
- Unit tests: update the expected test result within the respective test method (the third argument to `self.assert_fprettify_result`).
- Integration tests: run `./run_tests.py ... -r` and commit the updated `fortran_tests/test_results/expected_results`. Then
rerun `./run_tests.py ...` and check that the tests now pass.

1 change: 0 additions & 1 deletion examples/example_after.f90

This file was deleted.

1 change: 0 additions & 1 deletion examples/example_before.f90

This file was deleted.

File renamed without changes.
@@ -1,4 +1,4 @@

! fprettify: --case 1 1 1 1
MODULE exAmple
IMPLICIT NONE
PRIVATE
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
@@ -1,4 +1,4 @@

! fprettify: --case 1 1 1 1
module exAmple
implicit none
private
@@ -33,35 +33,35 @@ module exAmple
character(len=*), parameter :: c = 'INTEGER, "PARAMETER"' !should not change case in string
character(len=*), parameter :: d = "INTEGER, 'PARAMETER" !should not change case in string

integer(kind=INT64), parameter :: l64 = 2_INT64
real(kind=REAL64), parameter :: r64a = 2._REAL64
real(kind=REAL64), parameter :: r64b = 2.0_REAL64
real(kind=REAL64), parameter :: r64c = .0_REAL64
real(kind=REAL64), parameter :: r64a = 2.E3_REAL64
real(kind=REAL64), parameter :: r64b = 2.0E3_REAL64
real(kind=REAL64), parameter :: r64c = .0E3_REAL64
integer(kind=int64), parameter :: l64 = 2_int64
real(kind=real64), parameter :: r64a = 2._real64
real(kind=real64), parameter :: r64b = 2.0_real64
real(kind=real64), parameter :: r64c = .0_real64
real(kind=real64), parameter :: r64a = 2.e3_real64
real(kind=real64), parameter :: r64b = 2.0e3_real64
real(kind=real64), parameter :: r64c = .0e3_real64

integer, parameter :: dp = selected_real_kind(15, 307)
type test_type
real(kind=dp) :: r = 1.0D-3
real(kind=dp) :: r = 1.0d-3
integer :: i
end type test_type

contains

subroutine test_routine( &
r, i, j, k, l)
use ISO_FORTRAN_ENV, only: INT64
use iso_fortran_env, only: int64
integer, intent(in) :: r, i, j, k
integer, intent(out) :: l

integer(kind=INT64) :: l64
integer(kind=int64) :: l64

l = test_function(r, i, j, k)

l64 = 2_INT64
if (l .eq. 2) l = max(l64, 2_INT64)
if (l .eq. 2) l = max(l64, 2_INT64)
l64 = 2_int64
if (l .eq. 2) l = max(l64, 2_int64)
if (l .eq. 2) l = max(l64, 2_int64)
if (l .eq. 2) l = max

end &
@@ -84,7 +84,7 @@ pure function test_function(r, i, j, &
l = 0
else
l = 1
endif
end if
Contributor:

I didn't check the actual output, but should more than the case change? There are more below.

@pseewald (Collaborator, Author), Nov 11, 2025:

This test case uses default formatting, and on top of that it lowercases keywords, as specified by a special annotation at the top of the file: `! fprettify: --case 1 1 1 1`. This is in line with how all unit tests are written - all options are tested on top of default options. FYI: this file has changed because only with the changes in this PR are we able to test files with custom options; see also my comment #49 (comment).

Contributor:

That's what I was thinking had happened. Sorry, I should have added that I just wanted to double-check that's the reason. I was surprised the splitting of the endif/end if would only show up there.

end function

end module
@@ -107,12 +107,12 @@ program example_prog
!***************************!
! example 1.1
r = 1; i = -2; j = 3; k = 4; l = 5
r2 = 0.0_DP; r3 = 1.0_DP; r4 = 2.0_DP; r5 = 3.0_DP; r6 = 4.0_DP
r1 = -(r2**i*(r3 + r5*(-r4) - r6)) - 2.E+2
r2 = 0.0_dp; r3 = 1.0_dp; r4 = 2.0_dp; r5 = 3.0_dp; r6 = 4.0_dp
r1 = -(r2**i*(r3 + r5*(-r4) - r6)) - 2.e+2
if (r .eq. 2 .and. r <= 5) i = 3
write (*, *) (merge(3, 1, i <= 2))
write (*, *) test_function(r, i, j, k)
t%r = 4.0_DP
t%r = 4.0_dp
t%i = str_function("t % i = ")

! example 1.2
@@ -164,13 +164,13 @@ program example_prog
do k = 1, 3
if (k == 1) l = l + 1
end do
enddo
endif
enddo do_label
end do
end if
end do do_label
case (2)
l = i + j + k
end select
enddo
end do

! example 2.2
do m = 1, 2
@@ -182,13 +182,13 @@ program example_prog
do my_integer = 1, 1
do j = 1, 2
write (*, *) test_function(m, r, k, l) + i
enddo
enddo
enddo
enddo
enddo
enddo
enddo
end do
end do
end do
end do
end do
end do
end do

! 3) auto alignment for linebreaks !
!************************************!
@@ -249,17 +249,17 @@ program example_prog
l = l + 1
! unindented comment
! indented comment
end do; enddo
end do; end do
elseif (.not. j == 4) then
my_integer = 4
else
write (*, *) " hello"
endif
enddo
end if
end do
case (2)
l = i + j + k
end select
enddo
end do

! example 4.2
if ( &
@@ -279,7 +279,7 @@ program example_prog
end & ! comment
! comment
do
endif
end if

! example 4.3
arr = [1, (/3, 4, &
@@ -294,11 +294,11 @@ program example_prog
endif = 5
else if (endif == 3) then
write (*, *) endif
endif
end if

! example 4.5
do i = 1, 2; if (.true.) then
write (*, *) "hello"
endif; enddo
end if; end do

end program
File renamed without changes.
File renamed without changes.
File renamed without changes.