
Conversation

@pseewald (Collaborator) commented Nov 9, 2025

I finally found time to wrap up the refactoring of the testing mechanism, so that it is more maintainable and its results are more transparent. The new mechanism is documented in README.md. This fixes #44, fixes #140 and fixes #134.

This overwrites input Fortran files, which can be useful for
- testing idempotency (by running the tests twice; see the sketch below)
- tracking formatting changes when tests are failing (by running the tests twice with different fprettify versions)
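
A minimal sketch of the idempotency check, assuming a hypothetical input file name; this is an illustration, not the project's actual test code (fprettify formats a file in place when given its name):

    import subprocess
    from pathlib import Path

    path = Path("example.f90")   # hypothetical input file

    # First pass: format in place and record the result.
    subprocess.run(["fprettify", str(path)], check=True)
    first_pass = path.read_text()

    # Second pass: formatting already-formatted code should be a no-op.
    subprocess.run(["fprettify", str(path)], check=True)
    second_pass = path.read_text()

    assert first_pass == second_pass, "fprettify is not idempotent on this file"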

Dump original Fortran files into timestamped backup directory
This bug caused two test cases to be nondeterministic:
- RosettaCodeData/Task/Cartesian-product-of-two-or-more-lists/Fortran/cartesian-product-of-two-or-more-lists.f
- RosettaCodeData/Task/Evolutionary-algorithm/Fortran/evolutionary-algorithm-1.f
This regex caused fprettify to get stuck on the file
c2k/src/beta_gamma_psi.F
Also remove lengthy tests from regular testing.
@coveralls commented Nov 9, 2025

Pull Request Test Coverage Report for Build 19276268481

Details

  • 474 of 484 (97.93%) changed or added relevant lines in 4 files are covered.
  • 21 unchanged lines in 3 files lost coverage.
  • Overall coverage increased (+4.6%) to 96.521%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| fprettify/__init__.py | 88 | 90 | 97.78% |
| fprettify/tests/fortrantests.py | 141 | 149 | 94.63% |

| Files with Coverage Reduction | New Missed Lines | % |
| --- | --- | --- |
| fprettify/__init__.py | 2 | 97.94% |
| fprettify/version.py | 6 | 0.0% |
| fprettify/_version.py | 13 | 0.0% |

| Totals | Coverage Status |
| --- | --- |
| Change from base Build 10803218841 | 4.6% |
| Covered Lines | 1526 |
| Relevant Lines | 1581 |

💛 - Coveralls

run_tests.py Outdated
Comment on lines 35 to 44
parser.add_argument("-n", "--name", type=str, help="select tests by name (sections in testsuites.config).")

suite_default = ["unittests", "builtin"]
parser.add_argument("-s", "--suite", type=str, default=suite_default, help="select suite.", choices=["unittests", "builtin", "regular", "cron", "custom"], action='append')

args = parser.parse_args()

suite = unittest.TestLoader().loadTestsFromTestCase(FPrettifyTestCase)
unittest.TextTestRunner(verbosity=2).run(suite)
# remove defaults if user has specified anything
if args.suite[:2] == suite_default and len(args.suite) > 2:
    args.suite = args.suite[2:]


I am not sure I completely understand what you are trying to achieve here, but it seems a bit like a hack. Wouldn't this be equivalent?

parser.add_argument(
    "-s", "--suite",
    nargs="+",
    default=["unittests", "builtin"],
    choices=["unittests", "builtin", "regular", "cron", "custom"],
    help="select suite."
)
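
As a side note on why the two variants behave differently (a general argparse fact, not part of the discussion above): with action='append' and a list default, argparse appends user-supplied values to the default list, which is why the original code had to strip the default manually, whereas nargs='+' simply replaces the default. A minimal standalone demonstration:

    import argparse

    # action='append' keeps the default entries and appends to them.
    p1 = argparse.ArgumentParser()
    p1.add_argument("-s", "--suite", default=["unittests", "builtin"], action="append")
    print(p1.parse_args(["-s", "regular"]).suite)   # ['unittests', 'builtin', 'regular']

    # nargs='+' replaces the default with whatever the user passes.
    p2 = argparse.ArgumentParser()
    p2.add_argument("-s", "--suite", nargs="+", default=["unittests", "builtin"])
    print(p2.parse_args(["-s", "regular"]).suite)   # ['regular']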

@pseewald (Collaborator, Author) replied:

You're right, that's more direct; I'll implement your suggestion.


Looks good now (a26d994)!

README.md Outdated
1. one or more unit tests are added which test formatting of small Fortran code
snippets, covering all relevant aspects of the added features.
2. if the changes lead to failures of existing tests, these test failures
should be carefully examinated. Only if the test failures are due to
Contributor:

'examinated' -> 'examined'

README.md Outdated
in = "Some Fortran code"
out = "Same Fortran code after fprettify formatting"

# seleced fprettify command line arguments, as documented in "fprettify.py -h":
Contributor:

'seleced' -> 'selected'
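
For orientation, a minimal sketch of how such an in/out pair could be checked; testsuites.config is the file named in the -n help text above, while the section name, the quoting of the values and the way fprettify is invoked here are illustrative assumptions rather than the project's actual test runner:

    import configparser
    import subprocess

    config = configparser.ConfigParser()
    config.read("testsuites.config")

    section = config["example-test"]        # hypothetical section name
    source = section["in"].strip('"')       # strip the surrounding quotes
    expected = section["out"].strip('"')

    # Assumes fprettify reads stdin and writes to stdout when no file name is given.
    result = subprocess.run(
        ["fprettify"], input=source, capture_output=True, text=True, check=True
    )
    assert result.stdout.strip() == expected.strip()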

else
l = 1
endif
end if
Contributor:

I didn't check the actual output, but shouldn't more than just the case change? There are more occurrences below.

@pseewald (Collaborator, Author) commented Nov 11, 2025:

This test case uses default formatting and, on top of that, it lowercases keywords, as specified by a special annotation at the top of the file:

! fprettify: --case 1 1 1 1

This is in line with how all unit tests are written: all options are tested on top of the default options.

FYI: this file has changed because only with the changes in this PR are we able to test files with custom options; see also my comment in #49.
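
As a side note, a minimal sketch of how such a per-file annotation could be picked up by a test runner; the function name and the regex are illustrative, not fprettify's actual implementation:

    import re
    import shlex

    def extract_fprettify_args(path):
        # Return options from a leading "! fprettify: ..." annotation, if any.
        with open(path, encoding="utf-8") as fh:
            first_line = fh.readline()
        match = re.match(r"\s*!\s*fprettify:\s*(.*)", first_line)
        return shlex.split(match.group(1)) if match else []

    # A file starting with "! fprettify: --case 1 1 1 1" yields
    # ['--case', '1', '1', '1', '1'], to be passed on top of the default options.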

Contributor:

That's what I was thinking had happened. Sorry, I should have added that I just wanted to double-check that that's the reason. I was surprised that the splitting of endif into end if would only show up there.

@max-models left a comment:

I couldn't find any breaking errors. Here and there I saw some points of Python code style that I would have done differently, but I don't think it's worth quibbling over. The PR looks good to me now :)

output.
"""
if cls.n_parsefail + cls.n_internalfail > 0:
    format = "{:<20}{:<6}"


You are shadowing the built-in function format() with a local variable named format; maybe it would be better to rename the variable to fmt or format_string.
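
A minimal, self-contained sketch of the suggested rename; the stand-in class and the printed labels are hypothetical, only the attribute names come from the quoted snippet:

    class _Stats:                 # hypothetical stand-in for the test-case class
        n_parsefail = 1
        n_internalfail = 0

    cls = _Stats

    if cls.n_parsefail + cls.n_internalfail > 0:
        fmt = "{:<20}{:<6}"       # renamed from 'format' so the built-in is not shadowed
        print(fmt.format("parse failures:", cls.n_parsefail))
        print(fmt.format("internal failures:", cls.n_internalfail))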



Development

Successfully merging this pull request may close these issues:

- Streamline and document testing mechanism
- bug: exceptions thrown during CI
- Improve testing
