RHAIENG-843: re-add pdf-builder on minimal #2592
Conversation
Walkthrough

Adds a pdf-builder stage to the Dockerfile and introduces two new architecture-aware installation scripts (install_texlive.sh, install_pandoc.sh); the Dockerfile switches PDF dependency installation to a cache-mounted, architecture-conditional workflow that uses prebuilt assets for ppc64le or runs the installers otherwise.
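The architecture-conditional workflow described above boils down to a branch on `uname -m` inside the Dockerfile's RUN instruction. A minimal sketch of that decision logic (the function name is hypothetical; this is not the actual Dockerfile contents):

```shell
#!/usr/bin/env bash
# Sketch of the architecture-conditional PDF-dependency install decision:
# ppc64le reuses prebuilt assets from the pdf-builder stage, everything
# else runs the installer scripts directly.
set -euo pipefail

pick_pdf_install_mode() {
  local arch="$1"
  if [ "$arch" = "ppc64le" ]; then
    # ppc64le: copy prebuilt texlive/pandoc out of the cached builder stage
    echo "prebuilt"
  else
    # all other architectures: run install_texlive.sh / install_pandoc.sh
    echo "installers"
  fi
}

pick_pdf_install_mode "$(uname -m)"
```

Inside the real Dockerfile, the `prebuilt` branch would read from the cache mount populated by the pdf-builder stage, while the `installers` branch invokes the scripts at build time.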
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (2 hunks)
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
📚 Learning: 2025-09-05T12:25:09.719Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Applied to files:
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. The issue addresses PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Applied to files:
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. Issue #2435 was created addressing PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Applied to files:
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
🪛 GitHub Actions: Code static analysis
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
[warning] 17-17: Hadolint DL3002: Last USER should not be root.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
🔇 Additional comments (1)
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (1)
92-99: Based on the retrieved learnings and the shell script output, I can now provide a focused verification. The learning from jiridanek indicates this specific area was previously investigated for alternatives to the directory move operation in PR #1260, with issue #1291 created for architectural improvements.

Given that:
- The initial shell script returned no matches for file operations in install_pdf_deps.sh
- Web searches confirm TeX Live installs to /usr/local/texlive/YYYY by default
- The review comment's concerns about directory creation and cache logic are valid
- The learnings indicate this area has ongoing architectural considerations

No issues were found regarding the conditional logic or cache mount configuration itself. However, the review comment asks for manual verification that cannot be fully resolved without direct codebase inspection. The absence of search results suggests the scripts may have different implementations than initially expected, or the queries need adjustment.
Verify cache-mount and conditional logic for PDF dependencies.

The architecture and conditional branching in the RUN instruction appear structurally sound. However, given that the verification script returned no output for typical file operations, I recommend:

- Manually confirm the install_pdf_deps.sh behavior: verify that it correctly installs texlive and pandoc to /usr/local/ destinations compatible with the ppc64le copy logic in lines 95–96.
- Verify pdf-builder stage outputs: ensure that the pdf-builder stage produces /usr/local/texlive and /usr/local/pandoc directories that are correctly mounted and copied.
- Confirm cache intentionality: the else branch (line 98) rebuilds PDF deps without caching on non-ppc64le architectures. Confirm this is intentional, or consider applying the cache-mount to both branches if caching is desired universally.
- Review rw flag necessity: the rw flag on the cache-mount permits write access; ensure this is needed if the script modifies cached content, or remove it for read-only access.
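The first two verification points could be automated with a small smoke check run during the build (a sketch; the paths come from the review, the helper name and the ROOT parameter are hypothetical — the real check would use ROOT=/usr/local):

```shell
#!/usr/bin/env bash
# Sketch: assert that the pdf-builder stage produced the directories the
# ppc64le copy logic expects. The root is parameterized so the check can
# be exercised against any tree.
set -euo pipefail

check_pdf_builder_outputs() {
  local root="$1"
  local d missing=0
  for d in "$root/texlive" "$root/pandoc"; do
    if [ ! -d "$d" ]; then
      echo "missing expected directory: $d" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

Running this at the end of the pdf-builder stage (or right after the cache-mounted copy) would turn a silently wrong layout into an immediate build failure.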
Force-pushed from 5333b13 to e832193 (compare)
Actionable comments posted: 0
♻️ Duplicate comments (1)
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (1)
9-26: Critical: Hadolint DL3002 violation — pdf-builder stage ends with USER 0 (root).

The pdf-builder stage (lines 9–26) ends with USER 0 from line 17 as the active user directive. Hadolint DL3002 flags that the last USER should not be root, even in builder stages. This will fail the pipeline.

Recommended fix: add a `# hadolint ignore=DL3002` comment above the USER 0 directive on line 17 to suppress the warning for this builder-only stage, which never executes in a container and only serves to populate cache.

Alternatively, if your team prefers, add USER 1001 as the final line before the stage ends (line 26), though this is redundant for intermediate stages.

```diff
 # OS Packages needs to be installed as root
+# hadolint ignore=DL3002
 USER 0
```
🧹 Nitpick comments (2)
jupyter/utils/install_texlive.sh (2)
73-82: Minor indentation inconsistency for readability.

Lines 73–76 lose their indentation nesting inside the if block. While syntactically valid, consistent indentation improves clarity. Consider indenting lines 73 and 76 to align with the conditional body.

```diff
 if [[ "$ARCH" == "ppc64le" ]]; then
   # ... earlier lines ...
   ./install-tl --profile=texlive.profile --custom-bin=$TEXLIVE_INSTALL_PREFIX/bin/powerpc64le-unknown-linux-gnu

-# TeX Live binary directory
-TEX_BIN_DIR="/usr/local/texlive/bin/powerpc64le-unknown-linux-gnu"
+  # TeX Live binary directory
+  TEX_BIN_DIR="/usr/local/texlive/bin/powerpc64le-unknown-linux-gnu"

-# Create standard symlink 'linux' → arch-specific folder
-ln -sf "$TEX_BIN_DIR" /usr/local/texlive/bin/linux
+  # Create standard symlink 'linux' → arch-specific folder
+  ln -sf "$TEX_BIN_DIR" /usr/local/texlive/bin/linux

   # Set up environment
   export PATH="$TEXLIVE_INSTALL_PREFIX/bin/linux:$PATH"
   pdflatex --version
   tlmgr --version
 fi
```
18-30: Line 55 has a floating TeX Live installer version; add checksums for reproducibility.

Lines 21–30 and 33 have properly pinned versions (libXmu-devel-1.1.3-8.el9, texlive-20250308), but line 55 downloads a generic install-tl-unx.tar.gz without version-pinning, resolving to the latest daily build. This breaks reproducibility for ppc64le builds.

Additionally, none of the RPM or tarball downloads include integrity checksums, while other build artifacts in the codebase (npm packages, container images) use hash verification. Consider:

- Pin line 55 to a specific TeX Live release date (e.g., install-tl-20250308-unx.tar.gz)
- Add sha256 checksums for RPM downloads to match patterns in get_code_server_rpm.sh
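The checksum suggestion can be implemented with a small helper run right after each download. A sketch of one way to do it (the URL and pinned hash in the usage comment are placeholders, not real values):

```shell
#!/usr/bin/env bash
# Sketch: verify a downloaded artifact against a pinned sha256 before using it.
set -euo pipefail

verify_sha256() {
  local file="$1" expected="$2"
  local actual
  # sha256sum prints "<hash>  <filename>"; keep only the hash field
  actual="$(sha256sum "$file" | awk '{print $1}')"
  if [ "$actual" != "$expected" ]; then
    echo "checksum mismatch for $file: expected $expected, got $actual" >&2
    return 1
  fi
}

# Usage sketch (the hash below is a placeholder, not the real installer hash):
#   curl -fsSLO https://mirror.ctan.org/systems/texlive/tlnet/install-tl-20250308-unx.tar.gz
#   verify_sha256 install-tl-20250308-unx.tar.gz "<pinned-sha256>"
```

Failing the build on a mismatch is the point: a silently changed upstream tarball then becomes a visible, debuggable error instead of a non-reproducible image.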
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (2 hunks)
- jupyter/utils/install_pandoc.sh (1 hunks)
- jupyter/utils/install_texlive.sh (1 hunks)
🧰 Additional context used
🧠 Learnings (6)
📓 Common learnings
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
📚 Learning: 2025-09-05T07:46:50.781Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1179
File: jupyter/utils/install_pandoc.sh:1-1
Timestamp: 2025-09-05T07:46:50.781Z
Learning: jiridanek requested GitHub issue creation during PR #1179 review to explore installing Pandoc from EPEL repository for ppc64le architecture as an alternative to building from source, noting that EPEL packages are acceptable unlike CentOS Stream in red-hat-data-services/notebooks, continuing the established pattern of systematic infrastructure improvements through comprehensive issue tracking.
Applied to files:
jupyter/utils/install_pandoc.sh
📚 Learning: 2025-09-05T07:46:50.781Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1179
File: jupyter/utils/install_pandoc.sh:1-1
Timestamp: 2025-09-05T07:46:50.781Z
Learning: jiridanek requested GitHub issue creation during PR #1179 review to explore installing Pandoc from EPEL repository for ppc64le architecture as an alternative to building from source, noting that EPEL packages are acceptable unlike CentOS Stream in red-hat-data-services/notebooks. Issue #2281 was successfully created with comprehensive problem description covering build complexity concerns, multiple solution options, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
jupyter/utils/install_pandoc.sh
📚 Learning: 2025-09-05T07:27:28.372Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1179
File: jupyter/utils/install_texlive.sh:80-81
Timestamp: 2025-09-05T07:27:28.372Z
Learning: jiridanek requested GitHub issue creation during PR #1179 review to track TeX Live binary directory naming approach discussion for final review during IBM architectures merging process, focusing on validating the $(uname -m)-linux pattern across all IBM architectures (ppc64le, s390x) rather than implementing dynamic discovery alternatives deemed overly complex.
Applied to files:
jupyter/utils/install_texlive.sh
📚 Learning: 2025-09-05T12:25:09.719Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Applied to files:
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-03T13:04:11.835Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1260
File: jupyter/utils/install_pdf_deps.sh:13-27
Timestamp: 2025-07-03T13:04:11.835Z
Learning: jiridanek requested GitHub issue creation for investigating alternatives to directory move operation in install_pdf_deps.sh during PR #1260 review, specifically to explore cleaner architectural approaches like installing TeX Live directly to /usr/local/texlive/bin instead of using filesystem manipulation post-installation.
Applied to files:
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
🪛 GitHub Actions: Code static analysis
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
[error] 17-17: Hadolint reported DL3002: Last USER should not be root. Hadolint check encountered a warning which caused the pipeline to fail due to non-zero exit code.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (25)
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
- GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
- GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
- GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
🔇 Additional comments (3)
jupyter/utils/install_pandoc.sh (1)
1-23: Architecture-specific Pandoc installation is clean and correct.

The script properly gates on ppc64le, installs from EPEL as established in PR #1179, creates the expected /usr/local/pandoc/bin directory structure for cache mounting, and verifies the installation. Silent exit on non-ppc64le architectures is appropriate for the builder pattern.

jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (2)
92-99: Cache mount and architecture-conditional logic is well-designed.

The RUN instruction with --mount=type=cache,from=pdf-builder,... correctly integrates the builder pattern established in PR #2227 learnings. The conditional branching (ppc64le uses prebuilt, others build on-demand) elegantly balances performance and compatibility. Cache mount coordination via a sentinel-free approach is cleaner than older patterns.
20-22: All verification checks pass; no issues found.

The COPY and RUN chmod operations on lines 20-22 will succeed. Both scripts do create the expected directories:

- install_texlive.sh creates /usr/local/texlive (with a bin directory for ppc64le)
- install_pandoc.sh creates /usr/local/pandoc/bin

The cache mount on line 92 correctly retrieves /usr/local/ from the pdf-builder stage, which contains these directories created by the scripts. For ppc64le systems, the cache mount copies them into the running stage as expected.
jiridanek
left a comment
actually, wait
jiridanek
left a comment
ok, lgtm
The test failed because nbconvert could not find the pandoc binary while converting a notebook to PDF:

```
[NbConvertApp] Converting notebook test.ipybn to pdf
/opt/app-root/lib64/python3.12/site-packages/nbformat/__init__.py:96: MissingIDFieldWarning: Cell is missing an id field, this will become a hard error in future nbformat versions. You may want to use `normalize()` on your notebooks before validations (available since nbformat 5.1.4). Previous versions of nbformat are fixing this issue transparently, and will stop doing so in the future.
  validate(nb)
[NbConvertApp] ERROR | Error while converting 'test.ipybn'
Traceback (most recent call last):
  File "/opt/app-root/lib64/python3.12/site-packages/nbconvert/nbconvertapp.py", line 487, in export_single_notebook
    output, resources = self.exporter.from_filename(
  File "/opt/app-root/lib64/python3.12/site-packages/nbconvert/exporters/templateexporter.py", line 390, in from_filename
    return super().from_filename(filename, resources, **kw)  # type:ignore[return-value]
  File "/opt/app-root/lib64/python3.12/site-packages/nbconvert/exporters/exporter.py", line 201, in from_filename
    return self.from_...
  File ".../nbconvert/templates/latex/document_contents.tex.j2", line 68, in block 'markdowncell'
    ((( cell.source | citation2latex | strip_files_prefix | convert_pandoc('markdown+tex_math_double_backslash', 'json', extra_args=[]) | resolve_references | convert_explicitly_relative_paths | convert_pandoc('json','latex'))))
  File "/opt/app-root/lib64/python3.12/site-packages/nbconvert/filters/pandoc.py", line 36, in convert_pandoc
    return pandoc(source, from_format, to_format, extra_args=extra_args)
  File "/opt/app-root/lib64/python3.12/site-packages/nbconvert/utils/pandoc.py", line 50, in pandoc
    check_pandoc_version()
  File "/opt/app-root/lib64/python3.12/site-packages/nbconvert/utils/pandoc.py", line 98, in check_pandoc_version
    v = get_pandoc_version()
  File "/opt/app-root/lib64/python3.12/site-packages/nbconvert/utils/pandoc.py", line 75, in get_pandoc_version
    raise PandocMissing()
nbconvert.utils.pandoc.PandocMissing: Pandoc wasn't found.
Please check that pandoc is installed:
https://pandoc.org/installing.html
```
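This PandocMissing failure is exactly what nbconvert raises when the pandoc executable is absent from the PATH of the final image. A pre-flight check along these lines could catch the problem before running the export (a sketch, not part of the PR; the helper name is hypothetical):

```shell
#!/usr/bin/env bash
# Sketch: fail fast with a clear message when a required tool is not on PATH.
set -euo pipefail

require_tool() {
  local tool="$1"
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "$tool was not found on PATH; PDF export via nbconvert needs it" >&2
    return 1
  fi
}

# e.g. before invoking `jupyter nbconvert --to pdf ...`:
#   require_tool pandoc
#   require_tool pdflatex
```

Run inside the built image, this immediately distinguishes "pandoc was never installed" from "pandoc is installed but not on the runtime PATH", which is the ambiguity the cache-mount copy logic needs to resolve.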
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@atheo89: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Closing this as the changes are going to be incorporated in PR #2677.
https://issues.redhat.com/browse/RHAIENG-287
Follow-up for: red-hat-data-services#1640
We accidentally removed this functionality in red-hat-data-services#1500
Description
https://issues.redhat.com/browse/RHAIENG-843
How Has This Been Tested?
Self checklist (all need to be checked):
- make test (gmake on macOS) before asking for review
- Changes to Dockerfile.konflux files should be done in odh/notebooks and automatically synced to rhds/notebooks. For Konflux-specific changes, modify Dockerfile.konflux files directly in rhds/notebooks as these require special attention in the downstream repository and flow to the upcoming RHOAI release.

Merge criteria:
Summary by CodeRabbit