
Conversation

@UCaromel
Contributor

@UCaromel UCaromel commented Aug 1, 2025

mhd wip

@UCaromel UCaromel marked this pull request as draft August 1, 2025 15:35
@coderabbitai

coderabbitai bot commented Aug 1, 2025

Important

Review skipped

More than 25% of the files were skipped due to the max files limit. The review is being skipped to prevent a low-quality review.

70 files out of 223 files are above the max files limit of 100. Please upgrade to Pro plan to get higher limits.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.



class MHDModel(object):
    def defaulter(self, input, value):

Check notice

Code scanning / CodeQL

Explicit returns mixed with implicit (fall through) returns (Note)

Mixing implicit and explicit returns may indicate an error as implicit returns always return None.
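For illustration, this is the pattern the notice refers to: defaulter returns a value on the handled paths but falls off the end, implicitly returning None, when self.dim is not 1, 2, or 3. A minimal sketch of one way to make every path explicit (the final raise is an illustrative choice, not something the PR does):

class MHDModel(object):
    def defaulter(self, input, value):
        if input is not None:
            return input                    # explicit return
        if self.dim == 1:
            return lambda x: value + x * 0  # explicit return
        if self.dim == 2:
            return lambda x, y: value
        if self.dim == 3:
            return lambda x, y, z: value
        # without this, an unexpected self.dim would fall through and return None
        raise ValueError(f"unsupported dimension: {self.dim}")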

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 72

🔭 Outside diff range comments (7)
src/amr/tagging/default_tagger_strategy.hpp (3)

138-138: Fix header guard comment to match the actual guard name

The comment doesn't match the actual header guard defined at line 1.

-#endif // DEFAULT_HYBRID_TAGGER_STRATEGY_H
+#endif // DEFAULT_TAGGER_STRATEGY_H

50-68: Remove dead code block

This block will never execute due to the "and false" condition. If this is legacy code, it should be removed to improve code clarity.

-    if constexpr (dimension == 1 and false)
-    {
-        // at interporder 1 we choose not to tag the last patch cell since
-        // the 5 points stencil may go beyond the last ghost node.
-        // for interp order 2 and 3 this is ok
-        for (auto iCell = 0u, ix = start_x; iCell <= end_x; ++ix, ++iCell)
-        {
-            auto crit_by_x = (By(ix + 2) - By(ix)) / (1 + By(ix + 1) - By(ix));
-            auto crit_bz_x = (Bz(ix + 2) - Bz(ix)) / (1 + Bz(ix + 1) - Bz(ix));
-            auto criter    = std::max(crit_by_x, crit_bz_x);
-
-            if (criter > threshold_)
-            {
-                tagsv(iCell) = 1;
-            }
-            else
-                tagsv(iCell) = 0;
-        }
-    }

31-135: Implement 3D tagging or explicitly disable it in DefaultTaggerStrategy::tag

The current DefaultTaggerStrategy<Model>::tag only has branches for dimension == 1 and dimension == 2. For dimension == 3, none of the if constexpr blocks fire, so no tags are produced.

Please update src/amr/tagging/default_tagger_strategy.hpp to either:

  • Add an if constexpr (dimension == 3) block with the appropriate 3D refinement criterion, or
  • Insert a static_assert(dimension != 3, "DefaultTaggerStrategy does not support 3D yet"); (or document that 3D is unsupported)

This will prevent silent no-ops when Model::dimension == 3.

src/amr/load_balancing/concrete_load_balancer_strategy_nppc.hpp (1)

3-4: Update include guard to match renamed class

The include guard still contains "HYBRID" but the class has been renamed to be model-agnostic.

-#ifndef PHARE_CONCRERTE_LOAD_BALANCER_HYBRID_STRATEGY_NPPC_HPP
-#define PHARE_CONCRERTE_LOAD_BALANCER_HYBRID_STRATEGY_NPPC_HPP
+#ifndef PHARE_CONCRETE_LOAD_BALANCER_STRATEGY_NPPC_HPP
+#define PHARE_CONCRETE_LOAD_BALANCER_STRATEGY_NPPC_HPP

Also note the typo fix: "CONCRERTE" → "CONCRETE"

src/amr/tagging/concrete_tagger.hpp (1)

2-3: Update header guard to match new class name

The header guard still references HYBRID_TAGGER but the class has been renamed to ConcreteTagger.

-#ifndef PHARE_HYBRID_TAGGER_HPP
-#define PHARE_HYBRID_TAGGER_HPP
+#ifndef PHARE_CONCRETE_TAGGER_HPP
+#define PHARE_CONCRETE_TAGGER_HPP
src/amr/data/field/coarsening/electric_field_coarsener.hpp (2)

18-31: Update outdated documentation

The class documentation still references "MagneticFieldCoarsener" but the class is now "ElectricFieldCoarsener". This needs to be updated to reflect the current implementation.

 /** @brief This class gives an operator() that performs the coarsening of N fine nodes onto a
  * given coarse node
  *
- * A MagneticFieldCoarsener object is created each time the refine() method of the
+ * An ElectricFieldCoarsener object is created each time the refine() method of the
  * FieldCoarsenOperator is called and its operator() is called for each coarse index.
  * It is the default coarsening policy and used for any field that does not come with
  * specific constraints (such as conserving some property in the coarsening process).
  *
  *
- * This coarsening operation is defined so to conserve the magnetic flux.
- * This is done by assigning to a magnetic field component on a coarse face, the average
+ * This coarsening operation is defined so to conserve the electric flux.
+ * This is done by assigning to an electric field component on a coarse face, the average
  * of the enclosed fine faces
  *
  */

163-168: Critical: Duplicate condition causes incorrect Z direction handling

Line 163 has direction == Direction::Y but it should be direction == Direction::Z. This bug will cause the Z direction case to never execute.

-        if constexpr (direction == Direction::Y)
+        if constexpr (direction == Direction::Z)
         {
             F_B.x += -J.y * coef;
             F_B.y += J.x * coef;
             F_Etot += (J.x * B.y - J.y * B.x) * coef;
         }
🧹 Nitpick comments (51)
pyphare/pyphare/mock_mhd_simulator/global_vars.py (1)

1-2: Add type hints & docstring to clarify the singleton handle

A minimal docstring plus a forward-referenced type annotation improves readability and keeps static type-checkers quiet.

-sim = None
+"""Global handle to the current mock‐MHD `Simulation` instance (acts as a singleton)."""
+
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:            # avoid import cycles
+    from .simulation import Simulation
+
+sim: "Simulation | None" = None
pyphare/pyphare/mock_mhd_simulator/simulation.py (2)

4-14: Singleton implementation has some design concerns.

The singleton pattern implementation works but has several issues:

  1. Using object.__setattr__ is unnecessary - regular self.k = v assignment would be clearer
  2. The error message could be more descriptive
  3. No thread safety considerations
 def __init__(self, **kwargs):
     if global_vars.sim is not None:
-        raise RuntimeError("simulation is already created")
+        raise RuntimeError("Simulation instance already exists. Only one Simulation can be created.")
 
     global_vars.sim = self
 
     for k, v in kwargs.items():
-        object.__setattr__(self, k, v)
+        setattr(self, k, v)

19-20: Consider potential state inconsistency with clear() method.

The clear() method sets the global sim to None but doesn't affect the current instance. This could lead to confusion where an instance exists but global_vars.sim is None.

Consider whether this method is necessary or if it should also reset the instance state.
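A minimal sketch of a fuller reset, assuming clear() should drop both the global handle and the attributes the instance copied from kwargs (a hypothetical variant, not the PR's implementation, reusing the same global_vars module imported by the original file):

class Simulation:
    def __init__(self, **kwargs):
        if global_vars.sim is not None:
            raise RuntimeError("simulation is already created")
        global_vars.sim = self
        for k, v in kwargs.items():
            setattr(self, k, v)

    def clear(self):
        # drop the global handle and the kwargs-derived attributes together
        global_vars.sim = None
        self.__dict__.clear()

Whether this is desirable depends on whether callers keep references to the old instance after clearing.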

pyphare/pyphare/pharein/initialize/hybrid.py (1)

22-24: Redundant negative value validation

The check for nbr_populations() < 0 is redundant because add_size_t already validates that the value is non-negative and raises a RuntimeError if it receives a negative value.

-    if init_model.nbr_populations() < 0:
-        raise RuntimeError("Number of populations cannot be negative")
     add_size_t("simulation/ions/nbrPopulations", init_model.nbr_populations())
src/amr/load_balancing/load_balancer_estimator_mhd.hpp (1)

17-22: Consider more descriptive template parameter name

The template parameter PHARE_T is functional but could be more descriptive. Consider PHARETypes or PHAREConfig to better convey its purpose as a type configuration container.

-template<typename PHARE_T>
+template<typename PHARETypes>
 class LoadBalancerEstimatorMHD : public LoadBalancerEstimator
 {
-    using MHDModel  = typename PHARE_T::MHDModel_t;
+    using MHDModel  = typename PHARETypes::MHDModel_t;
pyphare/pyphare/mock_mhd_simulator/__init__.py (1)

1-1: Remove unused import

Static analysis correctly identifies that clearDict is imported but never used in this module.

-from pyphare.pharein import clearDict, fn_wrapper
+from pyphare.pharein import fn_wrapper
src/amr/data/field/field_variable.hpp (1)

68-68: Cosmetic member variable default change

The default value change from false to true for fineBoundaryRepresentsVariable_ is cosmetic since the constructor initializer list always overrides this value. Consider removing the default value to avoid confusion.

-bool const fineBoundaryRepresentsVariable_ = true;
+bool const fineBoundaryRepresentsVariable_;
src/amr/load_balancing/load_balancer_strategy.hpp (1)

2-3: Update include guard to match renamed class

The include guard still references HYBRID_STRATEGY but the class has been generalized to LoadBalancerStrategy. Update for consistency.

-#ifndef PHARE_LOAD_BALANCER_HYBRID_STRATEGY_HPP
-#define PHARE_LOAD_BALANCER_HYBRID_STRATEGY_HPP
+#ifndef PHARE_LOAD_BALANCER_STRATEGY_HPP
+#define PHARE_LOAD_BALANCER_STRATEGY_HPP
src/amr/messengers/mhd_hybrid_messenger_strategy.hpp (1)

145-149: Reflux method needs implementation.

The new reflux method has an empty implementation. This suggests the MHD-hybrid reflux functionality is not yet implemented.

The reflux method is currently empty. This is likely part of the work-in-progress MHD implementation. Would you like me to help implement the reflux logic or create an issue to track this missing functionality?

src/amr/load_balancing/load_balancer_hybrid_strategy_factory.hpp (1)

14-14: Template parameter name is misleading for a generic factory.

The template parameter HybridModel suggests this factory is specific to hybrid models, but based on the refactoring context and usage of generic LoadBalancerStrategy<HybridModel>, this factory is meant to work with any model type. Consider renaming to Model for clarity.

-template<typename HybridModel>
+template<typename Model>
 class LoadBalancerHybridStrategyFactory
 {
 public:
-    static std::unique_ptr<LoadBalancerStrategy<HybridModel>> create(std::string strat_name,
-                                                                     int const id)
+    static std::unique_ptr<LoadBalancerStrategy<Model>> create(std::string strat_name,
+                                                               int const id)
     {
         if (strat_name == "nppc")
         {
-            return std::make_unique<ConcreteLoadBalancerStrategyNPPC<HybridModel>>(id);
+            return std::make_unique<ConcreteLoadBalancerStrategyNPPC<Model>>(id);
         }
 
         else if (strat_name == "homogeneous")
         {
-            return std::make_unique<ConcreteLoadBalancerStrategyHomogeneous<HybridModel>>(id);
+            return std::make_unique<ConcreteLoadBalancerStrategyHomogeneous<Model>>(id);
         }
pyphare/pyphare/simulator/simulator.py (1)

152-162: Simplify parameter extraction using a configuration dictionary

The repeated getattr calls with False defaults could be simplified for better maintainability.

-            refined_particle_nbr = getattr(
-                self.simulation, "refined_particle_nbr", False
-            )
-            mhd_timestepper = getattr(self.simulation, "mhd_timestepper", False)
-            reconstruction = getattr(self.simulation, "reconstruction", False)
-            limiter = getattr(self.simulation, "limiter", False)
-            riemann = getattr(self.simulation, "riemann", False)
-            hall = getattr(self.simulation, "hall", False)
-            res = getattr(self.simulation, "res", False)
-            hyper_res = getattr(self.simulation, "hyper_res", False)
+            mhd_params = {
+                "refined_particle_nbr": "refined_particle_nbr",
+                "mhd_timestepper": "mhd_timestepper",
+                "reconstruction": "reconstruction",
+                "limiter": "limiter",
+                "riemann": "riemann",
+                "hall": "hall",
+                "res": "res",
+                "hyper_res": "hyper_res"
+            }
+            
+            params = {k: getattr(self.simulation, v, False) 
+                     for k, v in mhd_params.items()}

Then update the function call to use **params.
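Putting the two pieces together, the extraction and the forwarding call could look like this (the callee name is a placeholder for whatever function currently receives these flags):

            params = {
                name: getattr(self.simulation, name, False)
                for name in (
                    "refined_particle_nbr", "mhd_timestepper", "reconstruction",
                    "limiter", "riemann", "hall", "res", "hyper_res",
                )
            }
            make_cpp_simulator(self.simulation, **params)  # hypothetical callee name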

src/amr/data/tensorfield/tensor_field_overlap.hpp (3)

17-25: Remove duplicate documentation comments

There are duplicate documentation blocks. The first block appears to be from FieldOverlap and should be removed.

-    /** \brief FieldOverlap is used to represent a region where data will be communicated betwen two
-     * AMR patches
-     *
-     *  It will contain the exact form of the overlap between two patch for a fieldData with the
-     * same quantity. It will also store any transformation between a source and destination patch.
-     */
     /**
      * @brief The FieldOverlap class
      */

49-49: Use logical AND assignment instead of bitwise for boolean operations

While functionally equivalent for booleans, using logical operators makes the intent clearer.

-                isOverlapEmpty_ &= components_[i]->isOverlapEmpty();
+                isOverlapEmpty_ = isOverlapEmpty_ && components_[i]->isOverlapEmpty();

77-84: Remove redundant underscore prefix from private static method

The method _get_index_for has a leading underscore which is redundant since it's already private. In C++, the underscore prefix convention is typically used for member variables, not methods.

-        auto static _get_index_for(core::Component component)
+        auto static get_index_for(core::Component component)
src/amr/data/field/coarsening/mhd_flux_coarsener.hpp (1)

57-60: Consider using std::transform for index computation

The index computation loop could be more concise.

-        for (auto i = std::size_t{0}; i < dimension; ++i)
-        {
-            fineStartIndex[i] = coarseIndex[i] * this->ratio_;
-        }
+        std::transform(coarseIndex.begin(), coarseIndex.end(), fineStartIndex.begin(),
+                       [this](auto idx) { return idx * this->ratio_; });
pyphare/pyphare/pharein/mhd_model.py (1)

5-20: Improve default lambda functions

The default lambda functions can be simplified and should properly handle unused parameters.

         if self.dim == 1:
-            return lambda x: value + x * 0
+            return lambda x: value
         if self.dim == 2:
-            return lambda x, y: value
+            return lambda x, y: value + 0 * (x + y)  # Use parameters to avoid warnings
         if self.dim == 3:
-            return lambda x, y, z: value
+            return lambda x, y, z: value + 0 * (x + y + z)  # Use parameters to avoid warnings

Alternatively, consider using *args for cleaner syntax:

return lambda *args: value
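A minimal sketch of the *args form; note that, unlike value + x * 0, it returns a plain scalar even when the coordinates are numpy arrays, which only matters if callers rely on that broadcasting:

    def defaulter(self, input, value):
        if input is not None:
            return input
        # one lambda covers 1D, 2D and 3D; coordinates are accepted and ignored
        return lambda *args: value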
src/amr/resources_manager/resources_manager_utilities.hpp (1)

107-119: Consider reducing code duplication

The tensor field and field cases have identical implementations. Consider combining them to reduce duplication.

 auto static make_shared_variable(ResourceView const& view)
 {
-    if constexpr (is_tensor_field_v<ResourceView>)
-        return std::make_shared<typename type::variable_type>(view.name(),
-                                                              view.physicalQuantity());
-    else if constexpr (is_particles_v<ResourceView>)
+    if constexpr (is_particles_v<ResourceView>)
         return std::make_shared<typename type::variable_type>(view.name());
-    else if constexpr (is_field_v<ResourceView>)
+    else if constexpr (is_tensor_field_v<ResourceView> || is_field_v<ResourceView>)
         return std::make_shared<typename type::variable_type>(view.name(),
                                                               view.physicalQuantity());
     else
         throw std::runtime_error("bad condition");
 }
src/amr/level_initializer/mhd_level_initializer.hpp (1)

38-39: Consider validating the cast to MHDModel

While the static_cast is likely safe due to the template design, consider adding a debug assertion to ensure type safety.

         auto& mhdModel = static_cast<MHDModel&>(model);
+        TBOX_ASSERT(dynamic_cast<MHDModel*>(&model) != nullptr);
         auto& level    = amr_types::getLevel(*hierarchy, levelNumber);
src/amr/data/field/refine/mhd_flux_refiner.hpp (1)

58-73: Document the hardcoded refinement ratio assumption

The methods assume a refinement ratio of 2, as noted in the comment. This limitation should be documented in the class documentation or made configurable.

Consider adding a class-level comment or static assertion:

 template<std::size_t dimension>
 class MHDFluxRefiner
 {
+    // This refiner currently only supports a refinement ratio of 2
+    static constexpr int REFINEMENT_RATIO = 2;
 public:
src/amr/messengers/refiner.hpp (1)

35-35: Consider making tensor field rank configurable

The comment indicates the tensor field rank is hardcoded to 2. This might limit future extensibility if higher-rank tensors are needed.

Consider parameterizing the tensor rank or documenting why rank 2 is sufficient for current use cases.

src/amr/data/field/time_interpolate/field_linear_time_interpolate.hpp (1)

91-92: Remove empty comment.

The comment on line 91 is empty and should be removed.

-    }
-
-    //
-}
+    }
+}
src/amr/messengers/messenger.hpp (1)

13-13: Clean up commented code

Consider removing this commented include statement if it's no longer needed, or add a comment explaining why it's preserved.

-// #include "core/data/grid/gridlayout.hpp"
pyphare/pyphare/mock_mhd_simulator/simulator.py (1)

10-30: Consider adding validation for dynamic function name construction

The dynamic construction of the simulator function name through string concatenation could fail at runtime if the expected function doesn't exist in the C++ bindings. Consider adding error handling or validation.

 def make_cpp_simulator(
     dim,
     interp,
     time_integrator,
     reconstruction,
     riemann,
     hall=False,
     resistivity=False,
     hyper_resistivity=False,
     limiter="",
 ):
     import pybindlibs.pyMHD
 
     hall_suffix = "_hall" if hall else ""
     resistivity_suffix = "_res" if resistivity else ""
     hyper_res_suffix = "_hyperres" if hyper_resistivity else ""
     limiter_suffix = f"_{limiter}" if limiter else ""
 
     make_sim = f"make_mhd_mock_simulator_{dim}_{interp}_{time_integrator}_{reconstruction}{limiter_suffix}_{riemann}{hall_suffix}{resistivity_suffix}{hyper_res_suffix}"
-    return getattr(pyMHD(), make_sim)()
+    try:
+        return getattr(pyMHD(), make_sim)()
+    except AttributeError:
+        raise ValueError(f"No simulator found for configuration: {make_sim}")
pyphare/pyphare/pharein/simulation.py (1)

636-636: Simplify kwargs.get() call

Remove the redundant None default parameter as kwargs.get() already returns None by default.

-    model_options = kwargs.get("model_options", None)
+    model_options = kwargs.get("model_options")
src/core/models/mhd_state.hpp (2)

114-131: Consider encapsulation for member variables

All state variables are public, which violates encapsulation principles. Consider making them private with appropriate accessor methods if needed.

Consider restructuring the class to provide better encapsulation:

  • Make member variables private
  • Provide const and non-const accessors as needed
  • This will help maintain invariants and provide better API control

57-95: Optimize string concatenation in constructors

Multiple string concatenations in member initializers could be optimized by using string formatting or pre-computing the concatenated strings.

-    : rho{dict["name"].template to<std::string>() + "_" + "rho", MHDQuantity::Scalar::rho}
+    : rho{dict["name"].template to<std::string>() + "_rho", MHDQuantity::Scalar::rho}

This applies to all similar concatenations in both constructors.

src/amr/solvers/time_integrator/tvdrk2_integrator.hpp (2)

178-179: Document TVDRK2 coefficients

The weights w0_ and w1_ should be documented as the standard TVDRK2 coefficients.

+    // Standard TVDRK2 coefficients for combining stages
     static constexpr auto w0_{0.5};
     static constexpr auto w1_{0.5};

51-68: Consider adding documentation for the TVDRK2 algorithm

The operator() implements the TVDRK2 time integration scheme but lacks documentation explaining the algorithm steps.

Add a comment block explaining the TVDRK2 algorithm:

// TVDRK2 time integration scheme:
// Stage 1: U1 = Euler(Un, tn->tn+1)
// Stage 2: U2 = Euler(U1, tn->tn+1)
// Combine: Un+1 = 0.5*Un + 0.5*U2
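As a standalone illustration of those three steps, independent of the solver classes here, a small sketch applying the scheme to du/dt = -u:

import math

def euler(u, dt, f):
    return u + dt * f(u)

def tvdrk2_step(u, dt, f, w0=0.5, w1=0.5):
    u1 = euler(u, dt, f)      # stage 1: Un -> U1
    u2 = euler(u1, dt, f)     # stage 2: U1 -> U2
    return w0 * u + w1 * u2   # combine: Un+1 = 0.5*Un + 0.5*U2

u, dt = 1.0, 0.1
for _ in range(10):
    u = tvdrk2_step(u, dt, lambda v: -v)
print(u, math.exp(-1.0))      # ~0.3685 vs the exact 0.3679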
src/core/numerics/MHD_equations/MHD_equations.hpp (1)

146-149: Clarify the dual use of resistive_contributions_

The comment mentions the method can be used for both resistivity and hyper-resistivity, but the parameter naming (J vs LaplJ) should be clearer.

Consider renaming the parameter to be more generic or creating separate methods for clarity:

// Generic name since it can be J or LaplJ
void resistive_contributions_(auto const& coef, auto const& B, auto const& current_like, auto& F_B, auto& F_Etot) const
src/amr/data/tensorfield/tensor_field_data_factory.hpp (2)

89-103: Clarify the use of dummy values in getBoxGeometry

The method uses hardcoded dl = 0.01 and origin = 0. While the comment states these are dummy values and only nbCell is useful, having magic numbers in the code is confusing.

Consider using named constants or a more explicit approach:

+        // Dummy values - only nbCell is used by TensorFieldGeometry
+        constexpr double DUMMY_DL = 1.0;  // Actual value doesn't matter
+        constexpr double DUMMY_ORIGIN = 0.0;
+        
         for (std::size_t iDim = 0; iDim < dimension; ++iDim)
         {
-            dl[iDim]     = 0.01;
+            dl[iDim]     = DUMMY_DL;
             nbCell[iDim] = box.numberCells(iDim);
-            origin[iDim] = 0;
+            origin[iDim] = DUMMY_ORIGIN;
         }

35-35: Use move semantics for string parameter

The constructor takes name by const reference but stores it. Consider using move semantics for efficiency.

     TensorFieldDataFactory(bool fineBoundaryRepresentsVariable, bool dataLivesOnPatchBorder,
-                           std::string const& name, tensor_t qty)
+                           std::string name, tensor_t qty)
         : SAMRAI::hier::PatchDataFactory(
               SAMRAI::hier::IntVector{SAMRAI::tbox::Dimension(dimension), n_ghosts})
         , fineBoundaryRepresentsVariable_{fineBoundaryRepresentsVariable}
         , dataLivesOnPatchBorder_{dataLivesOnPatchBorder}
         , quantity_{qty}
-        , name_{name}
+        , name_{std::move(name)}
     {
     }
src/core/mhd/mhd_quantities.hpp (1)

81-81: Remove unnecessary blank lines.

Apply this diff to remove the unnecessary blank lines:

-    if (qty == Vector::rhoV)
-        return {{Scalar::rhoVx, Scalar::rhoVy, Scalar::rhoVz}};
-
-
-    if (qty == Vector::E)
+    if (qty == Vector::rhoV)
+        return {{Scalar::rhoVx, Scalar::rhoVy, Scalar::rhoVz}};
+
+    if (qty == Vector::E)
         return {{Scalar::Ex, Scalar::Ey, Scalar::Ez}};

     if (qty == Vector::J)
         return {{Scalar::Jx, Scalar::Jy, Scalar::Jz}};
-
-
-    if (qty == Vector::VecFlux_x)
+
+    if (qty == Vector::VecFlux_x)
         return {{Scalar::VecFluxX_x, Scalar::VecFluxY_x, Scalar::VecFluxZ_x}};

Also applies to: 88-88

src/core/data/grid/gridlayoutimplyee_mhd.hpp (1)

642-642: Fix incomplete namespace closing comment.

The namespace closing comment is incomplete.

-    }; // namespace core
+    }; // class GridLayoutImplYeeMHD
src/amr/physical_models/mhd_model.hpp (1)

110-110: Consider adding error handling for dynamic_cast.

The dynamic_cast could throw std::bad_cast if the info is not of the expected type. Consider adding validation or using a safer casting approach.

-    auto& MHDInfo = dynamic_cast<amr::MHDMessengerInfo&>(*info);
+    auto* MHDInfoPtr = dynamic_cast<amr::MHDMessengerInfo*>(info.get());
+    if (!MHDInfoPtr) {
+        throw std::runtime_error("Invalid messenger info type for MHD model");
+    }
+    auto& MHDInfo = *MHDInfoPtr;
src/amr/data/field/refine/field_refine_operator.hpp (1)

32-32: Consider using explicit template parameters instead of auto.

The function uses auto for several parameters. Consider using explicit template parameters for better type safety and documentation.

-void refine_field(Dst& destinationField, auto& sourceField, auto& intersectionBox, auto& refiner)
+template<typename Src, typename Box, typename Refiner>
+void refine_field(Dst& destinationField, Src& sourceField, Box& intersectionBox, Refiner& refiner)
src/amr/data/field/coarsening/field_coarsen_operator.hpp (1)

256-256: Fix typo in comment.

-     * selected. Finnaly loop over the indexes in the box, and apply the coarsening defined
+     * selected. Finally loop over the indexes in the box, and apply the coarsening defined
src/amr/data/tensorfield/tensor_field_data.hpp (1)

488-489: Fix incorrect namespace closing comment

The namespace closing comment on line 488 incorrectly states // namespace PHARE when it should be // namespace PHARE::amr.

-}; // namespace PHARE
+}; // namespace PHARE::amr

Also applies to: 519-519

src/core/data/grid/gridlayout.hpp (2)

828-833: Simplify tensor centering implementation

The tensor centering method uses for_N_make_array with a lambda that ignores its parameter. This can be simplified.

-NO_DISCARD constexpr static std::array<std::array<QtyCentering, dimension>, 6>
-centering(HybridQuantity::Tensor hybridQuantity)
-{
-    return for_N_make_array<6>(
-        [](auto) { return ConstArray<QtyCentering, dimension>(QtyCentering::primal); });
-}
+NO_DISCARD constexpr static std::array<std::array<QtyCentering, dimension>, 6>
+centering(HybridQuantity::Tensor hybridQuantity)
+{
+    std::array<std::array<QtyCentering, dimension>, 6> result{};
+    for (auto& arr : result)
+        arr = ConstArray<QtyCentering, dimension>(QtyCentering::primal);
+    return result;
+}

1112-1144: Consider reducing code duplication in next/previous template methods

The next and previous template methods have very similar structure with only the operation differing (+1 vs -1). Consider refactoring to reduce duplication.

Consider creating a helper template that takes the increment as a parameter:

template<auto direction, int increment>
static MeshIndex<dimension> step(MeshIndex<dimension> index)
{
    if constexpr (dimension == 1)
    {
        return make_index(index[0] + increment);
    }
    else if constexpr (dimension == 2)
    {
        if constexpr (direction == Direction::X)
            return make_index(index[0] + increment, index[1]);
        else if constexpr (direction == Direction::Y)
            return make_index(index[0], index[1] + increment);
    }
    else if constexpr (dimension == 3)
    {
        if constexpr (direction == Direction::X)
            return make_index(index[0] + increment, index[1], index[2]);
        else if constexpr (direction == Direction::Y)
            return make_index(index[0], index[1] + increment, index[2]);
        else if constexpr (direction == Direction::Z)
            return make_index(index[0], index[1], index[2] + increment);
    }
}

template<auto direction>
static MeshIndex<dimension> next(MeshIndex<dimension> index)
{
    return step<direction, 1>(index);
}

template<auto direction>
static MeshIndex<dimension> previous(MeshIndex<dimension> index)
{
    return step<direction, -1>(index);
}

Also applies to: 1147-1179

src/amr/data/field/refine/magnetic_refine_patch_strategy.hpp (1)

56-58: Remove or document commented-out code

There's a commented-out alternative return value. Either remove it or add a comment explaining why it's kept.

-// return SAMRAI::hier::IntVector(dim, 0); // hard-coded 0th order base interpolation
 return SAMRAI::hier::IntVector(dim, 1); // hard-coded 0th order base interpolation

Or if the alternative is important to keep:

-// return SAMRAI::hier::IntVector(dim, 0); // hard-coded 0th order base interpolation
+// Alternative: return SAMRAI::hier::IntVector(dim, 0); // for 0th order interpolation
 return SAMRAI::hier::IntVector(dim, 1); // hard-coded 0th order base interpolation
src/amr/solvers/solver_mhd.hpp (3)

63-95: Consider refactoring flux initialization to reduce duplication.

The flux initialization contains repetitive patterns that could be simplified using a helper function or compile-time generation.

Consider creating a helper to generate flux names:

-        : ISolver<AMR_Types>{"MHDSolver"}
-        , fluxes_{{"rho_fx", MHDQuantity::Scalar::ScalarFlux_x},
-                  {"rhoV_fx", MHDQuantity::Vector::VecFlux_x},
-                  {"B_fx", MHDQuantity::Vector::VecFlux_x},
-                  {"Etot_fx", MHDQuantity::Scalar::ScalarFlux_x},
-
-                  {"rho_fy", MHDQuantity::Scalar::ScalarFlux_y},
-                  {"rhoV_fy", MHDQuantity::Vector::VecFlux_y},
-                  {"B_fy", MHDQuantity::Vector::VecFlux_y},
-                  {"Etot_fy", MHDQuantity::Scalar::ScalarFlux_y},
-
-                  {"rho_fz", MHDQuantity::Scalar::ScalarFlux_z},
-                  {"rhoV_fz", MHDQuantity::Vector::VecFlux_z},
-                  {"B_fz", MHDQuantity::Vector::VecFlux_z},
-                  {"Etot_fz", MHDQuantity::Scalar::ScalarFlux_z}}
+        : ISolver<AMR_Types>{"MHDSolver"}
+        , fluxes_{makeFluxes("", MHDQuantity::Scalar::ScalarFlux_x, MHDQuantity::Vector::VecFlux_x,
+                            MHDQuantity::Scalar::ScalarFlux_y, MHDQuantity::Vector::VecFlux_y,
+                            MHDQuantity::Scalar::ScalarFlux_z, MHDQuantity::Vector::VecFlux_z)}

322-330: Consider extracting state copying logic.

The pattern of setting time and copying data is repetitive and could be extracted.

-        mhdModel.resourcesManager->setTime(stateOld_.rho, *patch, currentTime);
-        mhdModel.resourcesManager->setTime(stateOld_.rhoV, *patch, currentTime);
-        mhdModel.resourcesManager->setTime(stateOld_.B, *patch, currentTime);
-        mhdModel.resourcesManager->setTime(stateOld_.Etot, *patch, currentTime);
-
-        stateOld_.rho.copyData(rho);
-        stateOld_.rhoV.copyData(rhoV);
-        stateOld_.B.copyData(B);
-        stateOld_.Etot.copyData(Etot);
+        auto copyStateField = [&](auto& oldField, auto& newField) {
+            mhdModel.resourcesManager->setTime(oldField, *patch, currentTime);
+            oldField.copyData(newField);
+        };
+        
+        copyStateField(stateOld_.rho, rho);
+        copyStateField(stateOld_.rhoV, rhoV);
+        copyStateField(stateOld_.B, B);
+        copyStateField(stateOld_.Etot, Etot);

438-438: Avoid trailing underscore in local variable names.

The variable name Bold_ with trailing underscore is unconventional for local variables.

-        auto& Bold_ = stateOld_.B;
+        auto& Bold = stateOld_.B;
src/amr/messengers/mhd_messenger.hpp (1)

56-56: Consider a more descriptive strategy name.

The current name "MHDModel-MHDModel" is redundant and not very descriptive.

-        static inline std::string const stratName    = "MHDModel-MHDModel";
+        static inline std::string const stratName    = "MHD-MHD-Messenger";
src/amr/messengers/hybrid_hybrid_messenger_strategy.hpp (6)

157-158: Document the purpose of Btemp_

The temporary magnetic field Btemp_ is registered but its purpose is not immediately clear from the code.

Add a comment explaining why this temporary field is needed:

+            // Temporary magnetic field used during regriding to avoid corrupting
+            // the actual magnetic field during intermediate refinement steps
             resourcesManager_->registerResources(Btemp_); // temporary magnetic field for regriding

202-209: Improve error handling with more specific information

The error message could be more helpful by including which specific magnetic field variable IDs are missing.

             if (!b_id)
             {
                 throw std::runtime_error(
-                    "HybridHybridMessengerStrategy: missing magnetic field variable IDs");
+                    "HybridHybridMessengerStrategy: missing magnetic field variable IDs for '"
+                    + hybridInfo->modelMagnetic.name() + "'");
             }

259-274: Consider refactoring repetitive reflux registration code

The reflux algorithm registration for electric field components (Ex, Ey, Ez) is repetitive and could be simplified.

Consider creating a helper function to reduce code duplication:

auto registerRefluxComponent = [&](auto& algo, auto reflux_id, auto fluxsum_id, 
                                  auto& fillPattern, const std::string& component) {
    if (!reflux_id || !fluxsum_id) {
        throw std::runtime_error(
            "HybridHybridMessengerStrategy: missing electric refluxing field variable IDs for " + component);
    }
    algo.registerCoarsen(*reflux_id, *fluxsum_id, electricFieldCoarseningOp_, fillPattern);
};

registerRefluxComponent(RefluxAlgo, ex_reflux_id, ex_fluxsum_id, xVariableFillPattern, "Ex");
registerRefluxComponent(RefluxAlgo, ey_reflux_id, ey_fluxsum_id, yVariableFillPattern, "Ey");
registerRefluxComponent(RefluxAlgo, ez_reflux_id, ez_fluxsum_id, zVariableFillPattern, "Ez");

366-374: Document the rationale for zeroing J field

The code zeros out the J field but the comment doesn't fully explain why this is necessary specifically during regriding.

Enhance the comment to explain the complete rationale:

-            // Jx not used in 1D ampere and construct-init to NaN
-            // therefore J needs to be set to 0 whenever SAMRAI may construct
-            // J patchdata. This occurs on level init (root or refined)
-            // and here in regriding as well.
+            // In 1D simulations, Jx is not used in Ampere's law and is initialized to NaN
+            // by SAMRAI's patch data constructor. We must explicitly zero J to ensure
+            // valid values whenever SAMRAI creates new patch data. This occurs during:
+            // 1. Level initialization (both root and refined levels)
+            // 2. Regriding operations (when patches are created/destroyed)
+            // This prevents NaN propagation in the simulation.

953-985: Inefficient implementation of resetRegridTemporaries

The method sets NaN values on ghost nodes for each component separately, which could be optimized.

Consider combining the component loops:

-                    layout.evalOnGhostBox(Btemp_(core::Component::X),
-                                          [&](auto const&... args) mutable {
-                                              Btemp_(core::Component::X)(args...)
-                                                  = std::numeric_limits<double>::quiet_NaN();
-                                          });
-
-                    layout.evalOnGhostBox(Btemp_(core::Component::Y),
-                                          [&](auto const&... args) mutable {
-                                              Btemp_(core::Component::Y)(args...)
-                                                  = std::numeric_limits<double>::quiet_NaN();
-                                          });
-
-                    layout.evalOnGhostBox(Btemp_(core::Component::Z),
-                                          [&](auto const&... args) mutable {
-                                              Btemp_(core::Component::Z)(args...)
-                                                  = std::numeric_limits<double>::quiet_NaN();
-                                          });
+                    for (auto component : {core::Component::X, core::Component::Y, core::Component::Z})
+                    {
+                        layout.evalOnGhostBox(Btemp_(component),
+                                              [&](auto const&... args) mutable {
+                                                  Btemp_(component)(args...)
+                                                      = std::numeric_limits<double>::quiet_NaN();
+                                              });
+                    }

1199-1200: Inconsistent member variable naming

The new member variables BfieldRegridOp_ should follow the naming convention of other similar members.

Consider renaming for consistency:

-        RefOp_ptr BfieldRegridOp_{std::make_shared<MagneticFieldRegridOp>()};
+        RefOp_ptr magneticFieldRegridOp_{std::make_shared<MagneticFieldRegridOp>()};
src/amr/load_balancing/load_balancer_estimator_hybrid.hpp (1)

17-42: Consider architectural consistency in the generalization effort

The class mixes abstraction levels by being a hybrid-specific estimator (LoadBalancerEstimatorHybrid) that uses general strategies (LoadBalancerStrategy<HybridModel>). If the goal is full generalization, consider:

  1. Making the estimator itself generic: LoadBalancerEstimator<ModelType>
  2. Or keeping hybrid-specific components together until a full refactoring is complete

This mixed approach might lead to confusion about which components are model-specific vs. general.

Comment on lines 4 to 19
    def defaulter(self, input, value):
        if input is not None:
            import inspect

            params = list(inspect.signature(input).parameters.values())
            assert len(params)
            param_per_dim = len(params) == self.dim
            has_vargs = params[0].kind == inspect.Parameter.VAR_POSITIONAL
            assert param_per_dim or has_vargs
            return input
        if self.dim == 1:
            return lambda x: value + x * 0
        if self.dim == 2:
            return lambda x, y: value
        if self.dim == 3:
            return lambda x, y, z: value

🛠️ Refactor suggestion

Improve error handling and lambda implementations in defaulter method.

Issues:

  1. Assertions without messages make debugging difficult
  2. The 1D lambda unnecessarily multiplies by 0
  3. 2D/3D lambdas have unused parameters
 def defaulter(self, input, value):
     if input is not None:
         import inspect
 
         params = list(inspect.signature(input).parameters.values())
-        assert len(params)
+        assert len(params) > 0, "Input function must have at least one parameter"
         param_per_dim = len(params) == self.dim
         has_vargs = params[0].kind == inspect.Parameter.VAR_POSITIONAL
-        assert param_per_dim or has_vargs
+        assert param_per_dim or has_vargs, f"Function must have {self.dim} parameters or accept *args"
         return input
     if self.dim == 1:
-        return lambda x: value + x * 0
+        return lambda x: value
     if self.dim == 2:
-        return lambda x, y: value
+        return lambda x, y: value + 0 * (x + y)  # Use parameters to avoid warnings
     if self.dim == 3:
-        return lambda x, y, z: value
+        return lambda x, y, z: value + 0 * (x + y + z)  # Use parameters to avoid warnings

Alternatively, consider using _ = x pattern or # pylint: disable=unused-argument if the linter complains about unused parameters.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
    def defaulter(self, input, value):
        if input is not None:
            import inspect

            params = list(inspect.signature(input).parameters.values())
            assert len(params) > 0, "Input function must have at least one parameter"
            param_per_dim = len(params) == self.dim
            has_vargs = params[0].kind == inspect.Parameter.VAR_POSITIONAL
            assert param_per_dim or has_vargs, f"Function must have {self.dim} parameters or accept *args"
            return input
        if self.dim == 1:
            return lambda x: value
        if self.dim == 2:
            return lambda x, y: value + 0 * (x + y)  # Use parameters to avoid warnings
        if self.dim == 3:
            return lambda x, y, z: value + 0 * (x + y + z)  # Use parameters to avoid warnings
🤖 Prompt for AI Agents
In pyphare/pyphare/mock_mhd_simulator/mhd_model.py lines 4 to 19, improve the
defaulter method by replacing assertions with explicit error checks that raise
exceptions including descriptive messages for easier debugging. Simplify the 1D
lambda to directly return the constant value without multiplying by zero. For 2D
and 3D lambdas, explicitly mark unused parameters using the `_ = param` pattern
or add comments to disable linter warnings about unused arguments.

    hyper_resistivity=False,
    limiter="",
):
    import pybindlibs.pyMHD

⚠️ Potential issue

Remove unused import

The pybindlibs.pyMHD import is redundant since you're using dynamic import via pyMHD() function.

-    import pybindlibs.pyMHD
-
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
import pybindlibs.pyMHD
🧰 Tools
🪛 Ruff (0.12.2)

21-21: pybindlibs.pyMHD imported but unused

Remove unused import: pybindlibs.pyMHD

(F401)

🤖 Prompt for AI Agents
In pyphare/pyphare/mock_mhd_simulator/simulator.py at line 21, remove the import
statement for pybindlibs.pyMHD because it is unused and redundant due to the
dynamic import via the pyMHD() function.


 def population_in_model(population):
-    return population in [p for p in global_vars.sim.model.populations]
+    return population in [p for p in global_vars.sim.maxwellian_fluid_model.populations]

💡 Verification agent

🧩 Analysis chain

Verify the model attribute change aligns with simulation refactoring.

The change from global_vars.sim.model.populations to global_vars.sim.maxwellian_fluid_model.populations suggests a refactoring of the simulation model structure. This should be consistent across the codebase.


🏁 Script executed:

#!/bin/bash
# Verify that maxwellian_fluid_model attribute exists and is used consistently
echo "Searching for maxwellian_fluid_model usage:"
rg "maxwellian_fluid_model" --type py -A 2 -B 2

echo -e "\nSearching for old model.populations pattern:"
rg "\.model\.populations" --type py -A 2 -B 2

Length of output: 5089


Ensure consistent use of maxwellian_fluid_model.populations

The switch to sim.maxwellian_fluid_model.populations leaves behind at least one reference to the old sim.model.populations. Please update all occurrences—for example:

• tests/functional/tdtagged/td1dtagged.py

- for pop in sim.model.populations:
+ for pop in sim.maxwellian_fluid_model.populations:

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In pyphare/pyphare/pharein/diagnostics.py at line 251, the code uses
sim.maxwellian_fluid_model.populations but there are still references to
sim.model.populations elsewhere. Search the entire codebase, especially in
tests/functional/tdtagged/td1dtagged.py, and replace all instances of
sim.model.populations with sim.maxwellian_fluid_model.populations to ensure
consistent usage.

Comment on lines +22 to +24
    except ...:
        print(f"UNKNOWN Exception caught in pharein/init::get_user_inputs")
        sys.exit(1)

⚠️ Potential issue

Fix invalid except syntax and improve exception handling.

The except ...: handler is invalid: ... (Ellipsis) is not an exception class, so any exception reaching it raises a TypeError at runtime. Also, the f-string on line 23 has no placeholders.

-    except ...:
-        print(f"UNKNOWN Exception caught in pharein/init::get_user_inputs")
+    except:
+        print("UNKNOWN Exception caught in pharein/init::get_user_inputs")

However, consider using a more specific exception hierarchy instead of a bare except: to avoid catching system exceptions like KeyboardInterrupt:

-    except:
-        print("UNKNOWN Exception caught in pharein/init::get_user_inputs")
+    except Exception as e:
+        print(f"UNKNOWN Exception caught in pharein/init::get_user_inputs: {e}")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
    except Exception as e:
        print(f"UNKNOWN Exception caught in pharein/init::get_user_inputs: {e}")
        sys.exit(1)
🧰 Tools
🪛 Ruff (0.12.2)

22-22: except handlers should only be exception classes or tuples of exception classes

(B030)


23-23: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
In pyphare/pyphare/pharein/init.py around lines 22 to 24, replace the invalid
'except ...:' syntax with a more specific exception clause, such as 'except
Exception:', to avoid catching system-exiting exceptions like KeyboardInterrupt.
Also, remove the unnecessary f-string formatting in the print statement since
there are no placeholders, or add relevant exception details to the message for
clarity.

Comment on lines +8 to +9
    addInitFunction = getattr(pp, "addInitFunction{:d}".format(sim.ndim) + "D")


🛠️ Refactor suggestion

Add error handling for dynamic function retrieval

The dynamic function retrieval using getattr could fail if sim.ndim has an unexpected value. Consider adding error handling to provide a clear error message.

-    addInitFunction = getattr(pp, "addInitFunction{:d}".format(sim.ndim) + "D")
+    func_name = "addInitFunction{:d}D".format(sim.ndim)
+    addInitFunction = getattr(pp, func_name, None)
+    if addInitFunction is None:
+        raise RuntimeError(f"Unsupported simulation dimension: {sim.ndim}")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
    func_name = "addInitFunction{:d}D".format(sim.ndim)
    addInitFunction = getattr(pp, func_name, None)
    if addInitFunction is None:
        raise RuntimeError(f"Unsupported simulation dimension: {sim.ndim}")
🤖 Prompt for AI Agents
In pyphare/pyphare/pharein/initialize/hybrid.py around lines 8 to 9, the use of
getattr to dynamically retrieve addInitFunction based on sim.ndim may raise an
AttributeError if the attribute does not exist. Add a try-except block around
the getattr call to catch AttributeError and raise a new error with a clear
message indicating the invalid sim.ndim value and the missing function, to
improve error clarity and debugging.

Comment on lines +108 to +118
                        static_cast<std::size_t>(HybridQuantity::Scalar::count)> const _QtyCentering{
                Bx, By, Bz, Ex, Ey, Ez, Jx, Jy, Jz, Rho,
                Vx, Vy, Vz, P, Mxx, Mxy, Mxz, Myy, Myz, Mzz};

-            return hybridQtyCentering;
+            return _QtyCentering;
         }

         //! says for each HybridQuantity::Quantity whether it is primal or dual, in each direction
         constexpr static std::array<std::array<QtyCentering, NBR_COMPO>,
                                     static_cast<std::size_t>(HybridQuantity::Scalar::count)> const
-            hybridQtyCentering_{initLayoutCentering_()};
+            _QtyCentering_{initLayoutCentering_()};

🛠️ Refactor suggestion

Avoid leading underscores in identifier names.

The renaming from hybridQtyCentering_ to _QtyCentering_ and hybridQuantity to _Quantity introduces leading underscores which is problematic in C++. Names beginning with underscore followed by uppercase letter are reserved, and this naming pattern is generally discouraged.

Consider using a clearer naming convention:

-                       static_cast<std::size_t>(HybridQuantity::Scalar::count)> const _QtyCentering{
+                       static_cast<std::size_t>(HybridQuantity::Scalar::count)> const QtyCentering{
                Bx, By, Bz, Ex, Ey,  Ez,  Jx,  Jy,  Jz,  Rho,
                Vx, Vy, Vz, P,  Mxx, Mxy, Mxz, Myy, Myz, Mzz};

-            return _QtyCentering;
+            return QtyCentering;
         }

         //! says for each HybridQuantity::Quantity whether it is primal or dual, in each direction
         constexpr static std::array<std::array<QtyCentering, NBR_COMPO>,
                                     static_cast<std::size_t>(HybridQuantity::Scalar::count)> const
-            _QtyCentering_{initLayoutCentering_()};
+            QtyCentering_{initLayoutCentering_()};

And similarly for method parameters:

-        centering(HybridQuantity::Scalar _Quantity)
+        centering(HybridQuantity::Scalar quantity)
🤖 Prompt for AI Agents
In src/core/data/grid/gridlayoutimplyee.hpp around lines 108 to 118, the
identifiers _QtyCentering and _QtyCentering_ use leading underscores followed by
uppercase letters, which is reserved and discouraged in C++. Rename these
variables to remove the leading underscore or use a lowercase letter after the
underscore, for example, rename _QtyCentering to qtyCentering or qty_centering,
and similarly rename _QtyCentering_ to qtyCentering_ or qty_centering_. Apply
the same naming convention to any related method parameters to avoid reserved
identifier usage.

Comment on lines +243 to +249
    template<FloatingPoint U = DataType, typename... Nodes>
    explicit NdArrayVector(Nodes... nodes)
        : nCells_{nodes...}
        , data_((... * nodes), static_cast<U>(std::nan("")))
    {
        static_assert(sizeof...(Nodes) == dim);
    }

⚠️ Potential issue

Consider portability of std::nan("") and potential overflow

The use of std::nan("") might not be portable across all platforms. Additionally, the size calculation using fold expression (... * nodes) could overflow for large dimensions.

Consider using std::numeric_limits for better portability:

 template<FloatingPoint U = DataType, typename... Nodes>
 explicit NdArrayVector(Nodes... nodes)
     : nCells_{nodes...}
-    , data_((... * nodes), static_cast<U>(std::nan("")))
+    , data_((... * nodes), std::numeric_limits<U>::quiet_NaN())

For overflow protection, consider:

auto size = std::size_t{1};
((size = std::multiplies<>{}(size, static_cast<std::size_t>(nodes))), ...);
if (size == 0 || size > std::numeric_limits<std::size_t>::max() / sizeof(U))
    throw std::overflow_error("Array size too large");
🤖 Prompt for AI Agents
In src/core/data/ndarray/ndarray_vector.hpp around lines 243 to 249, replace the
use of std::nan("") with std::numeric_limits<U>::quiet_NaN() for better
portability. Also, instead of directly multiplying nodes with a fold expression,
calculate the total size using a loop or fold that checks for overflow by
verifying if the size is zero or exceeds the maximum allowable size before
allocation. Throw an overflow_error if the size is too large to prevent
potential overflow issues.

Comment on lines +251 to +257
    template<FloatingPoint U = DataType>
    explicit NdArrayVector(std::array<std::uint32_t, dim> const& ncells,
                           type const& value = static_cast<U>(std::nan("")))
        : nCells_{ncells}
        , data_(std::accumulate(ncells.begin(), ncells.end(), 1, std::multiplies<int>()), value)
    {
    }

🛠️ Refactor suggestion

Use std::numeric_limits for NaN initialization

For consistency with the previous constructor and better portability, use std::numeric_limits instead of std::nan("").

 template<FloatingPoint U = DataType>
 explicit NdArrayVector(std::array<std::uint32_t, dim> const& ncells,
-                       type const& value = static_cast<U>(std::nan("")))
+                       type const& value = std::numeric_limits<U>::quiet_NaN())
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
    template<FloatingPoint U = DataType>
    explicit NdArrayVector(std::array<std::uint32_t, dim> const& ncells,
                           type const& value = std::numeric_limits<U>::quiet_NaN())
        : nCells_{ncells}
        , data_(std::accumulate(ncells.begin(), ncells.end(), 1, std::multiplies<int>()), value)
    {
    }
🤖 Prompt for AI Agents
In src/core/data/ndarray/ndarray_vector.hpp around lines 251 to 257, replace the
NaN initialization using std::nan("") with std::numeric_limits<U>::quiet_NaN()
to ensure consistency with the other constructor and improve portability. Update
the default value parameter to use std::numeric_limits for NaN instead of the
current static_cast with std::nan("").

Comment on lines 85 to 95
     void setBuffer(std::nullptr_t ptr)
     {
-        return for_N<N, for_N_R_mode::forward_tuple>(
-            [&](auto i) -> auto& { return components_[i]; });
         for_N<N, for_N_R_mode::forward_tuple>([&](auto i) { components_[i].setBuffer(nullptr); });
     }
-    NO_DISCARD auto getCompileTimeResourcesViewList() const

     template<typename Fields>
     void setBuffer(Fields* const fields)
     {
-        return for_N<N, for_N_R_mode::forward_tuple>(
-            [&](auto i) -> auto& { return components_[i]; });
         if (!fields)
             throw std::runtime_error("use other fn");
         for_N<N, for_N_R_mode::forward_tuple>(
             [&](auto i) { components_[i].setBuffer(&(*fields)[i]); });
     }

🛠️ Refactor suggestion

⚠️ Potential issue

Improve error handling and method design in setBuffer.

Issues:

  1. The error message "use other fn" is unclear. It should indicate to use the nullptr_t overload.
  2. No bounds checking when accessing (*fields)[i].
  3. The design is confusing - having one overload for nullptr and another that throws for null pointers.

Consider this improved implementation:

 void setBuffer(std::nullptr_t ptr)
 {
     for_N<N, for_N_R_mode::forward_tuple>([&](auto i) { components_[i].setBuffer(nullptr); });
 }
 
 template<typename Fields>
 void setBuffer(Fields* const fields)
 {
     if (!fields)
-        throw std::runtime_error("use other fn");
+        throw std::runtime_error("setBuffer: null pointer passed. Use setBuffer(nullptr) to clear buffers.");
+    if (fields->size() != N)
+        throw std::runtime_error("setBuffer: fields container size mismatch");
     for_N<N, for_N_R_mode::forward_tuple>(
         [&](auto i) { components_[i].setBuffer(&(*fields)[i]); });
 }

Alternatively, consider a single method that handles both cases:

template<typename Fields>
void setBuffer(Fields* const fields)
{
    if (!fields)
    {
        for_N<N, for_N_R_mode::forward_tuple>([&](auto i) { components_[i].setBuffer(nullptr); });
    }
    else
    {
        if (fields->size() != N)
            throw std::runtime_error("setBuffer: fields container size mismatch");
        for_N<N, for_N_R_mode::forward_tuple>(
            [&](auto i) { components_[i].setBuffer(&(*fields)[i]); });
    }
}
🤖 Prompt for AI Agents
In src/core/data/tensorfield/tensorfield.hpp around lines 85 to 97, improve the
setBuffer method by merging the nullptr_t overload and the pointer overload into
a single template method that handles both null and non-null pointers. Remove
the separate nullptr_t overload, update the error message to be clear and
instructive, and add a bounds check to verify that the fields container size
matches N before accessing elements. If fields is nullptr, set all component
buffers to nullptr; otherwise, perform the size check and set buffers
accordingly.


#include "core/hybrid/hybrid_quantities.hpp"
#include "core/models/physical_state.hpp"
#include "core/numerics/primite_conservative_converter/to_conservative_converter.hpp"

⚠️ Potential issue

Fix typo in include path

"primite" should be "primitive".

-#include "core/numerics/primite_conservative_converter/to_conservative_converter.hpp"
+#include "core/numerics/primitive_conservative_converter/to_conservative_converter.hpp"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
#include "core/numerics/primite_conservative_converter/to_conservative_converter.hpp"
#include "core/numerics/primitive_conservative_converter/to_conservative_converter.hpp"
🤖 Prompt for AI Agents
In src/core/models/mhd_state.hpp at line 4, there is a typo in the include path
"primite_conservative_converter". Correct the typo by changing "primite" to
"primitive" so the include reads
"core/numerics/primitive_conservative_converter/to_conservative_converter.hpp".


@coderabbitai coderabbitai bot left a comment


Review continued from previous batch...

Comment on lines +102 to +104
auto& overlap = dynamic_cast<FieldOverlap const&>(*basic_overlap);
auto destinationBoxes = overlap.getDestinationBoxContainer();
auto& casted = dynamic_cast<FieldGeometryBase<dimension> const&>(*geom);

⚠️ Potential issue

Add null checks after dynamic_cast operations.

The dynamic_cast operations could fail and should be checked.

-        auto& overlap         = dynamic_cast<FieldOverlap const&>(*basic_overlap);
-        auto destinationBoxes = overlap.getDestinationBoxContainer();
-        auto& casted          = dynamic_cast<FieldGeometryBase<dimension> const&>(*geom);
+        auto* overlapPtr = dynamic_cast<FieldOverlap const*>(basic_overlap.get());
+        auto* castedPtr = dynamic_cast<FieldGeometryBase<dimension> const*>(geom.get());
+        if (!overlapPtr || !castedPtr) {
+            throw std::runtime_error("Invalid type in computeFillBoxesOverlap");
+        }
+        auto& overlap = *overlapPtr;
+        auto destinationBoxes = overlap.getDestinationBoxContainer();
+        auto& casted = *castedPtr;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
auto& overlap = dynamic_cast<FieldOverlap const&>(*basic_overlap);
auto destinationBoxes = overlap.getDestinationBoxContainer();
auto& casted = dynamic_cast<FieldGeometryBase<dimension> const&>(*geom);
auto* overlapPtr = dynamic_cast<FieldOverlap const*>(basic_overlap.get());
auto* castedPtr = dynamic_cast<FieldGeometryBase<dimension> const*>(geom.get());
if (!overlapPtr || !castedPtr) {
throw std::runtime_error("Invalid type in computeFillBoxesOverlap");
}
auto& overlap = *overlapPtr;
auto destinationBoxes = overlap.getDestinationBoxContainer();
auto& casted = *castedPtr;
🤖 Prompt for AI Agents
In src/amr/data/field/field_variable_fill_pattern.hpp around lines 102 to 104,
the dynamic_cast operations may fail and return null pointers. Add null checks
after each dynamic_cast to verify the cast succeeded before using the resulting
references or pointers. If a cast fails, handle the error appropriately, such as
by throwing an exception or returning early, to prevent undefined behavior.

Comment on lines +179 to +188
auto& casted = dynamic_cast<TensorFieldGeometryBase<dimension> const&>(*geom);
auto& toverlap = dynamic_cast<TensorFieldOverlap<rank_> const&>(*basic_overlap);

⚠️ Potential issue

Add safety checks for dynamic_cast operations.

Dynamic casts should be validated before dereferencing.

-        auto& casted   = dynamic_cast<TensorFieldGeometryBase<dimension> const&>(*geom);
-        auto& toverlap = dynamic_cast<TensorFieldOverlap<rank_> const&>(*basic_overlap);
+        auto* castedPtr = dynamic_cast<TensorFieldGeometryBase<dimension> const*>(geom.get());
+        auto* toverlapPtr = dynamic_cast<TensorFieldOverlap<rank_> const*>(basic_overlap.get());
+        if (!castedPtr || !toverlapPtr) {
+            throw std::runtime_error("Invalid type in TensorFieldFillPattern::computeFillBoxesOverlap");
+        }
+        auto& casted = *castedPtr;
+        auto& toverlap = *toverlapPtr;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
auto& casted = dynamic_cast<TensorFieldGeometryBase<dimension> const&>(*geom);
auto& toverlap = dynamic_cast<TensorFieldOverlap<rank_> const&>(*basic_overlap);
// Safely cast geometry and overlap, then validate
auto* castedPtr = dynamic_cast<TensorFieldGeometryBase<dimension> const*>(geom.get());
auto* toverlapPtr = dynamic_cast<TensorFieldOverlap<rank_> const*>(basic_overlap.get());
if (!castedPtr || !toverlapPtr) {
throw std::runtime_error(
"Invalid type in TensorFieldFillPattern::computeFillBoxesOverlap"
);
}
auto& casted = *castedPtr;
auto& toverlap = *toverlapPtr;
🤖 Prompt for AI Agents
In src/amr/data/field/field_variable_fill_pattern.hpp around lines 179 to 180,
the dynamic_cast results are dereferenced without checking if the cast
succeeded, which can cause undefined behavior if the cast fails. Modify the code
to store the results of dynamic_cast in pointers, check if these pointers are
not null, and handle the failure case appropriately before dereferencing.

Comment on lines 189 to 190
TensorFieldRefineOperator(bool node_only = false)
: SAMRAI::hier::RefineOperator{"FieldRefineOperator"}

⚠️ Potential issue

Fix operator name in constructor.

The constructor uses "FieldRefineOperator" but this is TensorFieldRefineOperator.

-        : SAMRAI::hier::RefineOperator{"FieldRefineOperator"}
+        : SAMRAI::hier::RefineOperator{"TensorFieldRefineOperator"}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
TensorFieldRefineOperator(bool node_only = false)
: SAMRAI::hier::RefineOperator{"FieldRefineOperator"}
TensorFieldRefineOperator(bool node_only = false)
: SAMRAI::hier::RefineOperator{"TensorFieldRefineOperator"}
🤖 Prompt for AI Agents
In src/amr/data/field/refine/field_refine_operator.hpp at lines 189-190, the
constructor of TensorFieldRefineOperator incorrectly passes
"FieldRefineOperator" as the operator name. Change the string to
"TensorFieldRefineOperator" to correctly reflect the class name in the
constructor initializer.

Comment on lines +12 to +13
#include "core/data/grid/gridlayoutdefs.hpp"
#include "core/data/tensorfield/tensorfield.hpp"

⚠️ Potential issue

Remove duplicate include statements

The following headers are included multiple times:

  • core/data/grid/gridlayoutdefs.hpp (lines 7 and 13)
  • amr/data/field/field_overlap.hpp (line 16, after field-related includes)
  • amr/resources_manager/amr_utils.hpp (lines 6 and 17)
 #include "amr/data/field/field_geometry.hpp"
 #include "amr/data/tensorfield/tensor_field_overlap.hpp"
 #include "amr/resources_manager/amr_utils.hpp"
 #include "core/data/grid/gridlayoutdefs.hpp"
 #include "core/def/phare_mpi.hpp" // IWYU pragma: keep
 
 #include "core/logger.hpp"
 #include "core/data/field/field_box.hpp"
-#include "core/data/grid/gridlayoutdefs.hpp"
 #include "core/data/tensorfield/tensorfield.hpp"
 
 
-#include "amr/data/field/field_overlap.hpp"
-#include "amr/resources_manager/amr_utils.hpp"
 #include "amr/data/tensorfield/tensor_field_geometry.hpp"

Also applies to: 16-18

🤖 Prompt for AI Agents
In src/amr/data/tensorfield/tensor_field_data.hpp around lines 6 to 18, there
are duplicate include statements for core/data/grid/gridlayoutdefs.hpp,
amr/data/field/field_overlap.hpp, and amr/resources_manager/amr_utils.hpp.
Remove the repeated includes so each header is included only once to clean up
the code and avoid redundancy.

Comment on lines 29 to 36
// We use another class here so that we can specialize specifics function: copy , pack , unpack
// on the dimension and we don't want to loose non specialized function related to SAMRAI
// interface
template<typename GridLayoutT, std::size_t dim, typename Grid_t, typename PhysicalQuantity>
class TensorFieldDataInternals
{
};


⚠️ Potential issue

Remove unused class template or implement its intended functionality

The TensorFieldDataInternals class template is defined but never used. The comment suggests it's intended for dimension-specific specializations, but no specializations are provided.

Either remove this unused code or implement the intended specializations:

-// We use another class here so that we can specialize specifics function: copy , pack , unpack
-// on the dimension and we don't want to loose non specialized function related to SAMRAI
-// interface
-template<typename GridLayoutT, std::size_t dim, typename Grid_t, typename PhysicalQuantity>
-class TensorFieldDataInternals
-{
-};
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// We use another class here so that we can specialize specifics function: copy , pack , unpack
// on the dimension and we don't want to loose non specialized function related to SAMRAI
// interface
template<typename GridLayoutT, std::size_t dim, typename Grid_t, typename PhysicalQuantity>
class TensorFieldDataInternals
{
};
🤖 Prompt for AI Agents
In src/amr/data/tensorfield/tensor_field_data.hpp around lines 29 to 36, the
class template TensorFieldDataInternals is defined but unused and lacks the
intended dimension-specific specializations. To fix this, either remove the
entire TensorFieldDataInternals class template if it is unnecessary, or
implement the required specializations for different dimensions as indicated by
the comment, ensuring the class provides the specialized copy, pack, and unpack
functions for each dimension.

Comment on lines 462 to 463
auto& fromCoarser = dynamic_cast<Messenger&>(fromCoarserMessenger);
auto level = hierarchy.getPatchLevel(levelNumber);

⚠️ Potential issue

Add null checks for dynamic_cast operations.

Dynamic casts should validate results before use.

-    auto& modelView   = dynamic_cast<ModelViews_t&>(view);
-    auto& fromCoarser = dynamic_cast<Messenger&>(fromCoarserMessenger);
+    auto* modelViewPtr = dynamic_cast<ModelViews_t*>(&view);
+    auto* fromCoarserPtr = dynamic_cast<Messenger*>(&fromCoarserMessenger);
+    if (!modelViewPtr || !fromCoarserPtr) {
+        throw std::runtime_error("Invalid type in advanceLevel");
+    }
+    auto& modelView = *modelViewPtr;
+    auto& fromCoarser = *fromCoarserPtr;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
auto& modelView = dynamic_cast<ModelViews_t&>(view);
auto& fromCoarser = dynamic_cast<Messenger&>(fromCoarserMessenger);
auto* modelViewPtr = dynamic_cast<ModelViews_t*>(&view);
auto* fromCoarserPtr = dynamic_cast<Messenger*>(&fromCoarserMessenger);
if (!modelViewPtr || !fromCoarserPtr) {
throw std::runtime_error("Invalid type in advanceLevel");
}
auto& modelView = *modelViewPtr;
auto& fromCoarser = *fromCoarserPtr;
🤖 Prompt for AI Agents
In src/amr/solvers/solver_mhd.hpp around lines 462 to 463, the dynamic_cast
operations for modelView and fromCoarser need null checks to ensure the casts
succeeded before using the references. Modify the code to use pointers for
dynamic_cast, check if the result is not null, and handle the error or throw an
exception if the cast fails to prevent undefined behavior.

void SolverPPC<HybridModel, AMR_Types>::resetFluxSum(IPhysicalModel_t& model,
SAMRAI::hier::PatchLevel& level)
{
PHARE_LOG_SCOPE(1, "SolverPPC::accumulateFluxSum");

⚠️ Potential issue

Fix incorrect log message.

The log message says "accumulateFluxSum" but this is the resetFluxSum method.

Apply this diff to fix the log message:

-    PHARE_LOG_SCOPE(1, "SolverPPC::accumulateFluxSum");
+    PHARE_LOG_SCOPE(1, "SolverPPC::resetFluxSum");
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
PHARE_LOG_SCOPE(1, "SolverPPC::accumulateFluxSum");
PHARE_LOG_SCOPE(1, "SolverPPC::resetFluxSum");
🤖 Prompt for AI Agents
In src/amr/solvers/solver_ppc.hpp at line 314, the log message incorrectly
states "accumulateFluxSum" while it should reflect the current method name
"resetFluxSum". Update the PHARE_LOG_SCOPE call to use "SolverPPC::resetFluxSum"
as the log message to accurately represent the method being executed.

Comment on lines +70 to +99
NO_DISCARD static constexpr std::array<Scalar, 3> componentsQuantities(Vector qty)
{
if (qty == Vector::V)
return {{Scalar::Vx, Scalar::Vy, Scalar::Vz}};

if (qty == Vector::B)
return {{Scalar::Bx, Scalar::By, Scalar::Bz}};

if (qty == Vector::rhoV)
return {{Scalar::rhoVx, Scalar::rhoVy, Scalar::rhoVz}};


if (qty == Vector::E)
return {{Scalar::Ex, Scalar::Ey, Scalar::Ez}};

if (qty == Vector::J)
return {{Scalar::Jx, Scalar::Jy, Scalar::Jz}};


if (qty == Vector::VecFlux_x)
return {{Scalar::VecFluxX_x, Scalar::VecFluxY_x, Scalar::VecFluxZ_x}};

if (qty == Vector::VecFlux_y)
return {{Scalar::VecFluxX_y, Scalar::VecFluxY_y, Scalar::VecFluxZ_y}};

if (qty == Vector::VecFlux_z)
return {{Scalar::VecFluxX_z, Scalar::VecFluxY_z, Scalar::VecFluxZ_z}};

throw std::runtime_error("Error - invalid Vector");
}

🛠️ Refactor suggestion

Refactor if-chain to switch statement for better performance and readability.

The chain of if statements can be replaced with a switch statement, which is more efficient and cleaner for enum comparisons.

Apply this diff to use a switch statement:

 NO_DISCARD static constexpr std::array<Scalar, 3> componentsQuantities(Vector qty)
 {
-    if (qty == Vector::V)
-        return {{Scalar::Vx, Scalar::Vy, Scalar::Vz}};
-
-    if (qty == Vector::B)
-        return {{Scalar::Bx, Scalar::By, Scalar::Bz}};
-
-    if (qty == Vector::rhoV)
-        return {{Scalar::rhoVx, Scalar::rhoVy, Scalar::rhoVz}};
-
-
-    if (qty == Vector::E)
-        return {{Scalar::Ex, Scalar::Ey, Scalar::Ez}};
-
-    if (qty == Vector::J)
-        return {{Scalar::Jx, Scalar::Jy, Scalar::Jz}};
-
-
-    if (qty == Vector::VecFlux_x)
-        return {{Scalar::VecFluxX_x, Scalar::VecFluxY_x, Scalar::VecFluxZ_x}};
-
-    if (qty == Vector::VecFlux_y)
-        return {{Scalar::VecFluxX_y, Scalar::VecFluxY_y, Scalar::VecFluxZ_y}};
-
-    if (qty == Vector::VecFlux_z)
-        return {{Scalar::VecFluxX_z, Scalar::VecFluxY_z, Scalar::VecFluxZ_z}};
-
-    throw std::runtime_error("Error - invalid Vector");
+    switch (qty)
+    {
+        case Vector::V:
+            return {{Scalar::Vx, Scalar::Vy, Scalar::Vz}};
+        case Vector::B:
+            return {{Scalar::Bx, Scalar::By, Scalar::Bz}};
+        case Vector::rhoV:
+            return {{Scalar::rhoVx, Scalar::rhoVy, Scalar::rhoVz}};
+        case Vector::E:
+            return {{Scalar::Ex, Scalar::Ey, Scalar::Ez}};
+        case Vector::J:
+            return {{Scalar::Jx, Scalar::Jy, Scalar::Jz}};
+        case Vector::VecFlux_x:
+            return {{Scalar::VecFluxX_x, Scalar::VecFluxY_x, Scalar::VecFluxZ_x}};
+        case Vector::VecFlux_y:
+            return {{Scalar::VecFluxX_y, Scalar::VecFluxY_y, Scalar::VecFluxZ_y}};
+        case Vector::VecFlux_z:
+            return {{Scalar::VecFluxX_z, Scalar::VecFluxY_z, Scalar::VecFluxZ_z}};
+        default:
+            throw std::runtime_error("Error - invalid Vector");
+    }
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
NO_DISCARD static constexpr std::array<Scalar, 3> componentsQuantities(Vector qty)
{
if (qty == Vector::V)
return {{Scalar::Vx, Scalar::Vy, Scalar::Vz}};
if (qty == Vector::B)
return {{Scalar::Bx, Scalar::By, Scalar::Bz}};
if (qty == Vector::rhoV)
return {{Scalar::rhoVx, Scalar::rhoVy, Scalar::rhoVz}};
if (qty == Vector::E)
return {{Scalar::Ex, Scalar::Ey, Scalar::Ez}};
if (qty == Vector::J)
return {{Scalar::Jx, Scalar::Jy, Scalar::Jz}};
if (qty == Vector::VecFlux_x)
return {{Scalar::VecFluxX_x, Scalar::VecFluxY_x, Scalar::VecFluxZ_x}};
if (qty == Vector::VecFlux_y)
return {{Scalar::VecFluxX_y, Scalar::VecFluxY_y, Scalar::VecFluxZ_y}};
if (qty == Vector::VecFlux_z)
return {{Scalar::VecFluxX_z, Scalar::VecFluxY_z, Scalar::VecFluxZ_z}};
throw std::runtime_error("Error - invalid Vector");
}
NO_DISCARD static constexpr std::array<Scalar, 3> componentsQuantities(Vector qty)
{
switch (qty)
{
case Vector::V:
return {{Scalar::Vx, Scalar::Vy, Scalar::Vz}};
case Vector::B:
return {{Scalar::Bx, Scalar::By, Scalar::Bz}};
case Vector::rhoV:
return {{Scalar::rhoVx, Scalar::rhoVy, Scalar::rhoVz}};
case Vector::E:
return {{Scalar::Ex, Scalar::Ey, Scalar::Ez}};
case Vector::J:
return {{Scalar::Jx, Scalar::Jy, Scalar::Jz}};
case Vector::VecFlux_x:
return {{Scalar::VecFluxX_x, Scalar::VecFluxY_x, Scalar::VecFluxZ_x}};
case Vector::VecFlux_y:
return {{Scalar::VecFluxX_y, Scalar::VecFluxY_y, Scalar::VecFluxZ_y}};
case Vector::VecFlux_z:
return {{Scalar::VecFluxX_z, Scalar::VecFluxY_z, Scalar::VecFluxZ_z}};
default:
throw std::runtime_error("Error - invalid Vector");
}
}
🤖 Prompt for AI Agents
In src/core/mhd/mhd_quantities.hpp between lines 70 and 99, replace the if-chain
that compares the Vector enum with a switch statement on the qty parameter. This
will improve performance and readability by directly switching on the enum
values and returning the corresponding array for each case. Ensure to include a
default case that throws the runtime error for invalid Vector values.

Comment on lines +92 to +105
template<auto direction>
auto compute(auto const& u, auto const& J, auto const& LaplJ) const
{
PerIndex f = compute<direction>(u);

if constexpr (Hall)
hall_contribution_<direction>(u.rho, u.B, J, f.B, f.P);
if constexpr (Resistivity)
resistive_contributions_<direction>(eta_, u.B, J, f.B, f.P);

resistive_contributions_<direction>(nu_, u.B, J, f.B, f.P);

return f;
}

⚠️ Potential issue

Missing HyperResistivity template check

The third compute overload always applies hyper-resistivity contributions regardless of the HyperResistivity template parameter.

     template<auto direction>
     auto compute(auto const& u, auto const& J, auto const& LaplJ) const
     {
         PerIndex f = compute<direction>(u);
 
         if constexpr (Hall)
             hall_contribution_<direction>(u.rho, u.B, J, f.B, f.P);
         if constexpr (Resistivity)
             resistive_contributions_<direction>(eta_, u.B, J, f.B, f.P);
 
-        resistive_contributions_<direction>(nu_, u.B, J, f.B, f.P);
+        if constexpr (HyperResistivity)
+            resistive_contributions_<direction>(nu_, u.B, LaplJ, f.B, f.P);
 
         return f;
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
template<auto direction>
auto compute(auto const& u, auto const& J, auto const& LaplJ) const
{
PerIndex f = compute<direction>(u);
if constexpr (Hall)
hall_contribution_<direction>(u.rho, u.B, J, f.B, f.P);
if constexpr (Resistivity)
resistive_contributions_<direction>(eta_, u.B, J, f.B, f.P);
resistive_contributions_<direction>(nu_, u.B, J, f.B, f.P);
return f;
}
template<auto direction>
auto compute(auto const& u, auto const& J, auto const& LaplJ) const
{
PerIndex f = compute<direction>(u);
if constexpr (Hall)
hall_contribution_<direction>(u.rho, u.B, J, f.B, f.P);
if constexpr (Resistivity)
resistive_contributions_<direction>(eta_, u.B, J, f.B, f.P);
if constexpr (HyperResistivity)
resistive_contributions_<direction>(nu_, u.B, LaplJ, f.B, f.P);
return f;
}
🤖 Prompt for AI Agents
In src/core/numerics/MHD_equations/MHD_equations.hpp around lines 92 to 105, the
compute function always applies hyper-resistivity contributions without checking
the HyperResistivity template parameter. Add an if constexpr check for
HyperResistivity before calling resistive_contributions_ with nu_ to ensure it
only applies when HyperResistivity is enabled.

Comment on lines 163 to 168
if constexpr (direction == Direction::Y)
{
F_B.x += -J.y * coef;
F_B.y += J.x * coef;
F_Etot += (J.x * B.y - J.y * B.x) * coef;
}

⚠️ Potential issue

Critical: Duplicate condition in resistive contributions

Same bug as in electric_field_coarsener.hpp: Line 163 has direction == Direction::Y but should be direction == Direction::Z.

-        if constexpr (direction == Direction::Y)
+        if constexpr (direction == Direction::Z)
         {
             F_B.x += -J.y * coef;
             F_B.y += J.x * coef;
             F_Etot += (J.x * B.y - J.y * B.x) * coef;
         }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if constexpr (direction == Direction::Y)
{
F_B.x += -J.y * coef;
F_B.y += J.x * coef;
F_Etot += (J.x * B.y - J.y * B.x) * coef;
}
if constexpr (direction == Direction::Z)
{
F_B.x += -J.y * coef;
F_B.y += J.x * coef;
F_Etot += (J.x * B.y - J.y * B.x) * coef;
}
🤖 Prompt for AI Agents
In src/core/numerics/MHD_equations/MHD_equations.hpp around lines 163 to 168,
the condition checking if direction equals Direction::Y is duplicated but should
instead check for Direction::Z. Change the condition from direction ==
Direction::Y to direction == Direction::Z to correctly handle the resistive
contributions for the Z direction.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

♻️ Duplicate comments (23)
src/amr/solvers/time_integrator/euler.hpp (2)

25-30: Add error handling for dictionary access

The constructor directly accesses dictionary entries without checking if they exist. This could throw exceptions if the keys are missing.

Add validation before accessing dictionary entries:

 Euler(PHARE::initializer::PHAREDict const& dict)
-    : fvm_{dict["fv_method"]}
-    , to_primitive_{dict["to_primitive"]}
-    , to_conservative_{dict["to_conservative"]}
+    : fvm_{dict.contains("fv_method") ? dict["fv_method"] : throw std::runtime_error("Missing 'fv_method' in dictionary")}
+    , to_primitive_{dict.contains("to_primitive") ? dict["to_primitive"] : throw std::runtime_error("Missing 'to_primitive' in dictionary")}
+    , to_conservative_{dict.contains("to_conservative") ? dict["to_conservative"] : throw std::runtime_error("Missing 'to_conservative' in dictionary")}

Alternatively, add a validation method that checks all required keys before initialization.
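A minimal sketch of such a helper, using a std::map as a stand-in for PHAREDict so the snippet stays self-contained (the key names and the contains-style lookup mirror the diff above and are assumptions):

#include <initializer_list>
#include <map>
#include <stdexcept>
#include <string>

using Dict = std::map<std::string, double>; // stand-in for PHARE::initializer::PHAREDict

// fail with a descriptive message instead of throwing from inside a
// member-initializer list when a key is missing
inline void validateRequiredKeys(Dict const& dict, std::initializer_list<std::string> keys)
{
    for (auto const& key : keys)
        if (dict.find(key) == dict.end())
            throw std::runtime_error("Euler: missing required dictionary key '" + key + "'");
}

// usage (hypothetical): validateRequiredKeys(dict, {"fv_method", "to_primitive", "to_conservative"});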


32-65: Validate time parameters and consider refactoring

The method should validate that newTime > currentTime to avoid negative or zero timesteps. Additionally, the method is quite long and could benefit from being split into smaller, more focused methods.

Add time validation at the beginning:

 void operator()(MHDModel& model, auto& state, auto& statenew, auto& fluxes, auto& bc,
                 level_t& level, double const currentTime, double const newTime)
 {
+    if (newTime <= currentTime)
+    {
+        throw std::invalid_argument("newTime must be greater than currentTime");
+    }
     double const dt = newTime - currentTime;

Consider extracting the magnetic flux ghost filling logic (lines 48-58) into a separate method for better readability:

private:
    void fillMagneticFluxGhosts(auto& bc, auto& fluxes, int levelNumber, double time)
    {
        bc.fillMagneticFluxesXGhosts(fluxes.B_fx, levelNumber, time);
        
        if constexpr (MHDModel::dimension >= 2)
        {
            bc.fillMagneticFluxesYGhosts(fluxes.B_fy, levelNumber, time);
            
            if constexpr (MHDModel::dimension == 3)
            {
                bc.fillMagneticFluxesZGhosts(fluxes.B_fz, levelNumber, time);
            }
        }
    }
src/amr/solvers/time_integrator/euler_integrator.hpp (2)

1-2: Fix header guard naming to match file location

The header guard uses PHARE_CORE_NUMERICS_EULER_INTEGRATOR_HPP but the file is located in src/amr/solvers/time_integrator/.

-#ifndef PHARE_CORE_NUMERICS_EULER_INTEGRATOR_HPP
-#define PHARE_CORE_NUMERICS_EULER_INTEGRATOR_HPP
+#ifndef PHARE_AMR_SOLVERS_TIME_INTEGRATOR_EULER_INTEGRATOR_HPP
+#define PHARE_AMR_SOLVERS_TIME_INTEGRATOR_EULER_INTEGRATOR_HPP

92-102: Reduce code duplication for electric field component handling

The reset and accumulation logic for E field components is repetitive. Consider extracting a helper lambda or template function.

+        auto resetEComponent = [&](auto component) {
+            layout.evalOnGhostBox(butcherE_(component), [&](auto const&... args) mutable {
+                butcherE_(component)(args...) = 0.0;
+            });
+        };
+        resetEComponent(core::Component::X);
+        resetEComponent(core::Component::Y);
+        resetEComponent(core::Component::Z);
-            layout.evalOnGhostBox(butcherE_(core::Component::X), [&](auto const&... args) mutable {
-                butcherE_(core::Component::X)(args...) = 0.0;
-            });
-
-            layout.evalOnGhostBox(butcherE_(core::Component::Y), [&](auto const&... args) mutable {
-                butcherE_(core::Component::Y)(args...) = 0.0;
-            });
-
-            layout.evalOnGhostBox(butcherE_(core::Component::Z), [&](auto const&... args) mutable {
-                butcherE_(core::Component::Z)(args...) = 0.0;
-            });

Also applies to: 123-135

src/amr/data/field/refine/mhd_flux_refiner.hpp (1)

211-211: Fix incorrect header guard comment

The comment references PHARE_ELECTRIC_FIELD_REFINER_HPP but this file defines PHARE_MHD_FLUX_REFINER_HPP.

-#endif // PHARE_ELECTRIC_FIELD_REFINER_HPP
+#endif // PHARE_MHD_FLUX_REFINER_HPP
src/amr/physical_models/mhd_model.hpp (1)

90-90: Document the purpose of the tags member

The tags map holds, for each patch (keyed by its global-ID string), a shared pointer to an integer NdArray used by the AMR tagging machinery and diagnostics. Please add a brief comment explaining this.

+    /// Map from patch global-ID (as string) to a shared NdArray of per-cell integer tags,
+    /// used for AMR tagging/refinement indicators and diagnostic output.
     std::unordered_map<std::string, std::shared_ptr<core::NdArrayVector<dimension, int>>> tags;
src/amr/data/field/refine/field_refine_operator.hpp (1)

191-192: Fix operator name in constructor

The constructor uses "FieldRefineOperator" but this is TensorFieldRefineOperator.

-        : SAMRAI::hier::RefineOperator{"FieldRefineOperator"}
+        : SAMRAI::hier::RefineOperator{"TensorFieldRefineOperator"}
src/amr/solvers/solver_mhd.hpp (6)

147-149: Fix logic error in TimeSetter::operator().

The loop variable state appears to be incorrectly used. The code iterates over views but views is not a container.


165-210: Reduce code duplication in resource registration.

The resource registration follows a repetitive pattern that could be simplified.


276-276: Add null check after dynamic_cast.

The dynamic_cast result should be checked before use to prevent potential null pointer dereference.


356-369: Refactor repetitive electric field component handling.

The code for each electric field component is identical except for the component index.


409-410: Add safety checks for dynamic_cast operations.

Dynamic casts should be checked for null results.


441-442: Add null checks for dynamic_cast operations.

Dynamic casts should validate results before use.

src/amr/data/field/field_variable_fill_pattern.hpp (2)

102-104: Add null checks after dynamic_cast operations.

The dynamic_cast operations could fail and should be checked.


178-179: Add safety checks for dynamic_cast operations.

Dynamic casts should be validated before dereferencing.

src/amr/messengers/mhd_messenger.hpp (5)

91-277: Refactor registerQuantities to reduce complexity.

This method is over 180 lines long with repetitive patterns. Consider extracting helper methods.


99-100: Unsafe unique_ptr release in dynamic_cast.

The release() call transfers ownership, which could lead to a memory leak if the cast fails.
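A small sketch of a safer pattern (Base, Derived, and downcast are illustrative names, not from the PR):

#include <memory>
#include <stdexcept>

struct Base { virtual ~Base() = default; };
struct Derived : Base {};

// cast on the raw pointer first; release() only once the cast is known to
// have succeeded, so ownership is never dropped on the failure path
inline std::unique_ptr<Derived> downcast(std::unique_ptr<Base> base)
{
    if (auto* derived = dynamic_cast<Derived*>(base.get()))
    {
        base.release(); // the returned unique_ptr takes ownership
        return std::unique_ptr<Derived>{derived};
    }
    throw std::runtime_error("downcast failed: unexpected runtime type");
}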


409-409: Use dynamic_cast instead of static_cast.

For type safety, use dynamic_cast when downcasting from base class.


458-458: Replace static_cast with dynamic_cast.

Use dynamic_cast for safer type conversion.


342-342: Use dynamic_cast instead of static_cast.

For type safety, use dynamic_cast when downcasting from base class.

-            auto& mhdModel = static_cast<MHDModel&>(model);
+            auto& mhdModel = dynamic_cast<MHDModel&>(model);
src/amr/messengers/hybrid_hybrid_messenger_strategy.hpp (3)

8-12: Remove duplicate includes

Multiple header files are included twice in the file. Clean up the duplicates to improve maintainability.

-#include "core/utilities/index/index.hpp"
-#include "core/numerics/interpolator/interpolator.hpp"
-#include "core/hybrid/hybrid_quantities.hpp"
-#include "core/data/particles/particle_array.hpp"
-#include "core/data/vecfield/vecfield.hpp"
-#include "core/utilities/point/point.hpp"

Also applies to: 37-42


735-740: Add error handling in reflux method

The reflux method accesses map elements without verifying they exist, which could cause crashes if the levels are not properly initialized.

 void reflux(int const coarserLevelNumber, int const fineLevelNumber,
             double const syncTime) override
 {
+    if (refluxSchedules.find(fineLevelNumber) == refluxSchedules.end())
+    {
+        throw std::runtime_error("Reflux schedule not found for level " 
+                               + std::to_string(fineLevelNumber));
+    }
+    if (patchGhostRefluxedSchedules.find(coarserLevelNumber) == patchGhostRefluxedSchedules.end())
+    {
+        throw std::runtime_error("Patch ghost reflux schedule not found for level " 
+                               + std::to_string(coarserLevelNumber));
+    }
     refluxSchedules[fineLevelNumber]->coarsenData();
     patchGhostRefluxedSchedules[coarserLevelNumber]->fillData(syncTime);
 }

19-19: Remove duplicate include of amr_utils.hpp

The header amr/resources_manager/amr_utils.hpp is included twice - first at line 19 and again at line 35.

-#include "amr/resources_manager/amr_utils.hpp"
-

Also applies to: 35-35

🧹 Nitpick comments (3)
src/amr/messengers/mhd_messenger_info.hpp (1)

14-52: Consider adding default member initializers

The class declares many string members without explicit initializers. Consider adding default member initializers or a constructor to make the intended empty-string initial state explicit and consistent.

 class MHDMessengerInfo : public IMessengerInfo
 {
 public:
-    std::string modelDensity;
-    std::string modelVelocity;
+    std::string modelDensity{};
+    std::string modelVelocity{};
     // ... apply to all members

Or add a default constructor that initializes all members to empty strings.

src/amr/solvers/time_integrator/tvdrk2_integrator.hpp (1)

61-62: Fix misleading comment

The comment suggests U1 = Euler(U1), but this step actually computes a new intermediate state. Consider updating the comment to accurately reflect the TVDRK2 algorithm.

-        // U1 = Euler(U1)
+        // U2 = Euler(U1)
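For reference, the two-stage TVD/SSP RK2 scheme this integrator presumably implements is

\begin{aligned}
U^{(1)} &= U^{n} + \Delta t\, L(U^{n}), \\
U^{n+1} &= \tfrac{1}{2} U^{n} + \tfrac{1}{2}\left(U^{(1)} + \Delta t\, L(U^{(1)})\right),
\end{aligned}

so the second Euler application acts on U^{(1)} and produces a distinct intermediate state rather than updating U1 in place, which is what the corrected comment should convey.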
src/python3/cpp_mhd_parameters.hpp (1)

244-340: Consider enabling comprehensive variant generation.

The commented code provides a framework for generating all valid parameter combinations. This could be valuable for comprehensive testing or when users need specific solver configurations not in the default set.

Would you like me to help refactor this into a configurable system where specific variants can be enabled via compile-time flags or configuration?

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 634c9d1 and f61eb86.

📒 Files selected for processing (26)
  • src/amr/data/field/field_geometry.hpp (3 hunks)
  • src/amr/data/field/field_variable.hpp (1 hunks)
  • src/amr/data/field/field_variable_fill_pattern.hpp (5 hunks)
  • src/amr/data/field/refine/electric_field_refiner2.hpp (1 hunks)
  • src/amr/data/field/refine/field_refine_operator.hpp (5 hunks)
  • src/amr/data/field/refine/magnetic_field_refiner.hpp (1 hunks)
  • src/amr/data/field/refine/magnetic_field_regrider.hpp (1 hunks)
  • src/amr/data/field/refine/mhd_field_refiner.hpp (1 hunks)
  • src/amr/data/field/refine/mhd_flux_refiner.hpp (1 hunks)
  • src/amr/messengers/hybrid_hybrid_messenger_strategy.hpp (31 hunks)
  • src/amr/messengers/mhd_messenger.hpp (2 hunks)
  • src/amr/messengers/mhd_messenger_info.hpp (2 hunks)
  • src/amr/messengers/synchronizer_pool.hpp (0 hunks)
  • src/amr/physical_models/mhd_model.hpp (1 hunks)
  • src/amr/resources_manager/resources_manager.hpp (8 hunks)
  • src/amr/resources_manager/tensor_field_resource.hpp (1 hunks)
  • src/amr/solvers/solver.hpp (2 hunks)
  • src/amr/solvers/solver_mhd.hpp (1 hunks)
  • src/amr/solvers/time_integrator/euler.hpp (1 hunks)
  • src/amr/solvers/time_integrator/euler_integrator.hpp (1 hunks)
  • src/amr/solvers/time_integrator/tvdrk2_integrator.hpp (1 hunks)
  • src/amr/solvers/time_integrator/tvdrk3_integrator.hpp (1 hunks)
  • src/core/data/vecfield/vecfield.hpp (0 hunks)
  • src/core/numerics/godunov_fluxes/godunov_utils.hpp (1 hunks)
  • src/diagnostic/diagnostic_model_view.hpp (5 hunks)
  • src/python3/cpp_mhd_parameters.hpp (1 hunks)
💤 Files with no reviewable changes (2)
  • src/amr/messengers/synchronizer_pool.hpp
  • src/core/data/vecfield/vecfield.hpp
🚧 Files skipped from review as they are similar to previous changes (8)
  • src/amr/data/field/refine/magnetic_field_refiner.hpp
  • src/amr/data/field/field_geometry.hpp
  • src/amr/data/field/field_variable.hpp
  • src/amr/resources_manager/tensor_field_resource.hpp
  • src/amr/solvers/time_integrator/tvdrk3_integrator.hpp
  • src/amr/solvers/solver.hpp
  • src/amr/data/field/refine/mhd_field_refiner.hpp
  • src/amr/resources_manager/resources_manager.hpp
🧰 Additional context used
📓 Path-based instructions (1)
**/*.hpp

⚙️ CodeRabbit Configuration File

Review the C++ code, point out issues relative to principles of clean code, expressiveness, and performance.

Files:

  • src/amr/solvers/time_integrator/euler.hpp
  • src/amr/data/field/refine/field_refine_operator.hpp
  • src/amr/data/field/refine/mhd_flux_refiner.hpp
  • src/amr/physical_models/mhd_model.hpp
  • src/amr/data/field/refine/magnetic_field_regrider.hpp
  • src/core/numerics/godunov_fluxes/godunov_utils.hpp
  • src/amr/data/field/refine/electric_field_refiner2.hpp
  • src/amr/solvers/time_integrator/euler_integrator.hpp
  • src/amr/solvers/time_integrator/tvdrk2_integrator.hpp
  • src/amr/messengers/mhd_messenger_info.hpp
  • src/diagnostic/diagnostic_model_view.hpp
  • src/amr/solvers/solver_mhd.hpp
  • src/amr/data/field/field_variable_fill_pattern.hpp
  • src/python3/cpp_mhd_parameters.hpp
  • src/amr/messengers/mhd_messenger.hpp
  • src/amr/messengers/hybrid_hybrid_messenger_strategy.hpp
🧠 Learnings (4)
📓 Common learnings
Learnt from: PhilipDeegan
PR: PHAREHUB/PHARE#888
File: pyphare/pyphare/pharesee/hierarchy/hierarchy_utils.py:566-568
Timestamp: 2024-09-09T13:57:02.285Z
Learning: Avoid commenting on draft PRs for PhilipDeegan unless explicitly requested.
📚 Learning: in `src/core/utilities/point/point.hpp`, the `as_signed()` and `as_unsigned()` methods are intention...
Learnt from: PhilipDeegan
PR: PHAREHUB/PHARE#998
File: src/core/utilities/point/point.hpp:195-210
Timestamp: 2025-04-17T14:41:53.643Z
Learning: In `src/core/utilities/point/point.hpp`, the `as_signed()` and `as_unsigned()` methods are intentionally designed to cause compilation errors for unsupported types (only 4-byte types are supported). This forces developers to implement proper conversions themselves when extending the functionality to new types. The comment "else no return cause not yet handled" indicates this is a deliberate design choice.

Applied to files:

  • src/amr/data/field/field_variable_fill_pattern.hpp
📚 Learning: the bulk velocity calculation in phare must maintain physical consistency across both ion and electr...
Learnt from: rochSmets
PR: PHAREHUB/PHARE#921
File: src/core/data/ions/ions.hpp:138-142
Timestamp: 2025-02-06T10:52:56.461Z
Learning: The bulk velocity calculation in PHARE must maintain physical consistency across both ion and electron calculations. Zero density cases need special handling that preserves the physics, particularly since the calculations are used in level initialization, ion updates, and electron physics.

Applied to files:

  • src/amr/messengers/hybrid_hybrid_messenger_strategy.hpp
📚 Learning: in phare's physics simulations, handling of zero density cells in velocity calculations must be base...
Learnt from: rochSmets
PR: PHAREHUB/PHARE#921
File: src/core/data/ions/ions.hpp:138-142
Timestamp: 2025-02-06T10:52:56.461Z
Learning: In PHARE's physics simulations, handling of zero density cells in velocity calculations must be based on physical requirements rather than simple numerical fixes. The solution must account for population-specific behavior in the bulk velocity computation.

Applied to files:

  • src/amr/messengers/hybrid_hybrid_messenger_strategy.hpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: build (macos-latest)
  • GitHub Check: Analyze (python)
  • GitHub Check: build (macos-14)
  • GitHub Check: build (macos-13)
  • GitHub Check: Analyze (cpp)
  • GitHub Check: build (ubuntu-latest, clang)
  • GitHub Check: build (ubuntu-latest, gcc)
🔇 Additional comments (13)
src/diagnostic/diagnostic_model_view.hpp (4)

13-13: LGTM!

The addition of <type_traits> is necessary for the template metaprogramming using std::enable_if_t.


34-114: Well-structured base class extraction.

The refactoring to extract common functionality into BaseModelView improves code organization and reduces duplication between model-specific implementations.


117-208: Clean template specialization for hybrid model.

The SFINAE-based specialization pattern is well-implemented. The momentum tensor handling with memcpy is appropriate for POD types and the ghost interpolation setup follows the established patterns.
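A cheap guard worth pairing with such a memcpy (generic sketch, not the PR's code; names are illustrative):

#include <cstddef>
#include <cstring>
#include <type_traits>

template<typename T>
void copyPod(T* dst, T const* src, std::size_t count)
{
    // memcpy is only well-defined for trivially copyable types
    static_assert(std::is_trivially_copyable_v<T>, "copyPod requires a trivially copyable type");
    std::memcpy(dst, src, count * sizeof(T));
}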


212-238: Appropriate MHD model specialization.

The accessor methods provide clean access to MHD state variables. The exception in getE() clearly communicates that electric field diagnostics are not available for MHD models.

src/python3/cpp_mhd_parameters.hpp (2)

55-58: Well-designed enum definitions.

The enums include a count member which is useful for compile-time iteration and validation. Good practice for extensible design.
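As an illustration of what the count member enables (the enum and helper names below are hypothetical, not the ones in cpp_mhd_parameters.hpp):

#include <cstddef>

enum class TimeIntegrator { euler, tvdrk2, tvdrk3, count };

// visit every enumerator without hard-coding how many there are
template<typename F>
constexpr void forEachIntegrator(F&& f)
{
    for (std::size_t i = 0; i < static_cast<std::size_t>(TimeIntegrator::count); ++i)
        f(static_cast<TimeIntegrator>(i));
}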


190-195: Physically motivated simulator filtering.

The constraint that hyper-resistivity requires Hall effect is physically sound and efficiently implemented using compile-time evaluation.
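A minimal sketch of that kind of compile-time filter (names are illustrative):

// hyper-resistivity is only admitted when the Hall term is enabled
template<bool Hall, bool Resistivity, bool HyperResistivity>
constexpr bool isValidCombination()
{
    return !HyperResistivity || Hall;
}

static_assert(isValidCombination<true, false, true>());
static_assert(!isValidCombination<false, false, true>());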

src/amr/data/field/field_variable_fill_pattern.hpp (2)

135-209: Well-designed tensor field fill pattern.

The TensorFieldFillPattern properly extends the scalar field pattern to handle tensor fields with rank-aware operations. The use of core::for_N for component iteration is clean and efficient.


330-395: Consistent tensor field ghost interpolation pattern.

The TensorFieldGhostInterpOverlapFillPattern correctly aggregates scalar overlaps for each tensor component, maintaining consistency with the scalar field version.

src/amr/messengers/mhd_messenger.hpp (1)

589-628: Well-implemented ghost node marking strategy.

The NaN marking approach for ghost nodes is clearly documented and properly implemented. This helps track which nodes have been touched by refinement operations.
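A generic sketch of the pattern, independent of the SAMRAI types involved (function names are illustrative):

#include <cmath>
#include <limits>
#include <vector>

// mark every ghost value as "not yet written"
inline void markUntouched(std::vector<double>& ghosts)
{
    for (auto& v : ghosts)
        v = std::numeric_limits<double>::quiet_NaN();
}

// a later pass only fills values that no earlier operator has touched
inline void fillIfUntouched(std::vector<double>& ghosts, double value)
{
    for (auto& v : ghosts)
        if (std::isnan(v))
            v = value;
}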

src/amr/messengers/hybrid_hybrid_messenger_strategy.hpp (4)

74-78: Well-structured type aliases for AMR types

The introduction of type aliases for SAMRAI types and the use of TensorFieldData for vector field abstractions improve code clarity and align well with the MHD support objectives.

Also applies to: 86-86


898-946: Good implementation of ghost node NaN marking

The setNaNsOnFieldGhosts methods correctly mark untouched ghost nodes with NaN values, which helps the refinement operators identify which nodes need to be processed. The implementation properly handles box geometry operations and field-specific ghost regions.


184-189: Excellent error handling for missing variable IDs

The addition of null checks and descriptive error messages for missing magnetic and electric field variable IDs improves robustness and helps with debugging configuration issues.

Also applies to: 201-205, 214-218


1050-1058: Proper initialization of fill patterns

The creation of both overwriting and non-overwriting tensor field fill patterns provides the necessary flexibility for different refinement scenarios. The explicit overwrite_interior parameter makes the behavior clear.

Comment on lines 30 to 35
SAMRAI::hier::IntVector const& ratio)
: fineBox_{destinationGhostBox}
, coarseBox_{sourceGhostBox}
, centerings_{centering}
{
}

⚠️ Potential issue

Unused refinement ratio parameter

The ratio parameter is accepted in the constructor but not stored or used. The implementation assumes a fixed refinement ratio of 2 (evident from the even/odd index checks in lines 64-73). Consider either:

  1. Removing the unused parameter if only ratio 2 is supported
  2. Storing and using it to support variable refinement ratios (see the sketch below)
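A sketch of option 2, reduced to the index arithmetic only (class and member names are illustrative, and negative ghost indices would need floor division rather than plain / and %):

#include <array>
#include <cstddef>

template<std::size_t dimension>
struct RatioAwareIndexing
{
    explicit RatioAwareIndexing(std::array<int, dimension> const& ratio) : ratio_{ratio} {}

    // fine index -> coarse index along direction d
    int toCoarse(int fineIdx, std::size_t d) const { return fineIdx / ratio_[d]; }

    // true when the fine index sits on a coarse face along direction d
    bool onCoarseFace(int fineIdx, std::size_t d) const { return fineIdx % ratio_[d] == 0; }

private:
    std::array<int, dimension> ratio_;
};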
🤖 Prompt for AI Agents
In src/amr/data/field/refine/electric_field_refiner2.hpp around lines 30 to 35,
the constructor accepts a refinement ratio parameter that is not stored or used,
while the code assumes a fixed ratio of 2. To fix this, either remove the unused
ratio parameter from the constructor if only ratio 2 is supported, or modify the
class to store this ratio and update the logic to handle variable refinement
ratios accordingly.

Comment on lines 116 to 187
if (centerings_[dirX] == core::QtyCentering::dual
and centerings_[dirY] == core::QtyCentering::primal)
{
if (onCoarseYFace_(fineIndex))
{
// we're on a fine edge shared with coarse mesh
// take the coarse face value
// if (std::isnan(fineField(ilfx, ilfy)))
fineField(ilfx, ilfy) = coarseField(ilcx, ilcy);
}
else
{
// we're on a fine edge in between two coarse edges
// we take the average
// if (std::isnan(fineField(ilfx, ilfy)))
fineField(ilfx, ilfy)
= 0.5 * (coarseField(ilcx, ilcy) + coarseField(ilcx, ilcy + 1));
}
}
// Ey
else if (centerings_[dirX] == core::QtyCentering::primal
and centerings_[dirY] == core::QtyCentering::dual)
{
if (onCoarseXFace_(fineIndex))
{
// we're on a coarse edge at j=1/4 and 3/4
// we thus copy the coarse at j=1/2
// both fine Ey e.g. at j=100 and 101 will take j=50 on coarse
// so no need to look at whether jfine is even or odd
// just take the value at the local coarse index
// if (std::isnan(fineField(ilfx, ilfy)))
fineField(ilfx, ilfy) = coarseField(ilcx, ilcy);
}
else
{
// we're on a fine edge in between two coarse ones
// we take the average
// if (std::isnan(fineField(ilfx, ilfy)))
fineField(ilfx, ilfy)
= 0.5 * (coarseField(ilcx, ilcy) + coarseField(ilcx + 1, ilcy));
}
}
// and this is now Ez
else if (centerings_[dirX] == core::QtyCentering::primal
and centerings_[dirY] == core::QtyCentering::primal)
{
if (onCoarseXFace_(fineIndex) and onCoarseYFace_(fineIndex))
{
// if (std::isnan(fineField(ilfx, ilfy)))
fineField(ilfx, ilfy) = coarseField(ilcx, ilcy);
}
else if (onCoarseXFace_(fineIndex))
{
// if (std::isnan(fineField(ilfx, ilfy)))
fineField(ilfx, ilfy)
= 0.5 * (coarseField(ilcx, ilcy) + coarseField(ilcx, ilcy + 1));
}
else if (onCoarseYFace_(fineIndex))
{
// if (std::isnan(fineField(ilfx, ilfy)))
fineField(ilfx, ilfy)
= 0.5 * (coarseField(ilcx, ilcy) + coarseField(ilcx + 1, ilcy));
}
else
{
// if (std::isnan(fineField(ilfx, ilfy)))
fineField(ilfx, ilfy)
= 0.25
* (coarseField(ilcx, ilcy) + coarseField(ilcx + 1, ilcy)
+ coarseField(ilcx, ilcy + 1) + coarseField(ilcx + 1, ilcy + 1));
}
}

⚠️ Potential issue

Commented out NaN checks in 2D refinement

All NaN checks in the 2D refinement logic are commented out. This could lead to overwriting already refined values, which may cause incorrect results when multiple refinement operations overlap.

Consider uncommenting these checks to maintain consistency with the 1D and 3D implementations:

-                // if (std::isnan(fineField(ilfx, ilfy)))
+                if (std::isnan(fineField(ilfx, ilfy)))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// we're on a fine edge shared with coarse mesh
// take the coarse face value
- // if (std::isnan(fineField(ilfx, ilfy)))
+ if (std::isnan(fineField(ilfx, ilfy)))
fineField(ilfx, ilfy) = coarseField(ilcx, ilcy);
🤖 Prompt for AI Agents
In src/amr/data/field/refine/electric_field_refiner2.hpp between lines 116 and
187, the NaN checks before assigning values to fineField are commented out,
which risks overwriting existing refined values during overlapping refinements.
To fix this, uncomment all the std::isnan checks so that fineField is only
updated if its current value is NaN, ensuring consistency with the 1D and 3D
refinement logic and preventing unintended overwrites.

Comment on lines 313 to 315
if (std::isnan(fineField(ilfx, ilfy, ilfz)))
fineField(locFineIdx[dirX], locFineIdx[dirY], locFineIdx[dirZ])
= 0.5 * (coarseField(ilcx, ilcy, ilcz) + coarseField(ilcx, ilcy + 1, ilcz));

⚠️ Potential issue

Inconsistent index usage

Line 314 uses locFineIdx array access instead of the consistent ilfx, ilfy, ilfz pattern used elsewhere.

-                    fineField(locFineIdx[dirX], locFineIdx[dirY], locFineIdx[dirZ])
+                    fineField(ilfx, ilfy, ilfz)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if (std::isnan(fineField(ilfx, ilfy, ilfz)))
fineField(locFineIdx[dirX], locFineIdx[dirY], locFineIdx[dirZ])
= 0.5 * (coarseField(ilcx, ilcy, ilcz) + coarseField(ilcx, ilcy + 1, ilcz));
if (std::isnan(fineField(ilfx, ilfy, ilfz)))
fineField(ilfx, ilfy, ilfz)
= 0.5 * (coarseField(ilcx, ilcy, ilcz) + coarseField(ilcx, ilcy + 1, ilcz));
🤖 Prompt for AI Agents
In src/amr/data/field/refine/electric_field_refiner2.hpp around lines 313 to
315, the code inconsistently uses locFineIdx array indices on line 314 instead
of the ilfx, ilfy, ilfz variables used elsewhere. To fix this, replace
locFineIdx[dirX], locFineIdx[dirY], locFineIdx[dirZ] with ilfx, ilfy, ilfz to
maintain consistent indexing when assigning the value to fineField.

Comment on lines 333 to 336
= 0.25
* (coarseField(ilcx, ilcy, ilcz) + coarseField(ilcx + 1, ilcy, ilcz)
+ coarseField(ilcx, ilcy + 1, ilcz + 1)
+ coarseField(ilcx + 1, ilcy + 1, ilcz));
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue

Potential index error in Ez refinement

The coarse field access pattern appears incorrect. For consistency with the surrounding logic, it should likely be:

                          * (coarseField(ilcx, ilcy, ilcz) + coarseField(ilcx + 1, ilcy, ilcz)
-                             + coarseField(ilcx, ilcy + 1, ilcz + 1)
+                             + coarseField(ilcx, ilcy + 1, ilcz)
                             + coarseField(ilcx + 1, ilcy + 1, ilcz));
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
= 0.25
* (coarseField(ilcx, ilcy, ilcz) + coarseField(ilcx + 1, ilcy, ilcz)
+ coarseField(ilcx, ilcy + 1, ilcz + 1)
+ coarseField(ilcx + 1, ilcy + 1, ilcz));
= 0.25
* (coarseField(ilcx, ilcy, ilcz) + coarseField(ilcx + 1, ilcy, ilcz)
+ coarseField(ilcx, ilcy + 1, ilcz)
+ coarseField(ilcx + 1, ilcy + 1, ilcz));
🤖 Prompt for AI Agents
In src/amr/data/field/refine/electric_field_refiner2.hpp around lines 333 to
336, the indexing used to access the coarseField for Ez refinement is
inconsistent and may cause an index error. Review the indices used in the four
coarseField accesses and adjust them to match the correct neighboring points as
per the surrounding logic, ensuring the coordinates align properly for Ez
component refinement.

Comment on lines 36 to 44
MagneticFieldRegrider(std::array<core::QtyCentering, dimension> const& centering,
SAMRAI::hier::Box const& destinationGhostBox,
SAMRAI::hier::Box const& sourceGhostBox,
SAMRAI::hier::IntVector const& ratio)
: fineBox_{destinationGhostBox}
, coarseBox_{sourceGhostBox}
, centerings_{centering}
{
}

⚠️ Potential issue

Validate refinement ratio in constructor

Similar to MHDFluxRefiner, this class appears to assume a specific refinement ratio (2) based on the even/odd index checks, but doesn't validate this in the constructor.

Add validation in the constructor:

 MagneticFieldRegrider(std::array<core::QtyCentering, dimension> const& centering,
                       SAMRAI::hier::Box const& destinationGhostBox,
                       SAMRAI::hier::Box const& sourceGhostBox,
                       SAMRAI::hier::IntVector const& ratio)
     : fineBox_{destinationGhostBox}
     , coarseBox_{sourceGhostBox}
     , centerings_{centering}
 {
+    for (int d = 0; d < dimension; ++d)
+    {
+        if (ratio[d] != 2)
+        {
+            throw std::invalid_argument("MagneticFieldRegrider only supports refinement ratio of 2");
+        }
+    }
 }
🤖 Prompt for AI Agents
In src/amr/data/field/refine/magnetic_field_regrider.hpp around lines 36 to 44,
the constructor does not validate that the refinement ratio is 2, which is
assumed later in the code. Add a check in the constructor to verify that the
ratio parameter equals 2 for all dimensions, and if not, throw an exception or
handle the error appropriately to ensure the class is used correctly.
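
For context on why the ratio matters here: the even/odd index tests only distinguish "aligned with a coarse face" from "halfway between coarse faces" when the fine-to-coarse ratio is exactly 2. A minimal, self-contained illustration (not PR code, plain integer indices):

#include <cassert>

// With ratio 2, even fine indices coincide with coarse faces and odd ones sit
// between them; with any other ratio the parity test loses that meaning,
// which is why validating the ratio in the constructor is worthwhile.
int main()
{
    int const ratio = 2;
    for (int fine = 0; fine < 8; ++fine)
    {
        int const coarse        = fine / ratio;    // parent coarse index
        bool const onCoarseFace = (fine % 2 == 0); // only meaningful for ratio == 2
        assert(!onCoarseFace || fine == coarse * ratio);
    }
}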

Comment on lines +185 to +186
// we're on a coarse X face
// take the coarse face value

⚠️ Potential issue

Fix incorrect comment in Bz refinement

The comment incorrectly states "we're on a coarse X face" when it should refer to a Z face in the Bz refinement case.

-                    // we're on a coarse X face
+                    // we're on a coarse Z face
                     // take the coarse face value
🤖 Prompt for AI Agents
In src/amr/data/field/refine/magnetic_field_regrider.hpp at lines 185-186, the
comment incorrectly mentions "we're on a coarse X face" while it should refer to
a Z face for the Bz refinement. Update the comment to correctly state that it is
a coarse Z face to accurately reflect the code's logic.

Comment on lines +28 to +35
MHDFluxRefiner(std::array<core::QtyCentering, dimension> const& centering,
SAMRAI::hier::Box const& destinationGhostBox,
SAMRAI::hier::Box const& sourceGhostBox, SAMRAI::hier::IntVector const& ratio)
: fineBox_{destinationGhostBox}
, coarseBox_{sourceGhostBox}
, centerings_{centering}
{
}

⚠️ Potential issue

Validate refinement ratio in constructor

The implementation assumes a refinement ratio of 2 throughout (e.g., even/odd index checks), but the constructor doesn't validate this assumption.

Add validation in the constructor:

 MHDFluxRefiner(std::array<core::QtyCentering, dimension> const& centering,
                SAMRAI::hier::Box const& destinationGhostBox,
                SAMRAI::hier::Box const& sourceGhostBox, SAMRAI::hier::IntVector const& ratio)
     : fineBox_{destinationGhostBox}
     , coarseBox_{sourceGhostBox}
     , centerings_{centering}
 {
+    for (int d = 0; d < dimension; ++d)
+    {
+        if (ratio[d] != 2)
+        {
+            throw std::invalid_argument("MHDFluxRefiner only supports refinement ratio of 2");
+        }
+    }
 }
🤖 Prompt for AI Agents
In src/amr/data/field/refine/mhd_flux_refiner.hpp around lines 28 to 35, the
constructor assumes a refinement ratio of 2 but does not validate this. Add a
check in the constructor to verify that each component of the ratio parameter
equals 2, and if not, handle the error appropriately, such as throwing an
exception or logging an error, to ensure the assumption is enforced.
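
Since the same check is suggested for MagneticFieldRegrider above, both constructors could call one shared helper instead of duplicating the loop. This is only a sketch with a hypothetical name; it assumes the ratio type supports operator[] indexing (which the suggested loops already rely on), and whichever form is kept, the header needs <stdexcept> for std::invalid_argument.

#include <cstddef>
#include <stdexcept>
#include <string>

// Hypothetical shared helper (not in the PR): called from each refiner's
// constructor to reject any refinement ratio other than 2.
template<std::size_t dimension, typename RatioT>
void checkRefinementRatioIsTwo(RatioT const& ratio, std::string const& who)
{
    for (std::size_t d = 0; d < dimension; ++d)
        if (ratio[d] != 2)
            throw std::invalid_argument(who + " only supports a refinement ratio of 2");
}

The MHDFluxRefiner constructor body would then be checkRefinementRatioIsTwo<dimension>(ratio, "MHDFluxRefiner");, and MagneticFieldRegrider would make the same call with its own name.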

};
} // namespace PHARE::solver

#endif

⚠️ Potential issue

Fix header guard closing comment to match the define

The closing comment should match the header guard defined at the top of the file.

-#endif
+#endif // PHARE_CORE_NUMERICS_EULER_INTEGRATOR_HPP
🤖 Prompt for AI Agents
In src/amr/solvers/time_integrator/euler_integrator.hpp at line 145, the closing
comment for the header guard does not match the macro defined at the top of the
file. Update the comment after #endif to exactly match the header guard macro
name used in the #ifndef directive at the beginning of the file.
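
For reference, the convention being asked for is just that the trailing comment echoes the macro opened at the top of the file. A generic sketch (the macro name below is illustrative, not necessarily the one euler_integrator.hpp actually defines):

#ifndef PHARE_EXAMPLE_HEADER_HPP
#define PHARE_EXAMPLE_HEADER_HPP

// ... declarations ...

#endif // PHARE_EXAMPLE_HEADER_HPP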

Comment on lines +181 to +207
NO_DISCARD auto getCompileTimeResourcesViewList()
{
if constexpr (dimension == 1)
return std::forward_as_tuple(rho_fx, rhoV_fx, B_fx, Etot_fx);
else if constexpr (dimension == 2)
return std::forward_as_tuple(rho_fx, rhoV_fx, B_fx, Etot_fx, rho_fy, rhoV_fy, B_fy,
Etot_fy);
else if constexpr (dimension == 3)
return std::forward_as_tuple(rho_fx, rhoV_fx, B_fx, Etot_fx, rho_fy, rhoV_fy, B_fy,
Etot_fy, rho_fz, rhoV_fz, B_fz, Etot_fz);
else
throw std::runtime_error("Error - AllFluxes - dimension not supported");
}

NO_DISCARD auto getCompileTimeResourcesViewList() const
{
if constexpr (dimension == 1)
return std::forward_as_tuple(rho_fx, rhoV_fx, B_fx, Etot_fx);
else if constexpr (dimension == 2)
return std::forward_as_tuple(rho_fx, rhoV_fx, B_fx, Etot_fx, rho_fy, rhoV_fy, B_fy,
Etot_fy);
else if constexpr (dimension == 3)
return std::forward_as_tuple(rho_fx, rhoV_fx, B_fx, Etot_fx, rho_fy, rhoV_fy, B_fy,
Etot_fy, rho_fz, rhoV_fz, B_fz, Etot_fz);
else
throw std::runtime_error("Error - AllFluxes - dimension not supported");
}

⚠️ Potential issue

Replace runtime errors with compile-time checks

The getCompileTimeResourcesViewList methods throw runtime errors for unsupported dimensions, but dimension is a compile-time constant, so unsupported values can be rejected at compile time with static_assert instead:

     NO_DISCARD auto getCompileTimeResourcesViewList()
     {
+        static_assert(dimension >= 1 && dimension <= 3, "Dimension must be 1, 2, or 3");
         if constexpr (dimension == 1)
             return std::forward_as_tuple(rho_fx, rhoV_fx, B_fx, Etot_fx);
         else if constexpr (dimension == 2)
             return std::forward_as_tuple(rho_fx, rhoV_fx, B_fx, Etot_fx, rho_fy, rhoV_fy, B_fy,
                                          Etot_fy);
-        else if constexpr (dimension == 3)
+        else // dimension == 3
             return std::forward_as_tuple(rho_fx, rhoV_fx, B_fx, Etot_fx, rho_fy, rhoV_fy, B_fy,
                                          Etot_fy, rho_fz, rhoV_fz, B_fz, Etot_fz);
-        else
-            throw std::runtime_error("Error - AllFluxes - dimension not supported");
     }
🤖 Prompt for AI Agents
In src/core/numerics/godunov_fluxes/godunov_utils.hpp around lines 181 to 207,
the getCompileTimeResourcesViewList methods throw runtime errors for unsupported
dimensions, which is inappropriate since dimension is a compile-time constant.
Replace the runtime throw statements with static_assert checks to enforce
dimension validity at compile time, ensuring unsupported dimensions cause
compilation errors rather than runtime exceptions.
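
One caveat on top of the suggestion: a single static_assert at the top of the function is the simplest fix, but if the intent is for the else branch itself to be the compile-time error, note that a bare static_assert(false, ...) in a discarded branch only became legal in C++23; before that, the condition has to be made dependent. A sketch of both options, outside the PR code (AllFluxesSketch and always_false are illustrative names, and ints stand in for the real tuples):

#include <cstddef>
#include <type_traits>

template<std::size_t d>
struct always_false : std::false_type {};

template<std::size_t dimension>
struct AllFluxesSketch
{
    auto views()
    {
        // Option 1: reject unsupported dimensions up front.
        static_assert(dimension >= 1 && dimension <= 3, "dimension must be 1, 2 or 3");

        if constexpr (dimension == 1)
            return 1;
        else if constexpr (dimension == 2)
            return 2;
        else if constexpr (dimension == 3)
            return 3;
        else
            // Option 2: a dependent false only fires if this branch is
            // actually instantiated (pre-C++23, a plain static_assert(false)
            // would fire regardless of the if constexpr condition).
            static_assert(always_false<dimension>::value, "dimension not supported");
    }
};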

@UCaromel UCaromel force-pushed the mhd-refluxing2 branch 2 times, most recently from fd0bd53 to 33ba980 on September 12, 2025 16:44
@github-advanced-security github-advanced-security bot left a comment

CodeQL found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.

"Pzz": "primal",

# finite volume mhd quantities are 3ple dual
"mhdRho": "dual",
Member

I would guess for all dual quantities, we could just have a "dual": "dual" entry per dim?

we might want that for other reasons too tbh

UCaromel and others added 12 commits September 18, 2025 11:29
- fine_boundary_represents_var for copy **before** refinement
- PatchData NaN initialized on construction
- fix tests failing as result of above
- comment a field refinement test (useless, wrong refinement op for E,B)
- debug plots for advance field overlap test
- copy done before refinement (boolean false in variable)
- overwrite_interior false also for refinement is default for FieldFillPattern
- J manually init to zero in model init, fine init and regrid init (Jx unused in ampere but used in Ohm with its now NaN values)
- Grid/NdarrayVector take default value overrides (for test)
- UsableTensorField is default constructed with zero init.
- TensorFieldData
- with refluxing
… and added some in code documentation comments
@UCaromel UCaromel mentioned this pull request Nov 7, 2025
@PhilipDeegan
Member

we can probably close this given #1104 ?
