5 changes: 5 additions & 0 deletions crates/bevy_asset/src/server/info.rs
@@ -314,6 +314,11 @@ impl AssetInfos {
self.get_index_handle(ErasedAssetIndex::new(index, type_id))
}

pub(crate) fn get_path_ref(&self, index: ErasedAssetIndex) -> Option<&AssetPath<'static>> {
let info = self.infos.get(&index)?;
info.path.as_ref()
}

pub(crate) fn get_path_indices<'a>(
&'a self,
path: &'a AssetPath<'_>,
46 changes: 46 additions & 0 deletions crates/bevy_asset/src/server/mod.rs
@@ -1365,6 +1365,31 @@ impl AssetServer {
Some(info.path.as_ref()?.clone())
}

/// Returns a reference to the path for the given `id`, if it has one.
///
/// This method avoids cloning the path, which can be more efficient when
/// querying many asset handles.
///
/// Note: the returned value holds a read lock on the asset server's internal
/// state for as long as it remains in scope, so it should not be held longer
/// than necessary. If you need an owned path, use
/// [`AssetServer::get_path`] instead.
pub fn get_path_ref<'a>(
&'a self,
id: impl Into<UntypedAssetId>,
) -> Option<impl core::ops::Deref<Target = AssetPath<'static>> + 'a> {
let index = id.into().try_into().ok()?;
let infos = self.read_infos();

if infos.get_path_ref(index).is_some() {
Some(AssetPathRef {
_guard: infos,
index,
})
} else {
None
}
}

/// Returns the [`AssetServerMode`] this server is currently in.
pub fn mode(&self) -> AssetServerMode {
self.data.mode
@@ -2133,6 +2158,27 @@ fn format_missing_asset_ext(exts: &[String]) -> String {
}
}

/// A reference to an asset path that keeps the internal lock alive.
///
/// This type is returned by [`AssetServer::get_path_ref`] and allows
/// accessing the path without cloning it, while keeping the necessary
/// read lock alive for the duration of its lifetime.
pub struct AssetPathRef<'a> {
_guard: RwLockReadGuard<'a, AssetInfos>,
index: ErasedAssetIndex,
}

impl<'a> core::ops::Deref for AssetPathRef<'a> {
type Target = AssetPath<'static>;

fn deref(&self) -> &Self::Target {
// PANICS: `AssetPathRef` is only constructed after `get_path_ref` confirms the path exists, so this `expect` never fires.
self._guard
.get_path_ref(self.index)
.expect("AssetPathRef should only be created when path exists")
}
}

impl core::fmt::Debug for AssetServer {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
f.debug_struct("AssetServer")
25 changes: 23 additions & 2 deletions crates/bevy_ecs/src/schedule/config.rs
@@ -386,6 +386,14 @@ pub trait IntoScheduleConfigs<T: Schedulable<Metadata = GraphInfo, GroupMetadata
/// Each individual condition will be evaluated at most once (per schedule run),
/// right before the corresponding system prepares to run.
///
/// # Tick and Change Detection Behavior
[Contributor review comment] If we're mentioning that each copy of the system keeps its own ticks, then it might also be worth mentioning that each one also keeps its own parameter state. Notably, MessageReader will have a separate cursor for each one, so copies of on_message may return different values.

And similarly for the mentions of tick state and change detection in fn chain().

///
/// When using `distributive_run_if`:
/// - Each system evaluates the condition independently
/// - System ticks are updated individually based on when each system attempts to run
/// - Change detection remains accurate since each system maintains its own tick state
/// - In chained systems, later systems will still update their ticks even if earlier ones don't run
///
/// This is equivalent to calling [`run_if`](IntoScheduleConfigs::run_if) on each individual
/// system, as shown below:
///
@@ -417,8 +425,11 @@ pub trait IntoScheduleConfigs<T: Schedulable<Metadata = GraphInfo, GroupMetadata

/// Run the systems only if the [`SystemCondition`] is `true`.
///
/// The `SystemCondition` will be evaluated at most once (per schedule run),
/// the first time a system in this set prepares to run.
/// Multiple conditions can be added in two ways:
/// - Chained calls like `.run_if(a).run_if(b)` evaluate all conditions independently
/// - Single call like `run_if(a.and(b))` short-circuits (`b` won't run if `a` is false)
///
/// For per-system condition evaluation, use [`distributive_run_if`].
[Contributor review comment] The "Note" section below already mentions distributive_run_if with a bit more context, so I'd be inclined to leave this line out (or else move the whole "Note" up here).

///
/// If this set contains more than one system, calling `run_if` is equivalent to adding each
/// system to a common set and configuring the run condition on that set, as shown below:
@@ -549,6 +560,16 @@ impl<T: Schedulable<Metadata = GraphInfo, GroupMetadata = Chain>> IntoScheduleCo
self
}

/// Chain systems to run in sequence, by implicitly adding ordering constraints between them.
///
/// Systems in a chain:
/// - Execute in sequential order
/// - Each maintain their own tick state
/// - Have independent change detection
/// - Update ticks even when skipped by conditions
[Contributor review comment] Is this right? I thought systems only set their last_run tick when they actually ran, and not when skipped by conditions. Chaining them shouldn't affect that, should it?

/// - Equivalent to using `before` / `after` system orderings.
///
/// Later systems in the chain will see changes made by earlier systems while maintaining
[Member review comment] Somewhere in here, I think it would be helpful to explicitly call out that this is equivalent to adding before / after system orderings.

/// their own change detection state.
fn chain(self) -> Self {
self.chain_inner()
}