@@ -92,15 +92,15 @@ class HSGP(Base):
 
 The `gp.HSGP` class is an implementation of the Hilbert Space Gaussian process. It is a
 reduced rank GP approximation that uses a fixed set of basis vectors whose coefficients are
-random functions of a stationary covariance function's power spectral density. It's usage
+random functions of a stationary covariance function's power spectral density. Its usage
 is largely similar to `gp.Latent`. Like `gp.Latent`, it does not assume a Gaussian noise model
 and can be used with any likelihood, or as a component anywhere within a model. Also like
 `gp.Latent`, it has `prior` and `conditional` methods. It supports any sum of covariance
 functions that implement a `power_spectral_density` method. (Note, this excludes the
 `Periodic` covariance function, which uses a different set of basis functions for a
 low rank approximation, as described in `HSGPPeriodic`.)
 
-For information on choosing appropriate `m`, `L`, and `c`, refer Ruitort-Mayol et al. or to
+For information on choosing appropriate `m`, `L`, and `c`, refer to Ruitort-Mayol et al. or to
 the PyMC examples that use HSGP.
 
 To work with the HSGP in its "linearized" form, as a matrix of basis vectors and a vector of
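As a minimal sketch of the workflow this docstring describes (all data, priors, and variable names below are illustrative, not taken from the diff), `gp.HSGP` can stand in for `gp.Latent`, including with a non-Gaussian likelihood:

import numpy as np
import pymc as pm

X = np.linspace(0, 10, 100)[:, None]
y_bin = (np.sin(X).flatten() + 0.1 * np.random.randn(100)) > 0

with pm.Model() as model:
    ell = pm.InverseGamma("ell", mu=2.0, sigma=1.0)
    cov_func = pm.gp.cov.Matern52(1, ls=ell)

    # m basis vectors; c extends the approximation domain beyond the observed X
    gp = pm.gp.HSGP(m=[25], c=1.5, cov_func=cov_func)
    f = gp.prior("f", X=X)

    # No Gaussian noise model is assumed, so any likelihood works
    pm.Bernoulli("y", p=pm.math.sigmoid(f), observed=y_bin)

    # As with gp.Latent, conditional gives the GP at new inputs
    f_new = gp.conditional("f_new", Xnew=np.linspace(10, 12, 20)[:, None])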
@@ -117,14 +117,14 @@ class HSGP(Base):
 `active_dim`.
 c: float
     The proportion extension factor. Used to construct L from X. Defined as `S = max|X|` such
-    that `X` is in `[-S, S]`. `L` is the calculated as `c * S`. One of `c` or `L` must be
+    that `X` is in `[-S, S]`. `L` is calculated as `c * S`. One of `c` or `L` must be
     provided. Further information can be found in Ruitort-Mayol et al.
 drop_first: bool
     Default `False`. Sometimes the first basis vector is quite "flat" and very similar to
     the intercept term. When there is an intercept in the model, ignoring the first basis
     vector may improve sampling. This argument will be deprecated in future versions.
 parameterization: str
-    Whether to use `centred ` or `noncentered` parameterization when multiplying the
+    Whether to use the `centered` or `noncentered` parameterization when multiplying the
     basis by the coefficients.
 cov_func: Covariance function, must be an instance of `Stationary` and implement a
     `power_spectral_density` method.
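To make the `c`/`L` relationship concrete, here is a small hand-worked sketch (the numbers are invented for illustration): with centered inputs spanning `[-5, 5]`, `S = 5`, so `c = 1.5` gives `L = 7.5`, and passing `L` directly is equivalent:

import numpy as np
import pymc as pm

X = np.linspace(-5, 5, 200)[:, None]
S = np.max(np.abs(X))   # S = 5.0
c = 1.5
L = c * S               # L = 7.5; the approximation is valid on [-L, L]

cov_func = pm.gp.cov.ExpQuad(1, ls=1.0)
gp_from_c = pm.gp.HSGP(m=[100], c=c, cov_func=cov_func)
gp_from_L = pm.gp.HSGP(m=[100], L=[float(L)], cov_func=cov_func)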
@@ -245,16 +245,16 @@ def prior_linearized(self, Xs: TensorLike):
245245 """Linearized version of the HSGP. Returns the Laplace eigenfunctions and the square root
246246 of the power spectral density needed to create the GP.
247247
248- This function allows the user to bypass the GP interface and work directly with the basis
248+ This function allows the user to bypass the GP interface and work with the basis
249249 and coefficients directly. This format allows the user to create predictions using
250250 `pm.set_data` similarly to a linear model. It also enables computational speed ups in
251- multi-GP models since they may share the same basis. The return values are the Laplace
251+ multi-GP models, since they may share the same basis. The return values are the Laplace
252252 eigenfunctions `phi`, and the square root of the power spectral density.
253253
254254 Correct results when using `prior_linearized` in tandem with `pm.set_data` and
255255 `pm.MutableData` require two conditions. First, one must specify `L` instead of `c` when
256256 the GP is constructed. If not, a RuntimeError is raised. Second, the `Xs` needs to be
257- zero-centered, so it's mean must be subtracted. An example is given below.
257+ zero-centered, so its mean must be subtracted. An example is given below.
258258
259259 Parameters
260260 ----------
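A short sketch of the two conditions just described, ahead of the docstring's own fuller example below (the `RuntimeError` behavior is taken from the text above; names and numbers are illustrative):

import numpy as np
import pymc as pm

X = np.linspace(0, 10, 100)[:, None]
Xs = X - X.mean(axis=0)  # condition two: zero-center the inputs

with pm.Model():
    cov_func = pm.gp.cov.ExpQuad(1, ls=1.0)

    # Condition one: construct with L, not c, or prior_linearized
    # raises a RuntimeError:
    # gp = pm.gp.HSGP(m=[50], c=1.5, cov_func=cov_func)

    gp = pm.gp.HSGP(m=[50], L=[7.5], cov_func=cov_func)
    phi, sqrt_psd = gp.prior_linearized(Xs=Xs)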
@@ -286,9 +286,9 @@ def prior_linearized(self, Xs: TensorLike):
 # L = [10] means the approximation is valid from Xs = [-10, 10]
 gp = pm.gp.HSGP(m=[200], L=[10], cov_func=cov_func)
 
-# Order is important. First calculate the mean, then make X a shared variable,
-# then subtract the mean. When X is mutated later, the correct mean will be
-# subtracted.
+# Order is important.
+# First calculate the mean, then make X a shared variable, then subtract the mean.
+# When X is mutated later, the correct mean will be subtracted.
 X_mean = np.mean(X, axis=0)
 X = pm.MutableData("X", X)
 Xs = X - X_mean
@@ -301,9 +301,14 @@ def prior_linearized(self, Xs: TensorLike):
 # as m_star.
 beta = pm.Normal("beta", size=gp._m_star)
 
-# The (non-centered) GP approximation is given by
+# The (non-centered) GP approximation is given by:
 f = pm.Deterministic("f", phi @ (beta * sqrt_psd))
 
+# The centered approximation can be more efficient when
+# the GP is stronger than the noise:
+# beta = pm.Normal("beta", sigma=sqrt_psd, size=gp._m_star)
+# f = pm.Deterministic("f", phi @ beta)
+
 ...
 
 
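The `...` above is the docstring's own ellipsis and is left as-is. Purely as a hedged sketch, one plausible continuation (assuming the surrounding example defines a model context named `model`, observations `y`, and new inputs `X_new`, none of which appear in the shown hunks) would attach a likelihood and then swap in new inputs with `pm.set_data`:

with model:
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("y_obs", mu=f, sigma=sigma, observed=y)
    idata = pm.sample()

# Because Xs was defined as X - X_mean, with X_mean computed before X became
# a shared variable, mutating X re-centers the new inputs correctly
with model:
    pm.set_data({"X": X_new})
    preds = pm.sample_posterior_predictive(idata, var_names=["f"])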