Introductory Version of Funnel Analysis Notebook #2070
Conversation
cetagostini commented on 2025-11-07T08:35:07Z:

Probably you don't want this: you are plotting the posterior of a transformed variable predicted with another transformed variable. If this were the original scale, or a simple example where we don't need transformations, I would support the idea, but not here.
cetagostini commented on 2025-11-07T08:35:08Z:

The recovered posterior is correct.

juanitorduz commented on 2025-11-07T21:36:19Z:

:)
cetagostini commented on 2025-11-07T08:35:08Z:

What are you trying to do here? I feel something is wrong here, and it looks like an in-sample fit? If so, I'd remove it; we are working in a transformed space, so this says nothing.

PS: After this part, I'm not sure I follow, but I guess you are trying to show the recovery of x4 (the real variable, not the transformed one). That can be tricky; it's not in the original notebook, and I never thought about it.

juanitorduz commented on 2025-11-07T21:37:18Z:

Here I am showing the x1 contribution and the posterior predictive mean of the likelihood.
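As a minimal sketch of what "posterior predictive mean" refers to (synthetic data only, not the notebook's model; the real draws would come from the fitted model's `InferenceData`), the mean over the `chain` and `draw` dimensions can be taken with xarray:

```python
import numpy as np
import xarray as xr

rng = np.random.default_rng(0)

# Synthetic stand-in for posterior predictive draws with
# (chain, draw, date) dimensions; values centered around 5.0.
pp = xr.DataArray(
    rng.normal(loc=5.0, size=(4, 100, 12)),
    dims=["chain", "draw", "date"],
)

# Average over the sampling dimensions to get one mean value per date.
pp_mean = pp.mean(dim=["chain", "draw"])
print(pp_mean.dims)  # ('date',)
```

This is the quantity one would overlay on the observed series when comparing the in-sample fit against the likelihood.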
cetagostini commented on 2025-11-07T08:35:10Z:

You need to replace this with:

```python
import xarray as xr

# Build an xarray with proper broadcasting over the chain, draw, date dimensions
impressions_x1_values = X_train["impressions_x1"].values  # shape: (date,)
saturation_beta_scaled = (
    second_causal_mmm.idata.posterior.saturation_beta
    * second_causal_mmm.scalers._target.item()
)  # shape: (chain, draw)

# Broadcast to create a (chain, draw, date) output
posterior_result = (
    impressions_x1_values[None, None, :]
    * saturation_beta_scaled.values[:, :, None]
)
posterior_contribution_x1_over_x4 = xr.DataArray(
    posterior_result,
    dims=["chain", "draw", "date"],
    coords={
        "chain": second_causal_mmm.idata.posterior.chain,
        "draw": second_causal_mmm.idata.posterior.draw,
        "date": X_train.index,
    },
)
```

Sampling can be complicated because of the transformations, but you recover the coefficient anyway. This should be equivalent.
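The broadcasting pattern in the suggestion above can be checked in isolation with synthetic arrays (the names and shapes mirror the snippet; `X_train`, the posterior, and the target scaler are replaced by random stand-ins here):

```python
import numpy as np
import xarray as xr

rng = np.random.default_rng(42)
n_chain, n_draw, n_date = 2, 50, 10

# Stand-in for X_train["impressions_x1"].values, shape: (date,)
impressions_x1_values = rng.uniform(size=n_date)

# Stand-in for the posterior saturation_beta, shape: (chain, draw)
saturation_beta = xr.DataArray(
    rng.normal(size=(n_chain, n_draw)),
    dims=["chain", "draw"],
)
target_scale = 100.0  # stand-in for scalers._target.item()
saturation_beta_scaled = saturation_beta * target_scale

# Broadcast (date,) against (chain, draw) -> (chain, draw, date)
posterior_result = (
    impressions_x1_values[None, None, :]
    * saturation_beta_scaled.values[:, :, None]
)
posterior_contribution = xr.DataArray(
    posterior_result,
    dims=["chain", "draw", "date"],
)
print(posterior_contribution.shape)  # (2, 50, 10)
```

Each `(chain, draw)` slice is just the date series scaled by that draw's coefficient, which is why this is equivalent to sampling the contribution directly once the coefficient is recovered.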
The notebook https://www.pymc-marketing.io/en/latest/notebooks/mmm/mmm_upper_funnel_causal_approach.html is fantastic 🔥 ! We want to have a shorter version with more accessible comments and more detailed explanations for newcomers.
📚 Documentation preview 📚: https://pymc-marketing--2070.org.readthedocs.build/en/2070/