Commit ec022a4

tighten writing.
Signed-off-by: Nathaniel <NathanielF@users.noreply.github.com>
1 parent c4cd50b commit ec022a4

File tree

2 files changed (+16, -12 lines)


examples/case_studies/bayesian_sem_workflow.ipynb

Lines changed: 8 additions & 6 deletions
@@ -5909,9 +5909,9 @@
     "id": "59c4d17a",
     "metadata": {},
     "source": [
-    "In an applied setting it's these kinds of implications that are crucially important to surface and understand. From a workflow point of view we want to ensure that our modelling drives clarity on these precise points and avoids adding noise generally. This is where parameter recovery exercises can lend assurances and bolster confidence in the findings of empirical work.\n",
+    "In an applied setting it's these kinds of implications that are crucially important to surface and understand. From a workflow point of view we want to ensure that our modelling drives clarity on these precise points and avoids adding noise generally. If we're assessing a particular hypothesis or aiming to estimate a concrete quantity, the model specification should be robust enough to support those inferences. This is where parameter recovery exercises can lend assurances and bolster confidence in the findings of empirical work. Here we've shown that our model specification will support inferences about a class of particular causal contrasts, i.e. how treatment changes the direct effects of one latent construct on another.\n",
     "\n",
-    "Another way we might interrogate the implications of a model is to see how well it can predict \"downstream\" outcomes of the implied model. How does job-satisfaction relate to attrition risk?"
+    "Another way we might interrogate the implications of a model is to see how well it can predict \"downstream\" outcomes of the implied model. How does job-satisfaction relate to attrition risk and approaches to work?"
     ]
    },
    {
@@ -5921,7 +5921,7 @@
     "source": [
     "## Discrete Choice Component\n",
     "\n",
-    "Combining SEM structures with Discrete choice models involves simply adding a an extra likelihood term dependent on the latent factors. HR managers everywhere need to monitor attrition decisions. Often, they conceptualise the rationale for these decisions as being driven by abstract notions of job satisfaction. We now have tools to measure the latent constructs, but can we predict attrition outcomes from these latent predictors? \n",
+    "Combining SEM structures with discrete choice models involves adding an extra likelihood term dependent on the latent factors. HR managers everywhere need to monitor attrition decisions. Often, they conceptualise the rationale for these decisions as being driven by abstract notions of job satisfaction. We now have tools to measure the latent constructs, but can we predict attrition outcomes from these latent predictors? \n",
     "\n",
     "Let's include a discrete choice scenario into the SEM model context. We're aiming to predict a categorical decision about whether the employee `quits/stays/quiet-quits` as the result of their job satisfaction, and their view of the utility of work. Again, we'll set this up as a parameter recovery exercise. \n",
     "\n",
@@ -5933,7 +5933,7 @@
     "id": "306e09b8",
     "metadata": {},
     "source": [
-    "The discrete choice setting is intuitive in this context because we can model the individual's subjective utility of work. This is conceptualised (in rational-choice theory) to determine the choice outcome."
+    "The discrete choice setting is intuitive in this context because we can model the individual's subjective utility of work as a function of their job satisfaction. This utility measure is conceptualised (in rational-choice theory) to determine the choice outcome."
     ]
    },
    {
@@ -5954,7 +5954,7 @@
     "id": "d80e7e2f",
     "metadata": {},
     "source": [
-    "The modelling is similar to the basic SEM set up, but we've additionally included a multinomial outcome for each of the available alternatives. Note however, that we have no alternative-specific covariates (i.e. price of the choice) since the draws of the latent constructs are fixed predictors for each of the three outcomes. As such we need to constrain one of the alternatives to 0 so it acts as the reference class and allows identification of the coefficient weights for the other alternatives. This is a basic implementation of a discrete choice model where we allow alternative-specific intercept terms to interact with the beta coefficient for each latent construct. In this way we infer how e.g. job satisfaction drives career choice."
+    "The modelling is similar to the basic SEM setup, but we've additionally included a multinomial outcome for each of the available alternatives. Note, however, that we have no alternative-specific covariates (e.g. the price of each choice) since the draws of the latent constructs are fixed predictors for each of the three outcomes. As such we need to constrain one of the alternatives to 0 so it acts as the reference class and allows identification of the coefficient weights for the other alternatives. This is a basic implementation of a discrete choice model where we allow alternative-specific intercept terms to interact with the beta coefficient for each latent construct. Other variants are possible, but this example will allow us to infer how job satisfaction drives choices about work."
     ]
    },
    {
@@ -6930,7 +6930,9 @@
     "source": [
     "We can recover the inverse relationship we encoded in the outcomes between job-satisfaction and the choice to stay. This is encouraging. \n",
     "\n",
-    "The \"action\" in human decision making is often understood to be driven by these hard-to-quantify constructs that determine motivation. SEM with a discrete component offers us a way to model these processes allowing for measurement error between the observables and the latent drivers of choice. Secondly, we are triangulating the values of the system between two sources of observable data. On the one hand, we measure latent constructs in the SEM with a range of survey measures (`JW1`, `JW2`, ... ) but then calibrate the consequences of that measurement against revealed choice data. This is a powerful technique for abstracting over the expressed attitudes of rational agents, and deriving an interpretable representation of the latent attitude in their expressions. These representations can be further calibrated against the observed choices made by the agent. This two-step of information compression and prediction serves to concisely quantify and evaluate the idiosyncratic attitudes of a complex agent. As we iteratively layer-in these constructs in our model development, we come to understand their baseline and interactive effects. This perspective helps us gauge the coherence between attitudes and actions of the agents under study. "
+    "The \"action\" in human decision making is often understood to be driven by these hard-to-quantify constructs that determine motivation. SEM with a discrete choice component offers us a way to model these processes, while allowing for measurement error between the observables and the latent drivers of choice. Secondly, we are triangulating the values of the system between two sources of observable data. On the one hand, we measure latent constructs in the SEM with a range of survey measures (`JW1`, `JW2`, ... ) but then calibrate the consequences of that measurement against revealed choice data. This is a powerful technique for abstracting over the expressed attitudes of rational agents, and deriving an interpretable representation of the latent attitude in their expressions. These representations are then further calibrated against the observed choices made by the agent. \n",
+    "\n",
+    "This two-step of information compression and prediction serves to concisely quantify and evaluate the idiosyncratic attitudes of a complex agent. As we iteratively layer-in these constructs in our model development, we come to understand their baseline and interactive effects. This perspective helps us gauge the coherence between attitudes and actions of the agents under study. "
     ]
    },
    {
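The reference-class constraint described in this file's later hunks can be sketched in plain NumPy, independently of the notebook's PyMC model. The parameter values, the `job_satisfaction` draws, and the variable names below are all hypothetical: the `stay` alternative's intercept and latent-construct weight are pinned to 0, and a softmax over the alternative-specific utilities yields the choice probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: utilities for three alternatives, with "stay"
# pinned to zero as the reference class for identification.
alts = ["stay", "quit", "quiet quit"]
job_satisfaction = rng.normal(size=5)   # stand-in draws of a latent construct

alphas = np.array([0.0, -0.5, 0.2])     # alternative-specific intercepts
betas = np.array([0.0, -1.2, -0.8])     # latent-construct weights (illustrative)

# utility_ij = alpha_j + beta_j * eta_i  (reference class contributes 0)
utility = alphas + betas * job_satisfaction[:, None]    # shape (5, 3)

# softmax across alternatives -> per-individual choice probabilities
exp_u = np.exp(utility - utility.max(axis=1, keepdims=True))
probs = exp_u / exp_u.sum(axis=1, keepdims=True)
```

Because the reference column of `utility` is identically 0, the remaining coefficients are interpreted relative to `stay`, which is exactly the identification move the prose describes.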

examples/case_studies/bayesian_sem_workflow.myst.md

Lines changed: 8 additions & 6 deletions
@@ -1234,23 +1234,23 @@ plt.suptitle(
 );
 ```
 
-In an applied setting it's these kinds of implications that are crucially important to surface and understand. From a workflow point of view we want to ensure that our modelling drives clarity on these precise points and avoids adding noise generally. This is where parameter recovery exercises can lend assurances and bolster confidence in the findings of empirical work.
+In an applied setting it's these kinds of implications that are crucially important to surface and understand. From a workflow point of view we want to ensure that our modelling drives clarity on these precise points and avoids adding noise generally. If we're assessing a particular hypothesis or aiming to estimate a concrete quantity, the model specification should be robust enough to support those inferences. This is where parameter recovery exercises can lend assurances and bolster confidence in the findings of empirical work. Here we've shown that our model specification will support inferences about a class of particular causal contrasts, i.e. how treatment changes the direct effects of one latent construct on another.
 
-Another way we might interrogate the implications of a model is to see how well it can predict "downstream" outcomes of the implied model. How does job-satisfaction relate to attrition risk?
+Another way we might interrogate the implications of a model is to see how well it can predict "downstream" outcomes of the implied model. How does job-satisfaction relate to attrition risk and approaches to work?
 
 +++
 
 ## Discrete Choice Component
 
-Combining SEM structures with Discrete choice models involves simply adding a an extra likelihood term dependent on the latent factors. HR managers everywhere need to monitor attrition decisions. Often, they conceptualise the rationale for these decisions as being driven by abstract notions of job satisfaction. We now have tools to measure the latent constructs, but can we predict attrition outcomes from these latent predictors?
+Combining SEM structures with discrete choice models involves adding an extra likelihood term dependent on the latent factors. HR managers everywhere need to monitor attrition decisions. Often, they conceptualise the rationale for these decisions as being driven by abstract notions of job satisfaction. We now have tools to measure the latent constructs, but can we predict attrition outcomes from these latent predictors?
 
 Let's include a discrete choice scenario into the SEM model context. We're aiming to predict a categorical decision about whether the employee `quits/stays/quiet-quits` as the result of their job satisfaction, and their view of the utility of work. Again, we'll set this up as a parameter recovery exercise.
 
 ![](dcm_sem.png)
 
 +++
 
-The discrete choice setting is intuitive in this context because we can model the individual's subjective utility of work. This is conceptualised (in rational-choice theory) to determine the choice outcome.
+The discrete choice setting is intuitive in this context because we can model the individual's subjective utility of work as a function of their job satisfaction. This utility measure is conceptualised (in rational-choice theory) to determine the choice outcome.
 
 ```{code-cell} ipython3
 observed_data_discrete = make_sample(cov_matrix, 250, FEATURE_COLUMNS)
@@ -1259,7 +1259,7 @@ coords["obs"] = range(len(observed_data_discrete))
 coords["alts"] = ["stay", "quit", "quiet quit"]
 ```
 
-The modelling is similar to the basic SEM set up, but we've additionally included a multinomial outcome for each of the available alternatives. Note however, that we have no alternative-specific covariates (i.e. price of the choice) since the draws of the latent constructs are fixed predictors for each of the three outcomes. As such we need to constrain one of the alternatives to 0 so it acts as the reference class and allows identification of the coefficient weights for the other alternatives. This is a basic implementation of a discrete choice model where we allow alternative-specific intercept terms to interact with the beta coefficient for each latent construct. In this way we infer how e.g. job satisfaction drives career choice.
+The modelling is similar to the basic SEM setup, but we've additionally included a multinomial outcome for each of the available alternatives. Note, however, that we have no alternative-specific covariates (e.g. the price of each choice) since the draws of the latent constructs are fixed predictors for each of the three outcomes. As such we need to constrain one of the alternatives to 0 so it acts as the reference class and allows identification of the coefficient weights for the other alternatives. This is a basic implementation of a discrete choice model where we allow alternative-specific intercept terms to interact with the beta coefficient for each latent construct. Other variants are possible, but this example will allow us to infer how job satisfaction drives choices about work.
 
 ```{code-cell} ipython3
 def make_discrete_choice_conditional(observed_data, priors, conditional=True):
@@ -1486,7 +1486,9 @@ axs[1].legend();
 
 We can recover the inverse relationship we encoded in the outcomes between job-satisfaction and the choice to stay. This is encouraging.
 
-The "action" in human decision making is often understood to be driven by these hard-to-quantify constructs that determine motivation. SEM with a discrete component offers us a way to model these processes allowing for measurement error between the observables and the latent drivers of choice. Secondly, we are triangulating the values of the system between two sources of observable data. On the one hand, we measure latent constructs in the SEM with a range of survey measures (`JW1`, `JW2`, ... ) but then calibrate the consequences of that measurement against revealed choice data. This is a powerful technique for abstracting over the expressed attitudes of rational agents, and deriving an interpretable representation of the latent attitude in their expressions. These representations can be further calibrated against the observed choices made by the agent. This two-step of information compression and prediction serves to concisely quantify and evaluate the idiosyncratic attitudes of a complex agent. As we iteratively layer-in these constructs in our model development, we come to understand their baseline and interactive effects. This perspective helps us gauge the coherence between attitudes and actions of the agents under study.
+The "action" in human decision making is often understood to be driven by these hard-to-quantify constructs that determine motivation. SEM with a discrete choice component offers us a way to model these processes, while allowing for measurement error between the observables and the latent drivers of choice. Secondly, we are triangulating the values of the system between two sources of observable data. On the one hand, we measure latent constructs in the SEM with a range of survey measures (`JW1`, `JW2`, ... ) but then calibrate the consequences of that measurement against revealed choice data. This is a powerful technique for abstracting over the expressed attitudes of rational agents, and deriving an interpretable representation of the latent attitude in their expressions. These representations are then further calibrated against the observed choices made by the agent.
+
+This two-step of information compression and prediction serves to concisely quantify and evaluate the idiosyncratic attitudes of a complex agent. As we iteratively layer-in these constructs in our model development, we come to understand their baseline and interactive effects. This perspective helps us gauge the coherence between attitudes and actions of the agents under study.
 
 +++
 
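The parameter-recovery framing running through both files can be illustrated with a small maximum-likelihood sketch, using NumPy/SciPy rather than the notebook's PyMC machinery; all parameter values, sample sizes, and names here are invented for illustration. We simulate choices from known utilities in which higher satisfaction lowers the utility of quitting, then recover those coefficients with the reference alternative pinned to 0.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 2000

# Hypothetical data-generating process: one latent-style predictor and
# three alternatives, with "stay" acting as the zero-pinned reference class.
satisfaction = rng.normal(size=n)
true_alpha = np.array([0.0, -0.5, 0.2])
true_beta = np.array([0.0, -1.2, -0.8])   # satisfaction discourages quitting

def choice_probs(alpha, beta, x):
    u = alpha + beta * x[:, None]
    e = np.exp(u - u.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

p = choice_probs(true_alpha, true_beta, satisfaction)
choices = np.array([rng.choice(3, p=row) for row in p])

def neg_log_lik(theta):
    # first alternative's parameters fixed at 0 for identification
    alpha = np.concatenate([[0.0], theta[:2]])
    beta = np.concatenate([[0.0], theta[2:]])
    probs = choice_probs(alpha, beta, satisfaction)
    return -np.log(probs[np.arange(n), choices]).sum()

fit = minimize(neg_log_lik, np.zeros(4), method="BFGS")
beta_hat = np.concatenate([[0.0], fit.x[2:]])
```

If the recovered `beta_hat` lands close to `true_beta`, the specification supports the intended inference; a Bayesian treatment would replace the point estimate with a posterior, but the identification logic is the same.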