Replies: 4 comments
Few thoughts:
Re point 1: If you have n>2 choice options, and you don't want to use accuracy coding, but instead follow an approach where you model one accumulator per choice, as the paper you cite suggests, you could use the We are aiming to increase model coverage quite a bit over the next few months. |
Thank you for your reply. Regarding the first point, I used accuracy coding here because there are 3 possible responses. I was wondering if it is conceptually wrong to use accuracy coding like that and vary the parameters with the emotion (which also becomes the response here). The models do converge.
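For readers following along, accuracy coding for a 3-option task just means collapsing the chosen category into a binary correct/incorrect variable. A minimal sketch with hypothetical trial data (column names are made up for illustration):

```python
import pandas as pd

# Hypothetical trial data: 'emotion' is the stimulus, 'response' the chosen category.
df = pd.DataFrame({
    "emotion":  ["happy", "fear", "neutral", "fear"],
    "response": ["happy", "happy", "neutral", "fear"],
    "rt":       [0.62, 0.81, 0.55, 0.74],
})

# Accuracy coding collapses the three response options into correct (1) vs incorrect (0);
# which wrong emotion was chosen on error trials is lost in the process.
df["correct"] = (df["response"] == df["emotion"]).astype(int)
print(df)
```

The information loss mentioned below is visible here: row 2 (a fear stimulus answered "happy") becomes indistinguishable from a fear stimulus answered "neutral".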
Hi @Lipika-T ,

The core idea behind accuracy coding is that you break responses down into 'correct' and 'incorrect' answers in a binary way. For the way you model parameters, e.g. the bias, you just need to make sure this makes sense logically (having an 'accuracy bias'). If you actually have three options that could be modeled individually, you lose some information. It's neither globally right nor globally wrong; it depends on your case. Whether or not the models converge is only part of the story; the logical question of how to interpret what the parameters mean is the other. Interpretation is a bit more straightforward if one instead models the options explicitly, with associated dedicated accumulators.

Regarding point 4: A trade-off in parameters just couples two (or more) parameters in the posterior. This most often shows up quite directly as high posterior correlations. The idea here is that the setting of some parameter X determines the setting of parameter Y in the posterior, so they can't be analyzed independently; they move together. The degenerate example is that these parameters move exactly on a line, so you can always make up for a setting of X with another exact setting of Y. In that case, you get marginal posterior distributions that are very, very wide, and you can get a wrong idea of whether or not a parameter actually matters in your model. You can usually drop a parameter from the model as a result. If the example is not extremely pathological, sampling (and convergence) can still work, but it's worth checking for this phenomenon regardless.
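A quick illustration of such a trade-off, using made-up samples rather than real posterior draws: two parameters that lie almost exactly on a line are individually wide but jointly tightly constrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "posterior": parameter y is almost determined by x (near-degenerate trade-off).
x = rng.normal(1.0, 0.5, size=4000)           # e.g. samples of one parameter
y = 2.0 - x + rng.normal(0, 0.05, size=4000)  # a second parameter trading off against x

corr = np.corrcoef(x, y)[0, 1]
print(f"posterior correlation: {corr:.3f}")   # strongly negative, near -1

# The marginals look wide even though the combination x + y is pinned down:
print(f"sd(x)={x.std():.3f}  sd(y)={y.std():.3f}  sd(x + y)={(x + y).std():.3f}")
```

In practice you would run the same check on the actual posterior draws, e.g. by inspecting pairwise scatter plots or the correlation matrix of the samples.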
Thank you for your inputs. I will check for these in my data.
Hello,
I am trying to fit a DDM to responses from an emotion categorization task with 3 emotions - Happy, Fear and Neutral. I'm using accuracy coding and I am varying drift rate, threshold and non-decision time with the emotion condition. Is this approach correct for a task where there are 3 possible responses? I read in this paper that in accuracy coding the drift rate should not be varied by the stimulus type.
https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2015.00336/full
The accuracy is also high for this task; is that an issue for accuracy coding?
I also wanted to understand what 'z' means for accuracy coding. I read that it should be fixed to 0.5. I first tried to fit this model keeping 'z' as a free parameter. But for multiple models 'z' was estimated to be around 0.3 - 0.4. What could this mean? These models did converge (r-hat of z < 1.01).
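For intuition about what an estimated z below 0.5 would imply under accuracy coding, here is a small, hypothetical random-walk simulation in plain NumPy (drift and threshold values are made up): with the upper boundary standing for "correct", a starting point below 0.5 shifts responses toward the error boundary before any evidence has been accumulated.

```python
import numpy as np

def upper_fraction(v, a, z, n=20000, dt=1e-3, max_steps=3000, seed=0):
    """Fraction of simulated DDM trials absorbed at the upper ('correct') boundary."""
    rng = np.random.default_rng(seed)
    x = np.full(n, z * a)                  # z is the relative starting point in [0, 1]
    upper = np.zeros(n, dtype=bool)
    active = np.ones(n, dtype=bool)
    for _ in range(max_steps):
        x[active] += v * dt + rng.normal(0.0, np.sqrt(dt), size=int(active.sum()))
        hit_up = active & (x >= a)
        hit_lo = active & (x <= 0.0)
        upper[hit_up] = True
        active &= ~(hit_up | hit_lo)
        if not active.any():
            break
    return upper.mean()

# Same drift and threshold (illustrative values), only the starting point differs:
print(upper_fraction(v=1.0, a=1.5, z=0.5))   # unbiased start
print(upper_fraction(v=1.0, a=1.5, z=0.35))  # start biased toward the error boundary
```

So a freely estimated z of 0.3-0.4 would be read as an a-priori bias toward incorrect responses, which is why it needs a plausible interpretation in an accuracy-coded model.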