Slack:
https://bcccd.slack.com/archives/C02PQ5W9W4T

Tamás Szűcs, Attila Krajcsi
Eötvös Loránd University, Budapest, Department of Cognitive Psychology

Reliability is underreported in the cognitive science literature, and when it is reported, it is often low. The typically low number of trials in cognitive developmental studies makes them especially prone to poor reliability. This is an issue because poor reliability can, for example, attenuate the recoverable correlation coefficient or decrease statistical power. It is therefore important to take reliability into account in studies focusing on individual differences. One way of dealing with poor reliability is to attempt to filter out unexplained variance, or noise, and thereby raise the reliability.
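The attenuation mentioned above follows Spearman's classic correction formula, in which the observed correlation equals the true correlation scaled by the square root of the product of the two measures' reliabilities. A minimal sketch, with hypothetical reliability values chosen purely for illustration:

```python
import math

def attenuated_correlation(r_true, rel_x, rel_y):
    """Observed correlation after attenuation by measurement unreliability
    (Spearman's attenuation formula): r_obs = r_true * sqrt(rel_x * rel_y)."""
    return r_true * math.sqrt(rel_x * rel_y)

# Hypothetical example: a true correlation of 0.50, measured with two
# variables whose reliabilities are 0.60 and 0.70, shrinks to about 0.32.
print(round(attenuated_correlation(0.50, 0.60, 0.70), 3))  # 0.324
```

With perfectly reliable measures (reliability 1.0) the observed and true correlations coincide, which is why low-trial-count designs can substantially underestimate individual-difference effects.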
In the current study, we examined the possibilities of increasing the reliability of variables using two data filtering techniques: (1) filtering out supposedly noisy trials, and (2) attempting to account for unexplained variance by entering additional regressors into a multiple regression model.
We analyzed the reliability of three numerical effects in the number comparison task, measured with different metrics (e.g., reaction times, error rates, and drift rates), and the reliability of the lure discrimination index in the abstract pattern separation task and the mnemonic similarity task. Reliability was calculated using Spearman-Brown corrected even-odd split-half reliability, and additionally with a bootstrapping technique that circumvents some of the known issues of the split-half coefficient.
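The two reliability estimates described above can be sketched as follows. This is a minimal illustration, not the authors' analysis code: the even-odd split correlates per-subject means of even- and odd-indexed trials and applies the Spearman-Brown correction, while the second function averages the corrected coefficient over many random half-splits (one common way to address the arbitrariness of a single fixed split; the exact bootstrapping variant used in the study is not specified here).

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman_brown(r):
    """Step a half-test correlation up to full-test length."""
    return 2 * r / (1 + r)

def even_odd_reliability(trials_per_subject):
    """Split each subject's trials into even- and odd-indexed halves,
    average each half, correlate the halves across subjects, correct."""
    evens = [statistics.mean(t[0::2]) for t in trials_per_subject]
    odds = [statistics.mean(t[1::2]) for t in trials_per_subject]
    return spearman_brown(pearson(evens, odds))

def permuted_split_half(trials_per_subject, n_iter=1000, seed=0):
    """Average the corrected coefficient over many random half-splits."""
    rng = random.Random(seed)
    rs = []
    for _ in range(n_iter):
        half_a, half_b = [], []
        for t in trials_per_subject:
            idx = list(range(len(t)))
            rng.shuffle(idx)
            mid = len(idx) // 2
            half_a.append(statistics.mean(t[i] for i in idx[:mid]))
            half_b.append(statistics.mean(t[i] for i in idx[mid:]))
        rs.append(spearman_brown(pearson(half_a, half_b)))
    return statistics.mean(rs)

# Hypothetical reaction-time data: 5 subjects x 20 trials.
rng = random.Random(1)
data = [[500 + 30 * s + rng.gauss(0, 50) for _ in range(20)]
        for s in range(5)]
print(even_odd_reliability(data))
print(permuted_split_half(data, n_iter=200))
```

Averaging over random splits yields a more stable estimate than any single even-odd split, at the cost of extra computation.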
Surprisingly, our results show that these data filtering techniques do not increase the reliability of the variables; instead, they have very inconsistent effects. Our results suggest that current data filtering practices should be re-evaluated in various areas, including developmental research.
- Session 2, Tuesday, 11 Jan, 07:00 - 08:30 (UTC +0)
- Session 6, Wednesday, 12 Jan, 13:00 - 14:30 (UTC +0)