Circular analysis

In statistics, circular analysis is the selection of the details of a data analysis using the data that is being analysed. It is often referred to as double dipping, as the same data are used twice. Circular analysis unjustifiably inflates the apparent statistical strength of any results reported and, at the most extreme, can lead to an apparently significant result being found in data that consist only of noise. In particular, where an experiment is implemented to study a postulated effect, it is a misuse of statistics to first reduce the complete dataset by selecting a subset of the data in ways that are aligned with the effect being studied. A second misuse occurs where the performance of a fitted model or classification rule is reported as a raw result, without allowing for the effects of model selection and of tuning parameters on the data being analysed.
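
The effect can be demonstrated on simulated data. The following sketch (illustrative only; Python with SciPy is assumed here, and the variable names are made up rather than taken from any cited source) selects, from pure noise, only the observations that already show the postulated positive effect and then tests for that very effect, producing a spuriously significant result:

    # Hypothetical illustration: circular selection makes pure noise look significant.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    noise = rng.standard_normal(1000)          # data that is nothing but noise

    # Circular step: keep only the observations that already "show" the
    # postulated positive effect, then test for that same effect.
    selected = noise[noise > 0]
    t, p = stats.ttest_1samp(selected, 0.0)
    print(f"selected subset: t = {t:.1f}, p = {p:.2e}")   # spuriously tiny p-value

    # An unbiased test on the full dataset finds no effect.
    t_all, p_all = stats.ttest_1samp(noise, 0.0)
    print(f"full dataset:    t = {t_all:.2f}, p = {p_all:.2f}")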

Examples

At its simplest, circular analysis can include the decision to remove outliers after noticing that this might improve the analysis of an experiment. The effect can be more subtle. In functional magnetic resonance imaging (fMRI) data, for example, a considerable amount of pre-processing is often needed, and these steps might be applied incrementally until the analysis 'works'. Similarly, the classifiers used in a multivoxel pattern analysis of fMRI data require parameters, which could be tuned to maximise the classification accuracy.
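
As an illustration of the classifier case (a hypothetical sketch, not the procedure of any particular study; scikit-learn is used here purely for convenience), selecting the "most informative" voxels using the whole dataset and then cross-validating a classifier on those same voxels can yield accuracy well above chance even when the data and labels are random noise, whereas performing the selection inside each cross-validation fold does not:

    # Hypothetical "double dipping" demonstration with random data and labels.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((40, 5000))   # e.g. 40 scans x 5000 voxels of pure noise
    y = rng.integers(0, 2, 40)            # random labels: true accuracy is 50%

    # Circular: choose the 20 "best" voxels using ALL the data,
    # then cross-validate a classifier on those same voxels.
    X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
    inflated = cross_val_score(LinearSVC(), X_sel, y, cv=5).mean()

    # Non-circular: do the voxel selection inside each cross-validation fold,
    # so the held-out fold never influences the selection.
    pipe = make_pipeline(SelectKBest(f_classif, k=20), LinearSVC())
    honest = cross_val_score(pipe, X, y, cv=5).mean()

    print(f"selection on all data: {inflated:.2f}")   # well above chance
    print(f"selection inside CV:   {honest:.2f}")     # close to 0.5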

In geology, the potential for circular analysis has been noted[1] in the case of maps of geological faults: such maps may be drawn on the assumption that faults develop and propagate in a particular way, and may later be used as evidence that faults do in fact develop in that way.

Solutions

Careful design of the analysis, before any data are collected, ensures that the choice of analysis is not affected by the data collected. Alternatively, one might decide to perfect the classification on one or two participants, and then apply that analysis to the remaining participants' data. Regarding the selection of classification parameters, a common method is to divide the data into two sets, find the optimum parameter value using one set, and then test with that parameter value on the second set. This is a standard technique[citation needed] used (for example) by the Princeton MVPA classification library.[2]
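
A minimal sketch of this split-and-tune approach follows, using scikit-learn as a generic stand-in (not the Princeton MVPA toolbox cited above) and made-up data sizes: the classifier parameter is chosen on one half of the data, and the reported accuracy comes only from the other half.

    # Hypothetical sketch: tune on one half of the data, report accuracy on the other.
    import numpy as np
    from sklearn.model_selection import train_test_split, GridSearchCV
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    y = rng.integers(0, 2, 200)

    # Hold out half of the data before any tuning takes place.
    X_tune, X_test, y_tune, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0)

    # Choose the regularisation parameter C using only the tuning set.
    search = GridSearchCV(LinearSVC(), {"C": [0.01, 0.1, 1, 10]}, cv=5)
    search.fit(X_tune, y_tune)

    # Report performance only on data that the parameter choice never saw.
    print("chosen C:", search.best_params_["C"])
    print("held-out accuracy:", search.score(X_test, y_test))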

Notes

  1. ^ Scott, D. L.; Braun, J.; Etheridge, M. A. (1994). "Dip analysis as a tool for estimating regional kinematics in extensional terranes". Journal of Structural Geology. 16 (3): 393. doi:10.1016/0191-8141(94)90043-4.
  2. ^ "Princeton Multi-Voxel Pattern Analysis (MVPA) Toolbox | Neuroscience". pni.princeton.edu. Retrieved 2019-07-23.
