Your point about not all data being equal is well taken. Findings from larger studies, and from studies with better treatment fidelity and other controls, should in general be taken more seriously. You also raised an interesting question: does researcher allegiance influence outcome? Although intuitively it seems that it would, this is precisely why we have a tradition of introducing controls to minimize bias. Regarding the published EMDR studies, several come to mind in which there appears to be evidence relevant to this issue:

1. In the Devilly and Spence study, the very high dropout rate in the EMDR condition - an outlier compared to the rest of the literature - occurred almost entirely before the EMDR treatment began. Normally, most dropout occurs once the participant has had a taste of the treatment and found it unappetizing. I cannot help suspecting that EMDR was presented in a non-standard and off-putting manner in this study. Note that the EMDR outcomes were also particularly poor here, and not consistent with other findings in the literature (not even consistent with studies in which CBT may have outperformed EMDR). Treatment fidelity was also an issue in this study, but the dropout pattern in particular suggests that bias may have affected outcome - the very thing you were concerned about.

2. In the Carlson et al study, EMDR was selected as a "sham treatment" control condition. The researchers intended to test their relaxation/biofeedback approach and, in addition to a waitlist, wanted a control condition that would lead participants to believe they were receiving a credible alternative treatment. In this case the bias was explicitly against EMDR, yet EMDR outperformed the preferred treatment. Here the evidence suggests that bias did not affect outcome.

3. In the Edmunds and Rubin study comparing EMDR to eclectic treatment for adult survivors of sexual abuse, the researchers were explicitly concerned with this issue. Although the same four therapists conducted the treatment for both groups, three of them had not heard of EMDR before being trained for the study and indicated varying degrees of skepticism. This controlled somewhat for bias, and their skepticism turned out to be unrelated to their effectiveness.

4. In the Lee et al, Powers et al, and Ironson et al studies (and perhaps others), my understanding is that the therapists had first been trained in the CBT approach, and thus might be considered to have a primary allegiance to it; they were trained in EMDR later. Maybe their allegiance changed, maybe not - how can we know? Furthermore, even though EMDR came out better (in certain respects) in these studies, the CBT approach also performed very well, consistent with outcomes reported by the proponents of those approaches. In these studies, then, I think it is most reasonable to assume that both treatments were conducted appropriately and that researcher bias did not influence outcome.

In conclusion, I think this is an important issue to be aware of. On the other hand, I think the available evidence is sufficient to preclude discounting all studies in which EMDR looks good as primarily the result of bias. Furthermore, I think it is inappropriate to conclude that just because someone reports that EMDR works better than something else, they must be biased toward EMDR or have some vested interest. It is entirely possible that their bias is for empiricism and for progress in developing effective treatments. We should be alert for bias, but not biased in assuming that it is the driving force in every case.
Copyright © 1996-2004 Behavior OnLine, Inc. All rights reserved.