
Illness Perceptions Predict Cognitive Performance Validity

Published online by Cambridge University Press:  29 April 2018

George K. Henry*
Affiliation:
David Geffen School of Medicine at UCLA, Los Angeles, California
Robert L. Heilbronner
Affiliation:
Chicago Neuropsychology Group, Chicago, Illinois; Feinberg School of Medicine at Northwestern University, Chicago, Illinois
Julie Suhr
Affiliation:
Ohio University, Athens, Ohio
Jeffrey Gornbein
Affiliation:
David Geffen School of Medicine at UCLA, Los Angeles, California
Eveleigh Wagner
Affiliation:
Vanderbilt University Medical Center, Nashville, Tennessee
Daniel L. Drane
Affiliation:
Emory University School of Medicine, Atlanta, Georgia
*
Correspondence and reprint requests to: George K. Henry, 11601 Wilshire Blvd., 5th Floor, Los Angeles, California, 90025. E-mail: GHenry0249@aol.com

Abstract

Objectives: The aim of this study was to investigate the relationship of psychological variables to cognitive performance validity test (PVT) results in mixed forensic and nonforensic clinical samples. Methods: Participants included 183 adults who underwent comprehensive neuropsychological examination. Criterion groups (Credible Group and Noncredible Group) were formed based upon performance on the Word Memory Test and other stand-alone and embedded PVT measures. Results: Multivariate logistic regression analysis identified three significant predictors of cognitive performance validity: two psychological constructs, Cogniphobia (the perception that cognitive effort will exacerbate neurological symptoms) and Symptom Identity (the perception that current symptoms are the result of illness or injury), and one contextual factor (forensic context). While there was no interaction between these factors, elevated scores were most often observed in the forensic sample, suggesting that these independently contributing intrinsic psychological factors are more likely to occur in a forensic environment. Conclusions: Illness perceptions were significant predictors of cognitive performance validity, particularly when they reached very elevated levels. Extreme elevations were more common among participants in the forensic sample, and potential reasons for this pattern are explored. (JINS, 2018, 24, 1–11)

Type
Research Articles
Copyright
Copyright © The International Neuropsychological Society 2018 

INTRODUCTION

Evaluating the validity of the psychometric database is critical in neuropsychological assessment and typically relies upon results of both performance validity testing (PVT) and symptom validity testing (SVT). PVT uses both stand-alone and embedded measures such as the Word Memory Test (WMT; Green, 2005) and Reliable Digit Span (RDS; Greiffenstein, Baker, & Gola, 1994). The validity of examinee self-report (symptom validity) is typically assessed via self-report measures such as the Minnesota Multiphasic Personality Inventory-2-Restructured Form validity scales (MMPI-2-RF; Ben-Porath & Tellegen, 2008). The relationship between PVT and SVT is modest at best (Haggerty, Frazier, Busch, & Naugle, 2007), with some studies showing self-report measures, such as the MMPI-2-RF, to explain much of the variance in PVT performance (Martin, Schroeder, Heinrichs, & Baade, 2015; Peck et al., 2013), whereas others have not found this relationship (Van Dyke, Millis, Axelrod, & Hanks, 2013).

The relationship between PVT and SVT may vary by context. In general, studies of personal injury litigants and disability claimants with external financial incentives who fail PVT (Gervais, Wygant, Sellbom, & Ben-Porath, 2011; Nguyen, Green, & Barr, 2015; Schroeder et al., 2012; Tarescavage, Wygant, Gervais, & Ben-Porath, 2012; Youngjohn, Wershba, Stevenson, Sturgeon, & Thomas, 2011; Wygant et al., 2009) show a relationship between PVT performance and over-reporting of somatic and cognitive symptoms as measured by the MMPI-2-RF Response Bias Scale (RBS; Gervais, Ben-Porath, Wygant, & Green, 2007) and the Symptom Validity Scale (FBS-r).

In contrast, criminal defendants who fail PVT (Wygant et al., 2010; Sellbom, Toomey, Wygant, Kucharski, & Duncan, 2010) tend to over-report psychiatric symptoms, as indicated by the Infrequent Responses Scale (F-r) and the Infrequent Psychopathology Responses Scale (Fp-r). A recent meta-analysis of the MMPI-2-RF over-reporting scales showed large effect sizes for detecting noncredible neurocognitive dysfunction (Ingram & Ternes, 2016). A more recent, empirically derived embedded measure of symptom over-reporting developed for the MMPI-2-RF, the 11-item Henry-Heilbronner Index-r (HHI-r; Henry, Heilbronner, Algina, & Kaya, 2013), has also been shown to identify symptom exaggeration in personal injury litigants and disability claimants.

Adequate performance within one validity domain does not automatically ensure adequate performance in the other (Lees-Haley, Iverson, Lange, Fox, & Allen, 2002; Sweet, Condit, & Nelson, 2008; Van Dyke et al., 2013). Individuals who pass both PVT and SVT are classified as producing valid test results, while individuals who fail both may be described as producing invalid test results. Test results may also be of mixed validity, where examinees fail PVT but pass SVT, or vice versa. Under these hybrid scenarios, a portion of the test results may be considered valid and the other portion invalid (Van Dyke et al., 2013). While base rates for performance invalidity vary from 30% to 54% across forensic contexts (Ardolf, Denney, & Houston, 2007; Larrabee, 2003; Mittenberg, Patton, Canyock, & Condit, 2002), performance invalidity ranging from 11% to 48% is also seen in non-forensic or clinical contexts with no known external incentives (An, Zakzanis, & Joordens, 2012; DeRight & Jorgensen, 2015; Forbey & Lee, 2011; Forbey, Lee, Ben-Porath, Arbisi, & Gartland, 2013; Kemp et al., 2009; Schroeder & Marshall, 2011; Silk-Eglit et al., 2014).

There may be multiple reasons for performance invalidity. Historically, PVT failure in a forensic context with external incentives has been attributed to malingering, while failure in non-forensic contexts with no known external incentives has been attributed to a variety of other factors, including mood, sleep deprivation, and fatigue (Vaquez-Justo, Alvarez, & Otero, 2003); physical, emotional, or sexual abuse (Williamson, Holsman, Clayton, Miller, & Drane, 2012); chronic pain, somatoform disorders, and medically unexplained illnesses (Drane et al., 2006; Johnson, 2008; Kemp et al., 2009; Lamberty, 2008; Suhr, 2003); interictal epileptiform activity and recent seizures (Drane et al., 2016; Loring, Lee, & Meador, 2005); low intelligence (Dean, Victor, Boone, & Arnold, 2008); and dementia (Dean, Victor, Boone, Philpott, & Hess, 2009).

A survey of 316 North American neuropsychologists (Martin, Schroeder, & Odland, 2015) revealed that the most frequent interpretation of PVT failure by examinees in a forensic context (e.g., personal injury, disability claimants, criminal) was malingering, whereas PVT failure in a clinical non-forensic context was attributed to non-somatoform psychiatric issues. A recent selective survey of 24 board-certified North American neuropsychologists with expertise in PVT (Martin, Schroeder, & Odland, 2016) showed the majority believed that test invalidity most often resulted from malingering in a forensic context, versus somatoform/conversion disorder in non-forensic settings.

Lately, there has been increasing reluctance among neuropsychologists to diagnose malingering when PVTs are failed in a forensic context. Martin and colleagues (2015) advise, “Base rates and other relevant literature, examinee characteristics, and the nature and extent of any convergent evidence should be examined when offering explanations for validity test failure” (p. 14). In short, there is a growing interest in achieving a better understanding of both contextual and subject variables that affect performance invalidity. The present study examined several “intrinsic” or subjective psychological factors and their potential role in explaining PVT performance.

Intrinsic Factors

Self-efficacy, suggestibility, dissociation, and illness perceptions, including symptom identity, illness consequences, psychological effects of illness, and cogniphobia, represent just a few intrinsic psychological factors that may influence symptom expression and cognitive test performance. Self-efficacy is the belief, or lack thereof, that one has the capacity to perform successfully at a certain task, and to estimate how much effort will be required for successful performance. According to social cognitive theory (Bandura, 1997), individuals with high self-efficacy would be expected to have a higher probability of performance success, and vice versa. Thus, a person who believes they have an illness or injury that is responsible for their poor cognition (low self-efficacy) might not only perform more poorly on cognitive tests, but also be at greater risk of failing PVTs.

Suggestibility is a tendency to be easily influenced by what we see and hear in the world around us. According to Delis and Wetter (2007), “highly suggestible individuals may be especially prone to exaggerate cognitive dysfunction particularly in a context that reinforces a belief in those deficits” (p. 592). Highly suggestible individuals may also adopt a selective attentional bias (Mittenberg, DiGiulio, Perrin, & Bass, 1992), causing them to “overly focus on common cognitive difficulties, interpret them as reflecting significant brain dysfunction, and possibly acting out these deficits in their daily lives or during the assessment process” (Delis & Wetter, 2007, p. 592). Thus, high suggestibility may be associated with a greater probability of failing PVT.

Dissociation is a change in normal consciousness or awareness arising from reduced or altered access to one’s thoughts, feelings, perceptions, and/or memories, and likely exists along a continuum. Research suggests dissociative tendencies may negatively affect basic cognitive processes including memory, attention, and executive functions (Amrhein, Hengmith, Maragkos, & Hennig-Fast, 2008; DePrince & Freyd, 1999; Freyd, Martorello, Alvarado, Hayes, & Christman, 1998). Thus, individuals with a greater tendency to dissociate may perform more poorly on cognitive tests and PVTs.

Symptom identity refers to the extent to which patients endorse symptoms as relevant to their current illness or injury. Thus, a person with a strong symptom identity might not only endorse a high number of symptoms, but also attribute the majority of those symptoms to a remote illness or injury. Research suggests that individuals attributing high base rate symptoms to a remote mild traumatic brain injury (TBI), rather than to other causes, are more likely to report greater overall symptom severity (Belanger, Barwick, Kip, Kretzmer, & Vanderploeg, 2013).

Illness perceptions, that is, thoughts and beliefs about one’s diagnosis, are influenced by cultural, institutional, social, and personal factors. Furthermore, individuals tend to assume their perceptions are true and based upon credible information (Fragale & Heath, 2004). Thus, symptom maintenance may be partially reinforced by well-intended healthcare providers who inform such patients that their symptoms may be permanent, that is, an iatrogenic effect. In contrast, reassurance, education, and discussions of expected recovery time have been shown to be effective for reducing both the magnitude and duration of post-concussion syndrome (PCS) complaints (Mittenberg, Tremont, Zielinski, Fichera, & Rayls, 1996). There is some evidence that health beliefs may influence symptom reporting (Diefenbach & Leventhal, 1996), as individuals with mild TBI who believe that their symptoms will persist and have negative consequences report significantly more post-concussive symptoms at three months (Whittaker, Kemp, & House, 2007) and six months post-injury (Snell, Hay-Smith, Surgenor, & Siegert, 2013).

Another factor that may influence the formation of illness perceptions is Cogniphobia (Suhr & Spickard, 2012), a construct that arose from research documenting the effects of pain-related fear and avoidance (kinesiophobia) on health outcomes for individuals with chronic pain disorders, particularly with regard to avoidance of physical activity (Crombez, Verbaet, Lysens, Baeyens, & Eelen, 1998). This construct was expanded into the cognitive realm, tapping patient beliefs that engagement in mental activities demanding greater cognitive effort may actually exacerbate their neurological condition (Martelli, Zasler, Grayson, & Liljedahl, 1999). Suhr and Spickard (2012) showed that Cogniphobia was related to diminished performance on sustained attention tasks, PVTs, and pain pressure thresholds in individuals with chronic headache. Individuals high in Cogniphobia might therefore be expected to perform worse on PVTs.

Hypotheses

Given evidence for the potential impact of intrinsic factors on the evolution of symptom complaints and task performance, we examined the relationship of self-efficacy, suggestibility, dissociation, symptom identity, and illness perceptions to PVT performance in a mixed forensic and nonforensic clinical sample. Given the known relationship between SVT and PVT performance, we included over-reporting validity scales from the MMPI-2-RF and the HHI-r. We predicted that not only context, but also intrinsic factors (e.g., self-efficacy, suggestibility, dissociative tendency, symptom identity, and illness perceptions including consequences, psychological effects, and cogniphobia) as well as MMPI-2-RF validity scales, especially the RBS, FBS-r and HHI-r, would be predictors of PVT performance.

METHODS

Background

This multi-site, 5-year study involving four neuropsychological laboratories included 198 consecutive referrals of adults ages 18–85 years who underwent comprehensive neuropsychological examination from 2010 to 2015. The research was completed in accordance with the Helsinki Declaration, and informed consent was obtained before study inclusion. Fifteen participants were omitted due to the presence of interictal discharges during the evaluation, as determined by simultaneous ambulatory electroencephalogram (EEG) recordings or video-EEG monitoring, because such activity has been shown to affect PVT performance (Drane et al., 2016; Williamson et al., 2005). The final sample of 183 participants comprised private clinical patients (n=52), personal injury litigants (n=40), disability claimants (n=41), and a mixture of university students and community members (n=50) seeking neuropsychological evaluation for complaints associated with a medical condition as part of a research project. The most frequent diagnosis among the personal injury litigants was mild traumatic brain injury (47.5%), while the most frequent diagnosis among the disability claimants was major depression (29.3%).

Performance Validity Measures

All 183 participants were administered the Word Memory Test (WMT; Green, 2005), while a subsample of 81 examinees was also administered one additional stand-alone measure of cognitive performance validity, either the Test of Memory Malingering (TOMM; Tombaugh, 1996) or the Victoria Symptom Validity Test (VSVT; Slick, Hopp, Strauss, & Thompson, 1997). Additional embedded performance validity measures were also administered, but differed among the four laboratories based upon each laboratory’s specific clinical practice and research needs. These included Reliable Digit Span (RDS; Greiffenstein et al., 1994), failure to maintain set from the Wisconsin Card Sorting Test (Greve, Heinly, Bianchini, & Love, 2009), the Rey Auditory Verbal Learning Test (RAVLT) Delayed Recognition trial (Boone, Lu, & Wen, 2005), the Effort Index (Barash, Suhr, & Manzel, 2004), the California Verbal Learning Test-2nd Edition (CVLT-2) Forced Choice Recognition (Delis, Kramer, Kaplan, & Ober, 2000), the CVLT-2 Effort Equation (Wolfe et al., 2010), and the Rey 15-Item Memory Recognition Procedure (Boone, Salazar, Lu, Warner-Chacon, & Razani, 2002).

Traditional Symptom Validity Measures

Self-report symptom validity was assessed via the MMPI-2-RF. We included the five over-reporting validity scales of the MMPI-2-RF (F-r, Fp-r, Fs, FBS-r, RBS), as well as the HHI-r (Henry et al., 2013).

Psychological Scales and Questionnaires

All participants were administered psychological measures to assess self-efficacy, suggestibility, dissociation, symptom identity, and illness perceptions including consequences and cogniphobia. Self-efficacy was measured via the Self-Efficacy Scale (SES; Schwarzer & Jerusalem, 1995), a 10-item self-report measure with scores ranging from 0 to 40 (higher scores representing greater self-efficacy). Suggestibility was examined via the 21-item Short Suggestibility Scale (SSS; Kotor, Bellman, & Watson, 2004); scores range from 0 to 84, with higher scores representing greater suggestibility. Dissociation was evaluated via the 28-item Dissociative Experiences Scale (DES; Bernstein & Putnam, 1986), with scores ranging from 0 to 112 (higher scores representing a greater tendency to dissociate when not under the influence of alcohol or other drugs).

Symptom Identity was quantified using an adapted version of the Illness Perception Questionnaire-Revised (IPQ-R; Moss-Morris et al., 2002), which is commonly used to assess illness perceptions consistent with the health beliefs model. The IPQ-R is designed to be revised so that the symptomatic items are consistent with the illness/disorder of interest. Therefore, the Symptom Identity Scale, which asks participants to respond yes/no as to whether they have experienced a symptom since their illness/injury onset, and yes/no as to whether they believe the symptom is related to their illness/injury, included 31 items likely to be associated with neuropsychological presentations. The score for the 31 Symptom Identity Scale items was the total number of “yes” endorsements of symptoms that participants believed were related to their illness/injury.
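As a concrete illustration of this scoring rule, the following minimal sketch (not the authors' scoring code; the data structure and function name are hypothetical) counts only the symptoms a respondent attributes to the illness/injury:

```python
# Hypothetical sketch of the Symptom Identity scoring rule described above.
# Each of the 31 items yields two yes/no responses: experienced since onset,
# and believed to be related to the illness/injury. Only "related" endorsements
# are counted toward the total score (possible range 0-31).

def symptom_identity_score(items):
    """items: iterable of (experienced, related) boolean pairs, one per item."""
    return sum(1 for experienced, related in items if related)

# Example: 5 symptoms attributed to the illness/injury, 10 experienced but not
# attributed, 16 not experienced -> score of 5.
responses = [(True, True)] * 5 + [(True, False)] * 10 + [(False, False)] * 16
print(symptom_identity_score(responses))  # 5
```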

Illness perceptions were also measured using two constructs from the IPQ-R: (1) Consequences (the perception that the consequences of one’s illness or injury are severe) and (2) Psychological Effects (the perception that one’s illness or injury has significant emotional and psychological effects on day-to-day functioning). We added to those two constructs five additional items that assessed Cogniphobia (the perception that cognitive effort will exacerbate neurologic symptoms) in a Likert scale format, with higher scores indicative of greater Cogniphobia. The Cogniphobia items were initially developed by Todd, Martelli, and Grayson (1998) for use in headache and were further tested by Suhr and Spickard (2012) in a headache population, where they were found to be internally consistent and to relate to other measures of pain catastrophizing and pain dangerousness, as well as to worse performance on sustained attention tasks, a performance validity measure (suggesting poor effort), and lower pain pressure thresholds on the head.

Word Memory Test and Gatekeeping

Given that all 183 participants were administered the WMT, we used scores on the WMT as a “gatekeeper” to initially sort examinees into a Credible Group (CG: pass WMT) or a Noncredible Group (NCG: fail WMT). The WMT has the added benefit of eliminating false positive classification errors by identifying examinees who produce a Genuine Memory Impairment Profile (GMIP), which is typically associated with dementia syndromes; no examinee displayed a GMIP. The Noncredible Group (NCG, n=72) was further refined by including only participants who, in addition to failing the WMT, also failed one additional stand-alone or embedded cognitive performance validity measure. The failure rates for all cognitive performance validity measures are summarized in Table 1.

Table 1 Cognitive performance validity failure rates for the noncredible group (n=72)

Note: Cutscores in parentheses.

IR=Immediate Recall, DR=Delayed Recall, CNS=Consistency of Recall; WCST=Wisconsin Card Sorting Test; RAVLT=Rey Auditory Verbal Learning Test; CVLT-2 LDFCR=California Verbal Learning Test-2nd Edition Long Delay Forced Choice Recognition.

The Credible Group (CG, n=111) comprised all participants who passed the WMT and did not fail any embedded or additional stand-alone PVT. Seven participants in the CG who passed the WMT but failed one embedded measure of performance validity were assigned an “ambiguous” credibility status and removed from the CG for the final analyses. The participants in the NCG fell into the following broad diagnostic categories: neurologic (54.2%), psychiatric (32.8%), and general medical (13%), while participants in the CG were 67.6% neurologic, 26.6% general medical, and 6.8% psychiatric. The two groups did not differ significantly on age (p=.13), years of education (p=.08), or gender (p=.07), but did differ on ethnicity (p=.01). Group demographics are presented in Table 2.
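For clarity, the group-assignment logic described in this and the preceding paragraph can be summarized in the following sketch (hypothetical variable names; cases not explicitly described in the text are labeled as such):

```python
# Hypothetical summary of the criterion-group assignment described in the text.
# wmt_pass: whether the examinee passed the Word Memory Test (the "gatekeeper");
# other_pvt_failures: number of additional stand-alone or embedded PVTs failed.

def assign_group(wmt_pass, other_pvt_failures):
    if not wmt_pass and other_pvt_failures >= 1:
        return "Noncredible"   # failed WMT plus at least one additional PVT
    if wmt_pass and other_pvt_failures == 0:
        return "Credible"      # passed WMT and all other PVTs administered
    if wmt_pass and other_pvt_failures >= 1:
        return "Ambiguous"     # passed WMT but failed an embedded PVT; excluded
    return "Unclassified"      # failed WMT only; handling not described in the text

print(assign_group(wmt_pass=False, other_pvt_failures=2))  # Noncredible
```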

Table 2 Demographic and participant characteristics for the credible and noncredible groups

The composition of specific diagnoses within each diagnostic category is presented in Table 3. Participants were diagnosed with mild traumatic brain injury according to published criteria by the American Congress of Rehabilitation Medicine (1993). Epilepsy and psychogenic nonepileptic seizures were diagnosed on the basis of video-EEG monitoring at a tertiary care epilepsy center. Their neuropsychological evaluations were completed on the monitoring unit while undergoing video-EEG or as an outpatient with simultaneous ambulatory EEG.

Table 3 Diagnostic classes and diagnoses for the credible and noncredible groups

Note: TBI=traumatic brain injury; PNES=psychogenic nonepileptic seizures; PTSD=post-traumatic stress disorder; GAD=generalized anxiety disorder; COPD=chronic obstructive pulmonary disease; CFS=chronic fatigue syndrome; OSA=obstructive sleep apnea; Szs=seizures.

Data Analysis

Data were checked for invalid response sets and outliers. No participants were removed from data analysis due to an MMPI-2-RF invalid response set (TRIN or VRIN >80T; >15 omissions), and there were no outliers. We first conducted bivariate logistic regressions, examining each predictor one at a time, without controlling for the other 13 variables. Linearity between each continuous predictor and the noncredible log odds (logit) was assessed using restricted cubic splines. We then followed up the bivariate logistic regressions with a backward stepwise multivariate (multivariable) logistic regression, simultaneously considering all 14 potential predictor variables: 1 categorical variable (forensic context) and 13 continuous predictors, comprising the 5 MMPI-2-RF over-reporting validity scales (F-r, Fp-r, Fs, FBS-r, RBS), the HHI-r, and 7 “intrinsic” psychological variables (self-efficacy, suggestibility, dissociation, and the illness perception measures of consequences, psychological effects, Symptom Identity, and Cogniphobia).

The model was also modified to use dichotomized FBS-r (>17, equivalent to 80T in the MMPI-2-RF normative sample) and Symptom Identity (>24) in place of their continuous versions to account for nonlinearity. A p<.05 variable retention criterion was used. In addition, a gradient boost regression (Tutz & Binder, 2006) was carried out using the same candidates as an alternative strategy for variable selection. Correlations and variance inflation factors were examined among the 14 predictor variables to evaluate collinearity. Model accuracy statistics [area under the receiver operating characteristic (ROC) curve, sensitivity, specificity, and accuracy, defined as the average of sensitivity and specificity] are reported. The R Foundation for Statistical Computing software, version 3.2.2, was used for most computations.
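As an illustration of the regression workflow described above (a hedged sketch, not the authors' R code; the data frame, column names, and probability cut point are hypothetical, and the spline and boosting steps are omitted), the bivariate screening, backward elimination with a p<.05 retention criterion, and the accuracy statistics could be implemented as follows:

```python
# Illustrative sketch of the analysis pipeline (not the authors' code):
# (1) bivariate logistic regressions, one predictor at a time;
# (2) backward stepwise multivariable logistic regression (p < .05 retention);
# (3) accuracy statistics as defined in the text (accuracy = mean of sens/spec).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def bivariate_logits(df, outcome, predictors):
    rows = []
    for p in predictors:
        fit = sm.Logit(df[outcome], sm.add_constant(df[[p]])).fit(disp=0)
        rows.append({"predictor": p,
                     "OR": np.exp(fit.params[p]),   # odds ratio per one-unit increase
                     "p_value": fit.pvalues[p]})
    return pd.DataFrame(rows)

def backward_stepwise(df, outcome, predictors, alpha=0.05):
    kept = list(predictors)
    while kept:
        fit = sm.Logit(df[outcome], sm.add_constant(df[kept])).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        if pvals.max() <= alpha:        # all remaining predictors significant
            return fit
        kept.remove(pvals.idxmax())     # drop the least significant predictor
    return None

def accuracy_stats(y_true, y_prob, cut=0.5):
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= cut).astype(int)
    sens = np.mean(y_pred[y_true == 1] == 1)   # sensitivity (noncredible cases)
    spec = np.mean(y_pred[y_true == 0] == 0)   # specificity (credible cases)
    return {"AUC": roc_auc_score(y_true, y_prob),
            "sensitivity": sens, "specificity": spec,
            "accuracy": (sens + spec) / 2}
```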

RESULTS

Bivariate and Multivariate Logistic Regression

Table 4 shows the results of the bivariate logistic regressions. The odds ratio (OR) is reported for a one-unit increase in each predictor except FBS-r (raw) and Symptom Identity. For example, the OR of 1.30 for Cogniphobia indicates that the odds (not the risk) of PVT failure (vs. no failure) increase 1.3 times, on average, for every one-unit increase in the raw Cogniphobia score.
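As a worked illustration of this interpretation (the OR of 1.30 is from Table 4; the 5-point extrapolation is ours), the odds ratio is the exponentiated logistic regression coefficient, so multi-unit changes compound multiplicatively:

$$\mathrm{OR} = e^{\beta}, \qquad \frac{\text{odds}(x+k)}{\text{odds}(x)} = e^{k\beta} = \mathrm{OR}^{k}$$

With OR = 1.30 for Cogniphobia, for example, a 5-point increase in the raw score corresponds to roughly $1.30^{5} \approx 3.7$ times the odds of PVT failure, other things being equal.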

Table 4 Bivariate OR from logistic regression using one predictor at a time

Note: F-r=Infrequent Responses Scale; Fp-r=Infrequent Psychopathology Responses Scale; Fs=Infrequent Somatic Responses Scale; FBS-r=Symptom Validity Scale; RBS=Response Bias Scale; HHI-r=Henry-Heilbronner Index-r; SES10=Self-Efficacy Scale; SSS21=Short Suggestibility Scale; DES=Dissociative Experiences Scale; IBQ=Illness Beliefs Questionnaire; IPQ-RN=Illness Perception Questionnaire-Revised Neurologic; OR=odds ratio for a one-unit increase in the variable; Lower and Upper=95% confidence interval bounds.

In the bivariate logistic regressions, 13 of the 14 variables were significant predictors of PVT status; only the Short Suggestibility Scale was not significant. One predictor, the Self-Efficacy Scale, was associated with an OR less than 1, indicating that as scores declined the rate of PVT failure increased. All other predictors in the bivariate analysis were associated with ORs greater than 1, indicating that as scores on the psychological predictors increased the rate of PVT failure also increased. Two of the 13 continuous predictor variables, FBS-r (raw) and the Symptom Identity Scale, were dichotomized because they were found to have a nonlinear relation with the noncredible logit.

The multivariate logistic regression identified a final set of two psychological variables and one context variable as simultaneously significant predictors of noncredible PVT performance [area under the ROC curve (AUC)=0.812]. In decreasing order of significance, the three predictors were Cogniphobia (OR=1.21, p=.0014), Symptom Identity ≥24 (OR=11.6; p=.0018), and forensic context (OR=2.84; p=.0102) (see Table 5). The gradient boost regression also gave the same results.

Table 5 Multivariable Logistic Model for the Noncredible Versus Credible Group Using 3 Psychological Predictors as Candidates

Note: *p-Value significant at <.01. ROC AUC=0.812 (the concordance or C statistic, a measure of accuracy); AIC=174.1 (Akaike Information Criterion, a measure of the relative quality of statistical models for a given set of data); sensitivity=60.2%, specificity=88%, accuracy=74.1%, n=183. OR=odds ratio for a one-unit increase in the variable; Lower and Upper=95% confidence interval bounds; IBQ=Illness Beliefs Questionnaire; IPQ-RN=Illness Perception Questionnaire-Revised Neurologic.

The model indicates that for every one-unit increase in the raw Cogniphobia score, the odds of PVT failure increase 1.21 times, controlling for Symptom Identity and forensic status. For Symptom Identity, the OR of 11.60 indicates that examinees with a Symptom Identity score ≥24 have more than 11-fold greater odds of PVT failure than those with a Symptom Identity score <24, that is, a “threshold” effect. The odds of PVT failure were 2.84 times greater in examinees tested under conditions of external financial incentive (forensic context; 54.3% failed) than in the non-forensic context (19.2% failed).
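A minimal sketch of how these reported odds ratios combine in the fitted logistic model is shown below; the intercept is not reported in the article, so the value used here is a placeholder, and only relative (ratio) comparisons are meaningful.

```python
import math

# Reported odds ratios from the final multivariable model (Table 5).
OR_COGNIPHOBIA = 1.21   # per one-unit increase in raw Cogniphobia score
OR_SYMPTOM_ID = 11.60   # Symptom Identity >= 24 vs. < 24
OR_FORENSIC = 2.84      # forensic vs. non-forensic context

def predicted_odds(cogniphobia, symptom_identity_high, forensic, beta0=-3.0):
    """Odds of noncredible PVT performance; beta0 is a placeholder intercept."""
    log_odds = (beta0
                + math.log(OR_COGNIPHOBIA) * cogniphobia
                + math.log(OR_SYMPTOM_ID) * int(symptom_identity_high)
                + math.log(OR_FORENSIC) * int(forensic))
    return math.exp(log_odds)

# The ratio of odds does not depend on the placeholder intercept: an examinee
# with Symptom Identity >= 24 has 11.6 times the odds of an otherwise identical
# examinee with Symptom Identity < 24.
print(predicted_odds(15, True, True) / predicted_odds(15, False, True))  # ~11.6
```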

Forensic participants scored significantly higher on Cogniphobia (M=16.94±4.07) than clinical patients (M=14.57±3.76; p<.001). A Cogniphobia raw score ≥19 was associated with specificity ≥.90. Thirty-two of 172 participants (11 of the 183 were missing data) scored ≥19. Most high scorers (n=24, or 75%) were forensic participants, significantly more than the number of high scorers who were clinical patients (n=8; 25%) according to the exact binomial test. Forensic participants also scored significantly higher on Symptom Identity (M=18.75±7.04) than clinical patients (M=11.91±6.70; p<.001). A Symptom Identity raw score ≥24 was associated with specificity ≥.90. Twenty-seven of 167 participants (16 of the 183 were missing data) scored ≥24. Most (n=24; 89%) were forensic participants, significantly more than the number who were clinical patients (n=3; 11%) according to the exact binomial test.
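The exact binomial comparisons referenced here can be reproduced in outline as follows (a sketch only; the null proportion of 0.5, i.e., an even forensic/clinical split, is our assumption about the comparison being made):

```python
from scipy.stats import binomtest

# 24 of the 32 high Cogniphobia scorers (raw score >= 19) were forensic examinees.
# Under an assumed null proportion of 0.5, the one-sided exact binomial test:
cog = binomtest(k=24, n=32, p=0.5, alternative="greater")
print(cog.pvalue)   # ~0.004: high scorers were disproportionately forensic

# 24 of the 27 high Symptom Identity scorers (raw score >= 24) were forensic.
sid = binomtest(k=24, n=27, p=0.5, alternative="greater")
print(sid.pvalue)   # ~0.00002: again disproportionately forensic
```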

Stepwise Logistic Regression by Candidate Blocks

To further investigate the accuracy of our model, we performed a logistic regression blocking procedure to analyze whether the “intrinsic” variables are related to PVT failure once forensic context (presence/absence of external incentive) and invalid symptom report (MMPI-2-RF F-family validity scales and the HHI-r) have been controlled. First, we entered forensic context, which was a significant predictor (p<.001) of PVT status (AUC=.679). Next, we block-entered the five MMPI-2-RF symptom over-reporting scales (F-r, Fs, Fp-r, FBS-r raw >13, RBS) as well as the HHI-r scale. Forensic context remained a significant predictor (p<.001), while only the RBS was retained as a significant predictor (p<.001), with an AUC=.769.

Finally, we block-entered all 7 “intrinsic” psychological constructs (Self-Efficacy, Suggestibility, Dissociation, Symptom Identity >24, Illness Perception Consequences, Illness Perception Psychological Effects, and Cogniphobia), along with RBS and forensic context. The final model identified the same three significant predictors of PVT performance as the non-blocking multivariate logistic regression analysis: forensic context (p=.01), Cogniphobia (p=.001), and Symptom Identity >24 (p=.002), with an AUC=.812.

Collinearity and Variance Inflation Factors

The highest correlation among the three predictors in our multivariate model was r=.46 (Symptom Identity and Cogniphobia); there was no evidence of collinearity. The highest correlation among all potential predictors (r=.81) was between FBS-r and HHI-r, but this was not surprising, as eight of the 11 HHI-r items also appear on the 30-item FBS-r. The two intrinsic predictors, Cogniphobia and Symptom Identity, showed moderate correlations with RBS (r=.461 and r=.509, respectively) and FBS-r (r=.455 and r=.613, respectively). There were no significant interactions among the three predictors (overall p=.218). The correlation matrix is presented in Table 6.
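Collinearity checks like the variance inflation factors mentioned above can be computed with statsmodels; a brief sketch follows (the data frame of candidate predictors is hypothetical):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(predictors: pd.DataFrame) -> pd.Series:
    """VIF per predictor; values well below conventional cutoffs (5-10) argue
    against problematic collinearity among the candidate predictors."""
    X = sm.add_constant(predictors)
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    return pd.Series(vifs).sort_values(ascending=False)
```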

Table 6 Correlational matrix for all 14 potential predictors of PVT performance

Note: *=Raw Score; SES=Self-Efficacy Scale; SSS=Short Suggestibility Scale; DES=Dissociative Experiences Scale; SIS=IPQ-RN Symptom Identity Scale; CON=IBQ-Consequences Scale; PSY=IBQ-Psychological Effects Scale.

DISCUSSION

Results revealed that illness perceptions, including Cogniphobia (the perception that cognitive effort will exacerbate neurological symptoms) and Symptom Identity (the perception that current symptoms are the result of illness/injury), together with forensic context, best predicted outcomes on PVTs in a large mixed forensic and nonforensic clinical sample.

Our prediction that the MMPI-2-RF over-reporting validity scales and the HHI-r would be significant predictors of PVT failure was not supported. The failure of any of the MMPI-2-RF over-reporting validity scales to emerge as a significant predictor of PVT performance was unexpected given the bulk of clinical research supporting such a relationship (Ingram & Ternes, 2016), and especially given the relatively high sensitivity of the RBS, a scale developed from MMPI-2 items that distinguished 1,212 non-head-injured disability claimants who passed PVTs from those who failed them (Gervais et al., 2007).

Methodological differences may help to explain why the RBS did not survive our analyses. A review of prior PVT studies revealed that some confined their analyses to the RBS and other MMPI-2-RF validity scales (Jones, Ingram, & Ben-Porath, 2012; Nguyen et al., 2015; Rogers, Gillard, Berry, & Granacher, 2011; Wygant et al., 2009, 2010), while others also included the Substantive scales (Gervais et al., 2011; Jones et al., 2012; Nelson et al., 2011; Sellbom & Bagby, 2010; Sellbom et al., 2010; Tarescavage et al., 2012; Thomas & Youngjohn, 2009). The majority of studies used some version of analysis of variance, while others relied upon regression analyses or used correlation matrices to select predictor variables. Furthermore, predictor variables of PVT performance were not simultaneously considered. Although the RBS was a significant predictor in our bivariate logistic regression, it did not remain a significant predictor of PVT status in our multivariate logistic regression. This may have been due to the addition of our “intrinsic” factors coupled with the simultaneous consideration of all predictors.

Our prediction that intrinsic factors would be related to PVT failure was partially supported. These included Cogniphobia and Symptom Identity, which were associated with ORs greater than 1, indicating that, as scores increased on Cogniphobia or Symptom Identity, the odds of PVT failure also increased. A Cogniphobia score ≥19 was associated with .92 specificity, .35 sensitivity, and an overall classification accuracy of 65.7% for PVT failure. Thirty-two of 172 participants (18.6%) scored ≥19 on the Cogniphobia Scale. However, a within-group analysis revealed that most of the high scorers (75%) were examined in a forensic context, significantly more than the percentage of clinical nonforensic participants scoring ≥19 on the Cogniphobia Scale (25%). The most prevalent diagnosis among high scorers was mild TBI (25%), followed by post-traumatic stress disorder (PTSD) (17.5%) and a subgroup within our epilepsy sample consisting of participants with a diagnosis of epilepsy or mixed seizure disorder (17.5%).

A large proportion of the forensic participants (32%) had a primary diagnosis of mild TBI, and it is reasonable to assume that these are the participants most likely to have been told about cognitive rest and related restrictions (perhaps creating a cogniphobic response), in contrast to epilepsy participants, who do not receive this message. Future studies are needed to investigate the role of specific diagnoses (e.g., mild TBI and epilepsy) and diagnosis threat in the genesis of Cogniphobia. Our results indicate that Cogniphobia is an important psychological construct that appears to be related to PVT failure and occurs more frequently at elevated levels in a forensic sample.

The psychological construct Symptom Identity was also a significant predictor of PVT failure: as Symptom Identity scores increased, the odds of PVT failure also increased. However, Symptom Identity scores exhibited a “threshold” effect, with scores ≥24 associated with more than an 11-fold increase in the odds of PVT failure compared to scores <24. Twenty-seven of 167 participants (16.2%) scored ≥24 on the Symptom Identity Scale. Twenty-four of the 27 high scorers (88.9%) were examined in a forensic context, a significantly greater proportion than that of clinical nonforensic participants scoring ≥24 on the Symptom Identity Scale (11.1%). The most prevalent diagnosis among high scorers was mild TBI (28%), followed by PTSD and moderate-severe TBI (20% each). The present study has expanded evidence for a relationship between Symptom Identity and PVT performance, although once again, as with Cogniphobia, extremely elevated scores occurred at a higher rate among participants examined in a forensic context.

The finding that illness perceptions are significant predictors of cognitive performance validity in a mixed forensic and nonforensic clinical sample suggests that performance invalidity is more complex than historically indicated by its relationship with the MMPI-2-RF validity scales. The present findings require cross-validation before routine clinical use could be considered, and in the meantime clinicians should continue to consider the MMPI-2-RF validity scales in all of their clinical assessments. The current findings may nevertheless prompt some clinicians to reconsider the influence of other factors when interpreting PVT performance.

Limitations and Suggestions

First, although the current study represents a fairly large sample size for neuropsychological research, it must be viewed as an exploratory study that is relatively small in scale for the statistical analyses used. Thus, further replication in a larger, independent sample will be necessary. Second, given that 44% of our participants were referred by attorneys or disability carriers, we cannot rule out the potential for referral bias, as participants were not randomly selected. However, results were in the expected direction, as 39% of our sample produced noncredible PVT performance. Third, while backward (and forward) stepwise variable selection techniques have a high likelihood of capitalizing on chance, a gradient boost regression selection, as well as a logistic regression blocking procedure, identified the same three predictors out of the pool of 14 potential predictors, providing some reassurance that the results are not statistical artifacts. Moreover, variables identified by stepwise search are less likely to be artifacts when correlations among them are mostly low to moderate, as shown in Table 6.

Fourth, the current results do not extend to a criminal context. Fifth, the Symptom Identity and Cogniphobia constructs that emerged in the current exploratory prospective study may function as “psychological proxies” for cognitive performance validity in situations where traditional stand-alone and embedded performance validity measures are absent or lacking. That is, in a database devoid of any cognitive PVTs, but in which scores for Cogniphobia and/or Symptom Identity are available, scores on the latter instruments may serve as “proxies,” that is, substitutes, to provide some guidance on the credibility of neuropsychological performance. High scores suggest a relationship between abnormal health beliefs and cognitive performance validity. While the confidence bounds for the OR for Symptom Identity are wide, the more clinically relevant statistics for predicting PVT credibility are not the odds ratios but the sensitivity, specificity, and accuracy. The 74% accuracy in our study suggests that there are other factors that affect PVT credibility, and ideally these results should be validated in a future study with a larger sample size.

Sixth, while Cogniphobia and Symptom Identity were the best predictors of PVT performance, it is important to remember that these two illness perceptions can co-exist with malingering. Clinical judgment pertaining to their co-existence, or lack thereof, needs to be based upon consideration of the entire neuropsychological database, including not only the context of the examination, but also the clinical history and the evolution of the symptom presentation. As extreme elevations on the relevant intrinsic psychological variables were more common in the forensic setting, it would be useful to explore these findings in a sample with significant somatization tendencies and no external incentive [e.g., patients with psychogenic nonepileptic seizures (PNES) and no evidence of financial incentive]. This would add confidence to the suggestion that PVT measures can be failed due to variables other than intentional underperformance.

Of note, a compelling case for this possibility has been made in the PNES population, where researchers demonstrated that financial incentive was not a contributing factor to PVT performance (Williamson et al., 2005). Future studies with a PNES or similar somatic/functional population, with or without external incentive, would be useful for further confirmation and clarification of these findings.

Finally, on a somewhat cautionary note for clinicians engaged in the assessment of patients with epilepsy, the current findings highlight the possible effect of interictal epileptiform discharges on cognitive performance and PVT measures. One of the clinical samples included a large number of epilepsy patients, all of whom had simultaneous EEG data recorded during their test sessions. We excluded any patients with recent (<24 hr) or concurrent epileptiform activity based on recent studies suggesting a relationship between these variables (Drane et al., 2016; Williamson et al., 2005). An elevated rate of PVT failure was noted in the epilepsy patients with concurrent epileptiform activity, suggesting a possible relationship between verbal PVTs and dominant temporal lobe epileptiform activity. This should serve as a caution for research studies involving patients with epilepsy, as well as for clinical neuropsychological assessment of this population. Future studies are needed to further explore the relationship between dominant (left) temporal lobe epileptiform activity and performance on verbal PVTs.

CONCLUSION

Current findings implicate psychological factors, in the form of illness perceptions, that need to be considered when analyzing PVT performance (i.e., interpretation should not be limited to malingering) and that can co-exist with malingering. Psychological correlates of cognitive performance validity deserve additional investigation and consideration when interpreting patient profiles generated during forensic and clinical neuropsychological examinations.

ACKNOWLEDGMENTS

Authors G.H. and R.H. declare that there was a fee for service for the 81 forensic referrals, while authors J.S., J.G., E.W., and D.D. report no potential conflicts of interest. Partial funding was provided by NCS Pearson, Minneapolis, MN, for MMPI-2-RF scoring costs.

REFERENCES

American Congress of Rehabilitation Medicine. (1993). Definition of mild traumatic brain injury. Journal of Head Trauma Rehabilitation, 8, 86–87.
Amrhein, C., Hengmith, S., Maragkos, M., & Hennig-Fast, K. (2008). Neuropsychological characteristics of highly dissociative healthy individuals. Journal of Trauma & Dissociation, 9, 525–542.
An, K.Y., Zakzanis, K.K., & Joordens, S. (2012). Conducting research with nonclinical healthy undergraduates: Does effort play a role in neuropsychological performance? Archives of Clinical Neuropsychology, 27, 849–857.
Ardolf, B.K., Denney, R.L., & Houston, C.M. (2007). Base rates of negative response bias and malingered neurocognitive dysfunction among criminal defendants referred for neuropsychological evaluation. The Clinical Neuropsychologist, 21, 899–916.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.
Barash, J., Suhr, J.A., & Manzel, K. (2004). Detecting poor effort and malingering with an expanded version of the Auditory Verbal Learning Test (AVLT): Validation with clinical samples. Journal of Clinical and Experimental Neuropsychology, 26, 125–140.
Belanger, H.G., Barwick, F.H., Kip, K.E., Kretzmer, T., & Vanderploeg, R.D. (2013). Postconcussive symptom complaints and potentially malleable positive predictors. The Clinical Neuropsychologist, 27, 343–355.
Ben-Porath, Y.S., & Tellegen, A. (2008). Minnesota Multiphasic Personality Inventory-2-Restructured Form: Manual for administration, scoring, and interpretation. Minneapolis, MN: University of Minnesota Press.
Bernstein, E.M., & Putnam, F.W. (1986). Development, reliability, and validity of a dissociation scale. Journal of Nervous and Mental Disease, 174, 727–734.
Boone, K.B., Salazar, X., Lu, P., Warner-Chacon, K., & Razani, J. (2002). The Rey 15-item recognition trial: A technique to enhance sensitivity of the Rey 15-item memorization test. Journal of Clinical and Experimental Neuropsychology, 24, 561–573.
Boone, K., Lu, P., & Wen, J. (2005). Comparison of various RAVLT scores in the detection of noncredible memory performance. Archives of Clinical Neuropsychology, 20, 301–319.
Crombez, G., Verbaet, L., Lysens, R., Baeyers, F., & Eelen, P. (1998). Avoidance and confrontation of painful, back straining movements in chronic back pain patients. Behavior Modification, 22, 62–77.
Dean, A.C., Victor, T.L., Boone, K.B., & Arnold, G. (2008). The relationship of IQ to effort test performance. The Clinical Neuropsychologist, 22, 705–722.
Dean, A.C., Victor, T.L., Boone, K.B., Philpott, L., & Hess, R. (2009). Dementia and effort test performance. The Clinical Neuropsychologist, 23, 133–152.
Delis, D.C., Kramer, J.H., Kaplan, E., & Ober, B.A. (2000). California Verbal Learning Test-Second Edition (CVLT-II). San Antonio, TX: The Psychological Corporation.
Delis, D.C., & Wetter, S.R. (2007). Cogniform disorder and cogniform condition: Proposed diagnoses for excessive cognitive symptoms. Archives of Clinical Neuropsychology, 22, 589–604.
DePrince, A.P., & Freyd, J.J. (1999). Dissociative tendencies, attention and memory. Psychological Science, 10, 449–452.
DeRight, J., & Jorgensen, R.S. (2015). I just want my research credit: Frequency of suboptimal effort in a non-clinical healthy undergraduate sample. The Clinical Neuropsychologist, 29, 101–107.
Diefenbach, M.A., & Leventhal, H. (1996). The common-sense model of illness representation: Theoretical and practical considerations. Journal of Social Distress and the Homeless, 5, 11–38.
Drane, D.L., Williamson, D.J., Stroup, E.S., Holmes, M.D., Jung, M., Koerner, E., & Miller, J.W. (2006). Cognitive impairment is not equal in patients with epileptic and psychogenic nonepileptic seizures. Epilepsia, 47, 1879–1886.
Drane, D.L., Ojemann, J.G., Kim, M., Gross, R.E., Miller, J.W., Faught, R.E. Jr., & Loring, D.W. (2016). Interictal epileptiform discharge effects on neuropsychological assessment and epilepsy surgical planning. Epilepsy & Behavior, 56, 131–138.
Forbey, J.D., & Lee, T.T.C. (2011). An exploration of the impact of invalid MMPI-2 protocols on collateral self-report measure scores. Journal of Personality Assessment, 93, 556–565.
Forbey, J.D., Lee, T.T.C., Ben-Porath, Y.S., Arbisi, P.A., & Gartland, D. (2013). Associations between MMPI-2-RF validity scales and extra-test measures of personality and psychopathology. Assessment, 20, 448–461.
Fragale, A.R., & Heath, C. (2004). Evolving informational credentials: The (mis)attribution of believable facts to credible sources. Personality and Social Psychology Bulletin, 30, 226–236.
Freyd, J.J., Martorello, S.R., Alvarado, J.S., Hayes, A.E., & Christman, J.C. (1998). Cognitive environments and dissociative tendencies: Performance on the standard Stroop task for high versus low dissociators. Applied Cognitive Psychology, 12, S91–S103.
Gervais, R.O., Ben-Porath, Y.S., Wygant, D.B., & Green, P. (2007). Development and validation of a Response Bias Scale (RBS) for the MMPI-2. Assessment, 14, 196–208.
Gervais, R.O., Wygant, D.B., Sellbom, M., & Ben-Porath, Y.S. (2011). Associations between symptom validity test failure and scores on the MMPI-2-RF validity and substantive scales. Journal of Personality Assessment, 93, 508–517.
Green, P. (2005). Green’s Word Memory Test for Windows: User’s manual. Edmonton, Canada: Green’s Publishing, Inc.
Greiffenstein, M., Baker, W., & Gola, T. (1994). Validation of malingered amnesia measures with a large clinical sample. Psychological Assessment, 6, 218–224.
Greve, K.W., Heinly, M.T., Bianchini, K.J., & Love, J.M. (2009). Malingering detection with the Wisconsin Card Sorting Test in mild traumatic brain injury. The Clinical Neuropsychologist, 23, 343–362.
Haggerty, K.A., Frazier, T.W., Busch, R.M., & Naugle, R.I. (2007). Relationships among Victoria Symptom Validity Test indices and Personality Assessment Inventory validity scales in a large clinical sample. The Clinical Neuropsychologist, 21, 917–928.
Henry, G.K., Heilbronner, R.L., Algina, J., & Kaya, Y. (2013). Derivation of the MMPI-2-RF Henry-Heilbronner Index-r (HHI-r) scale. The Clinical Neuropsychologist, 27, 509–515.
Ingram, P.B., & Ternes, M.S. (2016). The detection of content-based invalid responding: A meta-analysis of the MMPI-2-RF over-reporting scales. The Clinical Neuropsychologist, 30, 473–496.
Jones, A., Ingram, V.M., & Ben-Porath, Y.S. (2012). Scores on the MMPI-2-RF scales as a function of increasing levels of failure on cognitive symptom validity tests in a military sample. The Clinical Neuropsychologist, 26, 790–815.
Johnson, S.K. (2008). Medically unexplained illness: Gender and biopsychosocial implications. Washington, DC: American Psychological Association.
Kemp, S., Coughlan, A.K., Rowbottom, C., Wilkinson, K., Teggart, V., & Baker, G. (2009). The base rate of effort test failure in patients with medically unexplained symptoms. Journal of Psychosomatic Research, 65, 319–325.
Kotor, R.I., Bellman, S.B., & Watson, D.B. (2004). Multidimensional Iowa Suggestibility Scale. Stony Brook, NY: Stony Brook University (available from roman.Kotov@stonybrook.edu).
Lamberty, G.J. (2008). Understanding somatization in the practice of clinical neuropsychology. New York: Oxford University Press.
Larrabee, G.J. (2003). Detection of malingering using atypical performance patterns on standard neuropsychological tests. The Clinical Neuropsychologist, 17, 410–425.
Lees-Haley, P.R., Iverson, G.L., Lange, R.T., Fox, D.D., & Allen, L.M. III (2002). Malingering in forensic neuropsychology: Daubert and the MMPI-2. Journal of Forensic Neuropsychology, 3, 167–203.
Loring, D.W., Lee, G.P., & Meador, K.J. (2005). Victoria Symptom Validity Test performance in nonlitigating epilepsy surgery candidates. Journal of Clinical and Experimental Neuropsychology, 27, 610–617.
Martelli, M.F., Zasler, N.D., Grayson, R.I., & Liljedahl, E.L. (1999). Kinesiophobia and cogniphobia: Assessment of avoidance conditioned pain related disability (ACPRD). Poster presented at the annual meeting of the National Academy of Neuropsychology, San Antonio, TX.
Martin, P.K., Schroeder, R.W., Heinrichs, R.J., & Baade, L.E. (2015). Does true neurocognitive dysfunction contribute to Minnesota Multiphasic Personality Inventory-2-Restructured Form cognitive validity scale scores? Archives of Clinical Neuropsychology, 30, 377–386.
Martin, P.K., Schroeder, R.W., & Odland, A.P. (2015). Neuropsychologists’ validity testing beliefs and practices: A survey of North American professionals. The Clinical Neuropsychologist, 29, 741–776.
Martin, P.K., Schroeder, R.W., & Odland, A.P. (2016). Expert beliefs and practices regarding neuropsychological validity testing. The Clinical Neuropsychologist, 30, 515–535.
Mittenberg, W., DiGiulio, V., Perrin, S., & Bass, A.E. (1992). Symptoms following mild head injury: Expectation as aetiology. Journal of Neurology, Neurosurgery, and Psychiatry, 55, 200–204.
Mittenberg, W., Tremont, G., Zielinski, R.E., Fichera, S., & Rayls, K.R. (1996). Cognitive-behavioral prevention of postconcussion syndrome. Archives of Clinical Neuropsychology, 11, 139–145.
Mittenberg, W., Patton, C., Canyock, E.M., & Condit, D.C. (2002). Base rates of malingering and symptom exaggeration. Journal of Clinical and Experimental Neuropsychology, 24, 1094–1102.
Moss-Morris, R., Weinman, J., Petrie, K.J., Horne, R., Cameron, L.D., & Buick, D. (2002). The Revised Illness Perception Questionnaire (IPQ-R). Psychology and Health, 17, 1–16.
Nelson, N.W., Hoelzle, J.B., McGuire, K.A., Sim, A.H., Goldman, D., Ferrier-Auerbach, A.G., & Sponheim, S.R. (2011). Self-report of psychological function among OEF/OIF personnel who also report combat-related concussion. The Clinical Neuropsychologist, 25, 716–740.
Nguyen, C.T., Green, D., & Barr, W.B. (2015). Evaluation of the MMPI-2-RF for detecting over-reported symptoms in a civil forensic and disability setting. The Clinical Neuropsychologist, 29, 255–271.
Peck, C.P., Schroeder, R.W., Heinrichs, R.J., VonDran, E.J., Brockman, C.J., Webster, B.K., & Baade, L.E. (2013). Differences in MMPI-2 FBS and RBS scores in brain injury, probable malingering, and conversion disorder groups: A preliminary study. The Clinical Neuropsychologist, 27, 693–707.
Rogers, R.R., Gillard, N.D., Berry, D.T.R., & Granacher, R.P. (2011). Effectiveness of the MMPI-2-RF validity scales for feigned mental disorders and cognitive impairment. Journal of Psychopathology and Behavioral Assessment, 33, 355–367.
Schroeder, R.W., Baade, L.E., Peck, C.P., VonDran, E.J., Brockman, C.J., Webster, B.K., & Heinrichs, R.J. (2012). Validation of MMPI-2-RF validity scales in criterion group neuropsychological samples. The Clinical Neuropsychologist, 26, 129–146.
Schroeder, R.W., & Marshall, P.S. (2011). Evaluation of the appropriateness of multiple symptom validity indices in psychotic and non-psychotic psychiatric populations. The Clinical Neuropsychologist, 25, 437–453.
Schwarzer, R., & Jerusalem, M. (1995). The General Self-Efficacy Scale. In J. Weinman, S. Wright, & M. Johnston (Eds.), Measures in health psychology: A user’s portfolio. Causal and control beliefs. Windsor, UK: NFER-NELSON.
Sellbom, M., & Bagby, R.M. (2010). Detection of over-reported psychopathology with the MMPI-2-RF validity scales. Psychological Assessment, 22, 757–767.
Sellbom, M., Toomey, J.A., Wygant, D.B., Kucharski, L.T., & Duncan, S. (2010). Utility of the MMPI-2-RF (Restructured Form) validity scales in detecting malingering in a criminal forensic setting: A known-groups design. Psychological Assessment, 22, 22–31.
Silk-Eglit, G.M., Stenclik, J.H., Gavett, B.E., Adam, J.W., Lynch, J.K., & McCaffrey, R.J. (2014). Base rate of performance invalidity among non-clinical undergraduate research participants. Archives of Clinical Neuropsychology, 29, 415–421.
Slick, D., Hopp, M.A., Strauss, E., & Thompson, G.B. (1997). Victoria Symptom Validity Test. Lutz, FL: Psychological Assessment Resources.
Snell, D.L., Hay-Smith, E.J., Surgenor, L.J., & Siegert, R.J. (2013). Examination of outcome after mild traumatic brain injury: The contribution of injury beliefs and Leventhal’s common sense model. Neuropsychological Rehabilitation, 23, 333–362.
Suhr, J.A. (2003). Neuropsychological impairment in fibromyalgia: Relation to depression, fatigue and pain. Journal of Psychosomatic Research, 55, 321–329.
Suhr, J.A., & Spickard, B. (2012). Pain-related fear is associated with cognitive task avoidance: Exploration of the cogniphobia construct in a recurrent headache sample. The Clinical Neuropsychologist, 26, 1128–1141.
Sweet, J.J., Condit, D.C., & Nelson, N.W. (2008). Feigned amnesia and memory loss. In R. Rogers (Ed.), Clinical assessment of malingering and deception (3rd ed.). New York: Guilford Press.
Tarescavage, A.M., Wygant, D.B., Gervais, R.O., & Ben-Porath, Y.S. (2012). Association between the MMPI-2 Restructured Form (MMPI-2-RF) and malingered neurocognitive dysfunction among non-head injury disability claimants. The Clinical Neuropsychologist, 27, 313–335.
Thomas, M.L., & Youngjohn, J.R. (2009). Let’s not get hysterical: Comparing the MMPI-2 validity, clinical, and RC scales in TBI litigants tested for effort. The Clinical Neuropsychologist, 23, 1067–1084.
Todd, D.D., Martelli, M.F., & Grayson, R.L. (1998). The Cogniphobia Scale (C-Scale) (white paper). Retrieved from www.angelfire.com/va/MFMartelliPhD/nanposters.html
Tombaugh, T.N. (1996). Test of Memory Malingering. North Tonawanda, NY: Multi-Health Systems, Inc.
Tutz, G., & Binder, H. (2006). Generalized additive modeling with implicit variable selection by likelihood-based boosting. Biometrics, 62, 961–971.
Van Dyke, S.A., Millis, S.R., Axelrod, B.N., & Hanks, R.A. (2013). Assessing effort: Differentiating performance and symptom validity. The Clinical Neuropsychologist, 27, 1234–1246.
Vaquez-Justo, E., Alvarez, M.R., & Otero, M.J.F. (2003). Influence of depressed mood on neuropsychological performance in HIV-seropositive drug users. Psychiatry and Clinical Neurosciences, 57, 251–258.
Whittaker, R., Kemp, S., & House, A. (2007). Illness perceptions and outcome in mild head injury: A longitudinal study. Journal of Neurology, Neurosurgery, and Psychiatry, 78, 644–646.
Williamson, D.J., Drane, D.L., Stroup, E.S., Holmes, M.D., Wilensky, A.J., & Miller, J.W. (2005). Recent seizures may distort the validity of neurocognitive test scores in patients with epilepsy. Epilepsia, 46(Suppl. 8), 74.
Williamson, D.J., Holsman, M., Clayton, N., Miller, J.W., & Drane, D. (2012). Abuse, not financial incentive, predicts noncredible cognitive performance in patients with psychogenic nonepileptic seizures. The Clinical Neuropsychologist, 26, 588–598.
Wolfe, P.L., Millis, S.R., Hanks, R., Fichtenberg, N., Larrabee, G.J., & Sweet, J.J. (2010). Effort indicators within the California Verbal Learning Test-II (CVLT-II). The Clinical Neuropsychologist, 24, 153–168.
Wygant, D.B., Sellbom, M., Gervais, R.O., Ben-Porath, Y.S., Stafford, K.P., Freeman, D.B., & Heilbronner, R.L. (2010). Further validation of the MMPI-2 and MMPI-2-RF Response Bias Scale: Findings from disability and criminal forensic settings. Psychological Assessment, 22, 745–756.
Wygant, D.B., Ben-Porath, Y.S., Arbisi, P.A., Berry, D.T., Freeman, D.B., & Heilbronner, R.L. (2009). Examination of the MMPI-2 Restructured Form (MMPI-2-RF) validity scales in civil forensic settings: Findings from simulation and known group samples. Archives of Clinical Neuropsychology, 24, 671–680.
Youngjohn, J.R., Wershba, R., Stevenson, M., Sturgeon, J., & Thomas, M.L. (2011). Independent validation of the MMPI-2-RF somatic/cognitive and validity scales in TBI litigants tested for effort. The Clinical Neuropsychologist, 25, 463–476.