More Sensitive Test Norms Better Predict Who Might Develop Alzheimer's Disease

Doctors should develop more sensitive testing norms when screening highly intelligent older adults for pre-clinical signs of Alzheimer’s disease, suggests a new study appearing in Neuropsychology (January 2004), published by the American Psychological Association (APA).

The study reports that higher test cut-offs, rather than cut-offs based on the standard group average, more accurately predicted how many highly intelligent people would decline over time.

Diagnosing Alzheimer’s as early as possible is important, given new medical and psychological interventions that can slow the pace of the disease.

Research has also found that, on average, very intelligent people show clinical signs of Alzheimer’s later than most people. But once they do, they decline more rapidly.

Given this pattern, the study argues that a "different pattern may call for a different approach to diagnosis," and that tests for highly intelligent elders should "reflect their greater mental reserves."

Lead author Dorene Rentz, PsyD, of Brigham and Women’s Hospital’s Department of Neurology and Harvard Medical School, says "Highly intelligent elders are often told their memory changes are typical of normal aging when they are not. As a result, they would miss the advantages of disease-modifying medications when they become available."

Rentz and her team conducted a study which involved 42 people with IQs of 120 or more drawn from a longitudinal study of aging and Alzheimer’s disease.

The team studied participant performance on measures of cognitive ability including word generation, visuospatial processing and memory. Scores were collected at the start of the study, and again 3 years later.

Rentz and her team then asked which of two test norms better predicted problems: the standard norm, derived from a large cross-section of the population, or an adjusted high-IQ norm, which measured changes against higher ability levels.

The adjusted norm performed better.

Raised cut-offs (the adjusted norm) flagged 11 of the participants as at risk for the disease and eventual decline.

In contrast, standard cut-offs classified the same participants as normal.

In other words, what counts as normal for the general population is not normal for those with higher IQs.

As predicted by the raised cut-offs (the adjusted norm), 9 of those 11 people had declined 3 ½ years later. Six eventually developed mild cognitive impairment (MCI), a transitional stage between normal aging and dementia. Five of them were diagnosed with Alzheimer’s disease two years after the study was submitted.

Lead author Rentz says "With standard norms, people with higher levels of ability would be classified as normal for up to three years before they began demonstrating a decline on standardized tests. In this case, they could be at risk for not receiving early treatment intervention."

Rentz and her team used this approach: high-IQ people were scored against an average that was "normal" for them, proportionately higher than the cross-sectional average of 100. Scores were considered abnormal if, following standard practice, they fell 1.5 standard deviations or more below the adjusted norm.
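The cutoff logic described above can be sketched in a few lines of Python. The specific adjusted mean (120 for a high-IQ cohort) and the standard deviation of 15 below are illustrative assumptions, not values reported in the study:

```python
def is_abnormal(score, norm_mean, sd=15.0, cutoff_sds=1.5):
    """Flag a test score as abnormal if it falls 1.5 standard
    deviations or more below the chosen normative mean."""
    z = (score - norm_mean) / sd
    return z <= -cutoff_sds

# Hypothetical example: a score of 95 looks normal against the
# population mean of 100 (z = -0.33) ...
print(is_abnormal(95, norm_mean=100))   # False
# ... but abnormal against an assumed high-IQ-adjusted mean of 120
# (z is roughly -1.67).
print(is_abnormal(95, norm_mean=120))   # True
```

The same score can therefore be classified differently depending on which normative mean it is judged against, which is the crux of the study's argument.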

Rentz believes that adjusting test norms applies to low-IQ people as well. She explains that "People with below-average intelligence have the potential for being misdiagnosed as ‘demented’ when they are not, because they score below the normative cutoffs." Adjusting cut-offs or test norms to match the ability level of the individual being examined may be the best predictor of Alzheimer’s-type dementia.

IQ adjustment may also be more useful for women than traditional adjustments for education. "Education-based methods often underestimate ability in women who did not have the same educational advantages as men, particularly in this aged cohort," she says.

IQ-adjusted norms may also help control inaccuracies that may have crept into existing normative data, since those norms "were derived from cross-sectional populations who might not have been adequately screened for preclinical Alzheimer’s disease." Until new databases are developed from longitudinal studies, "use of an IQ-adjusted method may provide a temporary solution for clinicians and research investigators evaluating older highly intelligent individuals at risk for Alzheimer’s disease," say the researchers.


