Assessment editors help give test takers an even chance

August 2, 2017 | By Kate Karp | Conferences

What subterfuges did you use as a kid to fake out multiple-choice tests?

Seated at your desk, chewing on the pencil, you might have bubbled in options like all (or none) of the above, looked for a choice that was longer or shorter than the rest, or known full well, with the cunning of a linguist-in-waiting, that if the last word in the question’s stem was a(n) and only one option began with a vowel other than long u, that option was the correct choice.

A simple keyword search on the internet using words like cheat and assessment will yield a trove of information for test takers wanting to up their scores.

But now, the pencil’s in the other hand — or, lacking ambidexterity, you’re holding a red rollerball and a clear set of editorial standards. You want to make sure that test items measure what they’re supposed to measure, that is, what a student has learned.

“The goal of an assessment is to find out in an unbiased way how much a student knows about the test topics. An assessment assesses,” said Evelyn Mellone, senior editor at Defense Language Institute in Monterey, California.

Mellone and co-presenter David Pisano, a Defense Language editor, discussed assessment editing during an ACES 2017 workshop in March. 

Mellone’s experience as an assessment editor with McGraw-Hill Educational Publishing led her to develop ways to more deeply edit tests and ensure that they assess what they’re supposed to assess.

When she was hired by her current employer, she said, “at first, all they wanted me to tell them was that there were periods at the end of the sentences, question marks at the ends of questions, and everything was spelled right.”

But certain things jumped out at her — for example, if only one option has a capital letter or a hyphen, that can be a clue that it is the correct choice.
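To make that kind of screening concrete, here is a minimal Python sketch (an illustration of the heuristic, not a tool the presenters described) that flags an option standing alone in carrying a capital letter or a hyphen:

```python
def formatting_cues(options: list[str]) -> list[str]:
    """Flag any option that stands alone in carrying a capital letter
    or a hyphen, since a lone outlier can tip off a test-wise examinee."""
    cues = []
    checks = [
        ("a capital letter", lambda o: any(c.isupper() for c in o)),
        ("a hyphen", lambda o: "-" in o),
    ]
    for label, has_feature in checks:
        flagged = [o for o in options if has_feature(o)]
        if len(flagged) == 1:
            cues.append(f"only one option has {label}: {flagged[0]!r}")
    return cues

if __name__ == "__main__":
    print(formatting_cues(["to run", "to walk", "to double-check", "to sing"]))
    # -> ["only one option has a hyphen: 'to double-check'"]
```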


Mellone and Pisano defined assessment terminology as follows:

stem: question

option: answer choice

key: correct answer choice

distractor: incorrect answer choice

item: stem and options as a whole

passage: reading or listening text

examinee: test taker, student

Because of the session’s time limitations, the discussion covered only multiple-choice items in reading comprehension.

Research indicates that four options per item is the best practice, Mellone said.
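To fix the vocabulary, here is a minimal Python sketch (my own modeling, not anything shown in the session) of an item with its stem, options, key, and distractors, including a check for the four-option guideline:

```python
from dataclasses import dataclass

@dataclass
class Item:
    """A multiple-choice item, using the presenters' terminology."""
    stem: str            # the question
    options: list[str]   # the answer choices
    key: str             # the correct answer choice

    @property
    def distractors(self) -> list[str]:
        """The incorrect answer choices."""
        return [o for o in self.options if o != self.key]

    def problems(self) -> list[str]:
        """Return editorial red flags; an empty list means none found."""
        flags = []
        if len(self.options) != 4:
            flags.append(f"item has {len(self.options)} options; research favors 4")
        if self.key not in self.options:
            flags.append("the key is not among the options")
        return flags

item = Item(
    stem="The last word in the stem was a(n) ...",
    options=["apple", "pear", "plum", "fig"],
    key="apple",
)
print(item.distractors)   # ['pear', 'plum', 'fig']
print(item.problems())    # [] (four options, key present)
```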

She and Pisano laid out the process of writing effective multiple-choice items and how to analyze their integrity. They identified four types of edits for an item, a process the presenters described as “looking at multiple-choice test questions from all directions.”

The final direction, bottom up, mitigates opportunities for the test taker to become “test wise” by using clues and ruses.

Assessment editors aren’t there to trick test takers, Mellone said, but to give them as even a chance as possible to show what they know and can infer about the subject matter.

“Even someone who may know the material may overthink and get confused, thinking that the test makers are trying to confuse them,” she said.

Pisano said lower-level distractors can make higher-level students infer details or relationships that don’t exist.


Mellone added that initial words or phrases, repeated words or phrases, and final words or phrases “can distract the test-taker, increase cognitive load, or at worst, even provide a clue to the key.” Negatively worded statements are also avoided; sometimes, she said, a student will go right by the “except” or the “not.”
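In the same spirit, a hedged sketch (again mine, not the presenters’) of how an editor might mechanically screen options for repeated initial or final words:

```python
def repeated_edge_words(options: list[str]) -> list[str]:
    """Flag words that every option repeats at its start or end;
    per the advice above, such repetition usually belongs in the stem."""
    tokenized = [o.lower().split() for o in options if o.strip()]
    flags = []
    if tokenized and len({t[0] for t in tokenized}) == 1:
        flags.append(f"every option begins with {tokenized[0][0]!r}")
    if tokenized and len({t[-1] for t in tokenized}) == 1:
        flags.append(f"every option ends with {tokenized[0][-1]!r}")
    return flags
```

For options like “in the park,” “in the kitchen,” and “in the attic,” the shared “in” would be flagged, suggesting the repetition should move up into the stem.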

“Item writing is a real art and skill,” Mellone said. “A short question — but to get it right, it takes a lot of experience. It’s not a highly respected writing project, unfortunately.”

As the presenters stressed, bad test items lead to bad assessments. Test takers may be thrown off by unclear items or may get a score they don’t deserve. And people gathering statistics on the test results won’t have accurate information.

Ensuring that test takers can neither psych out the items nor be thrown off by them will in turn give educators a true picture of what a student knows and is able to infer. It might also give educators information about what they can do to fine-tune their curricula and how they’re teaching.

The complete presentation, which gives clear details about what goes into a good test item, is available in the conference handout section.

And what typifies a good test item?

“One with no faults,” Mellone said.


Contributions to this article were made by Evelyn Mellone and David Pisano.
