Item Logic
The logic of an item is a rough outline or abstraction of the cognitive path that the item prompts — or is intended to prompt — in test takers. Like outlines and abstractions more generally, the level of detail in item logic can vary greatly.
There is no standard format or list of required elements for an explanation of the logic of an item. Historically, item logic has often been taken for granted and under-examined. It has most often appeared in the rationales that CDPs sometimes write to explain multiple-choice items’ answer options — though those rationales are both more detailed and less complete than item logic requires.
Because item logic lacks the details and particulars of an individual item, it can be easier to spot flaws in an item’s design by examining its logic. That is, without the distracting details that might call for additional work or obscure test takers’ anticipated cognition, a CDP can focus more clearly on an item’s ability to elicit the targeted cognition.
Item logic is not an item template, because item templates merely explain what an item should look like and what should be presented to test takers. Rather, item logic is about the cognitive path of test takers. Exactly what might be presented to prompt that cognitive path can vary. Item logic is always missing from item specifications, and the inclusion of item logic is an important part of RTD task models.
An explanation of an item’s logic should make clear how the item elicits evidence of the targeted cognition.
Where Item Logic Comes From
CDPs, teachers and students have vast experience with content and with tests — standardized and otherwise. The adults were once students themselves, learning the content on the test in ways rarely very different from how today’s students learn it. Content is often assessed — both in classroom assessment and in formal assessment — in rather conventional ways. Yes, that is how we were assessed on that content, and yes, that is how it seems right to assess it today. Or, as Tevye famously sang, Tradition!
Some of the most valuable innovation in assessment is in item logic. When item writers and CDPs develop a new way to get at important content, they can open doors to more valid items. They may offer novelty in ways that undermine the emphasis on drills and other inappropriate test prep that reduce the cognitive complexity of assessments.
Unfortunately, item logic is often borrowed from other standards or assessment targets. But logic that works to elicit evidence for one standard is usually inappropriate to elicit evidence for another standard, unless stated in such abstract terms as to provide no insight into an item.
Elements of Item Logic
Because of the many ways that standards vary and the fact that there is no single desired level of detail for explanations of item logic, there is no definitive list of the elements that should be included. But we can offer a list of some of the elements that might be included.
What the test taker is expected to know before encountering the item
How the test taker might make use of the elements of the stimuli of the item or item set
The steps that a test taker might take to get to their response. (required, though the level of detail varies)
The form of the test taker’s response
How the modality of the item influences the test taker’s cognitive path
How the targeted cognition fits into the test taker’s cognitive path
How the test taker’s response constitutes affirmative and/or negative evidence of their proficiency with the targeted cognition.
As in so many aspects of professional practice in any field, less experienced practitioners benefit from being more explicit when thinking through their work, and more experienced practitioners can usually trust their finely honed professional judgment and instincts to work through things less consciously. Of course, when the most expert practitioners go wrong, it is because they took something for granted that they could have recognized had they been more deliberate in their thinking.
Therefore, the best practitioners get a sense for when they need to slow down and be more explicit and deliberate, consciously thinking through more elements of the logic of the item they are examining.
Some Common Mistakes in Item Logic
Many problems with items exist at the level of their logic. These problems cannot be fixed with tinkering or wordsmithing. Rather, they usually require wholesale reworking of an item. When CDPs review items for initial intake from item writers, they should be looking at whether each item’s logic actually supports the targeted cognition. If it does not, that item is likely not worth the additional work it would take to make it align properly.
One of the most common mistakes is item logic that is simply too underdeveloped. Yes, it should be an outline or abstraction, but there are limits. It is not enough to say, “The item prompts the test taker to solve a problem that requires — or perhaps just could be solved with — the targeted cognition.” That is so generic as to apply to virtually any item in a content area. Item logic needs more detail and specificity than that. Perhaps something like, “In order to get to the correct sum of two numbers, the test taker must carry (i.e., the targeted cognition) once.” That would be better, but might still be lacking in other areas.
Another common mistake is item logic that does not account for the modality and item type. For some reason, the item writer and/or CDP has not recognized that the modality of the item substantially alters the nature of the cognitive path that is introduced. Generally, selected-response items require test takers to recognize or pick out an answer, rather than to come to an answer themselves. For example, if an assessment target were, “Students can come up with rhymes for words,” it would be quite different to i) ask them “What rhymes with hard?” than to ii) ask them, “Which of the following words rhymes with hard? a. harm b. cored c. bard d. far.”
Sometimes the logic is borrowed from a related standard, and so it feels appropriate, but it does not actually lead to evidence of the targeted cognition.
Items that use some of the language of a standard might not actually prompt the targeted cognition. Such echoes from standard to item are no substitute for thinking through what an item might actually prompt.
Without a doubt, the single most common mistake in item logic, and a major contributor to most other mistakes, is that the item’s logic is taken for granted and under-examined. That is, the item writer and/or CDP did not think it through to make sure it was appropriate and effective.
Of course, there are other sorts of mistakes, as well. To learn more about them, see the Item Logic packet, downloadable from the sidebar to the left.
Making Use of the Logic of an Item in Item Development
When one is faced with an item in development, the first step in making use of the logic of an item is merely to look for it. Examine the item and think about the cognitive path it prompts in yourself or might prompt in test takers. One might start by looking at the verbs in the directions or the stem of the item. Simply thinking through the plain language of the item and how test takers would respond is the safest starting point.
With some sense of the logic of the item in mind — if not set down on paper — a CDP should examine it in light of the assessment target or standard. Is the logic actually aimed at the targeted cognition?
Even when the logic of an item is aligned to the targeted cognition, there is still opportunity for an item to go astray. One might see the logic the item is trying to use, but ask whether the item actually sticks to it successfully. There are many ways in which an item can offer alternative paths to a successful response and/or construct-irrelevant obstacles, such that the implementation of the item departs from the original logic of the item. In fact, this is quite common. Much of the work of item refinement is reworking the item to hew more closely to its intended logic.
The logic of an item describes the targeted cognition and how the item attempts to prompt that cognition. It is the basic design of the item, and it links the targeted cognition, prior knowledge and whatever stimuli might be attached to the item. It is tied to the item type and presentation, because the same sort of question can prompt a very different cognitive path for different sorts of item types (e.g., short answer vs. multiple choice).
The logic of an item is not simply the standards, assessment target or targeted cognition. It is not just the ideal response to an item. When fully stated explicitly, the logic of an item should make clear how an item will elicit evidence of various levels of proficiency — both successful and unsuccessful responses.
Examples of Item Logic
One of the common sorts of item logic is for reading items that ask the test taker to identify a particular detail from a reading passage. What did Jack break? Test takers are expected to read the story first, and then remember the particular detail, picking it out from a list. This would show that they understood the story they read.
A very different sort of item could require test takers to recognize when something is explicit in a story and when it is not. What caused Jack to break his crown? Test takers are expected to read the story first and then recognize that none of the offered answer options is explicitly included in the story. Or, if the item required the test taker to supply their own answer, the test taker would have to independently recognize that no reason is given in the nursery rhyme, and test takers who do not understand that stories do not always explain everything might mistake the mechanism (i.e., falling down) for the actual cause.
In mathematics, the most common and basic item logic is simply that, when given a bare math problem (e.g., arithmetic or algebra) requiring the use of a particular skill, test takers will demonstrate their skill (or lack thereof) by attempting the item and offering (or selecting) an answer. Word problems and multi-part problems have quite different item logic. Items written to assess a particular skill (e.g., carrying in addition) might be based on a more specific item logic that requires test takers to select the answer that follows from correctly carrying rather than the answer that follows from failing to correctly carry.
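To make that logic concrete, consider a purely hypothetical illustration (the numbers are ours, not drawn from any particular assessment): for the item 38 + 47, the key of 85 follows from correctly carrying from the ones place, while a distractor of 75 follows from adding 8 + 7, recording the 5, and dropping the carried ten. Under this item logic, selecting 85 is affirmative evidence of the targeted cognition and selecting 75 is negative evidence, whereas a distractor unrelated to carrying would tell the CDP little about the targeted skill.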
Taken for Granted
Because of all of our long experience as students and as test takers, we are all deeply familiar with the logic of many types of items. Teachers, item writers, content development professionals and students(!) have seen so many of the same sorts of questions for a particular kind of KSA that they barely even think about it. Instead, they take for granted that that is the right way — or a right way — to assess those KSAs.
These taken-for-granted item logics are — almost by definition — vague and soft. They are not spelled out or examined. Instead, they are traditional, and their tenure gives them an undeserved credibility.
Making the Logic of an Item More Explicit
There certainly is no standard way to express the logic of an item, or often even a place to record it — though RTD Task Models should contain the elements of acceptable item logic. Generally, the logic of an item should be recognizable to and explainable by a CDP.
The CDP should be able to explain the logic of an item in terms of the standards and the specific KSAs it targets.
The CDP should be able to explain how the cognitive path(s) prompted by the item make use of the targeted cognition.
The CDP should be able to explain how the response or work product generated by the test taker would be evidence of their proficiency (or lack of proficiency) with the targeted cognition.
The CDP should be able to explain how, if at all, test takers might work with the various elements of the item and stimuli along their cognitive path.
The CDP should be able to narrate the cognitive path(s) that test takers might take in response to the item.