The test window dates you select are extremely important. Setting appropriate test windows allows you to accurately monitor the growth of students over time and to take full advantage of MAP reports. The date range for a test window must be within the date range of a term.
Use this table to determine the optimal timing for test windows.
| Time Category | NWEA Recommendation | Why? |
|---|---|---|
| Test window length for all schools in a district that will be compared (for example, all elementary schools) | Keep it short (three weeks at most). | Captures test results for all students who will be compared to each other when they have all received about the same amount of instruction. |
| Test window placement | Keep the timing consistent from one academic year to the next. | Supplies a valid comparison of your schools’ growth data from year to year. |
| Fall test window timing | Beginning of academic year (weeks 1 to 7) | Enables MAP reports to accurately compare your students’ test data to growth and status norms, so no additional effort is required to compare with the norms data. |
| Winter test window timing | Middle of academic year | |
| Spring test window timing | End of academic year | |
| Time between fall and spring testing | Roughly 32 weeks of instruction | |
| Time between testing in consecutive fall terms (or consecutive spring terms) | Roughly 36 weeks of instruction | |
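For districts that script their scheduling checks, the placement rules above reduce to a few date comparisons. The sketch below is illustrative only: the three-week limit and the weeks-1-to-7 fall placement come from the table, while the dates and variable names are hypothetical.

```python
from datetime import date

# Hypothetical term and test window dates, for illustration only.
term_start = date(2014, 8, 25)    # first day of the fall term
term_end = date(2014, 12, 19)     # last day of the fall term
window_start = date(2014, 9, 2)   # proposed fall test window opens
window_end = date(2014, 9, 19)    # proposed fall test window closes

# Requirement: the window's date range must fall within the term's date range.
assert term_start <= window_start and window_end <= term_end, \
    "Test window must be within the term"

# Recommendation: keep the window short (three weeks at most).
assert (window_end - window_start).days <= 21, \
    "Test window is longer than three weeks"

# Recommendation: open the fall window in weeks 1 to 7 of the academic year.
week_of_term = (window_start - term_start).days // 7 + 1
assert 1 <= week_of_term <= 7, "Fall window should begin in weeks 1 to 7"
```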
MAP reports compare your students’ test data gathered during the test window to norms data. Norms data were calculated from a broad sample of MAP test events from other school districts. The norms data are based on average test window timing that matches the timing recommended in the table above.
The timing and length of your test window determine whether comparisons to norms data are valid, that is, whether they represent a similar amount of instructional time between measurements.
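The same idea applies to the spacing between windows. The sketch below checks whether the instructional time between two test windows is close enough to the norms sample for a valid comparison. The week counts are hypothetical inputs (a real count would come from the school calendar with breaks excluded), and the two-week tolerance is an assumption, since the guidance above says only "roughly."

```python
# Recommended instructional weeks between test windows, from the table above.
RECOMMENDED_SPACING = {
    "fall_to_spring": 32,  # weeks of instruction between fall and spring testing
    "fall_to_fall": 36,    # weeks between testing in consecutive fall terms
}

def spacing_is_comparable(actual_weeks: float, target_weeks: float,
                          tolerance: float = 2.0) -> bool:
    """Return True when the instructional time between two windows is close
    enough to the norms sample for a valid comparison (tolerance assumed)."""
    return abs(actual_weeks - target_weeks) <= tolerance

# Hypothetical week counts, for illustration only.
print(spacing_is_comparable(31, RECOMMENDED_SPACING["fall_to_spring"]))  # True
print(spacing_is_comparable(28, RECOMMENDED_SPACING["fall_to_fall"]))    # False
```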