Item analysis provides statistics on overall test performance and on individual test questions. These statistics help you identify questions that may discriminate poorly between high- and low-performing students. You can use this information to improve questions for future test administrations or to adjust credit on current attempts.
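The product does not publish its exact formulas, but the two classic statistics behind this kind of report are item difficulty (the proportion of students who answered correctly) and item discrimination (the point-biserial correlation between performance on one question and total test score). The sketch below is an illustration of those standard measures, not the tool's actual implementation; the function names and sample data are hypothetical.

```python
from statistics import mean, pstdev

def item_difficulty(item_scores):
    """Proportion of students who answered the item correctly
    (scores are 1 for correct, 0 for incorrect)."""
    return mean(item_scores)

def item_discrimination(item_scores, total_scores):
    """Point-biserial correlation between an item and the total test score.
    Values near +1 mean high scorers tend to get the item right;
    values near 0 or below suggest the item discriminates poorly."""
    n = len(item_scores)
    mi, mt = mean(item_scores), mean(total_scores)
    si, st = pstdev(item_scores), pstdev(total_scores)
    if si == 0 or st == 0:  # no variance: correlation is undefined
        return 0.0
    cov = sum((i - mi) * (t - mt)
              for i, t in zip(item_scores, total_scores)) / n
    return cov / (si * st)

# Hypothetical data: five graded attempts on one question
item = [1, 1, 0, 1, 0]      # per-student item result
total = [9, 8, 4, 7, 3]     # per-student total test score

print(round(item_difficulty(item), 2))          # moderate difficulty
print(round(item_discrimination(item, total), 2))  # strong discriminator
```

A question with a discrimination value near zero or negative is the kind the report flags for review: students who did well overall were no more likely to answer it correctly than students who did poorly.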
Roles with grading privileges (such as instructors, graders, and teaching assistants) access item analysis in three locations within the assessment workflow. It is available in the contextual menu for a:
For best results, run item analyses on single-attempt tests after all attempts have been submitted and all manually graded questions are scored. Interpret the results with care: the statistics are influenced by the number of test attempts, the type of students taking the test, and chance errors.
You can run item analyses on tests that include single or multiple attempts, question sets, random blocks, auto-graded question types, and questions that need manual grading. For tests with manually graded questions that have not yet been assigned scores, statistics are generated only for the scored questions. After you manually grade the remaining questions, run the item analysis again: statistics are then generated for those questions, and the test summary statistics are updated.
Only graded attempts are used in item analysis calculations. Attempts in progress are ignored until they are submitted and you run the item analysis report again.