Item analysis provides statistics on overall test performance and individual test questions. This data helps you recognize questions that might be poor discriminators of student performance. You can use this information to improve questions for future test administrations or to adjust credit on current attempts.
Roles with grading privileges (such as instructors, graders, and teaching assistants) access item analysis in three locations within the assessment workflow. It is available in the contextual menu for a:
- Test deployed in a content area.
- Deployed test listed on the Tests page.
- Grade Center column.
For best results, run item analyses on single-attempt tests after all attempts have been submitted and all manually graded questions are scored. Interpret the item analysis data carefully and with the awareness that the statistics are influenced by the number of test attempts, the type of students taking the test, and chance errors.
You can run item analyses on tests that include single or multiple attempts, question sets, random blocks, auto-graded question types, and questions that need manual grading. For tests with manually graded questions that have not yet been assigned scores, statistics are generated only for the scored questions. After you manually grade those questions, run the item analysis again; statistics for the manually graded questions are then generated and the test summary statistics are updated.
- Go to one of the following locations to access item analysis:
- A test deployed in a content area.
- A deployed test listed on the Tests page.
- A Grade Center column for a test.
- Access the test's contextual menu.
- Select Item Analysis.
- In the Select Test drop-down list, select a test. Only deployed tests are listed.
- Click Run.
- View the item analysis by clicking the new report's link under the Available Analysis heading or by clicking View Analysis in the status receipt at the top of the page.
- The Item Analysis page appears.
- Edit Test provides access to the Test Canvas.
- The Test Summary provides statistics on the test, including:
- Possible Points: The total number of points for the test.
- Possible Questions: The total number of questions in the test.
- In Progress Attempts: The number of students currently taking the test who have not yet submitted it.
- Completed Attempts: The number of submitted tests.
- Average Score: The average score reported for the test in the Grade Center. Scores denoted with an asterisk (*) indicate that some attempts are not yet graded and that the average score might change after all attempts are graded.
- Average Time: The average completion time for all submitted attempts.
- Discrimination: This area shows the number of questions that fall into the Good (greater than 0.3), Fair (between 0.1 and 0.3), and Poor (less than 0.1) categories. A discrimination value is listed as Cannot Calculate when the question's difficulty is 100% or when all students receive the same score on a question. Questions with discrimination values in the Good and Fair categories are better at differentiating between students with higher and lower levels of knowledge. Questions in the Poor category are recommended for review.
- Difficulty: This area shows the number of questions that fall into the Easy (greater than 80%), Medium (between 30% and 80%), and Hard (less than 30%) categories. Difficulty is the percentage of students who answered the question correctly. Questions in the Easy or Hard categories are recommended for review and are indicated with a red circle.
Only graded attempts are used in item analysis calculations. If any attempts are in progress, they are ignored until they are submitted and you run the item analysis report again.
- You can also view statistics for an individual question by clicking its link.
- The Question Details page opens with statistics for that particular question.