Item analysis provides statistics on overall test performance and individual test questions. This data helps you recognize questions that might be poor discriminators of student performance. You can use this information to improve questions for future test administrations or to adjust credit on current attempts.

Roles with grading privileges (such as instructors, graders, and teaching assistants) access item analysis in three locations within the assessment workflow. It is available in the contextual menu for a:

  • Test deployed in a content area.
  • Deployed test listed on the Tests page.
  • Grade Center column.

You can run item analyses on deployed tests with submitted attempts, but not on surveys. Access previously run item analyses under the Available Analysis heading, or select a deployed test from the drop-down list and click Run to generate a new report. The new report's link appears under the Available Analysis heading and in the status receipt at the top of the page.

For best results, run item analyses on single-attempt tests after all attempts have been submitted and all manually graded questions are scored. Interpret the item analysis data carefully and with the awareness that the statistics are influenced by the number of test attempts, the type of students taking the test, and chance errors.

Step-by-step guide

You can run item analyses on tests that include single or multiple attempts, question sets, random blocks, auto-graded question types, and questions that need manual grading. For tests with manually graded questions that have not yet been assigned scores, statistics are generated only for the scored questions. After you manually grade questions, run the item analysis again. Statistics for the manually graded questions are generated and the test summary statistics are updated.

  1. Go to one of the following locations to access item analysis:
    • A test deployed in a content area.
    • A deployed test listed on the Tests page.
    • A Grade Center column for a test.
  2. Access the test's contextual menu.
  3. Select Item Analysis.
  4. In the Select Test drop-down list, select a test. Only deployed tests are listed.
  5. Click Run.
  6. View the item analysis by clicking the new report's link under the Available Analysis heading or by clicking View Analysis in the status receipt at the top of the page.
  7. The Item Analysis report opens.
  8. Edit Test provides access to the Test Canvas.
  9. The Test Summary provides statistics on the test, including:

    • Possible Points: The total number of points for the test.
    • Possible Questions: The total number of questions in the test.
    • In Progress Attempts: The number of students currently taking the test who have not yet submitted it.
    • Completed Attempts: The number of submitted tests.
    • Average Score: The average score reported for the test in the Grade Center. Scores denoted with an asterisk (*) indicate that some attempts are not yet graded and that the average score might change after all attempts are graded.
    • Average Time: The average completion time for all submitted attempts.
    • Discrimination: This area shows the number of questions that fall into the Good (greater than 0.3), Fair (between 0.1 and 0.3), and Poor (less than 0.1) categories. A discrimination value is listed as Cannot Calculate when the question's difficulty is 100% or when all students receive the same score on a question. Questions with discrimination values in the Good and Fair categories are better at differentiating between students with higher and lower levels of knowledge. Questions in the Poor category are recommended for review.
    • Difficulty: This area shows the number of questions that fall into the Easy (greater than 80%), Medium (between 30% and 80%), and Hard (less than 30%) categories. Difficulty is the percentage of students who answered the question correctly. Questions in the Easy or Hard categories are recommended for review and are indicated with a red circle.
      Only graded attempts are used in item analysis calculations. If there are attempts in progress, those attempts are ignored until they are submitted and you run the item analysis report again.

  10. You can also view statistics on an individual question by clicking the question's link.
  11. The Question Details page opens with statistics for that particular question.
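
To make the Test Summary categories above concrete, here is a small illustrative sketch of how the difficulty and discrimination classifications could be computed from raw attempt data. The category thresholds (Easy/Medium/Hard, Good/Fair/Poor, Cannot Calculate) come directly from this guide; using a Pearson item-total correlation for discrimination is an assumption for illustration, not Blackboard's documented formula.

```python
def difficulty(item_scores, max_points):
    """Difficulty = percentage of possible credit earned on a question.
    Thresholds per the guide: Easy > 80%, Medium 30%-80%, Hard < 30%."""
    pct = 100.0 * sum(item_scores) / (max_points * len(item_scores))
    if pct > 80:
        category = "Easy"
    elif pct >= 30:
        category = "Medium"
    else:
        category = "Hard"
    return pct, category

def discrimination(item_scores, total_scores):
    """Assumed approach: Pearson correlation between each student's score
    on the question and their total test score.
    Thresholds per the guide: Good > 0.3, Fair 0.1-0.3, Poor < 0.1."""
    n = len(item_scores)
    mean_i = sum(item_scores) / n
    mean_t = sum(total_scores) / n
    cov = sum((i - mean_i) * (t - mean_t)
              for i, t in zip(item_scores, total_scores))
    var_i = sum((i - mean_i) ** 2 for i in item_scores)
    var_t = sum((t - mean_t) ** 2 for t in total_scores)
    if var_i == 0 or var_t == 0:
        # All students earned the same score on the question (including
        # 100% difficulty), so no correlation can be computed.
        return None, "Cannot Calculate"
    r = cov / (var_i ** 0.5 * var_t ** 0.5)
    if r > 0.3:
        category = "Good"
    elif r >= 0.1:
        category = "Fair"
    else:
        category = "Poor"
    return r, category

# Hypothetical data: five graded attempts on a 1-point question.
item = [1, 1, 0, 1, 0]        # per-student question scores
totals = [9, 8, 4, 7, 3]      # per-student total test scores
print(difficulty(item, 1))     # 60% of students correct -> "Medium"
print(discrimination(item, totals))  # high item-total correlation -> "Good"
```

Note that, as the guide states, only graded, submitted attempts would feed into such a calculation; in-progress attempts are excluded.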




For questions or comments, contact the Computer Services Help Desk.