Export of Data and Analytics reports are available in AT&S to help you analyze, offline, how your students are doing and how effective your test questions are. To access the reports for any of your assessments, click Export of Data under Grading.
Three reports are included: Export Summary, Item Analysis, and Assessment Statistics.
The Export Summary lets you download each student's results in a spreadsheet for offline review.
Item analysis can be a powerful tool to help instructors improve their tests and ensure that they are valid measures of learning.
When doing item analysis, the group of students taking a test is divided into upper, middle, and lower groups on the basis of their scores on the test. This division is essential for providing information about the operation of distracters (incorrect options) and for computing an easily interpretable index of discrimination. It has long been accepted that optimal item discrimination is obtained when the upper and lower groups each contain twenty-seven percent of the total group, and this is the convention AT&S follows.
The difficulty index is one of the most useful item analysis statistics. It measures the proportion of students who answered a test item correctly.
If the item difficulty is above .75, the item is easy; if it is below .25, the item is difficult.
If more students select an incorrect answer than the correct answer, it’s also an indicator that the item should be reviewed closely to find out if there are answer choices that should be replaced.
Items that are too hard or too easy don’t contribute much to test reliability.
Etudes calculates the difficulty for you, using the data from each student’s BEST test submission.
How the difficulty index is calculated:
• After a test is scored, we arrange the results from highest to lowest.
• The upper 27 percent and the lower 27 percent of scores are separated out.
• For each item, we count the number of students in the upper and lower groups who answered it correctly.
• We convert these counts to the percentage of the upper 27 percent who got the item correct and the percentage of the lower 27 percent who got it correct.
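The grouping step above can be sketched in a few lines of Python. This is only an illustration of the procedure as described, not Etudes' internal code; the function name and variables are ours.

```python
# Sketch of the grouping step: sort scored submissions from highest to
# lowest and split off the top and bottom 27 percent.
# (Illustrative names only -- not Etudes' actual implementation.)

def split_groups(scores, fraction=0.27):
    """Return (upper, lower) groups of scores from a list of test scores."""
    ranked = sorted(scores, reverse=True)   # arrange results highest to lowest
    k = round(len(ranked) * fraction)       # size of each 27% tail group
    upper = ranked[:k]                      # upper 27 percent
    lower = ranked[-k:]                     # lower 27 percent
    return upper, lower

# With 10 students, each group holds round(10 * 0.27) = 3 scores.
upper, lower = split_groups([95, 88, 84, 80, 76, 70, 65, 60, 55, 42])
```

From here, counting how many students in each group answered a given item correctly gives the frequencies used in the formulas below.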
Then, we can compute the Difficulty Index (Dif.In) by adding the number of correct responses in the top group (RU) to the number of correct responses in the lower group (RL), and then dividing this sum by the total number of students in both groups. The result is expressed as a proportion, called the p-value; e.g., a 67% difficulty is expressed as p=0.67.
Dif.In = (RU + RL) / (total number of students in the top and bottom groups)
Another way to calculate the difficulty is by adding the correct percentage of responses by both groups and dividing by 2.
Dif.In = (PU + PL) / 2

PU = percentage of the upper 27 percent who got the item correct
PL = percentage of the lower 27 percent who got the item correct
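Both forms of the calculation can be written out as a short sketch. The helper names are ours; the arithmetic follows the two formulas above, and when the upper and lower groups are the same size the two forms give the same p-value.

```python
# Two equivalent ways to compute the Difficulty Index (Dif.In), as a
# p-value between 0 and 1. (Illustrative helpers, not Etudes code.)

def difficulty_from_counts(ru, rl, group_size):
    """Dif.In = (RU + RL) / total students in both groups."""
    return (ru + rl) / (2 * group_size)

def difficulty_from_percentages(pu, pl):
    """Dif.In = (PU + PL) / 2, converted from a percentage to a p-value."""
    return (pu + pl) / 2 / 100

# Example: 25 students per group; 20 of the upper group and 10 of the
# lower group answered the item correctly (PU = 80%, PL = 40%).
p1 = difficulty_from_counts(20, 10, 25)       # (20 + 10) / 50 = 0.6
p2 = difficulty_from_percentages(80.0, 40.0)  # (80 + 40) / 2 = 60% -> 0.6
```

A p-value of 0.6 would fall between the .25 and .75 cutoffs, i.e., a moderately difficult item.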
The Discrimination Index refers to how well an assessment differentiates between high and low scores. It helps you differentiate between students who know the material and those who do not. Further, this statistic looks at the relationship between a student’s performance on the given item (correct or incorrect) and the student’s performance (overall score) in the test.
Etudes calculates the discrimination index for you, using the data from each student’s BEST submission.
To get the discrimination, we subtract the percentage of correct responses of the lower group (PL) from the percentage of correct responses from the upper group (PU), and divide by 100.
DI = (PU – PL) / 100
Another way to calculate the discrimination index is by subtracting the total correct responses of the lower group (RL) from the total correct responses of the upper group (RU), and then dividing by half the number of students involved in both groups.
DI = (RU – RL) / (half the number of students from both groups)
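As with difficulty, both forms of the discrimination calculation can be sketched directly from the formulas above (again with our own illustrative names; when both groups are the same size the two forms agree):

```python
# Two equivalent ways to compute the Discrimination Index (DI).
# (Illustrative helpers, not Etudes code.)

def discrimination_from_percentages(pu, pl):
    """DI = (PU - PL) / 100, using percent-correct in each group."""
    return (pu - pl) / 100

def discrimination_from_counts(ru, rl, group_size):
    """DI = (RU - RL) / (half the students in both groups)."""
    return (ru - rl) / group_size  # group_size is half of the two groups combined

# Same example as before: PU = 80%, PL = 40%; RU = 20, RL = 10,
# with 25 students per group.
di1 = discrimination_from_percentages(80.0, 40.0)  # (80 - 40) / 100 = 0.4
di2 = discrimination_from_counts(20, 10, 25)       # (20 - 10) / 25  = 0.4
```

A DI of 0.4 is positive: more strong students than weak students got the item right, which is what you want to see.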
How does this information help you?
For a highly discriminating item, the students who answer the item correctly also do well on the overall test, whereas the students who answer it incorrectly tend to perform poorly on the overall test.
You should expect the high-performing students to select the correct answer for each question more often than the low-performing students. If this is true, the item has a positive discrimination index (between 0 and 1). The higher the value, the better that test item is able to discriminate between strong and weak students. This indicates that students who received a high total score chose the correct answer for that item more often than students who had a lower overall score.
Negative values indicate that weak students get the item right at a higher rate than strong students. If you see that more of the low-performing students got a specific item correct, then the item has a negative discrimination index (between -1 and 0). When low-performing students are more likely to get an item correct, the question should be carefully examined and probably deleted or revised. It may be that some of the distracters are confusing, ambiguous, or the item has an incorrect answer key.
A good question may be one with a moderate difficulty level (e.g., .60) and a positive discrimination index (e.g., 0.8).
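The review guidance above can be collapsed into a simple rule of thumb. The thresholds come from the text (.25/.75 difficulty cutoffs, the sign of the discrimination index); the function and labels are our own sketch, not Etudes' report logic.

```python
# Rule-of-thumb item review, combining the two statistics discussed above.
# (Our own sketch; thresholds follow the text, labels are illustrative.)

def review_item(difficulty, discrimination):
    """Return a list of review notes for one test item."""
    notes = []
    if difficulty > 0.75:
        notes.append("easy item")
    elif difficulty < 0.25:
        notes.append("difficult item")
    if discrimination < 0:
        # Weak students outperform strong ones on this item.
        notes.append("negative discrimination: review or remove")
    return notes or ["looks reasonable"]

review_item(0.60, 0.8)    # the "good question" profile from the text
review_item(0.80, -0.15)  # easy item that weak students get right more often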
Have a look at this sample worksheet from the Florida Center for Instructional Technology, as a quick example. It should help you see why question #9 with a negative discrimination should be removed from the test. Weak students are getting it right!
(Reference: Florida Center for Instructional Technology, Classroom Assessment, http://fcit.usf.edu/assessment/index.html)
The Frequency column lists the number of students who left the question unanswered. A '0' frequency means that all students chose an answer. If students choose to leave questions unanswered, it could be an indication that the item should be reviewed.
Additional worksheets (tabs) in the spreadsheet contain the raw answers for each type of objective question. These item frequency analysis reports provide detailed frequency information for each question on an assessment: how many students selected each answer (for true/false and multiple choice) and what common answers they gave for fill-in or matching questions.
This report provides instructors with the names, student IDs, start and finish times of every student's assessment, the auto score (for objective tests), the final score, and whether scores have been released to students. If a student has submitted an assessment multiple times, every submission's details are included in the report.