This section expands on a few particularly popular themes related to the application of Lertap 5.

In visual item analysis you'll see an example of the use of response trace lines. Haladyna & Rodriguez (2013), among others, have suggested that these trace lines are a very useful way to judge how well cognitive items have performed, an alternative to looking at tables of item statistics.
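
If you'd like to experiment with the idea outside Lertap, here is a minimal Python sketch of how empirical trace lines can be formed: respondents are sorted into score groups, and the proportion choosing each option is plotted against group. The file name, the column names ('score' and 'Q1'), and the use of five groups are assumptions made only for this example; this is not Lertap's own code.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical data file: one row per respondent, with the total test score
# in 'score' and the option selected on item 1 ('A', 'B', 'C', or 'D') in 'Q1'.
data = pd.read_csv("responses.csv")

# Divide respondents into five score groups of roughly equal size.
data["group"] = pd.qcut(data["score"], q=5, labels=False, duplicates="drop")

# Proportion of each group choosing each option: the empirical trace lines.
trace = data.groupby("group")["Q1"].value_counts(normalize=True).unstack(fill_value=0)

trace.plot(marker="o")
plt.xlabel("Score group (low to high)")
plt.ylabel("Proportion choosing option")
plt.title("Item 1 response trace lines")
plt.show()

When an item is behaving well, the trace for the keyed option rises from the lowest to the highest score group while the distractor traces fall.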

Mastery tests often use one or more cut-scores to determine whether or not candidates have met minimum performance criteria. Lertap has had support in this area for decades. Read more.
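
By way of illustration only, and not as a description of Lertap's own procedure, classifying candidates against cut-scores can be as simple as the Python sketch below; the cut-score values and category labels are invented.

def classify(percent, cuts=((85.0, "distinction"), (70.0, "master"))):
    """Assign the first category whose (assumed) cut-score the candidate meets."""
    for cut, label in sorted(cuts, reverse=True):   # try the highest cut-score first
        if percent >= cut:
            return label
    return "non-master"

print([classify(p) for p in (55.0, 70.0, 91.2)])   # ['non-master', 'master', 'distinction']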

How item responses and test scores vary over groups of respondents is the subject of two epistles: one covers DIF, differential item functioning, while the other gets into such things as boxplots for comparing group score distributions.
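
The two epistles describe Lertap's own methods; purely as a taster, the Python sketch below computes the Mantel-Haenszel odds ratio commonly used as a DIF index, stratifying respondents on their total test score. The record layout and the tiny dataset are invented for illustration.

from collections import defaultdict
from math import log

def mh_odds_ratio(records):
    """Mantel-Haenszel common odds ratio for one item.
    records: (group, item_correct, total_score) tuples, where group is
    'ref' or 'focal' and item_correct is 1 (right) or 0 (wrong).
    Strata are formed from the total test score."""
    strata = defaultdict(lambda: {"ref": [0, 0], "focal": [0, 0]})   # [right, wrong]
    for group, correct, total in records:
        strata[total][group][0 if correct else 1] += 1
    num = den = 0.0
    for counts in strata.values():
        r_ref, w_ref = counts["ref"]
        r_foc, w_foc = counts["focal"]
        n = r_ref + w_ref + r_foc + w_foc
        num += r_ref * w_foc / n
        den += r_foc * w_ref / n
    return num / den   # a real analysis would guard against a zero denominator

# Invented records for one item, followed by the ETS "delta" transformation.
records = [("ref", 1, 25), ("focal", 0, 25), ("ref", 1, 25), ("focal", 1, 25),
           ("ref", 0, 18), ("focal", 0, 18), ("ref", 1, 18), ("focal", 1, 18)]
mh_d_dif = -2.35 * log(mh_odds_ratio(records))
print(round(mh_d_dif, 2))   # values far from zero suggest the item favours one group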

Cheating?  Have test respondents perhaps been able to "share" answers? Read more.
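
The epistle explains what Lertap itself offers. Just to illustrate the general idea, the Python sketch below counts, for each pair of respondents, the items on which both selected the same wrong option; the answer key and responses are invented.

from itertools import combinations

def matching_wrong_answers(responses, key):
    """responses: dict of respondent id -> list of selected options.
    key: the correct option for each item.
    Returns, for every pair, the number of items on which both chose the same wrong option."""
    pairs = {}
    for (id1, r1), (id2, r2) in combinations(responses.items(), 2):
        pairs[(id1, id2)] = sum(1 for a, b, k in zip(r1, r2, key) if a == b and a != k)
    return pairs

key = ["A", "C", "B", "D", "A"]
responses = {"s1": ["A", "C", "D", "D", "B"],
             "s2": ["A", "C", "D", "A", "B"],
             "s3": ["B", "C", "B", "D", "A"]}
print(matching_wrong_answers(responses, key))

A count well above what is typical for the group is a reason to look more closely, never proof of answer sharing by itself.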

Test reliability is a classic topic in measurement. This epistle delves further into this time-tested subject.
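
Coefficient alpha is the classic index in this area. As a minimal sketch, and not a substitute for Lertap's own output, alpha can be computed from a persons-by-items matrix of item scores like this:

from statistics import pvariance

def coefficient_alpha(item_scores):
    """item_scores: one row per respondent, one column per item."""
    k = len(item_scores[0])                      # number of items
    totals = [sum(row) for row in item_scores]   # each respondent's total score
    item_vars = [pvariance([row[i] for row in item_scores]) for i in range(k)]
    return (k / (k - 1)) * (1 - sum(item_vars) / pvariance(totals))

# Tiny invented example: four respondents, three dichotomously scored items.
data = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]
print(coefficient_alpha(data))   # 0.75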

Lertap began its career as a tool for applying "CTT", classical test theory, to analyze responses to tests and surveys. But it also supports "IRT", item response theory.
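
As a small point of contrast, the Python sketch below evaluates the item response function at the heart of one common IRT model, the two-parameter logistic; the item parameters are made up for illustration.

from math import exp

def p_correct(theta, a, b):
    """Two-parameter logistic model: the probability that a person of ability
    theta answers an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + exp(-a * (theta - b)))

# An invented item: moderate discrimination (a = 1.2), average difficulty (b = 0.0).
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(p_correct(theta, a=1.2, b=0.0), 3))

Unlike CTT's proportion-correct difficulty, which depends on the group that happened to sit the test, the b parameter is modelled as a property of the item itself.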

We all know that tests and surveys commonly use multiple-choice question formats. This topic discusses ways to score "supply" questions, ones where respondents write (or type) their answers.
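
The topic itself looks at the available approaches; purely as an illustration of the simplest automated one, the Python sketch below matches a typed answer against a list of acceptable responses. The item and the acceptable answers are invented.

def score_supply(answer, acceptable, points=1):
    """Award 'points' if the typed answer matches any acceptable response,
    ignoring case and surrounding whitespace; otherwise award zero."""
    return points if answer.strip().lower() in {a.lower() for a in acceptable} else 0

# Hypothetical short-answer item: "What is the capital of Australia?"
acceptable = ["Canberra"]
print(score_supply("  canberra ", acceptable))   # 1
print(score_supply("Sydney", acceptable))        # 0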