provide an aggregate view of all reviewer scores and basic statistics on those scores

- show standard deviation in the main applicant grid so the OA can see if a student has skewed review results (see the sketch below)
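
To make the request concrete, here is a minimal sketch (plain Python, with made-up applicant IDs, reviewer names, and scores; none of this is existing AW functionality) of the per-applicant statistics such a grid column could surface: review count, mean reviewer score, and standard deviation, where a large standard deviation flags an applicant whose reviewers disagreed.

    from statistics import mean, pstdev

    # Hypothetical export shape: applicant -> {reviewer: score}
    scores_by_applicant = {
        "A1001": {"rev_smith": 72, "rev_jones": 88, "rev_lee": 45},
        "A1002": {"rev_smith": 90, "rev_jones": 85, "rev_lee": 87},
    }

    def applicant_stats(scores_by_applicant):
        """Per-applicant review count, mean score, and standard deviation."""
        rows = []
        for applicant, scores in scores_by_applicant.items():
            values = list(scores.values())
            rows.append({
                "applicant": applicant,
                "reviews": len(values),
                "mean_score": round(mean(values), 2),
                # A large standard deviation means the reviewers disagreed on this applicant.
                "std_dev": round(pstdev(values), 2),
            })
        return rows

    for row in applicant_stats(scores_by_applicant):
        print(row)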

We just received kind of an interesting question/request from Bob Holland over at MATC. His Help Desk ticket is as follows:

Hi Kat,

Our college President served as a scholarship reviewer in the recently completed (and archived) scholarship cycle. He's asking how his scores compare to the other reviewers within his "reviewer group". How do I access that information now that it has been archived?

Thanks!

Bob

Essentially, there's not a great way to get at this information in the system at the moment. You can look up individual reviewer scores on an applicant-by-applicant basis, but there's no way to pull ALL the individual scores at once for an overall comparison. (Additionally, there's no way to view how one particular reviewer's scores stack up against other reviewers in the same group.)

If the opportunity hadn't been archived and the applications hadn't yet been rejected, I would have suggested he log into the reviewer portal to grab a list of his scores and at least compare it against the opportunity Applications tab's Reviewer Score column -- but again, that's not quite what he wants, and it isn't possible in this specific situation.
  • Deleted User
  • Feb 3 2016
  • Reviewed: Voting Open
Client Name (shard name): cvtc, matc, uoregon
User: Opportunity Admin, System Admin
Functional Unit: Reviews, Grids
  • Deleted User commented
    October 16, 2018 20:31

    See similar use case from uoregon below:

    Scoring: We really need a way to ‘normalize’ or ‘standardize’ the rubric scores provided by Reviewers. No matter how many in-depth trainings we do, no matter how structured our rubrics are, and no matter how much help text we provide, there is always such a variance in the way people score. For example, if Reviewer1 gives all their scores in the 10-60 range (out of 100) and Reviewer2 gives all their scores in the 50-90 range, then the applications scored by Reviewer2 will always end up getting awarded at higher rates than anyone who fell into Reviewer1’s list. This is incredibly unfair to the students. In our previous scholarship system, we would never sort on raw scores; we would always use a standard-deviation-type calculation to put everyone’s scores on the same scale (see the sketch after this comment).
     
    Because we couldn’t do this in AW last year, I ended up exporting all the raw Reviewer scores into Excel and then converting each reviewer’s scores into a Percentile Score. That way, I could sort on the Percentile and see everyone in the Top-10th Percentile or Top-20th Percentile. For this reason, we didn’t do any of our awarding in the system; we did it all in Excel and then manually entered it into Banner. But last year, we only had one opportunity per Review Group (for our centrally-administered awards). This year we plan to have review groups review on the Conditional level, and then we will use that for multiple auto-match opportunities, so we won’t be able to use our ‘cheat’ system in Excel.
     
    The feedback we have received from the Help Desk team is just to add better training or add more help text. But I really think that this is missing the point. The way AW looks at review scores is inherently unfair to students – unless we have a system where every reviewer reads every scholarship. This may work for smaller schools, but would never work for our large school.
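
For reference, below is a rough sketch (plain Python, with made-up reviewer names and scores; this is not existing AW functionality) of the two calculations described in the comment above: converting each reviewer's raw rubric scores to z-scores so that a harsh reviewer and a generous reviewer land on the same scale, and converting them to percentile ranks like the Excel workaround.

    from statistics import mean, pstdev

    # Hypothetical raw export shape: reviewer -> {applicant: raw rubric score out of 100}
    raw_scores = {
        "reviewer1": {"A1": 15, "A2": 40, "A3": 55},   # tends to score in the 10-60 range
        "reviewer2": {"B1": 55, "B2": 75, "B3": 88},   # tends to score in the 50-90 range
    }

    def standardize(raw_scores):
        """Express each score as a z-score relative to that reviewer's own mean and spread."""
        standardized = {}
        for reviewer, scores in raw_scores.items():
            values = list(scores.values())
            mu, sigma = mean(values), pstdev(values)
            for applicant, score in scores.items():
                z = 0.0 if sigma == 0 else (score - mu) / sigma
                standardized[(reviewer, applicant)] = round(z, 2)
        return standardized

    def percentile_ranks(raw_scores):
        """Express each score as a percentile rank within that reviewer's own list."""
        ranks = {}
        for reviewer, scores in raw_scores.items():
            values = list(scores.values())
            n = len(values)
            for applicant, score in scores.items():
                below = sum(1 for v in values if v < score)
                ranks[(reviewer, applicant)] = 100.0 if n == 1 else round(100 * below / (n - 1), 1)
        return ranks

    # Sorting on either output compares applicants across reviewers on a common scale,
    # e.g. to pull everyone in the Top-10th or Top-20th Percentile.
    print(standardize(raw_scores))
    print(percentile_ranks(raw_scores))

The z-score version corresponds to the "standard deviation type calculation" mentioned above; the percentile version mirrors the Excel workaround.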