Edited on Tue May-20-08 10:22 AM by HereSince1628
My point was that ordinal data can actually be compared.
There certainly are issues with the validity of the evaluation devices instructors use, which end up pooled into summative data. Listening to students coming through my first-year biology and zoology courses, it is clear that in high school they have been exposed to complex systems of assessment. It is common for these students to expect a course to include attendance; participation points; homework; group and individual inquiry-based work submitted as writing projects, oral reports, and/or PowerPoint presentations; objective (lol!) exams; standardized national exams; and the devil's playmate, 'extra credit.'

Unfortunately, among the end-users of the 'grades' (students, parents, graduation committees, admissions committees for graduate and professional programs, and a few employers) there is little concern over whether this array of components can be comparably assessed, let alone pooled, with similar contributions of variance, into the single categorical score that is supposed to typify the student's performance in a course. There is even less concern among the end-users that a student assessment be diagnostic and well suited to guide student development at the collegiate/university level.
Setting all that aside and turning to a different topic: it seems to me that the article above, and the proposed solution referred to in its title, show how tradition places blinkers on the way student-performance data get summarized.
Why should the summative performance assessment be based on the arithmetic mean of the constituent component assessments (which are themselves not equivalent in type, validity, or contribution of variance)? Everyone knows averages are skewed by outliers. Why isn't the assessment based on the median? Why is there no measure of dispersion? Why not assign grades based on interquartile ranges of the underlying assessment data? Don't the end-users of grades want to know both how a student performs and how consistent a performer the student is? (No! They don't seem to want that at all.)
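To make the outlier point concrete, here is a minimal sketch using Python's standard statistics module. The score list is invented for illustration (a single missed assignment among otherwise consistent high-80s work); it is not drawn from any real gradebook.

```python
from statistics import mean, median, quantiles

# Hypothetical component scores for one student; the lone zero
# (a missed assignment) is the outlier.
scores = [88, 91, 85, 90, 87, 0]

avg = mean(scores)    # arithmetic mean, dragged down by the zero
med = median(scores)  # robust to the single outlier
q1, q2, q3 = quantiles(scores, n=4)  # quartiles; IQR = q3 - q1

print(f"mean   = {avg:.1f}")   # 73.5
print(f"median = {med:.1f}")   # 87.5
print(f"IQR    = {q3 - q1:.1f}")  # a rough dispersion measure
```

Under a conventional 90/80/70 cutoff scheme, the mean (73.5) maps this student to a C while the median (87.5) maps the same record to a B, and neither single number reports the dispersion that distinguishes a consistent performer from an erratic one.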
With the myriad ways one could construct a grading system, why mandate one that ties naked arithmetic averages to categorical scores? I suspect it is because the end-consumers of grades want it that way. They want one value that sums up and communicates a judgement of a student's performance. They think in terms of performance 'on average,' with no nuance about the various ways 'typical' performance might be described. And, perhaps most importantly, they (especially parents and relatives) want a reported measure that works across generations. I suspect that collectively the end-consumers want it just like it is, and that they want it that way even more than they want its structure and underlying principles to be useful to educators or mathematically sound. The folks outside education want tradition, albeit with students hoping, if not praying, for a bit of bias in their favor.