Measurement in learning is not an end in itself, but a means to increase the value of education.
BY: EDWIN MUSONYE AND MARIANA KEMEI
A notable view on measurement comes from John Parson who, although focusing on business culture in his piece ‘Measuring to learn whilst learning to measure’, observed that the majority of measurement efforts centre on learning to measure – that is, on tools and techniques. He warns that measurement should not be just about numbers and systems, but also about having the right attitude towards why the measuring is being done in the first place. His advice is that we need to measure to learn rather than to report.
Therefore, while measurement in learning is indispensable – as it is in other industries – caution must be taken to avoid its misuse or misinterpretation. Sometimes measurement is done for its own sake and consequently brings no real gain. Good practice lies in building and using measurement systems to improve the quality of service.
Few fields supply richer sources of easy-to-collect data than learning. Several measurement approaches and tools are utilised in education, and they are sometimes confused or misused for lack of understanding of their exact application. The two main approaches are assessment and evaluation, whilst the main tools are scoring and grading.
Assessment is commonly seen as classroom research that provides useful feedback for the improvement of teaching and learning. It is feedback from the student to the instructor about the student’s learning. Assessment is therefore interactive and direct. Non-gradable aspects of learning, such as a student’s lack of interest in a subject, can be captured by assessment.
Evaluation, on its part, uses methods and measures to judge student learning and understanding of the material for purposes of grading and reporting. Evaluation is feedback from the instructor to the student (and other stakeholders) about the student’s learning. However, some institutions also ask students to evaluate their teachers using a standard form.
One of the weaknesses in Kenya’s education system is performance ranking based on scores as opposed to grades. Although the fixation on score-based ranking has reduced in recent times with the abolition of announcing the best-performing schools in national exams, the practice continues through announcing the best-performing students. Grading emphasises bands such as A, B, C and D, whilst scoring emphasises marks.
There are two major shortfalls in ranking by marks and scores. The first is that it results in numbering students, a long-standing tradition that is assumed to be reliable. Unfortunately, it is only a superficial way of assessing a student’s performance, since being first in a classroom does not necessarily mean good performance. Few stakeholders realise that a student can be number one with a low grade, or equally be last but with a high grade.
The second deficit is that it promotes personalised competition and rivalry instead of a shared pursuit of knowledge. Smart students may be tempted to study in isolation so as not to share knowledge with others, for fear of being dislodged from the top position. Even worse, examination malpractices have become rampant as a result.
Grading, on its part, has positive outcomes. It still provides ranking, but through clustering of similar abilities. Grades or performance quartiles are banded so that we know students’ capabilities without over-glorifying or demeaning anyone at a personal level.
Grading also emphasises standardisation: if an ‘A’ grade is 80 marks, every student in every class and school is measured against the same threshold. This means a student always carries home a result that reflects universal performance and not an arbitrary, localised perception of brilliance.
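The banding described above can be sketched as a simple mapping. This is only an illustration: the 80-mark ‘A’ cut-off comes from the text, while the other thresholds and the sample scores are assumptions.

```python
def grade(score):
    """Map a mark out of 100 to a grade band.

    Only the 80-mark 'A' threshold is taken from the article;
    the other cut-offs are illustrative assumptions.
    """
    if score >= 80:
        return "A"
    elif score >= 65:
        return "B"
    elif score >= 50:
        return "C"
    else:
        return "D"

# Two hypothetical classes with different score distributions: the top
# student in a weak class can rank first yet carry a low grade, while the
# last student in a strong class can still carry a high grade.
weak_class = [45, 40, 38]
strong_class = [90, 85, 82]
print(grade(max(weak_class)))    # first in class, low grade: D
print(grade(min(strong_class)))  # last in class, high grade: A
```

The sketch also shows the earlier point about score ranking: a position in class says nothing about where a student sits against the universal grade thresholds.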
School managers and instructors can monitor improvements and declines by observing whether the aggregate is rising or falling. Similarly, grades in individual subjects can be continuously counted and compared to previous outcomes. Furthermore, students clustered in the lower grades can be collectively and effectively assisted.
As stated earlier, measurement can mislead or fail if not well conceived. For instance, if the content in the syllabus is substandard or outdated, then evaluations may not effectively capture real, useful learning. The challenge of creating quality learning that produces graduates with tangible capabilities is real. A student can get an ‘A’ in an obsolete subject or topic.
Again, if an institution lacks essential facilities, then the measurement results cannot accurately depict the full potential of its learners. Lastly, if measurement practices do not provide a platform for making improvements, then they are useless. When learning results have no constructive application, they are reduced to status-media content of song and dance. Another problem arises when measuring helps us identify highly exceptional learners – what do we do with them? There are initiatives to measure learning on a wider scale, such as the Uwezo Initiative – an NGO that periodically assesses learning effectiveness across many schools in Kenya and documents the findings. However, there is little evidence that stakeholders act on its insightful observations.
Edwin Musonye and Mariana Kemei are Technical Communications practitioners working with Document Point. Email: firstname.lastname@example.org