It is important that evidence take precedence over anecdote and opinion as the basis on which educators make informed decisions. Evidence, in the form of data, can be used to see the effects of teaching on learning.
It can be used to determine whether intervention is required, to measure the effects of intervention in progress, and to assess whether an educational intervention made has achieved the intended outcome.
The anti-evidence movement, most notably the anti-NAPLAN, anti-testing, anti-accountability advocates, favours anecdote and opinion as a basis for non-intervention and inaction. However, subjectivity when deciding how to assist underperforming students is likely to repeat and reinforce disadvantage, rather than remedy it.
Moreover, the anti-evidence movement is afraid of being subject to measurement and, accordingly, having to justify how teaching is being done and resources being applied.
Low expectations characterise the anti-evidence movement. Pop psychologists, seeking to exploit the vulnerability of anxious parents, fuel dissent and distrust while adding to the very stress families already feel. Such psychologists should have their advice subjected to an ‘evidence-of-efficacy’ test.
It is important that professional educators ask what purpose evidence serves when seeking to improve educational outcomes. Very basic skills tests, such as the National Assessment Program - Literacy and Numeracy (NAPLAN) in Australia, give insight into whether institutions are merely presiding over students passing time or are allocating resources effectively to boost the learning outcomes of their students.
This is important in the context of varying outcomes between schools and highly varied outcomes between socioeconomic groups.
Disconcertingly, some teachers report that prior to the annual NAPLAN (basic skills) tests, they switched their teaching to ‘teach to the test’. Parents should be concerned that any educator would need to teach basic skills specifically to meet the needs of a basic skills test – rather than teaching and reinforcing basic skills as an integrated and intrinsic aspect of everyday teaching and learning.
And despite this narrowed focus by some teachers, in the order of 22% of students nationally still did not achieve above the lowest three bands of achievement in basic literacy and basic numeracy. How does the anti-evidence movement view this result? As grounds for ignoring or disparaging the test regime? “How dare we measure whether children can read or do basic mathematics,” their scepticism asserts.
Evidence is necessary for informed decision-making. When teachers are asked why we assess, the common answer is, “to understand what students know and what they can and cannot do.” What they forget is that when teachers assess, they should interpret the outcomes as a critique of their own teaching as well.
They could view students’ results through a lens that asks, “What could I, as a professional, do better? What works and what does not?” and so on. In this way, evidence is used to improve professional practice.
Moreover, evidence can displace anecdote as the preferred mode on which to base interventions.
Far too commonly, students are characterised academically by their behaviour, spelling, handwriting, school attendance or homework, rather than by their cognitive ability. In this regard, it may be time for educators to work more closely with educational psychologists to better understand thinking and cognition.
Intervention then could bypass the distractor of bad behaviour, incomplete work or poor spelling and could focus on what strategies can help students learn and build their self-efficacy and confidence.
Evidence as a basis for educational planning and strategy does entail more work for educators. Evidence requires students to undertake tasks, and for various elements or criteria to be subject to measurement.
Once measured, meaningful diagnosis (not just averages, which often obscure more than they reveal) can assist with clarifying where specific weaknesses lie. Once issues have been identified, educators can plan a program of remediation or can set performance targets towards which they will work and for which they will be accountable.
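The point about averages can be made concrete with a small sketch. The scores below are entirely hypothetical (not real NAPLAN data, and the benchmark of 40 is an arbitrary illustration): two cohorts share an identical mean, yet a simple diagnostic count reveals very different needs.

```python
# Hypothetical scores only, chosen to illustrate the point:
# two cohorts with the same average but very different needs.
cohort_a = [50, 50, 50, 50]   # uniformly middling performance
cohort_b = [90, 90, 10, 10]   # half excelling, half struggling


def mean(scores):
    """Arithmetic mean of a list of scores."""
    return sum(scores) / len(scores)


def below_benchmark(scores, cutoff):
    """Count students scoring below a (hypothetical) benchmark."""
    return sum(1 for s in scores if s < cutoff)


print(mean(cohort_a))                 # 50.0
print(mean(cohort_b))                 # 50.0 -- identical averages
print(below_benchmark(cohort_a, 40))  # 0
print(below_benchmark(cohort_b, 40))  # 2 -- students the average hid
```

The average alone suggests the two groups are interchangeable; only the per-student breakdown shows where intervention is actually needed.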
Of course, as students progress through the educational system, they would be expected to take a share of the responsibility for doing academic work. However, educators need to instil a love of learning as well as real skills; otherwise there is a danger of conflating age with capacity when skills have not been taught, which leads to underachievement and leaves students confused and misunderstood.
Evidence requires feedback. Meaningful feedback does not take the form of “great work” but rather identifies elements of performance and comments on those elements. For example, “great work” may be translated as, “your use of structure gives this writing order and cohesion”. Similarly “wonderful work” may be “wonderful” on account of the level of detail, the use of language and the extent of the research.
Evidence requires educators to state explicitly what they are measuring and then to evaluate it against the criteria.
As for the anti-assessment brigade, I often wonder why they do not advocate against driving tests. Surely young drivers feel stress at having to meet basic safety standards; presumably these advocates should object to basic driving knowledge being assessed at all. “Pity the pedestrian” would, it seems, be their stance. That there is a test helps focus attention and resources on what needs to be known to assure that a basic level of driving safety is met.
Surely, in the scheme of things, a fundamental education is worthy of the slightly greater complexity of gathering rich data for informed decision-making.