NCVER has just released a paper by Francesca Beddie. It focuses on RTOs' views about independent validation of assessment and the practices they use.
It points out that while independent validation of assessment is driven by regulatory requirements and a compliance mentality, it can play a constructive role in RTO governance and continuous improvement.
Getting assessments right?
In an assessment-driven, competency-based system like VET, the quality of assessment is really important, but getting it ‘right’ isn’t easy. This latest paper from NCVER, entitled “Begin with the end: RTO practices and views on independent validation of assessment” and authored by Francesca Beddie, reminds us of ASQA’s definitions of assessment moderation and validation (so have a look at those).
NCVER’s press release about the paper points out that:
“… independent validation or moderation of assessment is not a driver of quality but rather a mechanism for identifying recommendations for improvements to the assessment tool, assessment process or assessment outcome. It can also build stronger relationships between RTOs and employers and serve to improve the industry relevance of training.”
However, the interviews Francesca conducted in the course of the project revealed that independent moderation and validation also “entail consideration of the diverse clients and business models of the various types of providers, and that a one-size-fits-all approach is not a workable solution.”
NCVER’s press release about this paper also noted that there is “currently an appetite among RTOs for change that values excellence and professionalism in training outcomes.”
These views were accompanied by the desire for more professional development for RTO staff and more opportunities for them to share experiences about assessment, and its moderation and validation. Another article in this issue expands on this theme.
Making things better.
One big message is the need to move away from a compliance mentality, which “can lead to over-assessment, for example, the repeated assessment of elements common to more than one unit of competency.” Indeed, the paper notes that “some [are] arguing for centrally produced and regulated assessment tools.”
The paper also maintains that the value of validation “would be much enhanced if less paperwork were involved”: providers see the evidence-collecting and reporting burden as onerous. It also suggests that professional judgement is not valued as highly as it should be, and that there is a lack of clarity and flexibility about the use of technologies to assist training and assessment.
Training packages don’t help either: they are voluminous, and many practitioners don’t understand how to unpack and use them. In addition, approaches to training and assessment are “often determined with an eye on the auditor not the trainer or student.”
The good news is that the paper concludes:
“There is a mood for a change that values excellence and professionalism while preserving the national system of defining occupational standards.”
However, Craig Robertson, writing in a recent TAFE Directors Australia newsletter, points to the need for trade-offs to enable providers to validate and improve practice more readily. This, he suggests, requires de-cluttering training specifications and freeing up delivery modes and options.
The real answer, he suggests, is greater trust, extended in particular to those providers that deserve it. That becomes more feasible when trusted RTOs are able to self-assure.
There’s more if you want it!
Francesca’s paper has two supporting documents: a literature review containing an annotated timeline spanning 2001 to 2020, and a desktop investigation of international approaches used to foster employer engagement in validation and moderation in quality vocational education and training. The latter draws on information from the United Kingdom, Europe and New Zealand.