Moments of Contingency: How Black and Wiliam conceptualize formative assessment

Paul Black (left) and Dylan Wiliam

Posted by John Kleeman

I’ve always believed instinctively that assessment is the cornerstone of learning. I’ve recently read an interesting paper by the eminent Professors Paul Black and Dylan Wiliam that conceptualizes this powerfully.

In “Developing the theory of formative assessment”, published in 2009 in the journal Educational Assessment, Evaluation and Accountability, they describe how formative assessment creates “moments of contingency” in instruction – critical points where learning changes direction depending on the outcome of an assessment.

In their model, assessment gives you information to take decisions that direct learning, and so makes instruction and learning more effective than they would otherwise be. There are many paths that instruction can go down, and formative assessment helps people choose the right one.

Black and Wiliam’s formal definition of formative assessment describes how “evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited”.

Like Professor David Nicol, about whom I blogged earlier, they make the important point that formative assessment is not only instructor-led but also involves interaction with peers and self-assessment. Black and Wiliam have done most of their work in education, but their message resonates with the 70:20:10 model currently sweeping corporate learning. Increasingly we are realizing that interaction with learning peers is a critical part of learning: peers can give you feedback, questions or insights that help you learn. As a learner, you regulate your own learning and are responsible for it – and assessments help you decide how to adjust it.

The CAA Conference, 10 years on

Posted by Steve Lay

I had great fun at the recent CAA (Computer Assisted Assessment) Conference, hosted by the University of Southampton, UK. I’d like to thank the team there for taking the lead in organizing the event and opening a new chapter in its history. This conference builds on the success of the 12 previous CAA conferences hosted at Loughborough University. Although I didn’t go to the first event in 1997, I’ve attended on and off for the past 10 years.

I was given the job of summing up the conference and providing closing remarks. With just two days to read over 30 academic papers, I found myself searching for a tool to help me summarize the information quickly. After a little text processing with a Python script and the excellent TagCrowd tool, I came up with a tag cloud based on the top 50 terms from the abstracts of the papers presented at the conference:

[Tag cloud: top 50 terms from the 2010 conference abstracts]
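
For anyone who wants to try something similar, here is a minimal sketch of the kind of script I mean: it counts term frequencies across a folder of abstracts and prints the top 50, ready to paste into TagCrowd. (The "abstracts" folder layout and the stop-word list are illustrative assumptions, not my actual script.)

# Minimal sketch: count term frequencies across abstract files and
# print the top 50 terms, ready to paste into a tool like TagCrowd.
# The "abstracts" folder and the stop-word list are assumptions.
import re
from collections import Counter
from pathlib import Path

STOP_WORDS = {"the", "and", "of", "to", "in", "a", "an", "for",
              "is", "are", "on", "with", "this", "that", "we", "by"}

counts = Counter()
for path in Path("abstracts").glob("*.txt"):
    words = re.findall(r"[a-z]+", path.read_text(encoding="utf-8").lower())
    counts.update(w for w in words if len(w) > 2 and w not in STOP_WORDS)

for term, frequency in counts.most_common(50):
    print(term, frequency)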

Assessment obviously remains a key focus of this community, but I was also struck by the very technical language used: system, tools, design, computer and so on. It was equally interesting to see which words were missing. Traditionally I would have expected words like reliability and validity to feature strongly. Although summative assessment makes an appearance, formative assessment does not feature strongly enough to appear in the cloud. Clearly students, learners and the individual are important, but where is adaptivity or personalization?

It is interesting to compare this picture with a similar one taken from the abstracts of the papers in 2000, ten years ago.

[Tag cloud: top 50 terms from the 2000 conference abstracts]

An important part of our mission at Questionmark is learning from communities like this one and using that knowledge and those best practices to develop our software and services. During the conference I saw a range of presentations, covering everything from ideas we can apply right now to fascinating areas of research that point the way to future possibilities.

The conference was a great success, and planning for next year (5th-6th July 2011) has already started.  Check out the CAA Web site for the latest information.

Podcast: David Lewis on Large-Scale Online Assessments at the University of Glamorgan

Posted by Sarah Elkins

David Lewis of the University of Glamorgan has extensive experience with Questionmark Perception. I spoke with him recently about the large-scale implementation he has been working on at Glamorgan, where Questionmark is used for formative assessment, summative assessment and module evaluation. David also spoke about the training programs developed within the university and the collaboration with other higher education institutions in Wales, and he offered some great advice for anyone working with online assessments.

Soft Scaffolding and Other Patterns for Formative Assessment

Posted by Steve Lay

As someone involved in software development, I’m used to thinking about ‘patterns’ in software design.  Design patterns started life as a way of looking at the physical design of buildings.  More recently, they’ve been used to identify solutions to common design problems in software.  One of the key aspects of pattern use is that patterns are named, and these names can be used as a vocabulary to help designers implement solutions in software.
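
To make the idea of a shared vocabulary concrete, here is a tiny toy example of my own (not from the report) of one classic named software pattern, “Observer”: once you know the name, the shape of the code is almost implied.

# A minimal sketch of the classic "Observer" pattern (my own toy
# example): the name alone tells a designer that a subject keeps a
# list of subscribers and notifies them when something happens.
class Subject:
    def __init__(self):
        self._observers = []

    def subscribe(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer(event)

results = Subject()
results.subscribe(lambda event: print("Feedback shown to learner:", event))
results.notify("Question 3 answered correctly")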

So I was interested to see the technique discussed in the context of designing formative assessment by the recent JISC project on Scoping a Vision for Formative e-Assessment. In the final report, the authors document patterns for formative assessment as a way of bridging the gap between practitioners and those who implement software solutions to support them.

The patterns have wonderful names like “Classroom Display,” “Round and Deep” and “Objects To Talk With” that make me want to use them in my own communications.

To give an example of how one might apply the theory, let’s take a design problem identified in the report. Given that the point of formative assessment is to inform future learning activities, it is not surprising that in some environments outcomes are used too rigidly to determine the paths students take, resulting in a turgid experience. What you need, apparently, is “soft scaffolding,” which describes solutions that soften the restrictions on the types of responses or paths a student can take with a resource – for example, by providing free-text ‘other’ options in multiple-choice questions, or by replacing rigid navigation with recommendations and warnings.
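
To sketch what soft scaffolding might look like in code, here is a toy example of my own; the function names and flow are illustrative assumptions rather than anything taken from the report or from any particular product.

# Toy sketch of "soft scaffolding" (names and structure are my own):
# a multiple-choice question that also accepts a free-text "other"
# answer, and navigation that recommends and warns rather than
# forcing a single path.
def ask_mcq(prompt, options):
    print(prompt)
    for number, option in enumerate(options, start=1):
        print(f"  {number}. {option}")
    print(f"  {len(options) + 1}. Other (type your own answer)")
    choice = int(input("> "))
    if choice == len(options) + 1:
        return input("Your answer: ")  # soft: free text is accepted
    return options[choice - 1]

def choose_next_step(score, recommended, alternative):
    # Soft navigation: advise the student, but leave the choice open.
    if score < 0.5:
        print(f"We recommend '{recommended}' next.")
        print(f"Warning: '{alternative}' may be hard at this stage.")
    return input(f"Choose '{recommended}' or '{alternative}': ")

The point is that the assessment still guides the learner, but the guidance is advisory rather than mandatory.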

You can jump straight to the patterns themselves on the project wiki.