Medical students’ perspectives on a novel international adaptive progress test

Introduction: Computerised adaptive testing (CAT) tailors test difficulty to the individual learner, using an algorithm to select each question based on previous responses. Between 2018 and 2021, eight European medical schools took part in a study to develop an Online Adaptive International Progress Test. Here we consider study participants' feedback to evaluate the acceptability of adaptive versus non-adaptive testing.
Methods: Study participants were students from across Europe, at all stages of undergraduate medical education and with varying levels of prior experience of progress testing. They sat remotely invigilated tests on the QuizOne® platform and completed feedback questionnaires on their experiences and perceptions of adaptive and non-adaptive tests.
Results: Overall satisfaction with the organisation and delivery of the online tests was high, regardless of previous experience with progress testing. Responses to questions probing the appropriateness of test level and length differed between adaptive and non-adaptive tests. There was a high level of agreement that the adaptive test was a good measure of personal knowledge and increased participants' motivation to study.
Discussion: Adaptive algorithms adjust test difficulty for individual students in real time, promoting engagement through appropriately targeted questions and, in turn, leading to positive perceptions of test length and relevance. Findings were independent of participants' country of origin, stage of study or prior experience with progress testing, suggesting that computerised adaptive testing may represent a significant step towards personalised assessment in medical education internationally.
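
To illustrate the adaptive selection described above, a minimal sketch is given below. The QuizOne® platform's actual item-selection and scoring rules are not detailed in this abstract, so this Python example is an assumption: it uses a Rasch (1PL) item response model, selects the unanswered item whose difficulty is closest to the provisional ability estimate, and re-estimates ability after each response. All names (rasch_prob, next_item, update_theta, run_adaptive_test) are hypothetical and for illustration only.

```python
import math
import random


def rasch_prob(theta, b):
    """Probability of a correct response under a Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))


def next_item(theta, difficulties, answered):
    """Select the unanswered item whose difficulty is closest to the current
    ability estimate (this maximises Fisher information under the Rasch model)."""
    candidates = [i for i in range(len(difficulties)) if i not in answered]
    return min(candidates, key=lambda i: abs(difficulties[i] - theta))


def update_theta(theta, responses, difficulties, lr=0.5, steps=25):
    """Re-estimate ability by gradient ascent on the Rasch log-likelihood
    of all responses collected so far."""
    for _ in range(steps):
        grad = sum(u - rasch_prob(theta, difficulties[i]) for i, u in responses)
        theta += lr * grad / len(responses)
    return theta


def run_adaptive_test(difficulties, respond, test_length=20):
    """Core adaptive loop: select an item, record the response, re-estimate ability."""
    theta, responses, answered = 0.0, [], set()
    for _ in range(test_length):
        i = next_item(theta, difficulties, answered)
        answered.add(i)
        responses.append((i, respond(difficulties[i])))
        theta = update_theta(theta, responses, difficulties)
    return theta


if __name__ == "__main__":
    random.seed(1)
    bank = [random.gauss(0.0, 1.0) for _ in range(200)]   # simulated item difficulties
    true_theta = 1.2                                       # simulated examinee ability
    respond = lambda b: int(random.random() < rasch_prob(true_theta, b))
    print(f"estimated ability: {run_adaptive_test(bank, respond):.2f} (true: {true_theta})")
```

Under these assumptions, the test converges on items near the examinee's ability, which is the mechanism proposed above for the positive perceptions of test length and relevance.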
