Does the CASC examination meet the standard? Assessing utility for a high stakes postgraduate psychiatry clinical assessment

Dr Musa Basseer Sami 
Laurel House, Canterbury

Introduction

The Clinical Assessment of Skills and Competencies (CASC) is the final and clinical exam for Membership of the Royal College of Psychiatrists (MRCPsych). The CASC is a 16-station Objective Structured Clinical Examination (OSCE) undertaken near the end of Core Psychiatry Training, or by middle-grade doctors who have cleared the previous portions of the Royal College exams. The examination consists of eight seven-minute single stations and eight ten-minute paired stations. Single stations relate to a single task, whereas paired stations allow for the testing of linked tasks (such as history taking from a patient and then explanation of diagnosis to a relative). The aim of the examination is to test higher-order competencies in history taking, mental state examination, risk assessment, cognitive or physical examination, case discussion and difficult communication across the range of recognised psychiatric sub-specialities (general adult, old-age, forensic, learning disability, child and adolescent and psychotherapy) (1).

The CASC is a high stakes examination allowing for entry to registrar training and thus for an eventual pathway to becoming a consultant psychiatrist. The examination is a summative pass/fail assessment and not designed for formative purposes. Feedback given is limited: candidates who pass are only informed of how many stations they passed whereas candidates who fail are informed of generic areas of concern in the stations they failed.

A typical learner who would undertake the examination is a Core Psychiatry Trainee who has completed Foundation posts and at least two years of psychiatric training at Senior House Officer level. To be entered for the examination, such a learner must have passed their most recent Annual Review of Competence Progression (ARCP), which is dependent upon a sufficient number of successful Workplace-Based Assessments (WPBAs). The learner must also demonstrate a degree of competency in psychotherapy techniques via WPBA. Learners who are middle-grade doctors must demonstrate clinical experience and competency equivalent to that of core trainees in order to be eligible for the examination (2). The learner will also have passed the written components of the MRCPsych exam (Paper A and Paper B), which are summative knowledge-based tests covering neuroscientific principles and the psychiatric and psychological principles of assessment and treatment. In Paper B, half of the paper is devoted to a critical appraisal skills exercise. Thus the candidate who prepares for the CASC examination has already cleared a number of hurdles demonstrating a thorough psychiatric grounding.

Critique of assessment requirements

Justification for the CASC exam can be understood in its historical context. The traditional psychiatric clinical examination had consisted of long and short cases. Despite the validity of seeing and working through real clinical cases, there were various criticisms: the possibility of examiner bias; variation between candidates in the cases and case difficulty encountered; and the inability to sample more than a small area of the curriculum (3, 4). A move to standardised patients and standardised marking criteria was implemented in 2003. However, the marking criteria for the OSCEs were criticised as a 'check-list approach', unable to fully test higher-order competencies (3). Consequently the CASC examination was introduced in 2008, based upon the principles of (i) standardised patients using actors, (ii) standardised criteria for marking, (iii) the ability to assess higher-order competencies by using global ratings and marking domains, and (iv) the ability to assess widely across the curriculum by adhering to a blueprint for stations (1). These measures were thus aimed at making the CASC a reliable and valid exam.

The Royal College, as part of its Quality Assurance process, states that the features ensuring reliability include (a) standardisation of consultant examiners and training workshops, (b) a standardisation exercise for role players and examiners, (c) the use of observers and (d) the calculation of psychometric measures and reliability coefficients by station and exam day (5). Standardisation across exams would likely increase test-retest reliability, whereas (d) is a measure of internal consistency (i.e. that the stations within an exam reliably measure the same set of skills). However, it is difficult to determine whether inter-rater reliability is maintained, as there is only one examiner per station. It is difficult to critique this further, as very little reliability data is available in the published literature, a problem noted by previous authors (3, 6). The released cumulative results report examined all 3,171 attempts over the period 2008-2010, noting a significant reduction in pass rates with number of attempts (p<0.1) (7): the pass rate was 49.0 per cent at the first attempt, decreasing to 19.8 per cent by the fourth. This suggests a degree of test-retest reliability, i.e. that those who fail the test are likely to do so again. However, this is not a direct test of reliability, as preparation undertaken between attempts acts as a confounding variable.

Is the CASC examination valid? As Marwaha (3) puts it: "there is no 'gold standard' by which other assessments could be compared". However, the CASC can be viewed as an assessment of the clinical components of the curriculum (8). There is thus good evidence for the content validity of the CASC, as it is based upon a blueprint approach which samples widely from the curriculum (6). Thompson would also argue that a comprehensive test of the key areas of the curriculum, using a previously accepted approach (i.e. OSCE stations), has good face validity. However, this last point is open to debate. In one survey of candidates at an MRCPsych preparation course, all candidates (n=18) agreed that the exam was fair, and most agreed that 'a competent psychiatry ST3 would pass' and 'an incompetent ST3 would fail' (9); this, however, was a group of highly motivated individuals who would be taking the exam within the month. A larger online survey of trainees (n=110) found mixed views: less than half (48 per cent) agreed with the statements that 'CASC examines the required competencies to progress to higher training' and that 'CASC scenarios reflect the real life situations faced in clinical practice'. Interestingly, examiners (n=22) held the CASC in higher regard, with 59 per cent agreeing that the CASC examines the required competencies for progression and 77 per cent agreeing that it reflects real-life scenarios (10). The debate regarding real-life scenarios often relates to the time pressure of a 7-10 minute station: in real life, a full psychiatric assessment would take an hour, akin to a long case. Thus 'there is a risk of trainees who are competent in routine clinical work failing the exam, whereas those who may be clinically inept ... may pass it' (4). It is, however, difficult to quantify this, as there are no published studies of predictive validity (i.e. whether passing the CASC exam predicts future psychiatric competence).
Furthermore, although the College states that its quality assurance process includes psychometric testing to assess the criterion validity of each station, it has not published the data to make this available for critique (5).

The feasibility of the examination also needs to be considered. Any one complete round of CASC examinations tests eight candidates across 16 stations using eight actors and eight examiners. As discussed, inter-rater reliability could be increased by having two examiners at each station; however, this would require twice as many examiners and is thus not a feasible option. Similarly, the validity of long cases is noted to increase when the cases are observed, and up to ten long cases have been estimated to be required to provide a reliable high-stakes examination at the final year of medical school (11). Indeed, some authors have called for a revival of such multiple cases to replace the CASC (4, 12). However, an assessment of around an hour would be required to be clinically akin to real life, and having multiple lengthy stations would render the examination impractical. Hence the elements of reliability, validity and feasibility require compromise to deliver a practicable assessment.

The CASC has an impact on the learning of trainees. Positive elements identified in preparing for the CASC include trainees' preparation for the exam, self-reflection on performance and the development of communication skills (13). CASC preparatory courses are valued by participants, and trainees appreciate that CASC and mock exams allow for the testing of complex cases with realistic simulated patients (9, 14). There has also been a focus on group learning and learning from peers (14). Notably, there appears to be no mention in the literature of the need for consultant supervision, nor of maximising exposure to clinical cases. A more critical view suggests the CASC is skewing psychiatric practice in trainees: 'Trainees have become unwilling and/or unable to assess, formulate and present whole cases... they have adapted their learning style to passing the exam... undertaking only those tasks which can be completed in 10 minutes' (4). This is an anecdotal observation, and further evidence is required to determine whether it reflects a more widespread concern.

The College details its quality assurance processes in the interest of transparency. However, there are some key areas where more transparency is required. Candidates who fail are provided only with generic feedback on the domains in which they failed; both examiners and candidates have noted that this lacks specificity and is thus of limited utility (10). Similarly, there is an increased failure rate among international medical graduates compared with UK and Irish medical graduates, who are two to three times as likely to pass, a statistically significant difference (7). Although it may be argued that this is due to the UK and Irish medical curricula preparing candidates better, this is only likely to be part of the picture, as within UK graduates there remains a statistically significant difference in pass rates between White (89.3 per cent) and non-White (74.7 per cent) candidates (7). These differences are observed across all parts of the MRCPsych examinations, including the written components, and have not been fully explained. This can lead to non-White and international medical graduates feeling that the exam is an assessment of language or cultural concepts that cannot be learnt, and therefore questioning its validity as a test of psychiatric competence.

Finally, where does this assessment fit into the requirements of the learner? The preceding written Papers A and B are focused on knowledge and critical appraisal skills. The CASC is the final high-stakes exam, and the trainee is thus required to demonstrate that they have merged the art and science of psychiatry to perform in an array of clinical situations. However, this exam does not mark the end of training, as a further three years of Higher Specialist Training are still required. Consequently this exam accredits clinical competency, allowing the trainee to focus on developing skills which are more difficult to examine, such as professionalism, leadership and team management. At this stage, passing the CASC examination effectively signals an acquired clinical competency in psychiatric examination to both trainee and society. However, there remains a gap in the formulation and management of complex cases, which cannot be demonstrated in a series of 7-10 minute assessments (4). Although workplace-based assessments were designed to address this gap, they have been widely criticised as a tick-box exercise and ineffectual; a fuller discussion of this is outside the scope of this article (15, 16).

Conclusion

The CASC is a structured assessment which, by ensuring wide curriculum coverage, aims to produce psychiatrists of the future with a holistic understanding of core psychiatric skills. The College goes to great lengths to ensure the reliability and validity of the CASC through a rigorous quality assurance process. However, there are gaps in the evidence, as the College has not published the full psychometric data. This article thus demonstrates the rigour of the exam, whilst accepting that the College could do more to make the process transparent. This can be an important conversation to have with learners who become focused on perceived unfairness in the exam, detracting from continuing to improve their psychiatric clinical skills.

References

  1. Royal College of Psychiatrists. MRCPsych CASC Blueprint. 2011. Available at: https://www.rcpsych.ac.uk/pdf/MRCPsych%20CASC%20Blueprint%20March%202011.pdf [Accessed: 1st October 2017]
  2. Royal College of Psychiatrists. Eligibility Criteria and Regulations for MRCPsych Written Papers and Clinical Assessment of Skills and Competencies (CASC). 2015. Available at: [Accessed: 1st October 2017].
  3. Marwaha S. Objective Structured Clinical Examinations (OSCEs), psychiatry and the Clinical Assessment of Skills and Competencies (CASC): same evidence, different judgement. BMC Psychiatry. 2011 May 16;11(1):85.
  4. Michael A, Rao R, Goel V. The long case: a case for revival? The Psychiatrist. 2013;37:377-81. doi: 10.1192/pb.bp.113.043588.
  5. Royal College of Psychiatrists. CASC Quality Assurance Process. 2011 Available at: https://www.rcpsych.ac.uk/pdf/CASC Quality Assurance Statement – July 2011.pdf [Accessed: 1st October 2017].
  6. Thompson CM. Will the CASC stand the test? A review and critical evaluation of the new MRCPsych clinical examination. The Psychiatrist. 2009 Apr 1;33(4):145-8.
  7. Bateman A. MRCPsych examinations: cumulative results 1997-2002. 2011. doi: 10.1192/pb.bp.105.009035 [Accessed: 21 March 2015].
  8. Royal College of Psychiatrists. Core Training in Psychiatry CT1-CT3 – A Competency Based Curriculum for Specialist Core Training in Psychiatry. 2013 Available at: http://www.rcpsych.ac.uk/pdf/Core Curriculum_FINAL Version_July2013_updatedFeb15 KM.pdf [Accessed: 1st October 2017].
  9. Whelan P, Lawrence-Smith G, Church L, Woolcock C, Meerten M, Rao R. Goodbye OSCE, hello CASC: a mock CASC course and examination. The Psychiatrist. 2009 Apr 1;33(4):149-53.
  10. Kamatchi R, Bandyopadhyay S, Jainer AK, Somashekar B, Marzanski M, Marwaha S. 3 years on: examiners' and candidates' views on the CASC (Clinical Assessment of Skills and Competencies). British Journal of Medical Practitioners. 2012 Dec 1;5(4).
  11. Wass V, Jones R, Van der Vleuten C. Standardized or real patients to test clinical competence? The long case revisited. Medical Education. 2001 Apr 22;35(4):321-5.
  12. Kashyap G, Sule A. MRCPsych CASC exam: is there a better choice? The Psychiatrist Online. 2012 May 1;36(5):197.
  13. Hussain A, Husni M. Preparing for the MRCPsych CASC-an insight based on experience. BJMP. 2010 Jun;3(2):55.
  14. McMullen I, Checinski K, Halliwell S, Maier M, Raji O, Rands G, Rao R. Peer observation in simulated CASC events and its effects on learning. The Psychiatrist Online. 2013 Mar 1;37(3):111-5.
  15. Sikdar S. WPBA or CASC/OSCE: where is it going wrong? The Psychiatrist. 2010;34:72-3. doi: 10.1192/pb.34.2.72b.
  16. Thomson AB, Harding D. CASC candidates need better preparation. The Psychiatrist Online. 2012 Aug 1;36(8):314-5.