Objective structured clinical examination
An objective structured clinical examination (OSCE) is a modern type of examination often used in the health sciences (e.g. midwifery, occupational therapy, orthoptics, optometry, medicine, physician assistants/associates, physical therapy, radiography, rehabilitation medicine, nursing, pharmacy, dentistry, chiropractic, paramedicine, podiatry, veterinary medicine). It is designed to test performance and competence in clinical skills such as communication, clinical examination, medical procedures and prescription, exercise prescription, joint mobilisation and manipulation techniques, radiographic positioning, radiographic image evaluation and interpretation of results. It is a hands-on approach to assessment that engages candidates with the key factors driving clinical decision-making, challenges them to be innovative, reveals errors in case handling, and encourages evidence-based decision making in preparation for real-world responsibilities.
An OSCE usually comprises a circuit of short stations (typically 5–10 minutes each, although some use up to 15), in which each candidate is examined on a one-to-one basis by one or two impartial examiners and either real or simulated patients (actors or electronic patient simulators). Each station has a different examiner, as opposed to the traditional method of clinical examinations, in which a candidate would be assigned to one examiner for the entire examination. Candidates rotate through the stations, completing all the stations on their circuit, so that every candidate takes the same stations. The OSCE is considered an improvement over traditional examination methods because the stations can be standardised, enabling fairer peer comparison, and because complex procedures can be assessed without endangering patients' health.
As the name suggests, an OSCE is designed to be objective, structured and clinical. Objective: all candidates are assessed using exactly the same stations (although if real patients are used, their signs may vary slightly) with the same marking scheme. Candidates earn marks for each step on the mark scheme that they perform correctly, which makes the assessment of clinical skills objective rather than subjective. Structured: stations in OSCEs have a very specific task. Where simulated patients are used, detailed scripts are provided to ensure that the information they give is the same for all candidates, including the emotions the patient should display during the consultation. Instructions are carefully written to ensure that the candidate is given a very specific task to complete, and the OSCE is carefully structured to include parts from all elements of the curriculum as well as a wide range of skills. Clinical: the OSCE is designed to apply clinical and theoretical knowledge. Where theoretical knowledge is required, for example answering questions from the examiner at the end of a station, the questions are standardized: the candidate is only asked questions that are on the mark sheet, and no marks are awarded for answers to any others.
Marking in OSCEs is done by the examiner. Occasionally written stations, for example writing a prescription chart, are used; these are marked like written examinations, again usually with a standardized mark sheet. One of the ways an OSCE is made objective is by having a detailed mark scheme and a standard set of questions. For example, a station concerning the demonstration to a simulated patient of how to use a metered dose inhaler (MDI) would award points for specific actions that are performed safely and accurately. The examiner can often vary the marks depending on how well the candidate performed each step. At the end of the mark sheet, the examiner often has a small number of marks that they can use to weight the station depending on performance, and if a simulated patient is used, the patient is often asked to add marks depending on the candidate's approach. At the end, the examiner is often asked to give a "global score": a subjective rating of the candidate's overall performance that does not take into account how many marks the candidate scored. The examiner is usually asked to rate the candidate as pass/borderline/fail, or sometimes as excellent/good/pass/borderline/fail. This rating is then used to determine the individual pass mark for the station.
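Global ratings of this kind are commonly combined with checklist scores to set the station pass mark, for instance via the borderline regression method: candidates' checklist scores are regressed on the examiners' global ratings, and the pass mark is the checklist score predicted at the "borderline" rating. A minimal sketch in Python; the numeric coding of the ratings and all candidate data are invented for illustration, not taken from any particular examination:

```python
# Borderline regression standard setting (illustrative sketch).
# Each candidate has a checklist score and an examiner global rating.
# The station pass mark is the checklist score predicted by a simple
# linear regression at the "borderline" global rating.

ratings = {"fail": 0, "borderline": 1, "pass": 2}  # illustrative coding

# (global rating, checklist score out of 20) -- invented example data
candidates = [
    ("fail", 6), ("fail", 8), ("borderline", 11),
    ("borderline", 12), ("pass", 15), ("pass", 18),
]

xs = [ratings[r] for r, _ in candidates]
ys = [score for _, score in candidates]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least-squares slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Pass mark = predicted checklist score at the borderline rating.
pass_mark = intercept + slope * ratings["borderline"]
print(round(pass_mark, 1))  # prints 11.7
```

Because the regression is fitted over the whole cohort that took the station, a harder station, where even borderline candidates score low, automatically receives a lower pass mark.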
Many centres allocate each station an individual pass mark. The sum of the pass marks of all the stations determines the overall pass mark for the OSCE. Many centres also impose a minimum number of stations that must be passed, which ensures that a consistently poor performance cannot be compensated for by a good performance on a small number of stations.
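This aggregation rule, an overall pass mark equal to the sum of the station pass marks plus a minimum number of stations passed, can be sketched as follows. The station names, pass marks and threshold are all invented for illustration:

```python
# Combining station results into an overall OSCE outcome (illustrative).
# A candidate must reach the overall pass mark (the sum of the station
# pass marks) AND pass at least `min_stations_passed` stations.

station_pass_marks = {"history": 12.0, "examination": 10.5,
                      "prescribing": 9.0, "communication": 11.0}

def osce_result(scores, min_stations_passed=3):
    """Return True if the candidate passes the OSCE."""
    overall_pass_mark = sum(station_pass_marks.values())  # 42.5 here
    total = sum(scores.values())
    stations_passed = sum(
        1 for name, score in scores.items()
        if score >= station_pass_marks[name]
    )
    return total >= overall_pass_mark and stations_passed >= min_stations_passed

# A high total cannot compensate for failing too many stations:
print(osce_result({"history": 20, "examination": 20,
                   "prescribing": 1, "communication": 2}))    # prints False
print(osce_result({"history": 13, "examination": 11,
                   "prescribing": 9.5, "communication": 11}))  # prints True
```

The first candidate's total (43) exceeds the overall pass mark, but only two stations are passed, so the minimum-stations rule fails them.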
There are, however, criticisms that OSCE stations can never be truly standardized and objective in the same way as a written exam. It has been known for different patients/actors to afford more assistance, and for different marking criteria to be applied. Finally, it is not uncommon at certain institutions for members of teaching staff to be known to students (and vice versa) as examiners. This familiarity does not necessarily affect the integrity of the examination process, although it is a deviation from anonymous marking. However, in OSCEs that use several circuits of the same stations, marking has repeatedly been shown to be very consistent, which supports the view that the OSCE is a fair clinical examination.
OSCEs have traditionally been marked on optical mark reader sheets. With the advent of tablet computers and wi-fi, a number of systems have been developed to mark examinations electronically, eliminating the need for printing and scanning the thousands of mark sheets used in each OSCE circuit. Research comparing traditional paper-based OSCEs with an online marking tool (OMIS) found significant improvements in terms of accuracy, financial feasibility and time-costs, while the validity of the OSCE exam itself can be measured and improved using generalizability theory alongside student results.
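Generalizability theory, mentioned above as a way to measure exam quality, partitions score variance in a candidates-by-stations design and asks how much of it reflects real differences between candidates rather than error. A minimal sketch for a fully crossed design with one observation per cell; the score matrix is invented for illustration:

```python
# Generalizability (G) coefficient for a persons x stations design
# (illustrative sketch; the score matrix is invented).
# Rows: candidates, columns: stations.
scores = [
    [14, 12, 15, 13],
    [10,  9, 11, 10],
    [16, 15, 17, 16],
    [11, 10, 12, 11],
]
n_p, n_s = len(scores), len(scores[0])

grand = sum(sum(row) for row in scores) / (n_p * n_s)
p_means = [sum(row) / n_s for row in scores]
s_means = [sum(scores[p][s] for p in range(n_p)) / n_p for s in range(n_s)]

# Two-way ANOVA sums of squares (interaction confounded with error).
ss_p = n_s * sum((m - grand) ** 2 for m in p_means)
ss_res = sum(
    (scores[p][s] - p_means[p] - s_means[s] + grand) ** 2
    for p in range(n_p) for s in range(n_s)
)
ms_p = ss_p / (n_p - 1)
ms_res = ss_res / ((n_p - 1) * (n_s - 1))

var_p = max((ms_p - ms_res) / n_s, 0.0)  # person (true-score) variance
var_res = ms_res                         # residual (error) variance

# Relative G coefficient: proportion of observed-score variance that
# reflects real differences between candidates, given n_s stations.
g = var_p / (var_p + var_res / n_s)
print(round(g, 3))
```

A G coefficient near 1 indicates that candidates are ranked consistently across stations; adding more stations shrinks the error term and raises the coefficient.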
Electronic scoring with OMIS increases the fairness of result calculations through the application of regression analysis as a standard-setting method, thereby highlighting issues around inter-rater reliability and the validity of OSCE station forms.
Preparing for OSCEs is very different from preparing for an examination on theory. In an OSCE, clinical skills are tested rather than pure theoretical knowledge. It is essential to learn correct clinical methods, and then practice repeatedly until one perfects the methods whilst simultaneously developing an understanding of the underlying theory behind the methods used. Marks are awarded for each step in the method; hence, it is essential to dissect the method into its individual steps, learn the steps, and then learn to perform the steps in a sequence. For example, when performing an abdominal examination, a student is instructed to first palpate for the liver, and then to palpate for the spleen. This seemingly meaningless order becomes relevant when it is considered that those with enlarged livers often also have enlarged spleens.
Most universities have clinical skills labs where students have the opportunity to practise clinical skills, such as taking blood or mobilizing patients, in a safe and controlled environment. It is often very helpful to practise in small groups with colleagues, setting a typical OSCE scenario and timing it, with one person role-playing a patient, one person doing the task and, if possible, one person either observing and commenting on technique or even role-playing the examiner using a sample mark sheet. Many OSCE textbooks have sample OSCE stations and mark sheets that can be helpful when studying in this manner. In doing this the candidate is able to get a feel for running to time and working under pressure.
In many OSCEs the stations are extended using data interpretation. For example, the candidate may have to take a brief history of chest pain and then interpret an electrocardiogram. It is also common to be asked for a differential diagnosis, to suggest which medical investigations the candidate would like to do or to suggest a management plan for the patient.
- Harden et al. (22 February 1975). "Assessment of clinical competence using objective structured examination". Br Med J 1 (5955): 447–51. http://www.bmj.com/cgi/content/abstract/1/5955/447
- Ross, M; Carroll, G; Knight, J; Chamberlain, M; Fothergill-Bourbonnais, F; Linton, J (January 1988). "Using the OSCE to measure clinical skills performance in nursing". Journal of Advanced Nursing 13 (1): 45–56. PMID 3372885.
- Kropmans, Thomas; O'Donovan, Barry GG; Cunningham, David; Murphy, Andrew W; Flaherty, Gerard; Nestel, Debra; Dunne, Fidelma P (1 January 2012). "An Online Management Information System for Objective Structured Clinical Examinations". Computer and Information Science 5 (1): 38–48. Retrieved 9 May 2013.
- Kropmans, Thomas; Griffin, Liam; Cunningham, David; Setyonugroho, Winny; Field, Catherine Anne; Flynn, Eva; Kennedy, Kieran M (15 February 2015). "Back to the Future: Electronic Marking of Objective Structured Clinical Examinations and Admission Interviews Using an Online Management Information System in Schools of Health Sciences". Health & Medical Informatics 6 (1): 1–6.