Publication:
Sources of validity evidence for an internal medicine student evaluation system: An evaluative study of assessment methods

dc.contributor.author: Chirayu Auewarakul
dc.contributor.author: Steven M. Downing
dc.contributor.author: Uapong Jaturatamrong
dc.contributor.author: Rungnirand Praditsuwan
dc.contributor.other: Mahidol University
dc.contributor.other: University of Illinois at Chicago
dc.contributor.other: Faculty of Medicine, Siriraj Hospital, Mahidol University
dc.date.accessioned: 2018-06-21T08:30:09Z
dc.date.available: 2018-06-21T08:30:09Z
dc.date.issued: 2005-03-01
dc.description.abstract:
BACKGROUND: Medical students' final clinical grades in internal medicine are based on the results of multiple assessments that reflect not only the students' knowledge but also their skills and attitudes.
OBJECTIVE: To examine the sources of validity evidence for internal medicine final assessment results comprising scores from 3 evaluations and 2 examinations.
METHODS: The final assessment scores of 8 cohorts of Year 4 medical students in a 6-year undergraduate programme were analysed. The final assessment scores consisted of scores in ward evaluations (WEs), preceptor evaluations (PREs), outpatient clinic evaluations (OPCs), general knowledge and problem-solving multiple-choice questions (MCQs), and objective structured clinical examinations (OSCEs). Sources of validity evidence examined were content, response process, internal structure, relationship to other variables, and consequences.
RESULTS: The median generalisability coefficient of the OSCEs was 0.62. The internal consistency reliability of the MCQs was 0.84. OSCE scores correlated well with WE, PRE and MCQ scores, with observed (disattenuated) correlations of 0.36 (0.77), 0.33 (0.71) and 0.48 (0.69), respectively. WE and PRE scores correlated better with OSCE scores than with MCQ scores. Sources of validity evidence including content, response process, internal structure and relationship to other variables were shown for most components.
CONCLUSION: There is sufficient validity evidence to support the use of various types of assessment scores for final clinical grades at the end of an internal medicine rotation. Validity evidence should be examined for any final student evaluation system in order to establish the meaningfulness of the student assessment scores. © Blackwell Publishing Ltd 2005.
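Note: disattenuated correlations such as those quoted above are conventionally obtained with Spearman's correction for attenuation. As an illustrative sketch only (the abstract does not state which reliability estimates were paired with each correlation), the standard formula is

\hat{\rho}_{XY} = \frac{r_{XY}}{\sqrt{r_{XX}\, r_{YY}}}

where r_{XY} is the observed correlation between two score types and r_{XX}, r_{YY} are their reliability estimates, for example the MCQ internal consistency of 0.84 or the median OSCE generalisability coefficient of 0.62 reported in the results.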
dc.identifier.citation: Medical Education. Vol. 39, No. 3 (2005), 276-283
dc.identifier.doi: 10.1111/j.1365-2929.2005.02090.x
dc.identifier.issn: 0308-0110
dc.identifier.other: 2-s2.0-15244351260
dc.identifier.uri: https://repository.li.mahidol.ac.th/handle/20.500.14594/17059
dc.rights: Mahidol University
dc.rights.holder: SCOPUS
dc.source.uri: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=15244351260&origin=inward
dc.subject: Medicine
dc.subject: Nursing
dc.subject: Social Sciences
dc.title: Sources of validity evidence for an internal medicine student evaluation system: An evaluative study of assessment methods
dc.type: Article
dspace.entity.type: Publication
mu.datasource.scopus: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=15244351260&origin=inward
