Date of Award

2022

Document Type

Thesis

Degree Name

Master of Arts (M.A.)

Department

Learning, Leadership and Change

First Advisor

Rachelle Kisst Hackett

First Committee Member

Allen Wong

Second Committee Member

Fawaz Alzoubi

Abstract

The purpose of this study was to investigate and analyze sources of validity evidence derived from the data (i.e., scores) of the comprehensive, clinical competency-focused components of the assessment system at one school of dentistry, through a retrospective analysis of student data. Assessments in dental education require evidence of validity to support the proposed interpretations of their scores. The approach to evaluating this evidence is presented as an argument, grounded in theory and logic, organized around the four inferences in Kane's view of validity: scoring, generalization, extrapolation, and implication. Based on the available data sets, we selected and organized specific sources of evidence to serve this purpose. Our primary analytic approach used a multitrait-multimethod (MTMM) matrix and a multiple regression model, and the analysis combined data from three cohorts to provide more reliable estimates. The analyses examined three sources of validity evidence – reliability, construct validity, and predictive criterion-related validity – in this specific assessment context that support the extrapolation component of the argument. The other three components were addressed through qualitative evaluation rather than psychometric data. Results from this preliminary study indicated that the assessment components employed are generally supported by evidence of psychometric reliability and validity. The overall internal reliability of the OSCE tests, estimated with Cronbach's alpha, was nonetheless lower than accepted research standards. Findings on construct validity using the MTMM matrix showed evidence of convergent and divergent validity based on the intercorrelations among the assessment components.
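Cronbach's alpha, the reliability coefficient named above, can be computed directly from an examinees-by-items score matrix. The sketch below is illustrative only, using a made-up score matrix rather than the thesis's OSCE data; the function name and example values are assumptions for demonstration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 3-examinee, 2-item score matrix (not the thesis data)
scores = np.array([[2, 1],
                   [3, 2],
                   [4, 4]])
alpha = cronbach_alpha(scores)
```

When the items are perfectly consistent (identical columns), the formula yields alpha = 1; values below common research benchmarks (often cited around .70) would correspond to the lower-than-standard OSCE reliability the abstract reports.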
Using multiple regression modeling, the results supported evidence of incremental predictive validity, as shown by the change in the proportion of variance accounted for in the criterion variable across the series of analyses. The reported evidence is vital for sharing knowledge and contributes to the overall validity argument for the assessment program, but it represents only a preliminary step and identifies a basis for future research.
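The incremental-validity logic described above compares the variance explained (R²) by a baseline regression model with that of a model adding the predictor of interest; the gain, ΔR², is the incremental contribution. A minimal sketch with simulated data follows; the variable names (didactic, osce, criterion) and coefficients are hypothetical assumptions, not the thesis's actual predictors or results.

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an ordinary least squares fit of y on X (intercept added)."""
    Xi = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    resid = y - Xi @ beta
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Simulated data: a baseline predictor, an added clinical-assessment score,
# and a criterion partly driven by both (all values are illustrative).
rng = np.random.default_rng(0)
n = 120
didactic = rng.normal(size=n)
osce = 0.5 * didactic + rng.normal(size=n)
criterion = 0.4 * didactic + 0.4 * osce + rng.normal(size=n)

r2_base = r_squared(didactic[:, None], criterion)            # baseline model
r2_full = r_squared(np.column_stack([didactic, osce]), criterion)  # added predictor
delta_r2 = r2_full - r2_base  # incremental variance explained (Delta R^2)
```

Because the models are nested, R² can only stay the same or rise when a predictor is added; a meaningfully positive ΔR² is what supports an incremental predictive validity claim.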

Pages

137

Available for download on Thursday, August 08, 2024
