Show simple item record
dc.contributor.author | Lee, Eric | es_ES |
dc.contributor.author | Garg, Naina | es_ES |
dc.date.accessioned | 2020-06-09T07:11:30Z | |
dc.date.available | 2020-06-09T07:11:30Z | |
dc.date.issued | 2020-04-21 | |
dc.identifier.isbn | 9788490488119 | |
dc.identifier.issn | 2603-5871 | |
dc.identifier.uri | http://hdl.handle.net/10251/145789 | |
dc.description.abstract | Instructors in higher education frequently employ examinations composed of problem-solving questions to assess student knowledge and learning. But are student scores on these tests reliable? Surprisingly few have researched this question empirically, arguably because of perceived limitations in traditional research methods. Furthermore, many believe multiple-choice exams to be a more objective, reliable form of testing students than any other type. We question this widespread belief. In a series of empirical studies in 8 classes (401 students) in a finance course, we used a methodology based on three key elements to examine these questions: a true experimental design, more appropriate estimation of exam score reliability, and reliability confidence intervals. Internal consistency reliabilities of problem-solving test scores were consistently high (all > .87, median = .90) across different classes, students, examiners, and exams. In contrast, multiple-choice test scores were less reliable (all < .69). Recommendations are presented for improving the construction of exams in higher education. | es_ES |
dc.language | English | es_ES |
dc.publisher | Editorial Universitat Politècnica de València | es_ES |
dc.relation.ispartof | 6th International Conference on Higher Education Advances (HEAd'20) | |
dc.rights | Attribution - NonCommercial - NoDerivatives (by-nc-nd) | es_ES |
dc.subject | Learning | es_ES |
dc.subject | Educational systems | es_ES |
dc.subject | Teaching | es_ES |
dc.subject | Exams | es_ES |
dc.subject | Internal consistency | es_ES |
dc.subject | Reliability | es_ES |
dc.subject | Higher education | es_ES |
dc.title | Reliability of multiple-choice versus problem-solving student exam scores in higher education: Empirical tests | es_ES |
dc.type | Book chapter | es_ES |
dc.type | Conference paper | es_ES |
dc.identifier.doi | 10.4995/HEAd20.2020.11303 | |
dc.rights.accessRights | Open access | es_ES |
dc.description.bibliographicCitation | Lee, E.; Garg, N. (2020). Reliability of multiple-choice versus problem-solving student exam scores in higher education: Empirical tests. In 6th International Conference on Higher Education Advances (HEAd'20). Editorial Universitat Politècnica de València. (30-05-2020):1399-1407. https://doi.org/10.4995/HEAd20.2020.11303 | es_ES |
dc.description.accrualMethod | OCS | es_ES |
dc.relation.conferencename | Sixth International Conference on Higher Education Advances | es_ES |
dc.relation.conferencedate | June 02-05, 2020 | es_ES |
dc.relation.conferenceplace | València, Spain | es_ES |
dc.relation.publisherversion | http://ocs.editorial.upv.es/index.php/HEAD/HEAd20/paper/view/11303 | es_ES |
dc.description.upvformatpinicio | 1399 | es_ES |
dc.description.upvformatpfin | 1407 | es_ES |
dc.type.version | info:eu-repo/semantics/publishedVersion | es_ES |
dc.description.issue | 30-05-2020 | |
dc.relation.pasarela | OCS\11303 | es_ES |