
Dataset Information


Applying computerized-scoring models of written biological explanations across courses and colleges: prospects and limitations.


ABSTRACT: Our study explored the prospects and limitations of using machine-learning software to score introductory biology students' written explanations of evolutionary change. We investigated three research questions: 1) Do scoring models built using student responses at one university function effectively at another university? 2) How many human-scored student responses are needed to build scoring models suitable for cross-institutional application? 3) What factors limit computer-scoring efficacy, and how can these factors be mitigated? To answer these questions, two biology experts scored a corpus of 2556 short-answer explanations (from biology majors and nonmajors) at two universities for the presence or absence of five key concepts of evolution. Human- and computer-generated scores were compared using kappa agreement statistics. We found that machine-learning software was capable in most cases of accurately evaluating the degree of scientific sophistication in undergraduate majors' and nonmajors' written explanations of evolutionary change. In cases in which the software did not perform at the benchmark of "near-perfect" agreement (kappa > 0.80), we located the causes of poor performance and identified a series of strategies for their mitigation. Machine-learning software holds promise as an assessment tool for use in undergraduate biology education, but like most assessment tools, it is also characterized by limitations.

SUBMITTER: Ha M 

PROVIDER: S-EPMC3228656 | biostudies-literature | 2011

REPOSITORIES: biostudies-literature


Publications

Applying computerized-scoring models of written biological explanations across courses and colleges: prospects and limitations.

Ha Minsu, Nehm Ross H, Urban-Lurain Mark, Merrill John E

CBE Life Sciences Education | 2011-01-01 | Issue 4



Similar Datasets

| S-EPMC7720934 | biostudies-literature
| S-EPMC7453052 | biostudies-literature
| S-EPMC9051436 | biostudies-literature
| S-EPMC8243833 | biostudies-literature
| S-EPMC139024 | biostudies-literature
| S-EPMC3726226 | biostudies-other
| S-EPMC7911293 | biostudies-literature
| S-EPMC7691311 | biostudies-literature
| S-EPMC4273344 | biostudies-literature
| S-EPMC4247400 | biostudies-literature