"English-learning students’ scores on a state test designed to measure their mastery of the language fell sharply and have stayed low since 2018 — a drop that bilingual educators say might have less to do with students’ skills and more with sweeping design changes and the automated computer scoring system that were introduced that year.

English learners who used to speak to a teacher at their school as part of the Texas English Language Proficiency Assessment System now sit in front of a computer and respond to prompts through a microphone. The Texas Education Agency uses software programmed to recognize and evaluate students’ speech.

Students’ scores dropped after the new test was introduced, a Texas Tribune analysis shows. In the previous four years, about half of all students in grades 4-12 who took the test got the highest score on the test’s speaking portion, which was required to be considered fully fluent in English. Since 2018, only about 10% of test takers have gotten the top score in speaking each year."

[–] [email protected] 2 points 3 weeks ago* (last edited 3 weeks ago) (7 children)

I suspect that the human graders were the biased ones, and that this automated test is more accurate. Schools frequently inflate test results when given the opportunity (especially when low results reflect poorly on the school).

How do students known to be fluent in English do on it? Do they pass reliably?

Edit: Here's a discussion of a similar phenomenon in the context of high-school graduation rates. Graduation rates regularly go up by a very large amount when standardized tests stop being required, but that's not because otherwise-qualified students were doing poorly on standardized tests.

[–] [email protected] 10 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

It's possible for both things to be true. Human reviewers might be biased toward awarding higher scores, and the computer could be dog shit at scoring. I have no idea how this can meaningfully grade fluency. Fluency in a spoken language consists of vocabulary, grammar, and pronunciation.

I have seen plenty of very fluent people who speak with an extremely noticeable accent but are nonetheless perfectly comprehensible. Software is extremely likely to perform poorly at recognizing speech from non-native speakers and to fail individuals who are otherwise comprehensible. Because it won't even recognize the words, it's almost entirely testing pronunciation, and it then denies such students access to electives that would let them further their education.
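To make that concrete, here's a toy sketch in Python of how a composite score that leans on an automated speech-recognition transcript degenerates into a pronunciation test when recognition fails. This is not TEA's actual system; the weights, names, and gating logic are all invented for illustration.

```python
# Hypothetical illustration, NOT the TEA's actual scoring pipeline.
# If the recognizer can't transcribe accented-but-comprehensible speech,
# the vocabulary/grammar evidence collapses and the composite score is
# driven almost entirely by the pronunciation term.

def word_error_rate(reference: list[str], hypothesis: list[str]) -> float:
    """Standard WER via edit distance between word sequences."""
    m, n = len(reference), len(hypothesis)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n] / max(m, 1)

def composite_fluency(reference: list[str], asr_output: list[str],
                      grammar_score: float,
                      pronunciation_score: float) -> float:
    """Toy composite: vocab/grammar terms are gated by ASR accuracy."""
    recognition = 1.0 - word_error_rate(reference, asr_output)
    # Poor recognition means the grammar evidence is garbage-in:
    vocab_grammar = recognition * grammar_score
    return 0.5 * vocab_grammar + 0.5 * pronunciation_score

# A student whose speech the ASR mangles scores low even with
# perfect grammar, because only the pronunciation term survives:
spoken = "I would like to describe my favorite book".split()
misheard = "I would lie to the scribe my fair book".split()
print(composite_fluency(spoken, misheard,
                        grammar_score=1.0, pronunciation_score=0.4))
# -> 0.45, a failing score despite fully grammatical speech
```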

[–] [email protected] 1 points 3 weeks ago

It's quite possible that you're right. I haven't been able to find any research that attempts to quantify how accurate the software is, and without that I can only speculate.
