Apparently, computer algorithms are now being used to grade student essays. https://www.washingtonpost.com/news/answer-sheet/wp/2013/04/25/can-computers-really-grade-essay-tests/?noredirect=on I stumbled across this recently while researching a graduate school admissions test. That is more than just grade school! As a programmer myself, I have no idea how a computer could accurately grade the content of an essay. It could score the grammar, in a narrow-minded, Microsoft-Word kind of way. It could score the length, the vocabulary, or maybe the basic structure. It could even compare the essay to other successful submissions, but that really just measures conformity. A computer can't understand an essay's meaning. Where will this go? Will publishing houses use these algorithms to screen submissions? Do they already?
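To make the point concrete, here's a minimal sketch of the kind of surface-feature scoring I mean. The features, weights, and thresholds are all invented for illustration, and none of it comes anywhere near understanding meaning:

```python
import re

# Hypothetical surface-feature essay scorer: it measures length, vocabulary
# richness, and average sentence length. These are shallow signals a machine
# can count without understanding a word of the essay.
def surface_score(essay: str) -> float:
    words = re.findall(r"[a-z']+", essay.lower())
    if not words:
        return 0.0
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    length_score = min(len(words) / 500, 1.0)        # reward length, capped at 500 words
    vocab_score = len(set(words)) / len(words)       # type/token ratio as "vocabulary"
    avg_len = len(words) / max(len(sentences), 1)
    structure_score = min(avg_len / 20, 1.0)         # longer sentences look "sophisticated"
    return round(100 * (0.4 * length_score + 0.4 * vocab_score + 0.2 * structure_score), 1)

# Repetitive filler scores poorly on vocabulary, but the score says nothing
# about whether the essay means anything.
print(surface_score("The quick brown fox jumps over the lazy dog. " * 30))
```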
Apart from the obvious reason of saving time, how much do you think this phenomenon has to do with cowardice? When you read an essay and give it a grade, you can be asked "why this grade?" and you must be able to answer. When you give it to an algorithm to grade, no one can accuse you of unfairness. It wasn't you, it was the computer. And the people who designed the grading algorithm can say "we are truly sorry, we are doing all we can, but there are just so many variables to take into account, and you can rest assured that we are constantly working on making the algorithm better". So, in the end, no one is responsible.
English teacher here. I agree in general with the idea that a machine can't effectively grade an essay, or at least not yet. However... if machines could accurately mark grammar, punctuation, spelling, etc. (SPAG) and assign a score to that, it would be a big time-saver for teachers. All the human would need to do is assign a score for the thoughts expressed and combine it with what the machine gave for SPAG. The teacher sits down, reads what the student has spewed out, and gives the content a score of, say, 63%, alongside the machine's technical score of 32%. Make content worth 75% of the final grade and SPAG 25%: SPAG x .25 + Content x .75 = (32 x .25) + (63 x .75) = 55.25%. Of course, a lot of my work is focused on SPAG, so I may be biased.
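In code, the blend I'm describing is just a weighted average. A minimal sketch (the function name and default weights are mine, matching the worked example above):

```python
# Machine marks SPAG, human marks content; combine the two with fixed weights.
def blended_grade(spag: float, content: float,
                  spag_weight: float = 0.25, content_weight: float = 0.75) -> float:
    return spag * spag_weight + content * content_weight

print(blended_grade(32, 63))  # 55.25, the same result as the worked example
```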
I built a spreadsheet for writing school reports once. A friend of mine was head of Religious Education, so she had to teach the entire school, but not really in enough depth to get to know the students. We devised a system whereby she just entered grades for effort, attendance, exam results, etc. against the names; the spreadsheet then pulled together some boilerplate text and mail-merged it with the student's name. The results were truly believable. I am pretty sure that essay grading could be achieved in a similar way, at a certain level. Examiners use marking schemes to grade essays anyway: structure, grammar, spelling, citations, etc. all accumulate to a final score, possibly allowing for random sampling and boundary testing of works that are so good (or so awful) that the software spits out extreme results.
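Here's a toy version of the idea, just to show how little machinery it takes. The grade bands and boilerplate phrasing are invented for illustration:

```python
# Pick a boilerplate sentence by grade band, then mail-merge the student's name.
TEMPLATES = {
    "high":   "{name} has worked hard all year, and it shows in the results.",
    "middle": "{name} has made steady progress this year.",
    "low":    "{name} needs to put in more consistent effort next term.",
}

def report_line(name: str, grade: int) -> str:
    band = "high" if grade >= 70 else "middle" if grade >= 50 else "low"
    return TEMPLATES[band].format(name=name)

print(report_line("Johny", 82))  # Johny has worked hard all year, and it shows...
```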
Yeah, rubrics can help, and I use grading software to pull all of my scores together. For example, Participation might be 20%, Homework 20%, Quizzes 20%, Writing 20%, and the Final Exam 20%, but the software lets me set those parameters so that I can assign points as I choose (Participation 3 points a day, Quizzes 15 points each, 8 Writing Assignments at 1,000,000,000,000,000 points apiece, etc.), and it plugs the final percentages from each category in to get the correct final percentage. I'm just saying that a SPAG detector could do almost half of your essay-marking work for you, leaving you more time to concentrate on content.
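The category arithmetic the software does is simple enough to sketch. The point totals below are made up (and the Writing total keeps the joke):

```python
# Each category maps to (weight, points earned, points possible). Raw points
# are converted to a percentage, then combined by category weight.
CATEGORIES = {
    "Participation": (0.20, 54, 60),
    "Homework":      (0.20, 85, 100),
    "Quizzes":       (0.20, 120, 150),
    "Writing":       (0.20, 6e15, 8e15),   # 8 assignments at 10^15 points apiece
    "Final Exam":    (0.20, 88, 100),
}

final = sum(weight * (earned / possible) * 100
            for weight, earned, possible in CATEGORIES.values())
print(f"Final grade: {final:.1f}%")
```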
Absolutely. The algorithm in my spreadsheet did look at consistencies and inconsistencies: good grades overall pulled in "little Johny has worked hard all year", whilst poor attendance but a good grade might have produced "little Johny surprised us all with his exam result", etc. (see the sketch below). I'm not sure whether we're 100% there yet, although I achieved the report writing with about two hours' work and saved my friend days, but we're getting there... Ask yourself what these paintings all have in common... Spoiler: they were all created by AI.
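A rough sketch of those consistency rules. The thresholds and the fallback wording are invented; only the two example sentences come from the system I actually built:

```python
# The comment depends on the *combination* of attendance and exam grade,
# not on either signal alone.
def comment(name: str, attendance_pct: float, grade: int) -> str:
    if attendance_pct >= 90 and grade >= 70:
        return f"little {name} has worked hard all year"
    if attendance_pct < 75 and grade >= 70:
        return f"little {name} surprised us all with his exam result"
    if attendance_pct >= 90 and grade < 50:
        return f"little {name} attends reliably but needs more support with the material"
    return f"little {name} has had a mixed year"

print(comment("Johny", 60, 78))  # little Johny surprised us all with his exam result
```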
They're all confusing and make no sense. (That was my spontaneous answer before looking at the original answer. ROTFL when I read the AI answer after that.)
That might be better than widely used methods like:
- the Stetson-Harrison method
- the similarity-of-political-views method
- rating social and sexual attraction instead of the essay
- and whatever other methods they use to avoid real work and thinking.
The question is about accuracy. Even though people have been working on it for decades, they've yet to build a program that doesn't get absolutely confused by complex sentence structure, and I've yet to use a spell checker where I didn't have to add words to its dictionary.
Given the number of years computers have been used by employment agencies to screen resumes and cover letters, I'm more surprised it's taken this long for the tech to be able to grade essays. Brace yourself for more people who've never learned the difference between its and it's...
Yeah, but those are basically keyword-hunting programs designed to whittle a huge pile down to something worth having HR look at.
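For reference, the screening those systems do is roughly this crude. The keywords, the cutoff, and the sample resumes below are all invented:

```python
import re

# Count how many required terms appear in each resume; keep the top scorers.
KEYWORDS = {"python", "sql", "aws", "agile"}

def keyword_score(text: str) -> int:
    words = set(re.findall(r"[a-z]+", text.lower()))
    return len(KEYWORDS & words)

resumes = {
    "alice.txt": "Five years of Python and SQL, some AWS.",
    "bob.txt":   "Enthusiastic team player with a can-do attitude.",
}
shortlist = [name for name, text in resumes.items() if keyword_score(text) >= 2]
print(shortlist)  # prints ['alice.txt']; Bob never stood a chance
```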
True, but that's twenty-year-old tech, so "advances" and all... ETA: Sidebar: I really wish we had a squinty side-eyes emoji here.