The Daily Pennsylvanian

'E-rater' essay scorer gets test run with GMAT

Don't think a professor is capable of grading an essay objectively? Test takers nationwide are increasingly being given the chance to get an electronic opinion. After five years of development, the Educational Testing Service -- the organization responsible for administering the Scholastic Assessment Test and the Graduate Record Examination -- has developed a computerized essay-grading system that is now in use for one nationally administered exam.

The "e-rater" system made its formal debut early last month in the scoring of the Graduate Management Admission Test, which is used for admission to MBA programs.

The e-rater program examines more than 50 elements of writing, including syntax, topical information, vocabulary and discourse, according to Jill Burstein, a development scientist at ETS who helped create the computerized grading system. The essays being scored are compared to a "training set" of 270 well-written essays, with a different set needed for each essay question, she said.

Though each essay is examined for more than 50 elements, e-rater uses only eight to 12 of them in assigning a score. The system decides which elements are statistically significant markers of a well-written response to each specific essay question, Burstein explained.

However, she noted, e-rater can only examine an essay in comparison to a training set, meaning it is unable to determine "whether it makes sense" in its overall presentation of information. For instance, a test taker could theoretically turn in a collection of well-written paragraphs that make the correct arguments but are out of order, and the program might not notice the difference. E-rater "doesn't make logical connections," she said. "It doesn't relate arguments."

The Graduate Management Admission Council, the organization that sponsors the GMAT, is the only ETS client to request use of e-rater so far, according to Kevin Gonzalez, a spokesperson for the Princeton, N.J.-based company.

Frederic McHale, vice president for assessment and research at the GMAC, said his organization let a number of companies developing grading software test their programs on GMAT essays more than a year ago. After being "quite impressed" with the results of the e-rater software produced by ETS, he added, GMAC decided to start using the program to help grade the essay portion of the exam.

The GMAT consists of two 75-minute multiple-choice sections and two 30-minute essays, all administered by computer. The multiple-choice sections are scored immediately after the exam, and those results are given to the test taker. Each essay is then read at a later time by one person and by e-rater. If the two readers differ in their scores by more than one point on a six-point scale, a third reader scores the essay. Previously, each essay was read by two people.

The ETS software avoids the need for a third scorer 97 percent of the time, McHale said, adding that GMAC had not received any complaints from students concerned about their essays being graded by a computer. Under the old two-reader system, the level of agreement was about the same, Burstein said.
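ETS has not published e-rater's internals, but the pipeline Burstein describes -- extract many writing features, keep only those that prove statistically significant against a prompt's human-graded training set, score new essays from the survivors, and fall back to a third reader on disagreement -- can be sketched in a few lines of Python. The code below is an illustrative toy, not ETS's method: the three features, the correlation-based selection and the per-feature regressions are all invented stand-ins, and the training essays are assumed to carry a range of human scores.

    from statistics import StatisticsError, correlation, linear_regression

    def extract_features(text: str) -> dict[str, float]:
        # Toy stand-ins for the 50-plus measures of syntax, topical
        # information, vocabulary and discourse the article mentions.
        words = text.split()
        n = max(len(words), 1)
        return {
            "word_count": float(len(words)),
            "avg_word_length": sum(len(w) for w in words) / n,
            "vocab_diversity": len({w.lower() for w in words}) / n,
        }

    def select_features(training: list[tuple[str, int]], k: int = 2) -> list[str]:
        # Per prompt, keep the k features most correlated with the human
        # scores in the training set (about 270 graded essays per question,
        # per the article; the real system keeps 8 to 12 of 50-plus).
        ys = [score for _, score in training]
        ranked = []
        for name in extract_features(training[0][0]):
            xs = [extract_features(text)[name] for text, _ in training]
            try:
                ranked.append((abs(correlation(xs, ys)), name))
            except StatisticsError:
                pass  # constant feature (or constant scores): no signal
        ranked.sort(reverse=True)
        return [name for _, name in ranked[:k]]

    def machine_score(essay: str, training: list[tuple[str, int]]) -> int:
        # Average one simple regression per selected feature, then clamp
        # to the six-point scale. The real scoring model is unpublished.
        ys = [score for _, score in training]
        predictions = []
        for name in select_features(training):
            xs = [extract_features(text)[name] for text, _ in training]
            fit = linear_regression(xs, ys)
            predictions.append(fit.slope * extract_features(essay)[name] + fit.intercept)
        if not predictions:
            raise ValueError("training set gave no usable features")
        return max(1, min(6, round(sum(predictions) / len(predictions))))

    def needs_third_reader(human: int, machine: int) -> bool:
        # GMAT rule per the article: a third human reads the essay only
        # when the first reader and e-rater differ by more than one point.
        return abs(human - machine) > 1

In this sketch, machine_score plays e-rater's role as the second reader and needs_third_reader encodes the one-point disagreement rule. Notably, nothing in it models whether an argument hangs together, which is exactly the limitation Burstein flags.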
Alex Brown, an associate director of admissions for the Wharton MBA program, said he was "not particularly" concerned about the introduction of computerized essay grading for the GMAT. Wharton admissions officers receive applicants' essays as well as their essay and overall scores, he noted, so "if there's some issue" the essays can be reread by a Wharton official. And because Wharton's MBA application includes an additional essay, he added, an applicant's writing ability is unlikely to be evaluated unfairly.

The main goal of using e-rater, Brown said, is for it to have "no impact" on the scores given to test takers while allowing students to receive their final scores faster than the two to three weeks currently required. In addition, he said, using a computer instead of a human being "should be a cost-saver," a point also made by McHale. Scoring is "so labor-intensive that its cost is too high," McHale said.

McHale also noted that the e-rater system has the potential to be used for grading non-essay questions, such as math, reasoning or short-response questions. "We would like to enhance assessment beyond multiple-choice testing," he said, adding that the GMAT might be further expanded to include new question types in the future.