Next year DC will begin giving tests based on the new Common Core State Standards. Judging from a practice test available online, these assessments will need major revisions before they come close to measuring students’ actual capabilities.

This spring, the two consortia that are developing assessments based on the Common Core have been conducting field tests in the District and elsewhere. DC has chosen to use tests produced by the consortium called PARCC, which stands for Partnership for Assessment of Readiness for College and Careers.

PARCC has made practice tests in math and English at various grade levels available on its website. I took the 10th-grade English Language Arts/Literacy assessment to see what it was like. And I found some serious problems.

The Common Core standards, adopted by 44 states and DC, have ignited debate among legislators, educators, and parents. Some love the idea of holding children across the country accountable on the basis of a common “master ruler” by which all students can be measured.

But some opponents fear the Common Core will place too many education decisions in the hands of the federal government. Others say the common standards are either too demanding or not demanding enough, or warn that teachers will still have to “teach to the test.”

Even some who embrace the concept of the Common Core have raised concerns about how it’s being implemented. And some states have provoked an outcry by giving their own versions of Common Core-aligned tests that parents and teachers have said contain unclear or confusing questions.

Some states that had signed on to the PARCC consortium have been having second thoughts. Kentucky pulled out in January, although it said PARCC would be welcome to bid when it issues a request for proposals for new tests. And this month Tennessee’s legislature passed a bill that would have the same effect.

Problems with the PARCC practice test

Some procedural aspects of the PARCC practice test I took were unclear. For example, there is no indication of how much time students would have to complete the test if it were real.

Also, the introductory screen says there are 23 questions on the test, when in fact there are 42 multiple-choice questions. Most screens contain two questions about the same text, and it is not clear whether each pair is scored as one item, or whether partial credit is awarded if a student answers one question correctly but not the other.

The major issue I have with the test is this pairing of questions. A student first has to select the correct answer to a question about a passage. Then, in the next question, the student chooses which line from the passage best supports her first answer.

On the first screen, for example, question A asks for the meaning of the word “resonant” as used in a passage. Question B then asks which of a number of quotations from the passage “helps clarify the meaning of resonant.”

If you choose the wrong answer to the first question, you’ve gotten both questions wrong. That’s true even if the quotation you choose for the second question correctly supports your incorrect answer to the first.

A testing no-no

In a story for NPR, reporter Cory Turner applauded the fact that the PARCC tests “ask kids to read a text closely and to write about it using evidence from the text.”

His point is that this is an advance over previous tests that asked students to write responses to questions that had no right or wrong answers, like “What would you do if you were principal for the day?”

But having a second answer depend on whether you’ve gotten a previous answer correct is a big no-no in test-writing theory.

I spent four years editing curriculum at K12, an online educational publisher. Toward the end of my tenure, I worked extensively on editing standardized test questions alongside some leaders in the field.

From that experience, I learned that making the answer to one test question dependent on another is unfair. A standard text on writing multiple choice questions explains the problem: “If [students] get the first question wrong, they will automatically get the other question wrong as well, even if they understand the concept tested in the second question.” And almost the entire practice test is in the form of interlocking questions.

Some of the other questions on the test are just poorly written. For one question that asks students to identify a literary theme, none of the possible answers is in the form of a complete sentence. I challenge you to find a teacher who would accept “the difference between illusion and reality” or “the contrast between reason and emotion” as a valid answer to an open-ended question.

The test also includes three essay questions. We don’t know how much time students will be given for the test, but I’m not sure any reasonable amount would be enough to craft an acceptable response to even one of them.

Two of the questions, which focus on characterization, are relatively easy, if time-consuming. But one asks students to write an essay “analyzing the arguments of those who believe certain kinds of free speech should be prohibited within an educational setting and those who believe the opposite.”

Students must base their answers on passages from a Supreme Court majority opinion and dissent, along with a four-minute audio passage on the historical significance of the case.

It’s the kind of question that might appear on an AP U.S. History test. But that test is typically given to 11th graders who have just finished studying the subject. The PARCC test will be given to 10th graders who will probably have no background knowledge of the issue they’re being asked about.

So, as teachers are fond of saying, what have we learned? For one thing, asking students to identify textual evidence on a standardized test may be a good idea, but it’s no substitute for giving them the opportunity to make a well-crafted argument.

It’s possible that Common Core-based assessments will be an improvement over past state-mandated tests. But if the PARCC practice assessment I took is any indication, we still have a long, long way to go.