So, how DO we assess all of “this”?

JENNAE COHEN'S BLOGPOST FOR THIS WEEK:


The title of this blog post refers to the question that Alex left us with at the end of class last week. I, for one, left with a bit of a meta-cognitive worry--wondering how I might be assessed on my assessment of, well, assessment. Of course, as Alex has mentioned, if he had it his way there would be no grades per se, yet our institutions demand that we place these values on our students in order to constitute progress and productivity. And so Congress, believing that progress is not possible without standards, implemented the “No Child Left Behind” Act in 2002 to reinforce this idea. For me, the outcome of this act has been this: if you want to develop a class of high jumpers, you don’t necessarily have to teach every student proper jumping technique. You can just lower the bar: a Fordist vision of a classroom as an assembly line that produces defect-free students bouncing out the other end. A sad truth, in my opinion, though this certainly cannot be what Congress intended--but I digress.

The big overarching question then becomes: what do we value, and how do we value it, in a student’s writing in the face of a culture increasingly obsessed with efficiency and driven by productivity? More specifically, how do we assess digital writing in this same context?

Case study: What are the consequences/results “When machines read our students’ writing”?

I want to first point to one case study that I stumbled across on the Pearson Education website while researching assessment tools. What I found was shocking: Florida Gulf Coast University adopted the Intelligent Essay Assessor (IEA) in order to save a required course that was at risk of being cancelled. Due to exploding enrollment and too few faculty members to manage the “burden” of grading, teachers and administrators agreed to integrate the assessment tool, which uses the Latent Semantic Analysis (LSA) method. Considering that I found this on the Pearson website and not the FGCU website, the results are claimed to be outstanding: “Using LSA, IEA can ‘understand’ the meaning of text much the same as a human reader.” I immediately had a problem with this entire proclamation, and so I turn to Herrington and Moran’s article “What Happens When Machines Read Our Students’ Writing?” to outline my frustrations: “Missing from this machine reading would be all of the nuances, echoes, and traces of the individual student writer” (493). I wondered about the consequences of imposing such a tool--what is lost? Or, what is gained by the individual student under these types of assessment tools? What did each student really learn? In the same vein, I find myself struggling to get at the heart of what Herrington and Moran meant by “nuances, echoes, and traces of the individual student writer.” My assumption is that as a human reader, we hear, feel, sense and engage with the voice of the writer--a voice that goes beyond a mere replication or imitation of the discourse within our knowledge domain. Do you share this sentiment? Is this what “reading” is supposed to look like? When we use IEAs, are we “lowering the bar” in order to meet standards? Why do institutions like FGCU get on board with assessment tools that seem to risk (in my opinion) a loss of engagement with the actual reading or learning processes?
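(For anyone curious what “Latent Semantic Analysis” actually involves, here is a rough sketch--not Pearson’s actual IEA--of how LSA-style scoring is typically described: essays are turned into word-frequency vectors, projected into a low-rank “semantic” space, and a new essay is scored by its similarity to reference essays that humans have already scored. Every essay, score, and parameter below is invented for illustration.)

    # A minimal sketch of LSA-style essay scoring (illustrative only, not Pearson's IEA).
    # Reference essays and their human-assigned scores are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    reference_essays = [
        "Standardized assessment rewards formulaic structure over original thought.",
        "Portfolios let students reflect on their own growth as writers.",
        "Rubrics make grading criteria explicit for both teacher and student.",
    ]
    reference_scores = [4, 5, 3]      # hypothetical human-assigned scores
    new_essay = "Reflection helps students see their growth as writers."

    # Build a term-document matrix, then project it into a low-rank "semantic" space.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(reference_essays + [new_essay])
    lsa = TruncatedSVD(n_components=2, random_state=0)   # tiny rank for a tiny corpus
    vectors = lsa.fit_transform(X)

    # "Score" the new essay as a similarity-weighted average of the reference scores.
    sims = cosine_similarity(vectors[-1:], vectors[:-1])[0]
    weights = sims.clip(min=0)
    predicted = (weights @ reference_scores) / weights.sum() if weights.sum() else 0.0
    print(f"similarities to scored essays: {sims.round(2)}; predicted score: {predicted:.1f}")

The point is that the “understanding” Pearson advertises is, at bottom, a measure of statistical similarity to essays that humans have already read and scored--which is exactly where Herrington and Moran’s worry about “nuances, echoes, and traces” comes in.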

Using a heuristic model: What are the consequences/results when reading and responding to students’ (digital) compositions?

Kathleen Yancey has introduced alternative means of assessment--formulas that do not embrace system-wide standards, but instead highlight how “digital compositions may unintentionally offer us new opportunities for invention, for the making of meaning” (100). The digital portfolios that Yancey outlines seem to do just what something like an IEA cannot: get at meaning-making by facilitating student learning as well as assessment. So, would you agree that Yancey’s heuristic model is more student-centered, or, in a sense, a more effective model for assessment? What, then, about our need for efficiency and our educational institutions’ pressures for student placement? Are there cons to this type of model? Could we assume that this type of model is a bit reductive and/or perhaps overemphasizes assessment?

Now, it’s your turn. I am interested in hearing about your experiences with assessment models from a student’s standpoint and/or as a teacher or tutor. I’ll leave you with these final questions:

How have you seen assessment change? How have your own experiences with digital composition affected how you view assessment in the classroom?

Do you think there is a way to combine the efficiency of an IEA model and the effectiveness of a heuristic model that elicits productive meaning-making?

Comments

  1. Yes, yes, still after all this time I am having issues with posting. Clearly Blogger does not agree with my formatting style, either. Anyways, thanks to Nicole for posting this! Looking forward to reading responses...

  2. Thanks for this question-filled post, Jennae. You ask a bunch of great ones and refer us to a valuable reminder that the IEA has not gone by the wayside after Herrington and Moran's 2001 article!

    I want to clarify my statement about grades. What I meant to suggest was that traditional grades (A, B, etc.) are not very useful for assessment. When I suggest that we abandon them (as some kind of educational fantasy), I'm not suggesting that we abandon assessment. In fact, I believe strongly that assessment is vital to teaching. However, it's important that educators are the ones who are developing the assessments, not policymakers or educational testing companies. I love the point that Don Coombs made long ago (cited by Herrington & Moran, 497): "What if the low common denominator is precisely that(?) we don't much care if students learn?" This may seem crazy, but this is exactly the result of programs like NCLB and standardized testing. No longer does learning matter, just the numbers. I'm sure some of you are aware that NCLB tested schools on growth, which is admirable, but they tested growth by comparing the results of one set of students at one grade level with a new set of students (the following year) at the same grade level. Now, it doesn't take a genius to figure out that this is a faulty comparison - if you are going to measure growth, it has to be with the same students! To make an analogy, I could assign grades to my next Teaching with Tech class by comparing their performance with yours. If they didn't demonstrate growth, then my class isn't successful and I could assign them low grades.

    Ultimately, I think the key ingredient that teachers provide, which computer-based or standardized scoring cannot, is an authentic audience. As Herrington & Moran point out, the computer as reader changes the rhetorical situation immensely, so much so that the writers begin to sense that no one is "reading" their writing at all. The point of interaction between reader and writer becomes so standardized as to become meaningless.

    This is why I find Yancey's model, or proposition, to be the most productive for us as teachers. Her heuristic is a helpful basis for developing language and rubrics for evaluating composition in the 21st century. Even if students are not producing multimodal compositions, they are engaging in digital textualities in ways that need to be assessed. It is imperative that we as educators develop assessment tools that are appropriate for these hybrid or digital-born modes.

  3. Jennae, great post! In thinking about your question, how have I seen assessment change, I think assessment directly depends on the teacher. There are so many ways to assess and modes of assessing that it can differ greatly from teacher to teacher. I don’t know that I agree with Yancey when she says, “We can only assess what is produced, and what is produced is increasingly something not only assisted by technology, but, as Whithaus showed, created by technology and in ways that can be at odds with a desired effect” (93). I don’t think you can only assess what is produced, but I think this brings me back to process vs. product. I think class observations, check-ins, quick-writes, and, most importantly, self-reflections are great ways of assessing the process of reaching the product. I think reflections show the meta-cognitive thinking that students need in order to learn from their work and to move forward. I think that these are “assessments” that can give teachers ideas about how students got from point A to point B and then evaluate what could be done differently next time to make that path smoother or more salient. I do agree with Yancey, though, when she says, “Technology isn’t the villain; but as a tool, technology is not innocent. It is both shaping and assessing the writers whose work we want to assess” (93). Technology does require different assessments, and I think this is maybe one reason why some steer away from it--it’s more work to come up with new assessments for using this tool.
    Going off this, I do not agree with the computer-made assessments. In Herrington and Moran’s article, they state, “In developing their program, they attempted to identify computer-measurable text features that would correlate with the kinds of intrinsic features--for example, quality of ideas, aptness of diction--that are the basis for human judgments” (482). This idea disturbs me a little. I don’t understand how a computer can make judgments that humans can make. There are so many different aspects that go into assessment. In our tutoring seminar, we learn about higher-order priorities and how grammar should be the last thing to go over unless it gets in the way of the writing, but main ideas are the most important. How can a computer overlook this? Wiggins attests to this when he says that “context is a central characteristic of ‘authentic assessment’” (Herrington & Moran 487). I’ve taken the GREs and the MTELs, and while I took them, I would wonder how the computer grades them, or what would happen if it missed something.

    I wonder if there is a way to create a program that people can use to assess. What would it look like? Would it be more effective?

  4. When I assess projects or papers I find it very useful to rely upon rubrics. Any teacher can read a paper and generally categorize it: “It feels like a B.” The problem with this type of grading is that we ignore our own biases and prevent the student from seeing where they can grow or develop. I am generally an advocate of ideas. If you present a good idea and adequately support it, I will probably give you a B or a B+, maybe even an A. But what about the use of topic sentences or MLA format or a good introduction? It is easy to ignore the deficits in other areas if the general idea of the paper is ‘good.’ (This is my bias.) I know other instructors who focus heavily on grammar. If there are too many mistakes, it doesn’t matter how good the ideas are; the paper will never be better than a C. Using a rubric forces me to break down my assessment of the grade, which, I believe, allows me to be fairer and to show the student where they need to improve and what they do well.

    As to machine assessment, I don't feel the technology we have available is capable of doing an adequate job. Yes, a machine can read the words, but can it sense the emotion or be persuaded by the speech? The beauty and power of language is not only its appeal to logos but to pathos as well. How can a machine accurately ‘read’ without having the capability of being moved?

    I think about the SafeAssign software used here at UMass that is meant to determine if students are plagiarizing their papers. Teachers are requiring that students run their papers through this software before handing them in. SafeAssign and other similar products scan the paper and then determine a percentage of how much of the paper is plagiarized. Teachers will require that papers be below a certain percentage, say 10% or 15%, or they won’t be accepted. The problem is that the software picks up on the works cited pages and considers all titles to be plagiarized. It also picks out the quotes students use and many other sentences that are either cited correctly or the student’s own words. I have met many a student who is frustrated with this because they are forced to go back and reword their entire papers just to make the software happy.
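    (To illustrate why this happens, here is a rough sketch--not SafeAssign’s actual algorithm, just a naive word n-gram overlap check with invented texts--of how percentage-based matching tends to work. A matcher like this only sees repeated strings of words, so a correctly quoted, cited passage or a works cited title still counts toward the reported percentage.)

        # Naive sketch of percentage-based text matching (illustrative only).
        import re

        def ngrams(text, n=5):
            # lowercase words, then every run of n consecutive words
            words = re.findall(r"[a-z']+", text.lower())
            return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

        def match_percentage(paper, sources, n=5):
            # share of the paper's n-grams that also occur in any source text
            paper_grams = ngrams(paper, n)
            if not paper_grams:
                return 0.0
            source_grams = set().union(*(ngrams(s, n) for s in sources))
            return 100 * len(paper_grams & source_grams) / len(paper_grams)

        # Invented example: the "matched" text is a correctly quoted and cited passage.
        source = ("The replacement of the teacher as a reader defines writing "
                  "as an act of formal display.")
        paper = ('As Herrington and Moran argue, "the replacement of the teacher as a '
                 'reader defines writing as an act of formal display" (481), which is '
                 'exactly what worries me about automated scoring.')
        print(f"match: {match_percentage(paper, [source]):.0f}%")  # flagged despite the citation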

    To me, the idea of machines assessing students’ work is just laziness and creates a divide between student and teacher. Reading student responses is one of my favorite moments as a teacher because it is how I get to know my students. It’s like each paper, reflection or blog post is a mini conversation between me and them where no one can interrupt. I get to watch their thinking develop. And especially for those students who do not like to speak in class, I can also see their progress and read the brilliant ideas they don’t have the courage to share in class. Writing is not only a skill but a craft and an art. There was a time when, if you wanted a certain job, you went and apprenticed with a master. Now we send people to school. But the idea is still the same. If students need to learn how to write, they must work with a teacher who has mastered the practice. (Perhaps you never truly master it, but my point still stands.) This is where teachers come in. Until the day machines are capable of teaching art, there is no replacement for an actual person.

  5. Part I:
    Jennae,
    Great questions that you’ve posed – they really got me thinking about assessment (one of my huge fascinations in education). First off, (for now) I’m a firm believer in the “close-reading which leads to hand-written/typed responses which lead to some kind of grade” sort of writing assessment. I first read about computerized essay evaluation projects last year in an article called “The Complexities of Responding to Student Writing,” and I was appalled to learn about the College Board’s WritePlacer, ETS’s Criterion and Vantage’s MY Access! products for writing assessment – I thought: ‘seriously, how are these things even possible?!?!?!?!’ I was appalled…totally appalled. This same article (Haswell, 2006) also mentioned that of the Writing Program Administrators Council’s 35 first-year composition outcomes, none were able to be machine scored. This was a relief in one sense (in my naiveté); however, in another sense the products are, according to your post, actually being used to assess student work. And if one school is using it, others probably are, too, which means there are students out there getting graded for the sake of being graded and not for the sake of becoming aware of their own learning. Man oh man – I have issues with this.

    I’ll hypothesize why I think the products are attractive in a bit…but first, your questions… Assessing digital writing probably isn’t something I’m going to figure out anytime soon; however, I do think the approach can be founded in a process which includes constructive criticism using, maybe, wiki comments, MS Word comments, maybe even blog responses or wiki responses, and a student reflection piece has to be present also. I am also a firm believer in the value of helping students become aware of their own learning processes and, in turn, how to improve them. Weaving this into assessing digital writing…or assessing digitally…is something it sounds like we should all be thinking about.

    My response to your comment “My assumption is that as a human reader, we hear, feel, sense and engage with the voice of the writer” is YES, absolutely you’re right. But I would add to that a key ingredient of what I think can be part of effective teaching – “reading” the individual students. There are times when “reading” a student’s essay includes using your knowledge of that student (presence, attitude, effort, attendance); thus your “reading” of the student adds to that reading/evaluative process. When I feel it’s necessary, I take these other “readings” into consideration when I’m grading – a machine cannot do this. However, to answer your question about why institutions like FGCU get on board with the computerized assessment tools…having worked in an area of higher education where retention is everything, I’m guessing money is a key factor (which is obvious from Herrington and Moran’s breakdown [485]), if not THE key factor in adopting these tech tools. More students + passing grades + retention + persistence to graduation = MONEY. The school gets tuition money and the tech companies get paid. Whether we like it or not, higher education is a business – a BIG business. (What’s also funny is that this school’s English department website states, “We also recognize that only a small portion of learning takes place in the classroom…” as a draw to get students to join their honor society. What? A “small” portion? I’m also a firm believer in extracurricular programming as being just as important as classroom learning…but…well, it doesn’t matter. I might just question a lot of what happens at FGCU if I kept digging…)

  6. Part II:

    To finish up in response to your questions re: Yancey, quickly – I’m not sure I’d say that her model is ‘more effective’ than anything else I know of, but it does make a bit of sense to me. Now that I’m thinking about it, perhaps since it’s a familiar model being applied to a different set of texts, maybe it just appears different? Foundation and intent are the same; format or platform is different? Maybe? I do agree that an assessment should ask a consistent set of questions, although this might not really be a departure from other assessment methods either. I also agree that using a heuristic in a consistent way to look at digital texts can help us better ‘read’ those texts. It’s like the heuristic method becomes the window through which to make sense of the texts – perhaps. I think one of the reasons why I’m a little unsure about how I feel about this is because I don’t have a direct way to apply it just yet. In looking at the portfolio collection she references (97), I might be more apt to teach the actual writing process vs. the arrangement of the collection, per se, although I like what she claims about the differing arrangements of texts and pieces of a portfolio. Overall, I buy into what she says, but I’d need to see it in practice somehow.

    Haswell, Richard. “The Complexities of Responding to Student Writing; Or, Looking for Shortcuts via the Road of Excess.” The Norton Book of Composition Studies. Ed. Susan Miller. New York, NY: Norton, 2009. 1262-1289. Print.

  7. Jennae,

    I like how your cool and easy style is infused in your post!

    I guess I want to address your question: "When we use IEAs, are we 'lowering the bar' in order to meet standards?" Yes, yes we are. I can't even begin to understand that type of grading for writing assignments. Any written assignment is difficult to grade because there are many things to take into consideration. Firstly, I think it depends on the grade level (aren't students between grades 1-12 rewarded for effort?). In college-level classes effort isn't rewarded because you need to demonstrate development. So, how would IEAs track that? Secondly, each writing assignment is different, and although you can tell the machine "what to do" or "look for," it's not the same because it's not human. We understand our students, and more importantly, their writing through the use of language. You can track how a student makes meaning by comparing their participation in discussion, their in-class writing assignments, and their papers. Is the machine going to create a digital portfolio of the students and stamp it with a kiss to display emotional connection? I don't think so.

    What about the students' voice? Where does that fit in this schema? Students will lose their voice, write generic papers, and make it their lifetime goal to be on the "bar." I hate bar graphs because they're so impersonal: they marginalize people, and you become a number to live up to. Is that what writing should be about? I think FGCU needs to collectively put their brains together and figure out another way to make things work without the IEA.

    It's so unfair for teachers! It would put a strain on student-teacher relationships because the IEA would become law, and the majority of teachers would feel that they need to uphold it in order to keep their jobs, or for whatever reason. (I think it's safe to say that most might enjoy it because it would relieve them of a huge responsibility of grading/commenting.) That's horrible because it makes teachers lazy, and it takes away something we take for granted: the students' ability to inspire us with their writing and opinions. Of course, the IEA pretty much implies that their individuality goes out the window, and that we will fail to help them nurture their creativity.

    I might be at the extreme on this whole issue--it's something new and unfathomable to me--but it seems scary all the same.

  8. Jennae, thank you for posing a variety of questions and making me consider not only how I feel about assessment but also how both traditional, teacher-centered grading and non-traditional computer graders can have their pros and cons. One point that is addressed in both of these articles is the issue of the relationship between the reader and writer. Herrington and Moran clearly outline how computer graders take away the dialectic between reader and writer, as they grade more for mechanical conventions than for content and new ideas, stating, “The replacement of the teacher as a reader […] defines writing as an act of formal display, not a rhetorical interaction between a writer and readers” (481). Instead of writing being a process of thought and reflection, it becomes a display of skills and mastery, stripped of the metacognition that writing embodies. In this case, students do not write in order to receive feedback and grow; they write what they know will give them a passing score.

    Yancey’s discussion of digital portfolios is a good counterpoint to the computer grading services that Herrington and Moran make us aware of. Yancey discusses the benefits of digital portfolios and describes them as a way to showcase student work and to show reflection on such work. The questions that she considers, such as “What arrangements are possible,” “Who arranges,” “What is the intent,” and “What is the fit between the intent and effect,” take into consideration that there is a writer behind the composition who has made specific choices and who has considered an intended audience. Also, Yancey’s example of Mimi’s portfolio shows how the way in which a student arranges his/her work online can allow for a reader to traverse different paths and reflect on the writer’s experience. Such questioning and thinking about another’s thinking allows for the dialectic in writing to exist, unlike the computer grading system, which eliminates a writer’s voice or intent.

    In thinking about both of these articles in relation to my own teaching, I think that digital compositions should be encouraged in order to progress with the 21st century learning goals, but I do believe that teachers must grade their students’ work rather than relying on a mechanized system. As my students continue posting responses to discussion questions on our class wiki, I cannot imagine having a machine check to see whether my students are answering all of the questions and including the appropriate detail. Part of my reason for beginning the wiki was to give us an opportunity to view our responses alongside one another and add on. It seems as if computer grading systems could make the assessment final, rather than using it as a checkpoint to see how students are doing and what needs to be readdressed. The words “Fast, Convenient, and Inexpensive” (485), on the WritePlacer Plus brochure that Herrington and Moran address, echo in my head when I think of computer grading software. Instead of making learning and teaching a process of grappling and building off of one’s knowledge, these grading systems value convenience, as if all learning can be completed with no loose or struggling ends.

  9. Jennae, you postulate some interesting questions and raise some excellent points. An IEA device seems like such a bad idea. Noam Chomsky realized years ago the limitations of artificial linguistic functions. There’s simply no way that these mechanisms could sort or understand the needs of students. Language is perhaps one of the few things that truly make us human. If we decide to let computers handle the task of assessing that capability, then we might as well throw in the towel. Plus, I’m not convinced that any of these devices could work effectively. Isn’t that what Chomsky finally concluded? I also can’t imagine that most teachers get into teaching so they can feed important student writing into a machine. Assessment always has its limits, but in the right setting, those limits are what make a student think. Just as we discuss meta-assessments, it’s important for students to engage in meta-cognitive activity in order to become better writers. Commenting on student writing provides that type of interaction. Asking questions makes us smarter people, and when we ask questions about our own thinking it’s like super food for the brain. Circumventing teacher feedback disallows that process and ultimately shortchanges students.
    It’s a well-known fact that standardized tests lower the bar for all students. The only reason that policy makers create these tests is to gain votes from downtrodden citizens. They claim to have every intention of helping particular demographic groups, yet these very groups are ultimately shortchanged. The poor have consistently performed poorly on MCAS and other local standardized tests. It’s not exactly a surprise; however, low scores help politicians blame teachers for society’s problems. This in turn weakens educator unions and disintegrates the very force that could change a student’s life--control. Administrations need to create teacher-centered workplaces so teachers can then create student-centered workplaces. Yet the state does not want to pay teachers the extra money for extended programs, additional resources, or staff specialists. It’s far easier to demonize the profession, gain a few conservative votes, and force teachers to teach to the test. Essentially, standardized tests impose more problems than they solve.

  10. Jennae! First off, double-daps for coming to produce a blog entry this week. As someone who is barely functioning as the clock approaches midnight, I'm certain if I had to churn out my blog post this week I would be a well-lathered pool of former-human. So word. Good job.

    Assessment is the trickiest son of a bitch I can think of, since it's something that every teacher has an opinion about. Or at least they do in my mind, where teachers actively give a crap despite the hurdles and thanklessness of actually caring.

    They're raring to go! Ready to throw down with their ideologies. (Like I said, I'm really overtired and, as you can imagine, also simultaneously overcaffeinated. To Valhalla!)

    Has assessment changed? That's a great question. I'd like to think that at the very least - how teachers would like to assess - has changed. My own experience throughout high school was dealing with teachers who were rather dogmatic in their approach. Grades were grades. Effort was effort. You got what you endeavored for, and so help me don't mention anything about writing as a process.

    It was pretty cut and dried. The sort of progressive rhetoric that bubbles out of us more recent educational/English students wasn't there.

    These days the teachers I know are my peers: everyone that I'm broadcasting this out to, the fellow tired-looking people who will be sitting next to me in eighteen hours or so. All of these people, to an extent, seem to share a more lenient concept of grading, a more fluid idea of evaluating and assessing students. They're more than their own grade, whatever.

    I think the problem is what has already been mentioned: the structures for assessing haven't changed. The MCAS is still around from when I was in school. NCLB has since been signed into effect. (Giving you an idea of how old I am.) If anything, there's more and more bureaucracy coming to overcomplicate the teaching process.

    It's for reasons also already stated: percentages and evaluations are sexy to people. They're easy to glance at to "assess" what's going on in a school district. Lawrence. 42% graduation rate. It's right there, how they're doing. Now, you and I may know it's not that simple, but it's easy, and it's something you can glean from a newspaper bullet point.

    The subversive and essential part of teaching within this environment is how to bring the more progressive idea of student assessment and teaching into an environment where the regulations don't encourage it, and certainly don't promote it.

    Again, good job. I'm going to bed now, cheers!

  11. I don't have much experience with standardized testing, but I have been grading a lot of papers lately. I'll start in defense of machines (although I'll admit it's a weak point):

    After I sit down for several long hours, commenting on the papers for the undergraduate class I TA for, I try and pull back and look for a pattern in my comments/grades. It's a little disturbing. If I'm tired, the comments I give are worse. If I'm well fed, nicer. If I'm grading next to another TA, and we're both bitching about the students--watch out! Also, by the end of the semester, I feel myself start to pigeon-hole students as certain "types" and am more inclined to give them a certain type of grade. I find myself thinking about factors that don't involve the paper: if that student comes late often, if they are an ELL student, if I know they just broke up with their boyfriend....

    None of these things are good, or even professional. My point is, there are things I can do to curb the human instinct to make exceptions. Being aware that I do it is one. Keeping a grading guideline is another. Doing papers in small batches. Reading ALL the papers before I start to comment, then stacking them into grade piles. These tricks help curb human error on my part, anyway.

    I appreciated Alem's point: do we really want our students to all sound the same, with cookie-cutter paragraphs? If my students exhibit critical thinking, I'll forgive a few grammar issues. Machines can't do that. And Ian's right: statistics are "sexy" and they certainly have their place...though probably not in writing assessment, yet.

  12. This couldn't fit in any better! Found it this morning and had to share:
    http://www.washingtonpost.com/blogs/answer-sheet/post/when-an-adult-took-standardized-tests-forced-on-kids/2011/12/05/gIQApTDuUO_blog.html?tid=ts_carousel

  13. Jennae, I wish I had answers to this. But when you ask if Yancey has found a real solution for assessment, I am not convinced she has. It seems as if this type of heuristic assessment is a lateral move rather than something progressive.

    I must admit I was perplexed by Yancey’s thoughts about the need for “higher level[s] of abstraction,” while recognizing “patterns” in “design and arrangement.” I can see what she means by emerging patterns, but by assessing them we begin to consider standards. As we have it with analog (paper) texts: intro, thesis, develop key terms, use quotes, conclude. This is one measure of cohesion, thus in many ways coherence. But to anticipate patterns and assess patterns gives us reason to value one pattern over another, and here we have the apotheosis of new standards, new litmus tests--which are not really new assessment techniques.

    The heuristic questions Yancey provides create a fairly succinct way of evaluating how the form of the thing developed in light of content--which relies on context. This does not, to me, remotely trigger the sort of “abstraction” necessary to unearth what about the context matters, and how it informs the new contexts we find in, say, an email. There are just far too many variables when composition happens collaboratively, which is how email threads work. To say there are “three specific ways of effective layering” in an email does not account for variables such as lies, purposeful inaction, purposeful overreaction, or other potentially affective layers that even specific questions about “intent” cannot uncover--even if unethical, we cannot overlook these things. We have to think about spur-of-the-moment rhetorical movement, which cannot be patterned--perhaps we can anticipate some things sometimes, but we cannot pinpoint these moves as anything remotely static even if we find them conventional. There are ways to make heuristics more open, thus more abstract-friendly, but creating a set of questions and then applying the same questions to each individual student’s work seems like nothing new.

    And to recognize patterns in Yancey’s sense seems to call attention to a need to standardize. And we begin to move towards the specific, not the abstract. To apply the same heuristic to different students is like putting a body of student writing into a reading machine for automated grading. The mechanism for assessment is still an automaton. Not that I am suggesting IEAs for all. Nor am I even suggesting anything at all. I just feel that we have remediated one standard for, well, the same standard: we grade, in effect, in relation to the perfect form.

    But I do think Yancey approaches the assessment of form in interesting ways. The heuristic questions she has come up with will help us uncover how exactly content informs form through our new ways of creating “cohesion” in the digital writing sphere. But this is also rather necrophilous: when we grade on patterns, thus form, we do not account for individuality and “authentic thinking” in the Freirean sense. Yancey has only created another automaton for grading.

    Still, I really do like the heuristic she’s created for seeing patterns emerge out of form and content relationships. But I am not sure if we should use it for evaluation so much as a tool to help us see and organize our ideas.

  14. I have a hard time answering this question because I'm not a teacher, and I don't grade papers. I can only speak abstractly, and I'm not sure if I can make much of a valid point since it's based so heavily on experience that I haven't had the chance to use. BUT, here goes.

    I find it problematic that computers grade something as subjective as writing. Good sentence structure does not a good argument make. It's ridiculous (and presumptuous) to allege that if a person can write a structurally sound sentence, his/her content will be structurally sound as well. With mathematics and the hard sciences, sure, it's much easier to expect a Scantron machine to grade work, and the reason it's easier is that there is one definitive answer. With writing, it's not so black and white. A computer can't grade the mood of an essay. It can't grade on theme. It can't read between the lines on what's NOT being said (which oftentimes is just as important as what is being said). Quite simply, a computer can't grade on content; the only assets of a paper/essay it CAN grade are based on algorithms and black and white outcomes. I would like to know the last time any of us was in a humanities-based course where the outcome of literature was definitive, where every student in the class had one answer, and that answer was 100% correct. It's impossible, and it stifles critical and creative thought.

    I find this situation analogous to the discussion we had last week in class, where we talked about the kids who use the dictionary/thesaurus for big words because that's what they think is "good writing." A structurally sound paragraph with big words that are essentially meaningless when the sentences are parsed is a ruse. Simple as that.

    It is definitely problematic to think about the repercussions of having machines read student writing. If we are teaching students to write according to rigorous standards, why are we not essentially demanding that the readers of this writing be equally qualified? We need to address these issues as a collective group to ensure that what we believe we are teaching as educators is appropriately addressed. Especially where standardized testing is now dictating so much of our curriculum, our school funding, and our resources, we need to make sure that the scores given to students are accurate. If not, the machine may only be as good as the weakest link.

  16. Jennae, great questions!

    Assessment is something I am starting to think a lot about; I can't imagine an IEA grading device being a fair or nuanced assessment of student writing. I find that one of the questions I am constantly asking professors and teachers is how they grade papers. Just reading the student papers for the class I tutor for makes it difficult for me to imagine grading them. I am constantly wondering what teachers take into account when they grade--is it the individual student's ability/improvement over the course of a semester, or is there a set assessment for all students? I'm not sure how that would work, especially with writing; it is such an individual process and the students are all working at different levels. I'm not suggesting lowering the bar, but I am wondering what exactly is taken into account.

    That said, all of the professors and teachers I have asked say that grading is the hardest part of teaching. So I am left to wonder what the answer is. I like the idea of giving students a rubric before the paper is due, which allows them to know what the teacher's expectations are. I guess I'm just nervous about teaching and having to give students grades...it feels like one of those things that I won't really "get" until I have to do it.

  17. I am really disturbed by the idea of machines grading students' writing. An essay is something that is meant to be read by another human being. To let a machine read a paper and have it never read by another human takes half the meaning out of writing.

    It also will encourage students to write in a depressing and mechanical way since machines will be reading their writing, and there will be things that they may not get credit for that a person might have understood.

    And if a machine is reading their writing, how can they get useful feedback from it? How will they be able to improve their writing from that feedback?

