Monday, December 5, 2011

So, how DO we assess all of “this”?


The title of this blog refers to the question Alex left us with at the end of class last week. I, for one, left with a bit of a meta-cognitive worry, wondering how I might be assessed on my assessment of, well, assessment. Of course, as Alex has mentioned, if he had it his way there would be no grades per se, yet our institutions demand that we place these values on our students in order to constitute progress and productivity. And so Congress, believing in 2001 that progress is not possible without standards, passed the “No Child Left Behind” Act to reinforce this idea. For me, the outcome of this act has been: if you want to develop a class of high jumpers, you don’t necessarily have to teach every student proper jumping technique. You can just lower the bar. It is a Fordist vision of the classroom as an assembly line that produces defect-free students bouncing out the other end. A sad truth and reality, in my opinion, though this certainly cannot be what Congress intended. But I digress.

The big overarching question then becomes: what do we value, or how do we value, a student’s writing in the face of a culture increasingly obsessed with efficiency and driven by productivity? More specifically, how do we assess digital writing in this same context?

Case study: What are the consequences/results “when machines read our students’ writing”?

I want to first point to a case study I stumbled across on the Pearson Education website while researching assessment tools. What I found was shocking: Florida Gulf Coast University adopted the Intelligent Essay Assessor (IEA) in order to save a required course that was at risk of being cancelled. Faced with exploding enrollment numbers and not enough faculty to manage the “burden” of grading, teachers and administrators agreed to integrate the assessment tool, which uses the Latent Semantic Analysis (LSA) method. Considering that I found this on the Pearson website and not the FGCU website, the results claim to be outstanding, suggesting that “Using LSA, IEA can ‘understand’ the meaning of text much the same as a human reader.” I immediately had a problem with this entire proclamation, and so I turn to Herrington and Moran’s article “What Happens When Machines Read Our Students’ Writing?” to outline my frustrations: “Missing from this machine reading would be all of the nuances, echoes, and traces of the individual student writer” (493). I wondered about the consequences of imposing such a tool: what is lost? Or even, what is gained by the individual student on the receiving end of these types of assessment tools? What did each student really learn? In the same vein, I find myself struggling to get at the heart of what Herrington and Moran meant by “nuances, echoes, and traces of the individual student writer.” My assumption is that as human readers, we hear, feel, sense, and engage with the voice of the writer, a voice that goes beyond mere replication or imitation of the discourse within our knowledge domain. Do you share this sentiment? Is this what “reading” is supposed to look like? When we use IEAs, are we “lowering the bar” in order to meet standards? Why do institutions like FGCU get on board with assessment tools that seem to risk (in my opinion) a loss of engagement with the actual reading or learning processes?

Using a heuristic model: What are the consequences/results when reading and responding to students’ (digital) compositions?

Kathleen Yancey has introduced alternative means of assessment: formulas that do not embrace system-wide standards, but instead highlight how “digital compositions may unintentionally offer us new opportunities for invention, for the making of meaning” (100). The digital portfolios Yancey outlines seem to do just what something like the IEA cannot: foster meaning-making by facilitating student learning as well as assessment. Would you agree, then, that Yancey’s heuristic model is more student-centered, or, in a sense, a more effective model for assessment? What about our need for efficiency and our educational institutions’ pressures for student placement? Are there cons to this type of model? Could we assume that this type of model is a bit reductive, or that it perhaps overemphasizes assessment?

Now, it’s your turn. I am interested in hearing about your experiences with assessment models from a student’s standpoint and/or as a teacher or tutor. I’ll leave you with these final questions:

How have you seen assessment change? How have your own experiences with digital composition affected how you view assessment in the classroom?

Do you think there is a way to combine the efficiency of an IEA model with the effectiveness of a heuristic model that elicits productive meaning-making?

MOOCs: A Problematic Solution to the Disinvestment in Public Higher Education

I agree wholeheartedly with Bady’s cynicism about the speed, inevitability, and necessity of the MOOC movement. From 2011-2015 I direct...