General versus Local Knowledge: The Great Debate

In Michael Carter’s “The Idea of Expertise: An Exploration of Cognitive and Social Dimensions of Writing,” he outlines a late 1980s/early 1990s debate in the field of composition over the value of general versus local knowledge in terms of students becoming “expert” writers.  Pete’s post provides us with a table comparing the elements that make up these two types of knowledge, and since he does an excellent job of summing up the key elements of the article, I won’t rehash it all here.  I will just say that Carter spends about 22 pages to get us to a pluralistic model of knowledge (general to local knowledge = a continuum), which when applied to writing instruction tells us:  “[T]he writers who possess local expertise in a domain must continue to rely on the more general strategies of the competent writer when writing outside that domain” (282).

The article, “Threshold Concepts and Troublesome Knowledge: Epistemological Considerations and a Conceptual Framework for Teaching and Learning,” returns us to the concept of threshold concepts that Pete has already written about.

Jan H.F. Meyer and Ray Land revisit a definition of threshold concepts as “conceptual gateways” or “portals” that ultimately lead to a “previously inaccessible” way of thinking about a concept.  “A new way of understanding, interpreting, or viewing something may thus emerge — a transformed internal view of subject matter, subject landscape, or even world view” (373).  They reiterate that threshold concepts “may be”:

  • transformative
  • irreversible
  • integrative

Meyer and Land describe the “conceptual framework” that they offer up in this article as providing a means of locating “troublesome aspects of disciplinary knowledge within transitions across conceptual thresholds…” (they call this transitional state of not knowing or being “stuck” “liminal”) “and hence to assist teachers in identifying appropriate ways of modifying or redesigning curricula to enable their students to negotiate such epistemological transitions and ontological transformations in a more satisfying fashion for all concerned” (386).  They describe the tension between thresholds and liminality as that between leading a learner toward a “pre-ordained end” (threshold) and unpredictability — the “‘liquid’ space” (liminality).  On account of this, their framework attends to figuring out how and why certain students experience a transformation while in the liminal space of learning and others get “stuck.”  To understand these phenomena their framework insists educators take into account “the notion of variation within learning” (380).  Student-centered teaching, they tell us, requires a certain “responsiveness” to “variation in the manner in which students engage with the context and content of learning” (380).

The “emerging framework” that they offer up in this article is really quite general (by their own admission):  “Ultimately of course it is not for us (and we would not wish) to generalize across the varied and complex settings within which discipline-based colleagues might negotiate such transitions in the context of their own institutions and students” (386).  In other words, Meyer and Land see the value of offering up a generalized heuristic or “conceptual framework,” while acknowledging this will be adopted and implemented differently based on the local knowledge of a specific discipline.

While their goal in this piece is not to debate the virtues of general versus local knowledge, as Carter does in “The Idea of Expertise: An Exploration of Cognitive and Social Dimensions of Writing,” Meyer and Land seem to tacitly understand that general to local is a continuum and that both types of knowledge have impact and are, in fact, necessary in an educational space.  Pete puts it this way, “I would say that these problematic concepts represent ones that cannot be understood using only a general knowledge framework.”  Here Pete is referring to threshold concepts when he says “problematic concepts.”  He understands that a pluralistic model is necessary, as do Meyer and Land.

The generalized heuristic is necessary for those “new to a knowledge domain,” as Carter describes novices, in order for them to “gain more and more specific knowledge” (270).  However, “to go beyond competence is to go beyond the reliance on general strategies” (Carter 271) — just as Meyer and Land acknowledge that to help remove problems with teaching threshold concepts we might “create an authentic scenario…that presents an opportunity to think like an economist” (384).  In other words, to think in a discourse community immersed in the local knowledge of a specific discipline.  I find it interesting that this 2005 piece on the psychology of learning treats as a given what Carter worked very hard to make a case for in 1990.




October Provisions Session – The Culture of Assessment

For the podcast from the session, visit the mediasite or our Soundcloud page – Mediasite / Soundcloud

The 2nd Provisions session of the year was on the Culture of Assessment. 27 attendees were present to hear talks from John Dion, of the Marketing Department in the School of Business, Dr. Claudia Lingertat-Putnam, the deputy chair of the Counselling and CSSA programs, and Dr. Stephanie Bennett, from the Sociology Department.

John Dion kicked things off by sharing his experience of assessment from the School of Business. He asserted that the main desire of accreditors was to see that professors had a good understanding of what students should leave the program knowing. More specifically, which components of the curriculum are expected to teach students the target material, how to assess progress toward learning targets, and how to make changes if students are failing to make sufficient progress. Then, using the School of Business as an example, Dion outlined the process of developing learning outcomes, and curriculum mapping through which those desired outcomes may be realized. To see Dion’s PowerPoint presentation, click on the link – Assessment for Provisions

Next up was Dr. Claudia Lingertat-Putnam. She demonstrated how her program uses data to inform what they are doing. As of now, the school of education is operating under a conceptual framework in which there are eight standards to meet – Provisions Flowchart Program Assessment (The presentation).
The counselling students are evaluated in three phases during their master’s program to ensure they are meeting standards. Dr. Lingertat-Putnam described how well-designed rubrics, including those on Chalk and Wire, may be used to facilitate evaluation of student progress across learning domains. Chalk and Wire is a particularly useful tool for professors in that, once a rubric has been entered, it provides detailed student evaluation reports. Dr. Lingertat-Putnam’s student learning outcome assessment data, collected via a Chalk and Wire rubric, showed that her students were struggling both with their writing and with APA style. With these problems highlighted, changes could be made; given the nature of the two problems, a trip to the writing center was chosen as the remedial action.

Last but not least, Dr. Stephanie Bennett focused on the use of rubrics and the important role that they play. Within the Sociology Department, she and her colleague created pre- and post-test rubrics. Based on their experiences, Dr. Bennett recommends that professors create rubrics that are divided by specific learning outcomes, and in which instructions are detailed and expectations are explicitly stated. Having implemented such rubrics, and thus improved communication between her students and herself, Dr. Bennett discovered that her students were more knowledgeable than she initially realized. Bennett explained that, while her students possessed the intelligence and understanding of the target material necessary to complete assignments well, they depended on clearly stated expectations to demonstrate that intelligence and subject comprehension. For example, some of Dr. Bennett’s students realized only when writing style was included on Dr. Bennett’s grading rubrics that their manner of writing was as important to their grade as the content of their paper.

After the three presenters had their say, the floor was opened up for discussion. These were a few of the highlights:

  • Chalk & Wire can track progress over time, and inform professors on what works and what doesn’t.
  • Adjunct professors need to be more integrated into the process in order to maintain consistency throughout the faculty.
  • Faculty development in the form of mentoring can help to inform and reach out to adjuncts.
  • Writing skills need to be emphasized and reinforced throughout each year of the program.
  • Clearer guidelines and better communication with the students have generated richer discussions within the classroom.
  • A lot of data is collected, but it does not always make the transition to practical information.
  • Students are honest about their own performance, especially when working with well-developed, clear rubrics.

For the podcast from the session, visit the mediasite or our Soundcloud page – Mediasite / Soundcloud

The Bumpy Road to Expert Status

Articles Discussed
Carter, M. (1990). The Idea of Expertise: An Exploration of Cognitive and Social Dimensions of Writing. College Composition and Communication, (3), 265. [Read]
Foertsch, J. (1995). Where Cognitive Psychology Applies: How Theories about Memory and Transfer Can Influence Composition Pedagogy. Written Communication, 12(3), 360–83.
Meyer, J. H. F., & Land, R. (2005). Threshold Concepts and Troublesome Knowledge (2): Epistemological Considerations and a Conceptual Framework for Teaching and Learning. Higher Education, (3), 373. doi:10.2307/25068074 [Read]
This week, Jenn brought me to her domain, assigning a couple of articles (Carter and Foertsch) that discuss composition theory.  Both authors arrive at a pluralistic model to understand the process that moves a student from novice writer to expert.  Along the way I was able to learn more about the two opposing schools of thought:  1) novices learn by using general knowledge (“universal, fundamental structures of thought and language”) to develop expertise as writers and 2) local knowledge is the key, as knowledge is “constituted by a community and writing is a function of a discourse community.”
Each of these opposing models was expressed in various terms and contains important concepts, outlined in the table below:

Most interesting to me were the alternative models of how a student moves from novice to expert, which I attempt to summarize, greatly simplified, below:

  • General Process Model: Experts, through experience, have developed more effective general strategies than novices.  These general strategies can be transferred from one domain to another and are thus more powerful than those that derive from local knowledge.
  • Local Knowledge Model: Experience in a domain is the dominant way a novice progresses to become expert in that domain.  General knowledge is not sufficient to advance within a discipline.  (The idea of experience is given a framework within cognitive psychology by Foertsch, as she distinguishes between semantic and episodic memory and their role in solving new problems we encounter.)
  • Pluralistic Model: General and local knowledge fall along a continuum and novices move along that continuum from general to local.  Absent the same knowledge and tools to integrate new knowledge as a disciplinary expert, a novice will rely on general knowledge strategies to acquire local knowledge.  In this manner, the novice acquires more local knowledge and eventually can operate primarily from a local knowledge approach within the domain.

I added the third article (Meyer and Land) to my list this week because the process of acquiring expertise within a discipline is a central problem addressed by the authors — in their case through the use of a framework of threshold concepts.  I provided an overview of threshold concepts in an earlier post, but I think it would be useful to revisit the idea in light of this pluralistic model of expertise.  The central idea of threshold concepts is that there are certain important concepts in all disciplines that are particularly difficult for students to grasp.  Within the models presented by Carter and Foertsch, I would say that these problematic concepts represent ones that cannot be understood using only a general knowledge framework.

In large part this is because another necessary criterion of threshold concepts is that they are particularly troublesome, and often the underlying knowledge is counterintuitive to the uninitiated.  The ugly underside of this aspect of threshold concepts is that they often go unrecognized by experts in the discipline.  Having fully integrated these concepts within their larger disciplinary knowledge base, experts may be blind to the troublesome nature of these threshold concepts and not fully appreciate or even recognize the struggle their students face.

While difficult to move beyond, threshold concepts, once understood, are transformative to students’ understandings and are integrative in the sense that they help provide a more unified and comprehensive understanding of the discipline.  As important and significant mileposts along that disciplinary continuum, threshold concepts may be those anomalies where the accumulation of general knowledge does not provide a sufficient basis to continue their progress in the discipline.  Constructing learning opportunities that infuse a unique local knowledge perspective at these junctures may aid in pushing students along that continuum, happily moving ahead until the next troublesome concept slows them once again.

Getting Students to Ask Their Own Questions

Last year’s Provisions Fellow, Dr. Jim Allen, focused on the idea of asking questions as key to critical thinking.  One of the texts he wrote about and presented on was Dan Rothstein and Luz Santana’s book, Make Just One Change:  Teach Students to Ask Their Own Questions.  Pete and I added the book to our reading list for this year (and I would go so far as to recommend that it be required reading, along with John Bean’s Engaging Ideas, for all future Provisions Fellows).

I am going to use Jim’s description of the Question Formulation Technique (QFT) that the book is based on and then move on to notes from my own implementation — broken into the seven steps:

They discuss a questioning strategy they refer to as the Question Formulation Technique (QFT). The focus of this strategy is to help students develop their divergent, convergent and metacognitive thinking abilities.

As the name suggests, the method is very structured in its approach in teaching students how to formulate, modify, improve, and use questions to deepen their learning. There are seven basic steps of QFT where both the teacher and students work collaboratively in the process. The teacher is involved in facilitating the process by setting a “focus” for the questions, discussing with students a set of “rules” for the process of creating questions, monitoring students so that they follow the rules, and providing direction for using the questions to learn specific course content.

Recently, I decided to try out the QFT process for the first time with my students with the goal of  creating focused research questions for their midterm projects.  Since they are all working with different course readings, based on their own selection, I put them in small groups that shared the same chosen course reading.  For this reason, I needed to come up with a total of five QFocus prompts — a different one for each group.

  • “Conservatism of Emojis”:  emojis can be cultural artifacts
  • “Media Ecologies”:  genres of participation describe online practices
  • “Why Youth Heart Social Networking Sites”:  The importance of youth participation in new media
  • “Activists”:  civic engagement is influenced by digital technologies
  • “The Internet”:  The internet means interaction/connection

Despite my reservations about following the QFT’s strict rules and rigid guidelines, we proceeded through all of the steps (saving the last step of reflection for our next class meeting session) as described by Rothstein and Santana.  Here are some notes/thoughts/recollections on each of the steps:

1.  Discussion of QFT RULES (5-7 minutes):  The four rules are deceptively simple (as is the entire process, really):  1) Ask as many questions as you can.  2) Do not stop to discuss, judge, or answer any question.  3) Write down every question exactly as it is stated.  4) Change any statement into a question.

To facilitate discussion, I went over the rules and then gave my students a copy of the templates on pages 52 and 53 of the book, which give them space to describe whether they think the rules will be easy or difficult to follow and why.  After filling out the sheet, they discussed their answers as a group.  There was little consensus among group members; however, overall the class as a whole felt that most of the rules would be easy to follow.  #2 — not discussing or answering any questions — was described as the most challenging because students said that they tend to instinctually want to respond when they hear a question posed.  One student was honest in his description of why #1 — generating questions — would be challenging:  Because how do I know what the right questions are to ask?  How do I know how to formulate questions?  (paraphrased response).  This response is important because it actually describes the genesis of the book and the QFT:  The authors discovered that parents of students in the school they were working in didn’t know how to ask questions that would enable them to participate in their children’s education.

I was immediately surprised by the discussion because I had felt sure that discussing the rules would result in students sitting there silently.  I wasn’t convinced, having never done such an assignment before, that they’d be able to envision what they were about to do and discuss it with any depth.  But they did!

2.  Produce questions (5-7 minutes):  I wrote a reminder to myself:  Do NOT give examples; Only tell them they can start questions with words like what, when, how….  The goal here is to make sure students produce their own questions without any influence from the teacher.  Our role is to monitor time and observe the group work, only to remind students of the rules in case they forget.

It took a bit of prodding to get a couple of groups working on this collaboratively rather than writing down their own individual list of questions, but beyond that, the students followed the rules and seemed engaged for the entire process.

3.  Introduce closed- and open-ended questions — As a class we defined these.  For example, “Is it going to be on the test?” (closed) vs. “What will be on the test?” (open-ended).  Is/do/can tend to lead to closed questions, while why/how questions tend to be open.  Both kinds of questions can result from what, who, where, and when.

The students had no problem defining the types of questions, but when it came to labeling their questions they found that some questions were tough to categorize — that is, they could result in one word responses (closed), but that really they would need more elaboration than that.  This wrangling is a typical and important part of the process.

Ultimately what matters is the discussion around dis/advantages of both types of questions.  The students had no problem coming up with these:

  • Disadvantages of closed:  you get only one word answers, might not get enough information, doesn’t facilitate further thinking/discussion
  • Advantages of closed:  cut and dry, not confusing, doesn’t go on a tangent, could produce more follow-up questions
  • Disadvantages of open:  too much information, could get boring or go on tangent, answer could be confusing, not clear-cut
  • Advantages of open:  elaboration, allows more thinking about the subject, get more information

The point is to show students that there is clear value in asking both kinds of questions.  The students then label their list of questions as either C or O.  Lastly, they change one question into its opposite form (4 mins), because “being able to change the questions makes them feel more confident about working with questions and figuring out how to solve problems for themselves” (82).

Even more importantly, in my mind, is the idea that by changing the wording of the questions, students can see more clearly that “the construction and phrasing of a question shapes the kind of information you can expect to receive” (85).

4.  Prioritize the questions:  Students collectively choose the three questions that will best help them meet some kind of criteria you’ve established.  For me this goal was coming up with a research question, and so I prompted them to choose the questions that would best help them shape and move forward with their research project, and I added a reminder that the first draft should reflect what they’re trying to figure out; what they’ve learned from their sources; how they respond to their sources.  The goal for me in this process was to get them to “think about a problem from a different angle” and to “unlock something” (114).

5.  What to do with all the questions:  As I mentioned in the last step, the purpose of our question generating activity was clearly established from the start — to produce a research question for their midterm projects.  Rothstein and Santana, however, give many examples of uses for student-generated questions at the beginning, middle, or end of a unit or class — for example, to open class discussion, to guide a reading assignment, or to prepare for a test.

6.  Reflection:  

“[Students] need the opportunity to name for themselves what they have learned, why it is valuable, and how they can continue to use the process of creating questions beyond this one assignment” (117).

This last step is important because it is yet another aspect of this process that reinforces metacognition.  By asking students to name what they have learned, “they deepen their learning, develop greater confidence for moving forward…, and reveal…a new depth of understanding that may not have previously been detected” (119).  Requiring students to reflect on the activity should also help save it from the realm of a “classroom exercise whose importance teachers recognize, but that the students follow only because the teachers require it” (120).

I saved this step for the following class period, as we had already spent more than forty-five minutes of class time (this is the minimum amount of time needed the first time you implement the QFT with a class) on the activity.

I stuck to the simple metacognitive question:  What did you learn?  I then asked them to follow that up with:  Did it help you (and how did it help you) move forward with your project?  I had them write their responses, and I received feedback like:

  • I learned how to ask questions that really allow me as the writer to expand my thinking and come up with answers that will help me in the writing process. I also learned that narrowing down the important questions is difficult but is definitely vital to the whole process and will allow me to write a better essay.  It is important to utilize the right type of question in order to get the right answer.
  • The sheets we did on monday made us ask questions about our topic and think a little outside the box.  It helped put some of my thoughts on my topic into words.
  • From mondays class I learned new ways to discover questions on certain topics. This helped me formulate an opinion and think about ways i could express that idea. It did help me while writing my piece because i resorted back to it when i became stuck and it helped me to keep writing.

[Side Note:  I indicated at the start that there are seven steps involved in the QFT.  The first step is for teachers outside the classroom — and that is developing the QFocus prompt.  While I did not detail the “how-to” for this step, I gave examples of my own prompts.  Further information for this step can be found on pages 28-35 of the book.  I numbered only the steps that actually occur in/during class.]

Flipping Again

In our last post, Jenn and I provided reflections on our experiment with flipped library instruction.  In this post, I wanted to provide some additional context for library instruction, as it’s been my experience that many faculty are not quite sure what this entails.


One-shot instruction: not always the best medicine.

The goal of library instruction is to build a range of competencies in students, often referred to as information literacy, which will give them a framework for engaging in college-level research. The opportunities for this type of instruction are presented in a less than systematic fashion — often delivered in response to classroom faculty who have research-based assignments.  Much to the dismay of many librarians, the reality is that we often engage in “one-shot instruction,” which — just as it sounds — happens once without much opportunity for follow up or assessment.

However, the range of new pedagogical strategies in the classroom presents new opportunities for rethinking library instruction.  Indeed both Jenn and I came enthusiastically to the idea that flipping library instruction could have some significant benefits.  The absence of a teaching lab in the library limits our ability (or conversely — challenges our creativity) to engage students in active learning.  I was excited to be in the writing lab in Albertus, helping students as they worked through their assignment, which the assigned videos had modeled.  While this flipped instruction still can be seen as one-shot instruction, the fact that the videos serve as both pre-class assignments and as semester-long learning assets means that the normal limitations of a typical one-shot class can be overcome.

While there is a large body of research on the flipped classroom, there is yet to be a lot published that focuses on flipped library instruction.  The article referenced below provides a good overview, teasing out the benefits and challenges inherent in this relatively new form of library instruction.  Among the challenges many librarians would face:

  • Logistics.  It is difficult to plan for out-of-the-classroom work for a class that you have not yet met.   Fortunately, Jenn’s goals and mine as Provisions Fellows dovetailed nicely and helped eliminate the usual logistical issues; but on more normal one-shot requests, this issue could be particularly challenging.
  • Engagement.  It is always a challenge for a librarian who sees a class once during the semester.  Think substitute teacher and you have an idea of the challenges we face in engaging students and gaining their trust.
  • Time.  Creating and editing instructional videos, I quickly discovered, is very time-consuming.  I was fortunate to have a good deal of lead time, but this would not typically be the case.  However, I do think I would tend to get better — and quicker — with experience.

Part of what Jenn and I are discovering this semester through our work together and through an examination of the research on first-year students is that many of these students struggle to adapt to higher expectations and a new information environment at the college level.  One-shot instruction is simply one tool — and perhaps not the most effective one — to help first-year students build that “research toolkit” that will let them progressively improve their ability to find and utilize resources in their new and complex information ecosystem.

As I read more research on teaching first-year students and as I gain experience trying new approaches to library instruction, the suggestion that Stephanie Bennett offers in the concluding Provisions meeting from 2013-14 to “change just one thing” resonates strongly.  While there are many changes at the institutional level that can address the transitional needs of first-year students — both generally and in the area of information literacy — the status of that larger process should not hinder or delay the individual efforts I can make to try to improve the things I do in the classroom.

W.B. Yeats once said

Life is an experiment.

I’m running with this, thinking library instruction is an experiment, and it’s an experiment conducted one change at a time!

Document Referred to in the Post

Arnold-Garza, S. (2014). The Flipped Classroom Teaching Model and Its Use for Information Literacy Instruction. Communications in Information Literacy, 8(1), 7. [Read article]

The Culture of Assessment

In their 2010 article, “Assessment culture: From ideal to real – A process of tinkering,” California State University Monterey Bay [CSUMB] professors Pat Tinsley, Marylou Shockley, Patricia Whang, Paoze Thao, Becky Rosenberg and Brian Simmons introduce a set of curricular assessment guidelines recently adopted by departments across CSUMB. According to Tinsley et al., implementation of these guidelines will promote “meaningful, sustained, and systematic assessment of student learning,” thus fostering a “culture of assessment.”  Their goals for operating within a culture of assessment include “increased curricular coherence” (i.e., across departments and between undergraduate and graduate curricula), helping students to identify milestones in the learning process and assess their own learning, and improvement of the curriculum to facilitate improved learning outcomes through ongoing assessment.

However, the interest in growing a culture of assessment has not been unanimous, and has even been controversial in the world of higher education. In their 2013 remarks, “Promoting a ‘Culture of Engagement,’ Not a ‘Culture of Assessment,’” the Trustees of Princeton University cautioned that, by striving for a culture of assessment (i.e., with externally benchmarked measures), faculty risk promoting: a) standardization of curricula across departments and institutions, at the expense of diverse, individualized missions, b) the inappropriate evaluation of programs using generic/vague surveys and standardized assessments, c) undervaluing non-benchmarked evidence of learning, d) overvaluing standardized test results while undervaluing real-world outcomes like employment and fulfillment post-graduation, e) teaching toward the tests, and f) self-validation of assessment policies with no external evidence to support their efficacy in improving real-world learning outcomes. In order to avoid these pitfalls, the authors recommend that institutions of higher education foster a culture of engagement rather than assessment.


Tinsley, P., Shockley, M., Whang, P., Thao, P., Rosenberg, B., & Simmons, B. (2010). Assessment culture: From ideal to real – A process of tinkering. Peer Review, 12(1).


The Trustees of Princeton University. (2013, September 12). Promoting a “culture of engagement,” not a “culture of assessment” [Remarks to presidents of the American Association of Universities (AAU), prepared for delivery at the AAU meeting on Oct. 23, 2012].