This is Jerry Green's Typepad Profile.
Jerry Green
Recent Activity
Michael X: Exactly right about the AoC track. The title of this series is a bit of a give-away that I have some thoughts about good syllabus design, so I want to make sure folks at least have the right questions on their radars. I think one-on-one meetings to talk over their choices is the only way to do that. Recent phd: Thanks!
Thanks Amanda! Anonymous, I will do my best to remember to post an update. And you're right, the teaching sessions will be delivered to the rest of the class. I usually do a detailed course evaluation on my own at the end of class, so I'll be able to use that to see whether there are any big differences between how people thought about the class (though the combined sample size for both classes will only be about 20).
Good question, anonymous, and thanks! From what I can tell, the students seem to be on board with the different kinds of work involved, though of course they may have their own thoughts that I don't hear. But I have a pretty friendly, informal relationship with most of the grads here, and we all talked through the set-up in advance, so I think (hope) they'd tell me if there were any major problems. From what I've seen so far, there are two complications to this way of doing things: 1) It's complicated. You have to sign up for different tracks, and different things are going on all the time. So you have to be more concerned with due dates and the like than you normally would. And you can't rely on your peers to remind you 'oh yeah, we all have a paper due in 2 weeks' or whatever. 2) It's not clear that the tracks are equally difficult. I did my best to make things come out roughly equal, but I won't really know if that succeeded until after the term is over. Re: (2), I will say that the AoS track looks more difficult than the rest, but that's at least partially misleading. The AoC track requires teaching 2 classes, effectively, and that's more work than it looks. And I'm picky about good syllabi, so that assignment will also take more work than it otherwise might (i.e. you can't phone it in). FWIW, I have one student who was pretty upfront about trying to pick the least demanding track, and they went with the exams. But even if the workload isn't quite even, I think that they think it's OK. If you're an ancient specialist, you're going to put in more work anyway, so you might as well have that codified. Same goes, mutatis mutandis, for the others. Tying the differences in work to different, self-selected goals helps justify those differences a bit, I think. But like I said, to some degree we'll just have to wait and see.
Two issues here: 1) How late in the year can one get an offer? and 2) What is the typical length of time between the application deadline and start of interviews? For 1) my experience is in line with Marcus and the comments above: I've seen cases of offers as late as June. For 2), unclear. In my own experience, things moved very quickly: I'd guess the average time between deadline and contact about interviews was short, maybe 2 weeks. But this is a small sample size. Someone more enterprising than me might compare the dates of the interview notices on the jobs wiki (http://phylo.info/jobs/wiki) with the deadlines on philjobs.org to see what the stats are.
This is great, Helen. I don't do X-phi (though I dabbled a bit early in grad school), but I think this is a great example of structuring courses in a way other than just 'read some stuff, write some papers'. I especially like the idea of trying to replicate a result. And focusing on the practical upshot of the course is critical. I'd love to hear your thoughts on things you've learned along the way that you think might translate to other kinds of courses.
Thanks for this post, Shane. I thought it was both thought-provoking and even-handed.
What a great way to put it, Tim. 100% right, I think.
Hi Tim. Thanks for your insightful comment. Since it's clear you have a pretty nuanced view, I'll raise a few points that are more inspired by your post than attempts to directly rebut it. As a description of a widely-shared attitude, you're certainly right: many people hold the view you describe, even the overstated version. One thing I'm trying to do is push back against this attitude, so I'm at least partially concerned with what we should value rather than what we as a profession do value. But I wonder if I can make a stronger case than this. I think 'the point of the entire academy is to produce new knowledge' isn't the end of the story: what's the point of producing new knowledge? One answer is to share it with other experts so they can make new knowledge too. Given how little publications are read, even by other experts, I'm not sure we actually pursue this value very much in practice. But set that concern aside: even granting the point, it only pushes the question back: What's the point of producing knowledge so that other people can produce knowledge? One answer is that knowledge is intrinsically valuable. That's true to a point. But I don't think it's the whole story: to borrow one of your arguments, I don't think the academy is set up to be conducive to the pursuit of knowledge for its own sake. We produce new knowledge so we can disseminate it so that it can be used. And there's a name for disseminating new knowledge so it can be used: teaching. As you rightly note at the end of your comment, students are here to learn from the best, and being the best requires staying current in your area of specialty at least. But that just shows that you have to care about both research and teaching, not that teaching is a wholly subservient end.
And if we're pursuing cutting edge research because students are here to learn from cutting-edge researchers, then that sounds to me like we're researching for the sake of being better teachers, not teaching in order to pay the bills to allow for research. I suppose I should reiterate a comment I made above: Both Jason's original post and my analogue are purposely over-simplified. His advice is not 'Do as little teaching work as you can get away with' and mine isn't 'Do as little research as you can get away with'. For a grad student who is liable to be distracted/overwhelmed by the day-to-day demands of teaching, Jason's advice is a helpful corrective. I'd like to think the same holds true of my advice for people who treat teaching only as a chore to be quickly discharged. As Marcus has written on the blog several times, researching and teaching can be mutually reinforcing rather than in competition (Heck, I'm teaching a seminar on my dissertation topic in the Spring, and used a forthcoming paper in a survey course this term). So ultimately a one-sided attitude is counter-productive. But since we're much more likely to hear 'research is the only thing that really matters', I think it's useful to hear a message to balance it out from time to time.
This is interesting: your questions, Marcus and Elisa, made me realize something about rubrics that I had never explicitly thought about. One clarification first: in the style of Jason's original post, I was writing in a somewhat tongue-in-cheek, overly general way. The kind of procedure I was thinking of for point 5 was something I did myself when I first started grading, and saw many of my colleagues do as well. Namely, they'd respond to a student's paper as if they were taking notes on a seminar paper, or a conference paper they would give comments on. That is, they'd note every possible objection, every qualification, every omitted relevant consideration, etc. This might work if you're giving feedback on a graduate seminar paper, or maybe for advanced undergraduates, but certainly not for intro students. This procedure lies at one extreme. The other extreme, of just marking a checklist with no explanation, would be just as bad. One gives too much information, the other too little. So, much as Aristotle advises us when we're disposed toward one extreme to aim for the other, my advice to use rubrics, not comments, is aimed at someone tempted to give too many comments. OK, clarification over. Here's the thing I hadn't noticed before. Looking at some of my own rubrics, I realized that the criteria I use don't actually substitute for comments, because every time I don't give someone credit for a criterion I give a comment explaining why. What the rubric does do is focus my attention as a grader. There are lots of things I could comment on in any given paper, but a rubric helps me focus on the things that are most relevant to the student. It also helps the student contextualize the comments I give, because they're in reference to a specific feature of the paper. This discussion definitely makes me want to write a separate post on rubrics. Elisa, I'll address your question of what to include in it in that post.
Thanks Marcus. I'm pretty sympathetic to your comments, so I suspect we disagree less than it appears at first. On point 2, I think we only disagree about (c). I meant (c) as something like a side-constraint: one of your main goals as a young academic is to publish enough to get a job, but there are good and bad ways to pursue that goal. Publishing at the expense of teaching is a bad way, publishing while teaching effectively is a good way [and as you've written before, teaching and research can be mutually reinforcing rather than competing]. You're quite right that academic success requires a lot of work on both sides, so really we should accept both Jason's advice and mine: Never sacrifice research for teaching, and never sacrifice teaching for research. You gotta find a way to do both (hence 2a and 2b). [That said, I'll admit that 2c is a bit moralistic, as are 1 and 3. One thing I'm trying to do here is push back against the mindset that teaching is the unpleasant/unimportant price we pay for having a research-conducive job (which, to be clear, is a mindset that you, Marcus, clearly don't share). Maybe that's impractical from an instrumental reasoning standpoint. So be it.] For 5-6, you're totally right about (i) incentivizing student use of feedback and (ii) positive feedback, both in tone and in content. 100% agree. I think our disagreement here is mostly just about the mechanism to provide the feedback (coincidentally, I also allow paper R&Rs if the first draft isn't up to snuff). But now that I think about it, I may be assuming a more robust conception of rubric than I suggested in the post. What I have in mind isn't just a check-list. Maybe I should write a post about this. Maybe I should also say more about peer-grading: One way I use it, as I mentioned above, is for daily quizzes. Another, which I didn't mention, is for paper first drafts.
And this works wonders as a tool to help teach good writing, because it's much easier to recognize issues in other people's work than in our own (in fact, most of the lessons I was taught as a student didn't register until I started grading myself, and I had that 'Aha! *that's* why Professor So-and-so always said X' moment). You wouldn't give peers responsibility over the final draft grade, of course, but early on it can be quite useful. While we're on the topic, though, I'd be interested to hear more about your experience with giving students lots of paper feedback. It sounds like your experience is different from mine (though I realize I was restricting my attention to lower-division courses, which probably makes a big difference). Is there anything you do in advance to prep the students for the feedback? And do you have any tips for giving plentiful comments quickly? How long does it take you to do 1-3 pages?
Most of this has been said already, but worth repeating/emphasizing, I think: 1) Jaded is right that some schools won't seriously consider ABDs for jobs, even if the ad says otherwise. And you can't blame them: given the oversupply of candidates, it's an avoidable risk (and I say this as someone who went out *very* ABD last year). But not every school is like this, and you often can't know which is which. So I think it's better to do what you can to minimize problems across the board, even if it's futile in specific cases. 2) So what do you do? One option is to have your letter-writers address the issue as well. This is especially true if you're only delaying the defense for bureaucratic/financial reasons. Having your committee chair (and maybe a second recommender) say "The defense is ready to go but..." or "the defense will be ready to go, but..." carries a bit more weight than when you say it (though again, as Jaded notes, not in every case). 3) Another option is to stress evidence of progress/completion. As Marcus suggests, you can post (parts of) your diss on the research section of your website. If possible, you can also say things like "I presented Ch. 1 at so-and-so conference" or "Ch. 2 is under review as a free-standing article". 4) Your research statement is crucial here. You must show (not tell) that your dissertation is near completion. This can be difficult, because tone matters as much as content. Minimally, it should be in present tense (not future), be fairly detailed (but not too long), and should reflect a kind of big-picture confidence that each chapter is worked out on its own and also fits into a coherent whole.
Yes! Great idea Marcus! And readers, I'm always open to suggestions for blog posts. I'd much rather write about what you find interesting than what I find interesting.
One more thought, about avoiding the problem rather than addressing it. As you mention, putting the students in control is good for both student and instructor. I often will allow for extra credit assignments that put the rounding-up burden on the student rather than on me. I think they're a lot like your Borderline Points system, except I just treat them as extra assignments. For example, I'll let them do a small number of one-page response papers or something, worth 1% of their final grade each (so, much more work for these points than normal). Anyone who is in range of getting rounded up can choose to do some more work to get over the hump, if they want. But it's on them. I also like to allow re-do's on certain assignments. For instance, if I assign three short papers, I'll let them R&R them if they get a C or worse, with a new grade of up to 80%. (And I do mean R&R: they have to write a cover-sheet explaining what they fixed and why.) It's a very powerful teaching tool, and it also lets them do the work to 'get rounded up' themselves. Third, for lower-division courses I have small, low-stakes quizzes every day. They're worth about 1% each, but I'll offer more than I grade; this semester, I give 38 quizzes, but you can only get 35 points total. This built-in curve allows students to miss a few quizzes' worth of questions throughout the term without penalty. This means that they're less likely to end up with just a point or so less than they needed, and if they do they have a clear explanation of why (e.g. they blew off 5 quizzes instead of 3). One principle underlying all these approaches is to give the students several opportunities to get the grade they want, and if possible more than one avenue for doing so. Done right, you minimize cases where one assignment is directly responsible for the grade. And putting the students in charge of their own grade as much as you can makes your life a lot easier, not to mention making the students happier.
Good post, Trevor, and good discussion. I have an explicit policy in my syllabus, as follows: "The letter grade cut off for, e.g., an A- is 90.0, not 89.5 or 89.9. But I may choose to round up in exceptional cases, if (i) I feel you’ve done better or worked harder than your grade suggests, and (ii) your grade is not due to excessive missing assignments. This is a courtesy, not an entitlement." As you can see, it basically just codifies the problem you're asking about. Technically, cut-offs are strict and well-defined, and there's no obligation on my end to consider rounding up in borderline cases. But I may do so in certain cases. In practice, I'm much less strict than the policy makes it seem. This is mainly because, as you note, grading is highly fallible. So, in practice, I think it's safe to assume that I could have given them an extra 0.n points somewhere during the term. And the way I run my classes, students have to really try not to get the grade they want, so it's usually pretty clear who deserves rounding up and who doesn't. One point about effort: you're right that it's hard to discern how much effort a given student is actually putting in, let alone how much that effort should determine their grade (We wouldn't dock points from students who don't have to study for lack of effort, after all). I set up my classes explicitly so that you can get an A just by working hard consistently: there's basically a threshold above which high quality work doesn't make your grade better. At least, that's what I do for lower-division courses. Upper-division/grad courses are different, but in lots of ways that make rounding up less of an issue anyway (at least in my experience).
Agree with Lauren: Way too much time between submission and presentation. I'll occasionally present at one of the group meetings, which often have later deadlines.
One other thing: Asking to remove the paper title from a website is reasonable, since it's easily google-able. Asking to remove the title from a CV seems less reasonable. If you embed your CV as a viewable doc or as a downloadable file (like I do here: http://jerrygreen.weebly.com/cv.html), then I don't think a ref would be able to find it unless they're already reading A's website (at least in normal circumstances).
Interesting case. I've never heard of an author going to those lengths to preserve blind review. Since I'm a big proponent of blind-review preservation, I'm basically on X's side here. But I think there's a decent compromise I would use if I were A. Basically, I'd keep the entry on the CV, but with a format like: (2015) "Comments on [Title and Author removed for blind review]", Actual Name of Conference. I think you can get away with keeping the conference name, because most readers wouldn't be able to connect X and A unless they were at X's session. I suppose it's possible that a single person could (i) be on a committee and see A's CV while (ii) simultaneously being the referee for X's paper, and (iii) also be one of the conference participants, and so infer from A's CV that X's paper was one of those papers presented. But I don't think that's a problem, because (in addition to the very small likelihood), if a referee were at the conference and could remember X's talk, they could make the connection regardless of A's CV. So, all in all, I think this approach would allow A to get the benefit of keeping the conference name on the CV without adding to the likelihood that X's anonymity is damaged during the review process.
@Teacher: Thanks! Good luck on your first day. @Gradjunct: Excellent handle, by the way. I can sympathize with your situation: I've got just shy of 200 students this term myself. I'm not sure if there are any shortcuts to learning all those names. I find that I just have to plug away at the photo roster until it sticks. I've found it helpful to keep the roster on me and work on it during downtime, e.g. waiting at the bus stop, or while lunch is in the microwave, or while Netflix is loading. But here are a couple small things that might help: 1) Use names as much as possible in class (e.g. ask for the student's name every time you talk to them or they speak up in class). When you email with a student, look at the roster to pair the face with the name. 2) If possible, have the students sit in the same place every class, so you can at least associate people with places (I find this especially helpful when two students look kind of similar). 3) Break the task into chunks: just try to memorize the names of the first 15 people on the roster in week 1 (or the people in the last 2 rows, or whatever), the next 15 in week 2... 4) There are also some mnemonic devices that seem to help, like associating each student with an alliterative description (e.g. Alice is an athlete, Bob is a Browns fan, etc). Accurate is good, but sometimes funny or nonsensical descriptions can be more memorable. 5) Focus on unique features, e.g. 'Carlos has the really long beard, Diep dyed her hair blue'. 6) Try to split your attention between talkative and quiet students. You'll learn the names of the students who speak up every class pretty quickly and effortlessly. So focus your efforts instead on the quieter students. One common thing that *doesn't* work, in my experience anyway, is name tents (i.e. a folded paper on the desk with their name on it). For me name tents are a cheat that just allow me to read their names, rather than actually learn them.
But lots of people use these, so there must be something to it: maybe you'll have better luck than I do. Hope that helps. Good luck to both of us!
Thanks, Brad. Based on my own experience, I think you're exactly right. One thing I probably should have said in the original post is that learning outcomes are often viewed as this annoying, useless, externally imposed requirement from some administrator who likes pedagogical fads. But they can be really helpful, powerful tools if properly used, for the reason you mention among others.
If the only criticism were that a paper relied on an as-yet underdiscussed view, then I don't think that would be grounds for rejection. But I suppose there's a difference between 'I assume underdiscussed view X, and X -> Y' vs. 'If X is right, one consequence is that Y'. I don't want to accuse the OP of this, but one trend I've seen in talks lately is to stipulate the controversial bits as a starting assumption, and spend the talk following out pretty trivial implications of those assumptions. So I wonder how much this could be resolved just by framing things a different way. That said, I do worry about a possible Catch-22 here. If you rely too much on an underdiscussed view, you risk a referee thinking that the paper isn't of sufficient interest (though Marcus is definitely right about the self-fulfilling nature of what philosophers find interesting). If you rely instead on a more thoroughly vetted view, you risk a referee thinking that the paper doesn't make a sufficient contribution to the debate.
Here's what my program does (big state school, top-20 FWIW): 1) For the last few years, two faculty have jointly run something like a reading group/pseudo-seminar for job marketers. The format has changed a bit, but basically we meet every week early in the semester, and spend each meeting discussing one job market document (so one week on cover letters, one week on teaching statements, etc). We have to submit our materials in advance, and then everyone gives feedback on everyone else's stuff. We do mock interviews with a set of different faculty later on, closer to the real thing. We also have a job market wiki that we've been adding to for the last couple years. 2) Our placement rate has been OK, relative to the discipline as a whole anyway (3/5 this year, only counting current students). I think we do a good job with document prep, especially when it comes to thinking through the different ways you might approach each part of the application. 3) The main thing I think we're weak on is understanding the different kinds of jobs out there, and what makes you competitive for them. All of our faculty went to top research schools, and got hired at top research schools, so that's all they really know. The very things that make you look good to that kind of school can harm you in other contexts. Second, our dissertation committees are too big (5-6 people usually, I've seen as many as 8 on one committee). This means that we have too many letters, and it's hard to know which to use when you have to be selective. Finally, with one or two exceptions, my sense is that our faculty are not very proactive after you've applied. It would be a better world if you didn't have famous faculty calling the search committee independently to sell their students, but it happens, and so I think you have to react in kind.
Interesting stuff, Michael X. As Preston noted, I left out a category: it looks to me like the one that was missing is pretty close to the way you like to do things. I had to look up the doctrine of medium specificity, but when it comes to preparing courses this doctrine looks pretty good to me. I'm curious to hear what you don't like about it, especially if you think the problem applies in both art and pedagogy.
#dang. And I thought my teaching portfolio was long (c. 30 pp)! Just to add to Marcus's 9:15 comment: I did something similar: one unedited set of comments (c. 12 out of a 20-25 person class), and linked to my website where I posted the rest. I think this approach strikes the best balance you can get away with. On the one hand, since you've got a full set of unedited comments, and make the rest available, you're not trying to hide anything, and I think there's something to be said for a confident display of your information. On the other, the only people who care enough to actually go look at the info online will know enough about teaching to understand how variable written comments can be, and how common it is to get that one very negative outlier.
I'm not going to take a normative position just now, but here are some observations about practical difficulties with the strategic approach: 1) Fads come and go. This means that, by the time you've got a paper ready to go, people aren't interested anymore. This happened to me early in grad school with the situationist critique of virtue ethics. It was *the* topic in VE for a bit, but quickly played itself out. 2) There's a time lag. The work we're reading now was written and accepted months or years ago, and other work you don't know yet is in the pipeline. Unless you live in a centrally located place and/or have a big research budget, you won't be up on all the current work going on at conferences. This makes it both hard to respond to hot topics, and hard to guess which topics will be hot. 3) More competition. A topic is hot if lots of people are working on it simultaneously. This increases the odds that (a) you get scooped, (b) yours isn't the best paper on the topic on an editor's desk, (c) you're competing for a single journal spot (e.g. because they don't want too many papers on a narrow topic), (d) a referee sees your paper as one of a group on the topic, making it look average. 4) Polarizing. If more people are thinking about your topic recently and/or frequently, they're more likely to have strong views on it. This increases the chances a ref won't like your paper. Also, some people are reactionary, counter-cultural, etc: if they see something is popular, they ipso facto won't like it. So a ref might think 'ugh, another unneeded paper on X' from the outset. 5) Unfocused. If you're bouncing around from hot topic to hot topic, it might be hard to have an overarching research project. This could be harmful for job apps or tenure down the road. Of course, these considerations won't apply in every case, could be counter-balanced by other considerations, etc. Even so, we shouldn't assume 'strategy' is automatically the savvier, more practical option.
One thing that I've heard in defense of prestige bias is that institutional ranking really does track student quality, so it's not really a bias (i.e. the best schools tend to recruit the best students). And of course people aren't going to combat prestige bias if they think it's justified. Two reasons I suspect this line of thinking doesn't work: 1) There's very little reason to think that there's an innate philosophical talent that you manifest in your senior year of undergrad (let alone in high school, when you applied to the good undergrad that helped you get into the good grad program). Thinking in terms of innate talent is bad in a number of ways; for a start see http://dailynous.com/2015/01/15/raw-intellectual-talent-and-academias-gender-gaps/ and http://dailynous.com/2015/02/05/more-on-innate-talent-and-philosophy/ 2) Folks at the best schools get a lot of benefits that you don't get elsewhere. Funding tends to be better, which means more semesters on fellowship rather than teaching. It's easier to travel to a bunch of conferences, either to present or just to attend and network. You have more access to more professors, both in your own faculty and among visitors. So, if anything, we should expect the people at the top departments to have more impressive CVs: more publications, more presentations, etc. But at least in my experience it's the reverse: a PhD from a top department with no publications will beat out a PhD with publications from elsewhere. This suggests two concrete steps in response [and kudos to the OP for thinking in terms of concrete steps rather than just complaining]: i) Stop thinking in terms of innate talent, and allow for growth, the value of hard work, etc. ii) Expect *more* from applicants from top schools, not less. If you have more time, more resources, and more personal connections, then you should be outperforming those without these benefits.