What Should Sessionals Be Paid?

According to Gail Lethbridge, it should be equivalent to what tenure-stream professors are paid:

This is because their [sessionals'] pay is significantly lower than that of their full-time peers. An average salary for a full-time tenured professor in Canada is somewhere north of $100,000. A sessional teacher with the same course load is looking at $30,000 for full-time work.

This paragraph is somewhat misleading.  It points to a $70,000 gap that simply does not exist.  What’s the reality?


A typical tenure-stream professor is paid by the university according to the following workload: 40% research; 40% teaching; 20% service. Using Lethbridge’s numbers, that means a typical tenure-stream professor is paid $40,000 for research, $40,000 for teaching, and $20,000 for service.

A sessional is paid only to teach; there are no formal research or service obligations.  If we use Lethbridge's numbers, then the gap is $10,000, not $70,000.

So a reasonable argument could be made that sessionals should receive, on average, $40,000 rather than $30,000.
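For readers who like to see the arithmetic spelled out, here is a minimal sketch of the pro-rating argument in Python. The dollar figures are Lethbridge's, and the 40/40/20 split is the typical workload described above; the function name is mine, purely for illustration.

```python
def teaching_share_of_salary(total_salary: float, teaching_fraction: float) -> float:
    """Portion of a salary attributable to teaching duties alone."""
    return total_salary * teaching_fraction

# Lethbridge's figures: ~$100,000 tenure-stream salary, $30,000 sessional pay.
tenure_stream_salary = 100_000
sessional_pay = 30_000

# Typical workload split: 40% research, 40% teaching, 20% service.
teaching_pay = teaching_share_of_salary(tenure_stream_salary, 0.40)

print(f"Teaching portion of tenure-stream salary: ${teaching_pay:,.0f}")
print(f"Apples-to-apples gap: ${teaching_pay - sessional_pay:,.0f}")
# Teaching portion is $40,000, so the gap is $10,000 rather than $70,000.
```

The point of the sketch is simply that comparing a sessional's teaching-only pay against a full salary that also compensates research and service overstates the gap by the non-teaching portion.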

Of course, there are other, perfectly legitimate issues at play here that Lethbridge mentions, including the lack of benefits, poor working conditions, and the like. There needs to be debate and action on the increasing reliance on sessionals by many universities. No question.

But when making salary comparisons, I think it is important that we compare apples to apples.

Fixing Peer Review (again)!

I don’t know anybody who likes peer review.  The complaints are many but boil down to two main concerns: peer review is way too slow and the quality of the comments varies far too widely.

I want to focus on the second issue.  Part of the problem, I think, is that editors provide too little direction to peer reviewers, and peer reviewers seem to have far too much discretion to assess the manuscript in whatever way they wish.  The result, in my experience, is reviewers who frequently want you to write a completely different paper or book, hammer your choice of methodological tool or theory (based on personal preference), or provide criticisms that clearly indicate they did not read the manuscript carefully enough (to be fair, sometimes this criticism is a signal that the author needs to make things clearer or more pronounced).

I think the days of the “free for all” review need to end.  Instead, editors should consider adopting a list of questions and holding reviewers to answering ONLY those questions (no more additional comments or recommendations!).

Here’s what my reviewer form would look like:

1) Is the argument presented in the paper internally consistent? If not, please identify inconsistencies in the argument.

2) Does the paper make an original contribution to the literature? What is that contribution and what is its magnitude (on a scale of 0-10, with 0 being none and 10 being ground-breaking)?

3) Does the evidence presented adequately support the arguments presented in the paper? If not, identify weaknesses or areas where additional evidence would be helpful.

4) Are there any plausible alternative explanations/arguments, given the evidence presented in the paper, that the author should consider seriously?

I wouldn’t ask reviewers to recommend publication or not.  I would simply limit them to answering these four questions and make a decision based on my own reading of the manuscript and these reviews.

Why these four questions? I think peer review should be about assessing whether the manuscript makes any type of contribution (big or small) to the literature and whether the paper is sound in terms of scholarly rigour.  Contribution is important (e.g. question 2 above) since higher-ranked, general political science journals will probably emphasize larger contributions, but that should be only part of the calculation (many small contributions are just as important as one or two major ones!).  Limiting reviewers to rigour is also important because, far too often, individual reviewer preferences about research topics and questions, approaches, methods, theories, and political leanings seem to take precedence when they shouldn’t.  If I choose to write a descriptive, analytical paper, that shouldn’t automatically lead a reviewer to reject it just because they wish I had written something different (normative or explanatory).  Reviewers instead should be assessing questions 1, 3 and 4.

What do you think? Would you add anything else to my reviewer form? Would this form and procedure generate different outcomes?

Methodological and Theoretical Pluralism: Good or Bad?

Last week I was in Milan, Italy, attending the International Conference on Public Policy.  Unlike many of my colleagues, I had yet to attend an international conference, so this was a very exciting experience for me on a number of levels.

Anyway, a number of things struck me as a result of this conference (and I don’t mean the unbearable heat of Italy in July!).  One was the sheer number of people from different disciplines studying public policy.  On the one hand, it’s a strong sign of a healthy subfield, right?  On the other hand, it seems that a powerful consequence of size and diversity is theoretical and conceptual fragmentation.  In almost every panel I attended, there was significant disagreement about concepts and assumptions within very established theoretical traditions.  For instance, in the panels on “co-production”, presenters and audience members used the terms “co-management”, “co-creation”, “co-construction”, among many others, interchangeably or as meaning different yet similar things.

In one of the plenary sessions, political scientist Bryan Jones noted a similar phenomenon.  He believed that the literature on agenda setting, a concept that he helped invent and pioneer, had seemingly lost its way.  Much of the new literature on the topic, he argued, was no longer in sync with the original theoretical micro-assumptions that he and others had grounded the work in, with predictably negative consequences.

It seems to me that the trends Prof. Jones noted in his talk and the lack of conceptual agreement at the panels I attended were partly the result of the growth and democratization of the academy.  In the past, there were fewer journals, fewer scholars, and fewer students entering and finishing PhD programs.  The result, I think, was a smaller set of high performing scholars writing about public policy (and political science) issues. The demands to keep up with the literature were smaller and the people contributing were the best of the best (I think?!).  As a result, political science and public policy fields and subfields perhaps had more internal conceptual consistency or at least more consistency in terminology. Today, however, with the explosion of new journals and PhD programs, the sheer amount of literature is impossible to read and keep up with.  As a result, you get conceptual fragmentation.

In that same plenary panel, Grace Skogstad gave a powerful defence of methodological and theoretical pluralism, and to some extent I agreed with her. Who doesn’t like pluralism when it comes to publishing our research!?  On the other hand, an important and negative consequence of pluralism that rarely gets mentioned is this trend towards fragmentation.  Embracing pluralism means embracing conceptual blurriness, to some extent. For instance, I use co-production but Bob uses co-construction. Do we mean different things? Well, it doesn’t matter.  What matters is that I cite and speak to the people who favour co-production and Bob cites and speaks to the co-construction people.  I may try to come up with a new definition of co-production that encompasses co-construction, or I might invent a new term, but there’s no guarantee that anyone will adopt my new definition or term.  Even if some people do, others will continue with their preferred term or definition.  Why? Because we embrace methodological pluralism.

What’s the alternative to methodological pluralism? I’m not sure.  Maybe radically fewer journals?  Then again, if you believe in the work of John Stuart Mill, then methodological pluralism is perhaps the only way to ensure truth wins out eventually.

Researchers and Scholars! Beware of your Cognitive Biases!

I am in the midst of reading Joseph Heath’s Enlightenment 2.0, which was shortlisted for this year’s Donner Prize.  It covers much of the same ground as other recent books about how humans think, such as Daniel Kahneman’s and Jonathan Haidt’s books.  Collectively, these books are having a powerful impact on my views of the world and on my scholarship.

Heath’s book is a great read.  It is very accessible and provides an excellent summary of the literature on cognitive biases and decision making (at least it’s consistent with Kahneman’s and Haidt’s books!).

Among many important and interesting tidbits, Heath argues that one of the major problems that all citizens face, whether they are academics or non-academics, is confirmation bias (and indeed there’s research showing that philosophers and statisticians, who should know better, also suffer from the same cognitive biases).  It’s why some scholars insist on the need to reject the null hypothesis when engaging in causal inference.

Yet confirmation bias exerts a powerful effect on how we perceive the world and make decisions. Certainly in my subfield, and I assume in many others involving strong normative debates and positions, there is a strong temptation to accept and embrace confirmation bias.

In the words of Joseph Heath:

The whole “normative sociology” concept has its origins in a joke that Robert Nozick made, in Anarchy, State and Utopia, where he claimed, in an offhand way, that “Normative sociology, the study of what the causes of problems ought to be, greatly fascinates us all”(247). Despite the casual manner in which he made the remark, the observation is an astute one. Often when we study social problems, there is an almost irresistible temptation to study what we would like the cause of those problems to be (for whatever reason), to the neglect of the actual causes. When this goes uncorrected, you can get the phenomenon of “politically correct” explanations for various social problems – where there’s no hard evidence that A actually causes B, but where people, for one reason or another, think that A ought to be the explanation for B. This can lead to a situation in which denying that A is the cause of B becomes morally stigmatized, and so people affirm the connection primarily because they feel obliged to, not because they’ve been persuaded by any evidence.

Let me give just one example, to get the juices flowing. I routinely hear extraordinary causal powers being ascribed to “racism” — claims that far outstrip available evidence. Some of these claims may well be true, but there is a clear moral stigma associated with questioning the causal connection being posited – which is perverse, since the question of what causes what should be a purely empirical one. Questioning the connection, however, is likely to attract charges of seeking to “minimize racism.” (Indeed, many people, just reading the previous two sentences, will already be thinking to themselves “Oh my God, this guy is seeking to minimize racism.”) There also seems to be a sense that, because racism is an incredibly bad thing, it must also cause a lot of other bad things. But what is at work here is basically an intuition about how the moral order is organized, not one about the causal order. It’s always possible for something to be extremely bad (intrinsically, as it were), or extremely common, and yet causally not all that significant.

I actually think this sort of confusion between the moral and the causal order happens a lot. Furthermore, despite having a lot of sympathy for “qualitative” social science, I think the problem is much worse in these areas. Indeed, one of the major advantages of quantitative approaches to social science is that it makes it pretty much impossible to get away with doing normative sociology.

Incidentally, “normative sociology” doesn’t necessarily have a left-wing bias. There are lots of examples of conservatives doing it as well (e.g. rising divorce rates must be due to tolerance of homosexuality, out-of-wedlock births must be caused by the welfare system etc.) The difference is that people on the left are often more keen on solving various social problems, and so they have a set of pragmatic interests at play that can strongly bias judgement. The latter case is particularly frustrating, because if the plan is to solve some social problem by attacking its causal antecedents, then it is really important to get the causal connections right – otherwise your intervention is going to prove useless, and quite possibly counterproductive.

In the subfield of Aboriginal politics, there are powerful incentives to ascribe everything that has gone wrong with Aboriginal communities post-contact to the British and later the Canadian state.  Those who try to say otherwise are routinely hammered and ostracized by the public and some members of the academy, without anyone even taking a moment to consider their work seriously.  Say what you want about the books and articles by Tom Flanagan, Frances Widdowson and Ken Coates, but at least they are providing us with an opportunity to test for confirmation bias.  Causal inference requires eliminating rival explanations! Otherwise, how can you be sure that A causes B?

In many ways, it is for these reasons that I’ve long been suspicious and wary of ideology (and certainty), whether it comes from the right or the left.  Someone who is hard-core left or right, it seems, is more likely to be driven by confirmation bias.  I’ve seen dozens of episodes in my life where ideologues (from the left and the right), or those with strong views of the political world, refused to budge when confronted with overwhelming evidence.  It’s irrational, in many ways.  And so I long ago vowed to try to avoid becoming one of them and to embrace uncertainty. Sure, I will take a strong position in my articles, books, and op-ed columns, but I’m always ready and willing to change my mind.

Perhaps it’s a cowardly way of approaching politics and scholarship (and so I guess I should never run for office!) but for me, it conforms to my goal of striving towards causal inference and certainty.

Forget Robert Munsch, kindergartners need skills training

Published Mar. 21, 2015, in the Waterloo Region Record.

Recently, the government of Ontario announced that it would be asking employers and industry groups to participate in a process designed to transform how universities are funded and operated in Ontario.

In many ways, this announcement is unsurprising in that it is simply the latest development in a long-term trend toward pushing universities to become places that focus more strongly on training students to meet the needs of the Canadian economy.

Universities, according to this vision, need to become sophisticated versions of community colleges, providing students with high-end skills and training to meet the current and future demands of the marketplace.

Predictably, this recent announcement has generated considerable opposition and disgust among my academic colleagues. I, on the other hand, applaud the government for taking this bold and visionary stance in provincial education policy.


What is Community-Engaged Research? A Conversation with Dr. Leah Levac

Over the last decade or so, community-based participatory research has become a more prominent feature in the discipline. This is especially true in the area of Indigenous studies, where research partnerships with Indigenous communities have become almost the norm. Although I certainly appreciate and respect the idea of community-based research, I’ve also tended not to use it, mainly because I’m uncertain about the tradeoffs involved. Luckily, I am a visiting professor with the department of political science at the University of Guelph this term, and just down the hall from my office is Dr. Leah Levac, assistant professor of political science at UofG. Her research, which has been supported by the Trudeau Foundation, the CIHR, and more recently, SSHRC, looks at how government and civil society actors engage “marginalized publics in public policy development and community decision-making”. In particular, she uses community-engaged research methodologies and approaches to study the participation of women and youth in Canada. The following is a conversation I had with her regarding her work and, in particular, how she uses community-based research to work with marginalized populations and individuals in the pursuit of common research goals.


Alcantara: What is community-based research?

Levac: Community-based research is one of several methodological orientations to research that have, at their core, a commitment to social justice and equity, and to working directly with people in communities to address research questions that are important and relevant to their lives. Participatory action research, feminist participatory action research, community-based participatory research, and action research are other names used by community and academic researchers who uphold similar commitments to working with communities to bring about social change. Community-based research is committed to the principles of community relevance, equitable participation, and action and change (Ochocka & Janzen, 2014). Emerging from different contexts and histories, forms of community-based research have developed and been practiced in both the Global North and the Global South. In all cases, community-based research pursues the co-production and dissemination of knowledge, through both its process and its outcomes.

Alcantara: How do you use these methodologies in your work?

Levac: Over the last several years, I have been working with various community partners and academic colleagues to develop and use a feminist intersectional approach to community engaged scholarship (Levac, Stienstra, McCuaig, & Beals, forthcoming; Levac & Denis, 2014). The idea is that we use the principles of community-based research combined with a commitment to feminist intersectionality; a self-reflexive theoretical and methodological orientation to research that recognizes gender as a dimension of inequality, and understands that power exists and operates through the interactions between individual or group identities (e.g., gender, ability, age), systems (e.g., sexism, heterosexism, colonialism), institutions (e.g., governments, schools, family), and social structures (e.g. social class, economic structures, societies). We draw on the work of Collins, Dhamoon, Hankivsky, and others to inform our work. Practically, we apply this methodological orientation by engaging with (primarily) women in communities, along with academic colleagues across disciplines, to develop partnerships that lead to asking and answering research questions that are pressing for our community partners. Based on this commitment to developing shared research goals, we use one or several methods (e.g., community workshops, interviews, focus groups, surveys, photovoice) depending on the question(s) being asked. For example, I collected data through community workshops and focus groups, and then analyzed the data with members of the community, as part of the process for creating a Community Vitality Index in Happy Valley-Goose Bay, Labrador. In another case, I used key informant interviews and community focus groups to identify the key challenges facing women in Labrador West. The result, Keeping All Women in Mind, is part of a national community engaged research project focused on the impacts of economic restructuring on women in northern Canada.

Alcantara: Why have you decided to make this methodology central to your work? What advantages does it bring to your research and to your partners?

Levac: My commitment to community engaged scholarship emerged in part from my personal and professional experiences. I returned to school to pursue graduate studies after working with community organizations and community members – young people in particular – where I witnessed disconnects between researchers’ goals and community’s experiences, and where I learned more about the lack of equitable public participation in policy development. As I continue along this path, I am motivated by the ways in which this methodological orientation invites the voices of historically marginalized community members into important public conversations. I also appreciate that the approach brings ecological validity. Through our work, we see important instances of leadership emerging, especially in places and ways that the conventional leadership literature largely fails to recognize. Finally, the theoretical grounding of our work points explicitly to social justice and equity goals, which I feel obligated to pursue from my position.

Alcantara: One of the concerns I have long had about this methodology is the potential loss of autonomy for the researcher. Is that a real danger in your experience?

Levac: I think about this in two different ways. On one hand, I do not think it is a danger that is unique to community engaged scholarship. As I understand it, the core concern with autonomy in community engaged scholarship is about how the relationships themselves might influence the findings. However, the lack of relationships can also influence findings (e.g., if there is a lack of appropriate contextual understanding), as can funding arrangements, and so on. What is important then, is to foreground the relationships, along with other important principles such as self-reflexivity and positionality, so that the rigor of the scholarship can be evaluated. Another way to think about this is to consider that within a community engaged scholarship program, there can be multiple research questions under pursuit; some of which are explicitly posed by, and of interest to, the community, and others that are posed by the academic researcher(s). As long as all of these questions are clearly articulated and acceptable to all partners, then independent and collective research pursuits can co-exist. Having said this, I do find that I have had to become less fixated on my own research agenda per se, and more open to projects that are presented to me.

Alcantara: How do you approach divided communities? Here I’m thinking about situations such as working with Indigenous women on issues relating to gender and violence, identity, or matrimonial property rights. How do you navigate these types of situations, where some community members might welcome you while others might oppose you?

Levac: These are obviously difficult situations, and I certainly do not claim to have all of the answers, particularly in Indigenous communities, where I have not spent extensive time. Having said that, there are a couple of important things to keep in mind. First, the ethical protocols and principles of community engaged scholarship demand attention to the question of how communities are constituted. So, for example, an interest-based community and a geographic community are not necessarily coincidental. As a result, a community engaged scholarship project would be interested in how the community defines itself, and therefore might end up working only with people who identify themselves as victims of gendered violence, for example. Second, because relationships are central to all stages of community-based research projects, these methodologies can actually lend themselves to these difficult kinds of contexts. By this, I mean that similar to reconciliation processes, there is an opportunity for community engaged scholarship to play a role in opening dialogues for understanding across social, political, and cultural barriers. This is one of the reasons that community engaged scholarship is widely recognized as being so time intensive.

Alcantara: What kinds of literature and advice would you offer to scholars who want to use this type of methodology in their work for the first time?

Levac: My first and biggest piece of advice is to get involved in the community. All of my research – including and since I completed my PhD – has come about through existing relationships with community organizations and/or other researchers involved in community engaged projects. There are a number of books and authors that can provide a useful grounding, including Reason & Bradbury’s (Eds.) Handbook of Action Research, Minkler & Wallerstein’s Community-Based Participatory Research for Health, and Israel et al.’s Methods for Community-Based Participatory Research for Health. There are also several great peer-reviewed journals – including Action Research and Gateways: International Journal of Community Research and Engagement. Finally, there are many organizations and communities of practice that pursue and support various facets of community engaged scholarship. Guelph hosts the Institute for Community Engaged Scholarship. Other great organizations and centres include Community Based Research Canada, Community Campus Partnerships for Health, and the Highlander Research and Education Centre. Finally, beyond connecting with communities and community organizations, and reading more about the methods and theories of community engaged scholarship, it is really helpful to reach out to scholars using these approaches, who have, in my experience, been more than willing to offer support and suggestions. Feel free to contact me directly at LLevac@uoguelph.ca.


Overregulation of Research by Ethics Bureaucracy?

There is an interesting article in the journal Mental Health and Substance Abuse that describes the frustrating experiences a team of researchers had gaining ethics approval for a research project investigating treatment options and services for aboriginal and refugee populations in northern Australia.

The researchers say that it took 10 months to gain full ethics approval on a project that was funded for three years! Obviously, given past experiences, there is a need to monitor social scientific and scientific research to ensure it is conducted in an ethical manner. But I share the authors’ concern that the current system has become so unwieldy that it raises a legitimate question: have ethics approval processes become such an obstacle to doing important social science research that the approval process itself is unethical?

I have to admit, the research ethics board here at Wilfrid Laurier University is doing an admirable job trying to streamline the process. One thing I find frustrating is the limited ways in which the Tri-Council Policy Statement 2 on Ethics supports multi-institutional research projects. When a team of researchers is working on a common project, more often than not, each team member must get ethics approval from his or her own institution, unless the institutions engage in a lengthy and formal process to recognize each other’s decisions. Also, even though the TCPS2 does try to apply different levels of stringency to the ethics approval process depending on the vulnerability of the population and the possible hazards it faces, I still find there are frustrating hurdles to get over when conducting relatively benign research projects. I conducted several interviews with decision-makers at Health Canada and NGO activists about the regulation of BPA, for which ethics approval was necessary. That seemed like overkill to me.

Everyone I interviewed was extremely capable of defending themselves and understanding the nature of our interaction. Most people never bothered to return to me the elaborate consent form that I had passed through the ethics process. Why bother? These are busy people.

Some of the interesting proposals for streamlining the process identified in the article included a better way of regulating multi-institutional research ethics approval and accrediting some researchers with a form of ethics recognition that would speed their process. Researchers could earn accreditation through participation in training seminars, and this would allow them quicker approvals for certain low-risk projects.

Peer Review and Social Psychology: Or Why Introductions are so Important!

Inspired by my colleagues Loren King and Anna Esselment, both of whom regularly make time in their busy schedules to read (I know! A crazy concept!), I’ve started to read a new book that Chris Cochrane recommended: Jonathan Haidt’s The Righteous Mind: Why Good People Are Divided By Politics and Religion.

I’m only in the first third of the book, but one of the main arguments so far is that when humans make moral (and presumably other) judgements, we tend to use our intuitions first and our reasoning second. That is to say, we frequently have gut feelings about all sorts of things, and rather than reasoning out whether our feelings are correct, we instead search for logic, examples, or arguments to support those gut feelings. Haidt effectively illustrates this argument by drawing upon a broad set of published research and experiments he has done over the years.

At the end of chapter 2, he writes:


“I have tried to use intuitionism while writing this book. My goal is to change the way a diverse group of readers … think about morality, politics, religion, and each other …. I couldn’t just lay out the theory in chapter 1 and then ask readers to reserve judgement until I had presented all of the supporting evidence. Rather, I decided to weave together the history of moral psychology and my own personal story to create a sense of movement from rationalism to intuitionism. I threw in historical anecdotes, quotations from the ancients, and praise of a few visionaries. I set up metaphors (such as the rider and the elephant) that will recur throughout the book. I did these things in order to “tune up” your intuitions about moral psychology. If I have failed and you have a visceral dislike of intuitionism or of me, then no amount of evidence I could present will convince you that intuitionism is correct. But if you now feel an intuitive sense that intuitionism might be true, then let’s keep going.”

I found these first few chapters, and this paragraph in particular, to be extremely powerful and relevant to academic publishing (and other things!). If humans tend to behave in this manner (i.e., we frequently rely on gut feelings to make moral judgements and then try to find reasons to support those feelings), then the introduction of a journal article is CRUCIAL, both for peer review and afterwards. On the issue of peer review, I can’t tell you how many times I’ve received a referee report that was extremely negative, yet failed to: a) clearly show that the reviewer understood my argument; and b) demonstrate logically why my argument is wrong. I always blamed myself for not being clear enough, which is probably half true! But the real story is that sometimes my introductions were probably ineffective at connecting with reviewers’ intuitions, and so these reviewers found reasons to reject the paper.

The lesson here, I think, is that introductions matter!  You can’t ask or expect readers to withhold judgement while you present the theory and evidence first. Instead, you have to find a way to tap immediately into their intuitions to make them open to considering the merits of your argument.

Professors, Elections Canada, and the Harper Government: Do Group Op-Eds Matter?

Early last week, the National Post ran a letter drafted by a small group of Canadian university professors (you can tell who the original letter writers were by the order of the names: just look at where the alphabetized list starts, and any names above it belong to the main writers), and signed (and edited) by a larger number of professors (including some of the most distinguished, senior, and smartest academic minds in Canada).

Several days before the letter ran in the Post, a draft hit my desk asking for my signature. Ultimately, I didn’t sign, for a number of reasons. The main one was that I didn’t agree with all of the letter’s contents. There were parts of the “Fair Elections” bill that I agreed with, parts that I didn’t, and parts that I simply wasn’t sure about. The timeline for signing was tight and didn’t really give me much time to think these issues through.


Another reason I didn’t sign is that, quite frankly, I’m not sure how effective these types of op-eds are. If you have a chance, you should check out the comments left on the National Post page about the letter. To be succinct, they are nasty! There is plenty of anti-elitist rhetoric about professors being overpaid and narrow-minded (with some anonymous commenters regaling readers with their bad experiences at universities). Others claimed that professors are all Liberal-NDP supporters, or are at least ideologically aligned with those parties. We are also, apparently, experts with no real knowledge who hide behind our PhDs and “peer review” to stifle dissent, etc.

To be fair, there were some defenders of professors (thank you!), remarking on the important expertise and knowledge that the profession has to offer. But the dominant discourse was negative.

In my view, group letters seem to bring out a certain amount of dissent towards the profession. Have a look at op-eds written by only one or two professors at a time: occasionally you see an anti-professor rant or two, but rarely very many. When we write these group letters, however, there seem to be many more such comments. So, do group-think letters actually help or hinder our ability to communicate with and affect public opinion and public policy?

In terms of public opinion, my sense is that these letters are not effective. Recent research suggests that most people have gut feelings about various issues, whether it be politics, religion, or food, and then search for justifications for those gut reactions. (I’ve frequently wondered whether the same phenomenon is at work in academic peer review!) Op-eds seem built for exactly this type of behaviour: people don’t change their minds because of op-eds; they use them to feed their gut reactions.

How about public policy? Do these op-eds affect public policy? Well, I’m pretty sure Harper won’t be convinced by these types of letters but I do know that civil servants, including deputy ministers, scan op-ed pages for ideas. So maybe these letters will ultimately push civil servants to act in a way to thwart these reforms?

For me personally, I have yet to sign one of these group-think letters. Instead I’ve treated the op-eds that I write (and sign) as knowledge dissemination tools, trying to link current events either to my research, or the research of others.

But maybe I should have signed! I guess we’ll see when the next letter gets circulated.

More on Knowledge Mobilization: What About Individual Citizens?

There’s been some recent discussion on this blog about the importance of knowledge mobilization. Dr. Erin Tolley, for instance, provided some excellent advice several days ago based on her own experiences in government and academia. But recently I’ve been wondering: what can we academics do to better share our research findings with regular citizens?

My usual strategy has been to write op-eds in regional or national newspapers. I have no idea whether this is an effective strategy. I have a hunch that op-eds rarely persuade but instead simply reinforce people’s existing opinions on an issue (one day I’d like to run an experimental study to test this proposition; I just need to convince my colleague, Jason Roy, to do it!). Sometimes, I receive emails from interested citizens or former politicians. In one op-ed, published in the Toronto Star, I briefly mentioned the Kelowna Accord, dismissing it as a failure. The day after it was published, former Prime Minister Paul Martin called me at my office to tell me why my analysis of the Accord was wrong. That was quite the experience!


But on the issue of communicating research results to interested citizens, I wonder if there is more I can do. At least once every six months, I receive an email from a random First Nation citizen asking for advice. Usually, the questions focus on the rights of individual band members against the actions of the band council. One email I received, for instance, asked about the legal avenues available to members for holding band council members accountable; the writer had somehow come across my paper in Canadian Public Administration on accountability and transparency regimes. Just last week, I received an email from a band member who was fielding questions from fellow members about the rights of CP holders (i.e., holders of certificates of possession) against a band council that wanted to expropriate their lands for economic development. Apparently, this individual tried to look up my work online, but all of the articles I’ve written on this topic are gated (with the exception of one).

So, what to do?

Well, one easy and obvious solution is to purchase open access rights for these articles, which is something SSHRC is moving towards anyway. That way, anyone can download and read the articles.

But what else can we do? Taking a page from Dr. Tolley’s post, maybe I need to start writing one page summaries of my findings in plain language and post them on my website?

Another thing I want to try is to put together some short animated videos that explain my findings. This is what I hope to do with my SSHRC project on First Nation-municipal relations, if Jen and I can ever get this project finished!

Any other ideas? Suggestions welcome!

Knowledge Mobilization and the Academy: A Guest Post from Dr. Erin Tolley

Below is an excellent guest post on knowledge mobilization from Dr. Erin Tolley, Assistant Professor in the Department of Political Science at the University of Toronto, Mississauga. It provides some very practical and timely advice for those of us filling in the knowledge mobilization section of our SSHRC grants!


A few weeks ago, Chris Alcantara wrote a great post about knowledge mobilization and the communication of research results to non-academic audiences. In his post and the comments that followed, Chris raised a number of questions about how best to facilitate knowledge transfer and asked, in particular, if we need a “rethink” of traditional modes of communicating research results.


I would come down firmly on the side of yes, particularly if your aim is to engage government decision-makers and policy analysts. This is true now more than ever, with austerity measures pinching bureaucrats’ time and budgets for training and travel having been all but gutted. Whereas many government departments once maintained their own in-house libraries, these have largely been shuttered, leaving policy analysts without reference support or access to the scholarly books and gated journals to which most academics direct their publishing efforts. With luck, an enterprising policy analyst might be able to direct some government resources toward the purchase of policy-relevant research publications, but this is rare. Even rarer is the opportunity to attend a conference outside the bounds of the National Capital Region to hear academics present their research. That said, education levels in the mid-ranks of the public service are rising: most new policy analysts now have at least a Master’s degree, and many have PhDs. There is thus an appetite for research, along with the skills and qualifications to understand and apply its insights.

This is not, then, a matter of “dumbing down” but rather of communicating the findings in ways that speak to the target audience. Long discussions about the theoretical framework or the extant literature will not interest most policy analysts. Too much methodological detail will bore or distract. No tables of regression coefficients! No p-values! Tell your audience which parts of your research are significant and really matter. If they would like more details, they will ask.

What will interest this audience most is the policy relevance and implications of your research. What do your findings tell policy-makers about their policy area? Where are the policy gaps? What are the most fruitful areas for action? (Hint: this should be something other than “More research is needed”).

To do this effectively, you need to know your audience. Read the legislation or policies that are most relevant to your field of study. Consult recent Standing Committee reports or other parliamentary publications. Take a look at the media releases from the government departments most centrally connected to your work. Read the department’s most recent Report on Plans and Priorities. Search the Government Employee Directory (GEDS) for the names of policy analysts who work in your field. Contact them. Talk to them. Ask them questions about what they do. Use this to inform your research.

When you communicate your results, remember the constraints that policy analysts face: limited dedicated reading time, tight deadlines, and a need for concise communication. Can you put your results into a “2-pager” that gives a brief synopsis of your work, your main findings and their policy relevance? CERIS has an excellent template. Include your email address. Send it to policy analysts working in your field. Post it on your personal webpage or in any other “Googleable” format. Write an op-ed about your work. Maintain a social media presence.

Ask one of your policy contacts if they would be interested in having you present your work to their colleagues. Most departments have a “Brown Bag” lunch series, and they’re generally quite happy to host researchers with relevant new findings. Make the most of these opportunities when they arise. Don’t present a conference paper; policy analysts prefer PowerPoint or a handout. Provide it in advance. Make contacts once you’re there. Follow up with them. And don’t ask—even jokingly—if they will give you money for your research. You can save that for the second date.

Erin Tolley is an assistant professor of Political Science at the University of Toronto. Prior to pursuing doctoral studies, she worked for nearly a decade in the federal government.

Should We Change the Grant Adjudication Process? Part 2!

Previously, I blogged about the need to reconsider how we adjudicate research grant competitions.

Others agree:

Researchers propose alternative way to allocate science funding

HEIDELBERG, 8 January 2014 – Researchers in the United States have suggested an alternative way to allocate science funding. The method, which is described in EMBO reports, depends on a collective distribution of funding by the scientific community, requires only a fraction of the costs associated with the traditional peer review of grant proposals and, according to the authors, may yield comparable or even better results.

“Peer review of scientific proposals and grants has served science very well for decades. However, there is a strong sense in the scientific community that things could be improved,” said Johan Bollen, professor and lead author of the study from the School of Informatics and Computing at Indiana University. “Our most productive researchers invest an increasing amount of time, energy, and effort into writing and reviewing research proposals, most of which do not get funded. That time could be spent performing the proposed research in the first place.” He added: “Our proposal does not just save time and money but also encourages innovation.”

The new approach is possible due to recent advances in mathematics and computer technologies. The system involves giving all scientists an annual, unconditional fixed amount of funding to conduct their research. All funded scientists are, however, obliged to donate a fixed percentage of all of the funding that they previously received to other researchers. As a result, the funding circulates through the community, converging on researchers that are expected to make the best use of it. “Our alternative funding system is inspired by the mathematical models used to search the internet for relevant information,” said Bollen. “The decentralized funding model uses the wisdom of the entire scientific community to determine a fair distribution of funding.”

The authors believe that this system can lead to sophisticated behavior at a global level. It would certainly liberate researchers from the time-consuming process of submitting and reviewing project proposals, but could also reduce the uncertainty associated with funding cycles, give researchers much greater flexibility, and allow the community to fund risky but high-reward projects that existing funding systems may overlook.

“You could think of it as a Google-inspired crowd-funding system that encourages all researchers to make autonomous, individual funding decisions towards people, not projects or proposals,” said Bollen. “All you need is a centralized web site where researchers could log-in, enter the names of the scientists they chose to donate to, and specify how much they each should receive.”

The authors emphasize that the system would require oversight to prevent misuse, such as conflicts of interest and collusion. Funding agencies may need to confidentially monitor the flow of funding and may even play a role in directing it. For example, they could provide incentives to donate to specific large-scale research challenges that are deemed priorities but which the scientific community might otherwise overlook.

“The savings of financial and human resources could be used to identify new targets of funding, to support the translation of scientific results into products and jobs, and to help communicate advances in science and technology,” added Bollen. “This funding system may even have the side-effect of changing publication practices for the better: researchers will want to clearly communicate their vision and research goals to as wide an audience as possible.”

Awards from the National Science Foundation, the Andrew W. Mellon Foundation and the National Institutes of Health supported the work.

From funding agencies to scientific agency: Collective allocation of science funding as an alternative to peer review

Johan Bollen, David Crandall, Damion Junk, Ying Ding, and Katy Börner

Read the paper:

doi: 10.1002/embr.201338068
Some might argue that the system would be hijacked by logrolling and network effects. But if the system had strong accountability and transparency measures, requiring researchers to disclose their expenditures, their outcomes, and which researchers they financially supported, I think some of the possible negative effects would disappear.

It’s a neat idea, and someone should try it. SSHRC could create a special research fund that worked this way: everyone who applied would receive money (divided equally among the applicants) and would have to donate a fixed portion of it to others. Or maybe a university could try it with some internal funding.
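The mechanics of that thought experiment are simple enough to sketch in code. Below is a toy simulation of the circulation model; the function name, parameter values, and the random choice of donation recipients are all my own assumptions (in the actual proposal, scientists would direct donations to the peers they judge most deserving):

```python
import random

def simulate_funding(n_scientists=100, base_grant=10_000, donation_rate=0.5,
                     n_rounds=20, seed=42):
    """Toy version of the collective funding model: everyone receives the
    same unconditional base grant each year, then must pass a fixed
    fraction of their current funds on to peers of their choosing."""
    rng = random.Random(seed)
    funds = [0.0] * n_scientists
    for _ in range(n_rounds):
        # Unconditional base grant for every scientist.
        funds = [f + base_grant for f in funds]
        # Mandatory donations: a fixed share of each scientist's funds.
        donations = [f * donation_rate for f in funds]
        funds = [f - d for f, d in zip(funds, donations)]
        # In the real proposal, recipients would reflect scientists'
        # judgements of merit; here they are chosen at random purely to
        # show money circulating through the community.
        for d in donations:
            funds[rng.randrange(n_scientists)] += d
    return funds
```

Note that no money is created or destroyed by the donation step: the total pot is just the sum of the base grants, but it converges unevenly onto whichever researchers the community keeps choosing, which is the whole point of the design.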


Hat tip to marginal revolution for this story.

Academia, Knowledge Mobilization, and Canadian Public Policy

Last week, I attended the State of the Federation Conference at Queen’s University. This year’s conference theme was on Aboriginal multilevel governance. The program was interesting, bringing together academics, activists, policymakers, and practitioners to talk about the role of Aboriginal peoples and the Canadian federation.

One of the keynote speakers was Michael Wernick, the current deputy minister of Aboriginal Affairs and Northern Development Canada. His talk was interesting in a number of respects, but one of the most interesting was a comment he made about the role of ideas in public policy making.

Among other things, he said that government bureaucracies were not in the business of knowledge creation and innovation. Instead, it was the responsibility of academics to produce new ideas for government to consider and implement.

Of course the big question is: how does government access the ideas produced by academics?


His answer was that one of the main sources is newspapers and the op-eds written by academics (and even social media)!

It was a surprising admission on a lot of fronts and the implications are significant.

First, academics clearly need to find creative ways to disseminate their research findings beyond the usual academic journals and books. Second, it reaffirms my current strategy of trying to summarize my academic findings through op-eds. Third, in one sense, it shows that policymakers don’t have the time or interest to read and consume highly detailed academic arguments. They value new ideas and research, but they want the bottom line; they want the policy implications and are less interested, perhaps, in theory, empirics, and the like.

Finally, there is a serious need to rethink knowledge mobilization. The traditional routes were conferences and invited talks, or good old-fashioned reading! But perhaps there are better ways to facilitate knowledge transfer between policymakers and academics. Thoughts? Provide them in the comments.

The End of the Traditional Academic Career?

So says Alex Hope in the Guardian. Among other things, he writes:

“You don’t necessarily need to be an academic employed at an academic institution to contribute to the academy. Academics need to reflect these changes in their practice. We need to become more agile both in terms of our employment and our modes of communication. Academic tenure is the past – flexibility is the future.”

Steve Saideman disagrees:

“Universities depend on the full time profs to run programs, provide a variety of services, engage in research, and, oh, yeah, bring in research money.”


“There is also something else–that universities are communities of scholars, not just buildings and administrators.  The pursuit of knowledge (yeah that is mighty high falutin) is a social endeavor, and universities, by bringing together students, professors, post-docs and other folks, facilitate the processes by which we can learn and argue and develop.  Yes, some scholars can work in isolation, but the social environment of universities is incredibly important for most work.”

On the one hand, I agree with Alex Hope. Flexibility is an important asset in a time when rapid change is needed. And if there is one thing I’ve learned in my five years on the job, it’s that universities are slow and difficult to change!


So in one sense, adopting a model that relies more heavily on teaching flexibility, for instance, is a good thing. In many ways, the teaching competencies of university departments are fixed by who you hired in the past. Forever. A case in point is my department, where “legal studies” has become one of our most popular programs. The problem is that none of our full-time faculty members does research in this area, and few of us are really qualified to teach an “intro to law” course, for instance. The easy answer is to hire a new tenure-stream professor in Canadian judicial politics, but most universities are ratcheting back new hires and sometimes refusing even to replace retirees.

So I’m sympathetic towards calls for flexibility; I can understand why five- or ten-year renewable contracts would be attractive and why administrators are encouraging greater use of MOOCs and online courses (or at least blended learning courses).

I’m also not convinced by Saideman’s point that universities are important for knowledge production because they are sites of collaboration and interaction. We work in an age of advanced communications that can facilitate meaningful and productive relationships irrespective of geography. Gary Wilson at UNBC and I, for instance, have published two journal articles (one in CJPS and one in Regional and Federal Studies), won a SSHRC Insight Grant, and are presenting a paper at the annual State of the Federation conference, yet we’ve never met in person. Ever. (Does Gary even exist?!?)

If anything, I think the strongest reason for the status quo is the “research funding” rationale. But of course, from a university administrator’s perspective, the five- to ten-year contract model still makes more sense than the current tenure model. If I were a senior administrator, I would give automatic renewal to those who consistently won research grants. Those who didn’t: see yah!

Don’t get me wrong. I like the current system. Tenure affords me the freedom to pursue whatever research I want without having to worry too much about the “politics” of my work. It’s also nice to have an institutional home with the support that such a home provides (e.g. an office, library, support staff, teaching support services, and the like). But I think there’s merit in finding ways to increase academic flexibility at both the individual and institutional levels.

Should We Change the Grant Adjudication Process?

John Sides, commenting at the Monkey Cage on recent criticisms that the U.S. government (through the NSF) funds far too many “silly projects,” writes:

“it’s very hard to determine the value of any research ahead of time.  It’s hard because any one research project is narrow.  It’s hard because you can’t anticipate how one project might inform later ones.  It’s hard because some funding goes to create public goods—like large datasets—that many others will use, and those myriad projects also cannot be anticipated.  It’s hard because some research won’t work, and we can’t know that ahead of time.”

I think John is completely right. Committees face extreme uncertainty about the future value of the various proposed research projects, and so the grant adjudication process is a bit of a crapshoot, all else being equal. (It also explains why one of my SSHRC grants got funded on the third try, even though I had made only extremely minor revisions!)

Because it is difficult to figure out which proposed research will actually have value in the future, most committees and/or competition criteria put great emphasis on research record. And so, if you have a really strong record, you are likely to get funded.

But I’m not sure that’s the best model for adjudicating grant applications. If we take seriously the notion that predicting the future value of research is very hard, then there are at least two reasonable options for changing how we adjudicate grant applications:

a) fund all applications that come in (which is problematic since resources are limited; one solution would be to simply divide the money among all of the applications); or, better yet,

b) create a small pool of applications that meet certain scholarly criteria and then randomly choose which applications to fund from that pool. This process would be different from the current one, where applications are ranked and funded from #1 downwards.

My own view is that perhaps we should move to model (b). At least that way, some of the idiosyncrasies of personal preferences and networks are mitigated.
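To make model (b) concrete, here is a minimal sketch of a threshold-plus-lottery adjudication. The applicant names, reviewer scores, and threshold value are all hypothetical, invented purely for illustration:

```python
import random

def lottery_adjudication(applications, threshold, budget_slots, seed=None):
    """Model (b): screen applications against a scholarly quality bar,
    then fund a random draw from the qualifying pool instead of funding
    strictly from the top-ranked application downwards."""
    rng = random.Random(seed)
    # Step 1: keep only applications that clear the scholarly threshold.
    pool = [name for name, score in applications.items() if score >= threshold]
    # Step 2: fund a random subset of the pool, up to the budget.
    return rng.sample(pool, min(budget_slots, len(pool)))

# Hypothetical reviewer scores on a 5-point scale: the threshold screens
# out weak files, and chance, not rank order, decides among the rest.
apps = {"A": 4.5, "B": 3.9, "C": 4.8, "D": 2.7, "E": 4.1}
funded = lottery_adjudication(apps, threshold=4.0, budget_slots=2, seed=1)
```

The design point is that the top-scoring file ("C" here) has no better odds than any other file that clears the bar, which is exactly what blunts the effect of reviewers' personal preferences and networks on fine-grained rankings.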

Or maybe not!