On recognizing the intrinsic multi-dimensional nature of Research & Teaching

Academic policy makers, administrators and many academics simply don't seem to get how to deal with the multivariate nature of higher education. Despite having access to the tools and statistics needed to take account of all this, they generally ignore it. In my opinion, the recent attempts by HEQCO, the Higher Education Quality Council of Ontario, provide a good illustration of what I mean; their reports aim to boil down the complex nature of higher education in Ontario into relatively few metrics. That's OK; ecology does this all the time. BUT the subsequent path that these reports appear to be mapping out doesn't allow them to dip back into the complexity of the higher education ecosystem.

Here's why higher education is so complex:

1. Professors get hired to do a job for which we are only partly trained. My time is supposed to be allocated to around 30% research (I'm trained to do this), 30% teaching and 20% administration. I didn't receive formal training for the last two. I don't have an MBA or MPA or BEd or MEd. For the last 30 years, I've been both getting on-the-job training and taking courses here and there. I have many colleagues who have never, to my knowledge, undertaken any of the in-career training available for teaching and research. I also suspect that their lack of interdisciplinary interactions within academia may be limiting their access to good on-the-job training experiences.

What all this boils down to is that we are doing a multi-dimensional job, and performance assessment will therefore necessarily be somewhat complicated and complex. What to do? Making a long list of metrics, collecting data on them, and using this for a multivariate assessment is a good first step. Again, we do this in ecology all the time. However, the efforts that I'm currently seeing that try to do this, e.g. the Dickeson-based assessments of performance going on across Canada, are far too simplistic to take effective account of the multi-faceted nature of my job.
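
To make the contrast concrete, here is a minimal sketch in Python, using entirely made-up names and numbers, of the difference between ranking people on a single metric and looking at a standardized multivariate profile. It is an illustration of the idea, not a proposal for an actual assessment instrument.

```python
# A toy illustration, NOT a real assessment: hypothetical faculty members
# and invented metric values, standardized so that no one scale dominates.
import numpy as np

metrics = ["publications", "grants_held", "courses_taught", "grad_students"]
faculty = {
    "Prof A": [25, 2, 4, 6],
    "Prof B": [8, 1, 12, 3],
    "Prof C": [15, 4, 6, 8],
}

X = np.array(list(faculty.values()), dtype=float)
# z-score each metric so counts of papers and counts of courses are comparable
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Single-metric view: rank by publication count alone
print("Ranked by publications only:",
      sorted(faculty, key=lambda f: faculty[f][0], reverse=True))

# Multivariate view: the full standardized profile, reported side by side
for name, row in zip(faculty, Z):
    print(name, ", ".join(f"{m}={v:+.2f}" for m, v in zip(metrics, row)))
```

Ranked by publications alone, the hypothetical "Prof B" looks weakest; the full profile shows the same person carrying the heaviest teaching load. That, in miniature, is the problem with single-number audits.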

2. Bazely's first-cut list of metrics on which to base an accurate assessment of academic performance
Publish or Perish

1. Presence on Google Scholar; 2. Scopus; 3. Web of Science; 4. Number of peer-reviewed publications; 5. H-index; 6. Number of grants from one agency over time (e.g. NSERC); 7. Number of grants from different agencies; 8. Altmetrics; 9. Kardashian-index; 10. Blogs; 11. Media interviews; 12. Volunteered conference presentations; 13. Invited conference presentations; 14. Reviewing; 15. Amount of grey-literature writing (shades of peer-reviewed) that is different from blogs.

Teaching
1. RateMyProfessors.com; 2. Teaching Award nominations; 3. Teaching Awards; 4. Number of times same course was taught; 5. Number of different courses taught; 6. Number of graduate students supervised; 7. Number of supervisory committees; 8. Fate of students taught and supervised; 9. Number of examining committees for PhDs and Masters; 10. Number of guest lectures in other professors' courses; 11. Mentoring.

I could go on (and on). These are simply some of the many section and sub-section headings from my >50-page c.v. I also included in my list both standard publish-or-perish assessments (the H-index) and emerging ones (like the serious Altmetrics and the jokey, but taken seriously by some, Kardashian-index).
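
Since two of these indices have precise definitions, here is a short sketch of how each is computed. The citation and follower counts are invented; the h-index rule is the standard one, and the K-index constants follow the original tongue-in-cheek proposal (Hall, 2014), if I recall the fitted curve correctly.

```python
# Quick sketches of two of the indices named above, with invented numbers.

def h_index(citations):
    """h = largest h such that h papers each have at least h citations."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def kardashian_index(followers, total_citations):
    """K-index: actual social-media followers divided by the followers
    'expected' from citations, F_c = 43.3 * C**0.32 (Hall, 2014)."""
    return followers / (43.3 * total_citations ** 0.32)

paper_citations = [48, 33, 21, 14, 9, 7, 4, 2, 1, 0]   # hypothetical record
print("h-index:", h_index(paper_citations))             # -> 6
print("K-index:", round(kardashian_index(2500, sum(paper_citations)), 1))
```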

My point is that we have actually defined multiple assessment metrics: they are found on the standard academic c.v. However, they are rarely analysed in a way that takes account of them simultaneously, to produce multivariate assessments. In plant ecology, multivariate statistics can identify a single factor that weights the presence of a particular species very heavily. We all know that this may be a gross oversimplification of what's going on, so we put it in context.
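
The ecology analogy can be made concrete. Below is a minimal ordination-style sketch on synthetic data, using principal components as the simplest stand-in for the multivariate methods plant ecologists actually reach for; inspecting the loadings shows how one high-variance variable can dominate the leading axis, which is exactly the oversimplification that needs to be put back into context.

```python
# Ordination-style look at a synthetic metrics table (illustration only).
import numpy as np

rng = np.random.default_rng(0)
# 30 hypothetical academics x 5 metrics; the first metric is given a much
# larger spread, mimicking a variable that will dominate the first axis.
X = rng.normal(size=(30, 5))
X[:, 0] *= 5.0

Xc = X - X.mean(axis=0)                    # centre, deliberately without rescaling
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]          # largest eigenvalue first

pc1 = eigvecs[:, order[0]]
print("PC1 loadings:", np.round(pc1, 2))   # metric 0 carries most of the weight
print("Variance on PC1:", round(eigvals[order][0] / eigvals.sum(), 2))
```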

This single-factor approach dominates academic audit culture. The number of publications or the size of grants often overshadows the myriad other factors that define the job. Unfortunately, the kinds of conversations that regularly happen in ecology, with respect to how to measure ecosystem complexity and biodiversity, aren't happening in the field of higher education policy generally, and definitely not in my university. In fact, in the UK, the REF exercises, which pick a few factors from a diverse list of metrics, have driven the country's professoriate to the edge of exhaustion. (Update 8 December 2014: And to suicide, viz. Professor Stefan Grimm, Imperial College.)

The data for making multi-dimensional characterizations and assessments are more available than ever. This means that I can assess the performance of administrators and staff according to these same metrics. When I do this, which I have done, I become very concerned that there is usually little, if any, public evidence of their qualifications, experience and performance:


When I looked up a few of the senior academic administrators who are leading these assessments, with respect to their performance on some of the factors I listed above, I found that these folks don't make a big impression, academic or otherwise. The recent debacle at the University of Saskatchewan is one place to start digging into what's going on here for yourself. I, for one, would like to know that the people tasked with assessing my performance do more than score highly on just one of these metrics (if that), while not showing up at all on the other metrics of academic and job performance. Additionally, I would like to see the people making decisions that directly affect any aspect of my job score visibly on these SAME aspects, using the same performance metrics.

Academia is supposed to be about being evidence-based. We can and should do better.