Who Shares Wins: 5 Good Reasons to Take to the T&L Stage

As the so-called “quiet time” of the academic year draws to a close, and the chilly winds of autumn snap at inappropriately sandal-clad feet, thoughts turn to the upcoming semester.

A big gig for our unit in September is the Dublin City University Teaching & Learning (T&L) Day, an annual conference where up to 100 staff members converge to seek inspiration about effective teaching and assessment practice. Like similar events at many institutions, this provides a valuable forum for staff to share their experiences and knowledge about teaching. So if you’ve been thinking about responding to a call but are still somewhat ‘undecided’, here are five reminders about the sometimes forgotten benefits of sharing that might spur you on:

1. The “This worked, it really worked” Effect

There is something incredibly refreshing (dare I say it heartwarming?) about paying forward good ideas, particularly if they solve problems that you know many of your colleagues also struggle with. Who has not lamented a disappointing lack of class discussion or frustrating attitudes to group work, for example? What works for you is often good for your colleagues and we’ve seen several examples of this at gatherings over the years. Sometimes these suggestions involve technology, e.g. highly usable peer review tools or effective uses of audio feedback. But sometimes they don’t require any tech at all: one simple but powerful idea, getting students to stand more closely in groups (rather than in circles), was a proven technique for supporting active class participation that went down a storm last year. Hearing a colleague from your institution talk about what worked for them is one of the most persuasive forms of professional learning there is.

2. The “It seemed like a good idea at the time” Lesson

Ah yes, the innovation that didn’t quite go according to plan. It takes real bravery to admit professionally that the inspired plan to enhance student engagement did not succeed as one might have hoped. True, you might have learned from the class, the assessments, and the subsequent student evaluations that something was amiss. But as well as reflecting on it yourself, have you ever experienced the cathartic effect that sharing the experience with colleagues can have? Instead of the misplaced tendency to think it was entirely your fault (an impression that student evaluations can all-too-easily promote), your colleagues could help to put it into perspective and give you constructive feedback that might encourage you to make adjustments, reconsider your audience, and perhaps try again. So when it feels right for you, please share those stories of experimentation and even failure: we can all learn from them.

3. The “I’m really not doing so bad at all” Insights

Closely related to 2 is the idea that oftentimes we can be our own harshest critics. It is also possible, however, to experience a moment of quiet triumph when you realise you are actually more experienced/creative/technologically-adept than you had given yourself credit for. One way to achieve this is to share your work with colleagues and let them know what you are doing in the classroom and/or lecture hall. Quite often the feedback and questions you will hear after you’ve presented will highlight that not everyone is doing what you’re doing and that your unique insights are of real value to fellow professionals.

4. The “I have to get this on paper” Opportunity

Have there ever been times when you’ve missed and regretted a promising opportunity because you had not yet written your ideas up? The blank page fills many of us with dread, so any chance to describe your teaching approaches and position them within the literature could also prove very useful elsewhere. Getting an abstract or proposal in for an event at your local institution could be the vital first step towards initiating a collaborative research project, a publication opportunity or a response to a funding call. Carpe diem: get started, and you are very unlikely to regret the time spent.

5. The “Who are all these people?!” Moment

Your local T&L event offers an opportunity to meet and get to know your teaching colleagues better. There seem to be relatively few chances to do this in higher education, which is one of the reasons why The Sipping Point was set up at DCU. Sometimes informal learning happens over coffee or lunch conversations on the day. It can also come about through follow-up emails and approaches by colleagues afterwards. Whatever way it occurs, the sense of community and solidarity that emerges from a common understanding of challenges (and indeed solutions) can foster connections that stretch well beyond the day itself. On that note, put your best foot forward and get your thinking from your head to the page at the next possible opportunity.

If this post has whetted your appetite to either share your practice or attend the event itself, make sure you sign up for DCU T&L day today: https://www.eventbrite.ie/e/dcu-teaching-learning-day-2018-tickets-49086468950


Thinking of going for Fellowship of SEDA? Reflections on the experience

Having recently completed the SEDA course Supporting and Leading Educational Change (snappily called SLEC), I thought I would share some reflections that might be of interest to those of you considering it. You might, for example, be actively involved in educational development as a member of a central teaching and learning unit, you might offer postgraduate teaching-related programmes to academic staff, and/or you might lead a team that implements funded projects of a technological and pedagogical nature. If you are toying with the idea of gaining a professional qualification for this type of work, then read on to explore whether this course might be a good fit for you.

Course Description

First, some basic facts. This is a 12-week online course that, as the website puts it, is “designed to accredit and advance your work in supporting and leading educational change in further or higher education”. It is divided into two six-week blocks before and after the Christmas break. Successful completion of the course leads to Fellowship of SEDA (FSEDA). SEDA is the UK-based Staff and Educational Development Association, a professional body that seeks to promote innovation and good practice in higher education. Established in 1993, the overall mission of SEDA is to offer members professional learning opportunities, professional recognition, and practice-oriented publications with the ultimate goal of supporting student learning.

As someone who has worked in academic development for a number of years, but without a qualification in that specific field, I felt it was time to probe and reflect more deeply on the way I have been approaching my role. I wasn’t looking for CPD that focused primarily on the science and craft of teaching; I wanted something tailored to a role where you are supporting, and hopefully enabling, other staff to develop as teachers. To my mind the distinction is important, and the big questions for educational developers are very different: Are there better ways of evaluating the impact of various initiatives we are spending time and money on? How are other institutions designing and offering their CPD for maximum gain? Are we doing the right thing as regards the opportunities in place to support the sharing of teaching practice? Am I doing what I really should be doing in my job? These were the types of questions I wanted to explore and develop more confidence in through learning from an international community of peers.

What is involved

Thus in late October 2017, with the support of my manager Mark Glynn, I started the course which very broadly involved:

  • Weekly readings on a variety of academic development themes, many drawn from Advancing Practice in Academic Development by Baume and Popovic (2016), which is the core (but not exclusive) text
  • Participation in weekly online discussion forums, followed by reflection activities
  • Development of a case study/case studies to demonstrate achievement of specialist outcomes supported by evidence
  • Mapping of current practice and thinking to SEDA values
  • Describing my current role
  • Development of an ongoing CPD action plan
  • Development of a learning portfolio to capture the learning from above

[Screenshot: welcome page]

[Screenshot: case study page]

Big benefits

  • Exposure to the academic development literature: I found the course to be an excellent way to develop a better understanding of the scholarly literature of this field and to source quality research on approaches that have been tried and tested at other institutions. Yes, we can all say we will “read more” but having a structured and timetabled commitment to read each week is, for me, the only surefire way it will actually happen.
  • Superb self-assessment tools: The quality of diagnostic tools and reflective prompts throughout the course was excellent. At the risk of confirming my crushing descent into middle age, I find it can be tricky to remember all the activities I have undertaken and why I have done them in a particular way. Certainly digital evidence helps but I still noticed that I needed to draw from memory, and I suspect that will be the case for most. No matter how reflective a practitioner you may be, I think it is nigh on impossible to write down or capture everything that goes on in this role. So the prompts in the form of questions, sample case studies, and sample portfolios were absolutely pivotal to drawing these (sometimes forgotten) conversations and activities out.
  • Extensive peer review opportunities: As you might expect, there was a strong emphasis on peer review, particularly in relation to the development of the practice-based case study. This offered a super way to compare notes and I’m glad to report that even since finishing, I am hoping to collaborate further with University of Roehampton colleagues that I met through the course.
  • Opportunity to interact with an international audience: several of the UK acronyms and organisations were new to me, and there are a lot of them, but interacting with 20 or so participants from the UK, US, New Zealand and Canada was fruitful. The diversity of backgrounds was striking (for example, staff came from central units, from eLearning backgrounds, from research units, and from regular lecturing roles), which is very much in line with Green and Little’s (2016) study profiling educational developers from around the world.


Things to consider

  • I’ve worked out that the course requires a minimum of 7,000 scholarly words. Add in the extra writing for discussion posts and other activities and, by my calculations, you will probably need to write well in the region of 10,000 words to complete the process. That’s a fair bit of text and, reluctant as I am to link word count to workload (I’m with Scott (2015) on the limitations of word count as a workload metric), it might give you some sense of what is expected.
  • Unless you are very lucky indeed, it does require weekend/evening work. The course takes place in two six-week blocks and I think that’s a fair way to run it as it is highly likely that one of those semesters will be lighter or heavier for you in the day job.
  • Not all of the readings and activities will be directly relevant to your role or context – but they might be in future. It’s just a fact that some of the readings will speak more to you than others, depending on your area of professional responsibility and your own context.

Overall, even though it wasn’t easy, I am delighted that I did this course and I’m looking forward to the day I can smugly flash my FSEDA letters. It did help with my confidence and confirmed that while I am doing a lot of the right things, some areas could be further improved (e.g. evaluation approaches), and that we all face many similar challenges in our diverse contexts. The readings (the Baume & Popovic (2016) book especially) opened my eyes to the breadth and depth of activity in this field. The chapters on identifying needs and opportunities for academic development (Chap. 2), ‘Is it working?’ (Chap. 10), working with networks, microcultures and communities (Chap. 11) and managing and leading change (Chap. 13) were particularly relevant to my role and I’ll be revisiting and citing those, I’m sure, many times in the years to come. It is interesting to note too that SEDA fellowship is not simply a once-off: to maintain fellowship you need to complete a yearly CPD report in order to “remain in good standing”, so there’s definitely an ongoing aspect which I think is important. Looking to the future, and more locally, I am also considering the National Forum’s PACT initiative as a valuable CPD opportunity and look forward to hearing more from colleagues about that process.


Baume, D., & Popovic, C. (Eds.). (2016). Advancing Practice in Academic Development. London: Routledge.

Green, D. A., & Little, D. (2016). Family portrait: a profile of educational developers around the world. International Journal for Academic Development, 21(2), 135-150.

Scott, S. V. (2015). Quantifying the assessment loads of students and staff: the challenge of selecting appropriate metrics. Journal of Further and Higher Education, 39(5), 699-712.

Lend me your ears: the subtle qualities of voice in learning

Seldom a day seems to go by without some mention of the word ‘voice’ in academic discussion. Educators and policymakers frequently refer to the importance of representing ‘the student voice’ in teaching and learning activities. Similarly, the concept of ‘the academic voice’ is often used in conversations around the values, opinions, and perspectives of the university community. However, in this post I would like to take some time to talk about the real-life, living-and-breathing human voice itself in relation to teaching, learning, and assessment. Given the evidence of feedback as a powerful learning tool (Hattie & Timperley, 2007), I would like to reflect on the perhaps underestimated contribution of a person’s actual voice in developing and enhancing knowledge.

Andrew Middleton, well known for his research and staff development work around the development and use of audio-based feedback in higher education, was guest speaker at the recent DCU Teaching and Learning Day. He described audio feedback as “the recording and distribution of spoken feedback on a student’s work” and gave a wide-ranging, stimulating presentation on why, how, and when feedback in audio format might fit into an assessment strategy. We heard how audio feedback can take many forms, ranging from personal to general, and it is ideally suited to constructive criticism on aspects such as evidence, structure and academic argument. You can watch the video of his presentation here: Andrew Middleton at DCU T&L Day

One of the slides that I felt most vividly captured the potential of the audio medium is shown below – it illustrates some reactions from students who received audio feedback from lecturers and it captures many of the key benefits described in the literature.

[Image: slide of student reactions to audio feedback]

Clearly the timeliness, replayability, and mobility of the approach appeals to students. But it is that intangible quality of being prompted to “listen more when someone is talking to me than if I’m reading it” that is particularly intriguing.

The importance of tone

One possible factor is the indisputable quality that audio offers around tone of voice. As one of the academics quoted in the presentation said, “You can get some of the kindness that we intend… into how you talk about it”. This seems important when, as Evans (2016) suggests, we need to consider the emotions of feedback, and allow sufficient time for students to process it before they take the necessary steps to address it. There’s also the possibility that the spoken word, even at a distance, can seem more ‘alive’ than the text-based equivalent. Perhaps hearing (and re-hearing) clear and personalised guidance from a lecturer who seems more physically present can encourage students to be more receptive to and motivated by the points made.

From a lecturer’s perspective, there’s also a welcome place for tone in delivering the tough news when it needs to be said – one of the attendees at the conference spoke about her efforts to keep her frustration out of her voice when recording negative feedback. The advice? Be yourself, don’t even try to keep it out completely, and if necessary, perhaps ask the student to come to your office. Most of us are not robots and it is that evidence of humanity that might very well prompt the student to engage in the type of dialogue around learning that leads to better outcomes for all involved.

Further resources on audio feedback

Obviously, there is much more to be learned about feedback in audio form and how it might (or might not) influence engagement with learning in different contexts. If considering using it in your teaching, you may wish to hear from those who have already implemented it. The audio feedback toolkit on the Media-Enhanced Learning Special Interest Group (MELSIG) site includes practical tips, literature, and recordings of experiences from several UK educators who have been using audio feedback for a number of years. (By the way, the general consensus seems to be to keep it to under 5 minutes, don’t edit, and get used to the sound of your own voice.) In an Irish context, the Y1 Feedback site contains several excellent case studies of technology-enabled feedback approaches that are well worth exploring: for example, using screencasting to close the feedback loop, visual audio screencasts to enrich feedback, and screencasting for enhanced and manageable large group feedback in language learning.

At DCU we are about to embark on a new pilot project around audio feedback involving a group of kindred spirits who wish to try out and evaluate audio feedback approaches in their practice. I will report back on this in due course. In the meantime, my final word for now has to go to DCU School of Computing lecturer Monica Ward who has created a little audio drama of her own, uniquely recounting her experiences of implementing peer feedback with students. Listen up, learn, and enjoy – and do add your voice to the conversation too!

Five-minute Feedback Fairy Tale

Monica Ward tells a Five Minute Feedback Fairy Tale at DCU T&L Day (photo by Mark)


Evans, C. (2016). Enhancing assessment feedback practice in higher education: The EAT framework. Retrieved from https://www.southampton.ac.uk/assets/imported/transforms/content-block/UsefulDownloads_Download/A0999D3AF2AF4C5AA24B5BEA08C61D8E/EAT%20Guide%20April%20FINAL1%20ALL.pdf

Hattie, J., & Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), 81-112. Retrieved from http://www.jstor.org.dcu.idm.oclc.org/stable/4624888

Meet you at The Sipping Point

In this post I would like to share some early reflections on The Sipping Point community which recently started up at DCU. Drawing loosely on the concept of Wenger’s community of practice, the idea of The Sipping Point is to provide an opportunity for staff with a shared interest in teaching to interact with each other in a non-formal setting. The basic premise is to try to foster a climate where staff across all disciplines can potentially learn from colleagues about aspects of teaching practice. The result so far is a gathering of members who meet up for one hour once a month to hear about and discuss various approaches that peers are adopting in their teaching context. 

Why this format?

Based on feedback and research into various academic professional development (APD) courses offered by our unit (you can read more about those in Gormley, O’Keeffe & Ferguson (2017)), it is clear that not all staff can or wish to commit to the timeframes involved in our accredited offerings. Even for those who do complete these courses, there are limited opportunities for a cohort to continue their learning community and interact further once the module(s) have finished. Once-off workshops are also problematic since they don’t tend to engage lecturers deeply over a period of time. And I think it’s fair to say that we have also noticed a general lack of awareness of what other lecturing staff at DCU are doing that might be of value or interest across the academic community.

Hence The Sipping Point – a new avenue for like-minded souls interested in discussing teaching practice – came into being in April of this year, driven by an enthusiastic combination of positive change, collegiality and coffee. The general format is a one-hour session led by at least one lecturer presenting on a predetermined aspect of their practice for 10 minutes. After that, the topic goes open to the floor for general questioning and discussion. We meet at all three DCU campuses (Glasnevin, St Patrick’s and All Hallows) over the last week of the month.

So how has it gone down?

So far, the reaction to these sessions has been generally positive and the discussion has been lively although I should add the significant caveat that it is very early days yet. Numbers could not be described as enormous – the largest session attracted 18 participants at one campus – but we are pleased with the interest to date and, to be frank, do not expect stampede-like numbers. Initial signs are certainly promising and here’s a quick summary of what we’ve done to date:

  • April – We kicked things off with information-gathering sessions where we identified big-ticket items for future discussion, such as feedback challenges, assessment of large class groups, and lack of student engagement.
  • May – This month we focused on sustainable methods of giving feedback – annotated documents, peer review, and audio feedback were all discussed.
  • June – Staying with assessment, we discussed various approaches including group work with peer review feedback, learning portfolios, use of video for discussion assessment, and rubrics.
  • July – In a complete change of tack, and given the summery time of year, we took active participation to the next level, going on a campus walkabout with a learning spaces theme.

[Photo: campus walkabout with a learning spaces theme]

Any lessons learned?

While it’s certainly too early to reach any firm conclusions about this as a learning enterprise, a few things are beginning to stand out from an academic development perspective:

It’s important to meet people where they are at: I mean this in terms of participants’ geographical location (go to them rather than ask them to come to you) but also in relation to their teaching and learning goals. Not everyone may wish to make a dramatic overhaul of their assessment strategy, for example, but they may be interested in swapping small-scale ideas that can be immediately implemented and have worked well for others. Both types of interest should be facilitated and encouraged.

Don’t overdo the number of speakers: I did receive feedback to say that one of the sessions was more formal than originally anticipated. If we have too many opening speakers, or if they speak for more than 10 minutes, this cuts into the discussion time which is so valuable to many of those present.

Think about incentives: Anyone who has been approached to present at a session so far is giving up their time, which is something that I think should be recognised. While I do hope the community will be self-sustaining at some point, I probably need to bribe (sorry, encourage) members to present until we are well established. Tying in with the tea/coffee theme, I’ve approached a potter friend of mine at Brookwood Pottery to design a set of appropriately themed mugs as little gifts for those who take the trouble to present. Reactions to these have been great and I hope that these branded items might also serve as marketing tools in the future. See photo below.


Of course further research will be required to find out what, if any, type of effect is happening as a result of these sessions. Tea-drinking is almost a national sport in Ireland but is conversation about teaching useful to all involved? What else might be happening and are there unintended negatives to consider? Are these events serving any valuable function at all, and if so, how? More to come on this in future, I hope.


Wenger, E. (2011). Communities of practice: A brief introduction.




What Martians (and others) can tell us about learning design workshops

OK, so the Martian headline is a little bit of clickbait but bear with me, fellow earthlings, and its relevance should become clearer in due course. In today’s post, I’d like to talk about some recent experiences I’ve had facilitating workshops intended to help faculty embark on the design of online and blended programmes at DCU. It might be useful to explore what lessons were learned and consider how to potentially apply those lessons to the format of future workshops.

So what’s been done to date? Not surprisingly, time is always very limited so there are typically 2-3 workshops of no more than 2 hours in length. So far, in designing these initial sessions, I’ve deployed a number of approaches to help participants articulate a shared vision for their proposed programme. Drawing on techniques from Conole and Mor, we have included elements of:

  • Persona development: where participants are asked to come up with a credible profile of two or three potential students, highlighting motivators and potential obstacles
  • Course features cards exercise: where participants are asked to select 16 ‘ideal’ course features/elements from a set of pedagogy-oriented prompts
  • MOOC design patterns cards cross-check: participants review further potential course features (e.g. 6-minute videos, fishbowl approach) and select for their wish-list

These have all proved useful for getting creative juices flowing, but somehow I suspect that when we move the conversation to the learning outcomes something goes awry. Despite some discussions around the required programme and module learning outcomes, I’m not 100% convinced that these are being considered in as much depth as Biggs (or me!) would like. Quite possibly I’ve been making assumptions about how much people already know about writing learning outcomes that are not only clear and measurable, but are pedagogically appropriate and constructively aligned. So what to do about it?

Well firstly, I believe there is a need to carefully step through the anatomy of a learning outcome, exploring both programme and module requirements, and discussing why learning outcomes matter. For some, this may be a revisiting/refresh exercise. But for many participants, this may be new territory which requires a ‘back to basics’ approach. Let’s be honest here: I think very few people enjoy dissecting learning outcomes, but if they are going to serve as the foundations for course design in higher ed, then they do need to be discussed in some detail. For the purposes of these workshops I won’t get into the politics but instead will focus more on how to write them in accordance with generally accepted standards.

But even the most elegantly written learning outcomes in the world are simply not enough without an assessment strategy that ensures the programme does what it says on the tin. About a year ago, I attended a very interesting EDIN workshop presented by Ivan Moore who introduced us to the principles of orthogonal assessment, a technique which seems to hold promise as a way of unpacking the detail of potentially fuzzy learning outcomes. According to Moore, orthogonal assessment requires that you stipulate the core criteria by which you will make judgements about whether or not a learning outcome has been achieved. This requires drilling down into the detail of each learning outcome at the design stage. He argues that this approach puts the focus on assessment of the learning outcome, instead of inadvertently focusing on the assessment components as so frequently occurs. As the image below illustrates, this approach can be visualised in the form of a table for each learning outcome with the criteria listed on the left and a description of various achievement standards (most importantly the minimum threshold standard) described in relevant columns to the right.

[Image: example orthogonal assessment table]
Above slide extracted from Ivan Moore presentation at EDIN workshop, April 2016

In previous workshops, participants have been asked to write such criteria after the sessions, but there have been completion issues with this for several reasons. Writing these criteria collectively, as part of the workshop, may be a better way forward, so next time (e.g. to support a forthcoming online MSc in medical diagnostics and therapeutics), I plan to integrate this exercise into the workshop itself. To help attendees get their heads around it, we’ll start with a simple example where I ask participants to create the essential criteria needed to assess someone who needs to prove that they can make a cup of tea for the President of Ireland (a proud tea drinker, if I’m not mistaken). The ‘someone’ in this case would be a Martian who, presumably, knows nothing about the tea-making process. By way of illustration, here’s a highly simplistic draft example that lists the various criteria that need to be fulfilled, in my opinion, with some initial descriptors of various levels of achievement in tea-making expertise.

[Image: draft table of tea-making assessment criteria and achievement descriptors]

So to conclude, while I’ve mentioned the ideas above in previous workshops, my new plan is to step through the learning outcome criteria work during the workshop itself, allowing sufficient time for peer review of proposed outcomes and criteria. Time will tell whether this approach works (Will participants be able to apply the same rationale to their courses? Is this approach feasible? Will they run screaming from the room?) but unless I’ve been abducted by aliens, I’ll be back to tell you how it goes.

Conole, G., 2014. The 7Cs of Learning Design—A new approach to rethinking design practice. In Proceedings of the 9th International Conference on Networked Learning (pp. 502-509).
Moore, I. (2016). Towards Best Practice in Assessment. Presentation for EDIN Conference.
Mor, Y., Warburton, S., Nørgård, R.T. and Ullmo, P.A. (2016, September). MOOC Design Workshop: Educational Innovation with Empathy and Intent. In European Conference on Technology Enhanced Learning (pp. 453-459). Springer International Publishing.





Keeping the collaboration love alive

A number of us at the DCU Teaching Enhancement Unit have been having conversations recently about the pros and cons of working with other people on educational research projects. As self-proclaimed blended professionals (as per Whitchurch, 2009), we sometimes become involved in collaborative research projects with lecturers and other staff. Amongst other things, the involvement of a learning technologist or academic developer on a research project can potentially:

  • Help source relevant literature from educational research
  • Advise on relevant learning theory
  • Provide input on methodology and ethics
  • Highlight potential journals/calls for papers
  • Review and provide feedback on abstracts and drafts
  • Advise on the structure and writing of the paper
  • Act as a critical friend and sounding board for ideas

The advantages are mutual. For our part, working with lecturers on research projects gives us a deep insight into the challenges of teaching at university, raises awareness of pedagogical issues in particular disciplines, and helps us to stay abreast of emerging and evidence-based technologies and approaches. It also helps us to build up our network and portfolio of publications and who doesn’t want that?

But there’s a but…

While these collaborations have, for the most part, been very positive experiences, leading to scholarly outputs that were significantly better for the multiple perspectives, there can be some risks involved, particularly in new working relationships. A recent article on co-authoring from Times Higher Education would seem to confirm the benefits of a ‘pre-nup’ agreement of some kind to help negotiate the process. In particular, the following paragraph on the importance of a clear division of labour stood out:

The transition from initial idea to published artefact usually involves a significant amount of time and effort pursuing a variety of tasks. These range from scanning the literature to gathering data, and from negotiating with editors to making the diagrams look presentable. For your co-authoring experience to feel collaborative it helps that these tasks are identified and shared among the members of your authoring team. Be clear on who is doing which bits.

My colleagues and I agreed that it might be a good idea to share the THE article upfront with those who might be new to the co-authoring process. Indeed one of my colleagues is working on a detailed set of guidelines for collaborative authorship, including the thorny questions of author order, what constitutes a ‘significant intellectual contribution’, and ownership of data.

Collaboration Checklist

Taking this one step further, I would also suggest that it might be helpful to review a checklist to confirm “who is doing which bits”. Once you’ve agreed that you’d like to work together, it’s time to get down to the nitty-gritty and ask questions such as:

  • What is the agreed order of author names?
  • Who is going to source potential publication opportunities?
  • Who is going to draft and submit the ethical clearance forms?
  • Who is going to write and submit the abstract?
  • Who is going to write the introduction, literature review, methodology, discussion, conclusion? (Or whatever format has been agreed.)
  • Who is going to review and provide constructive feedback on the first draft? How will that feedback be delivered?
  • How often will you meet?
  • Who is going to liaise with the publisher from beginning to end?

Obviously, this is not an exhaustive list but it might prompt some useful thinking about who is doing what and when. I would be of the view that the person listed as first author should typically do the lion’s share of the above – but it is entirely context-dependent and some writers may prefer a much more organic (and arguably more collaborative) approach to the writing process. Ultimately, it should be about working well together, so you have to go with an approach that suits all involved. If you can establish that approach sooner rather than later, the chances of a blissful (or at least relatively harmonious!) collaboration are greatly improved.


Whitchurch, C. (2009). The rise of the blended professional in higher education: a comparison between the United Kingdom, Australia and the United States. Higher Education, 58(3), 407-418.

Peer Review – some lessons learned & some friendly advice

Inspired by events and discussions at the recent Y1 Feedback Symposium, I’ve been mulling over how I can improve the peer review process in an online staff development course that I teach. During his presentation, assessment researcher Professor David Nicol made the point that students seem to learn more from conducting a feedback review than from receiving one. The Nicol, Thomson and Breslin (2014) research on peer review makes convincing reading about the evaluative and cognitive benefits of the review process, and I’ve decided to share that paper with my own students to explain why we’re using this approach.

This got me thinking about how I’ve used peer review (actually, I called it peer critique, which is probably not ideal) in the past. While not claiming to have vast experience with the approach, I have used it with a class of online students who were asked to give each other feedback on proposed strategies. Some unexpected issues came to light:

  1. Some peer feedback comments were interpreted as being undiplomatic and irrelevant
  2. Providing what I thought were helpful ‘prompts’ for questions/comments ended up being regurgitated directly, in a small number of cases
  3. While some students loved it (particularly those working in similar disciplines), some participants were unconvinced (as they felt they did not receive the type or quality of feedback they would expect)
  4. Because students were free to discuss using any communication mode of their choice, several offline conversations were not visible to other students or me
  5. Some students did not engage at all with the process and some left significant elements out (e.g. did not explain what aspects of feedback they planned to incorporate or omit)

There appears to be a fine line between providing appropriate scaffolding and micro-managing the process for students: you want to give them enough information to know how to get started with a peer review, but not so much that it becomes a simplistic or somewhat pointless exercise. At the symposium, Prof. Nicol made the point that students should ideally generate their own criteria for quality when conducting peer review. While I can see this working in a small face-to-face group, I am not sure it would be as successful in an online context, where silence (the equivalent of the blank page) is simply easier to ignore. Much depends on the course design, of course, but online I think I’d prefer to play it safe and provide at least some suggested criteria or guidelines as a starting point for the discussion – you can find helpful advice on peer feedback forms from the University of Hawaii, Manoa.

So trying to tie all this neatly together, here are some ideas for the next time I use a peer review approach in online teaching:

  • Record a screencast or video that captures me ‘thinking out loud’ as I read and annotate a sample draft (this might help to model suggested feedback and tone, addressing points 1, 2 and 3)
  • Set up a central location (such as a discussion forum) with a designated thread for each pair to respond to online (this might assist with points 3 and 4 by increasing visibility into the process and enabling greater access to other classmates’ reviews)
  • Create an infographic that provides a recommended pathway for the assignment, from beginning to end (this might clarify the multiple steps involved, addressing point 5)

As luck would have it, my 14-year-old daughter has recently been doing a peer review for her English class and has been subjected to intense questioning about it by her mother. I should add that she is completely unimpressed by it as a learning device (sigh) and reckons that it is too hard to “be honest” with one’s friends. I know it is the job of a 14-year-old to be unimpressed by everything, but she might have a point – could anonymity help or hinder the peer review process?

If anyone else has comments on proposed or previous experiences of online peer review, I’d love to hear your thoughts. Speak up, my friends.


Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: a peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102-122.