The research is closely associated with my professional practice. As Head of elearning for the RCEM I’m responsible for overseeing pedagogical modelling, technical development and stakeholder engagement for our elearning portfolio. At the time of writing this comprises two sites: RCEMLearning, a traditional VLE comprising an etextbook and a range of assessable modules, and the RCFN. The FOAMed movement has been enthusiastically endorsed by the EM community, but it is in tension with some of the more traditional strictures of medical education, which continue to emphasise credentialisation and ‘measurable productivity’ (Pring, 2013, p 158).
The dissertation has two main research questions:
The remainder of the methodology is broken down into a series of sections providing an overview of the research design and processes involved. These include: a discussion of the project’s rationale, its ethical implications, and the techniques used in data collection and analysis (a mixed methods approach with an emphasis on qualitative analysis, which triangulated findings from an online structured questionnaire into a series of semi-structured podcast interviews subsequently analysed using thematic analysis); it closes with a discussion of the study’s limitations.
The methods used here are underpinned by qualitative approaches, as they emphasise words rather than quantification (Bryman 2016). From an empirical and pedagogical viewpoint the research is informed by a constructionist and interpretivist ethos, which sees domains of knowledge as being locally constructed and contested, and which seeks to gain insight into these complex processes (Onwuegbuzie & Leech 2009). This aligns with the FOAMed movement’s pedagogical frameworks, as it continually seeks to challenge and ask questions of evidence and clinical practice. Another aspect of the dialogic theme which motivates the research is that I wanted to listen to and harness as many voices as possible, so the questionnaire attempts to capture the experience of learners and their relationship to OERs; this is not a common feature of OER research, and there are silences amongst the degrees of openness (Hilton et al 2013).
The mixed methods approach reflects my commitment to a dialogic ethos, as a mono-method of data collection would be insufficient to capture learners’ voices; I also feel it would not cover the research questions in the required breadth (Bregoli 2012). The decision to employ a mixed methods approach does not guarantee richer data, as the data could just as easily conflict as corroborate, and it can be challenging to provide an equal amount of detail about the processes involved for both methods (Bryman 2016, Bryman 2006). The data generated by the self-administered questionnaire was valuable, but it is simply not as rich as the data generated from the semi-structured interviews. However, the decision to select a mixed methods approach for research concerned with health-related issues reflects the complexity of the field (Morgan 1998).
Deciding to pursue a mixed methods approach also involved some significant questions regarding sequencing, methods of integration, sampling and reliability. The key sequencing decision was to obtain the questionnaire data first so it could be fed into the interviews by relating it to the research questions (Bryman 2016). Triangulation was used to map the data from the questionnaires into the interviews, as I wanted to see if attitudes corroborated. Crucially, the interviews allowed emerging narrative paths to be followed – even if they could not be corroborated – which simply could not happen with an online questionnaire alone (Bryman 2016). Purposive sampling was employed to select interviewees in a strategic way. I am fortunate that my professional role gives me access to key educators in the EM community in the UK; I wanted to use their expertise and, concomitantly, to present the views of stakeholders to them, which would not have been possible without a mixed methods approach. The study has a good degree of external reliability as it could be replicated (Bryman 2016).
The research adhered to the University of Edinburgh’s ethical guidelines. The anonymous online questionnaire included a brief statement about ethics on its landing page; as it was disseminated via Twitter participants were asked to get in touch via my personal Twitter account with any queries. Participant consent is in line with University of Edinburgh policies, and permission was obtained from all interviewees prior to the recording of podcasts.
Image 4: Ethical statement about participation in the online questionnaire
Data was collected via the online questionnaire and semi-structured interviews, which were recorded as podcasts. The self-administered questionnaire (SAQ) had nine questions in total: eight closed-ended questions and one open-ended question. The relatively small number of questions was designed to avoid ‘respondent fatigue’ (Bryman, 2016, p 222). Design was carefully considered, as completion time is constrained by extremely busy clinical environments (Tymms 2013). The questionnaire was open to all learners who engage with the RCFN, so it was not limited to members of the college or clinicians; non-members and other core ED staff (such as nurses and paramedics) were eligible to complete it. However, no analytics about respondents were captured, as this would have added layers of complexity to the design and dissemination stages.
The first eight questions were structured MCQ-style items requiring single answers from respondents, to gauge demographic information and affective data about FOAMed’s relationship with a curriculum and reflective learning. The structured questions can be broadly split into three domains which align with my research questions:
I hoped to receive between 50 and 100 responses to the questionnaire; 99 were received in total. This was a healthy number of responses for the purposes of this research, but it is relatively low compared to the download rates for each podcast. I decided not to put a closing date on the questionnaire, as previous experience distributing surveys in the same professional context taught me that emergency clinicians and EM staff will engage quickly and early, if at all. Scheduling constraints were also a factor, as I needed to collect data and triangulate it in a timely manner to organise podcast interviews as close as possible to the data collection period. The trend for early responses to such requests from the EM community is reflected in the completion graphic below, which shows that 41% of responses were received on 18th March 2016, the questionnaire’s launch day:
Image 5: Responses by day to RCFN research questionnaire.
SurveyMonkey was used to design the questionnaire and capture its data. I have used it before for professional purposes, so I trust and am familiar with the software; the ability to download responses and data in a variety of formats is particularly helpful. When responses had been collated I exported both the quantitative data (response rates and ratios of responses) and the qualitative data (narrative responses to the final unstructured question) into Excel. Responses were anonymised and no clinical information was requested or disclosed, which is often a problematic issue for research related to medical education. No information was requested or captured regarding respondents’ gender or ethnic identity, as the central purpose was to gauge attitudes about the RCFN and its relationship with the RCEM’s educational activities. However, this represents a potential new angle for future researchers investigating FOAMed use within particular demographics. In an attempt to generate a healthy response rate no question was mandatory, including the final question, which required a narrative response; this perhaps accounts for the small sample generated by question nine. The first structured question (‘Please confirm your age’) had a response ratio – that is, the percentage of respondents who answered the question from the total sample of 99 – of 100%, but this dropped to 21% by the final unstructured question (‘What would improve your experience of the RCFN? Please enter some general comments’). Response ratios for each question are illustrated in the table below:
Image 6: Response ratios for the questionnaire
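The response ratio described above is simple percentage arithmetic; the sketch below illustrates it in Python. The answer counts are hypothetical illustrations chosen to be consistent with the reported 100% and 21% figures, not the actual SurveyMonkey export.

```python
# Sketch: response ratio = respondents answering a question / total sample.
# Counts below are illustrative, not the real export.
TOTAL_RESPONDENTS = 99

answered = {
    "Q1 (please confirm your age)": 99,
    "Q9 (what would improve your experience of the RCFN?)": 21,
}

for question, n in answered.items():
    ratio = round(100 * n / TOTAL_RESPONDENTS)
    print(f"{question}: {ratio}%")
```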
Narrative responses to question nine are a significant but admittedly small sample (20 respondents are recorded as answering this question, but only 16 actually entered comments). I coded this data by identifying recurrences of terms and concepts (CPD, appraisal etc.) and noting the number of times each was mentioned. This data was then triangulated into the relevant areas of the questions for the semi-structured interviews.
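The counting step described here can be sketched programmatically. The responses and terms below are invented placeholders, not the actual question-nine comments:

```python
# Sketch of the coding step: counting recurrences of key terms across the
# narrative responses. All strings here are invented placeholders.
from collections import Counter

responses = [
    "It would help if FOAMed counted towards CPD and appraisal.",
    "CPD recognition for listening to podcasts would improve things.",
    "More structured links to the curriculum please.",
]

terms = ["cpd", "appraisal", "curriculum"]

counts = Counter()
for response in responses:
    lowered = response.lower()
    for term in terms:
        counts[term] += lowered.count(term)

print(counts.most_common())
```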
The EM community in the UK and internationally is heavily active on Twitter, so the questionnaire was disseminated there. Moreover, the RCEM’s elearning sites actively use Twitter (to announce publications, engage with stakeholders, collate feedback etc.), so advertising the questionnaire via Twitter aligns with users’ expectations. It was distributed primarily through the Twitter account for the RCEM’s elearning platform (@RCEMLearning), but I also re-Tweeted the call for responses via my personal Twitter account (@CJWalsh05). Tweeting in a professional capacity has taught me the importance of including a visual image with Tweets to boost engagement, so the Tweets advertising the questionnaire were sent with the following image:
Image 7: Tweets from @RCEMLearning to advertise the questionnaire.
I also scheduled reminder Tweets on specific dates in an attempt to boost responses; Tweets requesting responses were sent on four occasions between 18/03/2016 and 25/03/2016. I would have continued this pattern, but the last Tweet was sent just before the Easter break, when clinical pressures increase. The table below illustrates the analytics for each Tweet (extracted from Twitter); they reveal that the call to complete the questionnaire had a significant number of impressions, but this did not translate into completions:
|Date Tweet sent||Impressions (times people saw this on Twitter)||Total engagements (times people interacted with this Tweet)|
Image 8: Analytics for Tweets advertising the questionnaire.
I decided against structured interviews as they seemed too rigid and monologic. Semi-structured interviews provide a sequenced framework for the interview, but they also allow the interviewer to vary the sequence of questions. There is also latitude to ask further questions in response to unanticipated replies, and new themes can potentially emerge from this, which is significant when a thematic analysis is being conducted (Bryman 2016, Bagguley & Hussain 2014). As the FOAMed ethos is predicated on dialogic critiquing and reinforcing of practice and educational paradigms, I wanted there to be a ‘goodness of fit’ between the data sets and the research questions I was investigating. Recording the interviews as podcasts meant they could be embedded in the finished version of the dissertation, which also aligns with the FOAMed community’s ethos of sharing and reuse. The discussion prompts for the interviews – including those triangulated from the questionnaire – are shown in the table below:
|Discussion prompts formulated ahead of the interviews||Discussion prompts derived from questionnaire|
|Should a medical college even be looking to legitimize or recognise FOAMed? What are the risks if we do or we don’t?||The majority of respondents are relatively early-stage clinicians. Is the acceptance of FOAMed an issue of maturation?|
|For many FOAMed has an inherent oppositional sensibility. Is the FOAMed ethos somehow diminished if an institution gets involved?||The majority of respondents think that FOAMed resources should be recognised within the curriculum. Is the curriculum agile enough to accommodate this?|
|It has been argued that FOAMed doesn’t need a curriculum of its own. However can it legitimately be seen to provide a meta-commentary on the curriculum and curriculum formation?||Although a small data set, the most common narrative response was a need for CPD recognition for FOAMed activity; does this surprise you?|
|Is offering reflective proof enough, or do FOAMed resources need formal assessable components to align with existing policies of awarding bodies?||One final thing. I assumed that reflective learning is perfectly aligned with the FOAMed movement, but there’s a degree of ambivalence about this. Do you think this tension will always be present?|
Image 9: Questions/discussion prompts for semi-structured podcast interviews
Six people were invited to be interviewed and only one interview (with Emily Beet, RCEM’s Director of Education) failed to materialise, due to scheduling constraints. This was unfortunate, as Mrs Beet would have provided insightful analysis from a non-clinical perspective. The other interviewees are all practising EM physicians and RCEM members who have some kind of educational involvement with the college or – in the case of Dr Damian Roland – are renowned EM educationalists. Four of the interviews were conducted in person, either at RCEM headquarters or the interviewee’s place of work; these were recorded as podcasts using the GarageBand application on a MacBook. A Skype video call was used for the remaining interview; it proved a flexible format that captured non-verbal communicative gestures, allowing fluency to be maintained. Rapport building was not required as I already had a professional relationship with the interviewees. I formally invited interviewees to participate via email, which also outlined the objectives and nature of the research; I attached the university’s ethics form and participant information sheet to this email. The interviews ranged in length from 31.21 to 41.44 minutes, with a mean of 34.87 minutes. Further relevant details are captured in the table below:
|Name of interviewee||Clinical position at time of interview||Educational/RCEM position||Date of interview||Length||Format|
|Dr. Andy Neill||Advanced trainee in EM in ROI||RCFN Editorial lead||06/04/2016||35.20||Skype|
|Dr. Damian Roland||Consultant Leicester Royal Infirmary||Leading EM educator||15/04/2016||41.44||In-person|
|Dr. Jason Long||Consultant, Southern General Hospital, Glasgow||RCEM Dean||15/04/2016||33.47||In-person|
|Dr. Simon Laing||Consultant, Birmingham Heartlands Hospital||RCEM Clinical lead for elearning||15/04/2016||31.21||In-person|
|Dr. Will Townend||Consultant, Hull Royal Infirmary||RCEM Curriculum committee lead||15/04/2016||33.01||In-person|
Image 10: Interviewee details
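As a sanity check on the summary statistics quoted above, the snippet below recomputes the range and mean from the lengths in the table. Note the assumption that the recorded lengths are decimal minutes, which is what the stated mean of 34.87 implies; if they were instead minutes and seconds, the mean would differ slightly.

```python
# Recomputing the interview-length summary, assuming decimal minutes
# (e.g. 35.20 is read as 35.2 minutes, not 35 min 20 s).
lengths = [35.20, 41.44, 33.47, 31.21, 33.01]

mean_length = sum(lengths) / len(lengths)
print(f"range: {min(lengths)}-{max(lengths)} min, mean: {mean_length:.2f} min")
```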
Thematic analysis was employed as the most appropriate tool to synthesize and analyse the collected data. Although thematic analysis is a longstanding technique in qualitative research, no real consensus exists about how to structure or implement it (Attride-Stirling 2001, Braun & Clarke 2008, Bryman 2016). However, thematic analysis looks for themes and patterns in data sets and is well suited to constructionist contexts as it ‘examines the way in which events, realities, meanings, experiences’ impact ‘discourse[s] operating within society’ (Braun & Clarke, 2008, p 81).
I adopted Braun and Clarke’s (2008) ‘recipe’ for thematic analysis. One failing they identified in studies where the technique is employed is that research questions are re-cast as themes; this can result in passive treatment of data. They suggest looking across entire data sets (i.e. the collection of interviews here) rather than focusing on individual data units (i.e. individual interviews) in order to generate codes and themes. The thematic analysis conducted here is underpinned by theoretical interests (in OERs/FOAMed and the impact of open practices on pedagogy and accreditation, for example), which can be contrasted with an inductive approach, where themes are wedded to the data itself rather than to prior theoretical engagements (Braun & Clarke 2008, Attride-Stirling 2001, Ryan and Bernard 2003).
The analysis of the data was initiated by looking for a series of codes to make sense of the information, which were then organised into themes to provide an analytical overview. I wanted to code at the latent or interpretive level rather than the semantic level, as analysis within this tradition sits comfortably within constructionist paradigms (Braun & Clarke 2008). I chose not to transcribe the data, instead working directly with the audio recordings, which allowed me to immerse myself in them and move fluidly between data items. I made initial annotations on paper whilst listening to the podcasts, which were then transposed into Excel spreadsheets as a series of codes for each interview; coding involves grouping data into patterns so interpretative analysis can begin (Braun & Clarke 2008). Computer-assisted data analysis software was not employed, as the size of the data sample and the mode in which it was captured did not necessitate it. Coding attempted to capture context, and any contradictions between the data and the research questions, which means individual extracts could be coded against multiple codes or themes.
Braun and Clarke’s model of latent analysis was used to develop the codes into themes. Latent analysis looks for underlying ideas, assumptions, conceptualizations and ideologies, and is not tied to the semantic connotations of individual data items (Braun & Clarke 2008). Themes were developed to allow the narrative of the interviews and codes to emerge, but without losing a critical eye on their relationship to the research questions. Appropriate examples have been included to illustrate each theme, which avoids simply paraphrasing the data; it also enables the acknowledgement rather than the obfuscation of possible contradictions (Braun & Clarke 2008). The table below details the steps I followed to conduct my thematic analysis, based on Braun and Clarke’s model:
|Phase||Description of the process|
|1. Familiarizing yourself with your data||Transcribing data (if necessary), reading and re-reading the data, noting down initial ideas.|
|2. Generating initial codes||Coding interesting features of the data in a systematic fashion across the entire data set, collating data relevant to each code.|
|3. Searching for themes||Collating codes into potential themes, gathering all data relevant to each potential theme.|
|4. Reviewing themes||Checking if the themes work in relation to the coded extracts (Level 1) and the entire data set (Level 2), generating a thematic ‘map’ of the analysis.|
|5. Defining and naming themes||Ongoing analysis to refine the specifics of each theme, and the overall story the analysis tells, generating clear definitions and names for each theme.|
|6. Producing the report||The final opportunity for analysis. Selection of vivid, compelling extract examples, final analysis of selected extracts, relating back of the analysis to the research question and literature, producing a scholarly report of the analysis.|
Image 11: Phases of thematic analysis (Braun & Clarke 2008, p. 87)
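Phases 2 and 3 of the table above are essentially a grouping operation: extracts are collated under codes, and codes are then gathered into candidate themes. A minimal sketch of that structure, using invented placeholder codes and themes rather than the study’s actual coding frame:

```python
# Phase 2: coded extracts collated under each code (placeholders throughout).
coded_extracts = {
    "cpd_recognition": ["extract A", "extract D"],
    "curriculum_agility": ["extract B"],
    "institutional_legitimacy": ["extract C", "extract E"],
}

# Phase 3: codes gathered into candidate themes.
candidate_themes = {
    "accreditation": ["cpd_recognition"],
    "FOAMed and the curriculum": ["curriculum_agility", "institutional_legitimacy"],
}

# Gather all extracts relevant to each candidate theme.
for theme, codes in candidate_themes.items():
    extracts = [e for code in codes for e in coded_extracts[code]]
    print(theme, extracts)
```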
Like any research process, this one is not without its problems. There is a clash of research philosophies here to an extent, as the affective data I sought to collect contrasts with the scientific, quantitative, hard-evidence-based demands of EM. As an organisation we need to get better at proving the efficacy of FOAMed, and using statistical modelling would help. The sample size for both sets of data is small, which reflects broader issues around data governance in the FOAMed movement. RCFN podcasts are downloaded thousands of times, but the response rate to the questionnaire was low in comparison, which means more work could be done to harness the perspectives of FOAMed consumers and advocates. The research process also entailed negotiating different roles; there was perhaps a danger of too much rapport in the semi-structured interviews, and thematic analysis involves the tricky task of assuming the dual role of cultural member and cultural commentator, which could compromise a researcher’s objectivity and relationship to their data (Bryman 2016, Braun & Clarke 2008). Unfortunately, technical problems meant that Dr Townend’s interview was only partially captured and was therefore insufficient to use here. Given the value of the narrative data collected, further studies could also look at widening the pool of interviewees.