
Wednesday, 31 March 2021

Star Wars and charisma

The 1977 movie Star Wars is underpinned by an interesting phenomenon. Take - for example - the scene where Luke, Ben Kenobi, R2-D2 and C-3PO are attempting to enter Mos Eisley, and are stopped by Imperial stormtroopers. The following conversation takes place:
Stormtrooper: How long have you had these droids?
Luke Skywalker: About three or four seasons.
Ben Kenobi: They're for sale if you want them.
Stormtrooper: Let me see your identification.
Ben Kenobi: (waving his hand slowly) You don't need to see his identification.
Stormtrooper: [pauses] We don't need to see his identification.
Ben Kenobi: These aren't the droids you're looking for.
Stormtrooper: These aren't the droids we're looking for.
Ben Kenobi: He can go about his business.
Stormtrooper: You can go about your business.
Ben Kenobi: Move along.
Stormtrooper: [gesturing] Move along! Move along!
[The group enters the spaceport and parks near the cantina.]
Luke Skywalker: I can't understand how we got past those troops. I thought we were dead!
Ben Kenobi: The Force can have a strong influence on the weak-minded.
The 'Force' can make someone likable: it creates a shared perspective with the stormtrooper, and a vision which makes Ben both likable and seemingly trustworthy. It certainly makes Ben seem articulate and competent, and he exerts a strong influence over the stormtrooper which far outweighs any formal position - Ben holds none. There are also elements of future goals implied, as Ben changes the stormtrooper's view. Watch here:

The characteristics of charismatic leadership are (Daft, 2007):
  1. Likableness: Shared perspective and idealised vision make the leader likable and an honourable hero worthy of identification and imitation
  2. Trustworthiness: Passionate advocacy by incurring great personal risk and cost
  3. Relation to status quo: Creates an atmosphere of significant change
  4. Future goals: Portrays an idealised vision that is highly discrepant from the status quo
  5. Articulation: Strong and inspirational articulation of vision and motivation to lead
  6. Competence: Uses unconventional means to transcend the existing order
  7. Behaviour: Unconventional, counter-normative
  8. Influence: Transcends position; personal power based on expertise and respect and admiration for the leader
So I wonder if the 'Force' may be synonymous with charismatic leadership, as there appear to be a number of charismatic markers present. It may be charismatic leadership for a 'good' - or higher - purpose: but it is charismatic leadership.

Charisma is an interesting leadership style. We can be so easily taken in by it. We WANT to believe. We humans are credulous (Viciana et al., 2016). I feel that charisma is the smoke and mirrors type of leadership where leaders can be all talk and no trousers. Each time I encounter overly charismatic personalities in politics I become hesitant to believe the rhetoric, and suspend belief until I see concrete action.

It only works if we feel the Force.


Sam

References:

read more "StarWars and charisma"

Monday, 29 March 2021

Our acronyms are backronyms

We humans love a good conspiracy theory, so the idea that some words are formed from acronyms or initialisms is seductive. Take - for example - the idea that the word 'pom' - a pejorative term for the English, commonly used in Australia and New Zealand throughout the 20th century - came from transportees being called "Prisoner Of Mother England", or "Prisoners of Her Majesty".

However, seeing as all those in power were actually from 'Mother England', or subjects of Her Majesty, these initialisms seem a bit of a stretch.

Michael Quinion, an Oxford English Dictionary reader, and sometime editor of the blog World Wide Words, explored the origins of the term 'pom'. He found that English author D. H. Lawrence, writing about Australia, described it in 1923 as "Pommy is supposed to be short for pomegranate. Pomegranate, pronounced invariably pommygranate, is a near enough rhyme to immigrant, in a naturally rhyming country. Furthermore, immigrants are known in their first months, before their blood ‘thins down’, by their round and ruddy cheeks". Michael also notes that we humans love creating acronyms where none existed before (Quinion, 1999). Backronyms, if we will.

What is interesting is that there are so many types of what are now termed 'acronyms'. Bloom has a great list of what really are types of abbreviation:

  • "Clipping, e.g. ad for advertisement
  • "Titular contraction, as in Mr, Dr or St
  • "First letter initialism, e.g. BPH, TUR, NPO, RSVP
  • "Opening letter initialism, e.g. Ca, HeLa
  • "Syllabic initialism, e.g. modem (modulator-demodulator)
  • "Combination initialism, e.g. ad inf (ad infinitum), email, CaP", and
  • Pronounceable initialism or "Acronym, e.g. TURP, radar" (2000, p. 2)

While shortening has been around for a long, long while - think INRI for 'Jesus Nazarenus Rex Judaeorum', and SPQR for 'Senatus Populusque Romanus' - the term 'acronym' for a pronounceable initialism was coined surprisingly recently, during WW2 (Cannon, 1989). Examples are ANZAC (An-Zack; Australia and New Zealand Army Corps), and radar (RAdio Detection And Ranging). What is more surprising is that the term initialism was first seen in print in 1844 (Clergymen of the Church of England, p. 48). So both initialisms and acronyms as 'formal' constructs separate from abbreviations are relatively recent.

Coming back to the idea of 'poms': in the 17th and 18th centuries, few people could read, so we had little need to shorten a long written phrase. If we use Occam's razor, it seems more likely that a pejorative term will have come from something much simpler than a backronym. Like the sunburnt faces of English imports newly docking in Aussie.

Language is fascinating.


Sam

References:

read more "Our acronyms are backronyms"

Friday, 26 March 2021

The vagaries of memory

We have all heard how eyewitness testimony can be wrong: where witnesses can go from 100% sure to being proven gob-smackingly incorrect in the light of later DNA, or video, evidence. Having just watched a documentary from Deutsche Welle (DW) on memory which goes into the mechanics of how this happens, I thought I would share the documentary's thinking with you.

Apparently our memories start out - for the first twelve months or so - being quite 'plastic'. Our memories evolve with how often we revisit them, how many times we rehearse each memory, what confirmation we are offered as to 'correctness', and how each memory fits into our group, or societal, meta-memory (if you will). The documentary talked about a longitudinal study run by memory scientists following the Al Qaeda attacks in the US on 11 September 2001. Survey participants could remember where they were and what they were doing soon after the event itself. However, after one year, participant memories of what they were doing and where they were had often changed. From that point on - a point of 'concreteness', in a way - the memory stayed consistent, even out to ten years. It seems that once we have crafted a story that we can live with, we hold it close and nurse it (DW Documentary, 2020).


The 11 September 2001 example resonated with me, but I remain sure (!) that I remember my initial thoughts correctly. I awoke to my radio alarm's station talking about the first plane having flown into one of the World Trade Center towers, thinking "This is the last time I am listening to The Rock: their practical jokes are simply not funny anymore". Then I got curious and turned on the TV, to find that The Rock was reporting real events as they unfolded. Then I felt guilty for assuming they were pulling the listeners' legs. Shades of The Shepherd's Boy Who Cried Wolf (Aesop, 1912).

Apparently 30% of North American witness testimony is accurate (DW Documentary, 2020). Logically then, the other 70% is inaccurate. That is a fairly scary percentage: consider how many people must have been convicted or fined based on dodgy memory.

While the documentary talks about police practices which help to ensure that we do not edit our memories when it counts, I think we need to do things that help us to remember our lives in all their joyful, and all their shame-filled moments.

I was thinking, then, that we could keep a diary, and record daily those events which happen to us, along with our thoughts and our feelings. Today's fresh thoughts may be more accurate than those we have self-massaged, or that have been manipulated from the outside by others.


Sam

References:

  • Aesop (1912). Aesop's Fables: a new translation by V. S. Vernon Jones. William Heinemann.
  • DW Documentary (21 December 2020). When our minds play tricks on us. https://youtu.be/MmlXY-hzgm0

read more "The vagaries of memory"

Wednesday, 24 March 2021

Checking our Google storage

I would imagine that we have all had emails from Google explaining that our online storage is being capped across all Google platforms; that where we exceed our data cap for two years, they MAY delete data over the cap; and that, from June this year, Google MAY start deleting online data which has not been accessed in two years.

Google is saying that they aren't rushing into all of this, but are putting us on notice about both these elements, and will communicate with us well in advance of anything happening.

What does not seem to have happened yet is Google showing us how to check our current online storage load. However, there is a very easy place we can go to:

https://one.google.com/storage

Easy when we know where.


Sam

read more "Checking our Google storage"

Monday, 22 March 2021

Notes on virtual conferences

Last year I attended four conferences. This is far more than I would normally attend, and they only became available to me because of Covid-19: the conferences went virtual, and - therefore - became accessible.

I attended two North American conferences, one Australian conference, and one New Zealand conference. The differences between the conferences were interesting. Some sessions were recorded, so if I missed a particular session because I was attending another strand, I could go back and tap into the presentation later. There were differences in how long the sessions stayed live after the conference had taken place. One conference got the sessions live within a week or so, and kept them live for three months. Another took about a fortnight, and will keep the sessions live for a year. One sent us a live link but, because that conference required you to register for the sessions you wanted to attend, also sent out a link to a recording that could be downloaded.

First of all, session times were tricky. With the two North American conferences, much of the programme was scheduled for quite awkward times. 4am is not a good time when you have a full day of teaching ahead, so I found that I was unable to attend many of the live sessions I wanted to. The Australian conference sessions were all in the evening, which made it very easy to attend.

Secondly, because I was not 'at' a conference, the time was not carved out of my calendar to attend: my teaching life carried on. I had quite a different mindset towards the conferences themselves, and approached the sessions differently: I knew I could go asynchronously. Usually we are forced to carve out the time, because we are out of the office. It was the fact that I knew I could tap into the sessions later that made registering viable (though it was still quite expensive). Attending the conference was not a break from work: it was alongside - on top of - work. I am wondering if I need to take a different time orientation towards them, and take leave to attend... or whether it is OK to not take that break.

Thirdly, normally when attending a conference we pick the sessions in the streams we think we will enjoy. We make some poor decisions, we make some spectacular decisions, but we cannot be in many places at once. Choices are forced upon us. The virtual conference allows us to attend ALL sessions, over time. However, I can relate that three months is not long enough to see all the sessions I wanted to see. I fitted in viewing all the extra sessions around work. Some were great, some were rubbish. But I packed in as many as I could, to get my 'money's worth', and still ran out of time to see everything. I was interested in why I wanted my 'money's worth', too: that value became a key driver.

Fourthly, the synchronous/asynchronous nature of virtual conferences means we are less overloaded: we can digest in smaller segments. We can also take the time to think more critically. I suspect we will absorb more.

Lastly, I found that I like to be able to download the video and slide decks. Some video was downloadable from all conferences, but the sessions that I could only watch again on the PC - without downloadable or supporting material - annoyed me. I still am not quite sure why: perhaps collecting the conference materials is part of the perceived value?

While virtual conferences have made attending much more accessible, they remain expensive. In deciding to attend, I think we need to consider our time zone; consider leave; consider our length of access; and consider what materials we walk away with at the end.


Sam

read more "Notes on virtual conferences"

Friday, 19 March 2021

Seeing the file tree in File Explorer

After a recent Microsoft upgrade, I lost my File Explorer view showing the file tree. Grr: as usual, an upgrade overwrites our own preferences and settings, in that "Mother knows best" way which Microsoft still seems to exhibit.

I was at a loss, briefly, to remember how to get it back! Then I remembered that all I needed to do was to go to the View tab, select the Navigation pane button at the start of the ribbon, and tick the "Navigation pane" option from the drop-down list. Too easy once your brain starts working again, but - having just been through a forced upgrade - I still had that sinking feeling that the recovery might not be that simple!

I figured that I would not be alone in finding this reset annoying, so thought I should create a post in case any of you have got stuck with this too.


Sam

read more "Seeing the file tree in File Explorer"

Wednesday, 17 March 2021

Learning Harvard Style

I am currently working with some colleagues on writing a research article for a journal in which none of us has attempted publication previously. We are all learning new things as a result.

One of the things I am struggling to get my head around is the fine nuances of Harvard Style citations and referencing, as there is an apparent paucity of handbooks which illustrate the intricacies. I am used to APA. I have written previous articles using APA. I know how to cite, how to quote, and how to reference all the tricky little things that we hardly ever need to think about.

I understand arcane elements like white papers, multiple authors, and pre-press book chapters. But I don't even know where to go to find these out with Harvard Style. I am reluctant to use MS Word's built-in referencing function, because I may rely too much upon it, not see rookie mistakes, and therefore jeopardise the acceptance of our article.

After a day of looking, I have finally found a Palgrave handbook called "Cite them Right" (Pears & Shields, 2019), which I hope may help me. Once I have digested the relevant chapters within the book, I will be able to go forth and translate all our references from APA into Harvard style.

There are small things that catch us, such as no ampersand for multiple authors inside a citation bracket, but the use of 'and' both inside and outside. "No date" is written in full instead of being abbreviated, and "et al." is italicised. Book chapter citations include the page range in the citation bracket, even when not a quote. If quoting a webpage, we should include the paragraph number. A home page URL may be used as the author name in place of the actual author.

Article titles are in single quote marks in the reference list. References contain stub DOIs preceded by "doi:". References have no full stops until the page range for articles, or the end of the title for books. A page range is indicated with "pp.". The publishing house location is included. There is no space between author initials.
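To make these differences concrete, here is a single made-up journal article reference formatted both ways. The authors, title, and DOI are invented purely for illustration, and the Harvard line simply applies the rules listed above, so treat it as a sketch rather than a definitive rendering:

  APA: Smith, A. B., & Jones, C. D. (2020). Comparing referencing styles in applied research. Journal of Academic Writing, 12(3), 45-67. https://doi.org/10.1234/jaw.2020.0456

  Harvard: Smith, A.B. and Jones, C.D. (2020) 'Comparing referencing styles in applied research', Journal of Academic Writing, 12(3), pp. 45-67. doi:10.1234/jaw.2020.0456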

Many elements look similar, which is good, but there are enough differences for me to need to be hyper-alert. This feels a little like attempting a new language: sometimes elements are so similar that we can build off what we already know; others are so different that they will make what we say incomprehensible.

Ah, well. Learning is learning: and no learning is ever wasted.


Sam

References:

read more "Learning Harvard Style"

Monday, 15 March 2021

A simple view of method

There is some research that, when you read it, sounds incredibly well-planned. But I am also wondering if we get into the habit of making our methods sound good at the end, as opposed to actually being good from the outset.

I suspect that we tend to make methodology and methods too complicated - arcane, even - in academia. All the esoteric categories and sub-categories sound so deliberate. However, the more I read, the less sure of the 'deliberateness' I become.

Recently I ran across these powerful words:

"I begin to see that the whole idea of a method for discovering things is ex post facto. You succeed in doing something, or you do something so well that you yourself want to know how you did it. So you go back, trying to re-create the steps that led you, not quite by accident, not quite by design, to where you wanted to be. You call that recreation your ‘method’. (Koller, 1983: 88 [...])" (Thorne, 2016, p. 19).

Wow. Well, it looks like other people also think we 'make it up' as we go along in research. Even when we write our methodology up for publication, we tend to follow a "drunkard's walk" (Heinlein, 1980, p. 164) approach. A drunkard's walk can be defined as:

"a mathematicians' term for a two-dimensional random search. The name comes from the colorful image of a drunk standing in the dark between two lamp posts. The drunk wants to get to a lamppost — he doesn't care which — but he's so intoxicated that he can't control which direction he's stepping in; all he can control is that he is walking toward a light. Every step he takes is a 50/50 split between going one way and the other. Eventually he will reach a light, but how long it'll take him is the big question" (Schroeck, 2012).

The more expert we become and the more experience we accumulate, the fewer elements of the drunkard's walk there will be. But is that because we have learned more about our 'chosen' method, or is it that we have learned what category our natural method inclination is most aligned with?

Or some of both? And does it matter?


Sam

References:

  • Heinlein, R. A. (1980). The Number of the Beast. New English Library.
  • Koller, A. (1983). An unknown woman: A journey to self-discovery. Bantam.
  • Schroeck, R. (2012). Latest Update: 29 November 2012. http://www.accessdenied-rms.net/dw2conc.shtml
  • Thorne, S. (2016). Interpretive Description: Qualitative Research for Applied Practice (2nd ed.). Routledge.

read more "A simple view of method"

Friday, 12 March 2021

Calling all apprentices

As much as we would like to think that our societies - cultures - are constant, they are in an eternal state of flux... but glacially slow flux. Perhaps we could call it f-l-u-u-u-x. We make ourselves up as we go along, generation by generation. We generally can't see the change as it is happening, but - once we have a long enough individual time horizon - we can see the slow morph of our people over the decades.

Thirty years - a generation - allows us to see quite a lot of shift. If we compare 1990 to 2020, many things considered 'essential' to modern life have changed. VHS. MacGyver. Video stores. Ghost, Total Recall, and Edward Scissorhands. Typing pools with electronic typewriters. Line printers. "Re your letter of the 25th inst.". Landlines. Suits in the office and casual Fridays. After 5 office drinks. Buying booze in the bottle store (not the supermarket). Horse racing. Amateur sport. 'Racing' bikes (no mountain bikes). Driving drunk with one eye closed to see only one white line (yes, I know someone who did this). Salmon pink leather couches in the last flush of 1980s chic. Hair gel. Razor cuts.

This generational change also happens in education, as macro-environmental change drives new skills and requirements for the workplace. Our schools and their education outputs are realigned to feed the industrial machine. In 1990, we had just raised the school leaving age to 16. "Tomorrow's Schools" was two years old, with bulk funding, untrained Boards of Trustees, 'deciles', and the Education Department split into 6 new organisations (Rice, 1992; Snook et al., 1999). If we go back 50 years, education was once stratified - or 'streamed' - by ability into Arts and Sciences at the top (the 5% of students who would go to university); 'commercial practice' in the middle (the 35% who would go into business or government cadetships); and the practical classes at the bottom (the 60% who would go into the trades or become 'manual labour'). We were sorted, stamped, and sent out.

Then, in the 1980s, we had to float the New Zealand dollar, which sank like a stone. The resulting economic crisis required fiscal axeing rather than spending. We willingly climbed on the neoliberalist bandwagon with 'Rogernomics'. We SOEed. We CRIed. All the government training schemes went: the Electricity Department no longer trained electricians; the Ministry of Works no longer trained fitters, welders, builders, joiners, cabinet makers, or electricians; the Ministry of Transport and New Zealand Rail no longer trained mechanics, coach-builders, or automotive electricians.

Into the dearth of apprenticeships, and in response to business requirements, computing was taught in secondary schools from the late 1970s onwards. Over the next twenty years, computers became essential for work and recreation, shifting us from typing to keyboarding. We could draw plans for a house; tune a car engine; adjust a recipe; mix sound; explore 'what ifs'; run remote field experiments; programme robots; search the library. 'Commercial practice' became IT, and so part of degree training. Computerisation, digitisation, and digital literacies have opened up alternative futures for us. We learned new ways of being, and those ways were usually the mana of a degree. Few now choose trades.

Further, the pre-1980 tradespeople school-leavers - many of whom apprenticed in government departments - can now taste retirement. There is a looming national trades shortage. We cannot import tradespeople, as other nations made the same mistakes as we did, and are just as short of skilled people.

A barrier to businesses contracting an apprentice has been the quality of aspiring tradespeople. Why is the quality lower? Where once there was probably a pool of 60% of school-leavers to choose from, there now may only be 5% of people who are interested in a trade. The numbers have reversed from degree to trade, and the smaller pool means it is harder to find quality applicants.

From the business perspective, not only are there fewer applicants, there is also a cost in time, supervision, and fees when contracting an apprentice. Where once there was enough fat in the system to have a journeyperson (recently qualified apprentice; Maggio, 1987, p. 71) supervise the new recruit, this seems to rarely happen now as journeypeople are few, and they are able to move to where the money is. Apprentices, as they learn on the job, need a lot of supervision, with almost all their work output needing to be checked - at least initially. While there is a lot of rhetoric about apprentices graduating to journeypeople without incurring fees, someone pays for that education. Usually it is the employer who pays the apprentice 'block' course fees; a cost which can be significant.

The current Labour Government is seeking to address the costs of apprenticeship by making trades block course training free via the Targeted Training and Apprenticeships Fund (TTAF), and by the employer grant of up to $16k/apprentice (Careers New Zealand, 2020; Tertiary Education Commission, 2020). TEC has just released a campaign, Vocation Nation, which you can see here:

This is a great initiative, but I wonder if this will be enough to redress forty years of trades training erosion. We are missing more than a generation of trades workers. Providing meaningful redress is likely to require more than what has been currently announced.

It will be interesting to see if the two-pronged programme works better than previous attempts. It is a good start.


Sam

References:

read more "Calling all apprentices"

Wednesday, 10 March 2021

Literature review methodology

Once we start collecting our own data, I suspect we can become so fixated on our primary data collection methods that we run the risk of failing to consider our literature review methods.

It is important to remember the 'recipe' for all elements of our research. Our method scopes our secondary collection; ensures we are being careful and systematic researchers; focuses us on what we want to achieve; and so provides the reader with a better reading experience.

We should start by having a clear research question which guides the literature review. This provides scope and context, and helps us to avoid plunging down rabbit holes (Fink, 2014, 2019). We document the question.

Following the question development, we then need to determine where we will source materials from, and what the age range of the articles we include should be. If we are dealing with a very new subject, then we may decide to discuss concepts rather than theories, because the field is - as yet - too young for theoretical development (Ang, 2014). We may limit our range to the last five years (common in medicine), or we may throw it wide open (management). We may wish to use one or two databases only, or we may want to search all possible databases (Fink, 2014, 2019). But we document those screening criteria choices, and provide a short rationale for them.

Next we consider what our key search terms will be, so that our literature review pulls in the 'right' source materials to answer our research question. These search terms should spring from our research question, and search terms and question should inform each other.

We read what we have found. We make notes on each item. We search for themes within the literature, and note those. We evaluate the quality of the materials we have found. We look for gaps in the materials, repeating authors, citations, and other quality indicators. We need to explain how we have constructed definitions, which authors we consulted, which we did not, and to document our criteria for importance (Galvan, 2009). As we progress, we note our methods in search, gap-filling, and how we create critique. All elements get added to our method, and help us to decide what other method choices will need to be made as we progress.

As we start to write, we note what kind of method we use to extract data from the source material, and our rationale for using it (Fink, 2014, 2019).

Once we have all the extracted source material, we can then start the first draft of our literature review synthesis. We note how we ensured we had covered all sides of the argument, and how we put our argument together (Galvan, 2009).

We ensure, if we are doing primary research, that our epistemology for our secondary materials aligns with our primary data collection. Are we a qualitative or quantitative researcher? If we are a qualitative researcher, should we include quantitative research results in our literature review? Or not?

While I have posted on literature review types before (here), we need to decide what type of literature review we will undertake. Will we do a systematic review? A narrative review? We need to make these choices, and explain why we made them in our method (Aveyard, 2010). Batbaatar et al., in their 2015 patient satisfaction research, clearly detailed what was inside their project scope, and what was outside (see the image accompanying this post for their methodology).

If we use a diagram, such as the Batbaatar et al. (2015) example, we also need to provide the rationale for those choices within the text. However, it is 'normal' to keep the method for the literature review brief, and covered within the first paragraph or two of the literature review chapter.

Hopefully this list reminds us of the considerations we need to include!


Sam

References:
  • Ang, S. H. (2014). Research Design for Business & Management. SAGE Publications Ltd.
  • Aveyard, H. (2010). Doing a Literature Review in Health and Social Care: A practical guide (2nd ed.). Open University Press.
  • Batbaatar, E., Dorjdagva, J., Luvsannyam, A., & Amenta, P. (2015). Conceptualisation of patient satisfaction: a systematic narrative literature review. Perspectives in Public Health, 135(5), 243-250. https://doi.org/10.1177/1757913915594196
  • Fink, A. (2014). Conducting Research Literature Reviews: From the Internet to paper (4th ed.). Sage Publications, Inc.
  • Fink, A. (2019). Conducting Research Literature Reviews: From the Internet to paper (5th ed.). Sage Publications, Inc.
  • Galvan, J. L. (2009). Writing Literature Reviews: A guide for students of the social and behavioural sciences (4th ed.). Pyrczak Publishing.
  • Kruse, S. D., & Warbel, A. (2009). Developing a Comprehensive Literature Review: An Inquiry into Method. http://learningandteaching.org/Research/Materials/litreview.pdf
  • Lavallée, M., Robillard, P.-N., & Mirsalari, R. (2014). Performing systematic literature reviews with novices: An iterative approach. IEEE Transactions on Education, 57(3), 175-181. https://doi.org/10.1109/TE.2013.2292570
  • Onwuegbuzie, A. J., & Frels, R. (2016). Seven steps to a comprehensive literature review: A multimodal and cultural approach. Sage Publications Ltd.
  • Ridley, D. (2012). The Literature Review: A Step-by-Step Guide for Students (2nd ed.). SAGE Publications Ltd.
read more "Literature review methodology"

Monday, 8 March 2021

Verifying transcripts

In some fields, it is fairly standard for researchers to have interview transcripts verified by the participant as part of ensuring data trustworthiness. Those fields seem to be counselling and education. In my experience with applied business research, it does not seem to be normal practice (if this is something that you have regularly done, I would love to hear from you, along with any references you can provide!).

The process of verifying transcripts seems to me to be a significant load to place on voluntary participants at Master's level. The average transcript length for an hour of interview is usually somewhere between 5,000 and 10,000 words; and when I went back through my own research transcripts, I found that for a quarter-hour interview the average word count was 3,000 words, and for an hour it was 12,000 words. Translating that into pages, these average 5 pages for a quarter-hour interview, and between 14 and 27 pages for an hour.

The verification process requires the participant to read through the transcript while listening to what was said, then to come back to the researcher to verify that the transcript represents them accurately. This would mean, depending on reading speed, that we are asking participants for AT LEAST another two hours of their time to (a) read through that number of pages, (b) listen to the recording, (c) document where the transcript - or their answer - is inaccurate, and (d) send us that documentation, so (e) we can amend the transcript. It is more likely that this process would take closer to four hours if a participant was a slower reader, or very thorough. Further, ESOL participants are likely to take longer than four hours with the tasks of review, verification, documentation, and sending.
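As a rough check on those numbers, here is a minimal Python sketch of the arithmetic. The reading speed and the half-hour allowance for documenting and sending corrections are my own assumptions, purely for illustration; the word counts come from the paragraph above:

  def verification_hours(words, interview_hours, words_per_minute=200):
      """Estimate participant hours to verify one transcript: read the
      transcript at words_per_minute (an assumed reading speed), re-listen
      to the full recording, and spend about 30 minutes documenting and
      sending back any corrections."""
      reading = words / words_per_minute / 60
      listening = interview_hours
      admin = 0.5
      return reading + listening + admin

  print(round(verification_hours(12_000, 1), 1))       # ~2.5 hours for a 12,000-word, hour-long interview
  print(round(verification_hours(12_000, 1, 120), 1))  # ~3.2 hours for a slower reader

Even with generous assumptions, the participant's 'one hour' quickly becomes a half-day commitment.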

So where we say we need an hour of a participant's time, we are actually asking them for three hours. Or five hours. Or more. This seems quite an imposition on the participant for what is likely to be a low-risk project. It will also slow down the data collection process by having to chase participants once the transcripts are prepared (more on transcripts here).

As this is a large piece of work for the participant to undertake, I would imagine that many participants will either: (a) have to diary in a significant chunk of time to do the review; or (b) need many reminders from the researcher to complete the review; or (c) simply not get around to doing the review at all; or (d) worse, say they have done a review without having done it. If (a) to (c), this requirement jeopardises our students' ability to complete; if (d), the verification requirement was a waste of time, and adds an unjustified veneer of 'trustworthiness' to the project. Pseudo-trustworthiness.

Instead, I encourage my supervisees to (a) send their participants a link to a secure-storage cloud recording of the interview, and (b) add a phrase to the informed consent form (which is written from the participant perspective), as follows:

If, after the interview, there are any answers that I would like to change, withdraw, or clarify, I can contact the researcher on or before the withdrawal date. My amendments will be added to the interview transcript.

That gives the participant a chance to feed back in, but does not hold up the research process.


Sam

read more "Verifying transcripts"

Friday, 5 March 2021

Data collection questions framework

Where qualitative research is complex, with fragments of data being collected and confirmed across several different data collection questions, it can be difficult to see what is and isn't covered. To assist students, I created a framework to help them think about the questions they need to ask in order to collect the necessary data.

As can be seen in the accompanying image, I created a Word document with 6 columns (download here). I ask supervisees to fill in as much as they can for each column when they are putting together their data collection questions, then email each draft to me so we can discuss them. We will work through several drafts, until students are comfortable that they will collect the data they need.

The columns in the framework are as follows, with clarifiers:

  1. List EACH SINGLE data collection question, in the order you will ask your participant. Ask questions one at a time. Aim for 15 questions max for an hour’s interview. Choose those questions that we MUST have an answer to in order to answer our research question
  2. What do you want to find out from this question? Note what ANSWERS we are seeking for this question. What do we want to find? What do we expect to find? Is there a difference between the two?
  3. Where has this been used in previous research? If we can use questions that have already been asked in research we have used in our literature review, then we can compare our data with expert results. Bonus.
  4. How will you analyse the resulting data? Think about HOW we will analyse the data. If we are simply going to be looking for themes, what themes will we be looking for? Do we have a list of codes already? Or have we already thought about the categories?
  5. How will this answer your research question? (which aim will this contribute to?). We must not 'dump and run' and focus just on a single aim, but get specific. Which part of an aim will this particular question help to answer? If we don't get specific, how will we cover everything off?
  6. Can your participants answer this? Is the meaning and language unambiguous? Are the instructions clear? We need to check that the language is clear; that the meaning is clear; that the question is precise; that if we are recycling someone else’s work, the question is worded in the same way, in the same context; that the participant has the knowledge to answer the question; that only one question is asked at any one time; that the question instructions are clear.

While sometimes we might simply be able to ask particular questions under each aim and be satisfied that we have a list of questions that will answer our overarching research question, this is not always the case. This framework at least means that we really think about what we are asking our participants, and why.

And we don't waste the opportunity to collect the 'right' data :-)


Sam

read more "Data collection questions framework"

Wednesday, 3 March 2021

Self-plagiarism, recycling credit and audience expectations

This year I have had a few situations with students 'recycling' work from other assignments. This was not really academic dishonesty: the students were honest about doing it. They just didn't realise that double-dipping was actually academic dishonesty.

What happened was that students had a small, earlier assignment in a coaching course which they felt related well to a portion of a larger, later assignment on a leadership course. Because the institutes I work for use TurnItIn, this recycling showed up in similarity scores of 15% and upwards, for a third of the class. News had spread amongst the students that recycling the earlier assessment would save them those most precious of student commodities: effort and time.

Needless to say, when I reviewed the similarity scores, the numbers were a shock. We allow similarity scores - in the institutions I teach for - to be in single digits only. From double digits upwards, the work is penalised at the face value of the similarity score (until we get to 25%, at which point it becomes an issue for the school manager to have a meeting with the student about). A high score is not the lecturer's job to investigate: it is the student's responsibility to advise their lecturer of a high score, and to explain why the score is high (then the student has 'managed' their score). However, the students all knew why it was high - they had recycled their own work - and none of them advised me.

With so many students with high similarity scores, instead of penalising everyone, I (a) sampled the similarity score reports to see if there was a pattern, then (b) went back to the students with an email, and an opportunity to learn, correct, and resubmit. Later I emailed them the following summary to close the incident:
When we submit work to TurnItIn, each of us must manage our similarity score. We each need to allow enough time to submit our work, check our score, then amend our work so that our score is below 10%. We read the help files for tips on how to write academically. If we cannot get our score below 10%, we must email our lecturer and advise why we cannot get our score lower. If we don't do that, then we know our work will be penalised at the face value of the score.

In our portfolio assessment, similarity scores were high because there were three things we didn't properly consider - or didn't know - before submitting:
  1. Firstly, resubmitting the same work without citing ourselves is what is known as 'self-plagiarism'. TurnItIn talks about this here and here (2020a, 2020b). We can reuse our work, but we must cite ourselves, following all the usual APA citation and referencing rules: i.e. no more than 50 words per citation; use double quote marks to indicate it is a self-quote; and provide the reader with a map back to the source (i.e. author, date, title, and source). If we have submitted our original work online, we simply provide the URL where the work was uploaded as our source.
  2. Secondly, there is the issue of 'recycling' credit. If we have turned in work for credit on one course, we cannot recycle it to get credit on another course. Otherwise we could pirate our own work again and again and do - over-exaggeration coming here! - 180 credits instead of 360 credits to earn our degree. This is contained in institutional policy (in NMIT's case, in an Academic Integrity and Academic Misconduct Policy which students are directed to from the Student Charter). NMIT's policy says that it is considered to be academic misconduct if we submit "work for summative assessment which has previously been submitted elsewhere, without the prior permission of the Curriculum Manager or delegate" (NMIT, 2019, p. 2).
  3. Thirdly, there are our audience's expectations. Regardless of who our audience is, they are expecting the work we provide to be our own, original, and created to answer the question at hand (think Developer Bob). We have a psychological contract to uphold with our audience, along with our reputation for honesty. If we want to reuse ideas developed in previous work, what we can do is rewrite those ideas, and bend them to suit the new use we are going to put them to. If we have particularly good elements that suit, then we must quote them in APA as per the first point.
We know it now though, and we won't make the same mistake again.
Since striking this problem, I have created "Help" posts on all my courses covering these points, so that students are now quite clear that while they cannot reuse the same material they have submitted for credit on past papers, they are very welcome to reuse ideas. We must cite them, honouring the author - ourselves - just as we would normally do.

This is obviously preying on my mind, as I have written about this more than once (here)!


Sam
read more "Self-plagiarism, recycling credit and audience expectations"

Monday, 1 March 2021

Transcripts and translations

I have a number of students who come to me suggesting that they will ask participants questions in one language, then simply transcribe the data, then translate it... as if this piece of work will be no problem. We can be completely naive about the volume of work we are - unwittingly - proposing that we take on!

For a short Master's project, we need to budget for roughly an hour's worth of interview per interviewee, and to have 8-10 interviewees, thus collecting at least 8-10 hours of data. While more data is good, 8-10 hours should give us the findings complexity and depth required for a short Master's project at 30 credits.

A single hour of interview recordings will contain roughly 5,000 to 10,000 words. If we use a transcription service, it will take an experienced transcriber between 4-10 hours to transcribe each hour of interview (IndianScribes, 2018). We need to brief the transcriber. We need to provide a key for what level of transcription we need (see University of California Irvine, 2014). We need to provide a sample format (e.g. here) so the transcriber knows what we want to get back. Once the transcripts come back, we must review and correct them (see Choi et al., 2012).

It has been documented that inexperienced data transcription has a factor of around 60 to 1 for a "plain" transcription (McCulloch, 2019). A full transcript, which includes tone, phrasing, interrupters etc., may take longer (see Saldana, 2009). We need to explain what our transcription process was, and why. Estimates of time are as follows (the arithmetic is sketched out after this list):

  • If no tools, apps or shortcuts are used (see here for some ideas), this is likely to take between 480-600 hours to transcribe 8-10 hours of recordings (McCulloch, 2019)
  • If tools such as otter.ai are used, there can be a saving of a factor of 6 - bringing the ratio down to about 10 to 1, or 80-100 hours (my own method can be read about here, and Pogue, 2017)
  • An experienced transcriber will take 32-100 hours for 8-10 hours of data, not including researcher time in checking and rechecking the work (IndianScribes, 2018). However, we also need to be aware that "Accented speech is often charged at a higher fee to standard interview transcription and this needs to be acknowledged in research budgeting" (Fryer, 2019, p. 1670), so we are better to assume our transcription costs will be on the high side
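For anyone who likes to see the working, here is a minimal Python restatement of the figures above. The ratios are simply those cited in the list (roughly 60:1 unaided, about 10:1 with tools, and 4-10:1 for an experienced transcriber); nothing here is new data:

  def transcription_hours(recorded_hours, ratio):
      """Hours of transcription work at a given work-hours : audio-hours ratio."""
      return recorded_hours * ratio

  # 8-10 hours of recordings for a short Master's project:
  for recorded in (8, 10):
      novice = transcription_hours(recorded, 60)         # unaided novice, ~60:1
      with_tools = transcription_hours(recorded, 10)     # ~60:1 reduced by a factor of 6
      experienced = (transcription_hours(recorded, 4),
                     transcription_hours(recorded, 10))  # experienced transcriber, 4-10:1
      print(recorded, novice, with_tools, experienced)   # e.g. 8h recorded: 480h, 80h, (32h, 80h)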

As we can see by the hours listed above, if we are doing a short course and are under time pressure, it is worth paying for professional transcription. As it is, we are still going to have to create the brief, decide on the key, and consider the type of transcription we desire. Fryer has a great chapter which details the process very well (2019). We also need to carefully review the transcriptions to check the accuracy, validity, the time stamps, and the formatting. Although we will have been listening to our sound files, it is only once our transcripts are done that we can really start our data analysis.

However, we have not yet considered translation. When we ask questions in one language, we then have to not only transcribe, but also translate, so that we can collectively analyse all findings. If we are planning on transcribing AND translating, this adds considerably to our workload.

Calculating hours for this task is difficult, as there are so many variables. It depends how literate we are in both languages: how quickly we can work across both to accurately capture meaning. It depends on how closely related both languages are: colloquialisms in French can be remarkably similar to English, while colloquialisms in Mandarin are extremely difficult to meaningfully translate into English. But I would estimate that each interview will take as long to translate as to transcribe.

To wrap up, we must ensure that our documentation is correct. Our research ethics application encompasses any use of transcription or translation services. In our methods, as well as noting how long each interview is and how many interviews we complete, we must also note how long interview transcriptions took, describe our method for creating accurate translations, and how we ensured that both transcription and translation were accurate (for some stellar insight into these processes, see Choi et al., 2012; Fryer, 2019; Regmi et al., 2010).

Due to time pressure on short projects, it is easy to see how transcription and translation can become a significant element of the research project, and why they are often a choke-point for completion.

If we are supervising, we need to have a realistic view of just how much effort and time will be required; only then will we be able to clearly support our students so they make good quality decisions.


Sam

References:

read more "Transcripts and translations "