Friday, 31 May 2019

National Perspectives and Maps

I am sure that all of us are well aware of the distortion required by the Mercator projection to show a three-dimensional globe in two dimensions. However, it was only recently that I thought about quite what the Mercator map does to our perspective: what gains prominence - or what we give prominence to - and what doesn't.

America looks quite important. But when we look at the world resized to actual land area (see the purple overlay in the image; Routley, 9 November 2018), we can see that the US is fairly similar in size to Brazil, Australia and China. The US is probably closest in size to China, map-wise. Russia is the largest nation. Canada can clearly be seen to be smaller than the US. Greenland fades into insignificance.
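The distortion itself is easy to quantify. Here is a minimal sketch (my own illustration, not from Routley's piece) using the standard result that Mercator stretches distances by sec(latitude), and therefore inflates apparent areas by roughly sec²(latitude):

```python
# Approximate Mercator area inflation: the projection stretches distances
# by sec(latitude), so apparent area grows by about sec^2(latitude).

import math

def mercator_area_inflation(latitude_deg: float) -> float:
    """Factor by which Mercator inflates apparent area at a given latitude."""
    return 1 / math.cos(math.radians(latitude_deg)) ** 2

print(mercator_area_inflation(0))   # equator: 1.0 (no inflation)
print(mercator_area_inflation(45))  # mid-latitudes: ~2.0
print(mercator_area_inflation(70))  # Greenland's latitudes: ~8.5
```

No wonder Greenland looks like a continent.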

The other point of interest is where we split the map. It is most often split in the Atlantic; however, we might instead split it in the Pacific. Being from the Antipodes, I find that a much cosier map.

All this forgets about the AuthaGraph map, however, from Japanese architect Hajime Narukawa. It is made by dividing the globe into 96 triangles, mapping those triangles onto a tetrahedron while maintaining their area proportions, and unfolding the tetrahedron into a rectangle. This one is very interesting (here; AuthaGraph, n.d.).


Sam


Wednesday, 29 May 2019

Sugar equivalents in foods

Public Health Collaboration (2016b)
I read a very interesting article the other day on Type 2 Diabetes, the glycaemic index, and obesity. A paper by Unwin, Haslam and Livesey (2016) looked at how many doctors "erroneously assumed table sugar to affect blood glucose far more than the carbohydrate in a baked potato", while on the other side of the comparison, GPs "wrongly assumed that carbohydrate in different foods had a similar effect on blood glucose" (2016, p. 1). The researchers simplified the questions that doctors and their patients need to ask about food down to two: how carby is it, and how sugary is that particular carb?

A much simpler set of questions to ask.

They expressed the glycaemic load of each food as its equivalent in 4 gram teaspoons of sugar, and created tables of regularly eaten carbohydrate foods. Looking at the tables of differences in breads, common foods, breakfast cereals, fruits and a 'healthy breakfast' shows clearly how much simple carbohydrate is in each portion (click on the infographic links here; Public Health Collaboration, 2016a).

Further, check out the portion sizes: some of these are tiny. For example, a cornflakes portion is 30g, containing the equivalent of 8.4 teaspoons of sugar. My husband would easily eat 90g of those for breakfast, which works out at just over 25 teaspoons of sugar. Ouch. Quite an impact on both blood glucose and sugar consumption.
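A minimal sketch of that portion arithmetic (my own illustration, assuming the published teaspoon equivalents scale linearly with portion size):

```python
# Scale a published teaspoons-of-sugar equivalent to a real-world portion.
# Reference figures are from the example above: 30 g of cornflakes ~ 8.4 tsp.

def sugar_teaspoons(portion_g: float, reference_portion_g: float,
                    reference_teaspoons: float) -> float:
    """Linearly scale a teaspoon equivalent from a reference portion."""
    return reference_teaspoons * (portion_g / reference_portion_g)

print(sugar_teaspoons(90, 30, 8.4))  # a 90 g bowl: ~25.2 teaspoons
```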

I would have liked to have seen them examine some of the other starchy vegetables such as corn, squash, beetroot, carrots, and pumpkin. I don't eat these as they give me migraines (see here), so seeing the sugar equivalents in a table would have been helpful for me in better understanding the drivers of the condition.

The paper discusses the likely impact of sugar and high GI foods on Type 2 diabetes, reaching the conclusion that Ancel Keys did the human race a disservice by his focus on low fat, high carbohydrate treatments for diabetes. Other interesting aspects of the article for me were a definition of what is considered to be a low-carbohydrate diet (equivalent to or under 130 grams of carbohydrate per day), and a history of diabetes treatment from the 18th century onwards (Unwin, Haslam & Livesey, 2016).

I am already on a very low-carb diet (under 30g per day), and feel very well for it. Following reading this paper, I don't think I will be altering it any time soon.


Sam


Monday, 27 May 2019

AI and the sky's a-falling 2

Henny Penny; Jacobs (1890, p. 182)
I have written on this topic before (here), but thought I would touch on another couple of issues affected by AI. As the World Economic Forum report says, success in the fourth industrial revolution relies "on the ability of all concerned stakeholders to instigate reform in education and training systems, labour market policies, business approaches to developing skills" (2018, p. vii).

Firstly, AI is already with us (Grant, 2018). We can see it in our smartphones every day with voice recognition; when using GPS to map a route; when verifying our banking; when using online chat. It will just become more seamless, and therefore more pervasive. At work this could mean we spend more time planning wisely, and less time fire-fighting.

Health care and hospitality robots are a long way off yet. The dexterity required to truly cover the range of human movements - required for work in human environments - is nowhere near the standard needed yet. Robots are also phenomenally expensive. Flexibility will increase and cost will be driven down, but we are talking many more years before true commercialisation (Grant, 2018).

However, we are creating an underclass. There are people who are becoming less and less employable, with their skills getting increasingly out of step with what the world of work requires. The World Economic Forum states that "54% of all employees will require significant re- and upskilling. Of these, about 35% are expected to require additional training of up to six months, 9% will require reskilling lasting six to 12 months, while 10% will require additional skills training of more than a year" (2018, p. ix). Approximately 50% of future roles will require STEM qualifications (Grant, 2018). The big roles are predicted to be "Data Analysts and Scientists, Software and Applications Developers, and Ecommerce and Social Media Specialists" (World Economic Forum, 2018, p. viii). If we do want to shift the power to the people, education is the key, and strong and robust science, technology, engineering or mathematics training is essential. Whether that is force-fed in schools, or whether we sow seeds and encourage bite-sized training later, more like apprenticeship block courses, will be up to our educators to choose, country by country. But they will each need to have a strategy.

Lastly, I would like to mention Finland. They are getting something really right in STEM education. Yes, they are pretty much mono-cultural, but we need to look carefully at what they are doing, and see if we can do it too. Finnish teenagers spend fewer hours doing homework than many nations (2.8 hours per week), play more, and have only one set of national qualification exams (World Economic Forum, 21 November 2016). Yet they have a 99% graduation rate (WorldTop20, n.d.), and score better in maths and science than the rest of us.

These are complex social and economic issues. But they are navigable providing we don't get into a mindset of "the sky's a-falling" (Jacobs, 1890, p. 182) over AI, and deal with the actual problems: education, the dependency ratio, declining population, and the fact that AI will take time to evolve.


Sam


Friday, 24 May 2019

AI and the sky's a-falling 1

Jacobs (1890, p. 182)
Gosh, we humans are slow to recognise patterns. Once we all worked at home, doing menial labour on our tiny landholdings or on someone else's land. When the industrial revolution began, everyone was going to be out of work. The opposite happened: we realised that we could all go and work in a factory, and have some resources left over. We started getting ideas above our station.

So we move on to the computer age, and suddenly we were all going to be out of work again. But instead, there was more work, for even more people. Ditto for automation.

Now many people are calling "the sky's a-falling" again (Jacobs, 1890, p. 182), this time about AI. Oh, yes, but it is different this time. It will happen faster. We won't have time to adapt (Rayome, 24 January 2019).

Call me cynical, but I remember the 'paperless' office that was going to revolutionise the workplace. Didn't happen. Still isn't happening (although I am largely paperless myself, very few people or organisations are). I remember how automation was going to swamp us, that all our jobs could be done by programming. Everything would go via an automated call centre. End of the world. No work for anyone. Didn't happen. Still isn't happening.

Instead we have moved to smartphones, and lots of people now build apps. Once terribly complex computer languages and logic have been simplified. Fewer errors happen. We continue to find more and more things for people to do, to engage with, to be challenged by, and to earn a living at.

Yes, some jobs will disappear, but many more will be created. This is utterly unverified, but I read somewhere that once there were something like 200 professions, and now it is more like 200,000 (actually, if any of you have any reliable sources and numbers for this, I would be very interested in hearing!).

There are also some interesting population trends. As soon as we earn over $10 per day, and child mortality falls in line with WHO guidelines, we stop having more than two children (see the Gapminder Foundation for more info). Then we need to consider the dependency ratio (Grant, 2018). This is the number of people who are not in work - retirees, those in education, at-home parents - who must be supported by a shrinking pool of working-age people, as we continue to live much longer than the three years of paid support governments originally intended as a reward for our life-long efforts. With average life expectancy at around 80 years, that is 15 years which governments cover now, rather than three. A major fiscal blowout. In China they call this "4-2-1", meaning that one working child supports their two just-retired parents and their four long-retired grandparents (Goldstein & Goldstein, 2015). There is talk of this becoming "8-4-2-1" as life expectancy continues to increase.
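As a back-of-the-envelope illustration of the dependency-ratio arithmetic (a sketch of my own; the household figures simply mirror the "4-2-1" example above):

```python
# Dependents supported per 100 working-age people - the dependency ratio.

def dependency_ratio(dependents: int, workers: int) -> float:
    """Dependents (retirees, students, at-home parents) per 100 workers."""
    return 100 * dependents / workers

# A "4-2-1" household: one worker, two retired parents, four grandparents.
print(dependency_ratio(dependents=6, workers=1))  # 600.0 per 100 workers
```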

So, I am dubious about claims that the world will end because of AI. There are some economists who agree. Take Rainer Strack, for example. He has an impressive pedigree in HR consulting with the Boston Consulting Group, and he predicts a fairly significant workforce shortage in Germany by 2030 (Strack, 3 December 2014).

Denning (29 October 2014) cites research evidence indicating that new roles are created by private sector organisations which are less than five years old, a pattern which has held roughly constant since the 1980s. Even more interesting is that those established organisations which are "more than five years old destroyed more jobs than they created in all but eight of those years".

So maturing companies automate. New ones hire people. There will not be enough people for the jobs we currently have.

Let's grow some entrepreneurs and worry less about whether the sky's a-falling.


Sam


Wednesday, 22 May 2019

De-stress and Create

Stress is an interesting thing. When stressed, I know I tend to stop looking for new information: instead I rely on systems I have already set up to carry me through, and I tend to worry about the team. So I was very interested to read a short piece from HBR about some tactics for getting creative when we are stressed.

Attempting to relax by doing something different is the first step that Saunders (29 November 2018) offers: doing an activity that you know will allow your mind to wander. I thought of ironing (yeah, right!), swimming, walking or bathing the dog. Things that you can do on auto-pilot, so your thoughts can free-wheel. Keep your hands busy while the boys in the basement get to work (King, 1998).

The next step that she suggests is to get new information (Saunders, 29 November 2018). Your source could be a trip to the local library; getting some research articles; a trip to a local organisation to see how they tackle similar work; a networking event; a short course on the topic; a conference; or collaborating with others. New input should lead to new outputs.

The last point she makes is that we can't rush creativity. Ideas take time to form, so we need to try not to get too attached to a speedy outcome.


Sam


Monday, 20 May 2019

Being Wrong, Part 1

I don't know about you, but I hate being wrong. It is even worse when I have climbed up on my high horse, then have to climb down again, experiencing that warm wash of shame. The more sure I have been about being 'right', the hotter the shame on facing the evidence of my failure.

But if we want to learn, we can't avoid being wrong. Being wrong means that we are learning, because learning changes us and moves us from one place to another. Learning changes our perspective. It shows us that what we think is going on is merely the top fraction of a millimetre, and to really get to grips with things, we will have to dig a lot deeper.

Being wrong is a normal risk with learning anything new. Yes, we run the risk of looking foolish, of being thought less of by our peers, of showing that we don't know it all. But when we consider that we now all pay to be students, that is a pretty silly way to think. Why would we pay to stay the same? Unless we are open to the fact that we don't yet know it all - obviously, or else why are we studying - we aren't welcoming the challenge of new ideas.

To learn, we must move ourselves from a fixed mindset to a growth mindset (Dweck, 2006). To do that, we have to be able to question our own beliefs, to seek new evidence, to analyse and to test new ideas, and to acknowledge that the new ideas may well displace our existing ones.

If they didn't, I would be worried that we had wasted our money.

So why is it that in the workplace, we like to think that the manager has all the 'right' ideas? Why do some organisations create a culture of 'don't ask questions'? That's pretty risky too, I think: and a wrong-headed way to manage.

There is a new book out, called "Questions are the Answer: A Breakthrough Approach to Your Most Vexing Problems at Work and in Life" by Hal Gregersen. It deals with being wrong, but with a view to reframing our approach, and how that strategy is very good for us (Vozza, 21 January 2019). I have read a sample chapter, and am hoping to get a copy so I can find out if it will change me.

I hope so. I will report back later in the year.


Sam


Friday, 17 May 2019

A tiny advance on the Pav

The origins of the Pav continue to be murky (read here and here), but I have a tiny update.

For those who have not read about this before, the origins of the pavlova are contested between New Zealand and Australia, both of whom claim invention. The originator of the pavlova is not Herbert Sachse in Perth in 1935, however, as a great deal of documentary evidence has been found which predates that.

The earliest currently known example of a pavlova recipe - a layered gelatine dessert - is from 1926, in the 5th Australian edition of the Davis Gelatine Company's Davis Dainty Dishes recipe book (Leach, 2009). I contacted Emerita Professor Helen Leach at the University of Otago about the possible publication date of the Davis Dainty Dishes recipe book. She advised that Davis Gelatine didn’t include a month of publication in their recipe books.

So the first pavlova - at present - is a gelatine dessert; not a whipped cream, fruit and meringue confection (Leach, 2009).

However, Professor Leach gave me some additional information (which ties in strongly with the Tracey Tufnail piece in the Vancouver Sun, 21 February 2015). She told me that "The story about the Wellington chef was told to Harry Orsman when he was compiling his Dictionary of New Zealand English. His informant had been a member of the Wellington Ballet Co. in 1926 and attended some of the receptions. These were written up in great detail, e.g. in a Dunedin newspaper on 22 June 1926. But though the flowers decorating the reception rooms were described in detail, there was no mention of any of the food. I know that it wasn’t the done thing to describe food, but if a chef made a presentation of a cake he had devised you might expect it to be mentioned. I used Papers Past very intensively and found no written evidence at all that this presentation took place. What makes it even more problematic is the fact that there are similar hand-me-down stories about a chef in Sydney and another in Melbourne who independently named meringue cakes after her in 1926." (Leach, 9 January 2019).

She went on to tell me that although she has conducted an extensive search of publications and news media, she was unable to find anything relating to a dessert being named in honour of the dancer on the New Zealand leg of the tour. She also felt that this was a fairly significant event, which, given the excitement around Anna Pavlova, would have made its way into the public record (Leach, 9 January 2019).

Keith Money, in his Anna Pavlova biography, wrote that there was “…the creation by a hotel chef of a meringue confection named in her honor…” (Money, 1982, p. 352) on the South Island leg of the tour. While it is possible that Harry Orsman was the source of Keith Money's story, the story told by Tracey Tufnail and the one told by Keith Money are fairly different. I am hoping that there may yet be evidence in - perhaps - a 1926 Christchurch or Timaru newspaper or newsletter which may shed more light.

Watch this space.


Sam

References:
  • Leach, H. M. (9 January 2019). A quick question on the Pavlova. [Personal Correspondence]
  • Leach, H. M. (2009). The Pavlova Story: A slice of New Zealand's culinary history (second printing). Dunedin, New Zealand: Otago University Press. (images from pp. 44-45)
  • Money, K. (1982). Anna Pavlova: Her Life and Art. New York, USA: A. Knopf & Sons
  • Tufnail, T. (21 February 2015). Bitter rivalry and a sweet ending; Pavlova is the quintessential New Zealand dessert - just don't tell Aussies. The Vancouver Sun. http://search.proquest.com.libraryproxy.griffith.edu.au/docview/1657260381?accountid=14543
read more "A tiny advance on the Pav"

Wednesday, 15 May 2019

Rolling back a Windows 10 update

If we can get a Windows update to actually install (read here), sometimes that update needs to be uninstalled. That’s when Ed Bott from TechRepublic rides to the rescue with how to roll back an update in Windows 10.

He notes that the rollback has three parts.

Firstly:
  • Go to Settings | Update & Security | Windows Update,
  • Click View Update History,
  • Then click Uninstall Updates; this will open the old-style Control Panel interface,
  • Select the problematic update, and click Uninstall (NB: Safe Mode might be required if the compatibility problems are really bad).
Secondly, we need to prevent the troublesome update from auto-reinstalling after a restart, which requires a Microsoft software download.
  • Go to https://bit.ly/show-hide-update to download Microsoft's "Show or hide updates" troubleshooter
  • Run the troubleshooter
  • Choose “Hide Updates”, and select the troublesome update.
Thirdly, run the troubleshooter again and choose “Show Updates” to check that the troublesome update is listed as hidden - just to be sure that it is safely parked!
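If you prefer the command line for the uninstall step, Windows also ships wusa.exe, which can remove an installed update by KB number. A hypothetical sketch driven from Python (the KB number is a placeholder, and this is not part of Ed's write-up):

```python
# Remove an installed Windows update by KB number using the documented
# wusa.exe /uninstall /kb:<number> command. Run from an elevated prompt.

import subprocess

kb = "1234567"  # placeholder: substitute the problematic update's KB number

subprocess.run(
    ["wusa.exe", "/uninstall", f"/kb:{kb}", "/quiet", "/norestart"],
    check=True,  # raise CalledProcessError if the uninstall fails
)
```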

Thanks, Ed! Another great fix.


Sam

read more "Rolling back a Windows 10 update"

Monday, 13 May 2019

Choking on a Windows 10 update

Late last year I had a problem with my laptop, which was choking on a Windows 10 update. The update was bigger than the remaining space on my drive, so kept crashing the machine. I would start the laptop up, and providing I stayed off the internet, it would keep working. As soon as I was connected, it would crash: maybe every two minutes, maybe every 30 seconds.

Worse, I was away, and my laptop was my only means of doing work.

It took me a while to realise that having an internet connection had anything to do with what was going on though. Then I had the frustrating job of hoping that I could log on for long enough to be able to turn off the WiFi.

Once that was done, then I had to search for what was actually going on. One of the key diagnostic tools I used was the freeware Mini-Update Tool (which you can download from MajorGeeks.com here*). I read about this software on TechRepublic, in a column by Michael Kaelin (11 December 2018).

Why? Well, in Michael's words, "Microsoft has gone out of its way to make an update process for Windows 10 that is as automated, as seamless, and as painless for users as is possible. For the most part, they have been successful at this endeavor. However, occasionally, something goes wrong, an update fails, and users get frustrated, and even angry" (11 December 2018). I could testify to that. I was ready to throw in the towel and buy a new laptop.

He continued, the Mini-Update Tool "can reveal more detailed information about the update process and provide the tools necessary to completely self-manage Windows 10 updates".

Well. I installed the software, connected to the internet, and found the problem. The Windows update was bigger than the remaining space on my HDD. I had a ridiculous 87GB - yes, eighty-seven gigabyte - download choking the system.
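In hindsight, the mismatch is trivial to check once you know the download size. A minimal sketch (my own, after the fact):

```python
# Compare free disk space against a pending download; the 87 GB figure
# is the download that was choking my laptop.

import shutil

free_bytes = shutil.disk_usage("C:\\").free
update_bytes = 87 * 1024**3  # the 87 GB update

if update_bytes > free_bytes:
    print(f"Update needs {update_bytes / 1024**3:.0f} GB but only "
          f"{free_bytes / 1024**3:.1f} GB is free - it cannot install.")
```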

So I changed a few settings, and now Windows can no longer update my laptop (see how to do that here). From now on, I will choose when - and what - to update.

Back to old school computing.


Sam


* To download the Mini-Update Tool, click the "Download Now" part of the breadcrumb trail on the MajorGeeks.com page: MajorGeeks.Com » System Tools » Windows Update » Windows Update MiniTool 20.12.2016 » Download Now
read more "Choking on a Windows 10 update"

Friday, 10 May 2019

Managing research risk with HACCP

Part of undertaking research is to acknowledge our limiters (our potential biases) and our delimiters (our scoping to make our question answerable).

Whenever we identify a limiter or a delimiter, we then need to DO something about it. We need to manage the risk we have identified. As in risk management, we identify our problem, then either eliminate, substitute, minimise, or isolate the risk.

Hazard Analysis and Critical Control Points (HACCP) is one of the best known risk management processes on the planet. Although it was designed to manage food safety risk, I think most of its steps are equally applicable to research risk management.

Below are six of the seven HACCP principles, as I have adapted them for research projects:
  1. Conduct a careful risk analysis. Read about potential biases. Understand how each could affect our research project.
  2. Determine the key biases. Discount the least likely biases. Decide which biases are most likely to be factors in our research project.
  3. Establish criteria to spot those biases as they occur.
  4. Create alternative actions to limit, eliminate, substitute or isolate those biases.
  5. Set verification actions to be sure that each bias is not present, such as cross-questioning, control groups, split data set and cross-tabbing.
  6. Write up all biases and corrective actions in our methodology.
A limiter is simply a risk. We just need to manage it like a risk.
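For those who like to keep such things in code, here is a minimal sketch of a 'bias register' that loosely mirrors steps 2 to 6 above (the structure and field names are my own, not part of HACCP):

```python
# A toy research "bias register", loosely mirroring the adapted steps above.

from dataclasses import dataclass

@dataclass
class BiasRisk:
    name: str                      # step 2: a key bias, e.g. selection bias
    detection_criteria: str        # step 3: how we spot it as it occurs
    corrective_actions: list[str]  # step 4: limit / eliminate / isolate it
    verification: str              # step 5: e.g. control group, cross-tabbing
    written_up: bool = False       # step 6: documented in the methodology?

register = [
    BiasRisk(
        name="selection bias",
        detection_criteria="sample demographics drift from the population frame",
        corrective_actions=["stratified sampling", "recruit via multiple channels"],
        verification="cross-tab responses against census benchmarks",
    ),
]
```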


Sam

read more "Managing research risk with HACCP"

Wednesday, 8 May 2019

Qualitative coding using software

Thesis Whisperer Inger Mewburn posted last year, "Are the robots coming for our (research) jobs?", discussing the use of support systems for research, such as Grammarly, NVivo and Interpris. Given a trial version of Interpris (from QSR, the makers of NVivo), Mewburn was pleasantly surprised at how well the software analysed qualitative themes from a raw data spreadsheet: as she put it, "Interpris had done in less than a minute what would take me at least a day – maybe two" (25 April 2018).

What Interpris had done was to look for likely codes. It found far too many, and so plenty of weeding and reorganising had to be done. But because of the in-built AI, it aims to get to know us and to add our complexity to the suggested codes as time goes on. The best thing is that it could - should - refine without our biases. Nothing like isolating a limiter!
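Interpris's internals are proprietary, but the 'likely codes' idea can be illustrated with a toy first pass that surfaces frequent terms as candidate codes (my own sketch, emphatically not how Interpris actually works):

```python
# Toy first-pass auto-coding: surface the most frequent non-trivial words
# in free-text responses as candidate codes for a human to weed and refine.

from collections import Counter
import re

STOPWORDS = {"the", "and", "a", "to", "of", "it", "was", "were", "i", "in", "is"}

def candidate_codes(responses: list[str], top_n: int = 10) -> list[str]:
    words = (w for response in responses
               for w in re.findall(r"[a-z']+", response.lower())
               if w not in STOPWORDS)
    return [word for word, _ in Counter(words).most_common(top_n)]

print(candidate_codes(["The onboarding was confusing",
                       "Onboarding docs were confusing and long"]))
# ['onboarding', 'confusing', 'docs', 'long']
```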

In my view, anything that helps us do our work more easily and more consistently - and thus with more replicability - has to be a bonus. I disagree with the purist view that the thinking, structure and planning must all be done before we get to the analysis, or else we get that old management "systems theory" result: garbage in >> transformation >> garbage out (Von Bertalanffy, 1968). In qualitative research this purist view is the wrong approach anyway: we are supposed to start our coding once we see our data. We can't set out to analyse with too much structure, otherwise it adds to our biases.

While I love Johnny Saldana's (2009) book, The coding manual for qualitative researchers, I have no problem using a machine leg-up to start to pull out my codes. Interpris's cut would be the first draft though, to be built on, thought about, verified, tested: all those things that researchers do. It should not be taken as an end-point, but as one interpretation of the data.


You can see what Interpris can do here.

Interpris sounds very exciting. I can't wait to see it in action.


Sam


Monday, 6 May 2019

Entrepreneurs and uncertainty

I have written before (here) about the work of Sarasvathy, whose research illustrated that entrepreneurs use the means to create the ends. One of the reasons for this different entrepreneurial approach is the uncertain and ambiguous environment within which entrepreneurs work (2008).

Like researchers, entrepreneurs are making something up as they go along: something that has never existed before, which has no boundaries in three dimensions. There is no handbook, no guide that will work. Every choice has a million - or more - possible answers, with probably half of them viable solutions. There is no time to stop and check each one: we simply weigh up each as we go, and trust that the boys in the basement (King, 1998) will keep us roughly on track. In such an unstructured environment, it is very hard to know whether an entrepreneur's acts are rational or not.

In her research, Professor Sarasvathy found that entrepreneurs face many areas of uncertainty all at once, which more traditional corporates don't encounter. She used three theories to describe these uncertainties: Knightian uncertainty, goal ambiguity and isotropy (2008).

Sarasvathy details these theories as follows (2008):
  • Knightian uncertainty: "the probability distributions and even outcomes are unknown, making it impossible to calculate probabilities or expected consequences" (p. 70). As a result, entrepreneurs tend to look at what they can afford to lose, rather than gain, when evaluating risk and opportunity cost. They also "[mis-]trust predictions. Instead they work to ‘confirm by experience’" (p. 92)
  • Goal ambiguity: "preferences are neither given nor well ordered" (p. 70), and, "although goals at the highest levels might be clear", "their operationalizations at lower levels may be highly ambiguous" (p. 113)
  • Isotropy: there are "elements of the environment" (p. 70), with many "decisions and actions involving uncertain future consequences, [where] it is not always clear ex ante which pieces of information are worth paying attention to and which not" (p. 69)
These uncertainties are very interesting, and feel very applicable. Entrepreneurs operate in a sea of uncertainty, ambiguity and amorphous inputs and outputs.
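To make the Knightian point concrete, here is a toy contrast (my own framing, not Sarasvathy's) between the corporate expected-value rule, which needs probabilities, and the effectual affordable-loss rule, which doesn't:

```python
# Two decision rules. Expected value requires known probabilities, which
# Knightian uncertainty denies us; affordable loss only caps the downside.

def expected_value(outcomes: list[tuple[float, float]]) -> float:
    """Corporate-style: sum of probability * payoff over known outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

def affordable_loss_ok(stake: float, affordable: float) -> bool:
    """Effectual-style: ignore unknowable odds; just cap what we can lose."""
    return stake <= affordable

print(expected_value([(0.5, 100_000), (0.5, -20_000)]))    # 40000.0
print(affordable_loss_ok(stake=5_000, affordable=10_000))  # True
```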

Amazing that anything ever gets done at all.


Sam

References:
  • King, S. (1998). Bag of Bones. USA: Scribner
  • Sarasvathy, S. D. (2008). Effectuation: Elements of Entrepreneurial Expertise (New Horizons in Entrepreneurship). Cheltenham, UK: Edward Elgar
read more "Entrepreneurs and uncertainty"

Friday, 3 May 2019

Entrepreneurs: the means prescribe the ends

Knowing several entrepreneurs, I don't think it is original ideas that entrepreneurs run on. From what I have seen, it is more often taking an existing idea from another discipline, then leveraging it for all it is worth in their sector. Success is measured pragmatically: by dollars in the bank and customer word of mouth.

What is really interesting is that - according to research by Professor Sarasvathy - entrepreneurs don't plan much. Instead they ensure there is a lot of doing, and a lot of sales: dollars in the bank. Sarasvathy (2008) calls this phenomenon effectuation. She studied 45 entrepreneurs and corporate managers to see whether there was a difference in how they operated. There was.

To define effectuation, I like Sarasvathy's wording: effectuation "take[s] a set of means as [a] given and focus[es] on selecting between possible effects that can be created with that set of means” (2001, p. 245). Burkeman (2012) interviewed Sarasvathy and was told that the entrepreneurs in her research "behaved more like ordinary, time-pressed home cooks, checking what was in the fridge and the cupboards, then figuring out, on the fly, what they could make and how". The means prescribe the ends. This is what most entrepreneurs do.

Sarasvathy explains that causal processes start from the ends, and work back to the means: causation "take[s] a particular effect as [a] given and focus[es] on selecting between means to create that effect" (Sarasvathy, 2001, p. 245). Causation is what most corporates do.

We can think of causation as a decision-making problem, where the thought processes used are choice-based. Effectuation is a design problem, which helps us to make something new. As Sarasvathy says, "Causal strategies are useful when the future is predictable, goals are clear, and the environment is independent of our actions; effectual strategies are useful when the future is unpredictable, goals are unclear and the environment is driven by human action. The causal actor begins with an effect he wants to create and asks, ‘What should I do to achieve this particular effect?’ The effectuator begins with her means and asks, ‘What can I do with these means?’ And then again, ‘What else can I do with them?’" (2008, p. xii)

An interesting delineation.



Sam

References:
  • Burkeman, O. (2012). The Antidote: Happiness for People Who Can't Stand Positive Thinking. Melbourne, Australia: The Text Publishing Company
  • Sarasvathy, S. D. (2008). Effectuation: Elements of Entrepreneurial Expertise (New Horizons in Entrepreneurship). Cheltenham, UK: Edward Elgar
  • Sarasvathy, S. D. (2001). Causation and effectuation: Toward a theoretical shift from economic inevitability to entrepreneurial contingency. Academy of Management Review, 26(2), 243–63. https://doi.org/10.5465/amr.2001.4378020
read more "Entrepreneurs: the means prescribe the ends"

Wednesday, 1 May 2019

Feel the procrastination and do it anyway

Getting past procrastination is a key factor for good time management. I have written about procrastination before (here, here, here and here), but the thief of time - procrastination - is always worth another look.

It is human nature to procrastinate. It has been defined as "an irrational tendency to delay tasks that should be completed" (Flett, Blankstein, Hewitt, & Koledin, 1992, p. 85). Just consider all those citizens of Pompeii who saw the volcano getting more active, felt the ground rumble, and decided that there would be plenty of time to move. A great example of irrationality.

Procrastination is not a thing on its own. It is the result of a number of drivers, such as fear of failure, task 'aversiveness', disorganisation, and high social standards/expectations. Fear of failure links to perfectionism. High social standards/expectations can lead to rebelliousness (Flett, Blankstein, Hewitt, & Koledin, 1992; Lay, 1987). There are optimistic and pessimistic procrastinators: the "she'll be right" and the "can't be bothered" (Lay, 1987).

Fascinating. Of course, there are many self-help books designed to overcome procrastination. When we consider that the self-help industry was worth $11 billion in 2008 (Lindner, 15 January 2009), I bet it is a whole heap bigger now! Plenty of books to choose from, and still no end in sight to procrastination.

What those books tend to tell us is that we just have to get into the right mood to take action. But, as Burkeman (2012) says, "feeling like acting and actually acting are two different things". We use inaccurate language: we aren't unable to write, we are unable to feel like writing. We shouldn't wait to feel like doing anything: instead we should just "note the procrastinatory feelings, and act anyway."

Have a think about this from Burkeman (2012):
[...] the daily rituals and working routines of prolific authors and artists [...] tend to emphasise the mechanics of the working process, focusing not on generating the right mood, but on accomplishing certain physical actions, regardless of mood. Anthony Trollope wrote for three hours each morning, before leaving to go to his job as an executive at the post office; if he finished a novel within a three-hour period, he simply moved on to the next. (He wrote forty-seven novels over the course of his life.) The routines of almost all famous writers, from Charles Darwin to John Grisham, similarly emphasise specific starting times, or number of hours worked, or words written. Such rituals provide a structure to work in, whether or not the feeling of motivation or inspiration happens to be present. They let people work alongside negative or positive emotions, instead of getting distracted by the effort of cultivating only positive ones. ‘Inspiration is for amateurs,’ the artist Chuck Close once memorably observed. ‘The rest of us just show up and get to work.’
I like this approach. Add structure. Note our tendency to procrastinate, and get on with the task anyway.


Sam
