
Showing posts with label test instruments.

Friday, 9 May 2025

Making sense of testing

We use career assessments to help our clients identify their unique characteristics. Each assessment is designed to measure different components, thus - with appropriate interpretation - assisting our clients to find career options which match their particular attributes, values, and skills (Osborn & Zunker, 2016).

While tests can assist clients' decision-making processes (Whitfield et al., 2009), to be effective those tests need to be reliable and valid (Walsh & Betz, 2000). If a test is valid, it actually measures what it claims to measure: it does what it says on the tin (Heale & Twycross, 2015). There are three key types of validity: content validity (whether the test covers all aspects of the construct being measured); construct validity (whether it measures the intended construct rather than something else - e.g. a test of job search skills might inadvertently be evaluating problem-solving skills); and criterion-related validity (how well the test relates to other measures of the same variable). Criterion-related validity is assessed through 'convergent' validity, where the test correlates strongly with similar tests; 'divergent' validity, where it shows little correlation with tests of different constructs; and 'predictive' validity, where the test predicts related outcomes - e.g. scoring as task-oriented should predict being a completer/finisher (Heale & Twycross, 2015).
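
To make the convergent/divergent distinction concrete, here is a minimal sketch (in Python, with invented scores rather than data from any cited study) of how we might check whether a new instrument correlates strongly with an established test of the same construct, and only weakly with a test of a different construct:

```python
# Illustrative only: all scores below are invented for the example.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

new_test       = [42, 55, 61, 48, 70, 66, 52, 59]  # our instrument
similar_test   = [40, 57, 63, 45, 72, 64, 50, 61]  # established test of the same construct
unrelated_test = [12, 30, 18, 25, 14, 28, 22, 19]  # test of a different construct

print(f"convergent validity (want high): r = {pearson(new_test, similar_test):.2f}")
print(f"divergent validity (want low):   r = {pearson(new_test, unrelated_test):.2f}")
```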

Tests also need to have been normed for the population group our client affiliates to. That means that, when assessments are created, researchers have run a number of sample tests (usually around 300; Steve Evans, personal communication, 13 September 2021) on each population group, seeking a normal distribution in the test results across cultural, ethnic, gender, political and socio-economic group factors (Hansen, 2003; Osborn & Zunker, 2016). We can see that norming tests is going to be an expensive business, with around 300 administrations needed for each norm group.
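
As an illustration of what a norm group is actually used for, here is a small sketch (simulated scores, assuming a roughly normal distribution; not taken from any real instrument) showing how a client's raw score might be converted to a z-score and percentile against a norm group of around 300:

```python
# Illustrative only: the norm-group scores are simulated, not real test data.
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
norm_group = [random.gauss(50, 10) for _ in range(300)]  # ~300 sample administrations

client_raw = 63
z = (client_raw - mean(norm_group)) / stdev(norm_group)   # position within the norm group
percentile = NormalDist().cdf(z) * 100                    # assuming approximate normality

print(f"z-score: {z:.2f}, percentile: {percentile:.0f}")
```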

We also need consistent test-retest reliability: the test should yield much the same result each time it is taken (Heale & Twycross, 2015). If our client does a test in March, we don't want to see them obtain a completely different result when they repeat the test in July (one of the main bug-bears of the MBTI; Mastrangelo, 2001). While it’s not possible to perfectly assess each career instrument, we can estimate their replicability (Heale & Twycross, 2015) through “internal [...] and test-retest reliability” (Osborn & Zunker, 2016, p. 37).
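
A rough sketch of how that test-retest estimate works in practice - with invented March and July scores, and assuming Python 3.10 or later for statistics.correlation - might look like this:

```python
# Illustrative only: invented scores for the same eight clients, tested twice.
from statistics import correlation  # Python 3.10+

march_scores = [48, 62, 55, 71, 39, 58, 66, 50]  # first administration
july_scores  = [51, 60, 57, 69, 42, 55, 68, 47]  # retest, same clients

r = correlation(march_scores, july_scores)
print(f"test-retest reliability estimate: r = {r:.2f}")  # closer to 1.0 = more stable
```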

And, while we might have all the reliability, validity and representative norm groups we could wish for, we might still find that our client does not suit the test we propose. The client may complete the test and end up with results which make no sense. For example, each time I complete a RIASEC test, I get a different score. Over the years, I think I have seen a pattern: for those of us with very generalist skills, the RIASEC test may lose its test-retest reliability. I offer RIASEC here as one example: it is not the only one I have noticed. I have had clients who achieve poor results from HBDI, from MBTI, and from DiSC. Not all tests suit all people.

We must take all quantitative tests with a pinch of salt :-)


Sam

References:

Hansen, S. S. (2003). Career counselors as advocates and change agents for equality. The Career Development Quarterly, 52(1), 43-53. https://doi.org/10.1002/j.2161-0045.2003.tb00626.x

Heale, R., & Twycross, A. (2015). Validity and reliability in quantitative studies. Evidence Based Nursing, 18(3), 66-67. https://doi.org/10.1136/eb-2015-102129

Herr, E. A. (2001). Chapter 2: Career Assessment: Perspectives on trends and issues. In J. T. Kapes, E. A. Whitfield (Eds.), A counselor's guide to career assessment instruments (4th ed., pp. 15-26). National Career Development Association.

Mastrangelo, P. M. (2001). [251] Myers-Briggs Type Indicator [Form M]. In B. S. Plake & J. C. Impara (Eds.), The Fourteenth Mental Measurements Yearbook (pp. 816-820). Buros Center for Testing.

Osborn, D. S., & Zunker, V. G. (2016). Using Assessment Results for Career Development (9th ed.). Cengage Learning.

Walsh, W. B., & Betz, N. E. (2000). Tests and Assessment (4th ed.). Prentice Hall.

Whitfield, E. A., Feller, R. W., & Wood, C. (Eds.). (2009). A counselor’s guide to career assessment instruments (5th ed., pp. 13–25). National Career Development Association.

read more "Making sense of testing"

Wednesday, 2 April 2025

Using career assessments from other countries

Creating, testing, and norming career assessment instruments is quite a process (Stuart, 2004); and living in the Antipodes, with a population of only 5 million, it is also a costly one. Pretty much the only quantitative tools we have in Aotearoa are tests which have been developed internationally. So, if we career practitioners in New Zealand want to give our clients evidence-based assessments, we have to rely on those which have been developed elsewhere. But are those international assessments worth using, from a cultural appropriateness point of view, or should we avoid quantitative testing altogether?

Due to our geographic isolation, rural roots, and confluence of Pākehā, Māori and Pasifika ethnicities, New Zealand's multicultural society is unique. Māori and Pasifika cultures have tended to focus more on collective well-being, interdependence, and respect for the environment, as opposed to the Western individualism arising from the Pākehā settlers (Harmsworth, 2005; while noting that all three cultures are moving closer together). The social norms, leadership styles, and personal interactions of Aotearoa mean that we prize modesty, practicality, and resilience (Harmsworth, 2005). Due to Māori and Pasifika cultural influence, New Zealanders may have more community-oriented career goals, on average, than people of other nations. In fact, the Johns Hopkins Institute collected and cross-tabulated UN volunteer data, which showed that New Zealand has the highest rate of volunteering - by a third - even though our not-for-profit sector is smaller than that of some other nations, such as Belgium, Australia and Israel (GVMP, 2011). Volunteering in New Zealand appears more culturally endemic than in Australia: apparently around 50% of Kiwis volunteer, versus 5% of Australians (SNZ, 2006; VNZ, 2024).

Our differing values may mean that international test validity may not translate to test validity here in Aotearoa. But why should we use quantitative assessments anyway? Well, there are good reasons. It seems that clients who complete assessment instruments have a deeper understanding of their own interests, values and strengths (Heppner et al., 1994). In addition, clients tend to make more informed career decisions, and seem to experience less career indecision as a result of testing (Heppner et al., 1994). Even better, clients who took assessments as part of seeing a career practitioner experienced more positive career outcomes, including better career goal alignment, increased job satisfaction, and improved career advancement (Heppner et al., 1994).

It appears that knowing ourselves may assist our career decision-making, help us further our careers, and make us happier in our work. So, as long as we don't put too much emphasis on the tests (don't treat them as gospel), they can give our clients some clarity.

Bonus.



Sam

References:

GVMP. (2011). The Global Volunteer Measurement Project. http://volunteermeasurement.org/

Heppner, M. J., O'Brien, K. M., Hinkelman, J. M., & Humphrey, C. F. (1994). Shifting the paradigm: The use of creativity in career counseling. Journal of Career Development, 21(2), 77-86. https://doi.org/10.1177/089484539402100202

SNZ. (2006). Finding and Keeping Volunteers [report]. Sport New Zealand [formerly SPARC]. http://www.sparc.org.nz/filedownload?id=850d18af-002f-40b7-b989-5a99e5b40f82

Stuart, B. (2004). Twelve Practical Suggestions for Achieving Multicultural Competence. Professional Psychology: Research and Practice, 35(1), 3–9. https://doi.org/10.1037/0735-7028.35.1.3

VNZ. (2024). State of Volunteering Report 2024 [report]. Tuao Aotearoa | Volunteering New Zealand. https://www.volunteeringnz.org.nz/wp-content/uploads/f_SOV-report_2024_web.pdf

read more "Using career assessments from other countries"

Wednesday, 5 February 2025

Evaluating employee strengths

In my reading last year, I encountered a meta-analysis (of sorts) by Miglianico et al. (2020), in which the researchers evaluated 27 value or strength instrument studies published between 2010 and 2019, using a range of methods: cross-sectional; diary; experimental; and quasi-experimental.

Out of the review of this range of studies, and supported by the literature, the researchers proposed a five-step flow diagram for workplaces to identify and develop employee strengths. The five steps are shown in the image accompanying this post, and the steps themselves - as outlined by Miglianico et al. (2020) - are as follows:

  1. Work with the employee and "educate the[m...] about the strengths approach and the proposed [career] intervention". Employees must understand "and appreciate the value of the approach, to understand the steps involved in the process, and to be actively and genuinely involved in the intervention (Clifton and Harter 2003). The approach’s origins, advantages and limitations, as well as the overall process, must therefore be presented, and all questions must be answered (Dubreuil and Forest 2017). This step helps reduce negativity bias, the natural tendency of humans to give more attention to negative than positive information (Ito et al. 1998), and fully engage participants in the intervention from the start" (Miglianico et al., 2020, p. 757).
  2. Next we "identify the person’s strengths" via "a psychometric instrument (e.g., StrengthsFinder, VIA-Survey, StrengthProfile), or in a less restrictive way by observing oneself (e.g., identifying activities that involve performance, energy, authenticity and flow; Biswas-Diener et al. 2011; Linley 2008; Linley and Burns 2010), or by collecting feedback from peers". Using a range "of different methods (e.g., psychometric instrument and feedback from peers) can yield a more accurate and complete picture of an individual’s strengths" (pp. 757-758).
  3. Following that, we assist the employee to absorb the identified strengths and integrate them into their identity. Allowing time for employees "to fully grasp and assimilate this new information, better understand the reasons for [their] actions and observe [their] behavior in light of personal strengths. This new conceptualization of self can then be integrated into the identity before planning the next steps (Clifton and Harter 2003). It can be facilitated by appropriation exercises, such as specific questions linking strengths to previous successes (Dubreuil et al. 2016), feedback analysis (Roberts et al. 2005b), and self-portrayal exercises (Forest et al. 2012), in order to help the individual gain a deeper awareness of [their] strengths" (p. 758)
  4. Once complete, we next put the ideas into action, in two parts. To begin, the employee "decides the specific changes [they want] to put in place to make better use of [their] personal strengths. The individual then implements the intended transformations. To help workers move from theory to action, strengths must be invested in specific individual, group, or organizational goals and initiatives (e.g., personal objectives, team projects, new tasks and responsibilities, complementary partnerships, etc.), and their application must be monitored or closely followed by managers, peers or coaches, who can provide support and encourage progress (Linley 2008). In the long term, it is important that the person always remain careful to avoid the overuse of strengths, and rather aims to use the right strength, to the right amount, and at the right time" (p. 758).
  5. Lastly we review. "[R]esults can be evaluated subjectively through the individual’s appreciation of the progress made (in terms of strengths awareness and use, goal achievement, overall well-being, etc.), or objectively through changes in various variables that were measured prior to the intervention: wellbeing, job satisfaction, motivation, work engagement, or job performance. A measure of the impact of the intervention can then make it possible to ensure the effectiveness of the procedure and allow for readjustment if necessary" (p. 758).

This is a very handy process outline. Because it is research-based, we can probably rely on this meeting the needs of both individuals and the organisation. I think it would be relatively easy to implement; and I suspect it would also be easy to monitor, and to tweak. 

A simple tool for organisational growth, delivered through individual development.


Sam

Reference:

Miglianico, M., Dubreuil, P., Miquelon, P., Bakker, A. B., & Martin-Krumm, C. (2020). Strength use in the workplace: A literature review. Journal of Happiness Studies, 21, 737-764. https://doi.org/10.1007/s10902-019-00095-w

read more "Evaluating employee strengths"

Friday, 6 October 2023

Exploring the NEO Personality Inventory-3 (UK Edition)

Our longer term characteristics - our personality - can be “defined as the relatively enduring patterns of thoughts, feelings, and behaviours that distinguish individuals from one another” (Roberts & Mroczek, 2008, p. 31). We can also consider personality as how our natural tendencies and inclinations differ from others within our own society, or as an “enduring set of Traits and Styles" where we exhibit certain "characteristics" (Bergner, 2020, p. 4).

Commonly used in the study and research of personality, the ‘Big Five’ or Five Factor Model (FFM) is made up of five broad personality dimensions: extraversion; agreeableness; conscientiousness; neuroticism; and openness to experience. The FFM developed from early research looking at how trait theory relates to individuals' temperament and behaviour (de Raad & Mlačič, 2015); research into the model has since covered many populations and cultures, and the FFM appears “to be the most widely accepted theory of personality today” (Lim, 2020). The FFM dimensions are structured into instruments which measure how individuals think, feel, and behave, which collectively aids our understanding of personality difference (de Raad & Mlačič, 2015).

One of the instruments designed to test the FFM is the NEO Personality Inventory, or NEO PI. A number of research “studies in many different settings have verified the overall factor structure and construct validity of the Big Five [model...], based on many different demographic and cultural characteristics of individuals” as participants (Lounsbury et al., 2005, p. 709). The inventory was originally created in 1978 for use with adults, but it soon became clear that college students would also benefit from its use, although they would require separate norms (McCrae et al., 2010). Later studies using samples as young as 10 years old showed that the revised version, the NEO-PI-R, could be used with younger respondents, but some items were difficult for them to understand: high school students - instructed to leave blank any items they did not understand - found 30 of the 240 test items difficult (McCrae et al., 2010). Rewording those items in more current, colloquial language - although primarily intended for adolescents - also improved the test for adult test-takers (McCrae et al., 2010). The latest version is the NEO-PI-3 (Lounsbury et al., 2005): a three-level (item, facet, domain) self-report instrument consisting of 240 items plus a validity question, organised into 30 facets, which test the five domains of “Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness” (Vassend & Skrondal, 2011, p. 1301). Individuals rate the 240 items on a 5-point Likert scale ranging from strongly disagree to strongly agree (Hattrup & Smith, 2021; Hey, 2022).
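
To illustrate the three-level structure described above, here is a small sketch of item-to-facet-to-domain scoring. It assumes the commonly described layout of eight items per facet and six facets per domain; the real item assignments, reverse-keying and norms are proprietary, so the responses and groupings here are illustrative only:

```python
# Illustrative scoring skeleton; not the actual NEO-PI-3 scoring key.
DOMAINS = ["Neuroticism", "Extraversion", "Openness", "Agreeableness", "Conscientiousness"]
FACETS_PER_DOMAIN = 6
ITEMS_PER_FACET = 8  # 5 domains x 6 facets x 8 items = 240 items

def score_profile(responses):
    """responses: 240 Likert ratings (1-5), assumed already keyed and in facet order."""
    assert len(responses) == len(DOMAINS) * FACETS_PER_DOMAIN * ITEMS_PER_FACET
    profile, i = {}, 0
    for domain in DOMAINS:
        facet_scores = []
        for _ in range(FACETS_PER_DOMAIN):
            facet_scores.append(sum(responses[i:i + ITEMS_PER_FACET]))
            i += ITEMS_PER_FACET
        profile[domain] = {"facets": facet_scores, "domain_total": sum(facet_scores)}
    return profile

# e.g. a flat, all-neutral (3) response set:
print(score_profile([3] * 240)["Extraversion"])
```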

The UK version of the NEO-PI-3 uses language and normative data more appropriate for British individuals aged 16 and up, although it should be noted that norm data is not available for those under 18 (Hattrup & Smith, 2021). The UK edition shows reliability consistent with the US version, with retest reliability supporting the conclusion that it measures stable traits (Hattrup & Smith, 2021). Like the US version, the UK NEO-PI-3 can be administered and scored either online or on paper, remotely or in person. While the test itself may take only 30-40 minutes, it is suggested that an hour be allowed, so there is time to brief the client and for the client to make considered responses (Hattrup & Smith, 2021).

Research and testing from multiple sources indicate that the NEO-PI-3 is appropriate for many ages and stages in career development. But gender options are binary only (Kluck, 2014), which ignores - invalidates - those who don’t identify this way. In addition, like the US samples, the UK samples are homogeneous, with over 90% of participants identifying as Caucasian (Hattrup & Smith, 2021).

While it’s noted that UK norms resemble US data (Hattrup & Smith, 2021), it cannot be assumed that this necessarily translates to Aotearoa's super-diversity context (Chen, 2015). Practitioners must consider collective and individual culture (Laher, 2013), which is particularly relevant when working with Māori and Pasifika kaimahi and ākonga. Further, given that Kiwis like to get on with others, candidates may feel compelled to conform with societal 'expectations' when answering (Kumar, 2019). Some participants may also fear negative ramifications if vulnerabilities or ‘flaws’ are exposed. However, there is a “Problems in Living Checklist” at the end of each report (Costa & McCrae, 2010) which can be helpful in allaying client concerns.

Whether we decide to use the test or not, it is useful to explore the issues.


Alex

References:

Bergner, R. M. (2020). What is personality? Two myths and a definition. New Ideas in Psychology, 57, 100759, 1-7. https://doi.org/10.1016/j.newideapsych.2019.100759

Chen, M. (2015). Superdiversity Stocktake: Implications for business, government and New Zealand. Superdiversity Centre for Law, Policy and Business. https://www.superdiversity.org/wp-content/uploads/Superdiversity-Stocktake-Section1.pdf

Costa, P. T., & McCrae, R. R. (2010). NEO™ Personality Inventory-3: Interpretive Report. Australian Council for Education. https://www.acer.org/files/NEO_PI-3_Interp_Rpt_Sample_Report.pdf

de Raad, B., & Mlačič, B. (2015). Big Five Factor Model, Theory and Structure. In J. Wright, C. Fleck (Eds.), International Encyclopaedia of Social & Behavioural Sciences (2nd ed., Vol. 2, pp. 559-566). Elsevier. https://doi.org/10.1016/B978-0-08-097086-8.25066-6

Hattrup, K., & Smith, J. V. (2021). [101] NEO Personality lnventory-3 (UK Edition). In J. F. Carlson, K. F. Geisinger, & J. L. Jonson (Eds.) The Twenty First Mental Measurements Yearbook (pp. 450-455). The Buros Institute of Mental Measurements.

Hey, L. (2022). Presenting a new NEO-PI-3 International Senior Manager Norm for a post-covid-19 world. Hogrefe Ltd. https://www.hogrefe.com/uk/index.php?eID=dumpFile&t=f&f=10141&token=8296bcf0af59cbf9aa92753a20dce8f92057ad8f

Kluck, A. S. (2014). [116] NEO Personality lnventory-3. In J. F. Carlson, K. F. Geisinger, & J. L. Jonson (Eds.), The Nineteenth Mental Measurements Yearbook (pp. 477-483). The Buros Institute of Mental Measurements.

Laher, S. (2013). Understanding the Five-Factor Model and Five-Factor Theory through a South African cultural lens. South African Journal of Psychology, 43(2), 208–221. https://doi.org/10.1177/0081246313483522

Lim, A. G. Y. (2020). What Are the Big 5 Personality Traits? Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Simply Psychology. https://www.simplypsychology.org/big-five-personality.html

Lounsbury, J. W., Saudargas, R. A., Gibson, L. W., & Leong, F. T. (2005). An Investigation of Broad and Narrow Personality Traits in Relation to General and Domain-Specific Life Satisfaction of College Students. Research in Higher Education, 46(6), 707-729. https://doi.org/10.1007/s11162-004-4140-6

McCrae, R. R., Costa, P. T., & Martin, T. A. (2010). The NEO-PI-3: A More Readable Revised NEO Personality Inventory. Journal of Personality Assessment, 84(3), 261-270. https://doi.org/10.1207/s15327752jpa8403_05

Roberts, B. W., & Mroczek, D. (2008). Personality Trait Change in Adulthood. Current Directions in Psychological Science, 17(1), 31-35. https://doi.org/10.1111/j.1467-8721.2008.00543.x

Vassend, O., & Skrondal, A. (2011). The NEO personality inventory revised (NEO-PI-R): Exploring the measurement structure and variants of the five-factor model. Personality and Individual Differences, 50(8), 1300-1304. https://doi.org/10.1016/j.paid.2011.03.002

* Alex Howe has kindly prepared much of the material for this post

read more "Exploring the NEO Personality Inventory-3 (UK Edition)"

Friday, 31 March 2023

A critique of MBTI

Career assessments are part of a holistic career counselling process. Assessments have a role as a starting point for clients to “identify, understand and appreciate the unique aspects that make them up as an individual” (Osborn & Zunker, 2016, p. xii). Tests can catalogue client aptitudes, abilities, interests, values, personality and career decision-making skills, as an integral component in our career practitioner toolbox. Further, career assessment tools are often standardised quantitative instruments, which use norms - or “typical scores” - to enable comparisons of client results with a statistical representation of a similar population (Osborn & Zunker, 2016).

As part of our process of seeking appropriate instruments to use with a client, not only do we need to consider how reliable and valid an assessment is, and what norms are used (Osborn & Zunker, 2016), but also how culture and personal characteristics may affect test norms. We should also factor in aspects of “age, gender, ability, race, ethnic group, national origin, religion, sexual orientation, linguistic background” (Flores et al., 2003, p. 45). During test construction, minorities are often not well catered for, meaning that we cannot necessarily assume accurate generalisations about client results (Flores et al., 2003).

We also must consider "equivalence" with assessment, as it relates to language, constructs, scales and norms within any test (Flores et al., 2003). The meaning of words and ideas - such as behaviour or values - change between different cultures. We need to ask ourselves if the questions/scales are relatable and understandable; and if the culture of clients is represented in norms. We need to be aware of the duality of those who walk in two value worlds - such as Māori and Pasifika people - of both individuality and the collective (Apulu, 2022). These differences must be noted and taken into account when interpreting results.

According to the Myers-Briggs website, the Myers-Briggs Type Indicator, or MBTI,  is “based on large representative norms that account for race, age and gender” (The Myers Briggs Foundation, 2022). They note that there have been “hundreds of studies over the past 40 years which have proven the instrument to be both valid and reliable" (The Myers Briggs Foundation, 2022). In the Buros Centre for Testing review, the Myers Briggs Foundation is quoted as stating that the MBTI is the "most popular personality type inventory" (Mastrangelo, 2001, p. 816).

MBTI is designed for ages 14 years and older, and is very accessible: there are a number of free tests which self-calculate online; if completing a full inventory with a licensed provider, the test can be completed and scored on paper, or taken as an online version. MBTI is available in 29 languages, which should make it possible for a range of cultures to complete the test in their first language (Mastrangelo, 2001; The Myers Briggs Foundation, 2022).

While we may find this test personally valid, when working with clients it is essential to independently establish the validity of quantitative assessments. The Buros Centre for Testing found that the MBTI test-retest reliability after 4 weeks was only 65% (Mastrangelo, 2001). Sixteen years later, in 2017, MBTI reliability estimates ranged from 38% to 97%, again averaging around two-thirds (Harris, 2017). This means that over a third of clients would get a different result if they retook the test within 90 days. Our clients are likely paying $350 for a professional MBTI assessment from a licensed practitioner, and it is easy to see that many would be unhappy with that level of reliability in their results.
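
To make that figure concrete, a minimal sketch (with invented four-letter types, not real client data) of how a test-retest agreement percentage is calculated might look like this:

```python
# Illustrative only: invented type results for ten clients, tested twice.
first_run  = ["INTJ", "ESFP", "ENTP", "ISFJ", "INFP", "ESTJ", "ENFJ", "ISTP", "INTP", "ESFJ"]
second_run = ["INTJ", "ESFJ", "ENTP", "ISFJ", "INFJ", "ESTJ", "ENFJ", "ISTP", "INTP", "ESFP"]

same_type = sum(a == b for a, b in zip(first_run, second_run))
print(f"test-retest agreement: {100 * same_type / len(first_run):.0f}%")  # here, 70%
```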

The founder of many of the concepts which MBTI is based upon, Carl Jung, warned that his personality types were useful primarily as tools for studying large numbers of people, and became all but meaningless when applied to individuals (Pittenger, 2005), throwing further doubt upon the validity of the test. It has also been noted that MBTI appears to have “no evidence to show a positive relation[ship] between MBTI tests and success within an occupation” (Pittenger, 1993, p. 52). Critiques have also been made of the binary scoring scales, which may lead to less than valid results (Harris, 2017; Mastrangelo, 2001). Tests such as the Big Five (aka the NEO) instead use continuous scales, which are considered a more valid and appropriate measure.

MBTI can be a useful tool to gather general preference information and a good self reflection tool, but it should not be relied upon AT ALL for hiring decisions. It is a preference indicator not a personality measure (Mastrangelo, 2001).


Sam, Alexandra, Donna, Karen & Helen

References:

Apulu, M. (2022). How to grow a culturally responsive career practice. [Master's thesis: University of Otago]. https://www.researchbank.ac.nz/bitstream/handle/10652/5711/MPP_2022_Peter_Apulu.pdf

Flores, L. Y., Spanierman, L. B., & Obasi, E. M. (2003). Ethical and professional issues in career assessment with diverse racial and ethnic groups. Journal of Career Assessment, 11(1), 76-95. https://doi.org/10.1177/106907202237461

Harris, S. M. (2017). [127] Myers-Briggs Type Indicator® Step III. In J. F. Carlson, K. F. Geisinger, & J. L. Jonson (Eds.) The Twentieth Mental Measurements Yearbook (pp. 521-526). The Buros Institute of Mental Measurements.

Mastrangelo, P. M. (2001). [251] Myers-Briggs Type Indicator [Form M]. In B. S. Plake & J. C. Impara (Eds.), The Fourteenth Mental Measurements Yearbook (pp. 816-820). Buros Center for Testing.

Nord, C. (2017). Could fMRI be a viable biomarker in psychiatry? A test-retest reliability fMRI study. https://www.ucl.ac.uk/pals/research/experimental-psychology/blog/fmri-viable-biomarker-psychiatry-test-retest-reliability-fmri-study/

Pittenger, D. J. (1993). Measuring the MBTI...And Coming Up Short. Journal of Career Planning and Placement, 54(1), 48-53.

Pittenger, D. J. (2005). Cautionary comments regarding the Myers-Briggs type indicator. Consulting Psychology Journal: Practice and Research, 57(3), 210-221. https://doi.org/10.1037/1065-9293.57.3.210

Osborn, D. S., & Zunker, V. G. (2016). Using Assessment Results for Career Development (9th ed.). Cengage Learning.

The Myers Briggs Foundation (2022). MBTI® Basics. https://www.myersbriggs.org/my-mbti-personality-type/mbti-basics/

* Karen Bennett, Alexandra Howe, Donna Manley & Helen Davie-Martin have kindly prepared much of the material used in this post. And I have mashed it up, and connected it :-)

read more "A critique of MBTI"

Friday, 24 March 2023

Qualitative versus quantitative career interventions

Assessment tools can be split into two broad categories: quantitative (number-, test- or survey-oriented; deductive; using mathematical modelling and numerical statistical patterns) and qualitative (relational, narrative and interpersonal; interview-, activity- and discussion-oriented; using drawing). These tools are used to guide our work with clients, assisting us to measure client characteristics such as values, skills, abilities, interests and personality. These two categories also help both the client and ourselves understand how their personal characteristics connect with occupational selection (Swanson & Fouad, 2020).

Quantitative tools tend to be represented by instruments such as standardised tests, measuring traits, counting and grouping interests; therefore, the psychometric properties of validity, reliability and norms hold high importance when considering the use of instruments that fall within this category. To be valid, tests must be normed and standardised, so that the results are consistent across the population being assessed. Tests must also be able to be taken once, then retaken, and obtain close to the same result (test-retest reliability; Osborn & Zunker, 2016).

On the other hand, qualitative tools are non-standardised tools, such as Savickas’ Career Construction interview (CCI), narrative therapy, card sorts and career genograms. Qualitative tools assist when working with diverse clients as they “enliven the career counselling process” (Okocha, 1998, p. 5). For example, genograms - vocational family trees - capture a client’s heritage. This can be immensely useful for exploring family patterns, modelling, and dispelling outdated ideas (Osborn & Zunker, 2016). 

Rather than taking a trait-based, person-fit approach, some theories encourage a relational approach. Career construction theory, or CCT, addresses the needs of a workforce facing frequent change and transition (Savickas, 2013). If we stop to think about how much the world has changed in recent years, it is easy to see how technology, rationalisation, redundancy, and up-skilling have resulted in 'new' roles: who would have thought of a "Work-from-home facilitator" pre-Covid? (Kelly, 2021). With CCT, the practitioner utilises 'life design' processes such as story-telling and self-construction techniques, taking either a group or individual approach (Maree, 2019). Career construction enables clients to build their view of self “from the inside out,” rather than from the outside in, as trait theory prescribes (Savickas, 2013, p. 182). Research seems to indicate that a more relational approach builds greater adaptability and resilience (Savickas, 2013). Practitioners applying this approach may choose to use quantitative assessments - such as values inventories - or not, as best suits each client.

The main thing is that tools should not channel us or our clients: they should help clients get to know themselves better, and assist them to make good-quality choices.


Sam

References:

Kelly, J. (19 May 2021). 10 Hot, Fast-Growing Jobs For The Future Post-Pandemic World. Forbes. https://www.forbes.com/sites/jackkelly/2021/05/19/10-hot-fast-growing-jobs-for-the-future-post-pandemic-world/?sh=f7d44ad5d064

Maree, J. G. (Ed.). (2019). Handbook of Innovative Career Counselling. Springer.

Okocha, A. A. (1998). Using qualitative appraisal strategies in career counseling. Journal of Employment Counseling, 35(3), 151-159. https://doi.org/10.1002/j.2161-1920.1998.tb00996.x

Osborn, D. S., & Zunker, V. G. (2016). Using Assessment Results for Career Development (9th ed.). Cengage Learning.

Savickas, M. L. (2013). Career Construction Theory and Counseling Model. In S. D. Brown & R. W. Lent (Eds.) Career Development and Counseling. Putting Theory and Research to Work (2nd ed.). John Wiley & Sons.

Swanson J. L., & Fouad, N. A. (2020). Career Theory and Practice: Learning through case studies. Sage Publications, Inc.

read more "Qualitative versus quantitative career interventions"

Friday, 17 March 2023

Culture-Infused Career Counselling

A person’s culture is determined by a combination of some or all of "ethnicity, gender, religion, ability, sexual orientation, age, and social class" (Arthur & Collins, 2011, p. 147). Developed in Canada, the Culture-Infused Career Counselling (CICC) model (Arthur & Collins, 2011; Arthur, 2019) is based on the premise that a person's culture and identity are relevant to career concerns and must be considered to provide a fair, just career intervention (Arthur, 2019). CICC provides a framework - or model - to incorporate culture into our practice, as "cultural influences are inextricably woven into a [client]’s career development" process (Arthur & Collins, 2011, p. 147). CICC "focuses on establishing an effective and culturally sensitive working alliance with clients" (p. 148).

The CICC model has four stages, as follows:

  1. "Gaining awareness of personal cultural identities" (Arthur, 2019, p. 22)
  2. "Gaining awareness of the cultural identities of other people" (p. 22)
  3. "Understanding cultural influences on the working alliance" (p. 22)
  4. "Implementing culturally responsive and socially just career interventions"  (p. 23).

As career practitioners, we need to be awake to our personal cultural approach in our client work, where those clients come "from nondominant populations" (Arthur & Collins, 2011, p. 148). Further, we must ensure that any interventions we choose will have an appropriate meaning and purpose "within the cultural contexts of [our] clients’ lives" (Arthur, 2019, p. 27). 

Finding information on what is 'appropriate' for our client norm group may be difficult to determine because of the lack of applied research in the Antipodes, but, following the CDANZ code of ethics (2016), considering the following elements will lead us to good practice:

  • "Respect - the dignity and personal rights of the client involved and the client’s right to self-determination, and treat the client honestly, and with respect, empathy, and integrity at all times" 
  • "Ensure – that any ethical and cultural dimensions relevant to the client are respected" 
  • "Remain - fully aware of their social responsibility and the impact of their recommendations and actions" (CDANZ, 2016).

It is our role to ensure we "keep ethics and culture in dialogue with each other" (Agee et al., 2011, p. 29). It reminds us that if we are working with a client who is of a different culture to ourselves, whose shoes we have not walked in, we need to be very careful not to make ‘assumptions’.


Eleanor

References:

Arthur, N., & Collins, S. (2011). Infusing culture in career counseling. Journal of Employment Counseling, 48(4), 147-149. https://doi.org/10.1002/j.2161-1920.2011.tb01098.x

Arthur, N. (2019). Chapter 3: Culture-Infused Career Counselling: Connecting culture and social justice in career practices. In N. Arthur, R. Neault, & M. McMahon (Eds.), Career Theories and Models at Work: Ideas for practice (pp. 21-30). CERIC.

Agee, M., Crocket, K., Fatialofa, C., Frater-Mathieson, K., Kim, H., Vong, C., & Woolf, V. (2011). Chapter 1.3 Culture is Always Present: A conversation about ethics. In K. Crocket, M. Agee, & S. Cornforth (Eds.), Ethics in Practice: A guide for counsellors (pp. 28-32). Dunsmore Press.

CDANZ. (2016). Code of Ethics. Career Development Association of New Zealand. https://cdanz.org.nz/ModularPage?Action=View&ModularPage_id=26

* Eleanor Blakey has kindly prepared most of the material for this post

read more "Culture-Infused Career Counselling "

Friday, 3 March 2023

Not slipping up with testing

Collectively, our clients have a wide range of worldviews, experiences, values, and expectations due to their cultural and social context. As practitioners, we must be open to exploring the client's context, not making assumptions about their culture, their identity, and their ethnicity (Arthur & Collins, 2011).

We as practitioners, and the assessments we select, can both inadvertently be racist and ethnocentric. Our clients deserve valid, culturally appropriate career experiences, through which they come to understand themselves and are better able to make informed choices (Blustein & Ellis, 2000). Any assumptions we make about a client's cultural influences can be problematic, potentially resulting in stereotypical thinking. This not only contradicts our professional codes of ethics (CDANZ, 2016; Stuart, 2004), but also limits our effectiveness in building the focus of our practice: “a strong therapeutic alliance” (Flores et al., 2003, p. 78).

Our role as practitioners is to reflect on the client's cultural identity, considering whether a particular tool or model may be culturally relevant for them, and whether it will aid their understanding of self in terms of future aspirations (Arthur & Collins, 2011). Coming back to our underlying career theories helps us to engage in more rigorous analysis with our client. We take the time, together, to consider what theory should guide our process, and that should lead us to measurement tools which have value and relevance for the client in front of us (Blustein & Ellis, 2000). Having our client undertake any test without due consideration, planning, and research is unethical (CDANZ, 2016). Further, we must honour Te Tiriti, offering Māori models and processes to aid our clients' career pathways (Came et al., 2020).

We need to remind ourselves that data from quantitative assessments are only valid when derived - and normed - from within the person's culture. Thus assessing the appropriateness of a particular test, and of the information collected, should be a careful and considered process (Stuart, 2004). If we are not careful, not only may the data collected not be relevant, it may cause confusion (Osborn & Zunker, 2016), or damage the client's self-concept (Arthur & Collins, 2011). During a results debrief, we need to acknowledge where cultural biases in the tool - which was not designed for New Zealand - may impact the test findings for our client (Arthur & Collins, 2011). Our clients tend to come from multiple cultures here in Aotearoa, and we are a small population: few assessments are normed here.

So, when we are using a quantitative assessment which has not been normed for our client's cultural group, it has been suggested that we treat that assessment as if it were qualitative: i.e. that it provides general advice, which should be used with caution (Flores et al., 2003).

Let's be careful out there. We don't want to slip up.


Sam & Fiona

References:

Arthur, N., & Collins, S. (2011). Infusing culture in career counseling. Journal of Employment Counseling, 48(4), 147-149. https://doi.org/10.1002/j.2161-1920.2011.tb01098.x

Blustein, D. L., & Ellis, M. V. (2000). The Cultural Context of Career Assessment. Journal of Career Assessment, 8(4), 379–390. https://doi.org/10.1177/10690727000080040

Came, H., Kidd, J., & Goza, T. (2020). A Critical Tiriti Analysis of the New Zealand Cancer Control Strategy. Journal of Cancer Policy, 23, 100-210. https://doi.org/10.1016/j.jcpo.2019.100210

CDANZ (2016). Code of Ethics. The Career Development Association of New Zealand. http://www.cdanz.org.nz/uploads/CDANZ_CoE_Word%20English%20Final.pdf

Flores, L. Y., Spanierman, L. B., & Obasi, E. M. (2003). Ethical and professional issues in career assessment with diverse racial and ethnic groups. Journal of Career Assessment, 11(1), 76-95. https://doi.org/10.1177/106907202237461

Osborn, D. S., & Zunker, V. G. (2016). Using Assessment Results for Career Development (9th ed.). Cengage Learning.

Stuart, B. (2004). Twelve Practical Suggestions for Achieving Multicultural Competence. Professional Psychology: Research and Practice, 35(1), 3–9. https://doi.org/10.1037/0735-7028.35.1.3

* Fiona Wilson has kindly prepared much of the material used in this post

read more "Not slipping up with testing"

Friday, 25 November 2022

My favourite instrument

Now I wonder why have I not done a post - in all these years - detailing my PERSONAL favourite career assessment instrument; my most enjoyed test? That is a very interesting question! Perhaps it is because I would have to put my own decision up for scrutiny? Perhaps it is because I have to actually decide on one? Perhaps it is because I don't really know why I like this particular instrument? Hmm. I think this aspect of 'why' needs more reflection!

However, to move on to the 'what': my personal favourite test is the Herrmann Brain Dominance Instrument (HBDI). Following the brain research of Levy-Agresti and Sperry (1968), Ned Herrmann, then a manager at General Electric, began developing the brain-dominance model in 1976 to examine preferred thinking styles.

The test is administered using a 120 question test battery (Herrmann International Asia, 2008). I like this test because it shows participant differences, so it can be used to see where a department, organisation or board is well rounded... or not. Having been the board chair of an organisation where HBDI was administered to all staff, I had it administered to board members as well. Collectively, we found the organisation was weaker in one quadrant, and after more exploration, we decided to deliberately recruit a new staff member who was strong in that area.

While Herrmann International Asia (2008) claims that the HBDI has been validated by research, fMRI studies have shown that the left/right brain, upper/lower quadrant 'geographical' thinking sites originally outlined in HBDI's literature are simplistic and unevidenced.

However, in my view - like many tests, including MBTI - the test still has some utility, as the thinking styles themselves still show patterns and may be valid (Bunderson, 2003). If we step back and consider it a 'sorting hat' (Rowling, 1997), if you will, and use it for guiding decisions rather than as the be-all and end-all of truth, then it has some value. It certainly gave the organisation I was with good value as a lens to see where we lacked 'wholeness'.

If you want to know a bit more about how HBDI works for me, watch the video below:

I hope you find that interesting!


Sam

References:

Bogen, J. E., & Gazzaniga, M. S. (1965). Cerebral commissurotomy in man: Minor hemisphere dominance for certain visuospatial functions. Journal of Neurosurgery, 23(4), 394-399. https://doi.org/10.3171/jns.1965.23.4.0394
Bunderson, C. V. (2003). The Validity Of The Herrmann Brain Dominance Instrumenthttp://www.hbdi.com/uploads/100021_resources/100331.pdf
Herrmann International Asia (2008). Confidential Personal Profile Information. Author.
Levy-Agresti, J., & Sperry, R. (1968). Differential Perceptual Capacities in Major and Minor Hemispheres. Proceedings of the U.S. National Academy of Sciences, 61(3), 1151. https://doi.org/10.1073/pnas.61.4.1435
Nebes, R. D. (1971). Superiority of the minor hemisphere in commissurotomized man for the perception of part-whole relations. Cortex7(4), 333-349.https://doi.org/10.1016/S0010-9452(71)80027-8
Rowling, J. K. (1997). Harry Potter and the Philosopher’s Stone. Bloomsbury Publishing (UK) Ltd.
Young, S. (18 March 2017). AUT Leadership AUT Topic 2c Followership Part 3 HBDI, 2013 [video]. YouTube. https://youtu.be/w2_l7YZlaNo
read more "My favourite instrument"

Friday, 1 April 2022

Cross-group comparison where there is little data

When we are trying to evaluate something, such as an assessment, we need to know what the norms are for the base culture, to be able to make sense of the test results.

Norms are score standards developed from individual tests, analysed, checked for content and construct validity, and based on either the general population or on specific groups (Osborn & Zunker, 2016). Practitioners use test norms to determine whether their client's background and characteristics fit derived population norms (Osborn & Zunker, 2016).

If we are testing a sub-culture within a national culture, then we would hope that the test had been normed for the culture we were applying the test to. There are significant cultural differences between Americans, and Pākehā and Māori population groups. US assessments being used in New Zealand appear to lack a local frame of reference for Aotearoa (Reid, 2010) - norm group - so results are likely to be less valid and less reliable.

In the USA, obtaining sound statistical and population data for comparisons is relatively simple, with test reviews and data co-ordinated by the Buros Institute (Carlson et al., 2021). However, in New Zealand we have no Buros Institute: there is no national, independent, co-ordinating organisation with the resources to gather test data, to analyse that data, and to norm the test results.

Further, norming tests is expensive. If we consider that many of the tests cost between $300 and $500 per administration, and most tests require around 350 norm group samples, the opportunity cost is immediately visible: roughly $105,000 to $175,000. Ouch. Few tests in New Zealand have established norms, and those which exist are proprietary and not available for independent analysis and validation.
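
A quick back-of-envelope check of that arithmetic (using the per-test prices and norm-group size quoted above, which are indicative rather than authoritative):

```python
# Back-of-envelope norming cost, using the figures quoted in this post.
cost_per_test = (300, 500)   # NZD per administration
norm_group_size = 350        # typical norm group sample

low, high = (c * norm_group_size for c in cost_per_test)
print(f"opportunity cost of one norm group: ${low:,} to ${high:,}")
# -> $105,000 to $175,000
```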

On an international scale, Aotearoa - at 5 million people - is the size of a modest city. We would have to do a LOT of paid tests to recoup the cost of creating a norm group. If we want to norm a Māori group, that is 16% of the overall population: 800,000 people. It is a small pool to be drawing from. It becomes difficult to reach enough participants to create a reliable norm group, and it becomes even more obvious that creating sub-culture norm groups may well be too expensive for the anticipated return.

So how can we compare cultural markers and data from one country with the cultural markers of another country? By using a bridging measure. For example, if we had norm group data for a US test, we could use a cultural measure from the country where the norm group data is based. We could compare the norm group culture to the culture of NZ, and bridge - or not bridge - the career assessment norm results, depending on the cultural alignment.

A couple of possibilities for cultural bridging would be the Hofstede (1980, 1984) cultural continuum measures, or the work of House et al. (2004; see also Chhokar et al., 2008). We attempt to work out the differences between the cultures using this intermediary marker. First we compare the norm group from the US with the US cultural measure, where we hope to see many similarities. Then we explore the differences between the cultural measure of Aotearoa and the cultural measure of the US. We then predict the likely norm group differences.
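
A minimal sketch of that bridging idea follows. The dimension names echo Hofstede's (1980, 1984) framework, but the scores are illustrative placeholders I have invented, not actual published figures:

```python
# Illustrative only: placeholder dimension scores, not real Hofstede data.
from math import sqrt

DIMENSIONS = ["power_distance", "individualism", "masculinity", "uncertainty_avoidance"]
norm_country = {"power_distance": 40, "individualism": 90, "masculinity": 60, "uncertainty_avoidance": 45}
aotearoa     = {"power_distance": 25, "individualism": 75, "masculinity": 55, "uncertainty_avoidance": 50}

# Euclidean distance across the dimensions as a crude 'bridging' gap measure.
gap = sqrt(sum((norm_country[d] - aotearoa[d]) ** 2 for d in DIMENSIONS))
print(f"cultural distance (0 = identical profiles): {gap:.1f}")
# The larger the distance, the more cautiously we would interpret the imported norms.
```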

Although this is not ideal, the cross-cultural measurement bridge may help us here in New Zealand to better predict cultural fit of testing, and to identify measures that do not work here, such as - in my experience - the extraversion continuum in the Big 5 test.


Sam

References:

Carlson, J. F., Geisinger, K. F. & Jonson, J. L. (Eds) (2021). The Twenty First Mental Measurements Yearbook. The Buros Institute of Mental Measurements. https://buros.org/mental-measurements-yearbook

Chhokar, J. S., Brodbeck, F. C. & House, R. J. (Eds.). Culture and Leadership Across the World: The GLOBE Book of In-Depth Studies of 25 Societies. Sage Publications, Inc.

Hofstede, G. H. (1980). Culture’s Consequences: International Differences in Work-Related Values (1984 Abridged Edition). SAGE Publications, Inc.

House, R. J., Hanges, P. J., Javidan, M., Dorfman, P. W., & Gupta, V. (Eds.). (2004). Culture, leadership, and organizations: The GLOBE study of 62 societies. Sage Publications, Inc.

Osborne, D. S., & Zunker, V. G. (2016). Using Assessment Results for Career Development (9th ed.). Cengage Learning.

Reid, L. A. (2010). Understanding how Cultural Values Influence Career Processes for Maori [Doctoral Thesis: AUT University]. http://openrepository.aut.ac.nz/handle/10292/1036

read more "Cross-group comparison where there is little data"

Friday, 4 February 2022

Some career assessment resources

If we take qualitative assessment instruments to mean those “methods [which] are flexible, open-ended, holistic, and nonstatistical” (Goldman, 1992, p. 616), then logically, quantitative methods should be those which are structured, close-ended, focused, and use statistical methods. Testing should be standardised, validated, and replicable. However, testing is also largely normed for US populations (Osborn & Zunker, 2016). A review of Buros shows that there are very few tests which have actually been normed for other nations, let alone for the smaller population groups within those nations (Spies et al., 2010).

While I appreciate the cost of developing career assessment instruments, it can be difficult to find sound quantitative testing methods - and their base theories - which aren't hidden behind a paywall. Fortunately, the following two pages of testing resources are kindly supplied and maintained by US academic Paul Spector, of the University of South Florida. They should be of interest to anyone in career practice:

Paul says on his page: "Looking for measures for a study? I have created two free assessment archives with links to dozens of organizational and nonorganizational measures. Almost all are free to use for noncommercial (educational/research) purposes. Where possible, I linked to the article itself that contains the measure, mostly on ResearchGate". These resources provide "organizational measures of attitudes, behavior, environment, leadership, occupational health/safety", plus "general measures of mental and physical health, health behavior, positive well-being, personality".

Paul can be emailed at pspector@usf.edu.


Sam

References

  • Goldman, L. (1992). Qualitative assessment: An approach for counselors. Journal of Counseling and Development, 70(5), 616–621. http://dx.doi.org/10.1002/j.1556-6676.1992.tb01671.x
  • Osborn, D. S., & Zunker, V. G. (2016). Using Assessment Results for Career Development (9th ed.). Cengage Learning.
  • Spector, P. (2021). Assessment Archive. https://paulspector.com/assessments/assessment-archive/
  • Spector, P. (2021). Personality. https://www.stevenericspector.com/mental-health-assessment-archive/personality/
  • Spies, R. A., Carlson, J. F., & Geisinger, K. F. (Eds) (2010). The Eighteenth Mental Measurements Yearbook. The Buros Institute of Mental Measurements.

read more "Some career assessment resources"

Monday, 16 August 2021

LinkedIn's Career Explorer

There is a new kid on the block of interest inventories, developed by the LinkedIn platform: Career Explorer. This new tool is free, and designed to help us identify potential new vocational paths and opportunities for upskilling (think Microsoft's LinkedIn training arm, Lynda) so we can move from a desire for change to a more likely reality (Adams, 2021).

Like CareerQuest in New Zealand (Careers New Zealand, 2021), it appears that Career Explorer "looks at thousands of job-specific skills and tech knowledge ranging from time management and interpersonal skills to familiarity with various software applications. Using a metric the company calls 'skill similarity', the tool helps" us to see potential transitions between skill sets (Adams, 2021).

However, CareerQuest is designed for those who are not yet in work (Careers New Zealand, 2021). Career Explorer is designed for those who are already in work, but looking to transition into something else (LinkedIn, 2021).

We can select our city, then enter the job we currently have. The 'sorting hat' inside the app then provides us with options in our locale based on the recruiter information that LinkedIn already possesses (LinkedIn, 2021), and the skills, interests, and values information all we LinkedIn member cattle have provided Microsoft freely over the years.

This could eventually be a useful piece of kit. However, from a quick tour through the data recently, there appears to be very little New Zealand data. I found it heavily US-centric. I suggest either not selecting a city, or selecting a more cosmopolitan US city such as New York, then entering our current role.

I didn't find anything too startling in the list that arose, but it does provide us with another tool for considering transition options with our clients. And, if our clients are already active on LinkedIn, it will be an environment which they are familiar with... so are more likely to use, and to rely on.

Give it a try.


Sam

References

read more "LinkedIn's Career Explorer"

Monday, 5 October 2020

Long answer test technique

Students tend to struggle with tackling written examinations. This only gets harder as we move up through education levels, as the complexity and quality of the thinking required also increases. In general, controlled examinations do not require an essay: the marker wants to see that students are able to demonstrate their learning.

The advice I give to my students is to divide the minutes available for the test by the marks (normally that ratio will come out at a little under two minutes per mark). We use the number our calculation gives us to budget our time against the marks for each question.

For example, if we are undertaking a 180 minute test of 100 marks, this allows us 1.8 minutes per mark, without any time for review. For a ten mark question, we allow 18 minutes; double it for a 20 mark question to 36 minutes. If we want to build in review time, then we could drop this down to 15 minutes per ten marks (30 for a 20 mark question), giving us half an hour for review at the end.
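
For those who like a formula, here is a small sketch of that rule of thumb (my own helper function, not from any cited source):

```python
# A small helper for the minutes-per-mark rule of thumb described above.
def time_budget(total_minutes, total_marks, question_marks, review_minutes=0):
    """Minutes to spend on a question worth `question_marks` marks."""
    per_mark = (total_minutes - review_minutes) / total_marks
    return per_mark * question_marks

# 180-minute, 100-mark exam: 1.8 minutes per mark...
print(time_budget(180, 100, 10))                      # -> 18.0 minutes for a 10-mark question
print(time_budget(180, 100, 20))                      # -> 36.0 minutes for a 20-mark question
# ...or hold back 30 minutes for review: 1.5 minutes per mark.
print(time_budget(180, 100, 10, review_minutes=30))   # -> 15.0 minutes
```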

When tackling a long answer question - the type we are likely to be asked as post-graduate students - I give the following advice to students in the Polytechnic sector, where learning needs to be applied. There are five aspects that we need to show the examiner: that we can analyse the situation, that we can select appropriate theory, that we understand the theory by defining it, that we can justify why we chose it, and that we can apply it to a real situation or case. Those five steps are: analyse, select, define, justify, apply:
  1. That we can appropriately and accurately analyse the situation in the case, linking to the materials we have been exposed to in our course work;
  2. That we can select appropriate tools, theories and frameworks to answer the question and meet the needs of the client in the case;
  3. That we understand the tools, theories and frameworks by defining them. That we can define it, explain what it does, and can break it down into its key components;
  4. That we can justify why those particular tools, theories and frameworks have been chosen;
  5. That we can apply those tools, theories and frameworks to the real case. In postgraduate education, students are most likely to be assigned a case, so we will need to ensure that our entire response is focused on the case we are being examined on: how those in the case have used the tool, how we have used it ourselves, or how we propose they use it in future. Clearly applied.
I also have a strategy for when we run short of time, or are stuck:
Stuck: if we are stuck, define the theory we think we are being asked about, detail the components, then paraphrase what we think the examiner is asking us, and answer that question to the best of our ability, providing examples. Even if we are off track, other students too may have misread or not been able to interpret the question, and the examiner may give us marks for what we have answered.
Short of time: if we are running out of time, quickly define the theory we think we are being asked about, again, paraphrase what the examiner is asking us, then just list where we were aiming to go with our answer in brief bullet points. It may not get us many marks, but some marks are better than none.
Remember: analyse, select, define, justify, apply. Work hard to master the material, and ALWAYS put an answer, even when you are stuck.

Good luck!


Sam
read more "Long answer test technique"

Friday, 11 September 2020

Career practice philosophies

Teaching career development to students is certainly an interesting role for a practitioner to have. I get so many very interesting questions from students, which spark fascinating discussions.

Recently I had a student who was tackling a textbook exercise where they were being asked about their client's career philosophy (Osborn & Zunker, 2016). The student responded with worry that they were not really clear as to what their own career philosophy was, let alone their client's one.

My response was that I felt that the student would already have a preferred way of working with their clients: that they would more than likely have a particular theory, set of theories or range of approaches which resonated with them... even though they may not have deliberately put a name to them yet.

It made me think about the fact that when we get a new client in our practice, we will note what language they use, what markers they give us, what behaviours they display, and what cues they provide. We will - cautiously - start to form a picture about how this new client may like to work, and therefore what tools and approaches that client will resonate with. As our experience grows as practitioners, we realise that some clients will want a quick fix, to be task focused: others will want to build an on-going, relationship-oriented conversation.

I then related a story to the student about an academic in Australia who was leading CEOs through an MBA programme. The colleague was trying to shift the student/CEO mindset, and decided to put the CEOs at the edge of their comfort zone by using a guided practice of meditation and yoga in order to create discussion (Sinclair, 2007).

What was very interesting was that this one act effectively lit the fuse on what became a two-sided rebellion which almost derailed the entire course. One group of students wanted facts, quantitative methods, to stay private, and to keep exploration superficial. They wanted THE answer, not enlightenment. The other group of students wanted to reflect, to be open to new experiences, and to explore their own views deeply. They wanted to be investigative, and to ask many questions: they sought personal enlightenment.

All that had happened was a clash of philosophical approaches: but the clash was so powerful, so potentially derailing, that it prompted the academic to write a paper about the event. The approaches may be thought of as quantitative or qualitative; or we may think of these philosophies as being deconstructivist/Socratic or constructivist; but I am not sure that initially we need the labels. The useful thing to determine is what our own way is: who we are when we work with our clients.

Although initially we may not have names for what those client approaches are, by adding to our theory knowledge, we can learn to identify the grounding philosophies, and understand how that changes the way we approach our clients.

Getting familiar with the underpinning career theories helps us to improve our service by being more responsive. It should not narrow our approaches so much that we pigeonhole, and forget to be open to clearly hearing our client.

...and what was even more interesting - and utterly off topic - is that in the case above, the task-oriented group were men; the relationship-oriented group were women (Sinclair, 2007).


Sam

References:

  • Sinclair, A. (2007). Teaching Leadership Critically to MBAs: Experiences From Heaven and Hell. Management Learning, 38(4), 458-472. https://doi.org/10.1177/1350507607080579
  • Osborn, D. S., & Zunker, V. G. (2016). Using Assessment Results for Career Development (9th ed.). Cengage Learning.
read more "Career practice philosophies"

Monday, 17 August 2020

Personality testing

A phrenological diagram (Combe, 1834, p. 20, citing Dolci, 1562)
We often conflate assessments into one pile: those of personality tests. So what are personality tests?

"Personality tests are are self-report questionnaires in which the respondent provides information about [their] feelings or behaviors". Personality plays "a significant role in helping people determine occupations that may or may not be a good match for them", with the matching self-knowledge growing from self-reflection (Greenhaus & Callanan, 2006, p. 633).

This understanding of ourselves brings "a better understanding of which occupations and career paths would better match [our] interests and personality. Being honest with ourselves about who we are and our strengths and weaknesses can help us choose situations that we will be comfortable in, as well as make us aware of situations that we might want to avoid" (Greenhaus & Callanan, 2006, p. 633).

Interestingly, psychologists agree that "the optimal personality test is one that measures the Big Five personality dimensions: extroversion, agreeableness, conscientiousness, emotional stability, and openness to experience. [...] Literally hundreds of personality tests have been used for selection; however, tests of the Big Five are among a very small group of such tests that have demonstrated value in selection" (Greenhaus & Callanan, 2006, p. 633).

It is surprising how few tests have been rigorously tested (Cripps, 2017; Osborn & Zunker, 2016). However, there are barriers to personality testing: many tests are proprietary, requiring the permission of the owners before any research can take place; we need a stable population to be able to test and re-test on; we need to be immensely careful about how questions are asked; we need to be very careful of participant bias, of researcher bias, of sample size, of methodological approach; and we need to be careful of vested interest. There are large career and human resource trucks which drive the economic engine of the testing sector (Cripps, 2017).

Like the study of phrenology in the 19th century, I think personality testing will be a hard train to derail. It has usefulness in places, but not in ALL the places we currently use it.
We just need to be careful to use testing appropriately, and not assume that we will get 'answers' from it. With good use, we should get better questions.

Let's be careful out there.


Sam

References:
  • Combe, G. (1834). A System of Phrenology (3rd American ed.). Marsh, Capen and Lyon.
  • Cripps, B. (Ed.) (2017). Psychometric Testing: Critical perspectives. Wiley Blackwell.
  • Greenhaus, J. H., & Callanan, G. A. (2006). Encyclopedia of Career Development (Vols. 1 & 2). SAGE Publications Ltd.
  • Osborn, D. S., & Zunker, V. G. (2016). Using Assessment Results for Career Development (9th ed.). Cengage Learning.
read more "Personality testing"