
Monday, 24 February 2025

The Emperor has no clothes & AI

While I am sure that AI platforms will improve, I was struck by a Guardian long read article last year where a journalist reported that, "when I asked ChatGPT to write a bio for me, it told me I was born in India, went to Carleton University and had a degree in journalism – about which it was wrong on all three counts (it was the UK, York University and English). To ChatGPT, it was the shape of the answer, expressed confidently, that was more important than the content, the right pattern mattering more than the right response" (Alang, 2024).

I think that is the core of the AI problem: the confidence of the delivery from the AIs we consult (Alang, 2024). The large language models which AIs are trained upon are, logically, North American: that is where the tech companies are, and the USA has driven much of the research and IT work for the past half century. The US is probably the most WEIRD society (here; Henrich et al., 2010): a Western, educated, industrialised, rich and democratic society, and WEIRD societies collectively make up only 12% of the global population. Researchers have considered "how WEIRD [society populations] measure up relative to the available reference populations" (p. 62), finding that in most behavioural research studies, a full "68% of [research participants] came from the United States, and a full 96% of subjects were from Western industrialized countries, specifically those in North America and Europe, as well as Australia and Israel" (p. 63); and, even more narrowly, that "67% of the American [participants] (and 80% of the [participants] from other countries) were composed solely of undergraduates in psychology courses" (p. 63).

So not very representative, then. And if we think of the 12% of the global population in WEIRD societies, around 50% will be female. Around 40% of Americans go to college (National Center for Education Statistics, 2020). So let's assume that, of the 6% of the global population who are male members of WEIRD societies, 40% have gone to college. While this is very rough maths, at most, AI is based on 2.4% of the global population (and it will be a fraction of that number, because few will have completed an IT degree, let alone a behavioural science degree, as per Henrich et al., 2010). Yes, I know I am comparing apples with oranges, but I don't think we can safely assume that the data being used to 'train' the AI models is unbiased. I think it is pretty clear that the training data is based on a tiny, non-representative percentage of the global population.
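The back-of-the-envelope maths above can be sketched out like this (the figures are the rough estimates from this post, not measured data):

```python
# Back-of-the-envelope estimate of the share of the global population
# plausibly represented in the training pipeline (figures from the post).
weird_share = 0.12     # WEIRD societies: ~12% of the global population
male_share = 0.50      # assume roughly half are male
college_share = 0.40   # ~40% attend college (NCES, 2020)

upper_bound = weird_share * male_share * college_share
print(f"{upper_bound:.1%}")  # → 2.4%
```

Very rough maths indeed, but it makes the point: even the most generous upper bound is a sliver of humanity.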

The software and hardware engineers working on AI are also likely to be male, with a good chunk from North America (Alang, 2024). While 23% of workers in IT are women (Deloitte, 2021), it was noted at one large US company that there were "641 people working on 'machine intelligence,' of whom only 10 percent were women" (Simonite, 2018). So yes, while women make up nearly a quarter of the IT sector, the gender distribution is uneven. And if we come back to Von Bertalanffy's systems theory (1968), this shows that the input is definitely biased. Thus the transformation - no matter what we do elsewhere - will also be biased. This means that the output, too, will be biased.

We are used to consulting the internet for factual answers. Yet there is a growing trend that what is on the internet is a mashup of fact and fiction. Since the early 1990s, we 'little people' have been able to create our voices without the peer review of publishers and others to filter what we say. And now, perhaps throwing a massive spanner in the works, generative AI creates blends of fiction and fact... and - unless we know our field - we have little idea which elements are fiction, and which are factual (Alang, 2024; Lingard, 2023). We consult the oracle and lack the understanding to be able to point out that the emperor has no clothes.

But the more I read, the more I think that the emperor is indeed naked. So far, anyway.


Sam

References:

Alang, N. (2024, August 8). No god in the machine: the pitfalls of AI worship. The Guardian. https://www.theguardian.com/news/article/2024/aug/08/no-god-in-the-machine-the-pitfalls-of-ai-worship

Deloitte. (2021, December 1). Women in the tech industry: Gaining ground, but facing new headwinds. https://www2.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2022/statistics-show-women-in-technology-are-facing-new-headwinds.html

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83. https://doi.org/10.1017/S0140525X0999152X

Lingard, L. (2023). Writing with ChatGPT: An illustration of its capacity, limitations & implications for academic writers. Perspectives on Medical Education, 12(1), 261-270. https://doi.org/10.5334/pme.1072

National Center for Education Statistics. (2020). Chapter 2: College Enrollment Rates. In The Condition of Education. https://nces.ed.gov/programs/coe/pdf/coe_cpb.pdf

Simonite, T. (2018, August 17). AI Is the Future—But Where Are the Women? WIRED. https://www.wired.com/story/artificial-intelligence-researchers-gender-imbalance/

Von Bertalanffy, L. (1968). General System Theory: Foundations, Development, Applications. George Braziller.


Monday, 20 January 2025

So what is slop then?

If you haven't heard of "slop" yet, I am sure you have seen it. Slop is AI-generated crap, according to Simon Willison, the coiner of the term (Hern & Milmo, 2024). Think religious characters with lobster arms (Hern & Milmo, 2024; see image accompanying this post). Think people with ten fingers and dissolving faces. Think truckloads of babies (Rawhiti-Connell, 2024; see image accompanying this post). Slop. Using lots of generative AI computing power, even the least creative amongst us is able to create "low-quality content [...with a few commands which] allow[s] anyone who don't know a thing about the art to create their own slop" (Zhou et al., 2024, p. 5). Rawhiti-Connell (2024) compares the apparently runaway generative AI slop on Facebook to a hydra, where FB's minimum-wage content moderators "cut off one head, [only to have] two more ads about witnessing your brother’s best friend masturbating appear".

Slop "isn’t interactive, and is rarely intended to actually answer readers’ questions or serve their needs. Instead, it functions mostly to create the appearance of human-made content, benefit from advertising revenue and steer search engine attention towards other sites" (Hern & Milmo, 2024). A proxy for interaction, then. What is really interesting is that I don't think answering questions, serving needs or creating interaction was ever the reason for social media: it was always generating money through advertising placement. As Rawhiti-Connell says, we, the grazing herd, "are the product" (2024). 

Meta, dubious owner of equally dubious Facebook, reported the removal of "631 million fake accounts globally in the first quarter of 2024" for creating fake news stories related to celebrities - known as 'Celebcore' (Rawhiti-Connell, 2024) - which is also slop. Crap that is all pixels and no actual content, created by generative AI to look like news, but to not actually be news. Apparently FB has become so rife with AI slop that it is turning into an AI ghost town (Rawhiti-Connell, 2024): but I have no personal knowledge of this, as I left FB after the Cambridge Analytica scandal broke. It appears that social media platforms are increasingly full of "ads, and now, the plague-like sprawl of fake posts, bot-farmed engagement and meaningless imagery", with the drivers of engagement being "outrage and anger" (Rawhiti-Connell, 2024). We really are sparking some useful emotions here :-)

Social media always was a "pseudo connection", but it is becoming more so as the clock ticks on. The "lucrative advertising business [of FB] sits flush alongside its transformation from a digital hub of genuine social interaction to a chaotic landfill of misinformation and AI-generated freak shows" (Rawhiti-Connell, 2024).

So I wonder what will happen when social media platforms are so full of slop that people stop coming to look at the ads?


Sam

References:

Hern, A., & Milmo, D. (2024, May 19). Spam, junk … slop? The latest wave of AI behind the ‘zombie internet’. The Guardian. https://www.theguardian.com/technology/article/2024/may/19/spam-junk-slop-the-latest-wave-of-ai-behind-the-zombie-internet

Rawhiti-Connell, A. (2024, July 9). Fake news, AI slop and little human connection: What is Facebook these days? The Spinoff. https://thespinoff.co.nz/internet/09-07-2024/fake-news-ai-slop-and-little-human-connection-what-is-facebook-these-days

Zhou, K. Z., Choudhry, A., Gumusel, E., & Sanfilippo, M. R. (2024). "Sora is Incredible and Scary": Emerging Governance Challenges of Text-to-Video Generative AI Models. arXiv, Advance online publication. https://arxiv.org/pdf/2406.11859


Friday, 25 October 2024

AI Hallucination

I have a burning question around the validity of AI. I have run my own tests (here) in ChatGPT, where I felt that the hype around AI was just that: hype. My brief and rather unscientific experiments found that the AI I had used (ChatGPT) effectively made up the answers I obtained, which is called "AI hallucination" (Lingard, 2023). I know the answers I was given by the AI were largely nonsense because I carefully validated what the AI had delivered to me.

The trouble is, when we are writing academically, we cannot afford to stand our arguments on dodgy evidence. So if we lack a reasonable expectation that ChatGPT will supply us with sound evidence, its 'use' becomes useless. If it becomes "crucial for students to factcheck all ChatGPT output during interaction with the system to identify potential biases or inaccuracies to construct an accurate understanding of the topic" (Rasul et al., 2023, p. 8), how many students are going to do that? And if students DON'T fact-check, what does that do to the quality of their work? Or the overall quality of academic writing?

We will not only mark students down for insufficient understanding, we will also ping them for using AI in their written work. The institutions I teach at require students to declare where and how they have used AI in their work. Lingard notes that academic publications are stating "that ChatGPT cannot be a co-author because it cannot take responsibility for the work, and they require that researchers document any use of ChatGPT in their Methods or Acknowledgements sections" (2023, p. 261). 

Just as I and others have noticed, Rasul et al. (2023, p. 3) point out that, yes, "ChatGPT can act as a research assistant, answering users’ questions based on the related literature [...], analysing data [, ...] serve as a writing assistant [,...] and provide writing support". However, "users should exercise caution as ChatGPT may be prone to hallucinations (Alkaissi & McFarlane, 2023) and fabricate references and quotes (Sallam, 2023; Shen et al., 2023)".

I continue to be concerned about AI. It needs to get much, much better before it can become a useful, reliable and valid tool.


Sam

References:

Lingard, L. (2023). Writing with ChatGPT: An illustration of its capacity, limitations & implications for academic writers. Perspectives on Medical Education, 12(1), 261-270.  https://doi.org/10.5334/pme.1072

Rasul, T., Nair, S., Kalendra, D., Robin, M., de Oliveira Santini, F., Ladeira, W. J., ... & Heathcote, L. (2023). The role of ChatGPT in higher education: Benefits, challenges, and future research directions. Journal of Applied Learning and Teaching, 6(1), 1-16. https://doi.org/10.37074/jalt.2023.6.1.29


Wednesday, 15 May 2024

The use of AI in academic writing

As learners, when we bend our mind to understanding concepts, we all read widely. We begin broadly, then read more deeply and narrowly, drawing on the work of experts in order to improve our understanding. We gain mastery through work over time. Part of our training is for us to evaluate the utility of theories, frameworks and models and to apply them in our practice; and to be assessed on how we do this. 

There are usually few meaningful shortcuts in our learning process. Learning is a “time-skill, where the ticking away of the unforgiving seconds plays a dominant part in both learning and application of the skill” (Canning, 1975, p. 277), now known as sequential and map learning, where a skill becomes implicit and automatic. For best retention, we need regular, focused practice to develop unconscious, implicit learning (Snyder et al., 2014, p. 162).

And it is the time-oriented nature of learning - mastery over a long period of time - which is why I think the use of AI in the academic world is going to be problematic. 

  • Problem 1: There are no shortcuts to learning. Putting a question into ChatGPT will not allow us to gain mastery. It will not build our academic critiquing muscles. Our comprehension will not grow. It will just mean we can ask questions. 

  • Problem 2: Lacking a bullshit detector. ChatGPT is a model for predictive text. In the brief amount of testing that I have done (read more here), I have often found that ChatGPT is inaccurate, misleading, or plain wrong. When the AI reaches the bounds of its ‘knowledge’ I think it reverts to 'type' and falls back on text prediction. It then gives us a string of seemingly logical next words: but it will not necessarily 'make sense' in the academic meaning of the words. Without us having our own personal understanding of the materials, we will not be able to judge whether the AI result is accurate ...or not.

  • Problem 3: No idea who the experts are. When we use ChatGPT, we lack those markers of academic writing which arise from having 'done the work'. The AI is rarely able to provide us with real citations and references to support the learning which arises from familiarity with the field. The process of mining for the views of qualified, careful researchers - professional enough to have had a paper pass peer review and be published - facilitates our learning, helps us to map our field, builds our critical thinking skills, and shows us where new trends are emerging.

  • Problem 4: Poor writing. If we don't practise our own writing, we will not get better at expressing ourselves. Like driving a car, an understanding of the rules harnessed with sound practice will improve our performance.

  • Problem 5: Not our original work. The expectation is that we OWN our own writing. We can use tools, but we need to (a) acknowledge that we have done so, and (b) ensure our academic writing is our own, not the product of someone else's AI software. If we use AI, I feel the real owner of the writing is the AI provider, not ourselves.
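The text-prediction behaviour described in Problem 2 can be illustrated with a toy model (a vastly simplified sketch of next-word prediction, not how GPT-class models actually work internally):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then always emit the most frequent follower. The output is plausible-looking
# regardless of whether it is true - the core of the 'bullshit detector' problem.
corpus = "the cat sat on the mat and the cat ran".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word: str) -> str:
    # Return the statistically likeliest next word, with no notion of truth.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # → cat ("cat" follows "the" most often in the corpus)
```

A model like this can only ever tell us what usually comes next, never whether what comes next is correct.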

Most academic institutions require students to state, on submission into plagiarism software, that assignments contain their own original work. This is a key issue, well explained by TurnItIn:

"Similar to contract cheating, using AI to write an assignment isn’t technically plagiarism. No original work is being copied". Actually, I am unsure about the validity of this: the creators of AI have paid plenty to create the AI software, and I feel that technically they may own the output. BUT there are also growing ownership disputes about the large language models, copyrighted materials, and so forth upon which various stripes of AI have been trained (Morriss, 2023). Perhaps it might be better to say that ownership is murky. "But at the same time, it isn’t the student’s original work. In many ways, AI writing and contract cheating are very much alike; in the realm of contract cheating, students submit a request to an essay mill and receive a written essay in return. With AI writers, the only difference is that a computer generates the work. To that end, using AI writing to complete an assignment and represent it as your own work qualifies as academic misconduct" (TurnItIn, 2024).

If we want to truly gain mastery, then we need to work at it. We must accept there are no shortcuts to the learning process. We must do the work. We have to read so we get to know the experts. We train our critical eye and hone our bullshit detector. We write to improve our writing. We own our output.


Sam

References:

Canning, B. W. (1975). Keyboard skill-a useful business accompaniment. Education + Training, 17(10), 277-278. https://doi.org/10.1108/eb016409

Morriss, W. (2023, December 15). Who owns AI created content? The surprising answer and what to do about it. Reuters. https://www.reuters.com/legal/legalindustry/who-owns-ai-created-content-surprising-answer-what-do-about-it-2023-12-14/

Snyder, K. M., Ashitaka, Y., Shimada, H., Ulrich, J. E., & Logan, G. D. (2014). What skilled typists don’t know about the QWERTY keyboard. Attention, Perception, & Psychophysics, 76(1), 162-171. https://doi.org/10.3758/s13414-013-0548-4

TurnItIn. (2024). What is the Potential of AI Writing? https://www.turnitin.com/blog/what-is-the-potential-of-ai-writing-is-cheating-its-greatest-purpose


Wednesday, 29 March 2023

ChatGPT hype is only hype

After all the hype about ChatGPT, I decided to have a go at using it. To say I am deeply disappointed is an understatement. 

OK. So I signed up, then asked the AI a question about post-linguistic methodology. The bot replied with logical-sounding, but apparently made-up, answers: they looked relatively OK, but were suspect enough for me to ask for sources. I got four sources (author names and titles), but could not find the actual papers, so I asked for APA references. The AI supplied a set of four different APA references to the sources and - though they too looked relatively OK - the problem arose when I went to locate, download and read the articles themselves. For example, after quite a bit of questioning, then asking for DOIs, I received yet another different set of four sources, including:

Preda, A. (2022). Post-Linguistic Social Science. Journal of Cultural Economy, 15(1), 1-10. https://doi.org/10.1080/17530350.2021.1998102

Looks fine, right? But I found that the articles the AI supplied simply did not exist. The bot had made the references up out of whole cloth. No article by that name; certainly not in that journal; not by that author (in any journal); and nothing similar even in that entire year. I checked the Google Scholar author profiles and no similar articles are listed. The DOIs were unallocated. It appears that the answers and the sources are bullshit, top to bottom.

I quizzed the AI, and it then apologised for making a mistake - and for all the other mistakes it had made, but only once it was cornered into having provided false information. Providing I was polite - and I was - the bot seemed to remain polite too (though I have heard anecdotally that the AI will return 'snippiness' if the biological user shows frustration).

So the whole thing appears to be a rort: and that is disturbing. If the AI was a person, I would say it 'lied' (yes, I am anthropomorphising). It seems, to me, that if the ChatGPT AI doesn't 'know' the next logical step, it makes up answers following its predictive text modelling.

But that then means that nothing it returns can be relied upon: we won't know if it has reached the bounds of its knowledge without testing everything ourselves. So if we cannot be certain that it is even 'roughly right', then we cannot rely on anything it says; what, then, is the point of it...?

Author and technology forecaster Jaron Lanier's comment was “This idea of [AI] surpassing human ability is silly because it’s made of human abilities”, going on to explain that comparing ourselves with AI is the equivalent of comparing ourselves with a car: "'It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner'" (Hattenstone, 2023). I found this a very leavening comment that considerably reframed my thinking. AI is a tool. And, at the moment, its output is blunt.

As a result, I really don’t see how this thing is going to revolutionise the world. Yet, anyway. And I am not the only one who thinks that ChatGPT "can just make stuff up" (Grove, 2023, citing Toby Walsh, Professor of AI, UNSW Sydney). So yes, it appears that ChatGPT indeed 'lies'. 

So the AI hype is seeming more and more like smoke and mirrors to me.


Sam

References:

Grove, J. (2023, March 16). The ChatGPT revolution of academic research has begun. Times Higher Education. https://www.timeshighereducation.com/depth/chatgpt-revolution-academic-research-has-begun

Hattenstone, S. (2023, March 23). Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’. The Guardian. https://www.theguardian.com/technology/2023/mar/23/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane


Monday, 27 May 2019

AI and the sky's a-falling 2

Henny Penny; Jacobs (1890, p. 182)
I have written on this topic before (here), but thought I would touch on another couple of issues affected by AI. As the World Economic Forum report says, success in the fourth industrial revolution relies "on the ability of all concerned stakeholders to instigate reform in education and training systems, labour market policies, business approaches to developing skills" (2018, p. vii).

Firstly, AI is already with us (Grant, 2018). We can see it in our smartphones every day with voice recognition; when using GPS to map a route; when verifying our banking; when using online chat. It will just become more seamless, and therefore more pervasive. At work, this could mean we spend more of our time planning wisely, and less time fire-fighting.

Health care and hospitality robots are a long way off yet. The complexity needed to truly cover the range of human movements - essential for work in human environments - is nowhere near the standard required. Robots are also phenomenally expensive. Flexibility will increase and cost will be driven down, but we are talking many more years before true commercialisation (Grant, 2018).

However, we are creating an underclass. There are people who are becoming less and less employable, with their skills getting increasingly out of step with what the world of work requires. The World Economic Forum state that "54% of all employees will require significant re- and upskilling. Of these, about 35% are expected to require additional training of up to six months, 9% will require reskilling lasting six to 12 months, while 10% will require additional skills training of more than a year" (2018, p. ix). Approximately 50% of future roles will require STEM qualifications (Grant, 2018). The big roles are predicted to be "Data Analysts and Scientists, Software and Applications Developers, and Ecommerce and Social Media Specialists" (World Economic Forum, 2018, p. viii). If we do want to shift the power to the people, education is the key, and strong and robust science, technology, engineering or mathematics training is essential. Whether that is force-fed in schools, or whether we sow seeds and encourage bite-sized training later, more like apprenticeship block courses, will be up to our educators to choose, country by country. But they will each need to have a strategy.

Lastly, I would like to mention Finland. They are getting something really right in STEM education. Yes, they are pretty much mono-cultural, but we need to look carefully at what they are doing, and see if we can do it too. Finnish teenagers spend fewer hours doing homework than many nations (2.8 hours per week), play more, and have only one set of national qualification exams (World Economic Forum, 21 November 2016). Yet they have a 99% graduation rate (WorldTop20, n.d.), and score better in maths and science than the rest of us.

These are complex social and economic issues. But they are navigable providing we don't get into a mindset of "the sky's a-falling" (Jacobs, 1890, p. 182) over AI, and instead deal with the actual problems: education, the dependency ratio, declining populations, and the fact that AI will take time to evolve.


Sam

References:

Friday, 24 May 2019

AI and the sky's a-falling 1

Jacobs (1890, p. 182)
Gosh, we humans are slow to recognise patterns. Once we all worked at home, doing menial labour on our tiny landholdings or on someone else's land. When the industrial revolution began, everyone was going to be out of work. The opposite happened: we realised that we could all go and work in a factory, and have some resources left over. We started getting ideas above our station.

So we move on to the computer age, and suddenly we were all going to be out of work again. But instead, there was more work, for even more people. Ditto for automation.

Now many people are calling "the sky's a-falling" again (Jacobs, 1890, p. 182), this time about AI. Oh, yes, but it is different this time. It will happen faster. We won't have time to adapt (Rayome, 24 January 2019).

Call me cynical, but I remember the 'paperless' office that was going to revolutionise the workplace. Didn't happen. Still isn't happening (although I am largely paperless myself, very few people or organisations are). I remember how automation was going to swamp us, that all our jobs could be done by programming. Everything would go via an automated call centre. End of the world. No work for anyone. Didn't happen. Still isn't happening.

Instead, we have moved to smartphones, and lots of people now build apps. The once terribly complex computer languages and logic have been simplified. Fewer errors happen. We continue to find more and more things for people to do, to engage with, to be challenged by, and to earn a living at.

Yes, some jobs will disappear, but many more will be created. This is utterly unverified, but I read somewhere that once there were something like 200 professions, and now it is more like 200,000 (actually, if any of you have any reliable sources and numbers for this, I would be very interested in hearing!).

There are also some interesting population trends. As soon as we earn over $10 per day, and child mortality falls in line with WHO guidelines, we stop having more than two children (see the Gapminder Foundation for more info). Then we need to consider the dependency ratio (Grant, 2018). This is the number of people who are not in work - retirees, those in education, at-home parents - who need to be supported by a shrinking pool of working-age people, as we continue to live much longer than the three years of paid retirement support that governments originally intended as a reward for our life-long efforts. With average life expectancy now around 80 years, governments cover roughly 15 years, rather than 3: a major fiscal blowout. In China they call this "4-2-1", meaning that one working child supports their two just-retired parents and their four long-retired grandparents (Goldstein & Goldstein, 2015). There is talk of this becoming "8-4-2-1" as life expectancy continues to increase.
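The arithmetic behind that fiscal blowout can be sketched roughly (the retirement age of 65 is my assumption; the ~80-year life expectancy and 3-year design figure are from above):

```python
# Rough sketch of the pension-coverage arithmetic (a retirement age of 65
# is an assumed figure; the other numbers come from the post).
retirement_age = 65
life_expectancy = 80
original_support_years = 3  # what pension schemes were originally designed to cover

actual_support_years = life_expectancy - retirement_age
print(actual_support_years)                           # → 15
print(actual_support_years / original_support_years)  # → 5.0 (five times the original budget)
```

Five times the intended coverage, per retiree, funded by proportionally fewer workers: hence the blowout.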

So, I am dubious about claims that the world will end because of AI. There are some economists who agree. Take Rainer Strack, for example: he has an impressive pedigree in HR consulting with the Boston Consulting Group, and is predicting a fairly significant workforce shortage in Germany by 2030 (Strack, 3 December 2014).

Denning (29 October 2014) cites research evidence indicating that new roles are created by private sector organisations which are less than five years old, a pattern which has held roughly since the 1980s. Even more interesting is that established organisations "more than five years old destroyed more jobs than they created in all but eight of those years".

So maturing companies automate. New ones hire people. There will not be enough people for the jobs we currently have.

Let's grow some entrepreneurs and worry less about whether the sky's a-falling.


Sam

References

Friday, 12 January 2018

UBI - Universal Basic Income

I have noted an increasing commentary around the planet about a universal basic income (UBI). A UBI is where a country's government pays all citizens a base salary that can be lived on - say $20k/$30k per annum for EVERYONE. Children from birth. People can go and get themselves a paying job as well, or run a company, or work for a charity, or have a family, or paint, or become an engineer, retire at 15, or read to the blind: but, because they are a citizen, they are fully supported throughout their lives by their country. I had thought that Switzerland was going to vote in a UBI a few years ago, but the proposal was defeated.

The idea is that this is funded from company taxes. Any company that trades in your country has to pay its due taxes for the privilege of selling to your people, creating waste, and so on. The government then passes those taxed company profits back to the citizenry. Robotics, AI - all those issues go away, as does social welfare to a large degree, as people do not need to be employed: they have enough to live on. The 'ordinary person' gets more than the minimum wage, but can choose to go and earn more if they want to. There is no personal income tax: this is funded from commerce. GST would probably carry on, though.

Of course, the trick to this system is getting companies to pay their fair amount of tax in the first place. If companies did pay their intended amount of tax instead of wasting resources in avoiding it, countries would have enough money in government coffers to cover a UBI easily. But we would have to globally clamp down on companies being able to avoid/evade local taxes - by being registered offshore, or in tax havens - for this to work.

It is an interesting idea, and one I hope to see develop over time. It is going to mean some quite radical change to inter-governmental co-operation though. And I can't see that happening in a hurry.


Sam

Monday, 20 January 2014

What Applicant Tracking Software (ATS) is

OK. So what is this "ATS" thing that is rapidly becoming a buzzword in the US, and a puzzle in some less teched-up markets?

ATS is an abbreviation for Applicant Tracking Software, sometimes known as a talent management system (TMS). OK, so that part's clear. An ATS is essentially a specialist CRM which handles job applications. It is the organisation's central recruitment database; it partners with job boards to get the hiring ads out there, and stores the applications that come back.

The database has some built-in specialist tools: specifically, artificial intelligence (AI) and natural language processing (NLP) for intelligent, guided 'semantic search'. One of the clever things the software does is look for keywords in the documents that applicants upload which match the job description specs of the roles being applied for, so that staff don't have to read every CV.

So what happens is that applicants upload their CVs, forms and application letters. The documents are automatically screened by the ATS against the preloaded role criteria, using AI and NLP. The ATS spits out a list of matches who get to the next stage (whatever that stage is - interview, secondary screening on different criteria, or whatever). Insufficient keyword matches, and the applicant doesn't make the cut.
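As a toy illustration of the keyword-matching idea (a hypothetical sketch only - real ATS products use proprietary AI and NLP, not simple word matching):

```python
# Toy CV screen: score an application by the fraction of role keywords
# found in the uploaded document (hypothetical example, not a real ATS).
def keyword_score(cv_text: str, role_keywords: set[str]) -> float:
    words = {w.strip(".,").lower() for w in cv_text.split()}
    return len(role_keywords & words) / len(role_keywords)

role = {"python", "sql", "agile", "leadership"}
cv = "Led an agile team; daily work in Python and SQL reporting."

score = keyword_score(cv, role)
print(f"{score:.0%} of role keywords matched")  # → 75% of role keywords matched
```

At a 75% threshold this applicant scrapes through; drop one more keyword from the CV and they silently don't make the cut - which is exactly why applicants need to mirror the job description's wording.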

The development of ATS has largely been driven by two factors: the economic downturn - with so many people out of work, companies flooded with desperate applicants needed a better way to manage the volume of applications - and AI algorithms reaching a level of sophistication that enables this to work. As The Resumator says, "the right ATS can give recruiters their time back, once stolen by the overwhelming demands of finding, handling and evaluating piles of resumes" (29 May 2013).

The proprietary software is currently pretty pricey, but it is saving large organisations loads of money in recruiting fees. However, the software is going to follow Moore's Law and get cheaper - fast. There are already some open source alternatives at http://www.linuxlinks.com/article/20091006153557925/HumanResourceManagement.html. Expect to see ATS applications at SMEs near you. Soon.

From the applicant end, there are some tricks to making an ATS application work for your clients, or else they won't make the cut. I will explore this in a later blog post.

Sam
read more "What Applicant Tracking Software (ATS) is "

Friday, 4 June 2004

Newsletter Issue 80, June 2004


Sam Young Newsletter

Issue 80, June 2004
Hi guys,
Business not as good as you would expect for this time of year? Then maybe the idea in the Workshop Your Skills article below might contain your ticket into an untapped market.
We ask the question Are You Organised? and look at a few tips to increase your productivity.
Don't forget, if you want to be taken off my mailing list, click here to send me a reply e-mail and I will remove your name.

Workshop Your Skills

Recently I read an article on workshops by an American businesswoman, Suzanne Falter-Barns. This article was about offering workshops as a way of attracting new clientele, which I thought was a great idea. She has kindly given me permission to reproduce her article in my newsletter.
So you've discovered your niche, completed your training, got your licences, made up your business cards, started your web site and got a good marketing plan together. But still, no one's exactly beating a path to your door. Wondering what you're doing wrong?
Chances are you're doing everything right. What may be missing is a broader chance for the public to really get a taste of who you are. You need to build relationships with these folks. Yet, how can you do that without actually coaching them first?
Enter the big solution: workshops. Holding workshops targeted to your niche is an excellent way to give your larger audience a real taste of what you do. The full 3-hour, or full-day format of a workshop gives your audience a chance to sit back and observe you at work. Not only that, if you've shaped your workshop to fit your niche, you'll find yourself with an excellent database of interested potential clients. You'll also be able to test the drawing power of your niche quite graphically, and learn the most effective ways to reach these folks.
One psychotherapist I know in New York City built a thriving practice simply by leading three workshops about Jung and dream analysis. An added perk: when you lead workshops, you get all kinds of terrific stories you can use in future articles, books, and speaking gigs. Three best-selling self-help authors I know actually lead workshops for this reason alone.
That said, there are a few key things that must be in place to turn your workshop into the client magnet that it can be.
  1. Give yourself and your workshop a brand name. Some of the most successful I know of are "The Ezine Queen", "The Comfort Queen", "Marketing Shaper", "The Publicity Hound", "Authentic Promotion", and "The Grok". These are own-able, distinctive names that let people know exactly who you are ... (well, maybe not The Grok). One thing's for sure... these folks, especially the Grok, are not easily forgotten.
  2. Teach with your heart on the line. The teacher who cares the most wins ... so come prepared, give it your all, and don't say good-bye until literally everyone in the group has had some kind of breakthrough.
  3. Hand out plenty of materials. Class notes, additional resources, your own articles, forms, great quotations, etc., are essential marketing tools. Every one of them should have all of your contact information on them, including your brand name, email, website, all phone numbers, and fax. Put them in a snappy folder with a sticker on the cover that bears, yes... your brand name ... and website. Then staple your business card to the inside of the folder. And be sure to include a well-done one-sheet or brochure about your coaching services.
  4. Give away a free coaching session during the break. Simply pass around a hat or jar to collect business cards as folks come in (they can also substitute name and email on paper). Then draw your winner just before the break, which gives you the opportunity to give your coaching a discreet plug. This technique is especially helpful if you're doing your workshop in a venue where you have not registered the class yourself, and you lack contact info for the group. That nice jar of business cards gives you fodder for your database.
  5. Don't oversell your coaching. Just mention it a few times lightly, and let the truly interested approach you. Better yet, instead of selling it, tell some stories (protecting confidentiality, of course) from your practice that demonstrate what you do. That gives you the power of attraction, as opposed to the stink of the hard sell. If you do your job effectively, they will come.
  6. Stress the importance of getting support at some point in your presentation. Support is one thing that most people really deny themselves, yet that is so critical to success. And what better support is there than coaching? Seed it lightly but firmly in your talk.
  7. Continue to do your workshop in any appropriate market. Nothing builds a base of clients like consistently getting out there. Your name gets heard, and your brand registers each time it does. You can travel locally or globally with this.
One last word of advice - make a point of researching different markets to find your perfect group. I do this by seeing where other comparable workshop leaders are doing their thing, and I observe how they market themselves to these groups. Then I set up comparable tours.
Suzanne Falter-Barns' free ezine, The Joy Letter, brings you a crisp, fresh burst of inspiration for your dream every week or two. Sign up at http://www.howmuchjoy.com/joyletter.html. For more information about how to create, book, fill and lead your own workshops, go to http://www.howmuchjoy.com/tangfacil.html.

Are You Organised?

Feeling like there are not enough hours in the day? Want a "wife" to help you keep on top of things & don't have one?! Then here are a few tips that might be of assistance to help you find your "wife" within;
  • Diary: Keep ONE diary and use it for ALL of your appointments. It doesn't matter if it is electronic or paper - just use whatever works best for you. The easiest way to fail is to use a paper diary for some parts of your life, your Outlook Calendar for work, the wall planner in the pantry for the family bits and your mobile phone's appointment book for the remainder...
  • Cluster: Do you know that one of the best ways to get more done in a day is to cluster your tasks? By grouping similar tasks together you get more bang for your buck. So how do you do that?
    When planning your day, organise your tasks into "like" activities and diarise time for answering email, invoicing and returning phone calls.
  • Organise: Organise your environment to fit your work pattern. If you are not a natural "filer", try keeping an archive box per client/project and put all the current materials in it; leave the boxes well labelled by your desk in a row for easy access. Just pick it up & go for meetings & you will always have everything you need with you. When things are where you expect them to be, you can focus on the task, not the logistics - saving you valuable time. 
  • Make Outlook work for you:
    • Calendar: If you are going to use the Outlook Calendar as your main diary, keep it up-to-date. To be portable, either print off each day/week's schedule and take it with you to enter new appointments, or get a PDA that syncs with Outlook (best thing since sliced bread! You have all your contacts right there with you, wherever you go...)
    • Contacts: Use Outlook to manage your contacts, and you can search by category, first name, organisation name or surname. And use categories to cluster your contacts according to how your mind works - clients, organisations, suppliers, friends/family (see Newsletter_051 to work through the "how to" of setting up categories).
    • Emails: Create folders for separate clients, projects or tasks, and set up rules so that your incoming mail gets automatically routed. All my private email goes to "Private" so that I can deal with it when I have time. We will work through how to set up folders in Newsletter 81 (so watch out for the next issue).
I hope these tips help you to help yourselves!

Artificial Intelligence

I was reading on the Ubiquity website (http://www.acm.org/ubiquity/interviews/v4i43_russell.html) an interview with Stuart Russell, Professor of Computer Science at Berkeley and author of "Artificial Intelligence: A Modern Approach" (Prentice Hall 2003).
We probably all have a pre-conceived idea that Artificial Intelligence (AI) will mean full-size humanoid robots who walk, climb, run, jump, manipulate objects and behave like Mr Spock versions of humans; blindly observing Isaac Asimov's three laws of robotics and getting into lots of trouble that AI specialists will have to sort out.
But what about new and powerful applications of technology like smart fridges & pantries that work out what consumables you have used and then send your order to your online supermarket? That is really useful, and useful technology generally gets adopted quickly and consumers don't really care how it works.
Could you explain how your DVD player works? We assimilate new technology very quickly and get used to it just... well, working. When we can buy smart microwaves, we as customers will quickly get used to the idea that our microwave should be able to tell when an item is defrosted after 3 minutes and turn itself off... despite us having programmed it to defrost for 20 minutes.
We are still very much in the early days of the integration of AI systems into human life. Examples include TiVo (allows recording of TV programs, searches for shows it predicts the viewer will like, edits out commercials etc), smart toasters and on-board computers in cars.
Stuart believes that we'll start to see some very complex scientific hypotheses being constructed on computer, using probabilistic modelling and machine-learning techniques. That means computers that can actually determine causation (just imagine what would happen to tobacco companies if the causal link between smoking and cancer were irrevocably established!). However, he also says that AI can't yet match the human intellectual enterprise, because human scientists are still built a damn sight better than artificial ones. It will take us a VERY long time to top what the human mind is capable of.
Additionally, Stuart believes that when we "start having humanoid household robots that are interestingly competent, that will change things... There's something about a human-shaped thing that hits you at a physical level. Right now they're incredibly expensive — probably one or two million dollars for the full-size one. Plus, you need to hire half a dozen full-time engineers to keep it working. But people have done demonstrations showing that it can operate a backhoe."
For more information on Stuart Russell, check him out at www.cs.berkeley.edu/~russell

TLAs for SMEs

Here are this newsletter's TLAs for you;
  • AI, Artificial Intelligence. An intelligent system is one whose expected utility is the highest that can be achieved by any system with the same computational limitations
  • USP, Unique Selling Proposition. Each advertising proposition should demonstrate a specified benefit to the customer, compelling them to act

Please feel free to email me with any TLAs that you want to get the bottom (meaning!) of.

Short+Hot Keys... and now tips
More shortcut keys for you, but this time we are shifting as well - all you can do with Alt, Shift and Ctrl;
  • Access "Insert the value from the same field in the previous record" Ctrl & ' (Apostrophe)
  • Excel "Copy a formula from the cell above the active cell into the cell or the formula bar" Ctrl & ' (Apostrophe)
  • Excel "Display the Style command (on Format menu); works in a spreadsheet" Alt & ' (Apostrophe)

Hot Linx
Do you recycle? Can you recycle from where you are? Make a quick stop at this site to see if your company can stretch global resources further at http://www.recycle.co.nz/how.html
Getting lots of virus and chain letters? As many of them are hoaxes, always verify them before you send them on to others at http://snopes.com/ or http://www.europe.f-secure.com/virus-info/
Go to Cameo Publications for a list of tips and hints on the "how to" of the American publishing industry at http://www.cameopublications.com/publishing/articlesandtips.html
Going tramping over winter? Then perhaps these American sites might have something for you; survival tips at http://www.surviveoutdoors.com/ and recipes at http://www.camprecipes.com/

                                Catch you again soon!! E-mail your suggestions to me here
read more "Newsletter Issue 80, June 2004"