Academics get a hard time from practitioners for being somewhat dusty and out of touch with the 'real world.' I must confess that, as a mature student and a practitioner, I held the same point of view. So it is with some surprise that over the past 12 months I have found myself walking a path that bridges the gap between academia and practice.
I still practise organisation development in the real world, but it is a practice that requires a deep understanding of theory. I also spend some of my time writing books and academic research papers, and lecturing. Now it is true that there are parts of the academic world that still make you raise an eyebrow at how out of touch some academics are, but, like any siloed thinking, the 'them and us' attitude of practitioner versus academic fails to see the benefits that academics can bring to the workplace.
This weekend I went on a psychometric course to become licensed to deliver Level A and Level B psychometric tests, which I need as part of my practice. As an MSc student at Birkbeck, I was offered a really good deal on the course, and so I signed up. Usually psychometric courses are delivered by test publishers who have an interest in promoting their own tests, but the course that I went on wasn't aligned to any one publisher with a commercial stake in the course outcomes.
Now, when it comes to statistics my brain automatically goes into flight mode, but we spent a great deal of time on Friday going through standard errors of measurement, reliability and validity. These concepts basically seek to find out whether a test actually measures what it says it measures, and whether the score someone obtains on the test is close to their true score (since there are many factors that affect our test results).
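For readers who, like me, flinch at statistics, the core idea is small enough to sketch. In classical test theory the standard error of measurement is SEM = SD × √(1 − reliability), and you can put a rough band around an observed score with it. The figures below are illustrative assumptions I have made up for the example, not values from any real test manual:

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """Classical test theory: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

# Illustrative (hypothetical) figures:
sd = 15.0           # standard deviation of scores on the test
reliability = 0.90  # e.g. a test-retest or internal-consistency coefficient
observed = 110.0    # one candidate's observed score

sem = standard_error_of_measurement(sd, reliability)
# An approximate 95% band for the candidate's true score:
low, high = observed - 1.96 * sem, observed + 1.96 * sem
print(f"SEM = {sem:.2f}, 95% band is roughly {low:.1f} to {high:.1f}")
```

The point of the exercise: even with a respectable reliability of 0.90, the band around a single observed score is around plus or minus nine points, which is worth remembering before treating one test result as a precise measurement of a person.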
You probably use a range of psychometric tests in your organisation, but one question organisational leaders are unlikely to ask is how reliable and valid those tests are.
Organisations often choose tests based on the cost of delivering the test, the efficiency of administering it, and possibly the type of report it gives, but the reliability of the test? It turns out that some of the most popular tests are the least reliable. The publishers of these tests don't talk about reliability because, well, they have a product to sell. But if you found out that the accounting system you used was accurate and reliable only 40% of the time, it would draw gasps of disapproval… yet we don't ask these questions of the tests that we administer to people.
The other thing that I picked up over the weekend was on the subject of competencies. I have been in organisations where a set of competencies was introduced with great fanfare. But there is no evidence that competencies are relevant when it comes to predicting job performance.
The competencies an organisation adopts for recruitment, selection, development and career progression are usually written by consultants, or by internal project teams who lift them from the web or a book. But what evidence is there that competencies add any value to these people processes within an organisation?
Part of the problem is that very often organisations don't evaluate the practices they put into place. The other problem is that although there may be evidence that organisations with competencies have better retention or performance levels, no evaluation has been undertaken as to whether it is the competencies, or other practices introduced at the same time, that have contributed to the improvements.
For your information, cognitive ability tests and structured interviews are the best predictors of future job performance. If you have a competency framework, the likelihood is that the organisation also has structured interviews. It is not the competency framework that has contributed to the improvements, but rather the practice of introducing structured interviews.
How many millions of pounds have organisations invested in a competency structure, when all they had to do was train their managers in structured interview techniques, something they would have done anyway as part of introducing the competency framework?
We might bash academics for not living in the real world, but it might occasionally be worth revisiting some academic research on the practices that organisations engage in. You never know: you might just learn something that contributes real value to your organisation.
One last piece of advice: when choosing a psychometric test, ask first about validity and reliability. You might be surprised at how little evidence there is that the tests you have used for years predict what they say they predict, or at how uncertain those predictions really are.