Hiring professionals’ increasing interest in AI stems from the long history of psychometric assessment. It has been widely tested and shown that certain personality traits, cognitive abilities, and mental health conditions can predict an individual’s success in a particular role. However, in most countries it is unlawful to deny employment based on physical or mental disabilities, except for a narrow set of jobs where the law requires employers to test applicants for them. In the US, for example, the Americans with Disabilities Act prohibits employers from asking candidates about their physical and mental conditions, as this information is considered private.
With the emergence of AI, personality assessment has become more accessible than ever. Recruiters can now gauge a candidate’s suitability for a particular role at the most granular level by extracting signals from their social media accounts, facial expressions, and even voice.
For example, in a study published in the August 2020 edition of the Journal of Personality, Dr. Kazuma Mori and principal investigator Masahiko Haruno used an ML algorithm to infer 24 attributes, including socioeconomic status, bad habits, depression, and schizophrenia, from people’s Twitter activity.
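To make the mechanics concrete, the sketch below shows, in broad strokes, how a text-based trait predictor of this kind can be built. It is a minimal illustration, not the authors’ actual pipeline: the tweet samples and trait scores are made up, and TF-IDF features with ridge regression are common stand-ins for whatever features and model the study really used.

```python
# Minimal sketch of a text-based trait predictor (hypothetical data;
# TF-IDF + ridge regression stand in for the study's actual pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training data: each user's concatenated tweets paired
# with a self-reported trait score (e.g., a depression-scale value).
tweets = [
    "had a great day at work, feeling productive",
    "can't sleep again, everything feels pointless",
    "weekend hike with friends, amazing views",
    "skipped another meeting, too tired to care",
]
trait_scores = [1.2, 4.5, 0.8, 3.9]  # made-up questionnaire scores

# Word/bigram features mapped linearly onto the trait score.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(tweets, trait_scores)

# Score an unseen user's timeline the same way.
new_user = ["nothing ever goes right, staying home again"]
print(model.predict(new_user))  # predicted trait score
```

In practice, such models are trained on thousands of users whose trait scores come from validated questionnaires, and they can then be pointed at any public timeline, which is precisely what makes the consent question below so pointed.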
However, this is where problems arise. Is it ethical to analyze this data at such a deep level without applicants’ consent? Paradoxically, while every company knows it can’t directly use private information to make hiring decisions, nothing prohibits it from using tools that infer that same information from publicly available data.