
AI Is Rejecting You During Interviews: Here Is How to Fix It and Ace Them




How your job application is being rejected by AI in job interviews: the shocking truth, with examples.






How are you being rated by AI in job interviews?


As a certified career and résumé strategist, I have been researching AI one-way video interviews to teach my clients the backend of real-world hiring. I firmly believe that job seekers need to understand how hiring tools work in order to stand out; ignoring current hiring trends will smash your confidence. Don't shoot your shot without knowing what AI is measuring during the job application process. There is a lot of misinformation about AI rejecting applications, but each AI is set up by humans and customized to the employer's needs. I teach how to interview with AI in my mock interview sessions, and to do that I need to understand the current tools and features in the hiring world.

Let me give you the background on why I am writing this article. A couple of months back, I saw a job posting on LinkedIn from a top employer for a number of roles and internship programs, mostly technical. When I clicked on the application, it pushed out to the employer's AI vendor partner. Job seekers were being rated on a long list of unmeasurable soft skills. How are soft skills being monitored? Who is doing the quality check on the AI scoring system? How is the system set up in the backend? Who sets it up? Who is auditing the biases in the AI? That led me to tag the employer, and nobody got back to me. I need to know the backend system to train career professionals, and gatekeeping hiring information is not helping the hiring team either. There is no transparency in the AI hiring process, and job seekers are frustrated. Here is my interview on AI bias with the Globe and Mail.


One thing that stood out to me was the verbal communication rating. Are facial gestures, eye contact, and filler words being rated too? HireVue, a leading provider of software for vetting job candidates through algorithmic assessment, removed a controversial feature of its software: analyzing a person's facial expressions in a video to discern certain characteristics. I work with first-generation immigrants and newcomers to Canada, and the verbal rating impacts the overall score. Internationally trained professionals might not be aware of local jargon or pronunciation, and employers might compare their verbal communication with that of people born in Canada, which is contrast bias. This is where "no Canadian experience" gets in the way, even though rejecting a candidate for lacking Canadian experience is a human rights violation. In my interview with Global News, I shared my concerns about AI bias, especially verbal communication ratings for people whose first language is not English. AI is not foolproof, and it has biases implemented by humans at the backend. The question is: what protections are in place against AI bias and discrimination? The example above came from a top company that has been using the tool for hiring; it focuses on diversity, equity, and inclusion yet fails to see AI bias. You could see verbal ratings being scored, and this could impact international talent, newcomers to Canada, career professionals with English as a second language, people with different pronunciations or accents, people with speech disabilities or invisible disabilities, and neurodivergent candidates. To job seekers: read the job description and weave your answers with the CARL framework, relating them to the role only.


Amazon’s machine-learning specialists uncovered a big problem: their new recruiting engine did not like women. Amazon’s computer models were trained to vet applicants by observing patterns in résumés submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry. As The Guardian reported, the AI penalized résumés that included the word “women’s,” and Amazon ditched the recruiting tool that favored men for technical jobs.




AI Interviews: What They Are


In a corporate role, I used to manage French-speaking employees. French was their first language, they were born and raised in Canada, and I could still sometimes hardly understand them. Even within an English-speaking community, there are accent biases. How is AI evaluating verbal communication for someone whose speech or accent is not clear enough? When we talk about DEI (diversity, equity, and inclusion), we need to factor in potential AI bias and discrimination in the hiring world. One solution is providing accommodations, such as letting job seekers opt out or complete a written assessment instead of a video interview.

I have had a few clients go through AI one-way video interviews. Usually, the candidate gets a link with an expiry date to complete the interview. There is a timer of two to five minutes, and the candidate has to answer each question within that time frame. AI tools differ based on the configuration and features built by vendors: some give you three attempts and let you choose which answer to submit, while others give no such option and take your last answer. Candidates who cannot commit to that timing for whatever reason, including disabilities and other barriers, could be rejected in this process, and there is no option to appeal or follow up later. A solution for the employer could be to let candidates adjust the timer and edit their answers if they want. In a face-to-face interview there is no timer, and people interview differently without that pressure. So what kind of accommodation is in place? Who is going to review the AI videos? Is AI monitoring it? Is someone even going to watch or listen to the video? There is no whistle-blower hotline for AI discrimination against job seekers, and no appeal process to fight back. I was invited onto Nola Simon's podcast; listen to our in-depth AI conversation.



I couldn't find a compliance body or AI watchdog in Canada monitoring AI hiring tools. How are we holding vendors and employers accountable to make sure it is a fair process? This is where biases and discrimination are probably happening, and the people setting up the AI at the backend may not even know. How many rounds of interviews happen in the AI world before you get to a face-to-face? There is a lack of human interaction and connection as well. Other AI tools are used specifically for video interviews and may be scoring people on facial expressions and eye contact. Eye contact works differently in some cultures: you are not supposed to look directly into the eyes of someone who is older, or of a man. That is a cultural bias as well.

On Nov. 6, 2023, the Ontario government announced that it will introduce legislation that, if passed, will require employers to include expected salary ranges in job postings and to disclose whether artificial intelligence (AI) is used in the hiring process. In response to growing concerns about the ethical, legal, and privacy implications of AI, Ontario is proposing to require employers to inform job seekers when AI is used to inform decisions in the hiring process. The legislation does not outline what exactly employers need to disclose about AI hiring. The proposal would also make Ontario the first province in Canada to ban the use of Canadian work experience as a requirement in job postings or application forms, helping more internationally trained immigrants work in the fields they have studied. This change would help more qualified candidates progress in the interview process.




LinkedIn has an AI interview prep tool that gives you the option to record a video or write your answers and get feedback from your trusted circle. Check the step-by-step video here.


Personality Tests in AI Hiring


One thing that always bothered me was how accurate AI personality/psychometric evaluations really are for hiring. The traits measured include openness, conscientiousness, extroversion, agreeableness, and emotional stability. Candidates are also evaluated on other personality-related metrics, like humility and resilience. The algorithms analyze candidates' responses to determine personality traits, and the AI scores indicate how closely a candidate matches the characteristics the hiring team has configured as ideal for the position. As a hiring manager, I was given those scores by HR, but the hiring team is not trained on how to interpret them. Honestly, it didn't help much in working with direct reports, because the personality test is not about the fit between managers and new hires but about the role. I completed personality-based questions myself after I got laid off; it is similar to behavioural questions with multiple choice, and I found it repetitive and a waste of time. What many employers do is try to mirror their high-performing employees or top sellers. Say it's a sales job: the target personality scores mirror someone successful in the role, the matrix is built around that person, and new hires are compared to old hires. I have led a top-performing team, and personality changes with the environment, coaching, and leadership; a top seller can become a poor performer in a different environment. Who adjusts the personality scores once the people from the past leave the organization? Vendors keep popping up with AI personality tools, making it seem like this is the way to go. There is a gap in AI hiring: human needs are compromised to save recruiting costs. AI can make life easier, but it can make someone's life miserable when it is not configured correctly.
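To make the "matching" concrete, here is a minimal sketch of how such a score could be computed. This is illustrative only: the trait names, the ideal profile values, and the use of cosine similarity are my assumptions for the example, not any vendor's actual (proprietary) method.

```python
from math import sqrt

# Hypothetical "ideal" trait profile configured by a hiring team,
# scored on a 0-1 scale (e.g. mirroring a past top seller).
IDEAL_PROFILE = {
    "openness": 0.7,
    "conscientiousness": 0.9,
    "extroversion": 0.8,
    "agreeableness": 0.6,
    "emotional_stability": 0.8,
}

def match_score(candidate: dict) -> float:
    """Cosine similarity between a candidate's trait vector and the
    configured ideal profile. Returns a value between 0.0 and 1.0."""
    traits = list(IDEAL_PROFILE)
    a = [candidate[t] for t in traits]
    b = [IDEAL_PROFILE[t] for t in traits]
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

candidate = {
    "openness": 0.6,
    "conscientiousness": 0.8,
    "extroversion": 0.4,
    "agreeableness": 0.7,
    "emotional_stability": 0.6,
}
print(round(match_score(candidate), 3))  # → 0.971
```

Even this toy version shows the problem I describe above: the "ideal" numbers are set by humans, so a high or low match score reflects whoever was chosen as the benchmark, and nobody re-validates the profile once that person leaves the organization.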


MIT Technology Review examined AI predictions and job-matching scores and raised concerns about what exactly these algorithms are evaluating. AI can't pick up on culture add, nor can it assess soft skills; those skills are seen in action only after the candidate gets hired. Hiring decisions could be based on algorithms, and job seekers have one more thing to worry about on the way to landing a dream career.


The stakes are high for job seekers attempting to navigate AI tools and adapt to the technology.

Here are a few solutions for approaching AI interviews. How AI decides who gets hired and who moves on to a face-to-face interview is unknown to many career professionals. There needs to be transparency to avoid missing out on the right talent pool.



A couple of years ago I was approached by Hilke Schellmann, an Emmy Award-winning investigative reporter and journalism professor at NYU. Her work covering artificial intelligence has been published in The New York Times, The Guardian, the MIT Technology Review, and The Wall Street Journal, where she led a team investigating how AI is changing our lives. We had a chat about AI biases in the recruiting world, and I sent her screenshots of AI biases in recruitment tools that were alarming for job seekers. Hilke covers these biases in her book, The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now.

As per the book description, her book takes readers on a journalistic detective story: meeting job applicants and employees who have been subjected to these technologies, playing the AI-based video games that companies use for hiring, and investigating algorithms that scan our online activity to construct personality profiles, including whether we are prone to self-harm. She convinces whistleblowers to share the results of faulty AI tools, and tests algorithms that analyze job candidates' facial expressions and tools that claim to predict from our voices whether we are anxious or depressed. Schellmann finds employees whose every keystroke was tracked, and AI that analyzes group discussions or even predicts when someone may leave a company. Her reporting reveals in detail how much employers already know about us and how little we know about the technologies that are used on us.


The Algorithm tells an even bigger story, with Schellmann uncovering faulty algorithms and systemic discrimination against women and people of color that may have already harmed thousands of job seekers and employees. It advocates going beyond these tools to more thoughtfully consider how we hire, promote, and treat human beings, with or without AI. As Schellmann emphasizes, we need to decide how we build algorithmic tools in any industry and what protections we need to put in place in an AI-driven world.


Listen to the podcast episode where I interviewed Hilke about her findings here.























Sweta Regmi is a hiring manager from award-winning companies turned Founder and CEO of Teachndo, and a Certified Career & Résumé Strategist. She is a globally recognized top career expert, speaker, and LinkedIn Top Community Voice for Career Development, Job Search Strategies, Personal Branding, Public Speaking, Resume Writing, and Interviewing, with over a decade of experience empowering career professionals. Sweta's insights have been featured in CBC National News (prime time and local), Global National News (top story) and Global local news, CNBC, The Wall Street Journal, HuffPost, CTV News, City News, FOX 26 News, Daily Mail, BNN Bloomberg, the Globe and Mail (five times), Yahoo News, National Post, MSN, theaustralian.com.au, Forbes, Toronto Sun, LinkedIn News (80+ times), the award-winning LinkedIn Hello Monday podcast, LinkedIn Creators, Indeed, employment services, top colleges and universities, career and leadership conferences, and 100+ other top media outlets; see here. Regmi has also partnered with leading brands and organizations to elevate and spearhead career strategies, career sites, and outplacement, and to establish non-profit employment services partnerships. Her RBC Canadian Women Entrepreneur Awards nominations by Women of Influence in 2022 and 2023 further demonstrate her success as a recognized career expert in Canada. Regmi is also an Amazon best-selling author of 21 Resilient Women: Stories of Courage, Growth, and Transformation, a book recognized by libraries, ministers, and MPs in Canada.






Subscribe to our YouTube channel for career tips.





