Indeed’s Head of Responsible AI gets to the heart of what talent leaders actually need to know about today’s rapidly evolving technology and how to use it.
Key Takeaways
- Agentic AI, the next step up from generative AI, can autonomously complete tasks such as screening resumes, ranking candidates and initiating outreach.
- AI agents can help employers bridge the skills gap by mapping transferable skills, prompting candidates to take assessments and personalising learning and development at scale.
- Employers should focus on experimenting with the AI systems they have to improve the hiring and employee experience.
Good news for any talent leaders feeling overwhelmed by AI’s developmental onslaught: “If they are reading this article, they should not worry,” says Trey Causey, Indeed’s Head of Responsible AI. “Anyone even considering using AI is already ahead of the curve.”

In this interview, Causey cuts through the hype and breaks down what you need to know about the latest AI technology in hiring — generative AI (GenAI) and agentic AI — and the anticipated arrival of artificial general intelligence (AGI), or superintelligence. The conversation has been edited for length and clarity.
GenAI, such as ChatGPT, has been around for a while now. How does agentic AI differ, and what use cases do you see in hiring and retention?
Causey: What makes GenAI unique is that, as the name suggests, it generates new outputs and content based on a prompt or a set of instructions the user provides. That could be drafting a job description or handling customer service inquiries — anything where you need ideas.
Agentic AI is the natural next step. Many current GenAI systems are chat-based, and everything happens within the confines of that chat. But AI “agents” are like assistants that can do things for us. Their ability to act independently is what sets agentic AI apart.
For instance, you can set up an agent to review the daily applications to an open role; summarise those applications; identify the candidates who have the skills and experience you are seeking; and generate a report that ranks those candidates, with an overall summary, for you to review at the end of your day. Then, you can tell it to craft and send a personalised message to each candidate you approve, invite them to a screening call and alert you when they have responded.
Previously, the AI could not interact with other systems; it could not go and get those resumes unless you had somehow put them all into the context it had access to. But with agentic AI, you can keep adding steps to this chain, and it will work in the background while you take care of other things. We are working on streamlining all of that on Indeed.
[At FutureWorks 2024, Indeed CEO Chris Hyams announced an upcoming AI-powered product from Indeed that will provide jobseekers with the resources to develop a career path and help employers fill talent gaps. Stay tuned for more details in the near future.]
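To make that chain concrete, here is a minimal sketch, in plain Python, of the review-summarise-rank portion of the workflow Causey describes. Every function is an illustrative placeholder standing in for an ATS lookup or a GenAI call; none of this is Indeed’s product or API.

```python
from dataclasses import dataclass


@dataclass
class Application:
    name: str
    email: str
    resume_text: str


def fetch_new_applications(job_id: str) -> list[Application]:
    """Placeholder: would pull today's applications for this role from an ATS."""
    return []


def summarise(text: str) -> str:
    """Placeholder: would call a GenAI model to summarise a resume."""
    return text[:200]


def extract_skills(text: str) -> set[str]:
    """Placeholder: would call a GenAI model to list the skills in a resume."""
    return {token.strip(",.").lower() for token in text.split()}


def screen_daily_applications(job_id: str, required_skills: set[str]) -> list[dict]:
    """Summarise today's applications, keep likely matches and rank them for review."""
    rows = []
    for app in fetch_new_applications(job_id):
        matched = required_skills & extract_skills(app.resume_text)
        if matched:
            rows.append({
                "application": app,
                "summary": summarise(app.resume_text),
                "matched_skills": sorted(matched),
            })
    # Strongest matches first; a human reviews this list before any outreach goes out.
    return sorted(rows, key=lambda r: len(r["matched_skills"]), reverse=True)
```

The second half of the chain — drafting and sending personalised outreach after you approve candidates — would sit behind exactly the kind of human review step Causey cautions about next.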
Are there misconceptions or pitfalls unique to agentic AI that users should be aware of?
Causey: AI systems are prone to flaws and mistakes. Just because agentic AI is the next evolution does not mean it is perfect.
These agents are designed to take action independently, but that means the cost of mistakes is higher — if you reach out to a candidate, you cannot take that back. It is important to be intentional about what you enable AI agents to do, making sure you have a way to review their tasks and outputs. It would be misguided to immediately delegate all of your work to an agent right now.
It is like the early days of self-driving cars: You still need your hands on the wheel.
Indeed’s recent global report reveals that both employers and jobseekers support skills-first hiring but that limited time and resources are barriers. How can agentic AI help?
Causey: In the transition to skills-first hiring, the biggest puzzles are: How do we know that jobseekers have the skills we need? Do jobseekers know they have those skills? And how do we verify both sides of that equation in a way that both the jobseeker and the employer trust?
Imagine having an AI agent automatically look at resumes and not only extract the skills listed, but also use information it has on the back end to map other skills to the positions candidates held previously. It could even follow up with the jobseeker to say, “These skills are not on your resume. But from your experience in jobs A, B and C, you might have them. Would you like to take an assessment?”
Automating that back-and-forth avoids ruling someone out for not using the “right” language on their resume, and it frees the recruiter from having to find time to follow up on those skills and wait for a response before submitting someone for review. The assessments close the trust gap so the employer can quickly verify the essentials and get to interviewing.
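As a rough illustration of that skills-mapping follow-up, the sketch below infers unlisted skills from past job titles and drafts the assessment invitation. The skill map, titles and message wording are made-up assumptions for the example, not a real taxonomy or any Indeed feature.

```python
# Hypothetical mapping from past job titles to skills those roles typically imply.
IMPLIED_SKILLS = {
    "data analyst": {"sql", "excel", "data visualisation"},
    "customer support lead": {"crm software", "conflict resolution", "coaching"},
}


def infer_unlisted_skills(listed_skills: set[str], previous_titles: list[str]) -> set[str]:
    """Skills a candidate's past roles suggest but their resume never mentions."""
    implied: set[str] = set()
    for title in previous_titles:
        implied |= IMPLIED_SKILLS.get(title.lower(), set())
    return implied - listed_skills


def assessment_invite(candidate_name: str, inferred_skills: set[str]) -> str:
    """Draft the follow-up message inviting the jobseeker to verify inferred skills."""
    skills = ", ".join(sorted(inferred_skills))
    return (f"Hi {candidate_name}, these skills are not on your resume, but your "
            f"experience suggests you may have them: {skills}. Would you like to "
            f"take a short assessment to verify them?")


# Example: a former data analyst who listed Excel but never mentioned SQL.
print(assessment_invite("Sam", infer_unlisted_skills({"excel"}, ["Data Analyst"])))
```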
Indeed’s global survey also shows that workers increasingly value learning and development (L&D) opportunities when choosing employers, even over pay. How can employers use AI in L&D to better attract and retain talent?
Causey: AI opens up a lot of opportunities to make L&D available to employees on demand, at scale and at a relatively low cost. It can construct personalised learning plans and study materials, then create an assessment to see how well you are learning and provide opportunities to practice at your own pace.
But there are still social elements. It is difficult to stay accountable with online learning. Maybe it is nine at night, you just put your kid to bed and you really do not want to learn Python right now. That is where a manager can support and motivate. The human component is always key to success.
Can AI also help with work well-being? If so, how?
Causey: While we do not want to create a surveillance culture, I do think AI can be useful when a manager is overburdened and might not notice that one of their team members is becoming disengaged.
For example, imagine you have collected data on absences. An agent can regularly compile a report to identify employees who might need a break. There are so many ways we can aggregate data to make it easily accessible and actionable.
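Assuming absence data already exists as simple (employee, date) records, a report like that could be as small as the sketch below; the 90-day window and five-absence threshold are illustrative numbers, not a recommended policy.

```python
from collections import Counter
from datetime import date, timedelta


def wellbeing_report(absences: list[tuple[str, date]],
                     window_days: int = 90,
                     threshold: int = 5) -> list[tuple[str, int]]:
    """Flag employees whose recent absence count crosses a review threshold."""
    cutoff = date.today() - timedelta(days=window_days)
    recent = Counter(name for name, day in absences if day >= cutoff)
    # The output is a prompt for a human conversation, not an automated action.
    return [(name, count) for name, count in recent.most_common() if count >= threshold]
```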
How does artificial general intelligence, AI’s supposed next evolution, differ from the other forms of AI we have been discussing?
Causey: Artificial general intelligence is basically a system or set of systems that can outperform humans at any task. But there is no agreed-upon definition of what that looks like, so some jokingly say it is “whatever we don’t have yet.” It is more of an academic debate at the moment.
Most of the large AI labs, including the engineers actually working on these systems, have been shortening their timelines for when we will see AGI. This has led to some proposed AGI nightmare scenarios that I do not find super compelling. Just because something is very intelligent, or has the appearance of intelligence, does not mean it can do everything humans do.
So what do employers need to know about AGI right now?
Causey: My hot take is they do not need to care. Regarding the macroeconomic implications of AGI, so many outcomes are equally probable right now that you cannot do anything until there is more information. Whether or not AGI happens, and when, is much less important than what we are doing with the systems we have now.
Rather than spending time figuring out the right type of AI to use or where to use it, just start using AI in everything (within your company’s policy and the parameters provided to you, of course). An experiment-driven approach lowers the stakes and relieves the pressure of perfectionism. Using AI is like anything else — if you do not practice, you do not get good at it.