AI opens up infinite possibilities. For HR teams, it can simplify global compliance. But one wrong step can land you in legal disputes and damage your reputation.
IT leaders everywhere are rolling up their sleeves and tackling AI adoption head-on. In a recent webinar, Maria Lees, Senior Director of IT at G-P, shared how HR teams can integrate AI while alleviating concerns about data security, bias, and privacy.
Lees joined G-P in 2023, right around the time we started building our AI-powered global HR agent, G-P Gia™. Gia is agentic AI for HR that can cut the cost and time of compliance by up to 95%. So Lees knows a thing or two about AI adoption, which she boils down to one crucial thing: data integrity.
“You can't leapfrog foundational steps,” Lees says. “Part of our role as IT leaders is to help leadership understand this journey. A leader might ask for an agentic AI tool that can make complex decisions on its own, and while that's a great goal and I love it, it's not necessarily realistic to get there without that foundation first. And to build that foundation, the first step is to establish trust in your data, and your data is only as good as the information you feed it. So if it's siloed, it's inaccurate, or it's messy — whatever you build on top of that is going to be completely flawed.”
Engineering trust: bridging the 3% gap
Lees is a firm believer in “trust through transparency.” In the AI era, companies have to be anchored by this mantra and be willing to show the sources and explain the “why” behind any AI recommendation. This is the foundation for product credibility and user confidence.
“Trust and transparency is something that we took to heart when we built our own AI,” Lees explains. “We knew that for anyone to trust Gia’s answers, they'd have to trust its foundation. And it's built on a ton of experience, a decade of G-P’s own global expertise. Its knowledge isn't random. It includes a million real-world scenarios and over 100,000 legally vetted articles and data from over 1,500 government sources.”
When users ask Gia a question, the output is always accompanied by G-P Verified Sources, meaning G-P experts have validated the information. Built for precision, Gia combines patent-pending AI with a proprietary RAG model that delivers results 10x better than the AI industry standard.
Despite the clear advancements in AI, gaining company-wide trust in the tools is still an ongoing challenge for many IT leaders. The 2025 World at Work report revealed that only 3% of executives would trust AI to make any decision. IT departments need to help leaders feel comfortable with adopting AI technologies.
In Lees’ words: “That 3% is really telling, and it makes perfect sense right now. There's a lot of unknowns out there. And there’s a lack of knowledge and understanding, and everybody's on a race to get somewhere. But it truly highlights a natural trust gap. And it shows that our challenge as leaders isn't just about implementing technology, but rather it's about building the confidence in it. So my advice is to think of it like a new, incredibly smart team member. You have to build trust incrementally.”
The human-in-the-loop validation framework
Not all AI tools are created equal, and not every company can build its own solutions, so a strong vetting process for third-party tools is essential. How can IT teams evaluate AI tools before rolling them out? “This question really gets to the heart of our philosophy here at G-P,” Lees says.
Lees uses her working relationship with Connie Diaz, Senior Director of HR at G-P, to showcase how IT teams introduce new AI technology to their company. “So when any team, especially HR, wants to adopt a new tool, my team's [IT] role isn't to be a gatekeeper at the end of the process. It's more to be a strategic partner from the very beginning. It's a true cross-functional collaboration that we take on,” she says.
IT takes a human-in-the-loop approach to validate every tool, where cross-functional expertise ensures compliance and confidence. The technical team assesses alignment with engineering standards, IT analyzes security risks, legal reviews for global compliance, the AI council confirms governance alignment, and HR makes the critical business and ethical case.
“Looking at this as a joint effort helps build that foundation of trust,” Diaz agrees. “So as an employer or as an executive, an AI tool like Gia isn’t just a black box, it's a tool that is actively and transparently overseen by both HR and IT experts. People are far more likely to trust the outputs that they're getting from it.”
HR provides assurance that the tool is used ethically and fairly, while IT ensures it’s secure and compliant. It's a shared responsibility that demonstrates a commitment to both people and data integrity.
Start with a clean data source
The timeline to roll out a new AI tool can take anywhere from weeks to months, depending on whether your data is immediately ready for use in an AI model or is incomplete and stored in incompatible formats that IT will need to extract. Lees recommends starting small. “Don't try to boil the ocean. Just pick one spot that you can tackle and get a quick win out of that. Start building your momentum from there,” she says.
The big questions are generally about data privacy.
First: Can the company's data be used to train the AI model?
Second: How can IT set up strict access controls so only the right people see the right information?
“Because we're addressing all these critical issues collectively and upfront, the conversation isn't about listing problems. It's about finding the actual solutions,” Lees says. “When we follow this process, it's a win-win for everyone. We can confidently approve an AI tool that's going to help the business… it's a process that one might look at as a hurdle, but we've turned that into a strong partnership to make sure that we're protecting G-P and our data.”
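The access-control question above can be sketched as a simple role-to-field policy. This is a minimal illustration only; the roles, field names, and `filter_record` function are hypothetical examples, not part of any G-P product.

```python
# Illustrative sketch: role-based field filtering for HR records.
# The roles, fields, and policy below are hypothetical examples.
ROLE_POLICY = {
    "hr_admin": {"name", "salary", "benefits", "performance"},
    "manager": {"name", "performance"},
    "employee": {"name"},
}

def filter_record(record: dict, role: str) -> dict:
    """Return only the fields the given role is allowed to see."""
    allowed = ROLE_POLICY.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"name": "Ana", "salary": 90000, "performance": "exceeds"}
print(filter_record(record, "manager"))  # {'name': 'Ana', 'performance': 'exceeds'}
```

An unknown role falls through to an empty set, so the safe default is to show nothing rather than everything.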
IT and HR: partners in data readiness
For teams beginning their AI journey, Lees stressed the importance of preparation. Deploying an AI model on incomplete data will produce bad outputs that damage the trust you're trying to build in your organization. Lees’ advice is that before a team selects a tool, the data foundation has to be the first checkpoint. And that’s where a close partnership between HR and IT becomes so important.
First, IT can help HR by cleaning and centralizing the data. This means consolidating information from different systems, such as payroll, benefits, and performance reviews, into a single source. Next, IT can help HR establish data governance and put strong access controls in place to protect sensitive employee information.
Creating clear rules for how data is collected and used is crucial for accuracy and privacy.
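The consolidation step can be sketched as merging per-system records into one profile per employee. This is a simplified illustration under assumed inputs; the system names, fields, and `consolidate` helper are hypothetical, not a real integration.

```python
# Illustrative sketch: consolidating HR records from separate systems
# (payroll, benefits, performance) into a single source keyed by employee ID.
# System names and fields are hypothetical examples.
payroll = {"E1": {"salary": 90000}, "E2": {"salary": 75000}}
benefits = {"E1": {"plan": "premium"}, "E2": {"plan": "standard"}}
performance = {"E1": {"rating": "exceeds"}}  # E2 is missing: a data gap

def consolidate(*systems: dict) -> dict:
    """Merge per-system records into one profile per employee ID."""
    merged: dict = {}
    for system in systems:
        for emp_id, fields in system.items():
            merged.setdefault(emp_id, {}).update(fields)
    return merged

profiles = consolidate(payroll, benefits, performance)
# E2 ends up with no rating, surfacing exactly the kind of incomplete
# data Lees warns against feeding into a model.
```

A real pipeline would also validate formats and flag gaps like E2's missing rating before any of this data reaches an AI tool.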
Cut compliance costs and time by up to 95% with Gia
Don’t let compliance hurdles slow your HR team. An expert-vetted AI tool can accelerate momentum and meet your IT team's high bar for data security, ethical transparency, and verifiable compliance. Gia is that tool.
Gia was recognized as a Top HR Product of 2025 by HR Executive. It’s agentic AI designed to streamline global HR by answering your toughest compliance questions across 50 countries and all 50 U.S. states.
No more legal counsel hurdles or costly billable hours. With Gia, global compliance is easy. Want to simplify global HR with expert-vetted guidance you can trust? Sign up for a free trial today.
To learn more from Maria Lees about how IT can successfully collaborate with HR, watch her full discussion with colleague Connie Diaz.