April 11, 2023 By Jennifer Kirkwood 3 min read

The move towards monitoring HR tools and applications for bias is gaining traction worldwide, driven by various global and domestic data privacy laws and the US Equal Employment Opportunity Commission (EEOC). In line with this trend, the New York City Council has enacted new regulations requiring organizations to conduct yearly bias audits on automated employment decision-making tools used by HR departments.

The new rules, which passed in December 2021 and are now entering their enforcement phase, require organizations that use algorithmic HR tools to conduct a yearly bias audit. Under the law, noncompliant organizations may face fines of USD 500 to USD 1,500 for each violation.

To prepare for this shift, some organizations are developing a yearly evaluation, mitigation, and review process. Here’s a suggestion for how that might work in practice.

Read the AI governance e-book

Step one – Evaluate

To have their hiring and promotion ecosystems evaluated, organizations should take an active approach, educating their stakeholders on the importance of this process. A diverse evaluation team drawn from HR, data, IT, and legal can be crucial for navigating the evolving regulatory landscape around AI. This team should become an integral part of the organization’s business processes. Its role is to evaluate the entire sourcing-to-hiring process and examine how the organization sources, screens and hires internal and external candidates.

The evaluation team should assess and document each system, decision point, and vendor by the population it serves, such as hourly workers, salaried employees, different pay groups, and countries. Although some third-party vendor information may be proprietary, the evaluation team should still review these processes and establish safeguards for vendors. Even where AI is proprietary, transparency is crucial, and the team should work to embed diversity, equity, and inclusion in the hiring process.
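The inventory described above can be kept as a simple structured record per tool. The sketch below is purely illustrative, assuming hypothetical tool and vendor names and a made-up record layout; it shows how an evaluation team might flag which automated tools are missing a current-year audit.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HRToolRecord:
    """One entry in the evaluation team's inventory of automated HR tools."""
    tool_name: str
    vendor: str
    decision_point: str            # e.g. "resume screening", "promotion ranking"
    populations: List[str]         # e.g. ["hourly", "salaried"]
    jurisdictions: List[str]       # e.g. ["NYC", "EU"]
    uses_ai: bool
    last_bias_audit: Optional[str] # ISO date of most recent audit, if any

def needs_audit(record: HRToolRecord, audit_year: int) -> bool:
    """Flag tools that use AI/automation but lack an audit in the given year."""
    if not record.uses_ai:
        return False
    return (record.last_bias_audit is None
            or not record.last_bias_audit.startswith(str(audit_year)))

inventory = [
    HRToolRecord("ResumeRanker", "ExampleVendor", "resume screening",
                 ["salaried"], ["NYC"], uses_ai=True, last_bias_audit=None),
    HRToolRecord("ShiftScheduler", "ExampleVendor", "scheduling",
                 ["hourly"], ["NYC"], uses_ai=False, last_bias_audit=None),
]
due = [r.tool_name for r in inventory if needs_audit(r, 2023)]
print(due)  # prints ['ResumeRanker']
```

Keeping the inventory in a machine-readable form like this makes the yearly re-evaluation a query rather than a fresh discovery exercise.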

Step two – Impact testing

As governments around the world implement regulations on the use of AI and automation, organizations should evaluate and revise their processes to address compliance with new regulations. This means that processes using algorithmic AI and automation should be carefully scrutinized and tested for impact according to the specific regulations in each state, city, or locality. With rules varying widely across jurisdictions, organizations should stay informed and comply with the requirements to avoid potential legal and ethical consequences.
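One widely used impact metric in employment-selection testing is the impact ratio: each group's selection rate divided by the highest group's selection rate, with ratios below 0.8 flagged under the EEOC's four-fifths rule of thumb. The sketch below is a minimal illustration with invented numbers, not a substitute for the specific calculation any given jurisdiction's rules may mandate.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected (hired/advanced)."""
    return selected / applicants if applicants else 0.0

def impact_ratios(groups: dict) -> dict:
    """groups maps group name -> (selected, applicants).
    Returns each group's selection rate divided by the highest rate."""
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening outcomes for two applicant groups
data = {"group_a": (48, 80), "group_b": (24, 60)}
ratios = impact_ratios(data)
# group_a rate 0.6; group_b rate 0.4, so group_b's ratio is 0.4/0.6 ≈ 0.667
flagged = [g for g, r in ratios.items() if r < 0.8]  # four-fifths threshold
print(flagged)  # prints ['group_b']
```

Running this kind of check per decision point and per population segment gives the evaluation team a consistent signal for where deeper auditing is needed.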

Step three – Bias review

After the evaluation and impact testing are complete, the organization can begin the bias audit itself, which should be conducted by a neutral algorithmic institute or third-party auditor and may be required by law. It is important to choose an auditor that specializes in HR or talent and in trustworthy, explainable AI, and that holds RAII Certification and DAA digital accreditation. Our organization is ready to assist companies in becoming data-driven and addressing compliance. If you need any help, feel free to contact us.

Data and AI governance’s role

A proper technology mix can be crucial to an effective data and AI governance strategy, with a modern data architecture such as data fabric being a key component. Policy orchestration within a data fabric architecture is an excellent tool for simplifying complex AI audit processes. By incorporating AI audit and related processes into the governance policies of your data architecture, your organization can gain an understanding of the areas that require ongoing inspection.

What’s next?

At IBM Consulting, we have been helping clients set up an evaluation process for bias and other areas. The most challenging part is setting up the initial evaluation and taking inventory of every piece of technology and each vendor the organization works with to find automation or AI. However, setting our HR clients up on a data fabric can help to make this step smoother. A data fabric architecture offers transparency into policy orchestration, automation and AI management, while monitoring user personas and machine learning models.

Organizations should understand that this audit is not a one-time or isolated event. It’s not just about the regulations a single city or state is enacting. These laws are part of a continuing trend of governments stepping in to mitigate bias, establish ethical AI use, keep private data private, and reduce the harm done when data is mishandled. Therefore, organizations must budget for compliance costs and assemble a cross-discipline evaluation team to develop a regular audit process.

Learn more about an AI HR/Talent Strategy

