June 10, 2021

The European AI regulation: what impact will it have on the HR industry?


The European Commission’s draft regulation on the European approach to Artificial Intelligence was published on April 21, 2021.

The purpose of this initiative is to ensure that AI applications comply with the legislation on fundamental rights.

Today, there is already great concern about the complexity and opacity of many AI algorithms, especially with reference to the so-called “black box” problem. There is often no way, even for those who deploy this technology, to know the criteria used to reach a decision, which may therefore be the result of errors or biases.

The regulation on AI—as was the case with the GDPR, from which it has borrowed many ideas—will involve additional work for those who produce this technology, for the companies that import and distribute it, and for the companies that use it for their business.

Certain requirements will have to be met under penalty of fines that are even more substantial than those under the GDPR, reaching up to 6% of a company’s annual global turnover.

According to the European Commission’s intentions, the regulation—combined with other AI funding initiatives—will also serve to further develop this technology in Europe, in order to bring the continent into competition with the United States and China.

The regulatory framework could contribute to greater AI adoption in two ways. On the one hand, increased trust will increase demand for these systems in businesses and governments. On the other, detailed and shared regulations will allow AI vendors to access larger markets.

It should be stressed once again that this is a draft at this stage, for a regulation that will have to remain flexible in order to regulate a technology that is still evolving.

It must also be pointed out that for now, most vendors don’t appear to be able to meet the criteria it sets out, especially for systems considered to be high risk, which include a number of algorithms designed for the HR sector.

Let’s look at these in more detail.


High-risk AI systems and the HR industry

Annex III to the draft regulation on the European approach to Artificial Intelligence contains the list of AI systems considered high risk, because the decisions they make can have a major impact on people’s lives.

As we mentioned, this also includes AI systems designed for the HR sector, and in particular:

  • AI systems for recruiting, and in particular for the automatic screening and ranking of candidates.
  • AI systems for managing employees, providing suggestions on promotions, terminations, and performance evaluations.

There is also the issue of biometric detection systems. Annex III refers to systems designed for the biometric identification of people, both in real time and after the fact, and it is not clear whether this includes the systems companies use for attendance monitoring and access control. It should be recalled that the GDPR and the Italian Data Protection Authority (Garante) discourage the use of such tools in a number of ways, given that they allow the collection of sensitive data without a real need (in most contexts, the classic badge is enough).

AI systems that determine which training courses to enroll employees in could also be considered high risk.

The use of these HR AI tools by companies will have to be considered with great caution, as it will not be just the producers, but also the users, in this case the organizations themselves, that will have to meet the requirements.

High-risk systems are defined as those that present a high risk with regard to any of the following:

  • Risk to health
  • Risk to safety
  • Risk to fundamental rights and freedoms

As a result, both manufacturers and users must implement a range of technical and organizational measures to limit this risk.

They must, in particular, meet the following requirements.


Data governance

Algorithms will need to undergo a series of processes during the training, validation and testing phases to make sure they work as intended and in a safe manner.

In the training phase, the focus must be on the dataset used and on the definition of the characteristics and attributes that determine the application’s output. The dataset must be relevant to the purposes of the system, complete, and error-free.
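
By way of illustration, a minimal sketch of this kind of automated data-quality check might look as follows in Python, assuming a hypothetical candidates.csv training file with made-up column names:

    # Minimal data-quality check on a hypothetical training dataset.
    import pandas as pd

    df = pd.read_csv("candidates.csv")  # illustrative file name

    # Completeness: list the columns that contain missing values.
    missing = df.isna().sum()
    print("Columns with missing values:")
    print(missing[missing > 0])

    # Obvious errors: exact duplicate records.
    print("Duplicate rows:", df.duplicated().sum())

    # Plausibility: out-of-range values in a hypothetical numeric column.
    if "years_experience" in df.columns:
        bad = df[(df["years_experience"] < 0) | (df["years_experience"] > 60)]
        print("Implausible experience values:", len(bad))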

During the validation phase, attention should be paid to the phenomenon of overfitting: if the model gives too much weight to the training data, and in particular to its exceptions to the statistical rule, it will fail to generalize properly to new cases.
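
In practice, one common symptom of overfitting is a large gap between a model’s score on the training data and its score on held-out validation data. The sketch below shows that pattern with scikit-learn on synthetic data; the model choice and data are illustrative assumptions:

    # Detecting overfitting by comparing training and validation scores.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    train_acc = model.score(X_train, y_train)
    val_acc = model.score(X_val, y_val)

    # A large gap between the two scores is the classic symptom of overfitting.
    print(f"train={train_acc:.3f}  validation={val_acc:.3f}  "
          f"gap={train_acc - val_acc:.3f}")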

Finally, during the testing phase, it will be necessary to confirm the expected performance of the system.

A final key aspect is the fine-tuning of the algorithm. To ensure that it gives answers consistent with the subjects or groups of subjects on which it will be used, the algorithm will need to be trained on data that takes into account the geographical, behavioral, and functional characteristics of the context in which it will be applied. For example, algorithms trained on large American databases will probably not be suitable for use in Italy.


Technical documentation

Prior to being put on the market, an AI system must be accompanied by comprehensive documentation demonstrating its compliance with the regulation.

This documentation, which must be continuously updated, should provide an understanding of how the algorithm was developed and its performance across its life cycle.


Logging systems

HR applications that leverage artificial intelligence will need to be equipped with logging systems that record the most important events that occurred during their use, so that they can also be monitored after they are put on the market.

High-risk systems will need to be able to record information such as the date and time of the start and end of each usage session, and the database with which the data was compared.
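
As a rough sketch of what such a session record could look like, the snippet below appends one JSON entry per usage session to a log file. The field names are illustrative, not taken from the regulation:

    # Append one structured record per usage session to a log file.
    import json
    from datetime import datetime, timezone

    def log_session(logfile, database_id, started_at, ended_at):
        record = {
            "session_start": started_at.isoformat(),
            "session_end": ended_at.isoformat(),
            "reference_database": database_id,  # database the data was compared with
        }
        with open(logfile, "a") as f:
            f.write(json.dumps(record) + "\n")

    start = datetime.now(timezone.utc)
    # ... the AI system runs here ...
    log_session("usage.log", "candidates_db_v3", start, datetime.now(timezone.utc))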


Transparency

High-risk AI systems must be designed and developed with transparency in mind. Therefore, the logic by which they make decisions must be understandable to their users.

This means, for example, that the reasons for excluding a candidate from a selection process must be clear and reconstructible by anyone involved.
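
One way to make such a decision reconstructible is to use an interpretable model, where each feature’s contribution to the outcome can be read off directly. The sketch below does this with a linear model; the features, training data, and outcomes are entirely made up:

    # Reconstructing a screening decision from a linear model's coefficients.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["years_experience", "degree_level", "skills_match"]
    X = np.array([[2, 1, 0.4], [8, 2, 0.9], [1, 0, 0.2], [5, 2, 0.7]])
    y = np.array([0, 1, 0, 1])  # illustrative past screening outcomes

    model = LogisticRegression().fit(X, y)

    candidate = np.array([3, 1, 0.5])
    contributions = model.coef_[0] * candidate  # per-feature contribution
    for name, value in sorted(zip(features, contributions), key=lambda t: t[1]):
        print(f"{name}: {value:+.3f}")
    print("decision:", "advance" if model.predict([candidate])[0] else "reject")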

Any risks the system poses to the health, safety, and fundamental freedoms of users must also be made clear.

Using clear language is also a recurring theme in the GDPR.


Human supervision and training

High-risk AI systems must, by design, allow for human intervention, or at least for an appeal to a human.

In short, human beings must be able to oversee and control the system’s behavior.

This must be made possible through a set of operational constraints built into the algorithm’s decision-making process that the system itself cannot override.

All this implies that public administration bodies and businesses will need to provide for and train personnel with the appropriate skills to intervene in the operation of an AI system when it appears erroneous or discriminatory, or to decide that its use is not appropriate in the circumstances.

The regulation also mentions the possibility of using a “stop button” or a similar procedure to interrupt system operation.
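
A minimal sketch of how these two safeguards, escalation to a human reviewer and a stop button, could be wired into a screening workflow (the threshold and all names are illustrative assumptions):

    # Human-oversight constraints: low-confidence decisions go to a human,
    # and a "stop button" flag halts the system entirely.
    CONFIDENCE_THRESHOLD = 0.85
    system_stopped = False  # flipped to True by the human operator's stop button

    def decide(confidence: float) -> str:
        if system_stopped:
            raise RuntimeError("System halted by human operator")
        if confidence < CONFIDENCE_THRESHOLD:
            return "escalate_to_human_reviewer"  # human in the loop
        return "auto_advance"

    print(decide(0.92))  # auto_advance
    print(decide(0.60))  # escalate_to_human_reviewer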

These are likely to be new skills that will need to become part of the HR department’s technical toolbox, in collaboration with the company’s IT department.


Accuracy, robustness and cybersecurity

AI-based HR systems must also be evaluated for accuracy, correctness and fairness in their algorithmic output.

Such a system must also offer strong resilience against its own limitations, errors, failures, inconsistencies, and unforeseen situations. It will need redundancy solutions, such as backups and fail-safe plans, to reduce and mitigate the resulting risks. And it will need to show the same level of resilience against malicious actions that could compromise security and cause harmful behavior.

Then there is the issue of cybersecurity, and thus of resilience to cyber threats. In particular, attention must be paid to “data poisoning,” i.e. the risk that data, especially training data, could be “polluted.”
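
One simple, partial safeguard against this is to record a cryptographic hash of the training dataset when it is validated, and to verify it before every retraining run so that any later tampering is detected. A sketch, with an illustrative file name and a placeholder for the recorded digest:

    # Detect tampering by comparing a dataset's hash with an approved value.
    import hashlib

    def file_digest(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    APPROVED_DIGEST = "..."  # placeholder: recorded when the dataset was validated

    if file_digest("training_data.csv") != APPROVED_DIGEST:
        raise RuntimeError("Training data changed since validation: "
                           "possible poisoning or corruption")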

All of this is to ensure that high-risk systems will generate correct output that is not distorted by internal or external threats.


Conclusions

The draft regulation, expected to reach its final form in the next few years, puts the focus on the ethical and opacity-related issues that have dominated discussions about artificial intelligence and prompted many companies to back off and divest from such systems, waiting for a more mature and transparent technology.

In the future, EU companies will need to take care to choose vendors that comply with the regulation, and to weigh whether an HR AI system is really needed at all: using one for needs that don’t strictly require it means exposing the organization to unnecessary risk.



