Saturday, September 4, 2021

The Missing Link in Europe's AI Strategy

Europe can become a global leader in artificial intelligence, but only if it protects its citizens and involves workers in the regulatory and deployment process. In that regard, the European Commission’s recent draft regulation leaves much to be desired.

Aida Ponce Del Castillo



BRUSSELS – The European Commission’s strategy for artificial intelligence focuses on the need to establish “trust” and “excellence.” 

The recently proposed AI regulation, the Commission argues, will create trust in this new technology by addressing its risks, while excellence will follow from EU member states investing and innovating. 

With these two factors accounted for, Europe’s AI uptake supposedly will accelerate.

Unfortunately, protecting EU citizens’ fundamental rights, which should be the AI regulation’s core objective, appears to be a secondary consideration; and protections for workers’ rights don’t seem to have been considered at all.

AI is a flagship component of Europe’s digital agenda, and the Commission’s legislative package is fundamental to the proposed single market for data. 

The draft regulation establishes rules concerning the introduction, implementation, and use of AI systems. 

It adopts a risk-based approach, classifying AI uses as posing unacceptable, high, limited, or low risk.

Under the proposal, AI systems deemed “high-risk” – posing significant risks to the health and safety or fundamental rights of persons – are subject to an ex ante conformity assessment to be carried out by the provider, without prior validation by a competent external authority. 

Requirements include high-quality data sets, sound data governance and management practices, extensive record-keeping, adequate risk management, detailed technical documentation, transparent user instructions, appropriate human oversight, explainable results, and a high level of accuracy, robustness, and cybersecurity.

The Commission says that its definition of AI, as well as the risk-based approach underpinning the draft regulation, is based on public consultation. 

But the fact that industrial and tech firms constituted an overwhelming majority of the respondents to its 2020 AI White Paper suggests an exercise that is far from democratic. 

These businesses, while pretending to promote knowledge, science, and technology, steered the regulatory process in a direction that serves their interests. 

The voice of society, in particular trade unions, was drowned out.

The regulation has several shortcomings. 

Among them are the Commission’s narrow risk-based approach, the absence of a redress mechanism, the failure to address the issue of liability for damage involving AI systems, and a reliance on regulatory sandboxes for providing “safe” environments in which to test new business models. 

The draft also fails to deliver from a worker-protection perspective.

One possible way forward would be an ad hoc directive focused on AI in the context of employment, which would protect workers (including those in the platform economy) and enable them to exercise their rights and freedoms on an individual or collective basis.

Such a directive should address several key issues. 

For starters, it should set out employers’ responsibilities for preventing AI risks, just as they are obliged to assess occupational health and safety hazards. 

AI risks extend further, because they include possible abuses of managerial power stemming from the nature of the employment relationship, as well as other risks to workers’ privacy, fundamental rights, data protection, and overall health.

Safeguarding worker privacy and data protection is equally vital, because AI is hungry for data and workers are an important source of them. 

The EU’s General Data Protection Regulation (GDPR) is a powerful tool that, in theory, applies to workers’ data in an employment context, including when these are used by an AI system. 

But in practice, it is almost impossible for workers to exercise their GDPR rights vis-à-vis an employer. 

The EU should introduce additional provisions to ensure they can.

Making the purpose of AI algorithms explainable is important, too. 

Here, firms’ workplace transparency provisions will not protect workers. 

Instead, employers, as users of algorithms, need to account for the possible harm their deployment can do in a workplace. 

The use of biased values or variables can result in workers being profiled, specific individuals being targeted, and people being categorized according to their estimated “risk level.”

Another priority is ensuring that workers can exercise their “right to explanation.” 

The implication here is that employers would be obliged to consult employees before implementing algorithms, rather than informing them after the fact. 

Moreover, the information provided must enable workers to understand the consequences of an automated decision.

The new ad hoc directive should also guarantee that the “human-in-command” principle is respected in all human-machine interactions at work. 

This involves giving humans the last word and explaining which data sources are responsible for final decisions when humans and machines act together. 

Trade unions should be considered as part of the “human” component and play an active role alongside managers, IT support teams, and external consultants.

Furthermore, EU lawmakers must prohibit algorithmic worker surveillance. 

Currently, worker monitoring is regulated by national laws that often predate the GDPR and do not cover advanced, intrusive people analytics. 

AI-powered tools such as biometrics, machine learning, semantic analysis, sentiment analysis, and emotion-sensing technology can measure people’s biology, behavior, concentration, and emotions. 

Such algorithmic surveillance does not passively scan workers; rather, it “scrapes” their personal lives, actively builds a picture of them, and then makes decisions about them.

Lastly, workers need to be able to exercise agency by becoming AI-literate. 

Teaching them technical digital skills so that they can operate a particular system is not enough. 

Understanding AI’s role and its effect on their work environment requires workers to be informed about, educated on, and critically engaged with the technology.

Regulating AI systems, in particular those deemed high-risk, should not be based on their providers’ self-assessment. 

Europe can become a global leader in the field and foster genuine public trust in and acceptance of this emerging technology, but only if it effectively protects and involves its citizens and workers. 

No “human-centric” AI will ever exist if workers and their representatives are unable to flag up the technology’s specific employment-related risks.

In that regard, the Commission’s draft regulation leaves much to be desired. 

The European Parliament and EU member states must now act and, in particular, integrate worker protection into the final version of this key regulation.


Aida Ponce Del Castillo is a senior researcher at the Brussels-based Foresight Unit of the European Trade Union Institute. 
