Artificial Intelligence (AI) – The OECD starts discussing general and policy principles

27 September 2018

The OECD is moving ahead with its work on Artificial Intelligence (AI) towards developing a Council Recommendation in the course of 2019 and creating an AI observatory as a pillar of the second phase of the Going Digital project.

On 24-25 September, the first meeting of the AI expert group at the OECD (AIGO) was held in Paris, with further meetings to follow in the coming months. The AIGO has a multi-stakeholder membership and brings together experts nominated by OECD delegations and advisory committees (including the TUAC), as well as a handful of invited experts. TUAC was represented by Anna Byhovskaya of its Secretariat and by Christina Colclough of UNI Global Union. The group is asked to contribute to the scope and content of the OECD principles on AI, which are expected to range from safeguarding core democratic and societal values, to operational parameters for AI systems, to guidelines for policy frameworks.

At both the national and regional levels (including the EU level), such multi-stakeholder discussions are currently underway. The implications of the increasing and widening use of AI, combined with other technologies, will be far-reaching for jobs and workers – from changing work patterns, to data control, to the replacement of tasks. Yet many of the opportunities and challenges of AI, and its full scope, cannot yet be grasped, as its development and spread are taking place at an unprecedented speed.

For TUAC, building safeguards into the design, development and use of AI will be crucial, not least with regard to employment. When it comes to societal impacts, many policy proposals start and end with education and training. Collective bargaining and just transition funds are not yet on the radar in some of these debates.

Public policy needs to look into the economic, social (including labour market), ethical and legal aspects of AI, as several risks arise. At the same time, it is important to keep a balance between applying and revising existing standards and regulatory frameworks. The TUAC published a first set of priorities in November 2017 and will flesh these out further:

  • develop operational, legal and ethical standards and avoid a fragmentation of rules and regulations;
  • set human-in-command requirements, including a right to explanation and the principle that robots and AI must never be “humanised”;
  • devise and finance transition strategies for workers to retain or change their job if the occupational task content is significantly altered by AI;
  • engage social partners in industrial and innovation dialogue processes towards ensuring the appropriate parameters for standardisation, fairer outcomes through collective bargaining, and the autonomy of workers in machine-to-human interactions;
  • anticipate what competencies are needed to complement tasks performed by cognitive technologies, and develop training policies and the underlying financing under a lifelong learning prism, with the participation of trade unions in governance, design, implementation and oversight;
  • ensure the quality of the data sets that AI is built on, as bad algorithms may lead to detrimental outcomes: among other issues, challenges to data ownership arise from the opacity of data processing and re-purposing; and look into ways to anonymise personal data (including privacy impact assessments);
  • audit machine learning techniques for bias and security risks, and discuss liability and consumer protection, as well as Occupational Health and Safety (OHS);
  • support public R&D, which currently lacks the resources to pursue longer-term goals compared with corporate laboratories, and to this effect encourage regional innovation eco-systems and clusters;
  • create incentives for, and make obligatory, simulation and validation systems when testing AI.