
Urgent action needed to guard against risks and harms from Artificial Intelligence and immersive technologies

15 May 2024

The OECD Digital Economy Outlook 2024 (Volume 1), released on 14 May, reports that there is currently a “vacuum of comprehensive policy by governments” regarding immersive technologies, with data suggesting that policy action has so far focused on supporting their development. Similarly, rapid advances in AI’s capabilities are outpacing regulation and have “not yet been matched by assurances that AI is trustworthy and safe.”

“Swift action from governments is needed, as AI and immersive technologies continue to expand throughout the economy and society,” says Veronica Nilsson, General Secretary of TUAC. “Principles and codes of conduct are not sufficient; regulation is needed to safeguard human rights, protect democracy and ensure a technological future that benefits all.

“Artificial intelligence is already causing rapid change to the workplace and the pace of its growth is accelerating. We call on the OECD to guide countries in taking concrete action to implement the AI Principles and ensure that humans remain in control of the trajectory.”

— Veronica Nilsson, TUAC General Secretary

While the OECD declares that “long-term implications of rapidly advancing AI systems remain largely uncertain”, it is clear that governments need to deliver swift policy action to address the risks posed by AI to human rights, social cohesion and democracy.

The Outlook’s key findings include:

  • Generative AI is now so advanced that, in some cases, its output may be indistinguishable from that created by humans. Research suggests that there is no reliable way to detect AI-generated content, meaning that it could potentially be spread widely without detection, facilitating mis- and disinformation. While regulation is emerging in response to this risk, the Outlook observes that the lag in policy, implementation and enforcement may result in damage to the quality of public discourse and information online, which “may be difficult, if not impossible, to rectify in the years ahead.”
  • There has been a 53-fold increase in generative AI incidents and hazards since 2022, as reported in reputable global news outlets. The OECD defines AI ‘incidents or hazards’ as events or circumstances where the development, use or malfunction of one or more AI systems can lead, or has already led, to injury or harm to a person; disruption of the management and operation of critical infrastructure; violations of human rights; or harm to property, communities or the environment.
  • “AI could significantly disrupt labour markets”. The Outlook notes that expert research has raised concerns of increasingly capable AI replacing some high-skilled and high-wage tasks, leading to economic and social disruption. AI will also change the nature of many jobs by altering the skills required of workers.
  • AI could marginalise or exclude certain groups by echoing, automating and perpetuating social prejudices, stereotypes and discrimination in its outputs. As AI becomes more complex, it may exacerbate divides between advanced and emerging economies, creating inequalities in access to opportunities and resources.
  • The vast amounts of personal information in some AI training data raise concerns for privacy rights. AI can now infer personal information that was not shared with the model during its training, making predictions and drawing conclusions about individuals from the limited information provided.

The Outlook also identifies a number of opportunities, risks and regulatory questions related to virtual reality (VR) and other immersive technologies. These too are developing at a rapid pace and may be at a “tipping point” in terms of adoption. The use of VR technologies results in an unprecedented collection of data directly related to users’ environment and their bodies; 20 minutes in VR generates almost two million unique data points of body language. The Outlook observes that this could lead to the increased profiling and targeting of individuals, and an unprecedented monitoring and surveillance of individuals’ physiological responses and emotions.

Other risks discussed in the Outlook include impacts on health and safety, along with a long-term effect on the cognitive development of children. As with AI, these many concerns point to an urgent need for rapid and robust regulation to govern the development and use of VR and other immersive technologies.