
The Complex Economics of Artificial Intelligence

Economic debates about Artificial Intelligence resemble the science-fiction battlefield in James Cameron’s Terminator saga. They are dominated by robots.

Some economists believe that these robots (the embodiment of AI systems and automation) will transform our economies like steam or electricity did in previous economic eras, with the difference that more jobs will be displaced this time because AI systems can increasingly automate intellectual tasks as well as physical ones. Others are more sanguine, and believe that AI systems will boost productivity, augmenting existing jobs and catalysing new ones.

All seem to assume that AI systems will “succeed”: they will be deployed across the economy, lifting productivity and increasing innovation. The question is how fast this will happen, and with how much disruption.

But is this assumption warranted?

A steady stream of AI controversies and revelations in the media and scholarly work raises important doubts. They include algorithmic failures in social media, the criminal justice system and self-driving cars, manipulation of users, degraded conditions for workers, algorithmic biases against minorities, and concerns about the brittle, opaque and inefficient nature of state-of-the-art AI (deep learning) systems. This timeline of algorithmic controversies over the last 12 months, created by Varoon Mathur at the AI Now Institute, illustrates the scale of the problem [2]. Meanwhile, the much-vaunted robotic job apocalypse is nowhere to be seen [1].

Algorithmic controversies in 2018 (via @VaroonMathur at AI Now Institute).

In a new essay, I show that these controversies and puzzles are manifestations of AI’s economic complexity: the fact that its impacts are not linear but unfold through a complex system full of interdependencies...

  • between investments and processes inside organisations: how do you deploy AI systems to enhance their benefits and reduce their risks?
  • between the behaviours of different organisations: how can you trust other organisations to deploy AI systems in ways that don’t exploit you or put you at risk?
  • between social groups: will the benefits and costs of AI be fairly distributed if vulnerable groups are less able to influence where and how it is deployed?
  • between generations: what if the scientific, technological and business decisions we make today lock us into mediocre, divisive and unsafe AI trajectories?

In order to manage these complexities, economists and policymakers first need to recognise that AI is not a neutral technology: it can evolve following different trajectories, some of which are more socially desirable than others. Regulation and ethics have a role to play in steering AI in a desirable direction, but so do Research and Innovation (R&I) policies, which should prioritise (societally) ‘better AI’ over ‘just more AI’. This is a key argument made by scholars of AI failure which, I argue, doesn’t just have an ethical foundation but also an economic one [3].

The rest of this blog considers different dimensions of AI complexity and the principles of a research and policy agenda, already being realised in Nesta’s work, to tackle the challenges that they raise (see the figure below for a summary, and the annex for a brief discussion of definitions).

A summary of the different forms of economic complexity created by AI systems.

Building algorithmic architectures in organisations

First, the arrival of AI creates organisational complexity: AI innovations have uncertain impacts. They offer the promise of more and better decisions, but they also make mistakes and create accidents when the environment changes in unexpected ways. Their adopters need to acquire new skills and infrastructures, modify their processes and put in place systems for supervision and error detection where humans keep an eye on the machines, and machines keep an eye on the humans.

But will these work? And what happens if they don’t?

If estimates of the benefits, costs and suitable strategies for AI deployment are biased by euphoria or pessimism, this could result in disappointing AI applications with surprising failures (as we have seen in social media and self-driving cars), or in underinvestment in AI despite its potential benefits. Differences in perceptions and attitudes across sectors could also create divides in AI deployment: industries such as technology or advertising that are more optimistic about AI systems (and perhaps naive about their downsides) could experience AI bubbles, while sectors such as health or government, where the stakes of algorithmic mistakes are higher, lag behind. Off-the-shelf AI solutions and algorithms developed in other industries might be less relevant for their needs.

If AI experiments create spillovers that other organisations can free-ride on, this could discourage those experiments, or drive the organisations that carry them out towards secrecy, or towards seeking control over unique datasets, computing infrastructures or machine learning skills that are hard to imitate or can even be profitably sold to imitators. This will widen the divide between AI leaders and followers, and further increase the market concentration already present in many industries.

This organisational complexity should be addressed through experimentation and evidence. This will involve rigorous research and organisational experiments to measure the impact of AI deployment and its side-effects, and to identify the skills and practices that complement it, drawing on new combinations of data and methods (as we are already doing through our AI mapping work at Nesta), and widely sharing findings to facilitate collective learning. This could be done by drawing on models pioneered by Nesta such as our Digital R&D funds, which turn experiments with digital technologies into learning sites that are thoroughly evaluated to maximise learning by others. Another promising strategy is the use of Randomised Controlled Trials for innovation supported by the Innovation and Growth Lab (perhaps it is time for an independent Artificial Intelligence Growth Lab that tests the impact of different algorithmic interventions to determine what works and what doesn't?).

More experimentation and evidence are particularly important in areas where AI could create substantial public benefits but where barriers to deployment are high because of a lack of suitable business models, concerns about AI dangers and a shortage of skills. Nesta is already working to identify good practices in the design of Collective Intelligences that combine humans and algorithms to improve decision-making, and setting out the questions that need to be answered before deploying AI in the public sector. Some of the innovation challenges and missions around AI being put in place by the UK government and the European Commission could fulfil a similar demonstrator role if they are designed with the explicit goal of removing uncertainty and enhancing learning around new applications of AI.

Pruning information thickets in AI markets

AI systems have uncertain impacts, can be combined in many different ways, and deployed with goals that are hard to fathom from the outside (what is a social network optimising when it shows you this news item instead of that one? Are the predictions you see generated by an AI system, or are they an example of fauxtomation, with humans doing the actual work behind the algorithmic curtain?).

Uncertainty and lack of visibility create pervasive information asymmetries between participants in AI markets. Very often, one party in an AI-related transaction (the ‘agent’) knows more than the other (the ‘principal’) and can therefore behave in ways that create losses or risks for the principal without much fear of reprisal. This increases market complexity. For example...

  • AI scientists can fudge their results or make them hard to reproduce, exaggerate the benefits of the technologies that they develop or downplay their risks to funders (many AI systems are dual-use and can be equally applied for beneficial or malicious purposes).[4]
  • AI adopters misapply brittle AI systems in fields for which they are not suitable, exploit personal data and manipulate users and other businesses, or deploy AI systems that are mediocre and unsafe.
  • The AI algorithms ‘themselves’ behave in ways that increase market complexity when they confuse signal with noise (overfitting the data), mindlessly optimise the wrong goal for an organisation, or behave in unexpected, emergent ways, creating AI flash crashes and systemic risks.

The resulting thicket of information failures could eliminate trust in AI markets, leading to races to the bottom in safety and respect for user rights, or to the abandonment of AI in some markets (for example, after an AI catastrophe). It could also drive organisations to internalise more AI research and development activities to reduce their dependence on other actors whose motives they cannot trust, further increasing AI market concentration.

Information thickets in AI markets.

How can we prune these information thickets? We have recently seen a plethora of guidelines, charters and codes to encourage ethical AI in business and government. However, we cannot assume that these will be sufficient to prevent misbehaviour unless they are complemented with legal requirements for transparency and compliance that specify which AI systems can be deployed, where, and with what safeguards.

How can this be done in a way that avoids stifling innovation and experimentation? Regulation itself needs to become more innovative, using anticipatory regulation approaches that proactively seek suitable rules for future markets - an area that Nesta is active in. The principle of experimentation and evidence that I mentioned above can also help to identify the domains where AI innovation can remain permissionless, and where it should be more controlled.

Avoiding the automator’s dilemma

The complexity of AI systems also becomes manifest in the decisions we make as consumers, users and voters, creating new social complexities.

When we choose cheap or convenient products based on extreme AI systems (as is the case with some e-commerce platforms and car-hailing apps), this has an indirect impact on the workers in those sectors, for example if it makes their jobs more precarious or more tightly monitored. When we seek efficiency in public services through the wholesale deployment of AI systems, cost savings could come at the expense of biased and unfair decisions for the vulnerable groups that depend on those services. Extreme AI deployment could also bring with it less safety, more privacy risks and more market concentration, but many of these costs and risks are hidden or indirect, and therefore easier to ignore in our day-to-day decisions.

If everybody behaves this way, we might all suffer more extreme and disruptive levels of AI deployment than we would like - I call this the automator’s dilemma. Even worse, vested interests could try to manipulate AI deployment so that it affects others but not them, increasing inequality and creating injustice. Ultimately, this might lead to a popular backlash against AI systems that are perceived to benefit only the few.

Avoiding this outcome will require solidarity in AI deployment: social dilemmas can be resolved through coordination and negotiation between different groups, and through changes in customs and institutions. Government and civil society organisations should intensify public engagement to identify goals, rules and priorities for AI. This engagement will help identify societally preferred trajectories for AI deployment, to be explored through targeted R&D programmes and encoded in regulations. The process used to identify the principles of the recent Montreal Declaration for Responsible AI is an excellent example of how this could work.

Solidarity also means identifying those social groups economically threatened by the arrival of AI, and putting in place interventions to alleviate their situation. Many strategies have been proposed to do this, from educational and skills interventions such as those being explored by Nesta’s READIE team, to spreading ownership of AI capital so that the returns from automation are broadly shared, to basic incomes. Determining which of these ambitious measures works most effectively, and marshalling social support for their adoption, will be an important challenge going forward.

Maintaining pluralism in AI scientific, technological and market trajectories

What would engineers, entrepreneurs and planners involved in the early days of the combustion engine have done if they had known what we know today about pollution, congestion and climate change? We need to ask ourselves similar questions about AI systems and platforms that are temporally complex: choices we make about them today constrain tomorrow’s options, potentially locking us into trajectories of deployment that are hard to switch away from.

On the science and technology side, current AI advances rely on deep learning methods that are data-hungry, brittle and opaque: once they are fed enough data, these systems perform well, but they sometimes break down in surprising ways (for example, when they are shown “adversarial examples”), and these accidents are often hard to understand or predict. Critics of deep learning worry that it will take us down a technological dead-end where we keep papering over its limitations by collecting and labelling more data, spending more on compute and adapting our economies and societies to its shortcomings, instead of exploring alternative AI models that could give us more robust and interpretable AI systems. Current initiatives to redesign cities to make them more suitable for unreliable self-driving cars are an example of this adaptation of human systems to technological affordances. But would you want to live in a city that looks like a robotised factory?
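
To make the brittleness point concrete, here is a minimal, purely illustrative sketch (not taken from the essay; the data, model and parameter values are all hypothetical). Even a very simple learned classifier can be flipped by a small, targeted change to its input: the perturbation below moves each coordinate against the sign of the model’s weights, which for a linear model is the direction it is most sensitive to.

```python
# Hypothetical illustration: a tiny logistic-regression classifier and a
# small input perturbation (in the spirit of "adversarial examples") that
# flips its prediction. All data and parameters are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two well-separated Gaussian classes in two dimensions.
X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Fit logistic regression with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)      # log-loss gradient step
    b -= 0.1 * np.mean(p - y)

def predict(x):
    return int((x @ w + b) > 0)

# Take the correctly classified class-1 point nearest the decision boundary.
scores = X @ w + b
idx = np.where((y == 1) & (scores > 0))[0]
x = X[idx[np.argmin(scores[idx])]]

# Perturb it against the sign of the weights, just enough to cross the boundary.
eps = 1.05 * (x @ w + b) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print("original prediction :", predict(x))       # 1
print("perturbation size   :", round(eps, 3))    # per-coordinate shift
print("perturbed prediction:", predict(x_adv))   # 0 (the prediction flips)
```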

Similar path-dependencies are at play in markets dominated by digital platforms that provide free and personalised services and content in exchange for personal data used to train AI systems. Once they become dominant, these platforms and business models are hard to dislodge, especially once users have become accustomed to mass personalisation and (at least on the surface) zero prices.

Diversity and pluralism are needed in the face of these inertias and potential irreversibilities. Until we know more about the impacts, complements and risks of complex AI systems, we need to avoid dominance by a single model for “doing AI” in science, technology or business that might turn out to be inferior but hard to move away from. This requires active scanning of the AI landscape to identify creeping monocultures, and targeted interventions to create diversity in AI science, technology and business. Nesta is working with the European Commission to achieve these goals in projects such as DECODE and Engineroom, where we are developing technologies that put individuals in control of their own data and could provide the building blocks of a more inclusive and human-centred internet. Our innovation mapping methods can also be used to monitor the evolution of AI R&D trajectories with great detail and timeliness, helping to measure diversity in the field.

Conclusion: aligning values in space and time

Steering AI development in real time to manage these economic complexities would be hard enough even if we agreed about what we want to achieve as a society. But we lack this agreement inside countries, and a fortiori between countries. Just look at the unfolding global AI race, where different players - the US, China, the EU - are pushing their visions of AI at the risk of fragmentation, conflict, abuse and accidents. This AI nationalism is being intensified by the perception that those countries and regions that don’t automate will be automated - the geography of AI impacts is complex too.

Institutional innovations, international standards and collective learning via policy experimentation will be needed to avoid dangerous races to the bottom in safety and ethics. They could work as a mirror version of the Turing test where different societies learn about themselves - what they are and what they want to become - through their responses to the opportunities and challenges of the AI systems they have created.

Acknowledgements

This blog and the essay it is based on have benefited from conversations with / comments from Tommaso Ciarli, John Davies, Chris Gorst, Geoff Mulgan, Simone Vannuccini and Konstantinos Stathoulopoulos.

Endnotes

  1. This compendium presents some of the latest research on the subject.
  2. Also see the AI Now State of the Nation 2018 report for an up-to-date overview of algorithmic controversies and their policy implications.
  3. In this vein, I believe that an economic analysis of AI complexity can complement critical approaches that have carefully documented AI failure, primarily explaining it as the result of power dynamics between different groups without considering systematically the role of incentives, information and coordination.
  4. This over-selling explains previous ‘AI winters’ where disappointed funders stopped supporting AI researchers who failed to deliver on their promises of thinking machines.

Annex: Defining AI systems

In the essay, I define AI systems as “socio-technical systems whose function is to inform or automate behaviours in dynamic situations.” This definition captures the fact that AI systems are embedded in social systems (organisations and markets) and that they fulfil an economic function by automating or informing behaviour in changing conditions (this is why they need to behave intelligently, detecting those changes and adapting to them).

In order to do this, AI systems need sensors to capture information about the environment, analysers that turn that information into decisions about what to do (potentially involving human decision-makers that set goals and supervise outputs), and effectors that transform those decisions into actions.
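
As a purely illustrative sketch of this structure (a thermostat-style toy example of my own; the names, goal and thresholds are hypothetical and not from the essay), the three components can be read as a simple software pattern: a sensor observes the environment, an analyser turns observations into a decision against a human-set goal, and an effector acts on that decision.

```python
# Toy, hypothetical rendering of the sensor / analyser / effector structure
# described above. Names and values are illustrative only.
from dataclasses import dataclass
import random

@dataclass
class Observation:
    temperature: float  # what the sensor captured about the environment

class Sensor:
    """Captures information about the (simulated) environment."""
    def read(self) -> Observation:
        return Observation(temperature=random.uniform(10.0, 30.0))

class Analyser:
    """Turns observations into decisions; a human sets and supervises the goal."""
    def __init__(self, target: float = 21.0):
        self.target = target  # goal set by a human decision-maker
    def decide(self, obs: Observation) -> str:
        return "heat_on" if obs.temperature < self.target else "heat_off"

class Effector:
    """Transforms decisions into actions on the environment."""
    def act(self, decision: str) -> None:
        print(f"actuator -> {decision}")

class AISystem:
    """Wires the three components together into one decision loop."""
    def __init__(self, sensor: Sensor, analyser: Analyser, effector: Effector):
        self.sensor, self.analyser, self.effector = sensor, analyser, effector
    def step(self) -> None:
        obs = self.sensor.read()
        self.effector.act(self.analyser.decide(obs))

if __name__ == "__main__":
    system = AISystem(Sensor(), Analyser(target=21.0), Effector())
    for _ in range(3):
        system.step()
```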

This definition is agnostic about what goes inside each of those components. Over the history of AI as a discipline, researchers and technologists have pursued different strategies to build AI systems. The contemporary approach, which has led to important improvements in AI performance, relies on machine learning algorithms that are trained on labelled data (supervised learning) or in synthetic environments (reinforcement learning), enabling them to generate predictions about the environment that can be used to recommend or implement suitable strategies. This data-driven approach has the advantage of not having to specify, ex ante, a set of rules that the AI system needs to follow in order to achieve its goals. On the downside, it is data-hungry and data-blinkered: it makes AI systems dependent on large datasets and brittle when faced with situations outside the dataset used to train them.
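
The “data-blinkered” point can be illustrated with a toy example (again hypothetical; the data, model and input ranges below are mine, not the essay’s): a flexible model fitted only on examples drawn from one region of the input space can look accurate there and still fail badly once the environment shifts outside its training data.

```python
# Hypothetical illustration of data-blinkered behaviour: a model that fits
# its training range well but breaks down outside it.
import numpy as np

rng = np.random.default_rng(1)

# "Training environment": inputs only in [0, 1], noisy observations of a sine.
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.1, 200)

# Fit a flexible model (degree-5 polynomial) by least squares.
model = np.poly1d(np.polyfit(x_train, y_train, deg=5))

def rmse(x):
    return float(np.sqrt(np.mean((model(x) - np.sin(2 * np.pi * x)) ** 2)))

# In-distribution test inputs: predictions track the true signal closely.
x_in = rng.uniform(0.0, 1.0, 200)
print("error inside the training range :", round(rmse(x_in), 3))

# The environment shifts: inputs now fall in [1, 2]. The same model, with no
# warning that conditions have changed, produces much larger errors.
x_out = rng.uniform(1.0, 2.0, 200)
print("error outside the training range:", round(rmse(x_out), 3))
```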

The diagram below represents the structure of these AI systems, the relations between their components, and their inputs.

Components of an AI system and their inputs.

Author

Juan Mateos-Garcia

Director of Data Analytics Practice

Juan Mateos-Garcia was the Director of Data Analytics at Nesta.
