Path-setting and environment-shaping as strategic perspectives on AI governance
Executive Summary
The strategic perspectives framework (Maas, 2022) usefully identifies the key assumptions underpinning different approaches to AI governance. This post operationalises the research of the Collective Intelligence Project through this lens, providing analysis and recommendations in the context of the path-setting and environment-shaping approaches to AI governance. It briefly explains the strategic perspective concept and the benefits of applying it in policymaking; identifies path-setting and environment-shaping as the most suitable strategic perspectives; and examines potential recommendations aligned with each. It also sets out further recommendations, aligned with these perspectives, that would benefit from additional research before potential adoption.
Introduction
The key takeaway from this post is that the research and implementation of policies should occur with a view to establishing policies and principles that set a good precedent for future AI development. I will briefly explain the strategic perspective concept and the benefits of applying it in policymaking; identify path-setting as the perspective that seems most suitable given current AI development; and examine a potential recommendation aligned with this perspective: standards for auditing and transparency, in the context of one particular use case, Generative Foundation Models (GFMs). At the end, I lay out other recommendations aligned with the path-setting perspective that would benefit from further research prior to potential use.
The research in this article has been framed in terms of the strategic perspective taxonomy.
Definitions and background
AI Governance
AI governance means "bringing about local and global norms, policies, laws, processes, politics, and institutions (not just governments) that will affect social outcomes from the development and deployment of AI systems".
Attempts at AI governance are already being pursued by bodies including the UK AI Safety Institute (AISI), the EU, and the US. These include, but are not limited to, the following.
The UK and US AISIs have formed a partnership, under which they will work together to develop tests for the most advanced AI models.
The UK and France recently agreed to strengthen R&D collaboration through new funding, to further global AI safety.
The EU AI Act 2024 was enacted with a view to "foster responsible artificial intelligence development and deployment in the EU" through a risk-based approach.
Strategic perspectives
Using a strategic perspective framework entails explicitly setting out assumptions about technical and governance possibilities, and making policy based on that understanding. This enables a more informed discussion about which near-term actions one can or should pursue to ensure the development of positively transformative AI (TAI). Strategic perspectives are explained here in further detail.
It may be helpful to see all the strategic perspectives, and how and when each should be used depending on one's technical and governance views, as set out by those who coined the concept; an oversimplified mapping is reproduced in the Appendix.
The path-setting strategic perspective on AI governance entails regulating today's AI by establishing policies and principles that set a good precedent for future AI development.
Benefits of applying strategic perspectives as an approach to policy making
Using the strategic perspective framework makes the underlying assumptions about technical and governance views explicit. This makes it easier to identify the central issues and helps policymakers hold more focused discussions about which particular policies to implement. As a result, policymakers can realise substantial efficiency gains, because resources are dedicated to identifying solutions on the basis of explicitly identified and agreed assumptions about current circumstances. Findings emerge more quickly when more resources can be devoted to a particular class of policy solutions, and research in the identified direction achieves a far greater level of detail, and consequently greater applicability, relevance, ease of implementation, and likelihood of success. Furthermore, initial policy decisions generate increasing returns: as resources and time are devoted to their implementation, it becomes increasingly costly to choose another path. Such path dependence also provides stability in policymaking.
Path-setting
Assumptions inherent to the path-setting perspective
Possible assumptions about the key parameters shaping AI that are inherent to path-setting include the following: that secondary legislation on AI will, broadly speaking, be effective in ensuring that AI development occurs safely and responsibly; and that future development will occur in alignment with the principles that shaped past development, proceeding along the same 'route'. The impact of early policies is likely to compound as more governance actors implement policies aligned with those first set out under this perspective, making it increasingly unlikely that AI development can occur without these regulatory policies and principles acting as safeguards.
Tensions and trade-offs
The internal tensions and possible trade-offs of path-setting are largely those common to policymaking more widely. Policymaking may disturb or impede the development of AI that could otherwise have delivered broad public benefits (such as healthcare or education applications) but does not materialise due to restrictions; this trade-off is the safety-progress element of the technology trilemma. Public participation in AI development could be limited by safety concerns about the quality and nature of the input provided, or could occur in a way that compromises the safety of the AI systems developed (safety-participation). The participation-progress aspect of the trilemma applies in more generic terms and is not specific to the path-setting perspective.
Why should policymakers adopt the path-setting perspective?
Setting a good precedent is a compelling reason to consider adopting the path-setting perspective, and there is further evidence to encourage policymakers. At this stage, clear controls on and monitoring of AI development are a priority. The Collective Intelligence Project's AI Risk Prioritisation and Alignment report found that the desire for safeguards and controls was people's most significant concern; it also found that people were concerned LLM-based tools would be used for undesirable purposes. The report accordingly recommended monitoring post-deployment effects carefully, demonstrating that acceptable use policies are being enforced, sharing data on real-world cases, and developing forums for public input into AI. The clear next step is to address these public concerns by implementing policies that prevent the misuse of AI and ensure development is safe and aligned with democratic values. By taking these actions, policymakers can ensure AI development is responsibly governed and the risks of misuse are minimised.
Recommendation: Standards for auditing and transparency
Use case: GFM
To address the risks and inefficiencies associated with generative foundation models (GFMs), policymakers should prioritise the development and implementation of robust standards for auditing and transparency. Although proposals such as algorithmic audits, data cards, and model cards are gaining traction, their application remains inconsistent. Moreover, the datasets used to train widely used models are often poorly understood by end-users, limiting our ability to evaluate the impacts and contributions of GFMs effectively.
Contextualisation and justification of recommendation for use-case
Establishing standardised frameworks for auditing and transparency for AI, and for GFMs in particular, would provide clarity, accountability, and consistency, especially through the determination of appropriate avenues and forms of data release. Standards-setting bodies should define and enforce appropriate venues and formats for information release, ensuring the industry adopts transparent practices. This approach would serve two critical objectives: ensuring accountability and safeguarding public trust. Regular audits would also allow early detection of risks, reducing the likelihood of unintended outcomes associated with poorly understood or opaque models.
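To make this concrete, the sketch below shows what a minimal, machine-readable model card might look like, written here in Python. This is an illustrative assumption rather than an established schema: the field names (intended_uses, known_limitations, and so on) are invented for this example, and an actual standards-setting body would define the required fields and release formats.

```python
from dataclasses import dataclass, field, asdict
import json

# A hypothetical, minimal model card schema. All field names are illustrative
# assumptions only; a standards body would define the real required fields.
@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_uses: list[str]
    known_limitations: list[str]
    training_data_summary: str             # provenance and scope of training data
    evaluation_results: dict[str, float]   # benchmark name -> score
    audit_history: list[str] = field(default_factory=list)  # past audit references

# Example instance (all values invented for illustration).
card = ModelCard(
    model_name="example-gfm",
    version="1.0",
    intended_uses=["drafting assistance", "summarisation"],
    known_limitations=["may hallucinate facts", "English-centric training data"],
    training_data_summary="Web crawl snapshot plus licensed corpora; see data card.",
    evaluation_results={"toxicity_benchmark": 0.02, "factuality_benchmark": 0.87},
    audit_history=["2024-Q3 third-party red-team report"],
)

# Publishing the card in a machine-readable release format supports auditing.
print(json.dumps(asdict(card), indent=2))
```

Publishing such cards in a consistent, machine-readable format is what would allow third-party auditors and end-users to compare models on a like-for-like basis.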
Issues with Generative Foundation Models (GFMs) and solutions
Information Quality and Mis/Disinformation:
GFMs generate content faster than humans but often amplify biases, stereotypes, and inaccuracies by producing large volumes of low-quality content quickly. This risks polluting the digital commons with low-quality or harmful material and proliferating both accidental and deliberate mis/disinformation.
Standards can be set for the quality of information AI must produce, and benchmarks for content reliability can be created that material must meet before becoming part of the digital commons. Combined with auditing, this would reduce the proliferation of mis/disinformation.
Erosion of Self-Determination and Democracy:
Without adequate oversight, GFMs may be exploited for manipulative purposes, such as personalised persuasion or mass disinformation. These purposes fall under the general category of the erosion of self-determination and democracy, since such content can be used to influence political and state-level outcomes as well as individual actions.
Transparency mechanisms should be implemented, including clear audit trails, so that AI design decisions that fail to meet guidelines can be held to account when audited (a sketch of one possible form of audit trail follows below).
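As one illustration of what a 'clear audit trail' could mean in practice, the sketch below implements a hash-chained, append-only log of design and deployment decisions in Python. The scheme and field names are assumptions made for illustration; real audit infrastructure would be specified by the relevant standards body, but the core property shown here, that retroactive edits are detectable, is what makes a trail useful to auditors.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, tamper-evident log: each entry embeds the previous entry's hash."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, actor: str, decision: str, rationale: str) -> dict:
        # Link this entry to the previous one, so history cannot be silently
        # rewritten without breaking the chain.
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "timestamp": time.time(),
            "actor": actor,
            "decision": decision,
            "rationale": rationale,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash; return False if any entry was altered."""
        prev_hash = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

# Example (all values invented): a safety team logs a content-filter decision.
trail = AuditTrail()
trail.record("safety-team", "enabled political-ad filter", "mitigate persuasion risk")
assert trail.verify()  # an external auditor can confirm the trail is intact
```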
Homogenisation of Content:
Reliance on a few widely used models risks generating uniform outputs that fail to reflect a diverse range of perspectives, as content filters and other design decisions are typically made by small teams with limited participation from, and calibration to, different groups in society.
Auditing can identify and address biases in training data, encouraging the development of models that support pluralistic and inclusive content creation. Transparency standards could ensure that information about this homogenisation risk is accessible to those using a model; each model could, for example, disclose the level of public participation involved in its design.
Economic Concentration and Labour Impacts:
The private ownership of capital-intensive GFMs by a handful of entities raises issues of market monopolisation.
Policymakers should explore standards for equitable access to AI systems, promoting open-source models and ensuring public accountability. Additionally, auditing processes must include evaluations of labour-market implications, ensuring workers are not disproportionately affected by large-scale automation. Limits could be placed on the amount of investment in GFMs or other AI developments to prevent access concerns from arising and to keep development economically sustainable.
Unpredictable Risks from Advanced AI Systems:
Black-box models pose significant safety risks due to their opacity.
Transparency standards must extend to pre- and post-deployment monitoring, creating infrastructure capable of tracking and assessing system behaviour over time.
Other path-setting recommendations
Standards for auditing and transparency are the main recommendation of this post. The recommendations below offer options that policymakers are advised to explore further through research before considering implementation.
Requiring ethical impact assessments during ideation, development, and pre- and post-deployment.
Creating institutional models for co-ownership of inputs to AI, including worker-owned sector-specific generative models.
Building data cooperatives and establishing collective ownership rights over data to ensure public accountability for models trained on the digital commons (e.g. Common Crawl and public data).
Expanding the Collective Intelligence Project's pilot alignment assemblies to a repeatable, scalable pipeline for public input into AI across the development lifecycle, focusing on risk assessments, API access, and speed of deployment.
Creating multi-stakeholder standards-setting bodies for third-party auditing of AI (covering red-teaming, evaluations, and more traditional audits), modelled after polycentric fora for Internet governance, to evaluate the social, ethical, and technical impacts of AI systems across their lifetimes.
Requiring the use of structured formats such as data and model cards (as sketched above), ensuring stakeholders can access meaningful, interpretable information.
Co-designing AI systems to democratise AI development: AI systems built without real-world testing and participation from affected stakeholders are less likely to serve a diverse range of communities, and so are more likely to produce unexpected outcomes when deployed. AI corporations and start-ups are unlikely to create applications with broad public benefit by default; state funding and regulation could change that. Particular policies to this end include:
expanding the pipeline of AI developers, researchers, and designers to ensure that diverse, representative viewpoints and backgrounds are covered
direct public funding for AI models and use cases that provide broad public benefit
investment and sales models designed specifically for public-interest use cases such as health, education, and financial literacy
Environment-shaping
Environment-shaping refers to improving the broader context within which AI operates. This involves enhancing "civilizational competence, specific institutions, norms, regulatory target surface, cooperativeness, or tools, to indirectly improve conditions for later good TAI decisions." Essentially, it is about creating fertile ground for responsible AI development and use.
Broad Public Education
Broad public education, in the context of AI, refers to publicly provided initiatives aimed at increasing general awareness and understanding of artificial intelligence: its functioning, capabilities, limitations, and societal implications. This would ideally occur with a view to empowering individuals with the knowledge necessary to engage meaningfully with AI, understand its impact on their lives, and participate in shaping its development and governance.
A key recommendation for environment-shaping is broad public education. The benefits of AI should not be concentrated within a small segment of the population, which would exacerbate existing inequalities and widen the digital divide.
Building infrastructure, protocols, and tools for the "AI commons" is crucial to ensure broad-based access and shared benefits. This involves closing the AI divide through public education, reskilling initiatives, and generally making AI capabilities accessible to people from all backgrounds.
The Integrity of the Digital Commons
What are the digital commons?
Consider this definition adapted from the Collective Intelligence Project.
"The digital commons comprises two things:
The online commons of information resources that we all benefit from, own and contribute to together. This traditionally includes things like wikis, Internet archive snapshots, Creative Commons (CC) licensed images and public software repositories, but online discussion spaces such as Reddit and news sources such as The Guardian are also part of this information commons. These generally have reasonably open and shared access, with few barriers to people contributing to or using these digital resources.
The collective infrastructure that underpins the commons. This infrastructure includes the physical (e.g. Internet cables), the institutional (e.g. organizations like the IETF, W3C, and IEEE), and the technological (e.g. open-source libraries). As we see increasingly more GFMs and other AI being deployed, we may see more ML models, datasets, libraries and platforms, ML-tailored computing hardware, as well as various AI building or governing institutions come under this umbrella."
Maintaining the Integrity of the Digital Commons
Maintaining the integrity of the digital commons requires a many-pronged approach. It involves fostering open-source models, promoting open access policies, and supporting resources like Wikipedia that exemplify the economic and ethical benefits of shared digital infrastructure. It requires resisting the potential for private entities to dominate innovation and establish monopolies. This necessitates building and maintaining collective digital infrastructure that fosters healthy competition. Furthermore, it means ensuring the digital commons remains a vibrant part of the broader knowledge commons, providing access to accurate, accessible, comprehensive, and diverse information crucial for culture, welfare, science, and technology. This includes recognizing Generative Foundation Models (GFMs) and their training data as part of this knowledge commons, and ensuring the GFMs themselves act as effective interfaces to this information. Finally, maintaining the integrity of the digital commons means safeguarding its role in underpinning democracy by fostering high-quality knowledge and genuine debate. This requires nurturing a healthy "epistemic commons" where trusted norms and institutions exist for processing information and reaching consensus.
Why the Integrity of the Digital Commons Matters
Public access, democracy and knowledge ecosystems
The integrity of the digital commons is significant because it enables shared access to and benefits from digital technologies. Without it, the potential for monopolies and the concentration of power in private hands increases dramatically. This can stifle innovation, limit access to information, and undermine democratic processes. The digital commons is essential for ensuring that the benefits of digital technologies are broadly shared, rather than concentrated within a small elite. It is vital for maintaining a healthy knowledge ecosystem, fostering informed public discourse, and ultimately, supporting a thriving and equitable society. A robust digital commons is the bedrock upon which a democratic and technologically advanced future can be built.
Why Policymakers Should Adopt This Perspective
AI risk prioritisation research indicates that people are more concerned with good governance than with specific risks. Investing in literacy, accessibility, and communication is therefore crucial. Public understanding of AI concepts and functionality is essential for effective participation in its development. An informed public can contribute valuable collective intelligence, shaping the development of democratic AI models.
Other environment-shaping recommendations
Flexible Governance Structures: AI governance frameworks should be adaptable and responsive to the rapid pace of AI development. This includes mechanisms for iterative policy adjustments based on emerging trends, potential risks, and societal feedback. Experimentation and pilot programs can play a vital role in testing and refining governance approaches. Consideration should be given to sunset clauses or periodic reviews of regulations to ensure they remain relevant and effective.
Governance should be multi-layered, incorporating input from diverse stakeholders, including technical experts, ethicists, policymakers, and the public.
Practices:
Coordinating research and development with foreign allies and setting joint standards proactively
Explainable AI
Democratic governance, in the sense that the public participates directly in AI governance questions, with access to unbiased, clear information.
Fusion of Environment-Shaping and Path-Setting Perspectives
Literacy and accessibility are key to building public trust: broad public education creates literacy, and the transparency standards for GFMs (detailed above) create accessibility.
Companies should share research results on AI capabilities, limitations, and evaluations in a clear and accessible manner for the general public.
Product descriptions should explain how the technology works (see the applied example below).
Applied example: Chatbots
For example, users of chatbots should understand:
How these systems are designed to mimic human interaction.
That the information a chatbot provides is compressed into the model's weights from training data, rather than retrieved directly from the internet (unless the system is explicitly augmented with retrieval).
That chatbots are designed to predict the next word and generate plausible text, which can lead to "hallucinations."
That chatbots lack persistent memory and the ability to learn directly from the outputs of their interactions.
That outputs are sampled sequentially, based on the entire conversation and underlying prompts (a minimal sketch of this sampling loop follows below).
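To make the next-word prediction and sampling points concrete, here is a deliberately tiny, purely illustrative sketch in Python. The vocabulary and probabilities are invented, and a real chatbot uses a neural network conditioned on the whole conversation rather than a lookup table keyed on one word, but the basic mechanism, sampling one word at a time with no fact-checking step, is the same.

```python
import random

# A toy next-word model: P(next word | previous word). All words and
# probabilities here are invented purely for illustration.
TOY_MODEL = {
    "the":     {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat":     {"sat": 0.6, "ran": 0.4},
    "dog":     {"ran": 0.7, "barked": 0.3},
    "weather": {"is": 1.0},
    "sat":     {"down": 1.0},
    "ran":     {"away": 1.0},
    "barked":  {"loudly": 1.0},
    "is":      {"mild": 0.5, "wet": 0.5},
}

def generate(prompt: str, max_words: int = 10) -> str:
    """Generate text one word at a time; each word is sampled, not looked up as fact."""
    words = prompt.split()
    for _ in range(max_words):
        # A real model conditions on *all* prior words; this toy uses only the last.
        dist = TOY_MODEL.get(words[-1])
        if not dist:
            break  # no known continuation: stop generating
        words.append(random.choices(list(dist), weights=dist.values(), k=1)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down": fluent, but never fact-checked
```

Because generation is sampling from a learned distribution rather than retrieval from a database, fluent but false output ("hallucination") is a natural failure mode of the mechanism rather than an occasional bug.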
Information about evaluation and audit results should be presented in layman's terms, explaining the implications for an LLM's capabilities, limitations, and behavioral patterns.
Conclusion
This post has provided an overview of strategic perspectives for policymakers navigating AI governance. It balances proactive shaping of AI development trajectories with reactive measures to address emerging challenges, emphasising the importance of public education, the digital commons, and transparency in building a responsible and beneficial AI ecosystem. Critically, AI development requires robust legislation that sets a positive precedent for current and future development, ensuring progress occurs safely, transparently, and accountably. Policymakers should adopt a path-setting perspective, which provides clear direction for policy recommendations and helps policymaking keep pace with, or ideally anticipate, AI advancements. Foundational steps include establishing standards for auditing and transparency, particularly for Generative Foundation Models (GFMs). These standards must, however, form part of a broader strategy that also explores measures such as ethical impact assessments, co-ownership models, data cooperatives, and expanded public input mechanisms. Such an approach will not only safeguard the present but also lay a responsible, secure, transparent, and equitable foundation for the future of transformative AI systems.
Bibliography
- 'AI Chatbots Work by Predicting the next Word. So Do Our Brains. Is There a Connection? | Tufts Now' (now.tufts.edu, 23 May 2023) https://now.tufts.edu/2023/05/23/ai-chatbots-work-predicting-next-word-so-do-our-brains-there-connection
- 'AI Risk Prioritisation and Alignment Report' (The Collective Intelligence Project, 31 October 2023) https://www.cip.org/research/participatory-ai-risk-prioritization accessed 10 December 2024
- Elliott M and Thomas R, Public Law (5th edn, Oxford University Press 2024)
- Hogg Q, Elective Dictatorship (British Broadcasting Corporation 1976)
- Landemore H, 'Fostering More Inclusive Democracy with AI by Landemore' (IMF, December 2023) https://www.imf.org/en/Publications/fandd/issues/2023/12/POV-Fostering-more-inclusive-democracy-with-AI-Landemore accessed 30 November 2024
- Siddarth D, 'The Collective Intelligence Project' (The Collective Intelligence Project, 5 July 2023) https://www.cip.org/research/democratizing-ai accessed 27 November 2024
- 'The Collective Intelligence Project' (The Collective Intelligence Project, 27 November 2024) https://www.cip.org/ accessed 27 November 2024
- Department for Science, Innovation and Technology, AI Safety Institute and The Rt Hon Michelle Donelan, 'UK & United States Announce Partnership on Science of AI Safety' (GOV.UK, 2 April 2024) https://www.gov.uk/government/news/uk-united-states-announce-partnership-on-science-of-ai-safety accessed 10 December 2024
- 'Whitepaper' (The Collective Intelligence Project, 2023) https://www.cip.org/whitepaper accessed 10 December 2024
Appendix
Perspectives on governance

Each strategic perspective is listed below with its (oversimplified) slogan form.

- Exploratory: We remain too uncertain to meaningfully or safely act; conduct high-quality research to achieve strategic clarity and guide actions.
- Pivotal Engineering: Prepare for a one-shot technical 'final exam' to align the first AGI, followed by a pivotal act to mitigate risks from any unsafe systems.
- Prosaic Engineering: Develop and refine alignment tools in existing systems, disseminate them to the world, and promote AI lab risk mitigation.
- Partisan: Pick a champion to support in the race, to help them develop TAI/AGI first in a safe way, and/or in the service of good values.
- Coalitional: Create a joint TAI/AGI program to support, to avert races and share benefits.
- Anticipatory: Regulate by establishing forward-looking policies today, which are explicitly tailored to future TAI.
- Path-setting: Regulate by establishing policies and principles for today's AI, which set good precedent to govern future TAI.
- Adaptation-enabling: Regulate by ensuring flexibility of any AI governance institutions established in the near term, to avoid suboptimal lock-in and enable their future adaptation to governing TAI.
- Network-building: Nurture a large, talented and influential community, and prepare to advise key TAI decision-makers at a future 'crunch time'.
- Environment-shaping: Improve civilizational competence, specific institutions, norms, regulatory target surface, cooperativeness, or tools, to indirectly improve conditions for later good TAI decisions.
- Containing: Coordinate to ensure TAI/AGI is delayed or never built.
- System-changing: Pursue fundamental changes or realignment in the world as a precondition to any good outcomes.
- Skeptical: Just wait and see, because TAI is not possible, long-term impacts should not take ethical priority, and/or the future is too uncertain to be reliably shaped.
- Prioritarian: Other existential risks are far more certain, pressing or actionable, and should gain priority.
- 'Perspective X': [something entirely different, that I am not thinking of]
Table 2, an oversimplified mapping of strategic perspectives by overall technical and governance views, can be found on M. M. Maas's (2022) website.