The idea of robots drafting legislation may evoke images from a 1980s science fiction plot. Yet there is little doubt that artificial intelligence (AI) has transcended its status as a futuristic concept, permeating many facets of our societies and influencing businesses across industries. However, expectations that AI will be drafting the law are premature and, indeed, unlikely to be fully realized.
From supporting research processes to providing insight into areas for possible revision, AI has exciting potential to support the legislative process. However, with its capabilities come multifaceted considerations including trust, bias, data privacy and ethics.
This article series delves into the foremost topics in this space, aiming to equip legislative staff with a toolbox to navigate its complexities, mitigate risk and safeguard democratic principles.
AI’s potential in a legislative context spans several areas, including streamlining the request workflow and assisting with research and repetitive tasks.
Its pitfalls, however, require careful consideration and management, especially as generative AI tools like ChatGPT become commonplace in day-to-day workflows – whether to request a summary of legislation on a certain topic or to suggest alternative wording such as synonyms.
States like California and Colorado are pioneering the regulation of AI. Emerging frameworks provide valuable insights for legislatures in navigating potential pitfalls associated with AI implementation.
AI’s role in the legislative drafting process
AI can support the legislative drafting process in multiple areas, such as enhancing the request workflow, supporting research through contextual insights, offering assistance in drafting and providing insight into areas for potential revision.
1. Research
Drafting attorneys don’t want to lose focus by switching between research tools and the drafting environment. Instead, AI tools can streamline the experience, supporting drafting attorneys in the research process by providing access to reference materials at the moment of need.
Focusing the language model on the legislature’s data reduces the risk of errors and enables capabilities such as recommending next steps, identifying gaps or omissions, and allowing attorneys to carry out detailed searches across organization-wide data. Indeed, providing the right information at the right time is a key principle of the solutions we provide at Propylon.
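As a rough illustration, the sketch below shows one common way to keep a language model focused on a legislature’s own material: retrieve the most relevant passages from an internal corpus and instruct the model to answer only from them, with citations. The corpus entries and the `retrieve` and `build_prompt` helpers are hypothetical placeholders, not a description of any particular product.

```python
# A minimal retrieval-then-ask sketch: ground the model in the legislature's
# own documents rather than letting it answer from open-ended training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical internal corpus: statute sections, prior bills, drafting manuals.
corpus = {
    "Statute 12-101": "An 'employee' means any person employed by an employer...",
    "Drafting manual 4.2": "Use defined terms consistently throughout a bill...",
    "HB 1042 (2023)": "Concerning wage statement requirements for employers...",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k corpus passages most similar to the query."""
    ids, texts = list(corpus.keys()), list(corpus.values())
    vectorizer = TfidfVectorizer().fit(texts + [query])
    doc_vectors = vectorizer.transform(texts)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = sorted(zip(scores, ids, texts), reverse=True)[:k]
    return [(doc_id, text) for _, doc_id, text in ranked]

def build_prompt(query: str) -> str:
    """Constrain the model to the retrieved passages and require citations."""
    passages = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    return (
        "Answer using only the passages below, citing their identifiers. "
        "If the passages do not contain the answer, say so.\n\n"
        f"{passages}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    # The prompt would then be sent to whichever model the legislature has
    # approved, ideally running inside its own environment.
    print(build_prompt("How is 'employee' defined in current statute?"))
```

The key design choice is that the model is asked to work from passages the legislature already trusts, which also makes its answers easier for a drafting attorney to verify.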
2. Workflow
AI tools can streamline legislative workflows by automating tasks such as bill summary generation, simple resolution drafting or identifying existing statutes that might be impacted by new legislation. Additionally, AI tools can provide consistency checks, for example identifying mixed usage of terms in the same context (e.g. ‘employee’ and ‘worker’).
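A consistency check of the kind described above does not necessarily require a language model at all; a simple script can flag sections of a draft that mix terms drafters intend to treat as a single defined concept. The term pairs and sample text below are purely illustrative.

```python
import re

# Pairs of terms that drafters may intend as one defined concept;
# mixing them within a single bill is worth flagging for review.
TERM_PAIRS = [("employee", "worker"), ("motor vehicle", "automobile")]

def find_mixed_usage(sections: dict[str, str]) -> list[str]:
    """Flag sections of a draft where both terms of a pair appear."""
    warnings = []
    for section_id, text in sections.items():
        lowered = text.lower()
        for a, b in TERM_PAIRS:
            if re.search(rf"\b{re.escape(a)}\b", lowered) and re.search(
                rf"\b{re.escape(b)}\b", lowered
            ):
                warnings.append(
                    f"{section_id}: uses both '{a}' and '{b}' - confirm the intended term"
                )
    return warnings

# Illustrative draft text, not real legislation.
draft = {
    "Section 1": "Each employee shall receive a written statement of wages.",
    "Section 2": "A worker may request a copy of the statement from the employer.",
}

for warning in find_mixed_usage(draft):
    print(warning)
```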
3. Enhancing security
Pitfalls to look out for
1. Reliability
AI’s sporadic production of errors (‘hallucinations’) is a well-covered drawback of utilizing this technology. Indeed, AI models are only as good as the data they’re trained on. If the data contains biases, the AI will inadvertently perpetuate these biases in its decisions and recommendations.
Steps legislatures can take to mitigate reliability risks include:
- Focusing the language model on a contained data set
- Carefully curating and auditing the training data
- Implementing regular bias checks
- Allowing for human review of AI output (a minimal sketch of such a review gate follows below)
Across all of these measures, the protection of data should always be at the forefront.
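One way to make the human review step concrete is to treat every AI-generated output as a draft that must be explicitly approved by a named reviewer before it is relied upon. The sketch below is a minimal, hypothetical illustration of that gate, not a description of any specific system.

```python
from dataclasses import dataclass, field

@dataclass
class AIDraft:
    """An AI-generated artifact (e.g. a bill summary) awaiting human sign-off."""
    content: str
    source_request: str
    approved: bool = False
    reviewer: str | None = None
    notes: list[str] = field(default_factory=list)

def approve(draft: AIDraft, reviewer: str, note: str = "") -> AIDraft:
    """Record an explicit human approval; nothing is released without one."""
    draft.approved = True
    draft.reviewer = reviewer
    if note:
        draft.notes.append(note)
    return draft

def release(draft: AIDraft) -> str:
    """Only approved drafts leave the system; unreviewed output is blocked."""
    if not draft.approved:
        raise PermissionError("AI output requires human review before release.")
    return draft.content

summary = AIDraft(content="HB 1042 requires employers to ...",
                  source_request="Summarize HB 1042")
approve(summary, reviewer="staff.attorney", note="Checked against the bill text.")
print(release(summary))
```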
2. Human oversight
There is some fear that AI could replace human roles, such as junior research positions or other entry-level roles in the legislative drafting process.
Arguments that the law is like computer code – a set of rules that can be turned into executable instructions for a computer – have been around for some time. However, as we wrote in a previous article, if attorneys were replaced wholesale by technology, would the result be a more rigid society that functions like a hard-coded machine?
The human element – human judgment – remains indispensable in interpreting the nuances of law and policy. Positioning AI as a tool for handling repetitive tasks frees people to focus on more nuanced, strategic work.
3. Trust
Reliance on AI necessitates a level of trust in the algorithm’s decisions and recommendations.
To bolster trust, it can help to establish transparent algorithms and methodologies, openness about decision-making, and clear avenues for human oversight and validation.
Further, the use of AI can give rise to ethical dilemmas: if AI makes recommendations based on majority views or historical data, for example, it might sideline minority perspectives or uphold outdated norms. It is therefore important to set ethical guidelines for AI usage and to ensure human intervention in sensitive areas.
AI regulatory initiatives: case studies
On May 17th, 2024, Colorado became the first state to sign comprehensive, AI-specific legislation into law: the Colorado AI Act. The law focuses on regulating algorithmic discrimination in employment, housing and education – identified as high-risk sectors. It applies to all developers and deployers of the technology operating in Colorado, regardless of the number of consumers affected, and will go into effect on February 1st, 2026.
Case studies from the European Union and the state of California offer further insights into the frameworks for regulation that are currently emerging.
1. Case study one: EU AI Act
The European Parliament adopted its negotiating position on the EU AI Act in June 2023, and the Act was formally adopted in 2024. It establishes a comprehensive framework for AI, ensuring the technology is used safely and ethically across the EU. Categorizing AI systems based on risk, the Act imposes stricter regulations on higher-risk applications, such as those used in law enforcement and critical infrastructure.
Seeking to balance innovation with the protection of fundamental rights, the EU AI Act aims to prevent the misuse of AI and ensure that AI systems are secure, reliable and trustworthy.
2. Case study two: California AI regulations
California has been proactive in regulating AI through various efforts.
In March 2024, the California Privacy Protection Agency voted to advance the state’s proposed regulations targeting the rapid proliferation of AI technologies and their implications for privacy, security and ethics. The focus is on creating guidelines that protect personal data and prevent misuse in areas such as job compensation, housing and healthcare. Businesses would be required to notify individuals before using AI, allow them to opt out, and perform risk assessments to evaluate the technology’s performance.
Initial regulations were enacted in 2023, and legislative efforts to expand the framework are ongoing. The framework aims to enhance transparency and accountability and to mitigate the risk of bias and discrimination in decision-making processes.
Implications of current and future legislation
The above laws and regulations set important precedents for the regulation of AI across other states and regions, providing frameworks that ensure that AI’s benefits are realized while minimizing risks, such as inaccuracies and ethical concerns.
These regulatory frameworks provide food for thought on incorporating AI systems in a legislative context, such as enhancing AI accountability and transparency, ensuring human oversight and safeguarding data privacy.
As AI technology continues to evolve, ongoing regulatory updates will be necessary to address the next phase of challenges and opportunities. We can expect additional regulations focusing on emerging issues such as deeper AI integration, cross-border data flows, and enhanced privacy protections.
Getting ready for the future
What role could AI play in legislative processes? As the technology sweeps across society and business, its usage is becoming mainstream and may soon be ubiquitous. Its benefits – improving the quality and accuracy of legislation and supporting staff with repetitive tasks – are certainly promising. There are, however, undoubtedly challenges to navigate. Legal and regulatory frameworks provide some foundations for moving forward.