Language models trace their roots to the 1960s and the early days of artificial intelligence (AI). Large language models (LLMs) are the next step in that evolution: trained on vastly more data, they infer far richer relationships and, in turn, can create new content.
The most modern LLMs emerged after 2017 with the introduction of the transformer architecture, which made predicting the next word in a sequence far more fluent and human-like.
Generative Pre-trained Transformers (GPT) are a family of LLMs launched by OpenAI in 2018. ‘Generative’ means that the model can generate text based on what it has learned from examples. Language models already power a number of familiar tools such as autocomplete features (e.g. Gmail’s Smart Compose), translation tools, speech recognition, and handwriting recognition, as well as chatbots like OpenAI’s ChatGPT and Google’s Bard.
Neither Google nor OpenAI has released the exact sources used to train their respective models, but the data sets reportedly consist of diverse text sources, including books, articles, websites, and social media, and are curated to remove low-quality content and personal information. Bard has been taught literature including novels, poems, and lyrics. Two months after its debut, ChatGPT had over 30 million users and was receiving about five million visits per day, making it one of the fastest-growing software products in memory, according to the New York Times.
The emergence of LLMs like ChatGPT has inspired both excitement at the possibilities and caution as leaders consider what the next phase could bring. In addition to the long-standing debate around the future of jobs, there are questions about the inaccuracies these bots can sometimes produce (‘hallucinations’), data security, and ethics, as the models can absorb the internet’s biases and stereotypes. What might LLMs bring to the world of rule-making and rule-taking to improve how laws, regulations, and standards are created and updated, and how they are understood by business and society?
The power of contextual delivery
Providing the right information at the right time is key for rule-makers and rule-takers to carry out their jobs efficiently and accurately. If you are a legal drafter, you don’t want to lose focus switching between research tools and the drafting environment. If you are an auditor, you don’t want to have to leave the audit platform to search for relevant guidance. You want to be able to access that information as and when you require it.
This is something we at Propylon already work with organizations to address through contextual delivery. Our APIs allow our software to integrate with other tools so that the right information is delivered at the right time.
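As a rough illustration of that pattern, the sketch below shows a drafting or audit tool pulling relevant guidance into the user’s working environment through an API call. The endpoint, query parameters, and response fields are hypothetical placeholders, not Propylon’s actual API.

```python
# Illustrative sketch only: the endpoint, parameters, and response shape are
# hypothetical, not Propylon's actual API. The point is the pattern: guidance
# is fetched inside the tool the drafter or auditor is already using.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

GUIDANCE_API = "https://example.org/api/guidance"  # placeholder endpoint


def fetch_guidance(section_id: str, role: str) -> list[dict]:
    """Fetch guidance relevant to the section currently being edited or audited."""
    query = urlencode({"section": section_id, "role": role})
    with urlopen(f"{GUIDANCE_API}?{query}") as response:
        return json.load(response)


if __name__ == "__main__":
    # Called from within the drafting or audit environment, so the user never
    # has to switch to a separate research tool to find the relevant material.
    for item in fetch_guidance(section_id="reg-2023-04-s12", role="auditor"):
        print(item.get("title"), "-", item.get("url"))
```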
New-generation LLMs present an opportunity to build on contextual delivery and take it to the next level, slashing the time spent on repetitive tasks and acting as an aid in drafting and research processes.
Greater accuracy in rule-making and rule-taking
Imagine you are implementing a process, for example, drafting a piece of legislation or a company policy. Along the way, you want to check for errors, so you ask a chatbot grounded in your own content: is there anything I’ve missed?
Focusing the language model on a contained data set, such as organization-wide data, reduces the risk of an LLM-powered chatbot producing errors. The convenience of such functionality also saves staff from turning to Google for instant answers that the organization cannot vet.
Instead, the bot can select the most relevant documents and answer accordingly. It can further assist by summarizing what might have changed and prompting the appropriate staff to implement it.
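A minimal sketch of this contained-data-set approach is shown below, under assumptions: the documents, the simple term-overlap scoring, and the prompt wording are illustrative stand-ins, and a production system would use embeddings, access controls, and a vetted model.

```python
# Minimal sketch: answer questions against a contained, organization-approved
# corpus rather than the open web. Document names and the scoring heuristic
# are illustrative; a real system would use embeddings and a vetted LLM.

from collections import Counter

CORPUS = {
    "policy_travel.txt": "Employees must submit travel expenses within 30 days ...",
    "policy_security.txt": "All laptops must use full-disk encryption ...",
    "reg_summary_2023.txt": "The 2023 amendment changes reporting thresholds ...",
}


def tokenize(text: str) -> Counter:
    return Counter(text.lower().split())


def top_documents(question: str, k: int = 2) -> list[str]:
    """Rank documents by simple term overlap with the question."""
    q = tokenize(question)
    scores = {
        name: sum((tokenize(body) & q).values())
        for name, body in CORPUS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]


def build_prompt(question: str) -> str:
    """Constrain the model to the selected documents only."""
    context = "\n\n".join(f"[{name}]\n{CORPUS[name]}" for name in top_documents(question))
    return (
        "Answer using only the documents below. "
        "If the answer is not present, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    # The resulting prompt would be sent to the organization's approved LLM.
    print(build_prompt("What changed in the 2023 reporting thresholds?"))
```

The key design choice is that the model is only ever shown material the organization has already approved, so answers can be traced back to specific documents.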
Similarly, when creating, say, a presentation on a new regulation, the bot can find the relevant summaries and draft the bullet points. If you are creating guidance, drafting a law, or writing a standard, the bot can assist by recommending next steps, identifying gaps or omissions, and allowing users to carry out detailed searches. Where data is sent to a third-party server, established best practices exist for managing the risks involved; even so, the protection of data should always be at the forefront.
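One such practice, sketched below, is redacting obvious personal data before a query ever leaves the organization. The regular-expression patterns and the example text are illustrative only; production deployments rely on dedicated PII-detection tooling.

```python
# Illustrative sketch: strip simple PII patterns before a query is sent to a
# third-party LLM service. The patterns below are far from exhaustive.

import re

REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def redact(text: str) -> str:
    """Replace matched personal-data patterns with labelled placeholders."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


print(redact("Contact Jane at jane.doe@example.com or +1 555 867 5309 about s.12."))
# -> Contact Jane at [EMAIL] or [PHONE] about s.12.
```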
The workflow of the future
LLMs look set to have a ripple effect across business and society. While the full impact has yet to take shape, their potential to reduce workload and improve accuracy is set to permeate how we all carry out our jobs.
For rule-makers and rule-takers, LLMs can be harnessed to provide tools that reduce the time and effort consumed by manual tasks and free staff to focus on higher-value activities, with steps taken to ensure that data is protected to the highest standard.
Focused on a contained company data set, the technology can help staff do their jobs better and with fewer errors. While workflows will change, we foresee them being enhanced and optimized rather than reinvented or replaced outright.