USA / March 20, 2024
Written by – Thomas H. Curran, Thomas H. Curran Associates, LLC
Instead of debating whether AI should or shouldn’t have a place in disputes, practitioners may be better served by accepting that AI isn’t going anywhere. The real question is what we as practitioners can do to leverage AI’s efficiencies while avoiding its pitfalls. By now, most litigators are likely well aware of the dangers of taking a backseat to generative AI when drafting critical legal documents: none of us wants to end up in hot water with the ethics committee (or worse, our malpractice insurers) for unwittingly relying on mythical case citations fabricated entirely by ChatGPT. Properly implemented, however, AI can be a powerful tool for streamlining law practice and enabling more efficient dispute resolution.
Is generative AI already being used in dispute resolution in your jurisdiction?
It’s likely safe to assume that other practitioners are already utilising some form of generative AI in preparing and arguing their cases; however, there is no indication that generative AI is yet being used as part of the process for presiding over disputes in Massachusetts. That is to say, there is no indication that generative AI is ready to replace the services of mediators, arbitrators, judges or other neutrals in determining the merits or dictating the outcomes of disputes in Massachusetts, nor is AI likely to ever fully replace the need for human neutrals. However, it is not unrealistic to expect that, in the future, generative AI could be leveraged to dispose of certain kinds of disputes in Massachusetts more efficiently.
Court-related online dispute resolution (ODR) platforms have been piloted in other jurisdictions to aid litigants in resolving disputes entirely online, without the need to interact with or appear before the courts. In the United Kingdom, the Road Traffic Accident (RTA) Small Claims Protocol prompts litigants to use the court’s online claims portal to attempt resolution of low-exposure personal injury claims arising out of road traffic accidents (https://www.officialinjuryclaim.org.uk). If claims can’t be resolved on the portal, they may move back and forth between the portal and the courts as necessary until the claim is disposed of. Courts in British Columbia have a similar court-annexed ODR program for small claims matters.
Although the Massachusetts Judicial Branch does not currently have a court-annexed ODR program, there has been discussion about developing AI chatbots on government websites to aid pro se litigants in navigating the state court system. Such a feature has promising implications for access to justice: if reliably programmed, it could improve the quality of self-representation and better equip pro se litigants who have little experience interacting with the court system. With the Massachusetts government expressing its goal of positioning the Commonwealth as a “hub” for the development of AI technology, it would be unsurprising to see a surge of activity along these lines in the future.
How do you feel about generative AI in disputes? Could ChatGPT or related applications offer a faster, fairer dispute resolution process, or do they pose a serious risk to that process?
The usual generative AI caveats undoubtedly apply in the context of dispute resolution: because this technology is only as good as its programming, it will always be imperative to exercise care in its development and deployment. Courts in the United Kingdom and Canada have had some success with ODR processes, and as the technology develops it may help to streamline and simplify dispute resolution while reducing the load on overburdened courts. Care should always be taken to ensure that no party is deprived of access to the courts or of recourse to human intermediaries, and strong failsafe mechanisms will be needed to ensure accuracy and fairness in any dispute resolution process that relies on generative AI. Still, practitioners should remain hopeful that, properly harnessed and programmed, generative AI could become a powerful tool not just for faster and more reliable dispute resolution, but for many other aspects of law practice, such as document review, sorting through voluminous information, and optimising productivity.
How is your jurisdiction preparing to tackle bias and transparency in generative AI tools?
A handful of bills aimed at state regulation of generative AI have been introduced in the Massachusetts legislature. One in particular, S.31, stands out: the bill was written with the help of ChatGPT, the popular generative AI chatbot. Bill S.31 focuses on three major concerns: preventing bias against groups and individuals; ensuring transparency in the use of AI and preventing plagiarism; and preventing unjust data collection from individuals. The bill does not go into detail about what would constitute bias against groups or individuals, but it does require large-scale AI companies to register with the Attorney General’s Office and provide regular risk assessments. Hopefully, those regular risk assessments and reports will be sufficient to mitigate potential bias. The bill also takes specific measures to ensure transparency, such as requiring a distinct watermark to be generated along with any AI-produced text. Several other bills regarding generative AI software have recently been introduced in the Massachusetts State Senate, but none of them focus as much on bias and transparency as S.31 does. Other bills currently in committee, such as S2539 and HD4788, address infrastructure and training in relation to AI software.
Massachusetts Attorney General Campbell has expressed great concern about generative AI software and the need for regulation. Campbell has urged Congress and the Massachusetts State Legislature to accelerate the creation, passage, and implementation of AI-related bills, which bodes well for prompt action on preventing bias and encouraging transparency in this new and rapidly developing technology.