What I Learned at the Access to Justice Conference: Why AI Is Transforming the Future of Law

Matt
Founder of BoardWise

This past week, I had the privilege of attending the International Forum on Access to Justice Conference at NYU Law and Fordham Law, a gathering of scholars, judges, practitioners, and advocates who share a common goal: making justice accessible to everyone, not just those who can afford it.
The conversations were eye-opening. They also confirmed what I've been building toward with BoardWise: artificial intelligence is no longer a distant idea; it is already reshaping how people interact with the legal system.
The Promise of AI in Justice
Across the panels and hallways, AI was described less as a "what if" and more as a "what now." Concrete applications are already emerging, including:
- Guided document drafting for people responding to licensing boards, housing courts, and debt collection suits.
- Plain-language explanations of rights, obligations, and procedures that are normally hidden behind jargon.
- Rapid triage and intake for legal aid organizations, making scarce resources stretch further.
- Predictive insights that help courts manage caseloads and improve consistency.
The common theme? AI will not replace human judgment, but it can reduce friction, save time, and make access to justice faster and fairer for people who otherwise face the system alone.
Arguments in Support (and Skepticism)
Most speakers expressed optimism, but the skeptics raised familiar points:
- "AI hallucinates."
- "Who is accountable if it makes a mistake?"
- "Isn't law too complex to entrust to technology?"
Supporters responded with a dose of realism: AI doesn't have to be perfect; it only has to be better than no help at all, which is the current reality for millions of Americans. And the tools keep improving: their accuracy is often already on par with the rushed, under-resourced support many people receive today.
I made sure to press one issue that rarely gets addressed: if accountability is the standard by which AI is judged, then we must also confront the unaccountability of our own officials. Judges, prosecutors, and other government officials enjoy broad immunity doctrines that often shield them even when rights are violated. If we tolerate unaccountability in human systems of power, it rings hollow to reject AI on those grounds.
Why the A2J Community Is Hopeful
What struck me most was the overwhelmingly positive tone across the A2J community:
- Scholars emphasized how AI can help us finally scale solutions that have been talked about for decades.
- Judges acknowledged that their courtrooms are overflowing, and technology offers relief.
- Non-lawyer advocates spoke about how AI tools give ordinary people the ability to understand and assert their rights.
Rather than fear, there was a sense of opportunity — not to replace the human dimension of justice, but to rebalance it.
The Bigger Picture
The legal system is facing a legitimacy crisis. When people can't afford lawyers, when officials operate with immunity, and when outcomes often depend more on money than merit, respect for the rule of law erodes.
AI will not fix all of this. But it is a disruptive force that the system has invited upon itself. By failing to be accessible, transparent, and accountable, the system created the conditions for disruption. Now, innovators, scholars, and advocates are stepping in to fill that void.
Closing Reflection
At the conference, it became clear to me: AI is not an enemy of justice. It is a tool — one we must wield carefully, but also boldly. Used wisely, AI can help restore trust by empowering the very people the legal system has too often failed.
At BoardWise, this is our mission: to use technology not to replace lawyers, but to open the doors of justice wider than they've ever been before.