Speakers:
- Keren Farkas, Chief Access to Justice Officer with the Oregon State Bar
- Quinten Steenhuis, Practitioner in Residence and Adjunct Professor, Legal Innovation and Technology Lab, Suffolk University Law School
- David Neumeyer, President, Virginia Access to Justice
- Kirsten Dunham, Executive Director, Mid-Missouri Legal Services
At this year’s ITC Conference in San Antonio, one session stood out for me as both deeply practical and genuinely hopeful about the future of legal services: “Triage, Referral, and Intake the AI Way.” This session brought together leaders who are actively exploring how artificial intelligence can reshape the way legal aid organizations handle intake — a process that has historically been time-intensive, frustrating for clients, and difficult to scale.
The questions they tackled weren’t theoretical. They were grounded in real data, real challenges, and real people seeking help.
The Problem: Intake Overload Across the Country
One theme that emerged repeatedly was scale.
Legal aid organizations are swamped. In Virginia, for example, staff handle roughly 18,000 calls each year, with clients waiting up to two hours for help — all with a lean team of just seven paralegals and two attorneys. Similarly, Oregon faces more than 100,000 calls annually, and rural programs like Mid-Missouri Legal Services rely heavily on phone intake because community members often lack alternative access points.
Across these contexts, panelists described the same core issue: traditional intake systems weren’t built to handle the volume, and as demand for services continues to grow, the process simply isn’t sustainable.
Voice-First AI Intake: A New Way to Start the Conversation
One of the most intriguing solutions came from the Virginia Access to Justice team, which is piloting a voice-first AI intake system. Rather than making clients wait for a live staff member, callers interact with an AI that listens and collects information in a natural, conversational way.
Here’s how it works:
- Callers describe their legal issue aloud in their own words.
- The AI records and interprets the response, accurately capturing key details and classifying the issue.
- It then recommends how the matter should be routed — including passing it to an intake paralegal for follow-up.
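The listen-classify-route flow described above can be sketched in code. This is a purely illustrative toy, not the Virginia system: every name here (`classify_issue`, `route_caller`, the keyword tables) is a hypothetical stand-in, and a simple keyword match plays the role of the AI model.

```python
from dataclasses import dataclass

# Toy keyword table standing in for an AI classifier (illustrative only).
ISSUE_KEYWORDS = {
    "evict": "housing",
    "landlord": "housing",
    "custody": "family",
    "divorce": "family",
    "debt": "consumer",
}

# Where each issue area gets routed for human follow-up (hypothetical routes).
ISSUE_ROUTES = {
    "housing": "housing intake paralegal",
    "family": "family law intake paralegal",
    "consumer": "consumer law intake paralegal",
}

@dataclass
class IntakeResult:
    issue_area: str
    route: str

def classify_issue(transcript: str) -> str:
    """Classify a caller's own-words description into a legal issue area."""
    text = transcript.lower()
    for keyword, area in ISSUE_KEYWORDS.items():
        if keyword in text:
            return area
    return "unclassified"

def route_caller(transcript: str) -> IntakeResult:
    """Map a transcribed call to an issue area and a follow-up route."""
    area = classify_issue(transcript)
    # Anything the classifier can't place still reaches a person.
    route = ISSUE_ROUTES.get(area, "general intake queue for human review")
    return IntakeResult(issue_area=area, route=route)

result = route_caller("My landlord is trying to evict me next week")
print(result.issue_area, "->", result.route)
```

The design point the sketch preserves is the one the panelists emphasized: the AI recommends a route, and a paralegal remains the destination rather than being replaced.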
What makes this approach compelling is not only its efficiency but its design: the voice is realistic and conversational, clients can speak in multiple languages, and the system improves over time through iterative testing. Rather than replacing people, this AI acts as a first step in a humane, responsive intake process — one that preserves dignity while saving staff time.
Form-Driven AI: Reducing Barriers Online
Oregon’s approach leaned into online, AI-powered forms as a self-service intake pathway. The idea is simple: move more people to a self-service channel where structured prompts let them describe their situation in their own words.
AI then:
- Identifies the legal issue
- Asks follow-up questions to clarify needs
- Helps guide the person to the right attorney or resource
- Or, when appropriate, gives clear guidance if they aren’t eligible
Oregon’s goals are not just efficiency but also accuracy and accessibility. The team is planning a soft launch to gather feedback before expanding. One challenge they noted is behavioral: many people still instinctively pick up the phone. Encouraging users to try online intake will require thoughtful guidance and outreach.
Conversational AI in Rural Communities
In Mid-Missouri, leaders are adopting conversational text AI to meet the needs of rural clients. Rather than rigid multiple-choice forms, users can type their concerns in their own words, and the system classifies the legal area and checks eligibility — while letting users correct or override the AI’s interpretation.
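The classify-then-override pattern described above can be made concrete with a short sketch. This is hypothetical code loosely modeled on the design described in the session, not Mid-Missouri's actual system: the function names, the keyword classifier, and especially the income thresholds are all made up for illustration.

```python
from typing import Optional

def classify_area(message: str) -> str:
    """Toy stand-in for an AI classifier over free-text input."""
    text = message.lower()
    if "evict" in text or "landlord" in text:
        return "housing"
    if "custody" in text or "divorce" in text:
        return "family"
    return "other"

def check_eligibility(household_size: int, monthly_income: float) -> bool:
    """Illustrative income screen; the thresholds are invented for the sketch."""
    # Hypothetical: $1,500 base plus $500 per additional household member.
    limit = 1500 + 500 * (household_size - 1)
    return monthly_income <= limit

def intake(message: str, household_size: int, monthly_income: float,
           user_override: Optional[str] = None) -> dict:
    """Classify the issue, let the user override, then screen eligibility."""
    ai_guess = classify_area(message)
    # The user's correction always wins over the AI's interpretation.
    area = user_override or ai_guess
    return {
        "ai_guess": ai_guess,
        "final_area": area,
        "eligible": check_eligibility(household_size, monthly_income),
    }

print(intake("My landlord changed the locks", 2, 1800))
```

The `user_override` parameter is the point of the sketch: keeping the AI's guess and the user's correction as separate fields means the system can both respect the client's judgment and log where the classifier was wrong, which feeds the kind of iterative testing the panelists described.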
Their rollout has been pragmatic: a soft launch last fall, with a handful of trusted community partners helping test the system and identify bugs. This cautious approach has helped shape a model that balances innovation with real-world feedback.
Their work also raised important questions that are worth broader reflection:
- Should clients be able to override AI assessments?
- How does automation affect trust and satisfaction?
- How accurate does the data need to be before a human reviews it?
- If intake becomes available 24/7, will it overwhelm staff?
These aren’t minor concerns — they go to the heart of ethics, accessibility, and client experience.
Looking Ahead: AI as a Partner, Not a Replacement
What resonated most from this session was the honest way these leaders approached AI: not as a magic solution, but as a tool that — when used thoughtfully — can reduce burden on staff and improve access for clients. Across models — voice, form, and conversational systems — the shared goal was the same: increase access to justice while preserving quality and dignity in human connection.
Technology alone won’t close the justice gap, but it can be a powerful partner in a system where demand far outstrips capacity. These early pilots show promise, and they remind us that innovation should always be measured by how it helps people — especially those who have long been underserved.
The future of intake is not automated instead of human care, but automated to better support human care.