Every day I encounter lawyers jolted by AI.
Some of them jolted in a good way; perhaps readers of this blog.
Others, the less said the better. But things have changed.
Unlike before, when lawyers met technological innovation with indifference, resistance or grudging adoption, this time it hits too close to home.
The elephant in the courtroom.
Document writers (as lawyers are also called) can’t stop being enthralled by ChatGPT.
Even when it does underperform, there is oh so much delight in its streaming text, logical next sentences and vast knowledge pool.
A good intern at the very least, if there ever was one!
What more can I do with this? Can it do this? Can it do that? Why don't we try this?
Or not.
Is my information safe? How accurate is this?
And then the braver (and richer) of you have more interesting questions.
Should we build our own model?
Our law firm has googol MB of data.
We have every agreement in the world.
We are the best in this never-heard-of-niche.
Associates can clean data and prepare JSON files.
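For the curious, here is a minimal sketch of what that "clean data and prepare JSON files" step tends to look like in practice: turning reviewed clauses into JSONL training records. The field names follow a common prompt/completion fine-tuning format, and the clauses and file name are illustrative assumptions, not anyone's actual dataset.

```python
import json

# Hypothetical training records: each pairs a contract clause with the
# reviewer's verdict. Your fine-tuning platform may expect a different schema.
examples = [
    {
        "prompt": "Clause: The indemnity under this Agreement is uncapped.\nIssue?",
        "completion": "Flag: uncapped indemnity; negotiate a liability cap.",
    },
    {
        "prompt": "Clause: Either party may terminate on 30 days' written notice.\nIssue?",
        "completion": "OK: standard termination-for-convenience clause.",
    },
]

# JSONL format: one JSON object per line.
with open("training_data.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```

Multiply this by thousands of clauses, each needing a lawyer's judgment in the completion field, and you start to see why the data prep alone is a serious undertaking.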
There are good reasons to, and not to.
But from where I am standing today, I would rather not. Why?
Not because I like being dependent on OpenAI.
Not because I don’t think it is possible to build a useful language model.
If you want to get into this a little deeper, I will point you to a thread that has raised a storm in the AI world.
TLDR:
All this is apparently based on a credible internal leak from Google’s top AI brains.
While open-source models trained on sufficiently large datasets can accomplish narrowly focused tasks to a high level of accuracy, nothing today comes anywhere close to OpenAI's large language models for generative text use cases.
Legal problems in particular demand broad, multidimensional abilities, which favours large general-purpose models over narrowly trained ones.
I would go one step further and say that there is nothing (I have seen) that comes close to OpenAI's GPT models for contract review use cases either.
For those who want to bicker with me: show me one that works as well.
If someone is building this, my question is - why bother?
The moat is rising; it's an uphill fight, and will be for some time to come.
Instead, your time might be better spent leveraging OpenAI's (or similar) technology for more of your work. There is great implementation work happening as we speak. You don't want to miss the bus.
In other words - buy the plane ticket, not the plane, if you want to travel.
On the subject of contract/document review, we ran an interesting experiment a few days ago, and some of you volunteered.
It was simple.
You have a contract that you often review at work, and you know the 4, 5 or 15 things that you should care about when you look at that type of contract. The contract might be 100 pages, 200 pages, doesn’t matter. Give it to us, and we will show you what GPT can do for document review.
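The core of that workflow can be sketched in a few lines: pair each checklist item with the contract text to produce one review question per item. The checklist questions, placeholder contract text, and helper name below are all illustrative assumptions, not the actual pipeline we used in the experiment.

```python
# Hypothetical checklist: the "4, 5 or 15 things" a reviewer cares about
# for this contract type.
CHECKLIST = [
    "Is the indemnity capped, and at what amount?",
    "What is the notice period for termination?",
    "Which law governs the agreement?",
]

def build_review_prompts(contract_text: str) -> list[str]:
    """Pair each checklist question with the contract text."""
    return [
        f"Contract:\n{contract_text}\n\nQuestion: {q}\nAnswer briefly, citing the clause."
        for q in CHECKLIST
    ]

prompts = build_review_prompts("(contract text goes here)")
# Each prompt would then be sent to a model such as GPT via the OpenAI API.
# For a 100- or 200-page contract, you would first split the document into
# chunks that fit the model's context window and retrieve the relevant
# chunks per question before asking.
```

The point is that the lawyer's expertise lives in the checklist, not in a custom model; the heavy lifting of reading 200 pages against it is what GPT takes over.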
We closed the window on receiving data last Friday, and will now be sharing the results.
Those of you who volunteered will be the first to know what it means when I say - Buy the tickets to the show!
The rest of you, what can I say?
Start your own show?
Don’t. You have far too much to do.