ChatGPT (Generative Pre-trained Transformer) uses a large language model (LLM) to imitate a human response to a query.
Many consider the technology impressive for its ‘deep learning’ capabilities, but users should be wary, as the program is far from achieving ‘artificial general intelligence’.
The technology has been known to absorb and reflect wildly inaccurate information (as a result of both general and targeted injections of online misinformation), and to ‘hallucinate’ unsound responses owing to the limits of its underlying algorithms.
Potential uses of ChatGPT and other LLM technologies
Despite these shortcomings, ChatGPT’s LLM capabilities still offer considerable benefits to lawyers and expert witnesses, and are likely to become part of everyday legal practice, if they have not already. Here are five key functions ChatGPT can help lawyers and expert witnesses perform faster.
- Digitisation of documents
In the context of expert witness reporting in particular, even the more elementary capabilities of LLMs promise to revolutionise expert witness workload capacity and the level of detail in report analysis. ‘Document Artificial Intelligence’ and ‘Optical Character Recognition’ now operate with significantly greater capacity to recognise the finer graphic forms and lexical patterns common in professional or technical material – e.g., a table of results, receipts, key prognoses and notes from a medical or mechanical examination, and voice note recordings.
The technology therefore offers much more powerful and accurate conversions to PDF or other readable document formats, patching over previous areas of illegibility (e.g., text in irregularly formatted documents) with predictions drawn from a wide base of comparable texts and language patterns, thereby allowing much more material (e.g., scans, voice recordings) to be digitised. Experts will no doubt see a vast field of advantages in this unprecedented ease of format transferability, ranging from document searchability to ease of retrieval and safekeeping in digital storage.
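To make the ‘patching’ idea concrete, here is a deliberately simplified Python sketch – not an actual Document AI or OCR pipeline – in which illegible characters in OCR output are filled in by matching against a hypothetical list of comparable known terms:

```python
import re

def patch_illegible(ocr_text, reference_terms):
    """Fill illegible characters (marked '?') in OCR output.

    A toy stand-in for corpus-based prediction: a word containing '?'
    is replaced only when exactly one reference term matches its
    legible characters; ambiguous words are left unchanged.
    """
    def matches(word):
        # '?' becomes a single-character wildcard
        pattern = re.compile(re.escape(word).replace(r"\?", "."))
        return [term for term in reference_terms if pattern.fullmatch(term)]

    repaired = []
    for word in ocr_text.split():
        if "?" in word:
            found = matches(word)
            repaired.append(found[0] if len(found) == 1 else word)
        else:
            repaired.append(word)
    return " ".join(repaired)
```

For example, `patch_illegible("gross negl?gence", ["negligence", "diagnosis"])` returns `"gross negligence"`; a real system would of course predict from a vastly larger base of comparable texts.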
- Conceptual analysis of document contents
Because LLMs such as ChatGPT operate by detecting graphic and language patterns, the technology can also produce richer data on the contents of documents. Probability modelling in Document AI relates words and terms with reference to texts embedded in real-life contexts – it can thus scan the contents of files and suggest concepts or ideas that use similar formulations across a spectrum of already-existing real-world texts. This ability to ‘read’ content in a comparative manner equips experts with a digitised means of performing thematic or conceptual analyses of documents (as opposed to mere word and character searches).
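As a rough illustration of this comparative ‘reading’, the following Python sketch – a toy stand-in for the probability modelling described above, with made-up concept labels and example formulations – scores a passage against a typical formulation of each concept using bag-of-words cosine similarity:

```python
from collections import Counter
import math

def cosine_similarity(a, b):
    """Cosine similarity between two texts treated as bags of words."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def suggest_concepts(passage, concept_examples, threshold=0.3):
    """Suggest concepts whose example formulation resembles the passage.

    `concept_examples` maps a concept label to a typical real-world
    formulation; `threshold` is an illustrative cut-off, not a tuned value.
    """
    return [label for label, example in concept_examples.items()
            if cosine_similarity(passage, example) >= threshold]
```

Given, say, a ‘breach of duty’ concept exemplified by ‘failure to exercise reasonable care and skill’, the passage ‘the engineer failed to exercise reasonable care’ would be flagged even though it never uses the label itself – the kind of match a plain word search misses.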
- Document categorisation
ChatGPT and a number of alternative LLMs already enable users, or are on the way to enabling them, to sort documents according to their relevance to certain aims and concepts, and to create data sets from the distribution of certain ideas – and even, purportedly, sentiments – expressed throughout a text.
The fact that LLMs will enable expert witnesses to conduct these searches in a matter of minutes or seconds – i.e., seemingly in an instant compared with the rate of manual reading and organisation – is likely to expedite a number of aspects of expert witness practice, particularly those associated with document categorisation (e.g., hours spent tracing correspondence threads or separating positive technical reports from ones signposting defects).
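A minimal sketch of this kind of sorting – far cruder than what an LLM does, and with hypothetical labels and keyword sets – might look like this in Python:

```python
def categorise(documents, categories):
    """Assign each document to the category whose keywords it shares most.

    `documents` maps document names to their text; `categories` maps
    category labels to sets of keywords. Ties go to whichever category
    appears first in `categories`.
    """
    assignments = {}
    for name, text in documents.items():
        words = set(text.lower().split())
        assignments[name] = max(categories,
                                key=lambda label: len(words & categories[label]))
    return assignments
```

Run over a folder of technical reports, a rule like this could, for instance, separate reports signposting defects from clean ones in seconds rather than hours.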
- Assessment of suitability
At the preliminary conference in particular, Document AI could prove especially useful in providing an overview of a brief, or in responding to technical inquiries about its contents, allowing experts to determine their suitability for a matter without the risk of being taken by surprise by the hours required to make that assessment in the first place.
- Drafting reports
The uses of Document AI are also innumerable in the context of expert report drafting and analysis. Especially in areas of expertise requiring the review of paper trails (mechanical job cards, medical reports, financial reports, correspondence), the ability to identify quickly and precisely the invocation of rules or concepts, or the expression of certain sentiments, across a timeline could prove invaluable to experts investigating purported patterns of conscientious behaviour or negligence.
The reduction in hours, coupled with easier and therefore potentially far more intricate document and statistical analysis, affords experts a greater capacity to home in on substantive issues – e.g., how to portray the circumstances and nature of the decision-making under scrutiny – rather than merely matching correspondence and decisions to concurrent events.
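As a simplified illustration of tracing the invocation of terms across a paper trail, the following Python sketch – using hypothetical correspondence entries – maps each rule, concept, or sentiment word of interest to the dates on which it appears:

```python
import re
from collections import defaultdict

def trace_mentions(entries, terms):
    """Map each term of interest to the dates on which it appears.

    `entries` is a list of (date, text) pairs, e.g. a correspondence
    thread; `terms` is a list of rules, concepts, or sentiment words
    to trace across the timeline.
    """
    timeline = defaultdict(list)
    for date, text in sorted(entries):  # chronological order by date string
        for term in terms:
            # whole-word, case-insensitive match
            if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
                timeline[term].append(date)
    return dict(timeline)
```

A timeline like this – e.g., showing ‘fault’ raised months before ‘repair’ was first discussed – is exactly the kind of pattern an expert might probe when assessing purported conscientiousness or negligence.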
Limitations of ChatGPT and other LLM technologies
Despite its overall efficacy with basic administrative tasks, however, we urge experts to take care when integrating AI into their reporting processes, particularly where the technology contributes significantly to the formation of an expert opinion or the drafting of it.
LLMs like ChatGPT, as noted earlier, lack the intelligence to perform more complex critical and interpretative work. They’ve also been shown to produce false information, as a New York lawyer recently discovered.
The design of the technology presumes that interpretation is a process of selecting and applying knowledge about the relevant context to which the contents of a file belong. LLMs attempt to reproduce this process by scanning contents with reference to their own contextual network of word relationships, modelled through algorithmic processing of texts.
LLMs are no substitute for human problem-solving
The current inability of this modelling to replicate the richer, experience- and context-driven activity involved in human problem solving and interpretation becomes apparent when we consider the work of the most effective expert witnesses on high-profile cases, e.g., class actions. Plaintiff Class expert witnesses who are effective in challenging the opinions of Defendant expert witnesses often succeed not on the basis of the Defendant expert’s trivial mistakes (e.g., calculation errors), but on the basis of alternative methodologies for interpreting the facts – methodologies more closely related to the wisdom of practice, yet still supported by available and scientifically valid theory.
In those cases, providing the winning opposing opinion relies on the ability to distinguish trivial from significant problems in argumentation, and to select the most relevant method of evaluating events as part of a professional practice. Defendant experts, in contrast, often rely on dominant theoretical knowledge about phenomena that seems to represent the correct approach to design, evaluation, or professional practice in purely technical contexts (often not coincidentally: in class actions against car manufacturers, for example, reliance on mechanical engineering concepts alone to justify a design may have been intended from the outset to deceive or mislead consumers about design flaws obvious in vehicle operation).
Expert witnesses bring more than just data to the table
LLMs are likely to be defenceless against these Defendant-class theoretical arguments because a critique of those opinions involves something more than an LLM’s data set and probability modelling can arguably yet provide: the tricks of the trade of many expert witnesses, and the knowledge supporting those professional intuitions, escape the more plainly aggregative and less discriminating interpretative understanding of AI. Experts should therefore always prefer their personal professional understanding of a case over differing explanations provided by ChatGPT and other LLMs – not only as a safeguard against false information easily absorbed by the AI technology, but also as a way of ensuring they bring their particular “knowledge, skills and experience” as expert witnesses in the first place.
Finally, experts should be wary of the privacy and confidentiality risks they expose themselves to when using LLMs. There is no clear assurance at the moment that sensitive prompts, queries and documents are invisible to LLM platform providers and creators, and this material will almost certainly be used to develop the AI technology later on. Expert witnesses should therefore avoid entering prompts, queries or documents into these platforms that might identify a case or expose other critical or confidential information about it.