USA: Multiple state bars set guidelines for generative AI use in law practice

Since the public debut of generative artificial intelligence (AI) about 18 months ago, proponents and detractors have saturated the media with breathless commentary about the technology's promise and peril in the legal profession. On the one hand, a reported 44% of all legal tasks could be replaced by generative AI; on the other, generative AI 'hallucinates,' producing fake but convincing-sounding case citations that have led to lawyers being sanctioned. So, which is it?

And importantly, how should lawyers navigate this new landscape? Shun AI and risk falling behind the competition? Or embrace it and get too far out over their skis?

This choice raises both practical and ethical questions. While the practicalities are still a work in progress - as new use cases and applications are hitting the market every day - the ethical questions are beginning to take shape. Lawyers should be aware of how to use generative AI tools responsibly and ethically, maintaining compliance with professional rules of conduct as required by their respective state bars. Several state bar associations have now issued guidance. Dr. Christian Mammen, Vincent Look, and Dr. Seiko Okada, of Womble Bond Dickinson, discuss this guidance and how the practice of law may evolve with the increasing use of generative AI.  

Introduction of new guidelines 

On November 16, 2023, the State Bar of California approved guidelines to help lawyers navigate the use of generative AI in light of their ethical obligations. The California guidance, titled 'Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law,' measures the use of generative AI against the state bar's 'Rules of Professional Conduct.'

On January 19, 2024, the Florida Bar released its own guidance, titled 'Professional Ethics of the Florida Bar, Proposed Advisory Opinion 24-1.' The Florida guidance refers to some existing ethics opinions in Florida and other states as instructive in the AI context.

Additionally, on October 27, 2023, the State Bar of Michigan released Ethics Opinion JI-155, counseling that judges must balance a duty of competence to understand and properly use technology (including AI) with the need to set boundaries ensuring such technology is used within the confines of the law and court rules. Finally, on January 24, 2024, New Jersey issued the 'Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers.'

These state bars agree that lawyers may use generative AI in the practice of law, provided they take particular caution regarding the risks and duties associated with such use, and they offer similar guidance on the duties of confidentiality and competence. As the New Jersey guidelines summarize, 'While AI does not change the fundamental duties of legal professionals, lawyers must be aware of new applications and potential challenges in the discharge of such responsibilities.'

In all guidelines issued so far, some common themes emerge. 

Duty of competence in generative AI use 

The duty of competence requires lawyers to keep abreast of changes in the law and its practice, including both the risks and benefits associated with relevant technology. In other words, at some point, lawyers may find that the duty of competence encompasses an affirmative duty to embrace and use generative AI technologies. 

Current guidance is more cautious, however. Both California's and Florida's guidance warn against overreliance on AI tools, noting that the output of generative AI is not without error, and that generative AI can 'hallucinate.'  

According to California's competency guidance, lawyers must understand how the generative AI solution works, including its limitations and potential use of client data. Further, lawyers cannot simply trust that the output from the generative AI tool is correct, but must review and analyze these outputs to support 'the interests and priorities' of the client. Importantly, the guidelines state that the 'duty of competence requires more than the mere detection and elimination of false AI-generated results.' In other words, lawyers cannot overrely on the generative AI solution, because doing so would essentially result in a delegation of lawyers' professional judgment to generative AI, which should remain lawyers' responsibility at all times.  

Florida's competency guidance similarly requires that lawyers make reasonable efforts to ensure that the 'conduct' of generative AI is compatible with the lawyers' own professional obligations. Lawyers must review work that is a product of generative AI, just as they would the work of a non-lawyer assistant such as a paralegal. Lawyers are ultimately responsible for the work product that is created, regardless of generative AI's role. In other words, the responsible lawyer must verify the accuracy and sufficiency of work performed by generative AI. This aligns with the duty to supervise non-lawyer assistance set forth in the American Bar Association's 'Model Rules of Professional Conduct.' Further, the Florida guidance adds that a lawyer may not delegate to generative AI acts that constitute the practice of law, such as the negotiation of claims or other legal tasks that require the lawyer's personal judgment and participation.

The Michigan judicial guidance particularly emphasizes these aspects of the duty of competence and their potential interplay with the judicial duty of impartiality. 

Duty of confidentiality in generative AI use 

Under California's confidentiality guidance, a lawyer must not input any confidential client information into a generative AI solution unless the lawyer knows that the AI tool provider will neither share the information with others nor use it for its own purposes. In addition, lawyers must anonymize inputs to generative AI so that they do not identify a client. In other words, lawyers should treat the 'prompt' of a generative AI tool like the ears of a stranger.

California's guidance recommends that lawyers disclose to their clients their intent to use generative AI in the client's representation, including how it is used, risks, and benefits.  

Florida's confidentiality guidance similarly asserts that a lawyer must understand how the generative AI tool is going to use the input. Lawyers should ensure that the provider of the generative AI tool will preserve the confidentiality of confidential information that a lawyer inputs to the generative AI tool. Under the Florida guidance, lawyers should determine whether the provider retains the confidential information after it is submitted, as well as investigate the provider's reputation, security measures, and policies.  

Florida's confidentiality guidance adds that a lawyer should not try to access information that is input to the generative AI tool by other lawyers. For example, a lawyer should not try, via 'prompt engineering' or otherwise, to tinker with a generative AI tool to extract confidential information that was input by another lawyer into the generative AI tool.  

Similarly to California, Florida's guidance recommends that lawyers obtain informed consent from their client regarding the lawyer's intent to use a third-party generative AI program and associated risks, unless the use of generative AI does not involve the disclosure of confidential information to a third party. In this context, Florida's guidance recommends use of an in-house generative AI tool, rather than outside (third-party) generative AI, as use of an in-house tool may mitigate the confidentiality concerns related to generative AI use. 

The New Jersey guidelines recognize that there are now a number of generative AI tools on the market specifically developed for lawyers and optimized for the confidentiality and security requirements of the legal services industry. Thus, rather than requiring that lawyers know how the AI tool works, the New Jersey guidance merely states that '[a] lawyer is responsible for ensuring the security of an AI system before entering any non-public client information.' 

Advertising, billing, discrimination 

Florida guidance particularly cautions against the role of generative AI in potentially creating a lawyer-client relationship without the lawyer's knowledge. The Florida guidance provides that generative AI that interfaces with clients (e.g., an AI chatbot) should provide disclaimers and properly identify itself as a chatbot. The guidance warns that an overly welcoming generative AI chatbot may inadvertently create a lawyer-client relationship and improperly provide legal advice.  

Similarly, with respect to lawyer advertising and AI, a lawyer must inform prospective clients when an AI program (e.g., an AI chatbot) is used for advertising or intake purposes. The lawyer is ultimately responsible if the chatbot provides misleading information to prospective clients or communicates in a manner that is intrusive or coercive. The lawyer must inform the prospective client that they are communicating with an AI program rather than a lawyer or law firm employee. 

Regarding billing a client while leveraging generative AI, California guidance provides that a lawyer may use generative AI to more efficiently create work products and may charge for actual time spent on the legal work. While the time that is charged may include crafting or refining generative AI inputs or reviewing and editing generative AI outputs, the lawyer must not charge hourly fees for the time saved by using generative AI. A fee agreement should explain the basis for all fees and costs, including those associated with the use of generative AI.

Florida's legal fee and billing guidance similarly provides that while generative AI may increase lawyer efficiency, this increase may not be used to inflate claims of time. The Florida guidance suggests that contingent fee arrangements and flat billing rates may be negotiated to share the benefit of this increased efficiency between the client and the lawyer.

Regarding candor to the tribunal, the California and Florida guidance provide that a lawyer must review all submissions made to the court for accuracy, including analysis and citations to case law. Generative AI already has a history of 'hallucinating,' or making up non-existent case law and bogus quotations.

Both the California and New Jersey guidance mention that, with respect to the prohibition of discrimination, harassment, and retaliation, some generative AI is trained on biased information, and lawyers should be aware of possible biases and the risks they may create when using generative AI (e.g., to screen potential clients or employees). The Florida guidance is silent on this topic.  

Conclusion 

Overall, the pronouncements by various state bars provide overlapping guidance regarding the use of generative AI in the legal space. This is expected and reassuring given the general similarity in legal ethics among the states.  

Other state bars are also working on AI guidance. For example, in the Fall 2023 issue of the North Carolina State Bar Journal, the North Carolina State Bar published an article by its ethics counsel listing key ethical considerations for the use of AI in the legal profession. The New York State Bar Association has created the Task Force on Artificial Intelligence to 'review AI-based software, generative AI technology, and other machine learning tools that may enhance the profession and that pose risks for individual attorneys dealing with new, unfamiliar technology.' Although the practice of law is governed by states, federal guidance may help provide consistency across the nation.  

Doubtless, as this revolutionary technology continues to develop, the ways in which lawyers must think about it in relation to their own ethical obligations will also continue to evolve. Inevitably, some questions about the boundaries of regulations governing the unauthorized practice of law will also emerge. For now, one can imagine self-service generative AI chatbots that could run the risk of providing 'legal advice' without the supervision and intermediating judgment of a licensed attorney. Ultimately, regardless of generative AI use, the lawyer maintains full responsibility for the 'practice of law' and for providing competent legal counsel.  

As generative AI tools become increasingly capable, the line between generative AI outputs and legal work products may blur further. The starting point for the analysis should continue to be an application of the existing rules of ethics. In most (or perhaps all) cases, this will suffice (for now).  

Dr. Christian Mammen Managing Partner 
[email protected]  
Vincent Look Associate 
[email protected]
Dr. Seiko Okada Associate 
[email protected]
Womble Bond Dickinson, California and North Carolina 
