AI Use Disclosure

For Qualified Solicitors

AI Use Disclosure and Declaration Requirements

E-Solicitors Legal Services Marketplace

─────────────────────────────────────

Version 1.0 - January 2026

England and Wales

Important Notice to Solicitors

  • YOU MUST DISCLOSE AND DECLARE TO CLIENTS WHETHER YOU HAVE USED ARTIFICIAL INTELLIGENCE IN THE DELIVERY OF LEGAL ADVICE BEFORE PROVIDING THAT ADVICE.

The AI Transparency Principle

The use of Artificial Intelligence (AI) in legal practice is increasingly common. While AI can enhance efficiency and service delivery, clients have the right to know when AI has been used in their matter. This reflects core principles of honesty, transparency, and enabling clients to make informed decisions.

Platform AI Disclosure Requirements

  • REQUIRED: You must disclose to clients BEFORE providing advice whether AI was used

  • REQUIRED: You must specify the NATURE of AI use (research, drafting, review, etc.)

  • REQUIRED: You must confirm human review and professional judgment was applied

  • REQUIRED: You must obtain informed client consent for AI use

  • REQUIRED: You must maintain records of AI use and disclosures

Why This Matters

🤖 AI tools can produce inaccurate or 'hallucinated' information

🤖 Client data may be processed by AI systems with confidentiality implications

🤖 Professional judgment must be applied to AI outputs

🤖 Clients may have preferences about AI use in their matter

🤖 Transparency supports public trust in the profession

Regulatory Basis

These requirements derive from:

SRA Principle 2: Uphold public trust and confidence

SRA Principle 4: Act honestly

SRA Principle 7: Act in best interests of each client

SRA Code Rule 3.2: Competent and timely service

SRA Code Rule 8.6: Information for informed decisions

SRA Code Rule 8.11: Communications not misleading

Law Society AI Guidance 2024

Contents

Part A: General Terms and Definitions

  1. Definitions - AI and Related Terms

  2. Scope of This Agreement

  3. Platform Position on AI

Part B: AI Disclosure Requirements

  4. Mandatory Disclosure Obligation

  5. Timing of Disclosure

  6. Content of Disclosure

  7. Form of Disclosure

Part C: AI Use Categories

  8. Categories of AI Use

  9. AI for Legal Research

  10. AI for Document Drafting

  11. AI for Document Review

  12. AI for Communication

  13. AI for Case Analysis and Prediction

Part D: Client Consent

  14. Informed Consent Requirements

  15. Client Right to Refuse AI

  16. Vulnerable Clients

  17. Documenting Consent

Part E: Competence and Supervision

  18. Professional Competence with AI

  19. Human Review Requirements

  20. Quality Assurance

  21. AI Hallucinations and Errors

Part F: Confidentiality and Data Protection

  22. Confidentiality Obligations

  23. Data Protection Requirements

  24. AI System Selection

  25. Third-Party AI Providers

Part G: Liability and Insurance

  26. Professional Responsibility

  27. PII Coverage for AI Use

  28. Liability for AI Errors

Part H: SRA Compliance

  29. SRA Principles and AI

  30. SRA Code of Conduct

  31. Reporting AI Issues

Part I: Law Society Requirements

  32. Law Society AI Guidance

  33. Best Practice Standards

Part J: FCA and Consumer Protection

  34. FCA Consumer Duty

  35. Consumer Rights

Part K: Record Keeping

  36. AI Use Records

  37. Disclosure Records

  38. Retention Requirements

Part L: General Provisions

  39. Warranties

  40. Suspension and Termination

  41. Governing Law

Schedules

Schedule 1: AI Use Declaration Form

Schedule 2: Client AI Disclosure Template

Schedule 3: AI Use Categories and Disclosure Requirements

Schedule 4: Client Consent Form - AI Use

Schedule 5: AI Quality Assurance Checklist

Schedule 6: AI Risk Assessment Template

Schedule 7: AI Use Policy Template

Schedule 8: AI Red Flags and Prohibited Uses

Part A: General Terms and Definitions

  1. Definitions - AI and Related Terms

1.1 In these Terms, the following AI-related definitions apply:

'AI' or 'Artificial Intelligence' means any software system that uses machine learning, natural language processing, or other computational techniques to perform tasks that would typically require human intelligence, including but not limited to generating text, analysing documents, conducting research, or making predictions.

'AI-Assisted' means work where AI tools have been used to support or enhance human work, but where a qualified solicitor has reviewed, verified, and applied professional judgment to the output.

'AI-Generated' means content, analysis, or output that has been primarily created by an AI system, even if subsequently reviewed by a human.

'Generative AI' means AI systems capable of generating text, images, code, or other content, including Large Language Models (LLMs) such as ChatGPT, Claude, Gemini, and similar systems.

'AI Hallucination' means false or fabricated information generated by an AI system that appears plausible but is factually incorrect, including fabricated case citations, statutes, or legal principles.

'AI Tool' means any software application or service that incorporates AI functionality, including legal-specific tools and general-purpose AI systems.

'Automated Decision-Making' means decisions made solely by automated means without human involvement, as defined in UK GDPR.

'Human-in-the-Loop' means a process where human review and approval is required before AI outputs are used or provided to clients.

'Prompt' means the input or instruction given to an AI system to generate a response.

'Training Data' means the data used to train an AI model, which may include information provided by users.

  2. Scope of This Agreement

2.1 This Agreement applies to:

(a) All use of AI tools in the delivery of legal services through the Platform;

(b) All communications with clients that involve AI-generated or AI-assisted content;

(c) All legal research, drafting, review, or analysis involving AI;

(d) Any automated processes affecting client matters.

2.2 This Agreement covers AI use regardless of whether the AI is:

(a) Integrated into legal practice management software;

(b) A standalone AI service (e.g., ChatGPT, Claude);

(c) A legal-specific AI tool (e.g., legal research AI, contract review AI);

(d) An AI feature within existing software (e.g., AI drafting in Word).

  3. Platform Position on AI

3.1 The Platform acknowledges that:

(a) AI tools can enhance legal service delivery when used appropriately;

(b) Clients have the right to know when AI is used in their matter;

(c) Professional responsibility cannot be delegated to AI;

(d) Transparency about AI use supports public trust;

(e) AI use must comply with all regulatory requirements.

3.2 The Platform requires:

(a) Disclosure of AI use to clients before advice is provided;

(b) Informed client consent for AI use;

(c) Human review of all AI outputs;

(d) Records of AI use and disclosures;

(e) Compliance with confidentiality and data protection obligations.

Part B: AI Disclosure Requirements

  • MANDATORY: You must disclose AI use to clients BEFORE providing legal advice that has been generated or assisted by AI.
  4. Mandatory Disclosure Obligation

4.1 You MUST disclose to the client whenever:

(a) AI has been used to conduct legal research for the matter;

(b) AI has been used to draft or generate documents;

(c) AI has been used to review or analyse documents;

(d) AI has been used to summarise information;

(e) AI has been used to generate or assist with advice;

(f) AI has been used to communicate with the client;

(g) AI has been used for case outcome prediction or analysis;

(h) AI has been used in any other material way affecting the matter.

4.2 Disclosure is required regardless of:

(a) The sophistication or type of AI used;

(b) Whether the AI is legal-specific or general-purpose;

(c) Whether you consider the AI use to be minor;

(d) Whether the AI output was subsequently modified.

  5. Timing of Disclosure

5.1 Disclosure must be made:

(a) BEFORE providing advice that incorporates AI use;

(b) At the earliest practical opportunity;

(c) In advance, if AI use is planned from the outset;

(d) Promptly, if AI use becomes necessary during the matter.

5.2 Initial Disclosure:

(a) If you routinely use AI, disclose this in your client care letter;

(b) Explain the types of AI you may use;

(c) Obtain general consent, with specific consent for material uses;

(d) Provide the opportunity to opt out.

5.3 Ongoing Disclosure:

(a) Update the client if AI use changes materially;

(b) Disclose specific AI uses for significant outputs;

(c) Confirm AI use when providing final advice or documents.

  6. Content of Disclosure

6.1 Disclosure must include:

(a) That AI has been used (or will be used);

(b) The nature and purpose of the AI use;

(c) Confirmation that human review has been applied;

(d) That professional judgment remains with the solicitor;

(e) Any relevant limitations of the AI;

(f) The client's right to request non-AI alternatives.

6.2 Disclosure should also address (where relevant):

(a) Confidentiality measures for data entered into AI systems;

(b) Whether client data will be processed by third-party AI providers;

(c) Any additional risks associated with AI use;

(d) How accuracy has been verified.

  7. Form of Disclosure

7.1 Disclosure must be:

(a) In writing (may be electronic);

(b) Clear and understandable;

(c) Prominent - not buried in lengthy terms;

(d) Retained for records.

7.2 Appropriate methods include:

(a) A specific section in the client care letter;

(b) A separate AI disclosure document;

(c) Email disclosure before providing advice;

(d) A statement on documents indicating AI assistance.

Part C: AI Use Categories

  8. Categories of AI Use

8.1 AI use in legal practice typically falls into these categories:

Category | Examples | Disclosure Level
Legal Research | Case law search, statute analysis, precedent finding | Standard disclosure
Document Drafting | Contract generation, letter drafting, pleadings | Enhanced disclosure
Document Review | Due diligence, disclosure review, contract analysis | Standard disclosure
Summarisation | Case summaries, document summaries | Standard disclosure
Communication | Email drafting, client correspondence | Enhanced disclosure
Case Analysis | Outcome prediction, risk assessment | Enhanced disclosure
Translation | Document translation | Standard disclosure
Transcription | Meeting notes, dictation | Minimal disclosure

  9. AI for Legal Research

9.1 When using AI for legal research:

(a) Verify all case citations independently;

(b) Check that cited cases exist and say what the AI claims;

(c) Verify statutory references are current;

(d) Cross-reference with authoritative sources;

(e) Document verification steps taken.

  • AI systems frequently 'hallucinate' case citations that do not exist. ALWAYS verify citations through official sources (Westlaw, Lexis, BAILII, legislation.gov.uk).

9.2 Disclosure for research should state:

(a) AI was used to assist with legal research;

(b) All citations have been independently verified;

(c) Research has been reviewed and supplemented as necessary.

  10. AI for Document Drafting

10.1 When using AI to draft documents:

(a) Review and edit all AI-generated content;

(b) Ensure drafting reflects the client's specific circumstances;

(c) Apply professional judgment to structure and content;

(d) Verify accuracy of all statements and references;

(e) Ensure compliance with applicable rules and requirements.

10.2 Disclosure for drafting should state:

(a) AI was used to assist with document preparation;

(b) The document has been reviewed and adapted for the client's needs;

(c) Professional responsibility for the document remains with the solicitor.

  11. AI for Document Review

11.1 When using AI for document review:

(a) Understand the AI's capabilities and limitations;

(b) Apply appropriate quality sampling;

(c) Review AI-flagged items and a sample of non-flagged items;

(d) Do not rely solely on AI for critical determinations;

(e) Document the review methodology.

  12. AI for Communication

12.1 When using AI to draft client communications:

(a) Review all AI-drafted communications before sending;

(b) Ensure tone and content are appropriate;

(c) Verify accuracy of any factual statements;

(d) Consider whether disclosure is needed in that communication.

  • Never send AI-generated client communications without human review. AI may produce inappropriate tone, inaccurate information, or confidentiality breaches.
  13. AI for Case Analysis and Prediction

13.1 When using AI for case outcome prediction or analysis:

(a) Understand the basis and limitations of the AI's predictions;

(b) Do not present AI predictions as definitive;

(c) Apply professional judgment to any predictions;

(d) Clearly disclose that AI-assisted analysis was used;

(e) Explain limitations to the client.

  • AI case prediction is inherently uncertain. Clients must understand predictions are indicative only and not guarantees.
Part D: Client Consent

  14. Informed Consent Requirements

14.1 Client consent for AI use must be:

(a) Informed - the client understands what AI use means;

(b) Voluntary - not coerced or presented as mandatory;

(c) Specific - relates to the types of AI use proposed;

(d) Documented - recorded in writing;

(e) Revocable - the client can withdraw consent.

14.2 To give informed consent, the client should understand:

(a) What AI tools may be used;

(b) For what purposes;

(c) What data may be processed by AI systems;

(d) What safeguards are in place;

(e) That human review will be applied;

(f) That they can refuse AI use.

  15. Client Right to Refuse AI

15.1 Clients have the right to refuse AI use in their matter.

15.2 If a client refuses AI use:

(a) You must respect their decision;

(b) You must not use AI for that client's matter;

(c) You may explain any impact on fees or timescales;

(d) You must not discriminate against the client;

(e) You must document the client's preference.

15.3 If you are unwilling to act without AI:

(a) You must be transparent about this;

(b) The client should be given the choice to instruct elsewhere;

(c) You should facilitate an orderly transfer if necessary.

  16. Vulnerable Clients

16.1 For vulnerable clients:

(a) Take extra care to explain AI use clearly;

(b) Consider whether AI use is appropriate;

(c) Ensure the client genuinely understands;

(d) Involve appropriate support persons if needed;

(e) Be particularly careful with automated communications.

ℹ SRA Code Rules 3.4 and 6.2 require consideration of client vulnerability. This extends to ensuring vulnerable clients understand and consent to AI use.

  17. Documenting Consent

17.1 Document AI consent by:

(a) Including AI consent in the client care letter;

(b) Using a separate AI consent form for significant AI use;

(c) Recording consent in attendance notes;

(d) Confirming consent in writing (email is acceptable).

17.2 Retain consent documentation for the file retention period (minimum 6 years).

Part E: Competence and Supervision

  • AI DOES NOT REPLACE PROFESSIONAL COMPETENCE. You remain responsible for all advice and work product, regardless of AI assistance.
  18. Professional Competence with AI

18.1 Under SRA Code Rule 3.2, you must provide a competent service. This means:

(a) Understanding the AI tools you use;

(b) Knowing their capabilities and limitations;

(c) Being able to identify AI errors;

(d) Applying appropriate professional judgment;

(e) Not using AI for tasks beyond your competence to supervise.

18.2 Competence with AI includes:

(a) Training on AI tools before use;

(b) Understanding how the AI generates outputs;

(c) Knowing what types of errors to look for;

(d) Keeping up to date with AI developments;

(e) Recognising when AI use is inappropriate.

  19. Human Review Requirements

19.1 ALL AI outputs must be subject to human review before:

(a) Being provided to clients;

(b) Being filed with courts or tribunals;

(c) Being relied upon for advice;

(d) Being used in negotiations;

(e) Any other material use.

19.2 Human review must be:

(a) Substantive - not merely cursory;

(b) By a competent person - qualified to assess the output;

(c) Critical - actively looking for errors;

(d) Documented - records of review kept.

  • Providing AI output directly to clients without substantive human review

  • Filing AI-generated documents without verification of all facts and citations

  • Relying on AI case analysis without independent professional assessment

  20. Quality Assurance

20.1 Implement quality assurance for AI use, including:

(a) Verification of factual statements;

(b) Independent checking of legal citations;

(c) Review of tone and appropriateness;

(d) Consistency with client instructions;

(e) Compliance with applicable rules and requirements.

20.2 For significant AI use, consider:

(a) Second-person review;

(b) Supervisor sign-off;

(c) Sampling and audit procedures;

(d) Client review before finalisation.

  21. AI Hallucinations and Errors

21.1 AI 'hallucinations' are a known and serious risk. AI may:

(a) Fabricate case citations that do not exist;

(b) Misstate the holdings of real cases;

(c) Invent statutory provisions;

(d) Generate plausible but incorrect legal principles;

(e) Create fictional precedents, judges, or parties.

  • Lawyers have been sanctioned for citing AI-hallucinated cases in court filings. ALWAYS independently verify all citations and legal statements.

21.2 To mitigate hallucination risk:

(a) Verify ALL case citations through authoritative sources;

(b) Check statutory references on legislation.gov.uk;

(c) Cross-reference legal principles with practitioner texts;

(d) Be sceptical of unfamiliar authorities;

(e) Document verification steps.

Part F: Confidentiality and Data Protection

  22. Confidentiality Obligations

22.1 SRA Code Rule 6.3 requires you to keep client affairs confidential. When using AI:

(a) Consider what data you input into AI systems;

(b) Understand how AI providers handle input data;

(c) Use AI systems with appropriate confidentiality protections;

(d) Anonymise or redact sensitive information where possible;

(e) Avoid inputting highly sensitive data into general-purpose AI.

22.2 Key confidentiality considerations:

(a) Some AI systems use input data for training;

(b) Data may be stored by AI providers;

(c) AI providers may be in other jurisdictions;

(d) Enterprise/business versions may have better protections;

(e) Legal-specific AI tools may have sector-appropriate safeguards.

  • Free consumer AI tools often use your input data for training. This may breach client confidentiality. Use enterprise or legal-specific AI with appropriate data protections.
  23. Data Protection Requirements

23.1 Under UK GDPR, when using AI you must:

(a) Have a lawful basis for processing client data through AI;

(b) Inform clients about AI processing in your privacy notice;

(c) Ensure AI providers have appropriate safeguards;

(d) Consider a DPIA for high-risk AI processing;

(e) Comply with data subject rights.

23.2 Automated decision-making:

(a) Article 22 UK GDPR restricts solely automated decisions with legal effects;

(b) Ensure human involvement in significant decisions;

(c) Provide information about automated processing;

(d) Allow data subjects to request human review.

  24. AI System Selection

24.1 When selecting AI systems, consider:

(a) Data handling and privacy policies;

(b) Whether data is used for training;

(c) Where data is processed and stored;

(d) Security certifications and standards;

(e) Enterprise vs consumer versions;

(f) Legal-sector specific tools vs general-purpose AI;

(g) Provider reputation and stability.

  25. Third-Party AI Providers

25.1 When using third-party AI providers:

(a) Review their terms of service and privacy policies;

(b) Ensure appropriate data processing agreements;

(c) Verify security measures are adequate;

(d) Consider jurisdiction and data transfer issues;

(e) Maintain records of providers used.

Part G: Liability and Insurance

  26. Professional Responsibility

26.1 You remain professionally responsible for all advice and work, whether AI-assisted or not.

26.2 This means:

(a) You cannot blame AI for errors in your advice;

(b) You cannot disclaim responsibility because AI was used;

(c) Professional negligence applies regardless of AI use;

(d) You are responsible for verifying AI outputs;

(e) The SRA may take action for AI-related failures.

  • 'The AI got it wrong' is NOT a defence to professional negligence or regulatory action. You are responsible for all work product.

  27. PII Coverage for AI Use

27.1 Ensure your Professional Indemnity Insurance covers AI use:

(a) Review policy terms regarding AI and technology;

(b) Check for any AI-related exclusions;

(c) Notify insurers of significant AI use if required;

(d) Ensure cover is adequate for AI-assisted work;

(e) Seek advice if uncertain about coverage.

27.2 Considerations for insurers:

(a) Some policies may have technology exclusions;

(b) Cyber insurance may also be relevant;

(c) Insurers are developing AI-specific positions;

(d) Disclosure of AI use may be required.

  28. Liability for AI Errors

28.1 If AI causes an error that harms a client:

(a) You are liable as the supervising professional;

(b) The AI provider is unlikely to be liable to your client;

(c) You may have recourse against the AI provider in limited circumstances;

(d) Your PII should respond as for any professional error;

(e) The SRA may investigate as for any service failure.

Part H: SRA Compliance

  29. SRA Principles and AI

29.1 AI use must comply with the SRA Principles:

Principle 1: Rule of Law

AI must not be used to circumvent legal requirements or facilitate improper conduct.

Principle 2: Public Trust

Transparency about AI use supports public trust. Concealing AI use may undermine trust.

Principle 3: Independence

AI must not compromise your independent professional judgment.

Principle 4: Honesty

You must be honest about AI use. Passing off AI work as entirely human may be dishonest.

Principle 5: Integrity

Appropriate AI use, with disclosure and quality controls, demonstrates integrity.

Principle 6: EDI

Be aware of potential AI bias. Ensure AI use does not discriminate.

Principle 7: Best Interests

AI use should serve client interests. Consider whether AI is appropriate for each client.

  30. SRA Code of Conduct

30.1 Key SRA Code rules relevant to AI:

Rule 1.2 - Not abuse position: Do not use AI in ways that abuse your position or clients.

Rule 3.2 - Competent service: Ensure AI use supports (not undermines) competent service.

Rule 3.4 - Client needs: Consider individual client needs regarding AI use.

Rule 6.3 - Confidentiality: Protect confidentiality when using AI systems.

Rule 8.6 - Informed decisions: Clients need information about AI use to make informed decisions.

Rule 8.11 - Not misleading: Communications must not mislead about AI use.

  31. Reporting AI Issues

31.1 Consider whether SRA reporting is required if:

(a) AI use has caused material client harm;

(b) There has been a significant confidentiality breach via AI;

(c) AI-related issues reveal systemic problems;

(d) The matter may otherwise require SRA notification.

Part I: Law Society Requirements

  32. Law Society AI Guidance

32.1 The Law Society has issued guidance on AI use including:

(a) AI Ethics Guidance;

(b) Generative AI Practice Note;

(c) Technology and Legal Services Guidance;

(d) Data Protection Guidance relevant to AI.

32.2 Key Law Society recommendations:

(a) Understand AI tools before use;

(b) Maintain human oversight;

(c) Verify AI outputs;

(d) Consider confidentiality implications;

(e) Keep records of AI use;

(f) Update policies for AI use.

  33. Best Practice Standards

33.1 Law Society best practice for AI includes:

(a) Have an AI use policy;

(b) Train staff on AI tools and risks;

(c) Conduct risk assessments for AI use;

(d) Implement quality controls;

(e) Review and update practices regularly;

(f) Engage with sector guidance and developments.

Part J: FCA and Consumer Protection

  34. FCA Consumer Duty

34.1 Where the FCA Consumer Duty applies, AI use must support good outcomes:

(a) Products and services - AI should enhance, not diminish, service quality;

(b) Price and value - AI efficiency gains should benefit consumers;

(c) Consumer understanding - Be transparent about AI use;

(d) Consumer support - AI should not impede access to human support.

  35. Consumer Rights

35.1 The Consumer Rights Act 2015 applies to AI-assisted services:

(a) Services must be performed with reasonable care and skill;

(b) AI errors do not excuse failures in care and skill;

(c) Services must be as described - describe AI use accurately;

(d) Consumer remedies apply for substandard AI-assisted services.

35.2 Digital Markets, Competition and Consumers Act 2024:

(a) Transparency requirements apply to AI-enhanced services;

(b) Unfair practices prohibitions apply;

(c) Consumers must not be misled about AI use.

Part K: Record Keeping

  36. AI Use Records

36.1 Maintain records of AI use including:

(a) What AI tools were used;

(b) For what purpose;

(c) On which matters;

(d) What prompts or inputs were used;

(e) What outputs were generated;

(f) What human review was conducted;

(g) What verification steps were taken.
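For firms keeping these records electronically, the fields in 36.1 can be captured in a simple structured record. The sketch below (Python, illustrative only; the field names, matter reference, and tool name are invented examples, not Platform requirements) shows one possible shape:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIUseRecord:
    """One record per AI-assisted task, mirroring the items in 36.1."""
    matter_ref: str                 # on which matter (36.1(c))
    tool: str                       # what AI tool was used (36.1(a))
    purpose: str                    # for what purpose (36.1(b))
    prompt_summary: str             # prompts or inputs used (36.1(d))
    output_summary: str             # outputs generated (36.1(e))
    reviewed_by: str                # human review conducted (36.1(f))
    verification_steps: list[str] = field(default_factory=list)  # 36.1(g)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialise for retention in a retrievable format (see 38.1(c))."""
        return json.dumps(asdict(self), indent=2)

record = AIUseRecord(
    matter_ref="M-2026-0042",
    tool="Legal research assistant (enterprise tier)",
    purpose="Initial case-law search",
    prompt_summary="Limitation periods for professional negligence claims",
    output_summary="List of candidate authorities",
    reviewed_by="J. Smith (Solicitor)",
    verification_steps=[
        "All citations checked on BAILII",
        "Statutory references checked on legislation.gov.uk",
    ],
)
print(record.to_json())
```

A record like this, retained alongside the matter file, also evidences the human review and verification steps required by Parts E and K.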

  37. Disclosure Records

37.1 Maintain records of AI disclosures:

(a) What was disclosed to the client;

(b) When disclosure was made;

(c) How disclosure was made;

(d) The client's response/consent;

(e) Any client refusal of AI use.

  38. Retention Requirements

38.1 Retain AI-related records:

(a) For the same period as other matter records (minimum 6 years);

(b) Longer if required for specific matter types;

(c) In a retrievable format;

(d) With appropriate security.

Part L: General Provisions

  39. Warranties

39.1 By accepting these Terms, you warrant that you will:

(a) Disclose AI use to clients before providing AI-assisted advice;

(b) Obtain informed client consent for AI use;

(c) Apply human review to all AI outputs;

(d) Verify accuracy of AI-generated content;

(e) Protect client confidentiality when using AI;

(f) Maintain records of AI use and disclosures;

(g) Comply with all regulatory requirements;

(h) Use AI only within your competence to supervise.

  40. Suspension and Termination

40.1 The Platform may suspend or terminate your registration if:

(a) You fail to disclose AI use to clients;

(b) You provide AI-generated advice without human review;

(c) There is evidence of AI-related client harm;

(d) You breach confidentiality through AI use;

(e) You fail to maintain required records;

(f) You are subject to regulatory action for AI-related issues.

  41. Governing Law

41.1 These Terms are governed by English law.

41.2 The courts of England and Wales have exclusive jurisdiction.

SCHEDULE 1: AI USE DECLARATION FORM

I hereby declare and confirm that:

Declaration | Confirmed
I understand my obligation to disclose AI use to clients | ☐
I will disclose AI use BEFORE providing AI-assisted advice | ☐
I will obtain informed client consent for AI use | ☐
I will apply human review to all AI outputs | ☐
I will verify accuracy of AI-generated content | ☐
I understand AI can hallucinate and produce errors | ☐
I will verify all AI-generated citations independently | ☐
I will protect client confidentiality when using AI | ☐
I will maintain records of AI use and disclosures | ☐
I understand I remain professionally responsible for all work | ☐
I have reviewed my PII coverage for AI use | ☐
I will respect client wishes if they refuse AI use | ☐
I will comply with all SRA requirements regarding AI | ☐

Signed: _______________________________________________

Name: _______________________________________________

SRA ID: _______________________________________________

Date: _______________________________________________

Schedule 2: Client AI Disclosure Template

For Client Care Letter:

ℹ ARTIFICIAL INTELLIGENCE DISCLOSURE: We may use artificial intelligence (AI) tools to assist in providing legal services to you. This may include using AI for legal research, document drafting, document review, or other tasks. When we use AI: (1) All AI outputs are reviewed by a qualified solicitor before being provided to you; (2) We apply professional judgment to all AI-assisted work; (3) We take steps to verify the accuracy of AI outputs; (4) We protect your confidential information when using AI systems. You have the right to request that we do not use AI in your matter. If you have any questions about our use of AI, please ask. We will inform you if AI is used in any material way in your matter.

For Specific AI Disclosure:

ℹ AI USE NOTIFICATION: We wish to inform you that artificial intelligence tools [were used / will be used] in connection with your matter for the following purpose(s): [specify purpose - e.g., legal research, document drafting, document review]. [A qualified solicitor has reviewed and verified / will review and verify] all AI outputs before they are provided to you. Our professional responsibility for the advice and work product remains the same regardless of AI assistance. If you have any questions or concerns about our use of AI, or if you would prefer that we do not use AI for your matter, please let us know.

For Documents with AI Assistance:

ℹ This document was prepared with the assistance of artificial intelligence tools. It has been reviewed and verified by [name], Solicitor. [Firm name] remains professionally responsible for this document.

SCHEDULE 3: AI USE CATEGORIES AND DISCLOSURE REQUIREMENTS

AI Use Category | Description | Disclosure Requirement | Verification Required
Legal Research | Using AI to search for and summarise legal authorities | Standard - in client care letter + if material citations used | Verify all citations independently
Document Drafting | AI generates initial drafts of documents | Enhanced - specific notification before providing document | Full review and adaptation
Document Review | AI reviews documents for specific issues | Standard - in client care letter | Sample verification
Summarisation | AI summarises documents or information | Standard - in client care letter | Accuracy check
Client Communication | AI assists drafting emails/letters | Enhanced - for substantive advice | Tone and accuracy review
Case Prediction | AI predicts case outcomes or risks | Enhanced - clear limitations stated | Professional judgment overlay
Translation | AI translates documents | Standard - in client care letter | Quality check (ideally human)
Transcription | AI transcribes meetings/dictation | Minimal - routine administrative use | Accuracy check

SCHEDULE 4: CLIENT CONSENT FORM - AI USE

Client Name: _______________________________________________

Matter Reference: _______________________________________________

Date: _______________________________________________

I/We have been informed about [Firm Name]'s use of artificial intelligence (AI) tools in the delivery of legal services. I/We understand that:

Statement | Understood
AI may be used for legal research, drafting, review, or analysis | ☐
All AI outputs are reviewed by qualified solicitors | ☐
The firm remains professionally responsible for all work | ☐
AI tools may process my/our information | ☐
Appropriate confidentiality safeguards are in place | ☐
I/we can refuse AI use and request human-only service | ☐

My/Our consent:

☐ I/We CONSENT to the use of AI tools as described in my/our matter

☐ I/We DO NOT CONSENT to the use of AI tools in my/our matter

☐ I/We consent to AI use EXCEPT for: _______________________________________________

Signed: _______________________________________________

Date: _______________________________________________

SCHEDULE 5: AI QUALITY ASSURANCE CHECKLIST

For each quality check, record completion, by whom, and the date:

☐ AI output reviewed by qualified person

☐ Factual statements verified

☐ Legal citations checked (Westlaw/Lexis/BAILII)

☐ Statutory references verified (legislation.gov.uk)

☐ Content adapted for client's specific circumstances

☐ Tone and language appropriate

☐ Consistent with client instructions

☐ No confidential information of others disclosed

☐ Compliant with applicable rules/requirements

☐ Client disclosure made

SCHEDULE 6: AI RISK ASSESSMENT TEMPLATE

Rate each risk factor Low, Medium or High and record the mitigation:

  • AI accuracy for this task type

  • Sensitivity of client information

  • Consequences of error

  • Complexity of legal issues

  • Client vulnerability

  • Jurisdictional issues

  • Time pressure affecting review quality

  • Confidentiality of AI system

  • Overall risk assessment

Decision: ☐ Proceed with AI use   ☐ Proceed with enhanced controls   ☐ Do not use AI

Approved by: _______________________________________________

Date: _______________________________________________

SCHEDULE 7: AI USE POLICY TEMPLATE

[Firm Name] Artificial Intelligence Use Policy

  1. Purpose

This policy governs the use of artificial intelligence (AI) tools in the delivery of legal services by [Firm Name].

  2. Approved AI Tools

[List approved AI tools and their permitted uses]

  3. Prohibited Uses

  • Using AI outputs without human review

  • Inputting highly sensitive client data into non-secure AI systems

  • Presenting AI work as entirely human-produced without disclosure

  • Using AI for tasks beyond the reviewer's competence to verify

  • Relying on AI for final professional judgment

  4. Mandatory Requirements

  • Disclose AI use to clients before providing AI-assisted advice

  • Obtain informed client consent

  • Review all AI outputs before use

  • Verify all citations and factual statements

  • Maintain records of AI use

  • Report AI-related issues to [designated person]

  5. Review

This policy will be reviewed [annually / when significant changes occur].

SCHEDULE 8: AI RED FLAGS AND PROHIBITED USES

RED FLAGS - Proceed with Extreme Caution

  • Case citations you don't recognise - ALWAYS verify independently

  • Legal principles that seem novel or unfamiliar

  • Statistical claims without clear sources

  • Confident statements about uncertain areas of law

  • Advice that differs significantly from your own knowledge

  • Outputs that seem too comprehensive or perfect

Prohibited Uses

  • Filing AI-generated documents without verifying all citations

  • Sending AI-drafted advice to clients without substantive review

  • Using AI for automated decision-making without human involvement

  • Inputting client identifying information into unsecured AI

  • Using AI to circumvent supervision requirements

  • Concealing AI use from clients

  • Using AI for tasks you cannot competently supervise

Best Practices

  • Use AI to enhance efficiency while maintaining quality

  • Verify all AI outputs through authoritative sources

  • Be transparent with clients about AI use

  • Maintain human oversight and professional judgment

  • Keep records of AI use for quality and accountability

  • Stay informed about AI developments and guidance

  • Report concerns about AI use to compliance/management

Document Information

─────────────────────────────────────

Regulatory Framework

SRA Standards and Regulations 2019 (as amended 2025)

SRA Principles

SRA Code of Conduct for Solicitors, RELs and RFLs

SRA Code of Conduct for Firms

Law Society AI Ethics Guidance

Law Society Generative AI Practice Note

UK GDPR and Data Protection Act 2018

FCA Consumer Duty 2023

Consumer Rights Act 2015

Digital Markets, Competition and Consumers Act 2024

EU AI Act (where applicable)

NCSC AI Security Guidance

─────────────────────────────────────

Key Resources

SRA: www.sra.org.uk

Law Society: www.lawsociety.org.uk

ICO (AI and Data Protection): ico.org.uk

NCSC AI Guidance: www.ncsc.gov.uk

─────────────────────────────────────

Related Documents

Platform Terms - SRA Compliance Validation V1.0

Platform Terms - KYC/AML Compliance V1.0

Solicitor Terms and Conditions V1.0

Privacy Policy V1.0

─────────────────────────────────────

Document Version: 1.0

Effective Date: January 2026

Last Updated: January 2026

Next Review: July 2026

─────────────────────────────────────

  • AI TRANSPARENCY IS ESSENTIAL. Clients have the right to know when AI has been used in their matter. You must disclose AI use, obtain informed consent, apply human review, and maintain professional responsibility for all work.