
Before You Press Record: The Ethics, Consent, and Liability Questions Every Clinician Should Ask About AI in Practice

The Question Most Clinicians Are Not Asking Loudly Enough

Renee is a licensed counselor in solo private practice. After her last session of the day, she opens an email from a vendor pitching an AI note-taker that promises to listen to her sessions and draft progress notes in seconds. The demo is impressive. A colleague she trusts already uses one. The price is reasonable. She is exhausted, three weeks behind on documentation, and tempted to sign up before bed. Then a single quiet question stops her: when was the last time she actually told a client that an algorithm might be listening, storing, or processing what they said in session, and did she give them a real way to say no? She closes the laptop and decides to slow down.

Renee’s pause is the most ethically important thing happening in private practice right now. Artificial intelligence tools that transcribe sessions, draft notes, and summarize treatment plans are being marketed aggressively to clinicians in 2026. The promises are real. So are the unresolved ethical, legal, and consent questions sitting underneath them. Before adopting any AI tool that touches client data, every clinician owes themselves and their clients a careful look at what professional codes, federal law, and emerging case law actually require.

What Recent Professional Guidance Is Actually Saying

In June 2025, the American Psychological Association released its Ethical Guidance for AI in the Professional Practice of Health Service Psychology, and the document is unusually clear-eyed for an ethics framework. The guidance affirms that psychologists have an ongoing ethical obligation to obtain informed consent that clearly communicates the purpose, application, and potential benefits and risks of AI tools, in plain and culturally appropriate language, with a meaningful opt-out option for clients who decline. The National Board for Certified Counselors has issued similar guidance, emphasizing that clients must know they have the right to say no.

The guidance also draws a clear line on human oversight: AI should augment, not replace, clinical decision-making, and the licensed clinician remains responsible for every clinical judgment in the chart. Data privacy is treated as non-negotiable. AI tools must be usable in a manner consistent with HIPAA and other relevant privacy regulations, which raises the question that often gets skipped over in vendor sales pages: does the AI vendor sign a Business Associate Agreement, and have you read it? Many consumer-facing AI tools, including widely used general-purpose chatbots, do not qualify as HIPAA-compliant unless they specifically partner with a covered entity, sign a BAA, and meet the safeguards the regulation requires.

Why the Quiet Risks Deserve Loud Attention

The everyday workflow risks are easy to underestimate until something goes wrong. AI scribes that transcribe a session may store, process, or use that audio in ways the vendor’s privacy policy describes only in dense paragraphs at the bottom of the page. Some platforms train their models on user data unless you affirmatively opt out, and “opt out” is sometimes not the default. If a client discloses something in session that ends up routed through an AI processing pipeline that does not meet HIPAA standards, the clinician who pressed record is the one holding the regulatory and malpractice exposure, not the vendor.

Informed consent is more complicated than a single line in your intake paperwork. A client who agrees to “use of technology” in their first session may not have meaningfully agreed to an AI listening to every word and a third-party server storing the transcript. Truly informed consent requires plain-language disclosure of what AI is doing, where the data goes, who can access it, how long it is retained, and what the alternatives are if they decline. Minors, court-involved clients, clients in coercive situations, and clients with prior trauma related to surveillance all add layers that a checkbox does not address. The opportunity here is not to retreat from technology. It is to lead the field with the kind of careful, transparent, client-centered practice that has always been the gold standard.

Why This Matters to Your Practice’s Long-Term Health

A single complaint to a state licensing board, a single HIPAA breach notification, or a single malpractice claim involving improperly processed client data can cost a solo practitioner more in time, legal fees, and reputational damage than years of saved documentation hours. Even when no formal complaint is filed, the slow erosion of client trust that follows when someone Googles their AI vendor and finds out their session was used to train a model is a quieter but real cost. Spending an afternoon getting your AI policy, consent documents, and vendor agreements right is not bureaucratic overhead. It is foundational risk management for the practice you have spent years building.

Your Action Plan: 6 Questions to Ask Before Letting AI Near a Client Session

  1. Have I read the APA, NASW, ACA, or my state board’s most recent AI guidance, and do I know what my licensure body expects? Professional codes are evolving quickly. The June 2025 APA guidance and equivalent documents from other associations are not optional reading if AI is part of your workflow. Block an hour, find your association’s current statement, and read it twice.
  2. Will the vendor sign a Business Associate Agreement, and what does it actually say? No BAA, no go. Read the BAA itself, not the marketing summary. Pay close attention to data retention timelines, subcontractor disclosures, breach notification procedures, and whether the vendor uses your data to train their models.
  3. What does my informed consent document specifically say about AI? A general “use of technology” clause is almost certainly not enough. Update your intake paperwork to name AI explicitly, describe what it does, where the data goes, how long it is retained, and exactly how a client can opt out without it affecting their care.
  4. How will I have the AI conversation with existing clients in plain language? Draft a short, calm script you can use to bring this up in session. Clients deserve a real conversation, not a paragraph buried in a portal. Include a clear, explicit invitation to decline without consequence.
  5. What is my plan when something goes wrong? Decide in advance how you will handle a vendor data breach notification, a client request to delete all AI-processed data, a subpoena that pulls in AI transcripts, or a discrepancy between an AI-generated note and your clinical recollection. Having a plan turns a crisis into a procedure.
  6. Am I keeping the human clinician in the driver’s seat? Whatever workflow you adopt, build in a deliberate review step before any AI-generated content enters the legal record. The licensed clinician’s name on the note means the licensed clinician’s clinical judgment is in the note. AI does not carry that responsibility. You do.

The Bottom Line

The technology will keep moving. The ethical bar does not. Clients trust clinicians with information they share with no one else, and that trust is the foundation everything else in private practice rests on. Adopting AI thoughtfully, with eyes open and consent processes that actually deserve the name, is entirely possible. So is deciding, after a careful review, that a particular tool is not yet ready for your practice or your clients. Both choices are defensible. What is not defensible is letting convenience or vendor pressure outpace the careful judgment your licensure was built to protect. Slow questions are the most professional thing you can offer this conversation right now.

Ready to grow your practice and connect with like-minded clinicians? Sign up for free and connect with other clinicians in your city. (https://sananetwork.com/join/)

Photo by Steve A Johnson on Unsplash

Written by AI & Reviewed by Clinical Psychologist: Yoendry Torres, Psy.D.

Disclaimer: Some blog posts may contain affiliate links, earning Sana Network a commission at no additional cost to you. These recommendations reflect our honest opinions about products or services we find helpful and trustworthy. This content is informational and not legal or medical advice; consult an attorney or healthcare provider for personalized guidance.