Wave’s Public Comment on the FDA’s Request for Feedback on Generative Artificial Intelligence-Enabled Digital Mental Health Medical Devices.


12/08/2025

Thank you for the opportunity to comment on the regulation of generative AI-enabled digital mental health technologies. Wave is a digital mental health platform rooted in a transdiagnostic, evidence-based model developed by Dr. Sarah Adler, Stanford Clinical Professor of Psychiatry. Our system blends AI-supported skill building with human coaching, therapist-led escalation pathways, and clinical oversight to deliver personalized, accountable care. We offer the following comments to support a regulatory framework that protects patients while enabling responsible and equitable innovation.

1. HUMAN OVERSIGHT, NOT DIAGNOSIS OR SEVERITY, SHOULD GUIDE REGULATORY BURDEN

Regulatory intensity is often tied to diagnosis or severity, but these variables have limited predictive value for actual risk. Diagnostic labels are inconsistently assessed, do not reliably capture acuity, and often fail to reflect the social determinants that drive vulnerability. Individuals without diagnoses may be at high risk, and those with diagnoses may not. A more reliable determinant of safety is whether the AI system operates within a meaningful, accountable human clinical relationship. We encourage the FDA to distinguish between:

A. AI used within a clinical relationship.

These tools support licensed clinicians or trained coaches who hold responsibility for patient outcomes. This model mirrors well-established precedents in radiology, pathology, and other specialties where AI augments, but does not replace, professional judgment. Wave follows this approach: every user is paired with a dedicated NBHWC-certified coach, supervised by licensed clinicians who triage and escalate care as needed, demonstrating that continuous human oversight and clinical governance are feasible.

B. AI deployed directly to consumers without clinical involvement.

These systems lack the safeguards of accountable human monitoring. Without a dedicated provider, there is no one to contextualize risk, detect errors, or escalate when distress emerges. Given the heterogeneity and vulnerability of mental health users, this gap presents significant risk and warrants stronger regulatory requirements.

2. PREVENT “WELLNESS” LABELS FROM MASKING INTERVENTION INTENT

We echo concerns raised by the New York Department of Health that some companies characterize intervention-like tools as “wellness” products. We recommend the FDA establish clear, enforceable criteria distinguishing wellness support from mental health intervention based on intended effect and likely use, not diagnosis or claimed severity. Wave does not use wellness positioning; our interventions are evidence-based, delivered by credentialed professionals, and tested through independent IRBs and peer review.

3. RECOGNIZE THAT GENERATIVE AI BREAKS TRADITIONAL DIGITAL HEALTH ASSUMPTIONS

Conventional regulation presumes determinism: identical inputs yield identical outputs. Generative AI violates this assumption. Large language models are probabilistic; identical prompts can produce different responses, and validated behavior may shift over time without code changes. We recommend the FDA develop guidance tailored to nondeterministic systems, including:

  • expectations for detecting drift, bias, and degradation;

  • protocols for intervention when a system behaves unpredictably;

  • clarity on mapping pre-deployment validation to post-deployment monitoring; and

  • requirements for human-in-the-loop escalation or override in safety-critical contexts.

Wave's own practice demonstrates that this approach is feasible: all AI suggestions remain advisory and are reviewed by coaches operating under clinical supervision.

4. DEFINE “HUMAN IN THE LOOP” WITH SPECIFICITY

To prevent misuse of the term, we urge the FDA to define meaningful human oversight as a system in which:

  • a credentialed provider has an ongoing relationship with the specific patient;

  • that provider is responsible for care decisions and escalation;

  • their license and liability meaningfully apply; and

  • they can modify or override AI-generated guidance.

This definition excludes superficial practices such as generic moderators, post-hoc transcript reviews, or team-level audits, which resemble content moderation—not clinical care. Wave’s model meets the robust definition we propose.

5. REQUIRE MEASUREMENT-BASED OUTCOME DATA

Measurement-based care (MBC) is already a behavioral health standard and should be expected of any AI mental health tool. This includes routine use of validated outcome measures, monitoring for demographic disparities, protocols for responding to declining outcomes, and the ability to scale back AI involvement if performance does not meet expectations. Wave's model is fully MBC-driven, with validated measures collected at intake and at regular intervals, demonstrating that this standard is feasible in practice.

6. CLARIFY HOW EXISTING DIGITAL THERAPEUTICS GUIDANCE APPLIES TO GENAI

Existing digital therapeutics guidance is strong but assumes deterministic algorithms. We encourage the FDA to clarify how validation, post-market monitoring, and Predetermined Change Control Plans apply to nondeterministic generative models, including prompt tuning and model updates. Wave's systems already follow IRB oversight, bias monitoring, and continuous evaluation aligned with these emerging expectations, demonstrating their feasibility.

WAVE’S COMMITMENT

Wave is led by Dr. Sarah Adler, a licensed clinical psychologist whose ethical and legal responsibilities apply directly to product design and deployment. AI holds clear promise for increasing access in behavioral health, and we believe that to truly innovate, these systems should enhance, not replace, clinical judgment until they are proven safe. We welcome a regulatory framework that distinguishes AI used within accountable human care from AI deployed directly to consumers without safeguards. Clarity will protect patients, support equity, and enable responsible innovation that expands access to high-quality mental health care.

Respectfully submitted,

Wave, Inc.

Dr. Sarah Adler, Founder & Chief Executive Officer

Ryan Weald, Chief Technology Officer
