19 August 2025
Published by Bogdan Rau

Maximizing the Value of AI: Community Participatory Design is Non-Negotiable

Our perspective and response to the RFI on the NIH Artificial Intelligence Strategy – participatory design, laser-focused governance, trust and transparency.

Artificial intelligence (AI) will enable safety net organizations, such as community health centers (CHCs) and critical access hospitals (CAHs), to unlock unprecedented value from population- and patient-level data. Indeed, we are deeply engaged in enabling a future where every corner of healthcare can turn data into better health and human services for all of us. However, to maximize that potential and ensure equitable impact, we believe AI capabilities can't be developed in isolation; it will be up to healthcare leaders and organizations such as CHCs/CAHs to roll up their sleeves, lean in, and shape how AI is developed, deployed, and leveraged in their communities. In our view, this means fostering a culture of AI learning, encouraging and engaging their organizations in participatory design, focusing on targeted data and AI governance, and establishing enterprise expectations of trust and transparency from AI vendors and service providers. We were thrilled to make this perspective visible in our response to the NIH's recent RFI inviting comments on its AI Strategy, and we are sharing it with the broader community for consideration, and to learn from your perspective!

 

1. Engage In and Encourage Participatory Design

Community-based participatory approaches to the design of interventions and health services have proven effective, time and again, across a range of settings. In the case of healthcare AI, however, only 0.2% of the literature describes any meaningful community involvement in the development, validation, or implementation of AI applications. Excluding vulnerable stakeholders from the early stages of product design and development has been shown to undermine the fairness of outcomes, leading to uncontrolled and unintended consequences, including denial of services.

Following the adage “nothing about me without me,” safety net organizations need to advocate for more than just a seat at the table. To empower their organizations to actively engage in the development of AI capabilities, healthcare leaders from across our health system must reflect on the minimum requirements for meaningful inclusion, and crystallize those expectations across their organization. While many “participatory” frameworks exist, Delgado et al. succinctly describe four dimensions along which to set these expectations:

2. Laser Focus Your Data & AI Governance

Historical technology under-investment across the safety net will leave CHCs/CAHs disproportionately exposed to AI-related risks, including bias, discrimination, and potential adverse effects due to ineffective or inappropriate use of AI. Most at risk will be organizations that a) lack the workforce necessary to evaluate AI capabilities, and b) do not have focused data and AI governance infrastructure to establish the rights and accountabilities for effective ownership of AI. While systemic infrastructure and workforce gaps aren’t likely to be resolved in the near term, there are laser-focused tactics that healthcare leaders can employ right now to maximize the impact that AI can have within their organizations. The goal of these tactics is to formalize an organization’s AI posture and to help organizations approach internal and vendor conversations with an informed and clear perspective on enterprise expectations, evaluation criteria, and ultimate ownership and accountability.
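As one illustration of what formalizing an AI posture might look like day to day, the sketch below encodes a vendor-evaluation checklist as structured data. It is a minimal sketch only: the criteria, role names, and evidence requirements are assumptions made for illustration, not a prescribed standard or part of our RFI response.

```python
# A minimal, hypothetical sketch of a vendor-evaluation checklist maintained as part
# of data & AI governance. All criteria, roles, and evidence requirements below are
# illustrative assumptions, not a prescribed standard.
from dataclasses import dataclass, field


@dataclass
class Criterion:
    name: str                # the enterprise expectation being evaluated
    owner_role: str          # who inside the organization is accountable for the review
    evidence_required: str   # what the vendor must supply to satisfy the criterion
    satisfied: bool = False  # updated as vendor conversations progress


@dataclass
class AIVendorReview:
    vendor: str
    use_case: str
    criteria: list[Criterion] = field(default_factory=list)

    def open_items(self) -> list[str]:
        """Return unmet criteria, to anchor the next internal or vendor conversation."""
        return [c.name for c in self.criteria if not c.satisfied]


review = AIVendorReview(
    vendor="ExampleVendor",  # hypothetical vendor name
    use_case="Risk stratification for care management",
    criteria=[
        Criterion("Subgroup performance reported for the population we serve",
                  owner_role="Clinical informatics lead",
                  evidence_required="Validation results stratified by our patient cohorts"),
        Criterion("Documented failure modes and known limitations",
                  owner_role="Quality officer",
                  evidence_required="Model card or equivalent risk disclosure"),
        Criterion("Clear data ownership, retention, and reuse terms",
                  owner_role="Privacy / compliance officer",
                  evidence_required="Contract language covering data rights"),
    ],
)

print(review.open_items())  # every item stays open until evidence has been reviewed
```

Keeping the checklist in a structured, versionable form makes it easier to reuse across vendors and to report open items back to a governance committee.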

3. Establish Enterprise Expectations of Trust and Transparency

No single definition of “trustworthy AI” exists to date. Many frameworks converge on core technical principles of explainability (understanding AI decision-making), reproducibility (consistent replication of results), and fairness (equitable treatment and bias mitigation). However, technical excellence alone will not be sufficient in health and human service contexts, and trust in AI extends beyond algorithmic performance. Your organization’s expectations of trust and transparency should encompass both technical and human-centered elements that build confidence among providers and seed trust among the patients and communities they serve. At the core of trust and transparency will be the engagement organizations have in developing AI capabilities (see #1 above).

At the core of your organization’s expectations of trust and transparency may reside:

Principle: Risk Disclosure and Communication
Expectation: Proactively identify and communicate failure modes and associated risks to providers, patients, or communities at large.
Example: A community health center pilots an AI model to flag patients at high risk for uncontrolled diabetes. The model underperforms for patients with irregular visit patterns (such as migrant farm workers). This limitation is broadly communicated to end users, and the model’s output is used as a supplement to clinical judgment (see the subgroup check sketched after this list).

Principle: Procedural Transparency
Expectation: Ensure visibility into processes, methods, and AI decision-making workflows, going beyond the explainability of individual AI decisions.
Example: When using an AI-driven risk tool for post-discharge follow-up, CHC staff share relevant information directly with the patient, in plain language. By opening the “black box” to patients, the clinic reinforces that AI is being used to enhance (not replace) existing, trusted relationships between providers and the community.

Principle: Stakeholder Engagement
Expectation: Evaluate and ensure meaningful participation and the inclusion of lived experiences in the design and development of AI capabilities.
Example: Before deploying a chatbot for answering common patient questions in multiple languages, a CHC involves its patient advisory council and promotoras in the design of guardrails and response patterns.

Principle: Local Adaptation / Responsiveness
Expectation: Recognize, enable, and communicate the customizations required for AI capabilities to account for differences in context, culture, and population characteristics.
Example: An AI tool for depression screening outreach emphasizes confidentiality and connection to in-person counseling for clinics serving primarily older adults, while including text-based therapy and peer support groups for clinics serving younger populations.
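To make the risk-disclosure example above concrete, here is a minimal sketch of the kind of subgroup performance check that can surface such a limitation before a model reaches clinical workflows. The column names, cohort definition, and recall threshold are all assumptions made for illustration; a real evaluation would use your organization’s own data and governance-approved thresholds.

```python
# A minimal, hypothetical sketch of a subgroup performance check, supporting the
# "Risk Disclosure and Communication" principle above. Column names, the visit-pattern
# cohort labels, and the recall threshold are illustrative assumptions.
import pandas as pd
from sklearn.metrics import recall_score


def subgroup_recall(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Recall (sensitivity) of the model's high-risk flag within each subgroup."""
    results = {}
    for cohort, group in df.groupby(group_col):
        results[cohort] = recall_score(group["uncontrolled_diabetes"], group["model_flag"])
    return pd.Series(results)


# Hypothetical evaluation frame: one row per patient with the model's flag,
# the observed outcome, and a cohort label describing visit regularity.
eval_df = pd.DataFrame({
    "model_flag":            [1, 0, 1, 0, 1, 0, 0, 1],
    "uncontrolled_diabetes": [1, 0, 1, 1, 1, 0, 1, 1],
    "visit_pattern":         ["regular", "regular", "regular", "irregular",
                              "irregular", "regular", "irregular", "regular"],
})

recalls = subgroup_recall(eval_df, "visit_pattern")
print(recalls)

# If a cohort (e.g., patients with irregular visit patterns) falls below an agreed-upon
# floor, that limitation is documented and communicated to end users rather than
# silently shipped.
MIN_RECALL = 0.8  # illustrative threshold, set by the organization's governance body
for cohort, value in recalls.items():
    if value < MIN_RECALL:
        print(f"Disclose limitation: recall for '{cohort}' cohort is {value:.2f}")
```

The output of a check like this is exactly what a risk-disclosure conversation needs: a plainly stated cohort, a number, and a decision about how the limitation will be communicated.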

Meaningful participation, laser-focused AI governance, and enterprise expectations of trust and transparency all require healthcare organizations such as CHCs and CAHs to be active participants in the AI development lifecycle. This means taking an active and informed stance in vendor negotiations (well beyond just price), and a proactive approach to engaging with industry, academia, and state and federal government agencies. To benefit most from AI, CHCs and CAHs must be in the room when the most critical decisions are being made.

