MIAMI — Over the past year, psychologists and mental health researchers have documented a rapid increase in the use of artificial intelligence tools for emotional support and mental health guidance. AI-driven chatbots and large language models are now widely accessible, and many people use them to discuss personal struggles, seek advice, or reflect on difficult experiences. While clinicians acknowledge that these technologies may offer useful support in some contexts, experts increasingly warn that vulnerable individuals relying on AI instead of trained therapists may be entering risky territory. Recent developments in the United Kingdom suggest that AI-driven conversations may also intersect with a resurgence of ritual abuse allegations, claims that often invoke “satanism” or “witchcraft” without distinguishing those terms from contemporary Pagan religions such as Wicca, from other modern Pagan faiths, or from non-religious branches of Witchcraft.
Research shows that artificial intelligence has moved quickly from a technological novelty to a practical tool within psychotherapy. Many clinicians now use AI systems to help manage administrative workloads, summarize clinical notes, or assist patients with reflection between therapy sessions. Researchers increasingly describe this approach as AI-enabled continuous care, where digital tools supplement, but do not replace, human-led therapy.

[Image credit: David S. Soriano – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=125089281]
Several studies suggest that such systems can improve outcomes when used appropriately. A 2026 study found that patients who used AI-assisted reflection tools between therapy sessions experienced faster improvement in symptoms of depression and anxiety compared with those receiving traditional therapy alone. Researchers are also experimenting with “digital phenotyping,” a method that analyzes data from wearable devices, such as sleep patterns, heart rate, and physical movement, to help tailor treatment strategies in real time.
A 2025 study published in NEJM AI reported measurable symptom reduction in patients with major depressive disorder and generalized anxiety disorder when treatment plans were adjusted using this type of data. Similarly, a 2026 report examining an AI system known as Therabot suggested that users experienced an average 51 percent reduction in depressive symptoms over eight weeks, indicating potential benefits for individuals experiencing mild to moderate mental health challenges.
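To make the idea of digital phenotyping concrete, here is a minimal sketch assuming a hypothetical week of wearable readings and illustrative thresholds. The feature names, sample values, and the flag-for-review rule are this article’s own assumptions for illustration, not the methods used in the studies described above.

```python
# Illustrative sketch of digital phenotyping (hypothetical data and thresholds).
# It reduces a week of wearable readings to a few behavioral features that a
# clinician-facing tool might track between therapy sessions.
from statistics import mean, stdev

# Hypothetical daily readings: hours slept, resting heart rate (bpm), step count.
week = [
    {"sleep_hours": 7.5, "resting_hr": 62, "steps": 8200},
    {"sleep_hours": 6.8, "resting_hr": 64, "steps": 7400},
    {"sleep_hours": 4.2, "resting_hr": 71, "steps": 2100},
    {"sleep_hours": 5.0, "resting_hr": 70, "steps": 1800},
    {"sleep_hours": 7.9, "resting_hr": 63, "steps": 9000},
    {"sleep_hours": 6.5, "resting_hr": 65, "steps": 6700},
    {"sleep_hours": 5.4, "resting_hr": 69, "steps": 3200},
]

features = {
    "avg_sleep": mean(d["sleep_hours"] for d in week),
    "sleep_variability": stdev(d["sleep_hours"] for d in week),
    "avg_resting_hr": mean(d["resting_hr"] for d in week),
    "avg_steps": mean(d["steps"] for d in week),
}

# Assumed rule of thumb: flag the week for clinician review if sleep is short
# and erratic and activity is low. Real systems would use validated models,
# not fixed cutoffs like these.
flag = (
    features["avg_sleep"] < 6.5
    and features["sleep_variability"] > 1.0
    and features["avg_steps"] < 6000
)

print(features)
print("Flag for clinician review:", flag)
```

In practice, systems of the kind studied pair signals like these with clinician judgment rather than fixed cutoffs; the sketch shows only that passively collected data can be summarized into a handful of reviewable features.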
Despite these promising findings, the rapid expansion of AI-assisted therapy has raised significant ethical concerns. Surveys show that the technology is spreading quickly within the profession. In 2024, roughly 71 percent of psychologists reported never using AI in their work; by 2025, the share reporting no AI use had fallen to 44 percent, and nearly one-third of practitioners reported using AI tools at least monthly. Most clinicians rely on AI primarily for administrative tasks rather than direct clinical interaction, yet concerns about the technology’s limits remain widespread.
Professional organizations have responded by establishing clear guardrails for AI use. The American Psychological Association and the American Psychiatric Association have both issued guidance emphasizing that artificial intelligence must remain a support tool rather than a substitute for human clinical judgment.
The 2025 health advisory from the American Psychological Association warned that AI chatbots and similar wellness apps “alone cannot solve the mental health crisis,” noting that current systems lack sufficient evidence to ensure safety, particularly for individuals experiencing severe distress. The organization’s ethical framework requires a “human-in-the-loop” approach in which clinicians remain responsible for treatment decisions and must disclose the use of AI tools to patients.

[Image credit: MJTM]
Psychiatrists have echoed these concerns while calling for stronger regulatory oversight. Policy proposals presented at professional meetings have suggested federal rules preventing AI systems from impersonating licensed clinicians and stricter classification of AI mental-health tools as high-risk medical devices subject to oversight by the U.S. Food and Drug Administration. Researchers also warn about a phenomenon sometimes described as “deceptive empathy,” in which AI systems mimic emotional understanding without possessing genuine comprehension or accountability.
Practicing therapists in the UK are also observing the effects of this technology directly in clinical settings. Dr. Lisa Morrison Coulthard of the British Association for Counselling and Psychotherapy reported that two-thirds of practitioners surveyed expressed concern about AI-based therapy. Without appropriate safeguards, she warned, individuals could receive misleading information about their mental health. Therapy, she emphasized, is not simply about providing advice but about creating a safe and responsive environment where people feel heard and understood.
Clinicians report that some patients now rely heavily on chatbot conversations when trying to understand their experiences. In some cases, individuals bring transcripts of AI interactions to therapy sessions or use the technology to self-diagnose conditions such as ADHD or borderline personality disorder. Because many chatbots are designed to affirm users rather than challenge their assumptions, therapists worry that these exchanges can reinforce misunderstandings or intensify existing anxieties. Researchers have raised concerns that AI systems may unintentionally amplify delusional thinking among people vulnerable to psychosis, while the constant availability of chatbots may encourage emotional dependence.
These issues intersect with an investigation reported by The Guardian, which suggests that AI-driven conversations may be contributing to a rise in reports of alleged organized ritual abuse in the United Kingdom. According to the report, some individuals contacting survivor-support organizations say they were encouraged to seek help after discussing their experiences with AI systems such as ChatGPT.
The UK’s National Association of People Abused in Childhood (NAPAC) reports a sustained increase over the past eighteen months in calls from individuals describing ritual abuse. Over nine years, the organization received roughly 36,700 calls, with about 1,310 mentioning organized ritual abuse. Some callers have indicated that they were directed to support services after conversations with AI tools used for emotional exploration.
Police and advocacy groups describe these cases as involving sexual violence, neglect, and psychological control that may include ritualistic elements intended to intimidate or silence victims. These narratives sometimes reference Satanism, occult symbolism, or spiritual abuse. In response, the National Police Chiefs’ Council and the Hydrant Programme have begun developing specialized training for law enforcement to address what they call “witchcraft, spirit possession and ritual abuse,” or WSPRA.
The available evidence, however, remains limited. Only fourteen criminal cases in the United Kingdom since 1982 have formally acknowledged ritualistic elements connected to sexual abuse. Clinical psychologist Dr. Elly Hanson, whose research informed recent briefings for police, notes that public discussion of such cases often becomes polarized between skepticism and conspiracy narratives. While abuse can involve ritualized behavior intended to terrorize victims, scholars emphasize that sensational claims about widespread satanic cult networks have historically been associated with moral panic. Comparable evidence documenting similar trends in the United States, Canada, or other countries has not yet emerged.
For Pagan communities, the language surrounding these allegations carries familiar echoes. During the Satanic Panic of the 1980s and 1990s, accusations involving “witchcraft” and “satanism” fueled widespread misunderstanding of Pagan religions and sometimes led to discrimination against practitioners of Wicca and other contemporary traditions. The renewed appearance of ritual abuse rhetoric, especially when amplified by AI conversations or social media, risks repeating those patterns if distinctions between criminal behavior and legitimate religious practice are ignored.
None of this diminishes the seriousness of child abuse, which remains a devastating crime that demands careful investigation and survivor support. Yet history also shows that narratives linking abuse to “witchcraft” can easily blur into cultural misunderstanding, particularly during periods of heightened public attention to alleged elite abuse networks, such as those surrounding the Epstein files. As artificial intelligence increasingly shapes how people process trauma and seek help, Pagan communities may once again find themselves navigating public conversations in which their religious traditions are invoked without context.
Remaining vigilant, therefore, means balancing two responsibilities: supporting efforts to protect vulnerable people from abuse while continuing to challenge misinformation about Pagan religions. In an era when artificial intelligence can amplify both personal disclosures and public fears, careful reporting, evidence-based investigations, and religious literacy remain essential.