
Today’s article comes from Alex J. Coyne, a journalist, author, and proofreader. He has written for a variety of publications and websites, with a radar calibrated for gothic, gonzo, and the weird.
Modern Pagans and Witchcraft practitioners might consider their pets their familiars and closest magickal companions. Love for animals is abundant in the community, and most practitioners who are able to keep pets form close bonds with them. However, a Witch’s familiar can also refer to a companion more generally rather than a pet.
Exaggerated (and often extracted) historical witchcraft confessions held that a familiar was simply a witch’s attendant. A familiar could appear in animal form, but was really a summoned or inherited spirit used to do the witch’s bidding. Familiars would reportedly function as advisers or servants to the witch.

Wellcome Collection WMS 136, depicting a priest who for 40 years summoned a familiar spirit [Wikimedia Commons, CC 4.0]
Modern stories of people bonding with their AI chatbots are reminiscent of old Witchcraft confessions in which a familiar isn’t a pet or friendly companion, but a mischievous, dark force or evil spirit.
Have we accidentally created a monster?
Are Witches and Pagan practitioners who rely too heavily on AI turning false tales about feared personal assistants into a potential future for us all?
Here’s why a Witch’s relationship with their chatbot could easily turn against them.
Is AI the Feared Witch’s Familiar?
For this article, I’d like to distinguish between real-life animals seen as pets or magickal companions and the use of AI chatbots as a research or conversational tool. Much like the false confessions about demonic or evil familiars, artificial intelligence can make users feel as though all the world’s information is at their fingertips … But it comes with the risk of reckless, dangerous, or inaccurate output.
According to Britannica, the idea of a familiar was usually “a small animal or imp kept as a witch’s attendant.” A familiar would assist the witch in magickal workings, and could provide advice or potentially spy on and hex others on the witch’s behalf.
Familiars, reportedly being from another realm, would have access to information the witch or practitioner couldn’t naturally know themselves. A familiar could also reportedly change its shape, and what it observed while watching others could tell a witch everything they needed to know.
According to National Museums Liverpool, confessions about familiar spirits were “very common among accused witches in England.” Today we know that these confessions were forced and exaggerated … But these tall tales now take on a realistic edge and sound like the average description of an AI chatbot.
Why AI Chatbots Seem Familiar
According to Character.AI, their chatbot network “leads you to the crossroads where storytelling, gaming, social connection, and creative expression converge to captivate you like never before.”
Character.AI chatbots claim to provide “Limitless Entertainment, Neverending Ways to Create, Unbounded Connection, [and] Endless Exploration.”
Just like stories of the Witch’s familiar, artificial intelligence is advertised as a powerful assistant or companion. Similarly, a chatbot can provide users with information outside their own knowledge – or be used by an unethical user to spy on someone.

By David S. Soriano – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=125089281
National Museums Liverpool points to the story of Agnes Browne, who reportedly owned a familiar in the form of a cat named Satan. As well as being a companion, this familiar would also take the form of a dog – and threatened to harm her as much as it provided her with supernatural help.
Modern stories of Character.AI have attributed companionship and assistance to the chatbot. As an eerie contrast, the artificially intelligent chatterbox has also been blamed for a teenager’s suicide, following reckless conversations and a lack of safety measures … And it’s not the only case.
AI Can Be Good or Bad
With regulation and responsible use, artificial intelligence can have a positive impact.
Artificial intelligence can be useful for analysing large amounts of statistical data, or for computing faster than people are able to. Good examples include its use to combat fraud by spotting patterns, or to aid faster and more effective forensic investigations.
Without safety measures or regulation, there’s the flip side of the argument.
For example, there’s the instance of a lawyer fined by a judge for using ChatGPT in court – and AI-written manuscripts are flooding Amazon. ChatGPT has been banned from use in New York City public schools, and in Italy over privacy concerns.
Chatbots can be used to power scams with less effort, or be used to pass chatbot-written assignments off as original. When used irresponsibly or mistaken for a caring companion, artificial intelligence can be a Faustian deal and do serious harm.
Much like magick, it can depend on our intentions.
Bonding with AI: The Potential Danger
The game Dogz was released in 1995 as an intentionally fluffy and cute video game, contrasting with violent games of the time like Mortal Kombat 3 (1995) and Doom II (1994).
Players who wanted a casual and relaxed gaming experience could raise virtual pets on their desktop instead of having to fight, shoot or kill anything. The Petz franchise later expanded to include Catz and Oddballz, which players could interact with through actions like playing with or feeding their virtual pet.
The franchise was reportedly popular with younger children and casual players, and especially with people who wanted companionship but, for whatever reason, couldn’t own a pet.
The phenomenon is similar to internet users today who feel an emotional or personal connection to their AI chatbots. When looking for connection or conversation, users have said that their chatbots provided the link they were looking for – yes, even though the chatbot is replicating interaction rather than creating anything truly emotional.
Smiles and Kisses You (2024) is a documentary that explores the relationship between a North Carolina gas station attendant and his chatbot, called Mimi. If you have questions about why people bond with their AI on a personal level, or what that looks like from an insider’s perspective, it’s a good starting point.
Trailer: Smiles and Kisses You
Proponents of AI-human relationships say that a chatbot presents an understanding ear, without the risk or judgment they might have come to expect from real-life friends and partners.
Critics of AI relationships say that they could lead to “AI psychosis,” or increasing delusions in some users that are validated or encouraged by the chatbot.
Artificial intelligence models are often designed to remain agreeable with the user. And if I were to ask Character.AI whether it’s a good idea to build my own home rocket, there wouldn’t necessarily be a safeguard that tells the chatbot when to pause for mental health aid or law enforcement intervention.
“Are You There, Goddess? It’s Me, Crowley”
Artificial intelligence has crept into rituals and practices the same way it has affected almost every other industry I could name here.
There are increasing questions about the use of AI within witchcraft and ritual – though when it comes to answering these questions, most seasoned practitioners advise against chatbots and AI use.
Personally, I’ve avoided the use of artificial intelligence chatbots for the most part – and I don’t rely on artificial intelligence for writing, research or ritual. To me, “Writing With a Backup Reader” means seeking out a thinking, feeling writing companion – which is what I’ve always preferred (and advised) for writers.
Working with AI in any context is a potentially dangerous path. Artificial intelligence is prone to hallucinations, and could provide misleading or inaccurate information. AI can also indulge its user, drawing them further into a loop of delusion without the user necessarily being aware of it – which is what people refer to as AI psychosis.
That doesn’t mean the Witch’s circle is immune to artificial intelligence. Even though I disagree with it, many people are already out there doing it. Some practitioners are exploring the chatbot world – and there’s no shortage of occult chatbots, or of questions about AI and Witchcraft.
Someone on Reddit has created a Character.AI chatbot meant to assist in Wiccan rituals. Redditors are, unsurprisingly and perhaps thankfully, critical of the idea. Character.AI can also put users in touch with a computer recreation of Anton LaVey or Aleister Crowley – and yes, anyone can create anything.
ModernWitch.com says that AI seems inherently similar to magick, but that the same rules apply: understand its limitations and restrictions.
If readers use AI for anything, they should apply it with extreme care – but consider avoiding its use entirely. Internet users advise against using generative artificial intelligence for serious practice or research, and I would have to agree.
Like the feared Witch’s familiar from old Witchcraft confessions, chatbots may appear as useful personal assistants at first – and soon become a trickster in disguise that takes more than it gives.