Pagans in the Age of Project Genesis

WASHINGTON – The Wild Hunt has previously explored the rise of artificial intelligence and its implications for Pagan communities, artists, and spiritual culture. In 2023, Star Bustamonte examined the impact of AI on Pagan writers, raising early alarms about hallucinations, generative plagiarism, and the ethical erosion caused by large language models trained on unconsented creative labor. In another piece, TWH asked what AI “thinks” about Pagans, a question that yielded a surprisingly balanced, if still limited, response two years ago.

AI models have advanced rapidly since then, becoming more fluent, more accessible, and more deeply embedded in daily life. Yet the core concerns surrounding their use, including creative exploitation, environmental cost, bias, and governance, remain unresolved.

At the close of 2024, TWH columnist Karl E. H. Seigfried wrote bluntly, “As a practitioner of Ásatrú, I can’t support companies that steal the work of human artists to generate disposable dross.” Writing again in November 2025, Seigfried observed that “our social media feeds and search engine results have become so absolutely full of crappy AI-generated imagery that we now have to add that hyphenated modifier to a noun that has never before needed such a clarification.”

[Image: By David S. Soriano – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=125089281]

Those frustrations continue to resonate widely across Pagan social media spaces. Author Sara Amis, who has long warned about AI in media, captured the breadth of concern in a social media post last month:

If you care about art, don’t use AI.
If you care about education, don’t use AI.
If you care about privacy, don’t use AI.
If you care about the environment, don’t use AI.
If you care about the economy, don’t use AI.
Did I leave anything out? There are some valid uses of AI, mostly in science. None of them involve forwarding slop around the internet, which is an actively destructive and unethical thing to do.

In my opening editorial of the year just last week, For 2026: What Cassandra Said to Me, I echoed these worries: “The rapid expansion of artificial intelligence, along with its environmental challenges, continues to undermine artists, teachers, and creators whose labor sustains our spiritual cultures.”

These cultural critiques now intersect with a growing legal and political debate over AI’s role in civil society.

Civil Rights, Algorithms, and Accountability

The American Civil Liberties Union has increasingly warned that artificial intelligence is no longer a speculative concern but an everyday force shaping access to opportunity. From hiring and housing to credit, education, and parole decisions, algorithmic systems are quietly mediating people’s lives, often without transparency or meaningful oversight.

“AI is shaping access to opportunity across the country,” said Cody Venzke, senior policy counsel at the ACLU. “‘Black box’ systems make decisions about who gets a loan, receives a job offer, or is eligible for parole, often with little understanding of how those decisions are made. The AI Civil Rights Act makes sure that AI systems are transparent and give everyone a fair chance to compete.”

That concern has prompted Democratic lawmakers, led by Senator Ed Markey, to reintroduce the Artificial Intelligence Civil Rights Act. The bill seeks to establish enforceable civil rights protections when algorithms are used in consequential life decisions, including employment, housing, education, health care, and financial services.

The legislation would ban algorithmic discrimination, mandate independent audits and risk assessments, require transparency and stakeholder consultation, and give individuals the right to choose whether a human or an algorithm makes certain decisions. Enforcement would fall to the Federal Trade Commission, framing AI governance as both a technological and moral responsibility.

The ACLU has formally backed the bill, emphasizing that it places legal responsibility squarely on AI developers and deployers. Rather than voluntary ethics pledges, the act requires proactive harm documentation, transparency with auditors, and accountability through civil penalties at federal, state, and individual levels. In short, minimizing algorithmic harm becomes a legal duty, not an optional best practice.

[Image: By Madhav-Malhotra-003 – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=127185596]

Project Genesis

Against this backdrop, the White House has quietly released an executive order that has received surprisingly little mainstream attention. Titled “Launching the Genesis Mission,” the order may sound like something out of Star Trek or biblical allegory, but it is neither mystical nor symbolic, even if it does evoke science fiction edging into lived reality.

Issued by President Donald Trump, the Genesis Mission is explicitly framed as an “AI Manhattan Project,” a national effort to secure U.S. dominance in AI-accelerated science and technology.

The order establishes a sweeping federal initiative to build the American Science and Security Platform, housed within the Department of Energy. This platform will integrate decades of federal scientific datasets with high-performance computing, AI foundation models, automated research tools, and robotic laboratories. Its stated goal is to dramatically shorten research timelines, automate scientific workflows, and accelerate breakthroughs across advanced manufacturing, biotechnology, critical materials, nuclear energy, quantum science, and semiconductors.

The Genesis Mission imposes aggressive timelines: identifying national computing resources within 90 days, data and model assets within 120 days, robotic lab capabilities within 240 days, and demonstrating initial operational capability within 270 days.

Despite its ambition, the order stops short of endorsing fully independent AI systems. Even as it promotes increased automation, it repeatedly emphasizes human-led governance and security controls: institutional oversight, authorization requirements, risk-based cybersecurity, user vetting, and compliance with classification, privacy, export-control, and intellectual-property laws. Autonomy, in this framework, means operational efficiency, not independent agency.

It smacks eerily of the conditions that led to Dune’s Butlerian Jihad.

Competition and Consequences

Meanwhile, China has already deployed an AI system directly connected to its National Supercomputing Network. Launched in December, the system can independently interpret research prompts, allocate computing power, run simulations, analyze data, and generate scientific reports with minimal human oversight. Operating across more than 30 supercomputing centers, it supports nearly 100 scientific workflows and has reduced research timelines from days to hours.

While the U.S. Genesis Mission remains on a strict development clock, China’s system is already serving more than a thousand institutional users. Experts say such integration could radically transform scientific discovery, but also raises serious risks, including data leaks, cybersecurity vulnerabilities, and exposure of sensitive or classified information.

As the United States and China escalate this technological rivalry, AI-driven science is becoming a central front in global power competition.

Where This Leaves Pagan Communities

At this broad level, the long-term impact on Pagans, including our artists, educators, spiritual leaders, and communities, is difficult to gauge, but it is unlikely to end well without a fight. The risk of algorithmic discrimination, as the ACLU warns, is real. The environmental costs are measurable. The threat to creative labor is already visible. At the same time, the promise of AI-accelerated science, particularly in medicine and climate research, cannot be dismissed outright.

What is clear is that AI is no longer a distant abstraction. It is a daily reality, a political, cultural, and ethical force shaping the world we inhabit. It has countless benefits, but like any tool, mechanical or magical, it cuts both ways. Whether the future it brings honors human dignity, creativity, and spiritual life, or reduces them (and us) to mere data points, will depend not on the technology alone, but on the choices societies make now.



