The Dispatch’s Artificial Intelligence Guidelines

We regularly get questions from Dispatch readers, listeners, and contributors about our stance on artificial intelligence and how we incorporate the technology into our work, if at all. Those reaching out are right to ask; AI-generated content is seemingly everywhere you look these days, and it’s becoming increasingly difficult to determine what is real—and whom to trust. 

Our readers and listeners subscribe to The Dispatch because they trust our journalists’ expertise, judgment, and voice. That trust is our most valuable asset—we work hard to earn it every day—and we will not risk it for marginal efficiency gains. At the same time, we won’t categorically rule out tools that can help us deliver those readers and listeners better—and more—work. 

Given how quickly these technologies are advancing, we determined that no single document could capture every possible use case and rule on whether it is permitted. Instead, we outlined a set of general guidelines for AI use across the organization, grounded in a simple philosophy: AI should augment human judgment, not replace it.

Below are four core principles underpinning this philosophy—as well as a (non-exhaustive) list of prohibited uses of AI technology—that apply to full-time Dispatch staff and freelance contributors alike.

Core Principles

  1. Humans own the output. Nothing The Dispatch publishes or sends to readers, listeners, prospects, or partners should be generated by AI, barring a handful of exceptions (e.g., AI-generated audio versions of human-written articles) in which the AI use is both cleared with The Dispatch’s executive team and prominently disclosed to the reader or listener. Any external-facing work that AI assisted with must be meaningfully reviewed, edited, and approved by a human who takes full responsibility for its contents. This applies equally to the editorial and business teams.
  2. AI assists; it doesn’t replace thinking. There’s a difference between using AI to handle rote tasks so you can focus on higher-value work and using it as a crutch that weakens your core competencies. We encourage the former and discourage the latter. Think of AI tools like an intern or a personal assistant: Hand off the more menial, time-consuming tasks that don’t require much expertise to free up your time to do the work that you are uniquely positioned to do.
  3. Treat AI output as unvetted source material. Large language models (LLMs) hallucinate. They confidently present false information. They have biases baked into their training data. Any factual claim, suggestion, or output from an AI tool should be verified against reliable sources before you rely on it—just as you would verify a tip from an unknown source or a claim on Wikipedia.
  4. When uncertain, ask. AI capabilities are evolving rapidly, and no policy can anticipate every scenario. If you’re unsure whether a particular use case is appropriate, ask your editor. They can escalate to the executive team if needed.

Prohibited Uses

The following uses of AI are not permitted:

  • Generating publishable content: AI cannot write articles, newsletters, or any other content that will be published under The Dispatch’s name. This includes broad requests that AI “improve” or “make better” a draft in ways that let the tool reshape your argument or voice.
  • Structural outsourcing: Writers should not ask AI how to structure their thoughts or organize their articles. This is a key part of the writing process, and outsourcing it to an AI tool will result in a less distinct piece and, more importantly, erode the writer’s critical thinking skills.
  • Outsourcing critical thinking: Using AI as a substitute for your own analytical judgment, e.g., relying on it to identify the weaknesses in your argument rather than developing that critical faculty yourself. Part of what makes Dispatch journalism valuable is the rigor our writers and editors bring to their own work.
  • Wholly AI-generated art: We do not use AI-generated images as article/newsletter art or in logos and branding. Designers and artists may use AI-enhanced tools (like Adobe’s AI features) as part of their creative process, but the output should reflect human creative direction.
  • Manipulating audio or video: AI audio and video tools may be used to improve quality on the margins, but never to alter what someone said or to generate synthetic speech attributed to a real person.
  • Publishing AI-generated images as reality: We will not publish AI-generated images that depict events or scenes as if they were real photographs, unless the image itself is the subject of the story and is clearly labeled as AI-generated.

Freelancers and Contractors

The same core principles that apply to members of The Dispatch staff hold true for all freelance writers, commissioned contributors, and contractors producing work for The Dispatch: AI can assist with research, transcription, and similar tasks, but cannot be used to generate publishable content. Freelance contributors are responsible for the accuracy, originality, and voice of the work they submit.

If a freelancer is unsure whether a particular use of AI is appropriate for a commissioned piece, they should ask their assigning editor before submission.