Is Virginia Dropping the Ball on AI and Policing? The High-Stakes Race for Fair Rules


Who’s watching the watchers? As artificial intelligence transforms police work, Virginia’s leaders promised strict rules to protect public trust. But the state has blown past its own key deadline, and the stakes keep growing.

The Youngkin administration, backed by Attorney General Jason Miyares, set out in early 2024 to write clear, ethical guidelines for how Virginia police can use AI, from facial recognition to surveillance tools. Those rules were supposed to land in October 2024. Today, they’re nowhere in sight, even as AI creeps further into law enforcement each day. If Virginia’s push to regulate high-tech policing stalls, who pays the price? Let’s dive in.


🚨 Overdue & Under the Microscope: The Problems with Virginia’s AI Policing Push

  • Missed deadlines pile up: Governor Glenn Youngkin ordered the Attorney General and Public Safety Secretary to develop AI standards for all executive branch law enforcement by October 18, 2024. As of June 2025, those rules are still not released—over seven months late.
  • Rapid AI expansion: Law enforcement agencies in Virginia and across the country are ramping up use of AI. This includes surveillance systems, license plate readers, and even AI-assisted crime reports—before comprehensive rules are in place.
  • Patchwork & politics: Virginia has a hodgepodge of rules governing law enforcement’s use of AI but has so far failed to pass broader, binding legislation. Bills on consumer protection, anti-bias safeguards, and transparency have been vetoed or stalled in committee.
  • Bias risks looming: Studies, including a 2019 federal review, show AI-powered facial recognition can encode racial bias. And as more private tech firms profit from policing tools, data privacy and fairness hang in the balance.
  • Big contracts, little oversight: Virginia State Police inked deals worth millions—including a $200,000 contract with Dataminr for social media monitoring and a 15-year, $54 million fingerprint upgrade with Tech5—while the public remains in the dark on oversight.

Underneath the delay lies a deeper struggle: fast-moving AI tools may be outpacing government’s ability, and its political will, to regulate them. The escalating tension between innovation, privacy, and accountability is playing out in real time.


✅ The Big Ideas: Virginia’s Push for Ethical, Transparent Police AI

  • Youngkin’s AI Executive Orders: In January 2024, the governor’s order required agencies to create ethical AI standards, with a mandated approval process. In February 2025, another order banned Chinese company DeepSeek’s AI on state government devices, signaling growing concern over tech sourcing and security.
  • Facial recognition guardrails: A 2023 law restored police facial recognition—after an earlier ban—but with new checks, reflecting lawmakers’ struggle to balance innovation with rights protections.
  • Public input and expert involvement: Governor Youngkin’s office touted Virginia as a pioneer in public AI guidelines and organized an AI Task Force with stakeholders to finalize robust standards.
  • Data protection moves: Some initiatives have targeted specific AI vendors and sought tougher rules on how long surveillance data can be stored and by whom.

These efforts show that Virginia is taking the AI policing debate seriously—at least on paper. With the right guardrails, advocates believe AI could help reduce human bias, sharpen crime-solving, and improve public safety, all while keeping public trust intact.


🚧 Barriers and Black Holes: Risks, Delays & Unanswered Questions

  • 🚧 Chronic delays and lack of transparency: Despite the governor’s executive order, there’s still no clear timeline for final standards. State officials, including Youngkin’s spokesperson, have not publicly explained the holdups or committed to a completion date—leaving the public and police agencies in limbo.
  • ⚠️ Technical and ethical complexity: Even supporters worry that without strict oversight, AI systems trained on biased data can perpetuate racial discrimination and erode civil liberties. As criminologist Steven Keener notes, “historical biases… can potentially allow for racial biases in the system to continue.”
  • 🚧 Private tech and data security: Outsourcing surveillance to private companies like Dataminr (which recently secured a $200,000 First Alert contract with VSP) raises key questions: Who owns the collected data? How long is it held? Who audits these companies for misuse?
  • ⚠️ Failed legislative efforts: In 2025 alone, a signature AI bill for “high-risk” system oversight was vetoed, while proposals for the Artificial Intelligence Transparency Act and other regulations died in committee. Virginia remains without comprehensive consumer AI protections.

The result: vast gaps in oversight just as AI becomes central to modern policing, threatening public trust in law enforcement and civil rights alike.


🚀 Final Thoughts: Can Virginia Catch Up—Or Is Public Trust on the Line?

Virginia’s bold talk on AI rules has yet to produce real-world accountability for police technology. For progress to mean something, the state must:

  • Enact clear, transparent standards before new AI tools become entrenched, not after.
  • Ensure independent oversight and public input, not just behind-the-scenes contracts with private vendors.
  • Prioritize fairness, privacy, and security as new tech reshapes public safety and civil liberties.

Will Virginia’s leaders break the gridlock and deliver? Or will the lag in AI policing rules erode hard-won public trust? What do you think: is Virginia moving fast enough—or leaving us all behind?

Let us know on X (formerly Twitter)


Source: Dean Mirshahi, “Youngkin administration has missed deadline to set AI rules for state police,” VPM News, June 5, 2025. https://www.vpm.org/news/2025-06-05/youngkin-ai-standards-overdue-state-police-miyares-cole-dataminr
