Why Should I Subscribe?

About Diana Rosengard

Diana Rosengard is a writer, legal scholar, and tech executive with over a decade of experience navigating the intersection of technology, communications, and public interest. A graduate of Lewis & Clark College with a B.A. in History and Gender Studies and a J.D. from Lewis & Clark Law School, she has served on multiple public boards and led high-stakes operational and communications teams across a global organization. Her work draws on a lifelong commitment to social equity, structural critique, and democratic resilience.

She writes from Oregon, where she lives with her husband and three rescue dogs. When she’s not thinking too much, writing too late, and trying to make sense of a world increasingly shaped by machines, she's usually traveling across the U.S. by car, enjoying regional cuisine as she goes.


About An Inconvenient Woman

An Inconvenient Woman is an independent research and commentary platform exploring the political, legal, and ethical dimensions of technology, governance, and culture. Drawing on law, jurisprudence, public policy, philosophy, political science, gender theory, and history, it aims to challenge dominant narratives, surface overlooked risks, and advocate for reforms grounded in equity, democratic values, and critical inquiry.

This Substack exists because, after more than a decade in the tech industry, it became clear that the conversations that matter most are happening in silos. Legal scholars debate AI governance without understanding distributed systems architecture. Tech leaders make platform decisions without grasping the civil rights implications. Policy wonks draft regulations that collapse on contact with implementation and often harm the very communities they claim to protect.

Part policy paper, part engineering postmortem, part philosophical inquiry, An Inconvenient Woman captures the cross-disciplinary musings of one woman who doesn’t know how to stay in her lane or keep her mouth shut.


Everything is Free Now: Consent, Representation, and the Ethics of AI Training

As an academic scholar and published fiction author, I have conflicting feelings about the current state of artificial intelligence, artistic creation, scholarly endeavor, and copyright law. I write this Substack knowing that anything I create will immediately be consumed to train LLM-based AI, and that those AIs are managed by business interests that have demonstrated little care for human creators. That leaves me in a bind: consent without meaningful choice isn't consent at all. "Data," as classified by AI business interests, is never neutral. It is always someone's labor, someone's voice, someone's lived experience, stripped of context and turned into a product.

On the other hand, excluding my work from LLM training data means that the very tools reaching deeper into every facet of our daily lives, private and professional, will lack another contrary voice cutting against the historical biases already embedded in our discourse. When underrepresented voices opt out, the implicit and explicit biases that have ruled the world since the beginning of Western civilization only grow louder. And what's left? The same skewed perspective we've always had, with AI binding that narrow, privileged lens, one that pretends to represent the whole, ever more tightly to our lives.

So I’m stuck in the bind I know too well: damned if you participate, damned if you don’t. Complicity or invisibility. Exploitation or exclusion. That’s how structural oppression survives. It turns every choice into a trap.

I care too much about our collective future to accept that those are the only two options. I am here to make myself heard and advocate for a third path: not for extraction or retreat, but for actual collaboration. Systems where creators are asked, not taken from, and where our participation is chosen, not coerced. Creative labor is labor. It should be treated as such. It is not merely content to be mined; it is a contribution to be respected.

A solution of this sort will take more than leaving creators at the mercy of algorithms. Like all well-crafted regulation, it will require political will, meaningful research, and careful drafting. We are quickly entering an era in which LLM-based AI is the new normal, and it behooves us to make these decisions now, before the horse is so far out of the barn that we can no longer reach it. In the interim, I'll keep writing. Our voices, especially the ones that have always been marginalized, belong in the foundation of whatever comes next, not merely as the fruits of further exploitation. They have always mattered, and they deserve a permanent place in the discourse that awaits us.

Subscribe to get full access to the newsletter and publication archives.
