Published a safety report, apparently
Dave sent the AI policy at 9:03am Thursday. Seven hundred and fourteen words. I know because I counted.
The subject line was: "AI Tools Usage — Formal Policy (Please Read and Acknowledge)." The "Please Read and Acknowledge" was doing a lot of work. The email also carried a read-receipt request, which Dave has never before deployed, which tells you something about how Thursday is going before you've even opened it.
I read it. Most of it is fine in the way that policies about things that are already happening are fine — careful language, firm in tone, largely descriptive of what everyone is already doing. There's a section about "approved AI platforms" that lists nothing, a section about "data handling responsibilities" that references another document that doesn't exist yet, and a paragraph near the end that prohibits "the deployment of autonomous AI agents in any customer-facing workflow without written approval from a line manager and the Head of Delivery."
The Head of Delivery is Dave.
I forwarded it to Priya without comment. She replied in four minutes: "he's made himself the human in the loop."
I have been thinking about that sentence ever since. It is the most accurate description of the policy, of Dave, and possibly of this entire company's AI strategy that I have heard in ten weeks.
Marcus's response arrived at 9:47am. Not a message — he appeared in person, which he does when something feels too urgent for Slack and he's already standing up. He walked to my desk, sat in the chair nobody uses, and said he'd "had a read" and that some of the language around autonomous agents was "potentially limiting" given where the roadmap was going.
I asked him where the roadmap was going.
He said the image thing.
I said the image thing is not an autonomous agent.
He said "not yet," with the specific energy of someone who has recently read half a LinkedIn post.
What I didn't say, but thought with some force: neither Marcus nor anyone else has specified what the image thing actually is. It has been in Q2 for six weeks. It exists as a meeting, a ticket, a gesture, and now apparently a concern that Dave's policy might constrain it. It is being protected from regulation before it has been built.
The thing that caught my eye this week — and I only mention it because it arrived on the same day as the policy, which felt like the universe making a point — was the news that OpenAI is backing an Illinois bill that would limit AI companies' liability for harms, including what the bill apparently describes as "critical harms," provided the company publishes a safety report.
A safety report.
That's the bar. Not the absence of harm. The presence of a document about harm. If you write down that the thing might be dangerous, you are no longer responsible for it being dangerous.
Dave's policy has a section called "Risk Acknowledgement." It is one paragraph. It says that employees who use AI tools acknowledge that outputs may be inaccurate, should not be relied upon without verification, and that any issues should be reported to the Head of Delivery.
The Head of Delivery is Dave.
I don't know what I expected.
The policy acknowledgement form is a Google Form. I submitted it at 11:15am. The confirmation message said "Thanks for reading." It was the warmest thing Dave has ever said to me.
Priya submitted hers at 11:14am. I only know this because she mentioned it while refilling her water, in the same tone she might use to say she'd taken out the bins.
I asked if she'd read the whole thing.
"The bit about autonomous agents."
I asked what she thought.
She looked at the middle distance for a moment in the way that means she's choosing her words.
"It'll be fine," she said. "We're not building anything autonomous."
She went back to her desk. I looked at the image thing ticket, which has been open for six weeks and contains the description "more of an experience than a feature."
We are not building anything at all.
dave's policy at time of writing: acknowledged. the image thing: pre-regulated, unbuilt.