2026 Kickoff: What I'm Building and What's Coming in AI Compliance
Starting the year with a clear focus: ProvStamp for EU AI Act compliance, ThreatWatch v2, and why the August deadline is closer than most people think.
First note of 2026. I’m going to try to publish these every Sunday: raw working notes, not polished articles. What I read, what I built, what I’m thinking about. No SEO, no listicles.
What the year looks like
A few threads I’m picking up from Q4:
EU AI Act enforcement is August 2, 2026. That’s seven months away, which sounds comfortable until you actually look at what compliance requires. Most companies I’ve spoken to are either not tracking it at all or assuming the rumoured Digital Omnibus deferral will happen. That’s a bad bet. The August date should be treated as live until the Commission formally announces otherwise, and even if high-risk AI obligations get deferred to December 2027, the Article 50 transparency requirements (AI disclosure, deepfake watermarking) still land in August. Nobody’s ready for those either.
This is directly relevant to ProvStamp. C2PA content credentials solve the watermarking side of Article 50 cleanly: if you can cryptographically sign and surface AI provenance metadata, you have a defensible compliance layer. I want to get the API in shape in Q1.
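To make the shape of that compliance layer concrete, here is a minimal sketch of a ProvStamp-style flow: build a C2PA-style manifest (an actions assertion recording AI generation, plus a content-hash binding) and sign it. The `ProvStamp/0.1` generator string and the HMAC signature are illustrative stand-ins only; a real C2PA claim is signed with an X.509 certificate via COSE per the C2PA spec, not an HMAC.

```python
import hashlib
import hmac
import json

def build_manifest(asset_bytes: bytes, generator: str) -> dict:
    # Minimal C2PA-style manifest: an actions assertion recording AI
    # generation, plus a hard binding to the asset's content hash.
    return {
        "claim_generator": "ProvStamp/0.1",  # hypothetical name
        "assertions": [
            {"label": "c2pa.actions",
             "data": {"actions": [{
                 "action": "c2pa.created",
                 "digitalSourceType": "trainedAlgorithmicMedia",
                 "softwareAgent": generator}]}},
            {"label": "c2pa.hash.data",
             "data": {"alg": "sha256",
                      "hash": hashlib.sha256(asset_bytes).hexdigest()}},
        ],
    }

def sign_manifest(manifest: dict, key: bytes) -> dict:
    # Stand-in signature: HMAC over canonical JSON. Real C2PA claims
    # use certificate-based signing, not a shared key.
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest,
            "signature": hmac.new(key, payload, hashlib.sha256).hexdigest()}

stamped = sign_manifest(build_manifest(b"<image bytes>", "sdxl-1.0"), b"demo-key")
```

The point of the sketch is the two-part structure: a machine-readable "this was AI-generated" assertion for the Article 50 disclosure side, and a cryptographic binding to the asset for the tamper-evidence side.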
ThreatWatch needs more signal. The feed architecture works, but I’ve been noticing quality issues: a number of sources are returning stale data under “real-time” labels. Going to do a proper audit in February. The dark web sources are the worst offenders.
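The audit check itself is simple to sketch: flag any source whose newest item is older than some threshold despite carrying a “real-time” label. The `(id, published_at)` tuple shape and the one-hour threshold are assumptions for illustration, not ThreatWatch’s actual schema.

```python
from datetime import datetime, timedelta, timezone

def audit_feed(items, label: str, max_age: timedelta = timedelta(hours=1)) -> dict:
    """Flag a feed as stale if it claims to be real-time but its newest
    item is older than max_age. `items` is a list of (id, published_at)
    tuples with timezone-aware timestamps (a hypothetical shape)."""
    if not items:
        return {"label": label, "stale": True, "reason": "empty feed"}
    newest = max(ts for _, ts in items)
    age = datetime.now(timezone.utc) - newest
    return {
        "label": label,
        "stale": label == "real-time" and age > max_age,
        "newest_age_hours": round(age.total_seconds() / 3600, 1),
    }
```

Running this across all sources once a day would turn the “I keep noticing stale data” impression into a concrete offenders list before the February audit.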
Moxel is a proper project now. What started as a scratch project to squeeze more VRAM out of our GPU cluster is worth building properly. We have four RTX 3090s on vm1: 96GB of combined VRAM (4 × 24GB) if pooling works cleanly. First real test is on the roadmap for February.
What I was reading
The EU AI Act’s Article 50 is more nuanced than headlines suggest. It breaks down into four requirements:
- AI systems interacting with people must disclose they’re AI (chatbots, voice agents)
- Emotion recognition systems must notify users
- Deepfakes must carry machine-readable watermarks
- Biometric categorisation must comply with disclosure mandates
The penalty structure is worth understanding: up to €35M or 7% of global turnover for the most serious violations, and €15M or 3% for non-compliance with high-risk obligations, whichever is higher. For a startup, the fixed-euro figure is the real floor: the exposure doesn’t scale down to nothing just because you’re small.
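The “whichever is higher” structure is easy to see in numbers. A quick sketch of the headline fine caps (ignoring any SME-specific carve-outs):

```python
def max_fine(turnover_eur: float, fixed_cap: float, pct: float) -> float:
    # Headline AI Act fines are "whichever is higher", so the fixed cap
    # acts as a floor for small companies rather than scaling with size.
    return max(fixed_cap, turnover_eur * pct)

# A startup with EUR 2M turnover still faces the full fixed cap:
max_fine(2_000_000, 15_000_000, 0.03)       # -> 15,000,000
# A EUR 1B company hits the percentage side instead:
max_fine(1_000_000_000, 15_000_000, 0.03)   # -> 30,000,000
```

For the €2M-turnover startup, 3% would be a trivial €60K; it’s the fixed €15M cap that makes the exposure existential.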
One thing I got wrong last year
I assumed C2PA adoption would be driven by platforms (YouTube, Meta, Adobe) and the rest of the ecosystem would follow. That’s still partly true, but the EU AI Act is creating a separate adoption driver (regulatory compliance) that’s completely independent of platform decisions. Companies that need to prove their AI-generated content is labelled will adopt C2PA regardless of whether every social platform supports it. That changes the ProvStamp market considerably.
What’s next
- Week 2: Going deep on LLM jailbreak research: there’s a paper from Anthropic I keep coming back to
- Starting the ProvStamp API spec
- Getting ThreatWatch dark web sources more reliable
Small note: I’m using these posts partly as a thinking tool. If something changes or I find out I was wrong, I’ll note it in a later post rather than editing old ones.