The real issue is that any fingerprint that can be mandated for AI content must be algorithmically implemented, which means it can also be algorithmically removed.
For example, let’s say companies voluntarily adopt text fingerprinting for LLM output, or are forced to. Automated AI writing detection tools already exist, but they’re not reliable. In principle, though, we could make LLM output easy to identify. Maybe we force the models to adopt subtle but highly distinctive patterns of word choice, punctuation, sentence structure, and so on. Then if a student uploaded an LLM-generated essay to their course website, the system could flag it as AI-generated with high accuracy.
But…if those patterns are clear and unambiguous enough to detect reliably, they can be detected just as easily by third-party tools. If one person can program ChatGPT to embed a fingerprint in the text it generates, another person can build a tool you paste that text into that strips the fingerprint right back out.
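The symmetry is easy to demonstrate with a toy sketch. Everything below is hypothetical for illustration: a real watermark would bias the model's token probabilities, not do crude synonym swaps, and the tiny synonym table is invented. But the structure of the argument is the same — the embedder, the detector, and the scrubber all run the exact same public rules, so whoever can check the fingerprint can also erase it.

```python
import hashlib
import random

# Hypothetical synonym classes (invented for this sketch; a real watermark
# operates on the model's token distribution, not word swaps).
SYNONYMS = {
    "big": ["big", "large", "huge"],
    "fast": ["fast", "quick", "rapid"],
    "smart": ["smart", "clever", "bright"],
}
LOOKUP = {w: key for key, group in SYNONYMS.items() for w in group}

def _marked_choice(prev_word: str, group: list) -> str:
    # Deterministically pick a synonym from a hash of the previous word --
    # this is the "subtle but distinctive pattern" of word choice.
    h = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    return group[h % len(group)]

def add_fingerprint(text: str) -> str:
    # The mandated fingerprint: every markable word is replaced by the
    # hash-determined synonym.
    out = []
    for w in text.split():
        key = LOOKUP.get(w.lower())
        if key and out:
            out.append(_marked_choice(out[-1], SYNONYMS[key]))
        else:
            out.append(w)
    return " ".join(out)

def detect_fingerprint(text: str) -> float:
    # Fraction of markable words that match the deterministic choice.
    # 1.0 means "flag as AI-generated".
    words = text.split()
    hits = total = 0
    for i, w in enumerate(words):
        key = LOOKUP.get(w.lower())
        if key and i > 0:
            total += 1
            if w == _marked_choice(words[i - 1], SYNONYMS[key]):
                hits += 1
    return hits / total if total else 0.0

def remove_fingerprint(text: str, rng: random.Random) -> str:
    # The third-party scrubber: re-randomize every markable word using the
    # same public synonym table, destroying the signal.
    out = []
    for w in text.split():
        key = LOOKUP.get(w.lower())
        out.append(rng.choice(SYNONYMS[key]) if key else w)
    return " ".join(out)
```

Run the detector on fingerprinted text and it scores a perfect match; run it after the scrubber and the score collapses to roughly chance level (1/3 here, since each synonym class has three members). The scrubber needed no secret — only the same rules the detector uses.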

Pass a new law. Make “it was an AI data center” an affirmative criminal defense against charges of arson. An affirmative defense is when you go to court and say, “yes, I did the act, but it was justified for reason X.” “Yes, I shot and killed the guy, but I did so because he broke into my house and was trying to kill me.” That’s an affirmative defense.
Fuck it. Ultimately the law is subordinate to the will of the people. If they’re going to just ignore the voters, the voters should make it so people can’t be prosecuted for burning down AI data centers. It just won’t be illegal.