Corporations are trying to set the precedent that they cannot be held responsible for what their AI does. If an employee action were required to follow through, there would be a point of liability. Zero oversight isn’t a bug of AI, it’s a feature. It puts more distance between the people at the top and any liability or consequences they might face.
‘Why, I could not have known this software was wrong 90% of the time; I’m not a computer scientist. It’s beside the point that all the mistakes made by the AI from the company we contracted were in our favor. Regardless, that’s in the past: the new generation of Artificial Intelligence will correct those mistakes and will detect 10% more fraud. It’s wonderful that we finally have a tool to combat the rampant fraud and bad actors that have taken over this country.’
Then the incentive becomes to stop a whistleblower faster if they have a dead-man’s-switch cache than if they release everything at once. There’s no guarantee the whistleblower is honest, or capable enough to pull something like that off, but there’s always a risk they’ll disclose the information anyway. Better to stop the whistleblower by arresting them first and deal with the potential fallout than to negotiate. Now the whistleblower’s reputation is ruined; if they’re behind bars, they’re effectively silenced; and they’re as good as dead to society without all that messy work of trying to fake a suicide.