OpenAI No Longer Takes Safety Seriously
OpenAI and its competitors are racing as fast as they can to develop systems that are as capable and as autonomous as possible.
Lawfare Daily: DHS Under Secretary Robert Silvers on the CSRB's Report on the Summer 2023 Microsoft Exchange Online Intrusion
Could Microsoft have prevented the 2023 cyber intrusion?
How AI Is Changing Tech Policy Politics in Washington
A new battle is emerging between AI regulation optimists and AI regulation skeptics.
Lawfare Daily: Peter Salib on AI Self-Improvement
What are the risks of AI self-improvement?
AI Will Not Want To Self-Improve
Classic arguments for AI risk assume that capable, goal-seeking systems will naturally attempt to improve themselves, but a closer look at the operative incentives reveals a more complicated story.
Amnesty Flags Possible Spyware Abuse in Indonesia
The latest edition of the Seriously Risky Business cybersecurity newsletter, now on Lawfare.
Lawfare Daily: Pablo Chavez on Digital Solidarity
What is digital solidarity?
Five More Observations on the TikTok Bill and the First Amendment
The data-privacy justification is substantial, the national security posture helps the government, and the law covers more than just TikTok.
Legal Challenges to Compute Governance
Controlling AI through compute may be necessary, but it won’t be easy.
Targeting TikTok
Whether Congress can single out TikTok remains an unanswered constitutional question.
When Manipulating AI Is a Crime
Some forms of “prompt injection” may violate federal law.
Incentives for Improving Software Security: Product Liability and Alternatives
Tort liability is the wrong approach to improving software security; process transparency and Executive Order 14028 offer a path forward.