What if, for example, commercial use of agentic AI were governed by a contract dynamically generated and negotiated by AI systems? The contract could specify different risk-sharing terms based on the inherent risks of the activity under consideration, themselves modeled quantitatively by teams of AI actuaries and economists. It could consider the likelihood of third-party harms (harms caused by an AI system to a bystander uninvolved in the contract) and include provisions for compensating third parties.
Depending on the industry in which the AI is deployed, the AI company would assume contractual liability when the model operates outside its intended purpose. Under this scheme, AI companies would assume more liability in low-risk settings, such as routine task automation or cloud computing, and less in high-risk settings, such as automation in hospitals. Overall, liability would be allocated primarily by contract, with tort serving only as an ad hoc backstop, and the contracts themselves would be drafted and negotiated by AI.
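To make the mechanism concrete, here is a minimal sketch of how modeled risk could parameterize a contract's terms. Everything in it is a hypothetical assumption for illustration: the risk tiers, liability shares, dollar figures, and names like `DeploymentProfile` and `draft_terms` are invented, not drawn from any real actuarial model or contract.

```python
from dataclasses import dataclass

# Hypothetical illustration only: tiers, shares, and numbers are
# invented assumptions, not a real actuarial model or contract.

@dataclass
class DeploymentProfile:
    industry: str
    annual_harm_probability: float   # modeled chance of a compensable incident per year
    expected_harm_cost: float        # modeled average cost of an incident (USD)
    third_party_exposure: float      # fraction of expected harm falling on bystanders

def provider_liability_share(annual_harm_probability: float) -> float:
    """Toy allocation rule: the AI provider assumes a larger share of
    liability in low-risk deployments and a smaller share in high-risk
    ones, where the deployer retains more responsibility."""
    if annual_harm_probability < 0.01:
        return 0.8   # routine automation, cloud computing
    if annual_harm_probability < 0.10:
        return 0.5
    return 0.2       # high-risk settings such as hospitals

def draft_terms(profile: DeploymentProfile) -> dict:
    """Generate the risk-sharing clause of a contract from the modeled risk."""
    expected_annual_loss = profile.annual_harm_probability * profile.expected_harm_cost
    share = provider_liability_share(profile.annual_harm_probability)
    return {
        "industry": profile.industry,
        "provider_liability_share": share,
        "deployer_liability_share": round(1 - share, 2),
        # Funded reserve earmarked for compensating harmed third parties.
        "third_party_compensation_reserve": round(
            expected_annual_loss * profile.third_party_exposure, 2
        ),
    }

if __name__ == "__main__":
    for profile in [
        DeploymentProfile("document automation", 0.005, 50_000, 0.1),
        DeploymentProfile("hospital triage", 0.15, 2_000_000, 0.6),
    ]:
        print(draft_terms(profile))
```

In a real system, the step labeled `provider_liability_share` is where the AI actuaries and economists would come in, replacing the hard-coded tiers with quantitative risk models negotiated between the parties' AI agents.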