What happens is that weak models hallucinate a missing validation of the start of the window (sometimes casually hitting a real problem)... without understanding why those findings, if put together, create an actual issue.
This finding reveals a serious limitation of AI-based vulnerability detection: weak models can only "discover" superficially similar problems through pattern matching, without understanding the causal relationships between them. It suggests that current applications of AI in cybersecurity may have systematic blind spots, which deserve further study.