If I can verify that the vulnerability is a real one, do I really care how and why the neural net found it?
I really hope you're just feigning ignorance to further the discussion. To a scientist, the how and why matter more than any single anecdote or data point ever can. We care about all the vulnerabilities brought to our attention that aren't real (false positives) and all the ones the ML will miss (false negatives). We want to determine causal factors, not just correlations. We want to see if the class of vulnerabilities detected might point to another set of related vulnerabilities that a human can see, but that no ML has yet been trained to spot.

And, like @Drone says, from a business/organizational perspective, we should be asking what mistakes we made in our hiring and development process that got us into such a sorry state to begin with. These systems would probably show the most bang for the buck if they were applied to the high-priced mistakes of high-priced executives, but it's funny how those in power never seem that eager to apply this tech to their own jobs.
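To make the false-positive/false-negative point concrete, here's a minimal back-of-the-envelope sketch. The counts are entirely made up for illustration, not measurements from any real scanner:

    # Hypothetical triage results from an ML vulnerability scanner.
    true_positives = 40    # flagged findings that were real vulnerabilities
    false_positives = 160  # flagged findings that turned out to be noise
    false_negatives = 25   # real vulnerabilities the model never flagged

    precision = true_positives / (true_positives + false_positives)  # 0.20
    recall = true_positives / (true_positives + false_negatives)     # ~0.62

    print(f"precision={precision:.2f} recall={recall:.2f}")

    # At 20% precision, four out of five alerts waste a reviewer's time;
    # at ~62% recall, more than a third of the real bugs slip through anyway.
    # Verifying one finding tells you nothing about either rate.

That's the point: verifying a single finding says nothing about how much reviewer time the tool burns or how many bugs it silently misses, and only the how and why can explain those rates.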