AI is currently mostly hype. That is, the systems people call AI at the moment are technically LLMs (large language models), such as ChatGPT, Copilot, Gemini and so on.
All they do is learn from existing datasets, then combine and reformat that material into usable chunks of data.
In the case of computer security vulnerabilities, this is no different. For the most part, AI-driven vulnerability checkers take previously known flaws and check targets against them - essentially no different from what old-school hacking toolkits used to do. Ultimately it still takes clever hackers to find bugs in the first place and work out the exploits, though AI may be better at chaining various exploits together into a viable way to compromise a target.
With open source, there is a security risk in that these LLMs can be trained on the source code and use that to find programming mistakes. This is nothing new; AI is just easier for lazy script kiddies to run, and more flexible than the old toolkits they used to use.
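To make the comparison concrete, here is a minimal sketch of the kind of signature-based scan those old toolkits performed - matching source code against a list of previously known flaws. The pattern list and function names here are invented for illustration, not taken from any real tool; an LLM-based checker effectively does a fuzzier, more flexible version of the same thing.

```python
import re

# Hypothetical signature list: patterns for well-known C mistakes.
# Real toolkits carried far larger databases of known flaws.
KNOWN_BAD_PATTERNS = {
    r"\bstrcpy\s*\(": "unbounded copy - possible buffer overflow",
    r"\bgets\s*\(": "gets() is inherently unsafe - use fgets()",
    r"\bsystem\s*\(": "shell invocation - possible command injection",
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line number, warning) pairs for lines matching known flaws."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in KNOWN_BAD_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

# Example: scan a fragment of (deliberately bad) C code.
snippet = "char buf[8];\nstrcpy(buf, user_input);\n"
for lineno, warning in scan_source(snippet):
    print(f"line {lineno}: {warning}")
```

The point of the sketch is the limitation it shares with AI checkers: it can only flag patterns someone has already identified as dangerous. Finding a genuinely new class of bug still needs a human.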
Sadly, particularly with open source, AI is generating a lot of spurious bug reports for software developers, causing real bug reports to get lost, downgraded or closed amid all the fake dross. This is probably the biggest worry of all.
It remains the case that the easiest way to hack systems is to target the organic crap between the keyboard and the chair, and AI can generate some convincing words for that. As a top-notch security guy once told us, those who think they can't be duped are generally the easiest to dupe.