Thanks for taking a look at the docs. That section covers the default behavior of the CLI, which acts as a standard OSV known-vulnerability checker (since basic signature hygiene is still step one).
The semantic/behavioral analysis we built to hunt for these Telnyx/LiteLLM zero-days is a new module we just pushed this weekend. You trigger it using the --supply-chain flag (which requires an Anthropic API key).
When run with that flag, it moves past the OSV database and runs the LangGraph intent analysis on the actual dependency code. I'll get the landing page updated today to make the --supply-chain flag and LLM capabilities more prominent.
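For anyone curious what the default (non-flag) path amounts to: a standard OSV lookup is just a POST to the public OSV API per package/version. A minimal sketch of that kind of query — this is my own illustration of the OSV API shape, not this project's actual code:

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def build_osv_query(name: str, version: str, ecosystem: str = "PyPI") -> dict:
    # Shape of a single-package query against the public OSV API.
    return {
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }

def check_package(name: str, version: str) -> list:
    # Returns the known vulnerabilities OSV has recorded for this exact version.
    payload = json.dumps(build_osv_query(name, version)).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])
```

The --supply-chain path goes beyond this lookup, since a fresh zero-day won't be in the OSV database yet.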
The 'flashlight not a blocker' distinction is the right call. Curious to know: how do you handle false positives in practice?
In our experience with LLM-based code analysis, the signal-to-noise ratio is the thing that determines whether teams actually use the tool or just forget about it after a week.
For us, this was a very fast 0-60 project meant to help people quickly identify whether they were breached by the LiteLLM supply chain attack (with other detection support). That's part of the reason our tool runs recursive checks, so developers can point it at, say, their ~/source directory and quickly see if they were pwned.
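That recursive sweep can be pictured as a directory walk that collects dependency manifests under a root. A rough sketch, assuming common Python manifest names — the tool's actual matching logic may differ:

```python
import os

# Hypothetical set of manifest file names to scan for.
MANIFEST_NAMES = {"requirements.txt", "pyproject.toml", "Pipfile.lock"}

def find_manifests(root: str) -> list:
    # Walk the tree (e.g. ~/source) and collect every dependency manifest,
    # so each project underneath gets checked in one pass.
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune VCS/venv noise in place to keep the scan fast.
        dirnames[:] = [d for d in dirnames
                       if d not in {".git", ".venv", "node_modules"}]
        hits.extend(os.path.join(dirpath, f)
                    for f in filenames if f in MANIFEST_NAMES)
    return sorted(hits)
```

Each manifest found would then feed the per-package checks described above.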
We've seen almost zero false positives from the AI detection in our configuration -- granted, we haven't done a whole lot of testing given the short timeframe, so take this with a grain of salt.
It has a two-part process. First, it does a simple dependency check against Google's OSV database; then there's a supply chain check that requires an AI key. This secondary check uses code signature heuristics to identify files with "risky" behavior (e.g. eval, large amounts of encoded code) and passes those files to an AI to judge whether malicious code is likely hiding behind the "risky" behavior.
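The first-pass "risky behavior" filter is essentially pattern matching; a rough sketch of that stage (the patterns and threshold here are my own guesses, not the project's actual rules), with flagged files then handed off to the LLM for intent analysis:

```python
import re

# Hypothetical signatures for behavior worth a closer look.
RISKY_PATTERNS = [
    re.compile(r"\beval\s*\("),
    re.compile(r"\bexec\s*\("),
    re.compile(r"base64\.b64decode"),
    re.compile(r"[A-Za-z0-9+/=]{200,}"),  # long encoded blob
]

def risky_score(source: str) -> int:
    # Count how many distinct risky signatures appear in the file.
    return sum(1 for pat in RISKY_PATTERNS if pat.search(source))

def needs_ai_review(source: str, threshold: int = 1) -> bool:
    # Files at or above the threshold get escalated to the AI triage step,
    # which decides whether the "risky" construct is benign or malicious.
    return risky_score(source) >= threshold
```

The point of the two stages is cost control: the cheap regex pass keeps the expensive LLM call off the vast majority of files.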
Disclaimer: I work on this code
The linked page seems to be a normal known-vulnerability checker? From the docs:
""" The tool will: