The speed is really cool, but the fact that rules are written as Rust code means that new rules require a new binary. That might be fine, but I just wanted to point it out for anyone who's interested.
Running security checks at linter speed is a big deal for CI pipelines. What's the false positive rate in practice? That's usually the tradeoff with fast static analysis: speed vs. accuracy. Would love to know how you benchmarked it.
From a quick look it seems like it's "as fast as a linter" because it is a linter. The homepage says "Not just generic AST patterns", but I couldn't find any rule that did anything besides AST matching. I don't see anything in the code that would enable any kind of control or data flow analysis.
Some of the checks here seem very brittle. For example, this one[1].
In the context of security scanning (versus, say, linting), I think it's reasonable to expect the tool to be resilient to attempts at obfuscation (or to just badly written code that doesn't adhere to normal Python idioms around import paths).
[1]: https://github.com/PwnKit-Labs/foxguard/blob/a215faf52dcff56...
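To illustrate the kind of brittleness I mean, here's a minimal sketch (my own toy rule, not foxguard's actual code) of a pure AST matcher that flags `subprocess.call(..., shell=True)` by its literal spelling. A trivial import alias evades it, because nothing resolves names back to the module they came from:

```python
import ast

SOURCE_DIRECT = "import subprocess\nsubprocess.call('ls', shell=True)"
SOURCE_ALIASED = "import subprocess as sp\nsp.call('ls', shell=True)"

def flags_shell_call(source: str) -> bool:
    # Naive AST rule: flag calls spelled exactly
    # `subprocess.call(..., shell=True)`.
    for node in ast.walk(ast.parse(source)):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "call"
            and isinstance(node.func.value, ast.Name)
            and node.func.value.id == "subprocess"
            and any(
                kw.arg == "shell"
                and isinstance(kw.value, ast.Constant)
                and kw.value.value is True
                for kw in node.keywords
            )
        ):
            return True
    return False

print(flags_shell_call(SOURCE_DIRECT))   # True: exact spelling matches
print(flags_shell_call(SOURCE_ALIASED))  # False: the alias evades the pattern
```

Catching the aliased form requires at least tracking import bindings, which is exactly the kind of resolution step (short of full data-flow analysis) I couldn't find in the rules.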
Looks interesting; I'll give it a run on the codebase at $work. One thing that would be nice to see in the README is benchmarks on larger codebases: everything in the benchmark table is quite small. I'd also list line count rather than file count, since line count is a much better measure of the amount of code.
For context, the codebase I work on most often has 1200 JS/TS files, 685 Rust files, and a bunch more. LoC is 13k JS, 80k TS, and 155k Rust.
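For anyone who wants to produce the same per-language numbers for their own repo, here's a quick sketch of how I'd tally them (the extension set and the non-blank-line definition are my own choices; a dedicated tool like cloc or tokei is more robust):

```python
from collections import Counter
from pathlib import Path

# Extensions to tally; adjust for your repo (an assumption, not a standard).
EXTENSIONS = {".js", ".ts", ".rs"}

def count_loc(root: str) -> dict[str, tuple[int, int]]:
    """Return {extension: (file_count, non_blank_line_count)} under root."""
    files: Counter = Counter()
    lines: Counter = Counter()
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in EXTENSIONS:
            files[path.suffix] += 1
            text = path.read_text(errors="ignore")
            lines[path.suffix] += sum(1 for l in text.splitlines() if l.strip())
    return {ext: (files[ext], lines[ext]) for ext in EXTENSIONS}
```

Note this counts every non-blank line, including comments, so the numbers run a bit higher than what cloc reports as "code" lines.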
It is still quite fast on that codebase, fwiw. 10.7 ms.
Legitimately, I have had to stay away from certain linting tools because of how slow they are. I'll check this out.
cfn-lint is due for one of these rewrites; it's excruciating. I made some patches to experiment with it, and it could be a lot faster.