Any interest in working out a standard API for calling linters?

That’s great to hear!

So I’m thinking more of an API and not a log/result format, e.g. teyit.editors:call() would be registered as an entry point and editors would call that callable directly to get the linting results. My hope is the API would work well enough that it would just sit in between the UI of the linter and the logic code of the linter.

Asking linters to support SARIF is definitely an interesting idea, although I’m not sure how annoying it would be to run a linter, read a file, parse it, and then display the results compared to just calling a function from Python and getting the results directly.

2 Likes

And to make it explicit, that’s an open question to other editors whether parsing a SARIF file would work for them. :grinning_face_with_smiling_eyes: SARIF support is a generic thing that other systems support, so asking for it to help us out might be a motivator for linters (although by itself it doesn’t solve the discoverability or unified API problem if linters don’t all use the same CLI flag to control this).

2 Likes

Wouldn’t a standardized output format be sufficient? Is an actual callable API required? One reason I find a standardized API problematic is that the command-line interfaces of the few linters I’ve used seem to be pretty different. I assume current CLI usage would inform any API creation.

Forgive me if I’m putting the cart before the horse here. I realize that’s not quite what you’re asking in this thread.

1 Like

It might be sufficient, just not ideal for users. I’m just imagining trying to explain to a new user “to use your linter, find out how to call it, add the appropriate flag to generate a SARIF file, and then tell us where that file is” (and that’s even if they were provided with a placeholder to let us tell you where to write the SARIF file).

Required? Probably not. Would make the experience for users smoother? Yes.

So if this weren’t in an editor, where setting and/or changing CLI flags for a linter is a pain, I might agree. :wink: But since this is from an editor perspective, you typically want users to use configuration files, which means the linter itself will pick up the settings via its own default configuration file search/load mechanism.

1 Like

The alternative, low-tech solution is to have a way for linters to provide the regex they want us to use to parse their output. If that object were specified as an entry point then it would also be discoverable.
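
That could look something like this (the entry-point plumbing is omitted; the pattern below just covers the common `file:line:col: message` layout, and the names are made up):

```python
# Low-tech sketch: each linter publishes a regex and editors use it to
# parse the linter's plain-text output into structured results.
import re

# Pattern for the common "file:line:col: message" layout.
PYCODESTYLE_STYLE = re.compile(
    r"^(?P<path>[^:\n]+):(?P<line>\d+):(?P<col>\d+): (?P<message>.*)$",
    re.MULTILINE,
)

def parse_output(output, pattern=PYCODESTYLE_STYLE):
    """Return one dict per reported diagnostic in the output text."""
    return [m.groupdict() for m in pattern.finditer(output)]
```

A call like `parse_output("demo.py:3:80: E501 line too long")` would then hand the editor a list of dicts with `path`, `line`, `col`, and `message` keys.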

1 Like

Or provide a JSON output option? (Is that what you’re referring to, @smontanaro?)

1 Like

Yeah, JSON or output standardization.

I had in the back of my mind that somehow text editors and IDEs have managed to deal with the output of compilers for a long time. I’m an Emacs user and have probably encountered a dozen different C/C++ compilers on a number of different operating systems over the years. Parsing output? Yeah, you can call it that, but just barely.

I imagine the Emacs approach has changed a bit over the years, but in the old days (when I still grubbed around with Emacs LISP), the code that dealt with compiler output had an alist of (I think) (compiler regex) pairs. (Yup, it’s still there, though it’s grown significantly over the years.) Depending on the particular compiler, the appropriate regex was a matter of a one-time selection.

This method of operation was established long before JSON was a gleam in anybody’s eye, but if you had JSON available, I could easily see the compiler called with a --format=json flag to produce structured input to the caller, in much the same way that you can just toss “&_rss=1” on an eBay search URL to get RSS instead of HTML output. Having JSON as an output option might reduce the work necessary for such a project to settling on the structure of the output.

Brett, I assume none of this is news to you though. As you’re not someone I think of as one who casually tosses out suggestions for more work, maybe you can elaborate a bit more, maybe offer up a scenario or two where current practice is insufficient. Also, FWIW, as an old curmudgeon I will tell you I am almost certainly not going to give up Emacs as my programming editor/IDE (I know a few people who feel the same way about vim), so there are two examples of non-Python callers of flake8, pylint, etc.

1 Like

Nope. VS Code has the concept of tasks for this exact purpose of parsing compiler output.

I try not to. :wink:

Same for any editor really. Hence why I’m trying to find consensus around this idea instead of just saying, “VS Code is doing this for us and anyone who follows what we say will get support” as I feel like that’s throwing our weight around a bit too much.

So I would disagree with that characterization for two reasons. One, VS Code extensions are written in JavaScript, so you can add us to your list. :wink: Two, you’re still calling Python to run flake8, pylint, etc., so it isn’t that you have “non-Python callers”, you’re just saying “direct callers of the linters”. You could conceivably call a wrapper command/library like flake8-standardized or something, but you’re still calling Python regardless, probably from a non-Python app for a decent chunk of folks.

So running a linter requires four things:

  1. What linters does the user want to run?
  2. When does the user want to run the linter(s)?
  3. How do you run the linter(s)?
  4. How do you get results from the linter(s)?

For knowing what linter(s) a user wants to be using, typically that’s done via a configuration of some sort for the editor (e.g. for VS Code it’s your settings.json). The typical problem with that, though, is that it can trip up beginners who have simply been told to “use flake8”. It’s also a nicer experience if you can just detect any and all installed linters in a virtual environment and then automatically use them.

As for when to run linters, that doesn’t necessarily plug into when you want to run a compiler. Linters usually run either manually, on every save, or on every edit. Various editors like VS Code thus have specific triggers for running linters which do not overlap with when e.g. compilers run. That also means that linters are expected to feed through a specific system which does not necessarily align with the “just give us a regex and we will automatically handle surfacing errors” system.

Assuming you do know what linter(s) you want to run and when to run them, the next question is how do you do that? Every linter has its own approach to execution. E.g. some may lint all files if you don’t specify an argument, while some may require at least a “.” argument to start linting. So even with standardized output you need to know how to execute the linter(s) appropriately.

And then there’s the suggestion of standardizing the output. The reasons for that are obvious, else we wouldn’t be having this conversation. :wink:

So to answer my own questions:

  1. What linters does the user want to run? Use an entry point for automatic discovery.
  2. When does the user want to run the linter(s)? VS Code takes care of this for me.
  3. How do you run the linter(s)? Standardized API which the entry point specifies the location of.
  4. How do you get results from the linter(s)? Standardized API unifies the results.
1 Like

there are two examples of non-Python callers of flake8, pylint, etc.

So I would disagree with that characterization for two reasons. One, VS Code extensions are written in JavaScript, so you can add us to your list. :wink:

(attempting reply by email with quote trimming - I’m skeptical this will work properly)

Got it. I misunderstood what you originally wrote, thinking you meant that only Python programs would be calling these tools.

1 Like

For people as ignorant as I was: https://github.com/PyCQA.

There have been proposals that IDLE users be able to run either one particular code checker (which I vetoed) or generically any locally installed code analysis program over an editor’s contents. A GSOC student worked on the latter, but users would be required to register an import name and subprocess command line. We did not finish, and one of the issues was variation in line output formats.

Since IDLE is all Python, my personal preference would be for a lint API (LAPI), similar to the DBAPI to get an iterable of tuples such as (filename, line, column, comment), to use in a ‘for’ statement. For general callers, a generic format for output lines would be really nice. The pycodestyle ‘filename:line:col: comment’ layout looks decent. But I don’t care about the particulars. To not break current users, a standard command-line option could be required by programs with a different form, and accepted by programs that already use that form.

I would like there to be a PEP defining the standard so I would be allowed to use it in IDLE.
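
That DBAPI-style shape could be sketched minimally like this (the overlong-line check is a toy rule for illustration, not any real linter’s implementation):

```python
# Toy "LAPI"-style linter: a generator yielding (filename, line, column,
# comment) tuples that a caller can consume in a plain for statement.
def lint(filename, source):
    for lineno, text in enumerate(source.splitlines(), start=1):
        if len(text) > 79:  # toy check: overly long lines
            yield (filename, lineno, 80, "E501 line too long")

for filename, line, col, comment in lint("demo.py", "x = 1\n" + "y" * 100):
    print(f"{filename}:{line}:{col}: {comment}")  # → demo.py:2:80: E501 line too long
```

Note the caller gets both the iterable form for programmatic use and, trivially, the `filename:line:col: comment` layout for text output.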

1 Like

This sounds like the simplest solution, though it would probably be nicer if we extended it a bit. Since 3.8, AST nodes have both start and end positions, instead of just the start, and this is extremely useful when you are highlighting a particular expression in a long line.

I am not sure how many of the linters actually support it (as of right now) (since they still support older versions), but since this is going to be a long-term API, it might be nicer to consider a slightly extended format. Maybe (filename: PathLike, position: Tuple[int, int, Optional[int], Optional[int]], message: str, extra_properties: Dict[str, Any]).

I would like there to be a PEP defining the standard so I would be allowed to use it in IDLE.

+1! If anyone is interested in starting to work on this, please let me know. I’d be happy to contribute to the efforts.

That’s basically what I’m after as well (maybe a little more structured than a tuple, but same idea that you get an iterable back to loop over).

If we can get an API then doing a wrapper to normalize output wouldn’t be difficult. And using entry point would allow for automatic tool discovery.

My assumption is this would become a PEP to make sure linters in general were aware of it.

My current thinking of an API is:

from dataclasses import dataclass
from typing import Iterable, Optional, Tuple
import os

@dataclass
class Diagnostic:
    path: os.PathLike[str]
    # ((line, col), (end_line, end_col)), to support row/column ranges.
    position: Tuple[Tuple[int, Optional[int]], Tuple[Optional[int], Optional[int]]]
    severity: SeverityEnum  # Or a string with predefined values.
    message: str
    message_id: str
    tool: str

def entry_point(root: Optional[os.PathLike[str]] = None, path: Optional[os.PathLike[str]] = None, source: Optional[str] = None) -> Iterable[Diagnostic]: ...
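
A hypothetical linter-side implementation of that shape might look like the following; `Diagnostic` is redeclared here (with severity simplified to a plain string) so the sketch stands alone, and the TODO check is a toy rule, not a real linter:

```python
# Sketch of a linter implementing the proposed entry-point API.
from dataclasses import dataclass
from typing import Iterable, Optional, Tuple

@dataclass
class Diagnostic:
    path: str
    position: Tuple[Tuple[int, Optional[int]], Tuple[Optional[int], Optional[int]]]
    severity: str
    message: str
    message_id: str
    tool: str

def entry_point(root=None, path=None, source=None) -> Iterable[Diagnostic]:
    for lineno, line in enumerate((source or "").splitlines(), start=1):
        col = line.find("TODO")
        if col != -1:  # flag the TODO with a (start, end) column range
            yield Diagnostic(
                path=str(path or "<string>"),
                position=((lineno, col + 1), (lineno, col + len("TODO") + 1)),
                severity="warning",
                message="TODO comment found",
                message_id="X100",
                tool="toylint",
            )
```

An editor would then loop over `entry_point(source=...)` and feed each `Diagnostic` into its own UI for surfacing problems.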

At this point I’m waiting to see who speaks up with interest and then going to PyCQA to see what the interest is over there.

2 Likes

As a member of the PyCQA, but by no means a spokesperson for them, this is an exciting proposal that I think the PyCQA is in a good position to be involved with.

If you want to float the idea to the group then I would suggest getting in contact with the group directly: https://meta.pycqa.org/en/latest/getting-in-touch.html

2 Likes

That’s great to hear!

I fully expect to pull them in as this whole idea isn’t useful without PyCQA buy-in. :slight_smile:

My plan is to email PyCQA in the new year to give any other editor folks time to chime in before I fork this conversation.

2 Likes

I have two thoughts here really:

First: What have I created? The ‘A’ in PyCQA was intended as a joke. (cue Frankenstein-esque horror music)

Second, and more seriously: A programmatic API makes sense. If it was just a matter of formatting, then this would already be done (flake8 has pluggable formatters and flake8-json prints JSON and has a CodeClimate formatter as well). This, however, creates a few areas of worry:

  1. Thinking specifically about Flake8, which merges configuration options based on a “hierarchy” (CLI overrides the local project config file, which overrides the user config). How would the API handle the configuration there?

  2. Flake8, pycodestyle, and others don’t really have severity levels today (despite naming of the violation codes), without wanting to grow that, would there be some way of skipping that?

  3. Flake8, pylint, and others can use multiprocessing for parallelism to speed things up. Is that something that needs to be considered while thinking about this API?

  4. What are the output constraints besides the dataclass you outlined? Today, Flake8 goes to great lengths to report things in order top-to-bottom but could provide faster results with unordered output. Is that something the editors would be comfortable with?

  5. What happens when the LSP changes (as I’m certain it eventually will)? What will editors have to do to conform? What about linters?

  6. I’d rather wait on finalizing the PEP until we’ve had implementations (similar to how the IETF works) where linters go through a spike to ensure that everything’s doable. I’d rather we have the right APIs than a PEP that mostly works.

1 Like

PyPA member nods sympathetically… :slightly_smiling_face:

4 Likes

I have made the post to code-quality@.

It wouldn’t because that instantly becomes tool-specific. My assumption is users will set things in the appropriate config file and the tool’s normal setting resolution will be relied upon from those config files.

Could have a default value if not specified. (BTW, the most common complaint about flake8 that we get is the lack of that specification as we then have to come up with it and everyone inevitably disagrees :slight_smile: ).

I don’t think so. If it’s transparent then it shouldn’t be a concern.

I personally couldn’t care less about report order. We have to parse the output to use the appropriate VS Code hooks to surface the results. I think a key thing here is the API would be driving the linters as libraries, not tools with UIs; the editors are the UI in this case, so you can leave that stuff to us.

Don’t worry about LSP stuff. That’s a stretch goal for me. And my proposal above is already over-engineered for the future since none of the linters I know about specify start/stop positions and half of them only specify the line and no column.

But to directly answer your question, LSP has mechanisms for communicating protocol support between client and server. As long as this API provides the data necessary for linters to express what they need to, then whatever LSP server uses it can do the translation to support what LSP needs.

This is one potential bonus to having a SARIF JSON object as the return type since that itself is versioned and has been worked through.
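
For reference, a single diagnostic in roughly the SARIF shape looks like the following (the field names are recalled from version 2.1.0 of the spec; treat the details as illustrative rather than authoritative):

```python
# Minimal SARIF-style log with one result, expressed as a Python dict.
import json

sarif_log = {
    "version": "2.1.0",
    "runs": [
        {
            "tool": {"driver": {"name": "toylint"}},
            "results": [
                {
                    "ruleId": "E501",
                    "level": "warning",
                    "message": {"text": "line too long"},
                    "locations": [
                        {
                            "physicalLocation": {
                                "artifactLocation": {"uri": "demo.py"},
                                "region": {"startLine": 3, "startColumn": 80},
                            }
                        }
                    ],
                }
            ],
        }
    ],
}

print(json.dumps(sarif_log, indent=2))
```

The versioned, nested structure is more verbose than a flat tuple, but it is already specified and evolves independently of any one tool.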

Fine by me!

1 Like

That makes sense. I’ve thought about it a bunch, and I think the problem started with pycodestyle (then pep8) calling things “errors” and “warnings”. I’ve thought about allowing plugins to report that, but it doesn’t fit into the default format we’re using, and that’s part of the API we provide folks anyway, so we wouldn’t want to break it. I think if we’re going to report severity, though, it’s not as useful without confidence (e.g., how confident is the tool that this is actually a big problem; Bandit has these concepts already and they’re quite useful together). So if anything, I’d argue that flake8 should grow the ability to collect/report severity and confidence and then the plugins need to provide those values.

That’s exactly what I wanted to hear, honestly.

Perfect. I didn’t want us stepping on VS Code’s toes. That said, if I remember correctly, flake8’s parallelism is per-file, so this is moot anyway as the call happens per file.

Just wanted to make sure there wouldn’t be assumptions. That’s something I’d like to make sure ends up in the PEP when one is written.

To be clear, I just want the group to have the flexibility to iterate on the API if necessary. I’m not against writing the PEP up-front. I just want to make sure we’re not all finalizing a PEP before we’ve had an opportunity to kick the tires.

1 Like

Yeah, no way I would want to do that. I always assumed the process of this would be:

  1. See if enough editor maintainers were interested.
  2. See if enough linter maintainers were interested.
  3. Iterate on an API together.
  4. Write a draft PEP.
  5. Try out a PoC to make sure we didn’t overlook anything.
  6. Get the PEP accepted.
  7. Update the linters and editors to start using the PEP.

I think we are at steps 1 & 2. :grinning_face_with_smiling_eyes:

1 Like

Since this idea didn’t gain enough traction, I’m considering it rejected.

I have not given up on trying to lower the cost of tool integration, though! When my team and I have more to share I will post to this category (I have ideas, but I have to see how they pan out first).

1 Like