Meta: how we evaluate / reach consensus on / approve ideas as a community

This was touched on in the mailing list a while ago.

My takeaway was that, with enough effort, it is possible to devise a set of quantifiable dimensions on which a proposal can be evaluated.

I am still working on something similar for my own needs, but at my current stage of development there hasn't been much call for it yet.

However, I think the Python community could benefit from devising something along these lines.

STEP 1. Evaluate whether the idea is desirable.

  1. Necessity
    a) Use cases found by regexp searches in the stdlib
    b) Use cases found by regexp searches in popular external libraries
    c) Actual use cases and manually collected examples from which the proposal stems
  2. Poll of the Python community's interest
    a) poll results from Python core devs
    b) poll results from the general Python community
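The use-case counting in 1a/1b could start as something as simple as the script below. This is a hypothetical sketch, not an agreed tool: the directory path and the example pattern are placeholders, and a real survey would need a pattern tailored to the specific proposal.

```python
import re
from pathlib import Path

def count_use_cases(root, pattern):
    """Count regexp matches per .py file under a source tree.

    Hypothetical helper: scan e.g. a CPython checkout's Lib/ directory,
    or a popular third-party library, for the idiom a proposal would replace.
    """
    rx = re.compile(pattern)
    hits = {}
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip unreadable or non-UTF-8 files
        n = len(rx.findall(text))
        if n:
            hits[str(path)] = n
    return hits

# Placeholder example: how often is the dict.get(key, default) idiom used?
# hits = count_use_cases("cpython/Lib", r"\.get\([^,)]+,\s*[^)]+\)")
```

Numbers like these would give a proposal an evidence base that is cheap to reproduce and to argue about.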

STEP 2. Evaluate the best route to the desired result, i.e. compare the alternatives along several dimensions:

  1. Implementation efficiency
    a) Memory usage
    b) CPU usage
  2. Readability
  3. Brevity and elegance of the syntax used to achieve the desired result

Weight these dimensions by their importance to the long-term objectives of the Python community, and improve the process along the way.
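The weighting could be prototyped as a plain weighted sum. Again a sketch only: the dimension names, weights, and per-alternative scores below are made-up placeholders, not an agreed standard.

```python
# Hypothetical weights for the STEP 2 dimensions; in practice these would
# come from community consensus, not from this example.
WEIGHTS = {
    "memory_usage": 0.15,
    "cpu_usage": 0.15,
    "readability": 0.40,
    "syntax_elegance": 0.30,
}

def weighted_score(scores, weights=WEIGHTS):
    """Combine per-dimension scores (0..1, higher is better) into one number."""
    if set(scores) != set(weights):
        raise ValueError("scores must cover exactly the weighted dimensions")
    return sum(weights[dim] * scores[dim] for dim in weights)

# Comparing two hypothetical alternatives for the same proposal:
alt_a = {"memory_usage": 0.9, "cpu_usage": 0.8,
         "readability": 0.5, "syntax_elegance": 0.6}
alt_b = {"memory_usage": 0.6, "cpu_usage": 0.6,
         "readability": 0.9, "syntax_elegance": 0.8}
# With these weights, alt_b wins: readability dominates the total.
```

Even a crude model like this forces the trade-offs to be stated explicitly, which is most of the point.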

This probably wouldn’t change much of what core devs are already doing, but it could open a path for ideas that currently go unnoticed, get forgotten, or face an unfair shutdown. To name a few reasons:

  1. The headspace of the people making decisions is not friendly to ideas that don’t align with how they see things at the moment.
  2. The person proposing an idea gives up quickly because it is unclear how to proceed or how to be heard.
  3. Community members are unwilling to seriously pursue implementing their idea in CPython because it is unclear how the decision to merge it will be made. The risk/reward ratio is unknown: uncertainty on top of uncertainty.

If there were a good framework for this, I think the advantages would outweigh the disadvantages. It could significantly increase willingness to contribute, and if the process is robust enough to keep bad ideas out, the bad ones simply won’t get merged. Exploring bad ideas can be as important as working on good ones, yet currently there is little motivation and even less encouragement to do so.

If there were a clear signal from the Python team:

  1. Ok, interesting idea. We think it isn’t easily achievable / worth it, but if you want to work on it voluntarily, go ahead. If yes, then:
  2. Prove that it is useful and needed. Here are the criteria and tools for doing so. If it checks out, then:
  3. Go ahead and explore the implementation. You will be judged on these dimensions, and we will try to break your idea in these ways. If it checks out:
  4. Ok, good work. Write a PEP. The standard procedure follows.

So the value of someone’s invested time would far exceed the supervision it requires, and the risk of something going wrong would shrink as the process improves with experience.

Maybe an idea would still need to be shut down even after it checked every point perfectly. That, in turn, would uncover new dimensions to include, and so on…