mylist.shorten(n)?

What about a list.shorten(n) method? It would be equivalent to:

def shorten(self, n=1):
    # remove the last n items, one at a time
    for _ in range(n):
        self.pop()

or to mylist[:len(mylist) - n], but in place.
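
As a rough sketch, the idea could be prototyped today with a list subclass (ShortenableList is just a hypothetical name for this illustration, not an existing API):

class ShortenableList(list):
    def shorten(self, n=1):
        """Remove the last n items in place."""
        if n > 0:  # guard: n == 0 must not touch the list
            del self[-n:]

l = ShortenableList(range(10))
l.shorten(3)
print(l)  # [0, 1, 2, 3, 4, 5, 6]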

I think it wouldn’t be difficult to implement in CPython. I don’t know if it’s of much use.

Marco Sulla asked:

“What about a list.shorten(n)?”

What about it?

Why do you want it? When would you use it? What purpose does it meet?

If you want to delete items from a list, just delete items from the
list.

del mylist[-n:]
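
For example:

>>> mylist = [1, 2, 3, 4, 5]
>>> n = 2
>>> del mylist[-n:]
>>> mylist
[1, 2, 3]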

Marco: “Don’t know if it’s of much use.”

Why are you proposing things that you don’t have a use for?


When proposing any addition, whether to a builtin (as adding a method to list would be) or to the standard library, it absolutely has to be useful in order to be seriously considered. Otherwise, we would end up with an endless number of well-meaning but ultimately useless features that add to the maintenance cost in the form of developer time. Dev time is at a premium in any large project, but especially in one that’s primarily volunteer-driven.

Suggestions are always welcome, but I would recommend only proposing new features that would (potentially) have a practical use case. Generally speaking, the main goal of a proposal is to convince everyone else that the feature you’re proposing has a good enough use case to justify its addition into the language. If even you aren’t convinced of the feature’s usefulness, there’s not much to be gained from making the proposal in the first place.


Ok, ok, I suggested something stupid because I didn’t remember del mylist[-n:].

I suggested it because I needed it in a piece of code. I suppose no one got hurt or died over the question :smiley:

I mean, I can’t run a statistical survey every time I have an idea. Maybe people read the question and say, “Hey, I needed that too!” If no one says anything, amen.

I suppose next time I’ll simply not say “I’m not sure it’s useful” and stop there :smiley:

Anyway, I can confirm del mylist[n:m] is very fast, so there’s no need for other APIs :slight_smile:

>>> from timeit import timeit
>>> timeit("l=list(range(10000)); del l[1:]", number=100000)
13.804484945954755
>>> timeit("l=list(range(10000))", number=100000)
13.827276774914935

Mmmmmhhhh… no, I have to correct myself. I’ve done a more rigorous test with pyperf:

$ sudo python3.9 -m pyperf system tune --affinity 0
[blablabla]
marco@buzz:~$
marco@buzz:~$ python3.9 -m pyperf timeit "l=list(range(100000)); del l[1:]" --rigorous --affinity 0 
.........................................
Mean +- std dev: 4.28 ms +- 0.27 ms
marco@buzz:~$
marco@buzz:~$ python3.9 -m pyperf timeit "l=list(range(100000)); a = l[1:]" --rigorous --affinity 0 
.........................................
Mean +- std dev: 3.43 ms +- 0.08 ms
marco@buzz:~$
marco@buzz:~$ python3.9 -m pyperf timeit "l=list(range(100000))" --rigorous --affinity 0 
.........................................
Mean +- std dev: 2.24 ms +- 0.06 ms

It seems that creating a new list by slicing the old one is faster than using del.

Maybe del also creates the sublist and then deletes the indicated slice afterwards?

Ok, I must correct myself again :smiley: I’ve done a few more tests and it seems, as is indeed logical, that deleting is faster than slicing if the size to delete is less than half of the list. Indeed:

marco@buzz:~$ python3.9 -m pyperf timeit "l=list(range(100000)); del l[-50000:]" --rigorous --affinity 0 
.........................................
Mean +- std dev: 2.74 ms +- 0.10 ms
marco@buzz:~$
marco@buzz:~$ python3.9 -m pyperf timeit "l=list(range(100000)); a = l[:-50000]" --rigorous --affinity 0 
.........................................
Mean +- std dev: 2.67 ms +- 0.06 ms

Around the halfway point, the times become identical.

I think no one needs or prefers to remove more than half of a list in place rather than getting the shorter sublist by slicing.

End of discussion :smiley:

There was certainly no harm done. I just wanted to let you know that you should consider the practical use case of the feature, rather than proposing it on the basis of “well, this might be a useful shortcut to me”. CPython has a much higher barrier of entry for new features, since we have to maintain them for a significant period of time after they’re added. Also, users have to invest time learning the new feature.

You don’t necessarily have to do a statistical analysis, but you should investigate other code (existing production code, third-party libraries, or the standard library) and consider how much that code could benefit from your proposed addition.

Think: does my suggestion improve performance, efficiency, security, or readability by a significant margin? Does it conform to the design philosophy of Python? If so, you can proceed with making a proposal and argue the benefit that implementing your proposal would have.

Actually, if anything, I’m glad you specifically said that. Omitting any argument for a practical use case implies that the suggestion might not have one. It’s better to be upfront and honest about it, so that we can help people understand the proposal process. If you hadn’t explicitly said that, someone very likely would have asked “What practical use case does this have?” and we would have come to the exact same conclusion.

Also, don’t feel bad about it or think that it was a “stupid” suggestion; many people do the exact same thing. The process of proposing a feature addition to a widely used language is rather involved, and the bar is high. Most people have far more rejected ideas than approved ones, and that’s perfectly okay. I certainly do.

In fact, just earlier this year, I did almost exactly the same thing. I proposed changing the current working directory (.) to be the default for os.walk() as a quality-of-life improvement, since it would be useful in my own code. But it was rightfully (and politely) rejected by @storchaka and @taleinat. I think what @storchaka said could apply equally well to both of our proposals:

So I think the value of this feature is tiny. It is not worth the effort for implementing, documenting, reviewing and maintaining it. On other hand, it increases the burden for learning Python.

(Message 346970 - Python tracker)


Just be careful with n <= 0: for n == 0, del mylist[-n:] will empty the list!
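
A quick demonstration of the pitfall:

>>> mylist = [1, 2, 3, 4, 5]
>>> n = 0
>>> del mylist[-n:]   # -0 == 0, so this is del mylist[0:]
>>> mylist
[]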

@taleinat Buahahahahha! Yes, burn them all!

A couple notes here:

  1. Better not to include the building of the list in the timing. In this case, building the list takes MUCH longer than deleting from it (see the timing sketch after these notes):

    In [3]: %timeit list(range(100000))
    1.51 ms ± 17.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

    In [4]: long_list = list(range(100_000))

    In [5]: %timeit del long_list[-50_000:]
    53.8 ns ± 0.622 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

When I test only the deleting, I get essentially the same timing whether I delete half, most, or a tiny bit of the list:

In [8]: long_list = list(range(100_000))                                        

In [9]: %timeit del long_list[1:]                                               
52.2 ns ± 0.624 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

In [10]: long_list = list(range(100_000))                                       

In [11]: %timeit del long_list[-1:]                                             
51.6 ns ± 0.36 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

In [12]: long_list = list(range(100_000))                                       

In [13]: %timeit del long_list[-50_000:]                                        
50.8 ns ± 0.303 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
  2. Lists automatically resize themselves. So I suspect that when you delete a little, it just has to change the length of the list, but when you delete a lot, it needs to reallocate the internal buffer at a smaller size.
  • I may be wrong; I’ve never looked at the code for shrinking lists, but it does keep reallocating as a list grows.
  • And the timings above indicate that it makes almost no difference anyway.
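
One caveat on the %timeit runs above: they reuse the same long_list, which is emptied within the first few iterations, so the vast majority of the timed iterations delete an empty slice — that’s why all three results land around 50 ns. A sketch of how to time only the deletion while still starting from a full list on every run, using timeit with a setup string and number=1 (exact numbers will of course vary by machine):

from timeit import repeat

# Rebuild the list in setup so every run deletes from a fresh, full list;
# number=1 runs the statement exactly once per rebuilt list.
for stmt in ("del l[-1:]", "del l[-50_000:]", "del l[1:]"):
    times = repeat(stmt, setup="l = list(range(100_000))",
                   number=1, repeat=1_000)
    print(f"{stmt:<18} min {min(times) * 1e6:9.2f} µs")

On the resizing question in note 2: if I remember CPython’s list_resize correctly, a shrink only triggers a realloc when the new size drops below half the allocated capacity, so small deletions really do just adjust the length field.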