Move to itertools

In more_itertools, one is defined:
Return the first item from iterable, which is expected to contain only that item. Raise an exception if iterable is empty or has more than one item.

This tool is not just generally useful; it also comes in handy when fixing type errors. For example, some interfaces in NumPy are overloaded like this:

@overload
def atleast_1d(x: ArrayLike, /) -> Array: ...

@overload
def atleast_1d(x: ArrayLike, y: ArrayLike, /, *arys: ArrayLike) -> list[Array]: ...

def atleast_1d(*arys: ArrayLike) -> Array | list[Array]: ...

So when someone writes atleast_1d(*l), the result is typed as Array | list[Array] even if they know that l has length one. If one were part of the standard library, they could write atleast_1d(one(l)) and get Array as desired.


They can do that right now, just by using more_itertools[1] or by defining one themselves. So yes, this would be convenient, but probably no more so than many of the other functions in more_itertools.
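Defining it yourself is indeed only a few lines. A minimal sketch that mirrors the more_itertools semantics (the error messages here are illustrative, not the library's exact wording):

```python
def one(iterable):
    """Return the only item of *iterable*; raise ValueError otherwise."""
    it = iter(iterable)
    try:
        value = next(it)
    except StopIteration:
        # Empty iterable: no first item at all.
        raise ValueError("too few items in iterable (expected 1)") from None
    try:
        next(it)
    except StopIteration:
        # Exactly one item, as expected.
        return value
    # A second item came out: the caller's assumption was wrong.
    raise ValueError("too many items in iterable (expected 1)")
```

Note that this consumes up to two items from an iterator, which matches the more_itertools behaviour of validating the "exactly one" assumption rather than silently taking the first item.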

  1. It’s not as if the dependency should be an issue; they are already depending on numpy. ↩︎

Isn’t it fairly trivial to implement it with unpacking? Not sure I’d use a one function at all, I’d just use [value] = iterable instead.
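For the record, the unpacking form already enforces the exactly-one check, with reasonably clear error messages (standard CPython behaviour):

```python
[value] = [42]                 # exactly one element: binds value = 42
print(value)                   # 42

try:
    [value] = []               # empty iterable
except ValueError as exc:
    print(exc)                 # not enough values to unpack (expected 1, got 0)

try:
    [value] = [1, 2]           # more than one element
except ValueError as exc:
    print(exc)                 # too many values to unpack (expected 1)
```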


It took me some thought to understand what you were doing there 🙂

So logical, yet somewhat unexpected.


I often encounter code patterns like these:

_, _, a, _, b = iterable
_, a = iterable

Using a, = iterable (or [a] = iterable) is just one of the many use cases.


I use unpacking for this quite often but I worry about its meaning being cryptic if I use it in demonstrations that I am showing to others. It can also be awkward just because you need to break something out into a statement rather than being able to do it inline as an expression:

y = func(one(x))

# Or

[z] = x
y = func(z)

Having a function also means that you can use it with other functional things like map(one, ...) etc.
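For instance, with a small one in hand (a stand-in definition here, built on the unpacking idiom itself), the functional style composes naturally:

```python
def one(iterable):
    # Stand-in one() for illustration: unpacking does the exactly-one check.
    [value] = iterable
    return value

rows = [[3], [7], [11]]          # each row known to hold exactly one item
print(list(map(one, rows)))      # [3, 7, 11]
```

The statement form [z] = x cannot be dropped into the map call like this, which is the inline-expression advantage being described.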

This is one of those cases where I would use it somewhat regularly if it were a builtin and a well-known Python idiom that others could be expected to recognise, but at the other extreme I certainly would not depend on a library for it. In between having a builtin and needing an external library, there is the possibility of writing a function yourself or of it being in the stdlib. Often I want this in an interactive context, and in that situation I don’t really want to make a function, and I am also less likely to use something that I would need to import from anywhere, even if it were in the stdlib.

This sort of discussion comes up a lot e.g. previous discussion about adding first:

I like this comment from there about adding functions that are easy enough to write yourself:

functional language people don’t hesitate to “build in” any number of
functions easily implemented in terms of other ones. This started
already with LISP, which very quickly, e.g., added (CADR x) for (CAR
(CDR x)), (CADDR x) for (CAR (CDR (CDR x))) and so on - then went on
to also add additional spellings (FIRST, SECOND, NTH, etc). The point
in that context is to have common spelling and endcase behavior for
things - no matter how simple - that are reinvented every day


I feel similarly about stuff like this, but I’m leery of cluttering up builtins with a lot of fairly niche functions.

Idly thinking about this conflict, I wonder if it would be helpful to have namespaced “builtins”. Basically a stdlib module that’s always imported[1], but not in your global namespace if you don’t need it. Or perhaps lazily imported, if that’s possible.

  1. and not necessarily implemented in Python ↩︎


Chris, I was not aware that this works.
I am familiar with unpacking to variables:

a, b, c = [1, 2, 3]

But I wasn’t aware that there is a “list-like syntax” for this:

[a, b, c] = [1, 2, 3]

Is there any difference between these two lines of code?
And are there situations where the first variant will not work and the second variant is needed?

Do you have a pointer to the python docs where this second syntax is described?
I have tried to find it, but have failed…

Nope, no difference. You can think of the first one as a tuple-like syntax and the second as list-like syntax. The only real difference is with the one-element unpack, where you have a trailing comma in the tuple variant but can omit it in the list variant, which is why I prefer the brackets in that situation.
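A side-by-side sketch of the equivalent forms:

```python
a, b, c = [1, 2, 3]      # tuple-like targets
[a, b, c] = [1, 2, 3]    # list-like targets: identical semantics

(x,) = [42]              # one-element unpack: tuple form needs the comma
x, = [42]                # same, without parentheses
[x] = [42]               # list form: no trailing comma needed
print(a, b, c, x)        # 1 2 3 42
```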


See here, in particular:

target          ::=  identifier
                     | "(" [target_list] ")"
                     | "[" [target_list] "]"
Assignment of an object to a target list, optionally enclosed in parentheses or square brackets, is
recursively defined as follows.

People should really learn assignment unpacking as it is also used, just as frequently, in for statements.

>>> for a, in ([1], [1,2]): print(a)
1
Traceback (most recent call last):
  File "<pyshell#9>", line 1, in <module>
    for a, in ([1], [1,2]): print(a)
ValueError: too many values to unpack (expected 1)

[a] and (a,) also work.


Even setting aside the a, = it syntax, what does one offer in comparison to next (or next(iter(it)))?

Both the code and the potential errors are easier to read.

Only if you already know what they mean. And if you already know what it means, [a] = it is also easy to read. It also has the extremely significant advantage of being generalizable to other uses.

next doesn’t error out if there is more than one element in the iterator, missing a big part of the motivation. This isn’t first, it’s one.
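The difference is easy to demonstrate (using the unpacking form here as a stand-in for one, since both reject extra items):

```python
data = [1, 2]

print(next(iter(data)))    # 1 -- the extra item passes silently

try:
    [only] = data          # one() would likewise refuse the second item
except ValueError as exc:
    print(exc)             # too many values to unpack (expected 1)
```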


Oh, I get it. It’s an equivalent of rv, = a, not rv, *_ = a.
