A way to have a parameter’s default value follow the default of another function: the Default singleton

Often, I use the defaults of another function in my new function or class. For example:

def mysorted(it, key=None, reverse=False):
    # some code
    return sorted(it, key=key, reverse=reverse)

Most of the time I simply want to adhere to the signature of the original
function. If sorted changed the default of reverse from False to
True, I would no longer match the standard behaviour (OK, this will never happen, but
it’s only an example).

Can Python introduce a Default singleton object?

def mysorted(it, key=Default, reverse=Default):
    # etc
    return sorted(it, key=key, reverse=reverse)

If an argument is Default, the called function uses its own default value
for that parameter. If the parameter has no default, a ValueError should be
raised.


A “value that isn’t a value” has a lot of problems. For instance, can you put this Default into a dictionary, as a key or a value? What about a custom class mapping, where putting something into the dictionary involves calling __setitem__? If the answers to those questions are different, you have created a nightmare where things simply don’t work the same way for core data types; if you say “Yes” to both, then you have to explain when Default behaves as a value and when it behaves as a non-value (since, by your rules, calling __setitem__ with Default as one of its parameters should raise); and if you say “No”, then you have a major problem with this not actually being a usable value (for instance, you wouldn’t be able to use it in **kwargs).

For what you’re doing here, I recommend instead using *a, **kw so you don’t have to depend on carrying around a useless argument (you have no way to factor reverse into your wrapper here anyway) and magically copying it in.

@Rosuav I don’t think that “a value that is not a value” has a lot of problems. Python already has None, NaN and Ellipsis.

About *args and **kwargs, I agree, this is what I usually do, especially in subclass methods that override a superclass method. But you can’t always do it. For example, you could have a function that calls two functions:

def myfun(arg_f1_1, arg_f1_2, arg_f2_1, arg_f2_2):
    f1(arg_f1_1, arg_f1_2)
    f2(arg_f2_1, arg_f2_2)

@vovavili You always have to redeclare the default values of sorted.

@Rosuav:

Yes.

It will be very rare that you want to put Default in a custom dict.

Rare is irrelevant. Is it possible, or not?

All of those are very definitely values. If None happens to be the default value for an argument, then it is indistinguishable from not passing that argument, but that’s the same as any other (there are plenty of places where zero behaves like a non-value, for instance), and in all other ways, it is a perfectly valid value (can be used as a dict key or value, can be stored in any type of variable or data structure, etc). Even more so, Ellipsis is a perfectly normal value that happens to be interpreted specially. “NaN” is a bit different, as it’s not a specific value but an entire category of floats (calling “NaN” a value is like calling “whitespace” a value - lots of distinct strings can be nothing but whitespace), with the special feature that it compares unequal to itself. All of these behave perfectly sanely in dictionaries, custom classes, etc:

>>> {None: None}
{None: None}
>>> n1, n2 = float("nan"), float("nan")
>>> d = {n1: "n1", n2: "n2"}
>>> d[n1]
'n1'
>>> d[n2]
'n2'
>>> class MyDict:
...     def __init__(self): self.store = {}
...     def __setitem__(self, key, value):
...             print("Setting", key, "to", value)
...             self.store[key] = value
...     def __getitem__(self, key):
...             print("Retrieving", key)
...             return self.store[key]
... 
>>> d = MyDict()
>>> d[n1] = "n1"
Setting nan to n1
>>> d[n2] = "n2"
Setting nan to n2
>>> d.store
{nan: 'n1', nan: 'n2'}
>>> d[n1]
Retrieving nan
'n1'
>>> d[n2]
Retrieving nan
'n2'

In every way, these ARE values. They are not like the SQL “NULL” value which is sometimes not a value, nor are they like null pointers in C, but they are perfectly reasonable values in every context.

This Default would not behave that way. It would have to have some magic that changes that behaviour when it is passed as a function parameter.

@Rosuav

Well, no. To be clearer, I think you can do:

d = {"a": Default}
MyDict(d)

Actually I’m not very sure, to be honest, since __init__ could raise ValueError in this case?

Anyway, this will raise a ValueError for sure:

d = {}
d["a"] = Default  # ValueError
md = MyDict()
md["a"] = Default  # ValueError

Anyway, these are limitations, but I don’t think they are a major problem for its usability.

That was precisely the question I asked about the behaviour in dict and in custom mappings. So what you’ve decided here is that it’s “No” to both. By your description, it is possible to have a dict containing Default, but not to put it in afterwards; that’s already pretty weird. What about if you make that dict using the constructor instead - is this legal?

d = dict(a=Default)

Presumably not. What about this?

def f(**kw):
    print(kw["a"])

f(a=Default) # ValueError?
d = {"a": Default}
f(**d) # ValueError?

def g(a=Default):
    f(a=a)
g() # ValueError?

@functools.wraps(g)
def h(*a, **kw):
    return g(*a, **kw)
h() # ValueError?

As you can see, non-values are a problem, as they will invariably lead to contradictions.

@Rosuav Honestly… I don’t see the contradiction, since there’s no default value, so of course all the examples you listed will raise ValueError.

Okay, if that’s your decision, lemme rework that a bit.

def f(a=1):
    print(a)

f(a=Default) # ValueError?
d = {"a": Default}
f(**d) # ValueError?

def g(a=Default):
    f(a=a)
g() # ValueError?

@functools.wraps(g)
def h(*a, **kw):
    return g(*a, **kw)
h() # ValueError?

All I’ve changed is the definition of f. Now which ones raise? Presumably the first one doesn’t. Does the second? Does the third? The fourth? What if I add another wrapper function:

def wrapper(*a, **kw):
    return h(*a, **kw)

Is this still legal? Is wrapper legal with the previous definition of f? At what point does ValueError get raised? wrapper can’t scan all the way through the call chain to figure out whether f has a default or not.
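
To make that concrete, here is a self-contained sketch (with a plain object() standing in for the proposed Default, which doesn’t exist today): introspection only sees one level of the call chain.

import inspect

DEFAULT = object()  # stand-in sentinel; the proposed Default is not a real builtin

def f(a=1):
    print(a)

def g(a=DEFAULT):
    f(a=a)

def wrapper(*a, **kw):
    return g(*a, **kw)

# The signature of wrapper says nothing about g, let alone about whether
# f defines a default for a; discovering that would mean walking the
# whole call chain at call time.
print(inspect.signature(wrapper))  # (*a, **kw)
print(inspect.signature(f))        # (a=1)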

If I understand the suggestion correctly, you can achieve the goal for yourself with the following:

Default = type('Default', (), dict(__repr__=lambda self: 'Default'))()

def mysorted(seq, *, key=Default, reverse=Default):
    sorted_kwargs = {}
    if key is not Default:
        sorted_kwargs['key'] = key
    if reverse is not Default:
        sorted_kwargs['reverse'] = reverse
    ...
    return sorted(seq, **sorted_kwargs)
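
If the repeated if-blocks feel like too much boilerplate, the filtering can be factored into a small helper. This is only a sketch and the helper name is made up:

_DEFAULT = object()  # any unique sentinel object works

def call_skipping_defaults(func, *args, **kwargs):
    # Drop every keyword argument whose value is the sentinel, so the
    # callee applies its own defaults; the callee never sees the sentinel.
    return func(*args, **{k: v for k, v in kwargs.items() if v is not _DEFAULT})

def mysorted(seq, *, key=_DEFAULT, reverse=_DEFAULT):
    return call_skipping_defaults(sorted, seq, key=key, reverse=reverse)

print(mysorted([3, 1, 2], reverse=True))  # [3, 2, 1]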

The Default class itself is not useful enough to include in builtins (you could just as easily use ... or Default = object()), there’s no realistic way to add special handling of a Default object into the language itself, and this is quite a niche use case anyway. I don’t think there’s anything actionable here.


@Rosuav: I don’t know why they should raise ValueError. f() defines a default value for its parameter, so why should they raise an error?

PS: actually I’m not sure that

d = {"a": Default}

will not raise a ValueError.

Even easier:

def mysorted(seq, **kw):
    ...
    return sorted(seq, **kw)

This covers the vast majority of these sorts of cases, with the added bonus that any new kwargs are also passed along, without any effort from you. Its biggest downside is that the signature for mysorted doesn’t say what arguments it accepts (hence my earlier comment about potential improvements to signature algebra) but you’d still have part of that limitation with =Default anyway (you get the names of the arguments only).
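
If the opaque signature matters, a partial mitigation (just a sketch; it relies on CPython exposing a text signature for sorted) is functools.wraps: it copies sorted’s metadata onto the wrapper and sets __wrapped__, which inspect.signature follows, so introspection tools report sorted’s real parameters instead of (**kw). The trade-off is that the wrapper also takes over sorted’s name and docstring, and the reported parameters are sorted’s, not any extras the wrapper adds.

import functools
import inspect

@functools.wraps(sorted)  # copies __name__, __doc__, etc. and sets __wrapped__
def mysorted(seq, **kw):
    ...
    return sorted(seq, **kw)

print(inspect.signature(mysorted))  # (iterable, /, *, key=None, reverse=False)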


Because they do not know whether it defines a default or not. How can they?

Well, but as I said, this doesn’t work if you want to call two or more functions with their default values.

Recommendation: pick an example that actually shows your proposal in a good light, then, rather than mysorted(), which is far better handled with existing tools 🙂


They don’t have to. Example:

def f(a=1):  return

def g(a=Default):
    return f(a=a)

g()  # does not raise

def h(a=Default):
    return g(a=a)

h()  # does not raise

def j():
    return h()

j()  # does not raise

and I don’t know why they should raise.

Yes, you can do it, but with a lot of boilerplate code.

from other_module import f # You have no idea what this is

def g(a=Default):
    return f(a=a)

def h(**kw):
    return f(**kw)

class Demo:
    def __init__(self, a=Default):
        f(a=a)

g(); h(); Demo()

Which of these will raise, and at exactly what point? Does it depend on how f is defined?

Also: what is the signature of Demo.__new__, and is it relevant to your answer?