I fairly often run into situations where I would like to write code of the form
with x = y:
    do_stuff()
which should have the behaviour that inside the with block x has the given value, and afterwards x is reset (usually deleted).
For the most part it would be a convenience tool to keep my namespace clean, and to make it easier to develop and debug functions. But there are also situations like
with Path.__bool__ = lambda self: self != Path('/'):
    while my_path:
        do_stuff()
where it would allow you to write very compactly something that currently needs a context manager to be robust.
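For comparison, here is a rough sketch of what the robust context-manager version looks like today (patched_bool is just an illustrative name, and the loop body stands in for do_stuff()):

from contextlib import contextmanager
from pathlib import Path

@contextmanager
def patched_bool():
    # Temporarily make a Path falsy once it reaches the filesystem root.
    Path.__bool__ = lambda self: self != Path('/')
    try:
        yield
    finally:
        del Path.__bool__  # restore the default (always-truthy) behaviour

my_path = Path('/usr/local/bin')
with patched_bool():
    while my_path:
        my_path = my_path.parent  # stand-in for do_stuff()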
To me it feels intuitive that with should work this way, and I haven’t been able to find any particularly good alternatives to this (missing) feature.
Advice on how I could achieve this instead is also appreciated.
@JamesParrott:
The first, and most important problem with with y as x: is that it doesn’t work:
with 'a' as x:
    print(x)
# AttributeError: __enter__
x isn’t reset afterwards either:
with open(mock_data_path / 'my_file.tht') as f:
    pass

print(f)
# <_io.TextIOWrapper ...>

for i in f:
    pass
# ValueError: I/O operation on closed file.
A second (lesser) problem is that it’s unintuitive to put the name after the expression; it reverses the usual flow of Python. With x = y it’s immediately clear you’re assigning the value of y to x. With y as x I have to pause and digest what’s happening before I recognise that the value of y is assigned to x. with open(file) as f gives enough context for it to be clear immediately; with y as x in general would not.
@pf_moore:
If you’d point out a way in which this feature would harm you, that’d be another matter. And it’s fair that there’d need to be support from more than one person. But just because it wouldn’t help you think about code, that doesn’t prove there isn’t a group of people (besides myself) who would benefit from this feature.
Now, to provide some actual evidence that other people would also like this:
Finally, some reasons ‘just use a function’ isn’t a perfect substitute:
Let’s say I’m working in a notebook with 20 defined variables (some of which are expensive to compute), and I’m designing a function:
def my_new_func(price_ds, file_name='tmp.txt', convert_currencies=False, x=x, z=z):
    ...
    ...
    ...
    tmp_ds = ...
    y = ...
    ...
    return ans
but something goes wrong inside my function.
With an ideal with statement, I could make very small changes
with(price_ds=small_price_ds, file_name='tmp.txt', convert_currencies=False, x=x, z=z):
    ...
    ...
    ...
    tmp_ds = ...
    y = ...
    # ...
    # return ans
and then I could examine the objects created within this scope in the next cell by just calling tmp_ds, y. And I wouldn’t have overwritten price_ds or file_name or convert_currencies.
Now I’ll grant you can achieve all this in other, more cumbersome manners.
You could write
def my_new_func(price_ds, file_name='tmp.txt', convert_currencies=False, x=x, z=z):
    ...
    ...
    ...
    tmp_ds = ...
    y = ...
    return tmp_ds, y, ... ...
    ...
    return ans

tmp_ds, y, ... ... = my_new_func(price_ds=small_price_ds, file_name='tmp.txt', convert_currencies=False, x=x, z=z)
But now the flow goes down before it goes up and then goes down again, and there’s significant faff in writing those return statements and then unpacking them. Every time you change a bit of the code in the function you may also have to change two other lines of code in other places.
You could instead print the intermediate objects from inside the function, but datasets don’t print nicely, and then you still can’t experiment with the intermediate objects in other cells.
The final option is to rename the function arguments that collide with names of objects you don’t want to overwrite, assign them global-scope values, and un-indent the function body. But it’s quite easy to accidentally overwrite something I later discover I wanted to keep; the namespace keeps expanding; if I see I have a ds or an x I don’t immediately know where it was defined or what it means; and auto-complete becomes less useful.
I usually go this route. It’s not the end of the world. Still better than Java.
Python is such a good language precisely because it has so many features for the sake of convenience. To me this would be a reason to make it even more convenient, not to block convenient features that do no harm.
Syntax changes are a serious matter; justifying one is hard and requires consideration from many more angles than were touched on in this thread.
For the functionality you desire there are many ways to achieve it. Maybe they will not be as perfect as you would like (or maybe they will be), but with enough effort you can find a good solution for this.
If you post this as a “Help” topic asking “How can I achieve this?”, I am sure people will be happy to help you find the approach most suitable for your needs.
For a solid idea proposal (especially a syntax-change proposal) you skipped many steps. That is not necessarily a big deal; sometimes a proposal gets serious consideration just by chance.
OK, I’m not sure what the point of using a string literal in a with statement is, but you’ve really thought about this.
Regarding “not cleaning up x afterwards”. Yes. Currently there’s no expectation that an indented with block creates a new scope. Allowances were made for list comprehensions, but in general introducing a new scope, where previously there was none, requires extraordinary supporting evidence, I feel. with statements are understood to be syntactic sugar for __enter__ and __exit__ methods in a try/finally block. A very strong case is needed to change the way Python users can currently reason about their code.
Have you looked at rolling your own wrapper that calls del on the new name after the with block?
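Something along these lines might work (temporarily_bound is a made-up name, and it only handles a module- or notebook-level namespace, since function locals can’t reliably be injected this way):

from contextlib import contextmanager

@contextmanager
def temporarily_bound(namespace, **bindings):
    # Add the given names to `namespace` (e.g. globals()) and remove them
    # again, or restore the old values, when the block ends.
    saved = {name: namespace[name] for name in bindings if name in namespace}
    namespace.update(bindings)
    try:
        yield
    finally:
        for name in bindings:
            if name in saved:
                namespace[name] = saved[name]
            else:
                del namespace[name]

y = 42
with temporarily_bound(globals(), x=y):
    print(x)   # 42; x exists only inside the block
# x is gone again here; referring to it raises NameError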
I’m sorry if you got the impression that was what I meant. It wasn’t. In order to successfully argue for a new feature in Python, you need to demonstrate that the benefits outweigh the costs. The costs in this case include:
Someone has to implement the feature. I’m not clear if you’re offering to do so, but if you’re not, that’s a cost.
The feature needs to be maintained. That’s an ongoing cost for the core dev team.
Documentation, 3rd party books and training courses, etc. need to be updated to cover the new feature.
Tools like linters, code formatters, syntax highlighters etc., would need updating.
The important point to remember is that by default, Python will stay the same. If you want this change to be made, you need to persuade people to support the proposal (ultimately the core developers, but it can be useful to get community support to help make the case to the core devs). Personally, I don’t see the value of this idea - but you don’t have to care about my view and that’s fine. But it’s extremely unlikely that this feature will get implemented if you don’t come up with a more persuasive pitch for it.
To be blunt, that’s not a lot of interest in the feature, given that literally millions of people use Python. And the responses (use shorter functions, don’t re-use variable names) seem sufficient to me. Maybe they don’t to you - but at some point you need to explain why. Which leads to:
First of all, notebooks are an atypical environment, that encourage a style of coding (a long stream of top-level statements) that isn’t the normal way Python is expected to be used. That’s not to say that features which make notebooks easier to use aren’t welcome, but they need to have value in a general development context, otherwise they are likely to be too “niche” to succeed.
That sounds like something you could use a debugger for. The debugger in VS Code lets you break into a function’s execution and examine variable values, for example. The stdlib pdb debugger does the same. Maybe the notebook environment doesn’t offer a sufficiently powerful debugger, but that’s not a compelling reason for a language change.
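For example, a minimal sketch using the built-in breakpoint() hook (the function body is just a stand-in for the real one):

def my_new_func(price_ds, file_name='tmp.txt'):
    tmp_ds = [p * 2 for p in price_ds]   # stand-in intermediate step
    y = sum(tmp_ds) / len(tmp_ds)        # stand-in result
    breakpoint()  # drops into pdb here; at the (Pdb) prompt, `p tmp_ds` and `p y` inspect them
    return y

my_new_func([1.0, 2.5, 3.0])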
Sure. It also has a good debugger available in the stdlib. Just because you aren’t aware of its capabilities, or use an environment that doesn’t support it[1] doesn’t mean Python lacks convenience features for debugging.
Again, no-one is blocking anything here. I’m trying to point out to you what is needed to get the “convenient feature” you propose added to the language. It’s up to you whether you feel that’s too much effort for you.
[1] I don’t know if Jupyter supports the debugger - maybe it does, and you simply weren’t aware of that fact?
All three of your linked examples should, in my opinion, be addressed with a function call. This is a misuse of context managers. Context managers create “context”—not new scopes.
Your code will be a lot easier to read if you create functions whenever you want to do something like this. Not only will the function call create a scope for you, but it:
has a possibly-informative name,
is reusable,
has clear outputs, and
being shorter, is easier to read than one long function.
I think the subscopes you want are at odds with the design principle of not creating monolithic functions.
there’s significant faff in writing those return statements and then unpacking them.
Use a dataclass. Don’t unpack long tuples like this.
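Something like this, say (the class and field names are just illustrative, mirroring the earlier example):

from dataclasses import dataclass

@dataclass
class FuncResult:
    tmp_ds: list   # named fields instead of a long tuple
    y: float

def my_new_func(price_ds):
    tmp_ds = [p * 2 for p in price_ds]
    y = sum(tmp_ds)
    return FuncResult(tmp_ds=tmp_ds, y=y)

res = my_new_func([1.0, 2.5])
print(res.y)   # access results by name instead of keeping a tuple order in sync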
This is exactly like writing with open(...) as f, and again only works because the object itself (the open file) has __enter__ and __exit__ methods. So again, this doesn’t address the original post.
Sure. If you want a scope, then create a scope, using a function.
If you don’t want to create a function, but still want to assign a variable, and delete it afterwards etc.:
try:
    x = y
    do_stuff()
finally:
    del x
But given the thread’s title, the walrus operator (:=) is an obvious thing to try too. The proposed syntax could be confused with it, which is another reason against it.
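For illustration, the walrus already binds a name as part of evaluating an expression, and it is already legal inside a with header, where it means something quite different from the proposed temporary assignment (the file name is just an example):

values = [1, 5, 2]
if (biggest := max(values)) > 3:
    print(biggest)   # 5, and biggest stays defined after the if

# A walrus in a with header binds the context manager itself:
with (f := open('example.txt', 'w')) as out:
    out.write('hi')
print(f is out, f.closed)   # True True: f persists after the block, it is merely closed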