Linked Booleans Logics (rethinking PEP 505)

Yeah, I started thinking about this as well, and I see two possible concepts:

a) It is a simple string, with only an IDE highlighting hint and possibly automatic dedenting (but then the issue of variable capture remains)

b) It is t-string-like, but instead of interpolating, it captures the variables used in the mini-language. But in that case, what language is it? Is it limited to Python syntax?

(a) would be useful for code-string inputs, such as implementing a multiline lambda via a string or feeding code to a sub-interpreter.
(b) is better suited to mini string DSLs.

I think you would only raise an army of objections by making Python do non-Python things.

The most lightweight, ‘possible right now’ syntax I ended up with is (nn being an instance of a suitably tailored class):

~(nn >> a | b & c[d].e)

Suitable and practical for fast coding, but probably not for final code.

a = None
b = 1
c = None
# Returns:
result = None

And

a = None
b = 1
c = {'a': Namespace(e=1)}
d = 'a'
# Returns:
result = (1, 1)

Correct?

Yes, correct, and possibly `result = [1, 1]` for the second example.
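
To make those semantics concrete, here is a minimal sketch of such a wrapper (my own illustration, not the actual class being discussed). It assumes each root is wrapped explicitly, that | picks the first non-None operand, and that & pairs the two values when neither is None:

from types import SimpleNamespace as Namespace

class maybe:
    """Rough None-propagating wrapper (a minimal, illustrative sketch)."""
    def __init__(self, value):
        # Unwrap nested wrappers so maybe(maybe(x)) behaves like maybe(x).
        self._value = value._value if isinstance(value, maybe) else value

    def __getitem__(self, key):
        if isinstance(key, maybe):            # tolerate wrapped keys too
            key = key._value
        if self._value is None:
            return maybe(None)
        try:
            return maybe(self._value[key])
        except (KeyError, IndexError, TypeError):
            return maybe(None)

    def __getattr__(self, name):
        if self._value is None:
            return maybe(None)
        return maybe(getattr(self._value, name, None))

    def __or__(self, other):
        # First non-None operand wins (None-aware "or").
        return maybe(self) if self._value is not None else maybe(other)

    def __and__(self, other):
        # Both present -> pair of the two values, otherwise None.
        other = other._value if isinstance(other, maybe) else other
        if self._value is None or other is None:
            return maybe(None)
        return maybe((self._value, other))

    def get(self):
        return self._value

# First example: a and c are missing, so the whole expression collapses to None.
a, b, c, d = None, 1, None, 'a'
print((maybe(a) | maybe(b) & maybe(c)[d].e).get())   # None

# Second example: everything resolves, so & pairs the two values.
c = {'a': Namespace(e=1)}
print((maybe(a) | maybe(b) & maybe(c)[d].e).get())   # (1, 1)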


Yup, I am on the same page.

By the way, this apparently is called “Maybe Monad”.

I have seen this in other languages that have it built in, but I never knew that it had a name.


Actually, this would not work, because c[d].e is evaluated before the container is applied to it, whereas the wrapper should be applied to c before the subscription (getitem) and the attribute access (getattr).
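
A quick illustration of that evaluation-order problem (my own example): Python evaluates the subexpression c[d].e eagerly, before any wrapper method such as __or__ or __and__ gets to run, so it blows up as soon as c is None:

c, d = None, 'a'
try:
    c[d].e          # evaluated before the wrapper ever sees it
except TypeError as exc:
    print(exc)      # 'NoneType' object is not subscriptable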

PEPs 335, 531, and 532 have already been rejected; they cover almost all of the development (re)produced here.
What we have left is the c-strings idea, which covers a much wider usage scope than PEP 505, but that should probably be the subject of another thread.


So I am quite on board with:

Well, not necessarily that, but I do think some customised DSL environment could be helpful for various applications.

Of course, it is always possible to just implement a custom DSL via string parsing, but the issue with that is that it is inevitably going to be very slow and would not be attractive for ad-hoc applications, such as None-aware operators:

%timeit 1 if a is None else 2    # 15 ns

Well, ok, that is a bit of wishful thinking as a target, but let’s take, say, the maybe class that I have been playing with:

d = {'a': [1]}
%timeit maybe(d)['a'][0] | 2    # 1.3 µs

In comparison to something that parses a string:

import string
FMTR = string.Formatter()
%timeit FMTR.format('{} {} {}', 1, 2, 3)    # 4 µs

And this is a lower bound. To make it do what maybe does above, it would easily go to 10 µs, maybe even 20 µs.

E.g. symbolic expression parsing:

s = '(-1 * -2) + (-3 * 2) + (-4 * -1 * -3)'
parser.parse(s)    # 50 µs

And this is reasonable performance, as it is a very minimal, optimised implementation. E.g. the performance of lark doing the same thing:

parser.parse(s)           # 160 µs
cython_parser.parse(s)    # 120 µs


So all I am trying to say is that implementing a string DSL can definitely be done easily, e.g.:

a = maybe('a | b & c[d].e', a=1, b=2, c=3, d=4)

But it would be both:
a) very slow (in comparison to something done directly in Python), and
b) inconvenient, as parameters would inevitably have to be provided manually (not simply looked up from the enclosing scope).
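
For what it is worth, here is a rough sketch of such a string-based interface, reusing the maybe wrapper sketched earlier and leaning on eval instead of a real parser, purely to show point (b); a genuine DSL parser would have to do its own (slower) parsing on top of this:

def maybe_dsl(expr, **names):
    """Hypothetical string front end: every provided name is wrapped in
    the maybe class from the earlier sketch, then Python's own parser
    evaluates the expression."""
    wrapped = {name: maybe(value) for name, value in names.items()}
    return eval(expr, {"__builtins__": {}}, wrapped).get()

print(maybe_dsl('a | b & c[d].e', a=1, b=2, c=3, d=4))   # 1  (c[d].e is None here)

Every root has to be spelled out as a keyword argument, which is exactly the inconvenience of point (b).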



However, the issue with something like:

WithDeferredContext{ a | b & c[d].e }  # pseudo-code

is that, for this to be justified, it needs to be a proper multi-purpose tool (such as re is for string parsing) as opposed to just being able to fit one problem.

For example, by overriding the functions in there:

# pseudo-code
WithDeferredContext(
    enter_fun = maybe,
    f1 = partial(maybe, *args_f1, **kwargs_f1),
    exit_fun = maybe.get,
    ){ a | f1:b & c[d].e }  

→ The roots are identified (a, b, c); enter_fun is applied to all of them except b, where f1 is applied instead. After the expression is evaluated, exit_fun is applied to the result, and its output is the final output.
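
A rough approximation of that behaviour with nothing but current syntax (my own sketch, reusing the maybe class and Namespace from earlier; the roots are identified manually via the lambda's parameters rather than by the construct itself):

def with_deferred_context(expr, roots, enter_fun, exit_fun, **overrides):
    # Wrap each named root with enter_fun, or with its per-root override
    # (the "f1 instead of enter_fun for b" case), evaluate, post-process.
    wrapped = {name: overrides.get(name, enter_fun)(value)
               for name, value in roots.items()}
    return exit_fun(expr(**wrapped))

result = with_deferred_context(
    lambda a, b, c: a | b & c['a'].e,
    roots={'a': None, 'b': 1, 'c': {'a': Namespace(e=1)}},
    enter_fun=maybe,
    exit_fun=maybe.get,
)
print(result)   # (1, 1)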


Then partial(WithDeferredContext, enter_fun=..., exit_fun=...) becomes a generalizable tool creator for many niche cases.
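
Continuing the sketch above, that could look like:

from functools import partial

# Hypothetical "tool creator": fix the enter/exit behaviour once, reuse it.
none_aware = partial(with_deferred_context, enter_fun=maybe, exit_fun=maybe.get)
print(none_aware(lambda a, b: a | b, roots={'a': None, 'b': 1}))   # 1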

Yet I am not sure the performance, generalizability, and simplicity of this would be enough to make it worth it.

Yup, exactly.

Most importantly, I would like the core logic to be customisable. I am not sure what that could look like, but I would like to be able to pick which operators do what, and also the short-circuiting behaviour.

For decisions on customisation, it would be good to find more applications for this.
Personally, I like to have at least 3, so as to cover the space properly.

Basically every operator except the short-circuiting ones (and and or) can be overridden by changing enter_fun to instantiate a class with conveniently implemented magic methods (this would restrict the usage to the Python layer, thus not providing performance gains).
But you have to be careful with the precedence order. In that regard, the bitwise operators are quite well placed (but the comparison operators act later, so they cannot be used properly within the context). Also, there might be complex implications regarding the selection of the roots if you go this way.
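
A small illustration of that precedence point, using the maybe sketch from earlier: comparisons bind more loosely than |, so a wrapper's __eq__ would only ever see the already-combined left-hand side:

m = maybe(None)
# Parses as (m | 2) == 2, not m | (2 == 2); the comparison happens last,
# outside the wrapper chain, and falls back to default object equality.
print(m | 2 == 2)            # False
print((m | 2).get() == 2)    # True, but only after leaving the wrapper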
I fear this “root-logic” is too counter-intuitive to make it acceptable in a fully customizable way.

Can you say more?


This is only my opinion and preferred approach, but what I meant to say is that I would ideally want to find at least 2 more practical applications that such a DSL would be able to handle, for 2 reasons:

  1. The implementation cost doesn’t seem justified given the single application that has been discussed.
  2. Having more applications that act as pillars for decision making would contribute to a better-thought-out and more forward-looking design, meaning it has a higher chance of fitting another, not-yet-known problem without breaking the API.

The topic of this thread is covered by this one, I guess:
