Allow `obj[key]` to fall back to attribute-based `__getitem__`

Background

In current Python, special methods do not behave like normal user-defined methods: they bypass __getattribute__, and they also don’t fall back to __getattr__.

The expression obj[key] only works if the type defines a mapping or sequence slot.
If it does not, Python immediately raises TypeError, even if the object does provide a __getitem__ method dynamically.

This leads to surprising limitations for objects that delegate or map behavior to another object.
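A minimal sketch of this lookup rule (the class name is illustrative): an instance attribute called __getitem__ is found by ordinary attribute lookup but ignored by the subscript operator.

```python
# Demonstrates that obj[key] is resolved on the type, not the instance:
# an instance attribute named __getitem__ is visible to getattr but
# never consulted by subscripting.
class Plain:
    pass

p = Plain()
p.__getitem__ = lambda key: f"got {key}"

print(callable(p.__getitem__))  # True: the attribute exists on the instance
try:
    p["x"]                      # but the subscript bypasses instance lookup
except TypeError as e:
    print(type(e).__name__)     # TypeError
```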

Example 1

Suppose we have an object acting as a dynamic view or proxy:

class ConfigView:
    def __init__(self, target):
        self.target = target

    def __getattr__(self, name):
        # forward lookups to the underlying object
        return getattr(self.target, name)

If the underlying object implements __getitem__, we would naturally expect view["path"] to work the same as view.target["path"].

But currently it fails with:

TypeError: 'ConfigView' object is not subscriptable

even though __getitem__ is available via attribute lookup.

Example 2

When using _Config as a read-only, delegation-based wrapper, a load_config proxy must manually forward every operation it wants to expose:

import yaml


class _Config(dict):
    def __getitem__(self, key):
        value = super().__getitem__(key)
        if isinstance(value, dict):
            return _Config(value)
        elif isinstance(value, list):
            return [_Config(v) if isinstance(v, dict) else v for v in value]
        return value

    def __getattr__(self, key):
        if key.startswith("__") and key.endswith("__"):
            cls = type(self)
            if key in cls.__dict__:
                func = cls.__dict__[key]
                return func.__get__(self, cls)
            return super().__getattribute__(key)
        return self.__getitem__(key)

    # Explicitly block all mutation to preserve immutability
    def __setitem__(self, key, value): raise NotImplementedError
    def __setattr__(self, key, value): raise NotImplementedError
    def __delitem__(self, key): raise NotImplementedError
    def __delattr__(self, key): raise NotImplementedError

    def items(self):
        for key, value in super().items():
            if isinstance(value, dict):
                yield key, _Config(value)
            elif isinstance(value, list):
                yield key, [_Config(v) if isinstance(v, dict) else v for v in value]
            else:
                yield key, value


class load_config:
    def __init__(self, filepath):
        self.filepath = filepath

    def _load(self):
        # close the file promptly via a context manager
        with open(self.filepath, "r", encoding="utf-8") as f:
            return _Config(yaml.safe_load(f))

    # Without a fallback, every operation must be forwarded manually
    def __getitem__(self, key): return self._load()[key]
    def __getattr__(self, name): return getattr(self._load(), name)
    def __setitem__(self, key, value): return self._load().__setitem__(key, value)
    def __setattr__(self, key, value):
        if key == "filepath":
            return super().__setattr__(key, value)
        return setattr(self._load(), key, value)
    def __delitem__(self, key): return self._load().__delitem__(key)
    def __delattr__(self, key):
        if key == "filepath":
            return super().__delattr__(key)
        return delattr(self._load(), key)
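For reference, the _Config access pattern above can be exercised standalone. This condensed sketch (mutation guards and list handling omitted) shows why both attribute and item access reach nested values:

```python
# Condensed sketch of the _Config pattern: nested dicts are re-wrapped
# so both cfg.key and cfg["key"] work on nested structures.
class _Config(dict):
    def __getitem__(self, key):
        value = super().__getitem__(key)
        return _Config(value) if isinstance(value, dict) else value

    def __getattr__(self, key):
        # only called when normal attribute lookup fails
        return self[key]

cfg = _Config({"db": {"host": "localhost", "port": 5432}})
print(cfg.db.host)         # localhost
print(cfg["db"]["port"])   # 5432
```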

It would be better if the code below could work:

class load_config:
    def __init__(self, filepath: str):
        self.filepath = filepath

    def __getattribute__(self, key):
        with open(
            object.__getattribute__(self, "filepath"),
            "r",
            encoding="utf-8"
        ) as f:
            row = yaml.safe_load(f)
        return _Config(row).__getattr__(key)

This example reloads the file at the root level on every access, which would obviously be inefficient in real use. It’s only meant as a demonstration: there are real scenarios where a wrapper needs to delegate almost all operations to the underlying object.
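As an aside, one workaround available today is to generate the forwarding dunders at class level, since only class-level definitions fill the type slots. This is only a sketch, not part of the proposal; the delegate helper is a hypothetical name:

```python
# Hypothetical helper: installs class-level forwarding methods for the
# named dunders, so the corresponding type slots are populated.
def delegate(*names):
    def decorator(cls):
        for name in names:
            def make(n):
                def method(self, *args, **kwargs):
                    return getattr(self.target, n)(*args, **kwargs)
                return method
            setattr(cls, name, make(name))
        return cls
    return decorator

@delegate("__getitem__", "__len__", "__contains__")
class ConfigView:
    def __init__(self, target):
        self.target = target

    def __getattr__(self, name):
        return getattr(self.target, name)

view = ConfigView({"path": "/usr/bin"})
print(view["path"], len(view), "path" in view)  # /usr/bin 1 True
```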

Proposal 1

When a type does not provide a mapping or sequence slot, allow obj[key] to fall back to an attribute lookup for __getitem__, following normal attribute resolution rules (__getattribute__, __getattr__, delegation, etc.).

The same pattern could also apply to methods like __contains__ or __len__. Even __eq__ would be unaffected, because Python automatically provides a default implementation when one is not defined, meaning the fallback would never trigger for it anyway.
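That claim about __eq__ is easy to check: every class inherits a default __eq__ from object, whereas no default __getitem__ exists, so only the latter has an error path for a fallback to fill:

```python
# Every class inherits object.__eq__, so "no __eq__ defined" cannot occur;
# by contrast, there is no default __getitem__.
class Empty:
    pass

print("__eq__" in dir(Empty))          # True: inherited from object
print(Empty.__eq__ is object.__eq__)   # True
print("__getitem__" in dir(Empty))     # False: no default subscription
```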

This would significantly improve the behavior of dynamic proxy objects.

I have made a simple test of __getitem__ in a PR to show this idea could work without affecting existing code, and GitHub Actions shows that all tests pass (except skipped ones).

I would like to hear thoughts on whether this fallback would be acceptable as an enhancement.

Additional Question

Injecting special methods on an instance produces unexpected output:

import yaml

__all__ = ["config"]


class _Config(dict):
    def __getitem__(self, key):
        value = super().__getitem__(key)
        if isinstance(value, dict):
            return _Config(value)
        elif isinstance(value, list):
            return [_Config(v) if isinstance(v, dict) else v for v in value]
        else:
            return value

    def __getattr__(self, key):
        if key.startswith("__") and key.endswith("__"):
            cls = type(self)
            if key in cls.__dict__:
                func = cls.__dict__[key]
                return func.__get__(self, cls)
            return super().__getattribute__(key)
        return self.__getitem__(key)

    def __setitem__(self, key, value):
        raise NotImplementedError(
            f"Config object is read-only. Tried to set {key} = {value}"
        )

    def __setattr__(self, key, value):
        raise NotImplementedError(
            f"Config object is read-only. Tried to set {key} = {value}"
        )

    def __delitem__(self, key):
        raise NotImplementedError(f"Config object is read-only. Tried to del {key}")

    def __delattr__(self, key):
        raise NotImplementedError(f"Config object is read-only. Tried to del {key}")

    def items(self):
        for key, value in super().items():
            if isinstance(value, dict):
                yield key, _Config(value)
            elif isinstance(value, list):
                yield key, [_Config(v) if isinstance(v, dict) else v for v in value]
            else:
                yield key, value


class load_config(_Config):
    def __init__(self, filename):
        def wrapper(self, func):
            def inner(*args, **kwargs):
                print("wrapper operated")
                self = load_config(filename)
                return func(self, *args, **kwargs)

            return inner

        with open(filename, "r", encoding="utf-8") as f:
            data = yaml.safe_load(f)
        super().__init__(data)

        for key, value in _Config.__dict__.items():
            if callable(value):
                self.__dict__[key] = wrapper(self, value)


config = load_config("config.yaml")

if __name__ == "__main__":
    print("show type")
    print(config.__getitem__)
    print(config.__getattr__)
    print(config.__setitem__)
    print(config.__setattr__)

    print("\ntest read")
    print(config.testA.testB["testC"])  # wrapper operated
    print(type(config).__getitem__(config, "testA").testB["testC"])  # wrapper bound on load_config, so no output
    print(config["testA"].testB["testC"])  # output expected, but none due to CPython's lookup. Does this need improvement?

    print("\ntest modify")
    try:
        config.testA.testB["testC"] = 1
    except NotImplementedError as e:
        print("success:", e)
    try:
        type(config).__getitem__(config, "testA").testB["testC"] = 1
    except NotImplementedError as e:
        print("success:", e)
    try:
        config["testA"].testB["testC"] = 1
    except NotImplementedError as e:
        print("success:", e)

    print("\ntest create")
    try:
        config.new_attr = 1  # output expected, but none is printed
        print("failed")
    except NotImplementedError as e:
        print("success:", e)
    try:
        config["new_attr"] = 1  # output expected, but none is printed
        print("failed")
    except NotImplementedError as e:
        print("success:", e)

output:

show type
<function load_config.__init__.<locals>.wrapper.<locals>.inner at 0x0000000001433CE0>
<function load_config.__init__.<locals>.wrapper.<locals>.inner at 0x0000000001433BA0>
<function load_config.__init__.<locals>.wrapper.<locals>.inner at 0x0000000001433D80>
<function load_config.__init__.<locals>.wrapper.<locals>.inner at 0x0000000001433EC0>

test read
wrapper operated
success
success
success

test modify
wrapper operated
success: Config object is read-only. Tried to set testC = 1
success: Config object is read-only. Tried to set testC = 1
success: Config object is read-only. Tried to set testC = 1

test create
success: Config object is read-only. Tried to set new_attr = 1
success: Config object is read-only. Tried to set new_attr = 1

As you can see, Python handles different special methods on an instance differently!

Injecting special methods on the class produces the expected output, but it differs from the instance case:

class load_config:
    def wrapper(func):
        def inner(self, *args, **kwargs):
            print("wrapper operated")
            with open(object.__getattribute__(self, "_Config__filename"), "r", encoding="utf-8") as f:
                data = yaml.safe_load(f)
            return func(_Config(data), *args, **kwargs)
        return inner

    def __init__(self, filename):
        object.__setattr__(self, "_Config__filename", filename)


for key, value in _Config.__dict__.items():
    if callable(value) and key not in load_config.__dict__.keys():
        setattr(load_config, key, load_config.wrapper(value))


config = load_config("config.yaml")


if __name__ == "__main__":
    # same test cases as above

output:

show type
<bound method load_config.wrapper.<locals>.inner of <__main__.load_config object at 0x000000000147ECC0>>
<bound method load_config.wrapper.<locals>.inner of <__main__.load_config object at 0x000000000147ECC0>>
<bound method load_config.wrapper.<locals>.inner of <__main__.load_config object at 0x000000000147ECC0>>
<bound method load_config.wrapper.<locals>.inner of <__main__.load_config object at 0x000000000147ECC0>>

test read
wrapper operated
success
wrapper operated
success
wrapper operated
success

test modify
wrapper operated
success: Config object is read-only. Tried to set testC = 1
wrapper operated
success: Config object is read-only. Tried to set testC = 1
wrapper operated
success: Config object is read-only. Tried to set testC = 1

test create
wrapper operated
success: Config object is read-only. Tried to set new_attr = 1
wrapper operated
success: Config object is read-only. Tried to set new_attr = 1

You can see that Python also handles special methods differently on an instance versus a class!

Proposal 2

These behavioral differences can cause real trouble for programmers, and standardizing the behavior of special methods would require a PEP, since it would be a significant change to the Python core.

I would like to hear thoughts on whether it’s a good idea to standardize the behavior of special methods.

Reminder

This topic has been edited several times to present my idea clearly.
I am not a native English speaker, and some wording may have been generated by an LLM. But I will try my best to write in my own words now! Some phrasing may still cause confusion.
If I did something wrong, just tell me directly!

I completely disagree. ConfigView doesn’t implement __getitem__, so subscripting doesn’t work. That makes perfect sense and doing something else would be unexpectedly magical.

edit: I understand that you are expecting it to go through the __getattr__ logic and find the __getitem__ method on the inner object, but this doesn’t happen because __getattr__ isn’t the standard path

7 Likes

Thanks for the feedback — and you’re absolutely right about how things work today.
ConfigView doesn’t define __getitem__, so the current behavior is completely consistent with the existing rules. I fully understand why CPython raises TypeError here.

The reason I started this discussion is precisely because I understand that limitation.
There are many real-world cases where an object is intentionally acting as a proxy or wrapper and delegates all attribute lookups to an inner object. In those cases, __getitem__ is available and meaningful, but CPython never attempts attribute resolution for subscripting, so the delegation chain is cut off prematurely.

This proposal isn’t trying to introduce “magical” behavior — just to make subscripting consistent with how other special methods already fall back to attribute lookup when no slot is defined. The advantages would be:

  1. Cleaner delegation / proxy patterns
    Classes that forward attributes via __getattr__ wouldn’t need to manually re-implement __getitem__ just to allow subscripting to work naturally.

  2. No impact on existing code or semantics

    • If a type defines a mapping or sequence slot, that still takes precedence.
    • If the fallback isn’t present, the same TypeError is raised.
    • The fallback only triggers in cases where Python would already be about to throw an error.
      So this doesn’t break anything.
  3. Essentially zero performance cost
    In practical code, nobody intentionally triggers “object is not subscriptable” on performance-critical paths. Outside debugging, it almost never occurs.
    The fallback only runs in that rare case, so performance should be unaffected.

My goal is simply to explore whether this small enhancement would make Python’s behavior a bit more consistent and ergonomic for dynamic objects and delegation use cases, while keeping the language’s current guarantees intact.

I’m very open to further discussion or concerns — I appreciate you taking the time to share your view.

I understand what you’re trying to do here. From a writing perspective, it would be nice to send absolutely everything to the proxied object unless specifically overridden.

From a reading perspective, I would be very surprised if setting __getattr__ to proxy would result in __getitem__ or __contains__ or __eq__ to resolving to the proxied object’s dunder methods.

5 Likes

I understand your concern, and I think it’s an important distinction to highlight.

The proposed fallback only applies when the type truly has no corresponding slot or method.
Python does not automatically synthesize __getitem__ or __contains__ for user-defined classes, so fallback for these methods is safe: the interpreter only attempts attribute lookup if there is no mapping/sequence slot and no explicit attribute on the class.

In other words, if you intentionally define either method (or if a C-level slot exists), the normal behavior takes full precedence and the fallback never triggers. So for __getitem__ (the part that I’ve implemented so far) and a potential extension to __contains__, there is no risk of unintentionally overriding existing behavior.

__eq__, however, is a completely different situation.
Python provides a default __eq__ (inherited from object) for every class, which means an attribute-based fallback on __eq__ would never happen.

So to summarize:

  • __getitem__ fallback is safe → Python never creates a default one.
  • __contains__ fallback would also be safe → same reason as above.
  • __eq__ fallback would also be safe → Python auto-generates it, so the fallback never happens.

Thanks again for raising the point.

I would hate this. If I’m wrapping an object, it’s because I want more control over it. Right now, if I want my users to be able to do obj[key], I can define a two-line __getitem__, and I’m not implicitly opting into a much broader API than I plan to support. The proposed behavior flips that default, which would be far more error-prone and significantly harder to roll back later.

6 Likes

The language reference says:

For instance, if a class defines a method named __getitem__(), and x is an instance of this class, then x[i] is roughly equivalent to type(x).__getitem__(x, i) .

Not only __getitem__, but many other special methods are defined at the class level. I think this change is a breaking change and would require a PEP.

Thanks for pointing this out. The documentation does indeed describe special-method resolution in a way that resembles the normal attribute lookup order (__getattribute__ → user-defined attribute → __getattr__). However, in CPython’s actual implementation, special methods behave very differently: most of them completely bypass both __getattribute__ and __getattr__ and are resolved only through their corresponding type slots.
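The bypass is easy to observe: a tracing __getattribute__ fires for explicit attribute access but never for the implicit subscript (the class name here is illustrative):

```python
# Implicit special-method invocation skips __getattribute__ entirely.
class Traced(list):
    def __getattribute__(self, name):
        print("lookup:", name)
        return super().__getattribute__(name)

t = Traced([10, 20])
t.append(30)   # explicit access: prints "lookup: append" first
print(t[0])    # implicit __getitem__: no "lookup" line, just 10
```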

This inconsistency is precisely why the change I’m proposing is possible without affecting other special methods. Because CPython already prevents __getattribute__ from intercepting special-method lookup, adding a fallback to a single special method (__getitem__) does not alter the semantics of other methods like __eq__, __add__, __contains__, etc. They continue to resolve strictly through their slots, exactly as before.

In other words, this proposal does not generalize special-method interception.
It only adds a narrowly scoped fallback for __getitem__ when no slot is present, a case where the interpreter would otherwise immediately raise TypeError. This makes it isolated, predictable, and safe.

I fully agree that modifying the entire special-method resolution model would require a PEP.
But this change does not alter the model globally; it adds behavior only in the otherwise-error path of a single method. All existing tests pass, and the change does not influence any special methods other than __getitem__.

I hope this clarifies why the proposal is narrow enough to be safe while still addressing a real usability gap.

Most special methods are looked up on the class. They’re not subject to the usual __getattr__ handling, unless you define that on the metaclass.

1 Like

I understand your concern, but nothing in this proposal would take control away from you.

If you want to control subscripting behavior, you can still do exactly what you do today: define __getitem__ on your wrapper. That continues to override everything — the new behavior never takes precedence over an explicitly defined method or an existing mapping slot.

This change doesn’t require any modifications to your current classes, and it doesn’t automatically broaden the API unless you explicitly choose to delegate it. If you don’t want subscripting to fall through to the wrapped object, you simply don’t define __getattr__ / __getattribute__, or you avoid forwarding __getitem__ within them. In that case, the behavior remains exactly as it is today.

So the existing default stays fully intact, and the fallback only applies in the narrow scenarios where a class intentionally delegates and would otherwise end up with an unnecessary TypeError.

You’re absolutely right — most special methods are resolved strictly at the class level through the type slots, and they don’t go through normal __getattr__ unless the metaclass overrides that behavior. That’s exactly why this proposal is both feasible and contained: since CPython already bypasses __getattribute__ / __getattr__ for special methods, adding a fallback only for __getitem__ (and only when no slot exists) doesn’t affect the resolution of any other special method.

In other words, the current slot-based lookup model stays intact.
This isn’t an attempt to generalize special-method interception; it’s a narrowly scoped enhancement for one method in the specific case where Python would otherwise immediately raise TypeError.

The predictable slot-first behavior remains unchanged, and the fallback activates only when there is no mapping/sequence slot and no class-defined __getitem__.

It’s available through attribute lookup on the instance, not on the class.

__getitem__ also won’t work if it exists as an instance attribute so I wouldn’t expect it to work if found dynamically on an instance through __getattr__.

class InstanceNoMagic:
    def __init__(self, target):
        self.target = target
        self.__getitem__ = target.__getitem__

ex = InstanceNoMagic({"path": "/usr/bin/python4"})

print(ex.__getitem__)  # exists
print(ex["path"])  # not subscriptable
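For contrast, a sketch of the class-level counterpart, which does work today because assigning on the class updates the type slot:

```python
# Assigning __getitem__ on the class (not the instance) fills the slot,
# so subscripting starts working for all instances.
class ClassMagic:
    pass

ClassMagic.__getitem__ = lambda self, key: f"got {key}"

ex = ClassMagic()
print(ex["path"])  # got path
```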
1 Like

Exactly — and it’s precisely because Python currently behaves this way that I’m proposing this improvement. The fact that __getitem__ found on the instance (or via __getattr__) is ignored is the limitation I’m trying to address.

1 Like

Wouldn’t the easiest solution be to add __getitem__ to your proxy?

class ConfigView:
    def __init__(self, target):
        self.target = target

    def __getattr__(self, name):
        # forward lookups to the underlying object
        return getattr(self.target, name)

    def __getitem__(self, name):
        return self.target[name]

You’ve mentioned this a few times. What special methods do you mean?

2 Likes

Adding __getitem__ manually is certainly the straightforward workaround, and it’s what I do today. The proposal isn’t meant to replace that option — it’s simply trying to make delegation patterns a bit less repetitive in cases where a proxy already forwards all other behavior.

That’s why I see this as a very small, localized enhancement rather than a large semantic change. It affects only the narrow error path where Python would otherwise raise TypeError, and doesn’t alter the general special-method model.

I don’t consider this a limitation, and I don’t consider the proposal an improvement. I don’t think instance attributes should change the behaviour of a class in this way. I think it would prove more confusing for different instances of the same class to behave differently and I don’t think __getitem__ should be special cased as a special method that works if set on instances.

1 Like

I understand your perspective, and it’s reasonable to prefer that instance attributes not influence class-level behavior. The way I see this proposal, though, is closer to a small piece of syntactic sugar rather than a shift in semantics. It doesn’t change the behavior of classes that already define __getitem__, nor does it require different instances to behave differently unless the class itself has chosen to delegate behavior through __getattr__.

In practice, it simply removes a bit of boilerplate for proxy-like patterns where the class already forward-delegates everything else. For these cases, the fallback avoids forcing the author to re-write a method whose logic they’ve already delegated.

So it’s not meant to redefine how classes behave — just to make a specific, intentional pattern a little less repetitive.

But your proxy doesn’t forward all other behavior:

class Proxy:
    def __init__(self, target):
        self.target = target

    def __getattr__(self, name):
        return getattr(self.target, name)

p = Proxy("Hello, World!")
print(len(p))

gives:

Traceback (most recent call last):
  File "/tmp/proxy.py", line 10, in <module>
    print(len(p))
          ^^^^^^
TypeError: object of type 'Proxy' has no len()
1 Like

You’re right! I haven’t generalized this behavior to other special methods. At this point I’m only exploring the idea for __getitem__, and trying to understand whether the concept makes sense at all, what the use cases look like, and what risks it might introduce.

That’s why I’m asking for feedback now: to gather perspectives and identify any potential issues before considering whether this should extend further or stay narrowly scoped. And if it turns out that the idea is viable and safe, I would look into applying the same approach to other special methods where it makes sense.

You’ve gotten a good amount of feedback so far. You’d have to explain why __getitem__ should behave differently than other special methods. You’ve said this will make it behave more like other special methods, but it won’t. It will make it behave differently.

2 Likes