BehaviorEnum: an Enum subclass that bundles a value with a callable - stdlib candidate?

The problem

A common pattern is dispatching to different behavior based on an enum member. Two approaches dominate:

if/elif (or match/case):

  • Dispatch logic is coupled away from the enum definition
  • Adding a member requires editing every dispatch site
  • O(n) sequential scan - cost grows with member position
  • Exception messages and logging require per-branch repetition

dict of callables:

  • O(1), but the constant becomes a bare str - no type safety, no IDE completion
  • Mistyping "New" instead of "NEW" silently falls through
  • The callable is still separated from the constant it belongs to

Neither approach co-locates the constant with its behavior or provides O(1) dispatch with full enum semantics.
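To make the status quo concrete, here is a sketch of the dict-of-callables version of a task-status dispatcher (handler names and messages are illustrative, not from any real codebase):

```python
# Dict-of-callables status quo (illustrative): keys are bare strings,
# so nothing ties them to a real constant and IDEs cannot complete them.
handlers = {
    "new": lambda: "Starting task...",
    "in_progress": lambda: "Already running.",
    "completed": lambda: "Nothing to do.",
}

def dispatch(status: str) -> str:
    # O(1) lookup, but a mistyped key ("New" instead of "new") only
    # fails here at runtime, and no type checker will flag it.
    return handlers[status]()

print(dispatch("new"))  # Starting task...
```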

Proposed solution: BehaviorEnum

A new Enum subclass where each member pairs a constant value with a callable, accessible via a do attribute:

from enum import BehaviorEnum

class TaskStatus(BehaviorEnum):
    NEW         = "new",         lambda: print("Starting task...")
    IN_PROGRESS = "in_progress", lambda: print("Already running.")
    COMPLETED   = "completed",   lambda: print("Nothing to do.")

TaskStatus.NEW.value      # "new"
TaskStatus.NEW.do()       # Starting task...
TaskStatus("new").do()    # lookup by value + dispatch - O(1)

The callable lives on the member; dispatch is O(1) via attribute access; value and behavior are co-located at definition time.

A secondary benefit: BehaviorEnum.__new__ validates the callable at class-creation time, so you cannot define a member without wiring up its handler. Incomplete dispatch becomes a definition-time error rather than a runtime surprise.
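A sketch of that definition-time check, using the minimal implementation from the Implementation section (on Python 3.12+ the TypeError propagates directly; 3.11 wraps it in a RuntimeError raised from __set_name__):

```python
from enum import Enum

class BehaviorEnum(Enum):
    """Minimal sketch of the proposed base class (not in the stdlib)."""
    def __new__(cls, value, do):
        if not callable(do):
            raise TypeError('%r is not callable' % (do,))
        obj = object.__new__(cls)
        obj._value_ = value
        obj.do = do
        return obj

# Forgetting the handler (or passing a non-callable) fails at class
# creation, not at the first dispatch:
try:
    class Broken(BehaviorEnum):
        NEW = "new", "not a callable"
except TypeError as e:
    print(e)
```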

Implementation

The implementation is intentionally minimal:

from enum import Enum

class BehaviorEnum(Enum):
    """Enum where each member bundles a constant value with a callable behavior."""
    def __new__(cls, value, do):
        if not callable(do):
            raise TypeError('%r is not callable' % (do,))
        obj = object.__new__(cls)
        obj._value_ = value
        obj.do = do
        return obj

  • No metaclass override required
  • Regular methods can coexist with members
  • Members are picklable across all protocols (restored by value lookup; the callable is never pickled)
  • Aliases, iteration, __repr__, and functional creation syntax all work as expected
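The pickling claim can be spot-checked; Enum members reduce to (class, value), so the lambda is never serialized (sketch, reusing the minimal implementation above):

```python
import pickle
from enum import Enum

class BehaviorEnum(Enum):
    """Minimal sketch of the proposed base class (not in the stdlib)."""
    def __new__(cls, value, do):
        if not callable(do):
            raise TypeError('%r is not callable' % (do,))
        obj = object.__new__(cls)
        obj._value_ = value
        obj.do = do
        return obj

class TaskStatus(BehaviorEnum):
    NEW = "new", lambda: "Starting task..."

# Enum.__reduce_ex__ returns (class, (value,)), so unpickling performs a
# value lookup and returns the existing member - callable and all.
for proto in range(pickle.HIGHEST_PROTOCOL + 1):
    restored = pickle.loads(pickle.dumps(TaskStatus.NEW, protocol=proto))
    assert restored is TaskStatus.NEW
    assert restored.do() == "Starting task..."
```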

Relationship to prior proposals

This space has been discussed before. The following summarises the relevant prior art and how this proposal differs.

2017: “Callable Enum values” (python-ideas)

Stephan Hoyer proposed a CallableEnum where def FOO(): inside the class body would become a member (python-ideas, April 2017 - thread). It was rejected - correctly - because bare function definitions in a class body trigger the descriptor protocol, requiring a metaclass override that prevents defining regular methods on the same enum.

BehaviorEnum avoids this entirely. Members use tuple syntax (NAME = value, callable), which is not subject to the descriptor protocol. No metaclass override is needed.

2019–2021: Callable values closed as “not a bug”

Two bug reports (python/cpython#82556, python/cpython#89820) about callable enum values were closed as “not a bug” - the behavior is intentional. BehaviorEnum does not change that behavior; it sidesteps it via tuple syntax.

functools.partial workaround

The community workaround was wrapping callables in functools.partial. In Python 3.13, partial gained __get__ (python/cpython#121027), breaking this workaround (python/cpython#125316). The officially recommended replacement is @enum.member (3.11+), but that doesn’t provide a named do attribute, O(1) dispatch semantics, or a reusable base class.

aenum

Ethan Furman’s aenum library extends the enum module with additional types (AutoNumberEnum, OrderedEnum, UniqueEnum, etc.) and is where core developers have historically pointed people for enum-adjacent patterns. It does not provide a CallableEnum or equivalent - the specific dispatch-with-callable use case has no existing home in aenum or in the stdlib.

                              Prior CallableEnum proposals            BehaviorEnum
  Member syntax               def FOO(): ... (descriptor conflict)    FOO = value, callable (tuple, no conflict)
  Value access                Value is the callable                   Separate value and do attributes
  Dispatch                    member() or member.value()              member.do(...)
  Metaclass override needed   Yes                                     No
  Regular methods allowed     No                                      Yes

Performance

Methodology

Three dispatch strategies were benchmarked across all 100 member positions of a 100-member enum:

  • Enum + match/case - standard match statement
  • Dict + lambda - dict keyed by string, values are callables
  • BehaviorEnum - .do() call directly on the member

Each position was timed with timeit (5 000 iterations × 7 repeats, minimum taken) to suppress scheduler noise. Statistical testing used a Kruskal-Wallis H-test (non-parametric) followed by Dunn post-hoc with Bonferroni correction. Complexity was confirmed via OLS linear regression of runtime vs. position.
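The benchmark code itself is not shown in the post; the following is an illustrative sketch of what the per-position timing step presumably looked like (the 3-member enum and all names are hypothetical):

```python
import timeit
from enum import Enum

class Color(Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"

# One of the three strategies: dict keyed by member, values are callables.
handlers = {member: (lambda: None) for member in Color}

def time_dispatch(member, iterations=5_000, repeats=7):
    # Taking the minimum over repeats suppresses scheduler noise,
    # as described in the methodology above.
    timer = timeit.Timer(lambda: handlers[member]())
    return min(timer.repeat(repeat=repeats, number=iterations))

best = time_dispatch(Color.BLUE)
assert best > 0.0
```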

Results

  Method              Mean (”s)   Complexity
  BehaviorEnum        0.0353      O(1)
  Dict + lambda       0.0386      O(1)
  Enum + match/case   0.7633      O(n)

All pairwise differences are statistically significant (Kruskal-Wallis H = 1667.76, p ≈ 0).

The chart below plots runtime vs. member position continuously across all 100 positions. The O(n) growth of match/case is unambiguous (RÂČ = 0.986, slope = 0.016 ”s/step, p = 1.09e-92). BehaviorEnum and Dict are flat throughout (slope ≈ 0, p > 0.25 for both).

At the extremes: match/case costs 0.038 ”s at position 1 and 1.542 ”s at position 100 - a 40× increase. BehaviorEnum stays at ~0.035–0.048 ”s regardless of position. The ~9% mean advantage of BehaviorEnum over Dict (p = 1.17e-148) reflects that getattr + .do() is cheaper than a dict key lookup and call. The variance of match/case across all positions and repeats is also far wider - its cost is unpredictable in a way that BehaviorEnum and Dict are not.

The key finding is not the absolute speed difference but the position-dependence of match/case: real-world dispatch cost varies based on where a member happens to fall in the enum definition. BehaviorEnum is O(1) and insensitive to enum size or member ordering.

Questions for the community

  1. Is do the right attribute name, or would something like behavior, handler, or fn be clearer?
  2. Should the callable validation in __new__ be opt-out (a class variable flag) for cases where users want to store a non-callable alongside the value?
  3. Is there appetite for this in the stdlib, or is aenum still the right home?

The thing about using LLMs to write something like this is that they help to produce something long, but then no one (or at least not me) can be bothered to read it because it’s LLM output, so the length isn’t actually useful. It would be better if you had just written 2-3 paragraphs without an LLM that clearly explain what you are talking about, without the absurd embellishment.

Don’t use LLMs in this situation. They do not help for you to communicate your ideas to others because others will be distrustful of you when they can see that your text is LLM-generated.

Also don’t edit the post above. If you want to repost then make a new post or start a new thread with links back and forth but don’t edit and totally rewrite the original post.

11 Likes

Thanks for the advice. I did use an LLM to reduce the content from my draft blog, which was much longer. Going forward, should I just create a new post without an LLM and get rid of this one?

Yes, but the key is to write something short. Dot points or 2-3 paragraphs (as Oscar said) to illustrate the idea to start with.

This isn’t a PEP; the minutiae can be debated later, but if there’s no consensus on adding it in the first place, then it’s only wasting your time explaining all of that.

1 Like

Just reply here with the human written version.

Personally I also suggest dropping all the performance stuff[1] and the bibliography. Just describe the problem you have, what your proposed idea is to solve it, and why you think it trumps any idea that doesn’t bend the scope of enum in quite such an extreme way.


  1. matching enums is very unlikely to be the bottleneck in any code ↩

1 Like

Appreciate the notes Carlos and Brenainn! I’m going to write a cleaner version to keep the focus on the idea, free from the LLM fluff and blog style narrative. I don’t want to use this post for driving the discussion as the LLM topic looms over it and might distract from the idea.

3 Likes

I agree with the length issue. LLMs tend to “fill in the blanks” with information that “makes sense” to advanced Python users. But everyone here is an advanced Python user, and we can all “fill in the blanks”.

Definitely do what the others have said:

  • Write your idea as concisely as possible.
  • Organize your idea into: motivation, proposal, and alternative solutions.

That said, I think LLMs can improve the quality of posts on ideas if you use them correctly. After you write your proposal, ask the LLM questions like:

  • Has this idea been considered before?
  • Did I consider the most important alternative solutions?
  • Polish my text; make it concise, simple, and clear.
  • Which objections might people raise?
1 Like

Not this one. The other three maybe, but not this one. Getting an LLM to polish your text seldom actually makes it better; I would much rather read something that’s full of typos than something that’s potentially full of hallucinations. And no, “I checked the AI’s output” doesn’t cut it, for the same reason the Man in Black didn’t trust Inigo - I’ve known too many checked-AI-output messages. So please, don’t let an AI generate the text that you post, even if it’s “polishing” your own text. Just post your own text.

But if asking an LLM if this has been considered before is helpful, by all means, do so. In that usage, the AI is basically serving as a fuzzy-matching search across many many years of discussions and conversations, and that’s something it can certainly do. Just remember to look at everything that it claims to have found, because it’s much more obvious when you fail to check its output there.

3 Likes

I don’t think “polish my text” adds “hallucinations” so much as it simply improves bad or wordy prose.

Also, when I added that, I had non-native English speakers in mind. It’s just a distraction when someone’s language skill gets in the way of their expression.

I don’t agree with this either. Ultimately, people are responsible for what they write. If you feel like someone writes bad posts, don’t read them. It doesn’t matter whether they used AI or they simply write poorly. And conversely, if people write good posts, it doesn’t matter whether AI helped or not.

No need to fight a war (you’re not going to win) against AI. You and I both hate badly-expressed posts. IMO: Fight a war against bad posts.

To get back to the original post, perhaps this is the best (and earliest) example of AI hallucinating:

Wouldn’t d["Key"] raise a KeyError when we have a d = {"NEW": ...}?

Also imo the Enum.<attr>.do() attribute access is kind of weird; imo Enum.<attr>() would work better.

Ignoring all the talk about performance, I still have a question. Would getattr(BehaviorGroup, f"do_{thing}")() not be better? That’s already pretty common (at least as far as I know).

E.g. I made a basic emulator for one of my assembly languages in Python lately, where it’s just a class Emulator with a run loop, where each instruction is just getattr(self, f"do_{opcode}")(*operands). This solution already works well enough, no?
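That getattr-based dispatch pattern as a small runnable sketch (the Emulator class and the do_* method names here are illustrative, not the poster’s actual code):

```python
class Emulator:
    """Dispatch each opcode to a do_<opcode> method via getattr."""
    def __init__(self):
        self.acc = 0  # single accumulator register (illustrative)

    def do_add(self, operand):
        self.acc += operand

    def do_sub(self, operand):
        self.acc -= operand

    def step(self, opcode, *operands):
        # An unknown opcode surfaces immediately as an AttributeError.
        getattr(self, f"do_{opcode}")(*operands)

emu = Emulator()
emu.step("add", 5)
emu.step("sub", 2)
assert emu.acc == 3
```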

Incidentally, you can do this with dict of callables, and I think you should. Just make the keys enumeration objects:

class A(Enum):
    first = 0
    ...

callables = {A.first: lambda: ...}

It is a much bigger distraction if the LLM gets in the way of their expression. It doesn’t matter how exactly they prompted the LLM because I haven’t seen the prompt and I have to be suspicious of all of the text.

The problem with “polish my text” is that fundamentally it implies that you are prompting an LLM and then pasting its output for others to see. As soon as that is the workflow it is very difficult to say where the line is between “polish this” and “write this for me”. I can’t count how many times I have seen someone say something like “I used an LLM to find typos” when it is obvious that the whole text is LLM-generated complete with all the usual hallucinations.

A better place to draw the line is here: don’t paste LLM output for others to see. You the human should be thinking about each sentence as you type it (or dictate or whatever).

3 Likes

No strong opinion about where BehaviourEnum should live, but this is the version I would suggest:

from enum import Enum, member


class BehaviorEnum(Enum):
    """
    Enum where each member bundles a constant value with a callable behavior.
    """
    def __call__(self, *args, **kwds):
        return self._value_(*args, **kwds)

    def __init__(self, *args, **kwds):
        # create class structure if it doesn't exist
        if getattr(self.__class__, '_func_mapping', None) is None:
            self.__class__._func_mapping = {}
        self.__class__._func_mapping[self.name.lower()] = self
        # update the docstring
        self.__doc__ = self._value_.__doc__

    @classmethod
    def _missing_(cls, value):
        """
        support case-insensitive, name-as-value lookups
        """
        if value.lower() in cls._func_mapping:
            return cls._func_mapping[value.lower()]

And in use:

class Activation(BehaviorEnum):

    @member
    def SIGMOID(x: float) -> float:
        "Return the sigmoid of x"
        return 1 / (1 + math.exp(-x))

    @member
    def RELU(x: float) -> float:
        "return the relu of x"
        return x * (x > 0)

Calling is as simple as Activation.RELU(4.7), and we get a nice help():

class Activation(BehaviorEnum)
 |  Activation(*values)
 |
 |  Method resolution order:
 |      Activation
 |      BehaviorEnum
 |      enum.Enum
 |      builtins.object
 |
 |  Data and other attributes defined here:
 |
 |  RELU = <Activation.RELU: <function Activation.RELU>>
 |      return the relu of x
 |
 |
 |  SIGMOID = <Activation.SIGMOID: <function Activation.SIGMOID>>
 |      Return the sigmoid of x
 |
 |
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from enum.Enum:
 |
 |  name
 |      The name of the Enum member.
 |
 |  value
 |      The value of the Enum member.
 |
 |  ----------------------------------------------------------------------
 |  Static methods inherited from enum.EnumType:
 |
 |  __contains__(value)
 |      Return True if `value` is in `cls`.
 |
 |      `value` is in `cls` if:
 |      1) `value` is a member of `cls`, or
 |      2) `value` is the value of one of the `cls`'s members.
 |      3) `value` is a pseudo-member (flags)
 |
 |  __getitem__(name)
 |      Return the member matching `name`.
 |
 |  __iter__()
 |      Return members in definition order.
 |
 |  __len__()
 |      Return the number of members (no aliases)
 |
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from enum.EnumType:
 |
 |  __members__
 |      Returns a mapping of member name->value.
 |
 |      This mapping lists all enum members, including aliases. Note that this
 |      is a read-only view of the internal mapping.
3 Likes

Why is a dict mapping enum members to functions not sufficient for this purpose?

I find enums best kept simple. Whenever I’ve worked on code that gives enum members special behaviors or complex values, I find myself wishing that the enum were just an enum, and the associated datastructures were isolated from “enum-ness”.

The implementation with member-decorated functions is cool (genuinely! I think that’s pretty neat), but I have trouble thinking of a case in which I’d prefer it to a separate lookup table. A lookup table also gives me a place to precisely describe the types of functions contained within (which are probably of uniform type? else why are they all in the enum?).

4 Likes

Depending on the situation, it can make sense to keep the data and the behavior together.

1 Like

The enum.member decorator can be made to support typing the callables being decorated:

class member[**P, R]:
    def __init__(self, value: Callable[P, R]) -> None:
        self.value = value

    def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R:
        return self.value(*args, **kwargs)

Usage:

class Activation(BehaviorEnum):
    activation_member = member[[float], float]

    @activation_member
    def SIGMOID(x: float) -> float:
        "Return the sigmoid of x"
        return 1 / (1 + math.exp(-x))

    @activation_member
    def RELU(x: float) -> float:
        "return the relu of x"
        return x * (x > 0)

Passes mypy

1 Like

But you don’t “have to be suspicious”. Either the text is good or it’s bad. It doesn’t matter where it came from.

And anyway, within a few years, you probably won’t be able to distinguish human from LLM, so I think this is a bit of a pointless thing to worry about.

I agree with this approach, the @member syntax is definitely cleaner and having the behavior defined inline makes more sense than referencing an external function through the tuple.

I had kept the tuple of value and function because in my specific use case the value was a camel-cased string like "Create" which I was using in both logs and as the input to the handler. But with your approach I could get that back with operation.name.title(), and the case-insensitive _missing_ handles the lookup from the event payload cleanly too, so this is a neater design overall.

CREATE_OPERATION = "Create"
UPDATE_OPERATION = "Update"
READ_OPERATION = "Read"
LIST_OPERATION = "List"

def operation_handler(event: dict):
    try:
        operation = _validate_operation(event.get("operation"))
        if operation == CREATE_OPERATION:
            validate_create(event)
            create_operation(event)
        elif operation == UPDATE_OPERATION:
            validate_update(event)
            update_operation(event)
        ...
    except ValidationException as ve:
        log.error("Create operation failed due to validation")
        raise ve

This is the code that motivated the idea. When I first saw it I thought: why not just use an enum instead of string constants? But even after converting to one, the handler looked exactly the same; I had just swapped string constants for enum members without changing the structure. So I understood why the original developer never bothered.

A dict helps, but now you have two things to keep in sync - and in this codebase they already hadn’t. Every error message said “Create” regardless of which operation failed.

I wanted to make enums worth reaching for in this kind of scenario and encourage better typing along the way - BehaviorEnum was just one way I thought that could work.

It does matter though. Generating large volumes of seemingly coherent text is now much less expensive than reading and assessing that text.

For a significant fraction of your peers, this is a serious problem. If it isn’t for you, then I consider you lucky.

Huge volumes of generated text disrupt human-to-human communication. The whole point of posting here is to discuss with other people, not bots or bots-by-proxy.

If you replace one with another without changing structure, then structure does not change. This is not surprising.

You could have actually changed things so that you’re looking up callbacks based on enum members. Or you could do the same with string constants and typing.Literal to get exhaustiveness checking. To me the benefit of an enum in Python is declaration of intent – it makes code more readable. But there’s no magic to an enum. It’s little different from a predefined set of constants.

I think you approached things with too strong of an expectation that adding an enum would solve problems. This looks to me like a case where you can improve the flow of your code with or without enums.

That has very little to do with enums. The code has a logical error.


I’m pretty much unmoved from my earlier stated position that I prefer a separate dispatch table. I don’t know that there’s much to do with that. Maybe a better example would be convincing, but I somewhat doubt it if it’s a matter of preference.

4 Likes