Efficient Counter.total method using a total_values variable performing accounting of values during addition or subtraction from counter


I propose introducing a total_values variable in the Counter class to efficiently calculate the sum of counts without iterating over self.values() each time. This improvement aims to enhance the performance of the total method, especially for large Counter instances.

Currently, the total method uses sum(self.values()), which has a time complexity of O(n), where n is the number of elements in the Counter. By maintaining a separate total_values variable, we can achieve constant time complexity for the total method.
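For reference, here is the behaviour being optimized, sketched with the stock Counter (the string 'abracadabra' is just an arbitrary example):

```python
from collections import Counter

c = Counter('abracadabra')

# Every check recomputes the sum by walking all n distinct elements.
print(sum(c.values()))  # 11
```

On Python 3.10+, `c.total()` performs exactly this `sum(self.values())` on every call.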


  1. Initialization:
  • Introduced self.total_values = 0 in the __init__ method to initialize the total_values attribute.
  2. Total Method:
  • Modified the total method to return self.total_values instead of sum(self.values()).
  3. Update Method:
  • Updated the update method to increment total_values when elements are added, ensuring it works efficiently with iterable updates.
  4. Delitem Method:
  • Updated the __delitem__ method to decrement total_values when elements are deleted.
  5. Other Methods:
  • Modified methods like subtract, __add__, __sub__, __or__, __and__, __iadd__, __isub__, __ior__, and __iand__ to update total_values accordingly during operations.
  6. Internal Methods:
  • Updated internal methods like _keep_positive to adjust total_values when elements are removed.


from collections.abc import Mapping  # public stand-in for CPython's _collections_abc

class Counter(dict):
    def __init__(self, iterable=None, /, **kwds):
        self.total_values = 0  # must be set before update() runs
        super().__init__()
        self.update(iterable, **kwds)

    def __setitem__(self, elem, count):
        # Adjust the running total by the change in this element's count,
        # so direct assignments such as c['d'] = 4 stay consistent.
        self.total_values += count - self.get(elem, 0)
        super().__setitem__(elem, count)

    def __delitem__(self, elem):
        if elem in self:
            self.total_values -= self[elem]
            super().__delitem__(elem)

    def update(self, iterable=None, /, **kwds):
        # Every write goes through __setitem__, which keeps total_values current.
        if iterable is not None:
            if isinstance(iterable, Mapping):
                for elem, count in iterable.items():
                    self[elem] = self.get(elem, 0) + count
            else:
                for elem in iterable:
                    self[elem] = self.get(elem, 0) + 1
        if kwds:
            self.update(kwds)

    def total(self):
        'Sum of the counts'
        return self.total_values

# Example usage
c = Counter(a=3, b=2, c=1)
print(c.total())  # Output: 6

c['d'] = 4
print(c.total())  # Output: 10

del c['a']
print(c.total())  # Output: 7


This enhancement aims to improve the performance of the total method, especially in scenarios with large Counter instances. The maintenance of the total_values variable ensures constant time complexity for obtaining the sum of counts, making the Counter class more efficient and responsive. Feedback and discussions on this proposal are welcome.

You mean making that one particular operation more efficient, at the expense of making everything else less efficient?

Such a variable was btw discussed in the issue that led to the creation of the method. How would you resolve the issue Mark brought up, temporarily setting a value to 1e100, causing inaccuracy?
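For concreteness, here is a minimal sketch of that inaccuracy (not Counter itself, just a dict plus a running float total):

```python
# A running float total that temporarily absorbs a huge count
# silently loses the small counts.
counts = {}
running_total = 0.0

counts['x'] = 5
running_total += 5            # running_total == 5.0

counts['big'] = 1e100
running_total += 1e100        # the 5 is absorbed: 5.0 + 1e100 == 1e100

del counts['big']
running_total -= 1e100        # leaves 0.0, not 5

print(running_total)          # 0.0  (the cached total is now wrong)
print(sum(counts.values()))   # 5    (recomputing on demand stays exact)
```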


I’m curious what kind of code would benefit from this optimization. It seems like this is only helpful if you need the total a lot while the Counter is still changing, and I’m not sure when that would come up. But I mostly use Counters in a sort of create-once, read-often way–once I’ve created one I don’t modify it again.


Hi @jamestwebber, thanks for taking the time to discuss this.

That’s a valid point. The optimization I’ve introduced may be most beneficial in scenarios where a Counter is frequently updated, especially in situations where updates are interleaved with the need to compute the total. Here are a few scenarios where this optimization could be particularly useful:

  1. Real-time Analytics:
    If you’re using Counters to keep track of real-time events or statistics, where updates are frequent and you need to query the total frequently, this optimization can offer performance benefits.

  2. Dynamic Data Sets:
    In cases where the Counter represents a dynamic dataset that changes frequently, and you need to query the total during these changes, the constant time complexity of the proposed total method could be advantageous.

  3. Incremental Processing:
    If your application involves processing data incrementally and requires updates to the total as the Counter evolves, the proposed optimization could lead to more efficient computations.

While the optimization might not be crucial for use cases where Counters are created once and rarely modified, it can significantly improve performance in scenarios with more dynamic or frequently updated Counters.

But, as @pochmann pointed out, it improves performance for this one thing at the expense of slightly more work on every update to the counter.

The performance improvement is going to depend on the relationship between the number of elements you’re tracking and how often you’re updating the values and checking the total. i.e. if you have N keys in your Counter, it only makes sense to cache the total if you need to check it very frequently–more than once every N updates. Otherwise, the time spent on updating the total would be more than the cost of calculating it on demand.
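As a rough, machine-dependent illustration of that break-even (N = 10,000 keys is an arbitrary choice):

```python
from collections import Counter
from timeit import timeit

c = Counter(range(10_000))   # N = 10,000 distinct keys, count 1 each
cached_total = 10_000        # a hypothetical separately maintained total

per_sum = timeit(lambda: sum(c.values()), number=100) / 100   # O(N) per check
per_read = timeit(lambda: cached_total, number=100) / 100     # O(1) per check

# Reading the cache is far cheaper per check, but every Counter update
# would also pay a small constant cost to keep cached_total current.
print(per_sum > per_read)  # True
```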

It also seems pretty simple to track the total separately, on the occasions that this is valuable. And doing so would provide a clear benchmark of whether it’s worth doing at all, since it’s effectively the same code that would be added to the class.
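For instance, a sketch of that separate tracking, assuming integer deltas applied one at a time:

```python
from collections import Counter

c = Counter()
running_total = 0

for elem, delta in [('a', 3), ('b', 2), ('a', -1)]:
    c[elem] += delta        # ordinary Counter update
    running_total += delta  # the one extra line that maintains the total

print(running_total)        # 4
print(sum(c.values()))      # 4 (what total() would recompute on demand)
```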


Hi @pochmann, thanks for taking the time to discuss this.

The proposed modification aims to optimize a specific operation without significantly affecting the overall performance of other functions in the Counter. All the changes introduced run constant-time operations.

A potential cost is in the update method, where the proposed code folds the given iterable's counts into total_values, a linear-time pass over the input.

However, update already iterates over the entire input to merge it into the underlying dict, so that pass is O(m) for m input items (often described as O(m + n) when merging into a dict with n existing entries). The per-element adjustment of total_values therefore keeps the same asymptotic complexity as the existing update function. The intention is to maintain consistency and not introduce additional overhead.

Regarding the issue Mark raised, where temporarily setting a value to 1e100 caused inaccuracy: that is a real hazard for a cached running total when float counts are involved, and it would need to be handled explicitly, for example by falling back to recomputation when non-integer counts appear, rather than being solved automatically by this variable. I believe the approach can still strike a balance between addressing that concern and maintaining the performance of the existing functionality for the common integer case.

I'm curious to hear your thoughts on this approach and any suggestions for further improvement. How do you think we can address the precision concern while keeping performance at its best?

P.S. Thanks for linking the previous issue, I propose we revisit that here.

Thanks for delving into the nuances of the performance considerations.

You’re absolutely right, the effectiveness of this modification indeed relies on the usage patterns, specifically the frequency of updates and total checks in relation to the number of elements being tracked. It’s a delicate balance, and I appreciate your insightful perspective on when it makes sense to cache the total.

While it introduces a bit more work on every update, the intention is to enhance performance for scenarios where checking the total is a frequent operation. The idea is to provide an option for users who value rapid total checks and are willing to trade off a bit of update overhead.

Regarding tracking the total separately, it’s a valid suggestion. However, one consideration for incorporating it directly into the class is the convenience it brings. Users wouldn’t need to manage a separate variable or add extra code; the Counter itself takes care of it.

Your point about a clear benchmark with separate tracking is intriguing. It would be interesting to explore that avenue and compare the performance trade-offs in practical scenarios. I’m open to further discussions on the best way forward. What are your thoughts on striking a balance between simplicity, convenience, and performance optimization?

This feature isn’t something that would make a difference to me[1], so I’m inclined to say it isn’t worth adding. I also have absolutely zero authority over such a decision. :sweat_smile:

I think a strong proposal to make this change would involve some real-world examples of where it would make a difference. That is, not hypothetical scenarios, but real code out in the world that would benefit from this change (ideally with a benchmark showing the improvement).

  1. although I probably also wouldn’t care, as I’m not using Counter in a performance-critical way ↩︎

How does your proposal provide an option? As written, it replaces the existing behaviour with one with different trade-offs. So people for whom the existing behaviour is better would be penalised.