This is the expected basic approach: applying the sum-of-divisors formula directly.

To improve performance, we could generate all the counts at once. The performance impact will probably be negligible in genuinely difficult cases, though (i.e., the cases that wouldn't be fast no matter what), because this saves time only on setup, not on the actual multiplication.

Using .count re-scans the entire list once for each distinct factor. To avoid this, instead of using a set, we can use a data structure that counts its elements as they are added: collections.Counter. It's essentially a dictionary whose values are counts (with a bit of extra logic).
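For instance, with a hypothetical input list, Counter tallies every factor in a single pass:

```python
from collections import Counter

# One pass over the list, versus one full scan per distinct factor with .count
prime_factors = [2, 2, 3, 3, 3, 5]  # example: the prime factorization of 540
counts = Counter(prime_factors)
print(counts)  # Counter({3: 3, 2: 2, 5: 1})
```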

from math import prod
from collections import Counter

# prime_factors is the input: a list of prime factors, with repetition
print(prod(
    (factor**(multiplicity + 1) - 1) // (factor - 1)
    for factor, multiplicity in Counter(prime_factors).items()
))
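As a concrete check, here is the same computation wrapped in a function (the name sum_of_divisors is mine). With [2, 2, 3], the factorization of 12, it computes ((2**3 - 1) // 1) * ((3**2 - 1) // 2) = 7 * 4 = 28, which matches 1 + 2 + 3 + 4 + 6 + 12:

```python
from math import prod
from collections import Counter

def sum_of_divisors(prime_factors):
    """Sum of divisors, given the prime factors (with repetition)."""
    return prod(
        (factor**(multiplicity + 1) - 1) // (factor - 1)
        for factor, multiplicity in Counter(prime_factors).items()
    )

print(sum_of_divisors([2, 2, 3]))  # 28, the sum of the divisors of 12
```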

Optimizing any further would require rather detailed study and benchmarking. For example, one might suppose that division is expensive, so it would be better to compute a separate prod for the numerator and denominator, and do a single division at the end. However, multiplication isn’t constant-time, so this might backfire.
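For anyone who wants to benchmark that idea, here is a sketch of the single-division variant (a hypothetical alternative, not necessarily faster):

```python
from math import prod
from collections import Counter

def sum_of_divisors_single_division(prime_factors):
    # Multiply all numerators together and all denominators together,
    # then divide once at the end. Whether this beats per-factor
    # division depends on operand sizes: big-integer multiplication
    # is not constant-time, so measure before assuming a win.
    counts = Counter(prime_factors)
    numerator = prod(f**(m + 1) - 1 for f, m in counts.items())
    denominator = prod(f - 1 for f in counts)
    return numerator // denominator

print(sum_of_divisors_single_division([2, 2, 3]))  # 28
```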

My point is that the counting step probably isn't the bottleneck, which (I imagine) is all the multiplications and divisions that subsequently occur. Suppose, for example, our list is [2] * 100 + [3] * 100, so that we end up computing ((2**101 - 1) // 1) * ((3**101 - 1) // 2). In my testing, though, the numbers have to get really big before that matters.

But aside from that, in practical cases it won't really be quadratic, because numbers with a lot of prime factors typically also have a lot of repetition among those factors. Prime factors are not equally likely, after all; any large range of integers contains more even numbers than numbers divisible by 3.
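One quick way to see that skew, using a naive trial-division helper (prime_factors_of is a hypothetical name, not from the original code):

```python
from collections import Counter

def prime_factors_of(n):
    # Naive trial division; fine for small n
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Count how often each prime appears (with multiplicity) across a range:
totals = Counter()
for n in range(2, 1000):
    totals.update(prime_factors_of(n))

# 2 shows up far more often than 3, which shows up more often than 5
print(totals[2], totals[3], totals[5])
```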