Class inheritance: changing the treatment of a parameter

I am writing a derived class, and would like it to treat one parameter of the base class differently from how the base class treats it. In particular, I want color RGB values, which the base class expects in the range 0-1, to be given on the conventional 0-255 scale.

I have tried this, not really expecting it to work, and it did not. Is there any way to modify how a class interprets the base class’s own initializing variables?

from vpython import sphere, vector
class Spheres(sphere):
    def __init__(self, color):
        self.color = color / 255  # sets the attribute but never calls sphere.__init__()

P.S. I realize it would be easier to create a wrapper for the color value, as below, but am still curious whether the above is possible.

def RGB(color):
    return color / 255

RGB colours are often represented by a tuple of 3 values, either int or float.
For example:

black = (0, 0, 0)
green = (0, 255, 0)

It is possible that you can use a 24-bit or 30-bit int as the colour value.

What type do you need for color?
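As an illustration of the packed-int representation mentioned above (a sketch of the general idea, not anything vpython itself uses):

```python
def pack_rgb24(r, g, b):
    """Pack three 8-bit channel values into one 24-bit int."""
    return (r << 16) | (g << 8) | b

def unpack_rgb24(value):
    """Split a 24-bit int back into its three 8-bit channels."""
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

green = pack_rgb24(0, 255, 0)  # 0x00FF00
assert unpack_rgb24(green) == (0, 255, 0)
```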

A different approach would be to call the superclass’s initializer with slightly different parameters.

class Spheres(sphere):
    def __init__(self, color):
        super().__init__(color=color / 255)

This may be more effective for you. (Do check whether you want to divide by 255 or 256 though, and also whether you want float division or floor division.)

I solved the problem with the following:

class Spheres(sphere):
    def __init__(self, *args, **kwargs):
        if "color" in kwargs.keys():
            kwargs["color"] /= 255
        super().__init__(*args, **kwargs)
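To see the interception at work without needing a vpython display, here is the same pattern against a hypothetical stand-in base class (the names here are illustrative, not vpython's):

```python
class Base:
    """Stand-in for vpython's sphere: stores whatever colour it is given."""
    def __init__(self, color=(1.0, 1.0, 1.0)):
        self.color = color

class Spheres(Base):
    def __init__(self, *args, **kwargs):
        # Rescale an incoming 0-255 colour to the 0-1 range the base expects.
        if "color" in kwargs:
            r, g, b = kwargs["color"]
            kwargs["color"] = (r / 255, g / 255, b / 255)
        super().__init__(*args, **kwargs)

s = Spheres(color=(0, 255, 0))
assert s.color == (0.0, 1.0, 0.0)
```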

Thanks very much for the suggestion to reference the superclass, Chris; I didn’t know this was possible!!!

For some bizarre reason, Visual Python uses values between 0-1 for each parameter of RGB. Don’t ask me why.

That looks good! Yeah, I wasn’t sure how it would be formatted, but that’s decent. BTW, you can simplify that condition slightly - instead of asking if “color” is among the dictionary’s keys, you can just ask if “color” is in the dictionary itself:

if "color" in kwargs:
    kwargs["color"] /= 255

I would still recommend checking whether you want to divide by 255 or by 256, and whether you want to use /= or //=, but that looks like a pretty decent way to do it.
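A quick illustration of why that distinction matters here (a sketch, not from the thread's code) - floor division would collapse every value below 255 to zero:

```python
value = 200
scaled = value / 255    # true division: about 0.784, a usable 0-1 intensity
floored = value // 255  # floor division: 0 - everything below 255 collapses
```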

That way the colour depth doesn’t matter. Not everything is 8-bit.

Cameron Simpson

It’s pretty easy to mask off low-order bits for e.g. 4-bit color.
Conversion from a decimal value seems harder, whatever the color bit depth.

Yes, but if you ever want to go to higher color depth, it becomes incompatible (since eg getting 4-bit color now requires you to mask off a different set of bits).
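For instance, a channel value stored at one bit depth has to be reshifted for another, and full scale at one depth doesn't land on full scale at another (a sketch of the incompatibility being described; the helper name is mine):

```python
def to_depth(value, from_bits, to_bits):
    """Rescale a channel value from one bit depth to another by shifting."""
    if to_bits >= from_bits:
        return value << (to_bits - from_bits)
    return value >> (from_bits - to_bits)

assert to_depth(255, 8, 4) == 15     # 8-bit full scale -> 4-bit full scale
assert to_depth(255, 8, 10) == 1020  # note: not 1023 - shifting misses full scale
```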

It’s pretty easy to mask off low-order bits for e.g. 4-bit color.

And for 10-bit colour?

Conversion from a decimal value seems harder, whatever the color bit depth.

The nice thing about 0.0…1.0 is that it is depth agnostic. Maybe
provide some properties for the desired colour depths, eg:

 @property
 def r8(self):
     return int(self.r * 255)

or something like that? Equally you can provide setters to allow
assigning an 8-bit value.
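Filled out, that idea might look like the following hypothetical colour class (none of these names come from vpython; round() is used so that 1.0 maps exactly to 255):

```python
class Color:
    """Stores one channel as a float in 0.0-1.0, exposes an 8-bit view of it."""
    def __init__(self, r=0.0):
        self.r = r

    @property
    def r8(self):
        return round(self.r * 255)

    @r8.setter
    def r8(self, value):
        self.r = value / 255

c = Color()
c.r8 = 128            # assign on the 8-bit scale...
assert c.r8 == 128    # ...and read it back
```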

Anyway, my main point was that a float 0…1 range design decision may
have been driven by being agnostic to the colour depth.

Cameron Simpson

Exactly. Color computations are a lot easier with a normalized float in the range 0.0…1.0. And they are more precise as well, because the float offers a (practically) continuous range of values, while e.g. uint8 has only 256 discrete values. And thus 100% intensity of some component is 255, and 50% is… 128: not very accurate.
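A small check of that precision point (a sketch): 50% intensity is exact in the normalized float form, but can only be approximated in uint8.

```python
half = 0.5                    # exact 50% as a normalized float
as_uint8 = round(half * 255)  # nearest uint8 representation: 128
back = as_uint8 / 255         # about 0.50196 - no longer exactly 50%
assert as_uint8 == 128
assert back != 0.5
```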