Unittest mocking: What's the difference between MockClass() vs MockClass.return_value?

What’s the difference between MockClass() vs MockClass.return_value? They seem to be the same?
Code from the unittest.mock — mock object library page of the Python 3.12.3 documentation:

from unittest.mock import patch


class Class:
    def method(self):
        pass


with patch('__main__.Class') as MockClass:
    instance = MockClass.return_value
    print(MockClass)
    print(MockClass.return_value)
    print(MockClass())

Output:

<MagicMock name='Class' id='1971932896848'>
<MagicMock name='Class()' id='1971892150288'>
<MagicMock name='Class()' id='1971892150288'>

As the output shows, they are the same object by default. return_value specifies the value to return when the mock object is called; by default it is a new child mock, created once and then reused for every call. The difference is that return_value is settable: you can assign to it to change what the mock returns when it is called.
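
For example, here is a minimal sketch (not from the thread) showing both the default behaviour and what happens after you set return_value:

from unittest.mock import MagicMock

mock = MagicMock()

# By default, calling the mock returns its return_value, a single child mock.
print(mock() is mock.return_value)   # True
print(mock() is mock())              # True -- same child mock every call

# Setting return_value changes what the mock returns when called.
mock.return_value = 42
print(mock())                        # 42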

Normally, one patches methods of the class instead, for finer control. This usage is probably easier to understand. When you set the .return_value of a patched method, you basically tell the mock, “don’t ever actually use the real method; whenever the code would call this method, skip all the calculation and return the stored .return_value”.
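
A sketch of that pattern, using a made-up Calculator class and the value 99 purely for illustration:

from unittest.mock import patch


class Calculator:
    def compute(self):
        # Pretend this is a slow or expensive calculation.
        return 42


with patch.object(Calculator, 'compute', return_value=99) as mock_compute:
    print(Calculator().compute())    # 99 -- the real method is never run
    mock_compute.assert_called_once()

print(Calculator().compute())        # 42 -- the real method is restored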

But - classes are callable, and normally, calling the class creates an instance.

Therefore, when you patch the class, you have the option to change what is returned by calling it, by setting the .return_value of the mock. If you don't set it, calling the patched class still works: you get the default child mock, which stands in for the instance.
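
A sketch of that, reusing Class from the documentation snippet above; the 'hello' value is arbitrary, and the '__main__.Class' target assumes the code runs as a top-level script:

from unittest.mock import patch


class Class:
    def method(self):
        pass


with patch('__main__.Class') as MockClass:
    # Configure the instance that calling the patched class will return.
    MockClass.return_value.method.return_value = 'hello'

    instance = Class()                          # this actually calls MockClass
    print(instance is MockClass.return_value)   # True
    print(instance.method())                    # hello

Configuring MockClass.return_value up front like this is the usual way to control what the code under test sees when it instantiates the patched class.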
