Type-hinting Numpy functions to accept both arrays and scalars, pedantically

Consider the following function:

import numpy as np

def law_of_cosines(side1, side2, angle):
    # Third side of a triangle: c = sqrt(a**2 + b**2 - 2*a*b*cos(angle)).
    # np.multiply (rather than side1 * side2) keeps plain Python sequences
    # broadcasting as arrays instead of triggering list-repetition semantics.
    return np.sqrt(
        np.square(side1) +
        np.square(side2) -
        2 * np.multiply(side1, side2) * np.cos(angle)
    )

This function accepts more or less arbitrary combinations of inputs: Numpy arrays with float-based dtypes; “array-like” objects (sequences, or objects implementing the relevant magic methods) whose elements are Numpy floats or native Python floats; and individual “scalar” values, whether Numpy float objects or native Python float instances.
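
For example, all of the following calls work at runtime (the particular values are arbitrary):

law_of_cosines(3.0, 4.0, np.pi / 2)                                     # native Python floats
law_of_cosines(np.float64(3.0), np.float64(4.0), np.float64(1.0))       # Numpy scalar floats
law_of_cosines([3.0, 4.0], [4.0, 5.0], [0.5, 1.0])                      # array-like sequences of floats
law_of_cosines(np.array([3.0, 4.0]), np.array([4.0, 5.0]), np.pi / 2)   # arrays mixed with a scalar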

This topic has been discussed at least once before:

However, my goal is to be at least somewhat pedantic. I want to accept a reasonably broad range of input types while also being strict about dtypes. For example, the proposed solution in the thread above would not prevent me from passing ['a', 'b', 'c'] into my law_of_cosines function, and I would definitely like to prevent that.
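
To illustrate the problem, here is a deliberately forgiving signature built entirely on npt.ArrayLike (my own sketch, not necessarily the thread's exact proposal):

from typing import Any
import numpy as np
import numpy.typing as npt

def law_of_cosines_loose(
    side1: npt.ArrayLike, side2: npt.ArrayLike, angle: npt.ArrayLike
) -> npt.NDArray[np.floating[Any]]:
    ...

# A type checker such as mypy accepts this call, because str (and nested
# sequences of str) are part of the ArrayLike union, even though the actual
# computation would obviously fail at runtime:
law_of_cosines_loose(['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i'])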

I can do pretty well by assuming that inputs are only float or NDArray[np.floating[Any]], but that leaves out array-like inputs entirely, since ArrayLike cannot be parameterized by dtype. Thus my function will accept 3.5 and np.asarray([3.5]) but not [3.5], which seems unfortunate.
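
Spelled out, that strict-but-narrow version looks roughly like this (the FloatInput alias and the return annotation are just my own shorthand):

from typing import Any, Union
import numpy as np
import numpy.typing as npt

# Accepts 3.5 and np.asarray([3.5]), but a type checker rejects [3.5].
FloatInput = Union[float, npt.NDArray[np.floating[Any]]]

def law_of_cosines(
    side1: FloatInput, side2: FloatInput, angle: FloatInput
) -> Union[np.floating[Any], npt.NDArray[np.floating[Any]]]:
    ...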

What is the currently recommended best practice for typing Numpy functions that are meant to be “forgiving” about their input types? Or is this still a WIP topic?