Working on a piece of code that will find the square root of a number using Heron’s method. It’s not all that important to the question, but I’ll put it at the bottom of this anyway. I’m currently doing it with a list of perfect squares from 1-100, but then it cuts off at 100, so it doesn’t work for higher numbers (I did get it to work for 3000, but I think if you go high enough it just won’t work). How would I find the closest perfect square to any number?
Forgot the code sorry
perSqrs = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
N = float(input("Number: "))
E = 0
E = min(perSqrs, key=lambda x: abs(x - E))
for i in range(10):
    E = (E + N / E) / 2
print(E)
print(E * E)
Can’t you just take the square root of the number, round it, then square it to get the closest perfect square?
def psqr(x):
    return int(round(x**0.5))**2
Edit: Ah, Heron’s method is a way to get the square root of a number. I guess this way is cheating, then.
I’ve asked this question 3 times now on different platforms, and every single time it’s “take the square root and round it”, and then “Edit: Oh yeah, you can’t do that”.
That’s because it’s the obvious answer to the question in the title. If you clarify the title, you might get better answers.
I’ve added “without using sqrt” to the title.
Found some interesting information by looking at Methods of computing square roots - Wikipedia:
- The initial estimate for Heron’s method doesn’t need to be a perfect square
- There are several ways of getting an initial estimate of the square root without calculating a square root (also one that involves computing the square root of a number between 0 and 100, which your program already does)
It’s quite well described, I guess you’ll be able to solve your problem using this resource. Don’t hesitate to ask if you have trouble with the implementation in Python.
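One such estimate comes from the size of the number alone. As a hedged sketch (my code, not from the Wikipedia page): the bit length of n pins sqrt(n) down to within a factor of sqrt(2), with no square root computed anywhere.

```python
def rough_estimate(n):
    """A starting guess for sqrt(n), for integer n >= 1."""
    k = n.bit_length()      # 2**(k-1) <= n < 2**k
    # so 2**((k-1)/2) <= sqrt(n) < 2**(k/2), and 2**(k//2) is
    # within a factor of sqrt(2) of the true square root
    return 1 << (k // 2)
```

Such a guess is plenty good enough to start Heron’s method, which then roughly doubles the number of correct digits per step.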
Didn’t even think of that, thanks
Thanks!! I’ll look into that
Heron’s method is a method for approximating the square root of a given number via a sequence of estimates which converges to the true value - it does not involve a predefined list of perfect squares.
Yet you’re using a predefined list of perfect squares to search for your answer - how large that list should be depends on the given number x - and your solution is a function that minimises the absolute value of the difference between x and numbers in your predefined list of perfect squares.

An alternative solution, which does not require a predefined list of perfect squares, would be to calculate the answer as the largest integer square not exceeding, and possibly equal to, the given number x. So you’re reversing the problem.
The range of integers 1...x whose squares are bounded by x is obviously bounded: if x >= 4 then the square root of x cannot exceed (1/2)x; if x >= 9 the square root of x cannot exceed (1/3)x; if x >= 16 the square root of x cannot exceed (1/4)x, etc. The pattern here is that if n is the largest positive integer such that x >= n^2 then sqrt(x) <= (1/n)x.
An example: if x is 101 then you’re searching for integers in the reversed range 10..1 whose squares do not exceed 101 - clearly you would stop at 10 because 10^2 is your answer. The upper bound of 10 does not come from taking the square root of 101, but uses the fact that as 100 is the largest square not exceeding 101, then sqrt(101) <= (1/10)x.
If you like these kind of problems, you should check out @sr-murthy’s Python package for continued fractions - Thanks to this discussion I just learned about its existence - it’s really nice.
It’s a bit helpful, actually. There’s something you missed, and the fact that it doesn’t actually matter helps understand the problem properly.
First, let’s look at these lines carefully:

E = 0
E = min(perSqrs, key=lambda x: abs(x - E))

I guess that you added E = 0 because you were otherwise getting a NameError. But this misunderstood the problem. I guess you wanted this line to find an E value from perSqrs that’s closest to N. But in this case, we need to compare the values according to abs(x - N), not abs(x - E). The code you have does not use a different E value each time in the lambda, because the entire min call has to complete before E is assigned again. So the lambda is actually equivalent to lambda x: abs(x - 0) (using the E value from before), i.e. lambda x: abs(x), i.e. just using abs. In other words, you always just start the loop with E = 1.
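For what it’s worth, the comparison presumably intended (my guess at the fix, with a hypothetical example value standing in for the input N) would be:

```python
perSqrs = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
N = 70.0                                     # example input
E = min(perSqrs, key=lambda x: abs(x - N))   # nearest listed square to N
```

which picks 64 for N = 70, since |64 - 70| < |81 - 70|.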
When debugging, it’s important to check your assumptions!
Anyway, the point is that you don’t actually need to find the integer floor square root in order to get started. As far as I’m able to tell, that’s not what Heron did. And in fact, the theory tells us that any guess will work (although negative guesses will converge to the negative square root instead) - it just takes a few more steps. Not even that many more, because of how it works.
For example, if you tried checking the E value each time through the loop with a larger N that “doesn’t work”, say, 5000, what you’ll see is that it does get pretty close. You’ll also see that halfway through, it gets a value that’s pretty close to 100. So if the initial estimation had worked like you planned, it would basically save the first few guesses, and use the last few to get from “pretty close” to “within floating-point error”.
The way we fix this is to… check the E value each time through the loop. The loop code

for i in range(10):

doesn’t really make sense - the 10 didn’t come from anywhere and doesn’t really relate to how the code works. Instead, because we know the loop theoretically gets us closer each time, we want to repeat the loop until the result is close enough - for example, until the limit of floating-point precision means we can’t improve.
When debugging, it’s also important to understand the specification fully!
I should add that when I said “if n is the largest positive integer such that x >= n^2 then sqrt(x) <= (1/n)x”, it might be pointed out that if we knew such an n then we would have the answer as n^2. That wasn’t my point.

For an arbitrary integer, of course, we will not generally know such an n, but we can use a square root we already know as an initial estimate and upper bound for the square root we’re looking for. The initial estimate is a key feature of Heron’s method.
Thanks.
For square roots there are continued fraction representations, which can be used for approximations.
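For the curious, here’s a sketch of the standard recurrence for the periodic continued fraction of sqrt(n) (my code, not from this thread; it uses math.isqrt only to get the integer part):

```python
from fractions import Fraction
from math import isqrt

def sqrt_convergents(n, count):
    """First `count` continued-fraction convergents of sqrt(n),
    for n a positive integer that is not a perfect square."""
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    # standard recurrences for the convergents h/k
    h_prev, h = 1, a0
    k_prev, k = 0, 1
    out = [Fraction(h, k)]
    for _ in range(count - 1):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        out.append(Fraction(h, k))
    return out
```

For n = 2 this produces the familiar 1, 3/2, 7/5, 17/12, … approximations to sqrt(2).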
If you had a list of all squares spanning your input, you could use binary search to pretty efficiently find the location of your input in that list (number of operations proportional to the number of bits in the length of the list of squares). Then pick whichever neighbor is closer to the input.
The bisect module implements binary search for you. It just remains to supply the list of squares. Two things make this quite practical:

- In Python 3, range(n) acts like a “lazy list”: no list is actually constructed, but it can still be indexed as if it were a list. So bisect can use it as if it were a list.
- The bisect_XXX() methods accept an optional key= argument, to compute the value to be compared from the sequence value. This can be used to compute all (& only) the squares needed by a specific search, on the fly.
Put it all together:
import bisect

def closesq(n):
    if n <= 1:
        return n
    # largest r such that r*r <= n
    r = bisect.bisect_right(range(1, n),
                            n,
                            key=lambda i: i * i)
    if (r+1)**2 - n < n - r**2:
        r += 1
    return r * r
And then:
>>> for i in range(33):
...     print(i, closesq(i))
0 0
1 1
2 1
3 4
4 4
5 4
6 4
7 9
8 9
9 9
10 9
11 9
12 9
13 16
14 16
15 16
16 16
17 16
18 16
19 16
20 16
21 25
22 25
23 25
24 25
25 25
26 25
27 25
28 25
29 25
30 25
31 36
32 36
However, bisect needs to ask its sequence argument for its length, and that will blow up if n is very large:
>>> range(10**20)
range(0, 100000000000000000000)
>>> len(_)
Traceback (most recent call last):
...
OverflowError: Python int too large to convert to C ssize_t
You could, of course, work around that by skipping use of bisect and range, and writing your own binary search in Python. But, as above, there’s no need to compute any squares in advance. closesq(10**1000) works just fine :-). I just had to provide hi and disable the C version of bisect.
Nice trickery! Although I can’t recommend it, I enjoyed and appreciated it.
For pedagogical purposes, I’ll flesh out a self-contained version that doesn’t rely on bisect, and works for ints of any size. This basically came from copying bisect.py’s bisect_right() and specializing it to this single search application. The biggest “win”, though, comes from establishing, at the start, much narrower bounds on the region to be searched.
def closesq(n):
    if n <= 1:
        return n
    k = n.bit_length()
    # 2**(k-1) <= n < 2**k, so
    # 2**((k-1)/2) <= sqrt(n) < 2**(k/2)
    lo = 1 << ((k - 1) // 2)
    hi = 1 << ((k + 1) // 2)
    assert lo * 2 == hi
    assert lo * lo <= n < hi * hi
    lo += 1
    while lo < hi:
        mid = (lo + hi) // 2
        if n < mid * mid:
            hi = mid
        else:
            lo = mid + 1
    assert lo == hi
    r = lo - 1
    #assert r == math.isqrt(n)
    rsq = r * r
    rp1sq = (r + 1) ** 2
    assert rsq <= n < rp1sq
    return rp1sq if rp1sq - n < n - rsq else rsq
But I’m unclear on what the OP actually wants. As the commented-out assert above says, all the fiddling to get r can be done in one step using math.isqrt(n) (which is, by far, the easiest and fastest way to get the result). So it really is computing the square root of n.

But then, in general, there’s no way to determine when an int is a perfect square short of finding its square root. You can sometimes prove an int isn’t a perfect square more simply (e.g., no int i satisfying i % 4 == 2 can be a perfect square, and more generally any square must have an even number of trailing 0 bits), but demonstrating that it is a square generally requires exhibiting a square root.
Mark also shared various isqrt implementations. Don’t know whether Heron’s is among them…
Perhaps a good replacement for their min(list of squares, key=diff) solution:
def closest_square(n):
    i = sq = 0
    while n > sq + i:
        sq += 2*i+1
        i += 1
    return sq
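To spell out why this works (my commentary, not the poster’s): consecutive squares differ by the successive odd numbers, since (i+1)**2 - i**2 == 2*i + 1, and sq + i == i*i + i lies between i*i and (i+1)**2; the true midpoint i*i + i + 0.5 is never an integer, so there are no ties. A copy with those observations as comments:

```python
def closest_square(n):
    i = sq = 0                  # invariant: sq == i*i
    while n > sq + i:           # n past the midpoint between i*i and (i+1)**2?
        sq += 2*i + 1           # step to the next square: (i+1)**2 = i*i + 2i + 1
        i += 1
    return sq
```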
Mark wrote the code for CPython’s math.isqrt() - he’s an expert here.
Yes, the snippet you linked to covers Heron’s method. But not by that name. To a mathematician (which Mark is), “Heron’s method” is just a simple special case of Newton’s root-finding method.
Faster but ugly version of the above, reducing the number of calculations:
def closest_square(n):
    i = sq = 0
    while True:
        mid = sq + i
        if n <= mid:
            return sq
        i += 1
        sq = mid + i
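For anyone who wants to convince themselves the restructured loop is equivalent: per iteration it carries mid = sq + i forward, saving the 2*i multiplication and the re-computation of sq + i in the loop test. A renamed copy for checking (my naming):

```python
def closest_square_fast(n):
    i = sq = 0                 # invariant at the top of the loop: sq == i*i
    while True:
        mid = sq + i           # midpoint marker between i*i and (i+1)**2
        if n <= mid:
            return sq
        i += 1
        sq = mid + i           # sq + old_i + new_i == sq + 2*old_i + 1
```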