I think we are barking up the wrong tree in some ways.
A real question relates to education and how suitable some languages are compared with others. The compiled/interpreted distinction is not something I see as even slightly important.
Let me explain. Back when I was in high school, I used to sneak into the college computer lab to play around and largely self-educate in languages like BASIC and FORTRAN, among others. For BASIC and some others, I used a teletype that printed on rolled paper and could save to or load from paper tape. FORTRAN, not so much.
There was then a significant difference in interactivity. The FORTRAN had to be typed onto cards, kept in order, and so on, and when I got the inevitable errors, it could take days to make replacement cards and wait for the result. Nothing ran at all until there were no errors, and even once it compiled, logic errors took still more time. BASIC let me edit as I went, insert new lines of code, and it would run until it hit an error. It had the capacity to be interactive, stopping to ask for input and so on. The FORTRAN I used required information to be put into a data section after the main program to provide the input statically.
Years later, I was teaching computer languages like FORTRAN, which required submitting decks of punch cards and getting results back hours or even days later as reams of printouts. At the same time, I was able to do my own work on a minicomputer using languages like PASCAL, where the edit/compile/edit/compile/edit/run cycle went a bit faster and I had other, better facilities, such as editors. Sort of half interactive.
I then switched to C and later C++, and even some S, along with the many other mini-languages UNIX came with, including sh/csh/ksh, while at Bell Labs. But I noted that some of the languages I used, such as AWK or PERL, were of a more interpreted sort and at the time really were interpreted live.
What I think counts is the ability to get partial feedback even before submitting a perfect program. Perhaps not universally, but I associate compilers with being rather unforgiving and interpreters with working with you up to the point where they object. In particular, languages like Python and R let you pause in mid-program and ask for the values of variables or evaluate some statement, so you can see whether things are as expected or have diverged and need debugging. Environments like RStudio will actually show a window with the values of all variables at a glance.
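As a minimal sketch of what I mean, in Python (the function and data here are invented purely for illustration), the standard library's breakpoint()/pdb lets you stop partway through a run and poke at the variables:

    # pause.py -- stop mid-run and inspect state with the standard pdb debugger
    def running_total(values):
        total = 0
        for v in values:
            total += v
            if total > 10:
                breakpoint()   # drops into pdb; try "p total" or "p v", then "c" to continue
        return total

    if __name__ == "__main__":
        print(running_total([3, 4, 5, 6]))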
But many compiled programs can be run in a debugger that lets you similarly pause, examine, and even make changes on the fly. Someone with enough training can actually do quite a bit, and realistically every compiled language is in some sense simply interpreted once, while many interpreted languages actually half-compile when they can, including some that compile small segments on demand, just in time.
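As one small, concrete illustration (assuming CPython; the function is made up), the standard dis module will show the bytecode that Python quietly compiles a function into before its virtual machine interprets it:

    import dis

    def add_one(x):
        return x + 1

    # Show the bytecode CPython compiled this function into.
    dis.dis(add_one)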
Many interpreted-language programs are run with no human in the loop once they are ready and debugged. Whether they evaluate the original code, some kind of byte code, or code converted to machine language may only matter in edge cases, such as when the code creates new code dynamically and evaluates it.
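A small Python illustration of that edge case (the strings here are just examples): code built as text at run time and then evaluated, which a simple ahead-of-time translation cannot fully anticipate:

    # Build an expression as a string and evaluate it at run time.
    expression = "2 ** 10 + 24"
    print(eval(expression))

    # Build an entire function as a string, then compile and run it with exec().
    source = "def greet(name):\n    return 'Hello, ' + name\n"
    namespace = {}
    exec(source, namespace)
    print(namespace["greet"]("world"))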
What is important depends on the education being imparted. Some courses teach ideas and concepts within computer science, and sometimes something like Turtle Graphics meets such needs well enough. Others want to teach you how to actually get things done as an individual, perhaps in a narrow range of applications, and some languages may be fine for that.
Yet others want you to learn to work with groups, to cooperate, and maybe even to use other forms of parallelism, and very different languages may meet those needs.
Whether the goal is education or getting a job, some languages may be a better fit than others. Today there are many jobs wanting a Python programmer, but others value something like Rust.
Teaching some people many things can expand their horizons. Other people seem to want to learn just one thing and will mostly be confused that different languages vary in so many ways.
But one reality is that most of these other considerations give little weight to whether a language is interpreted or half-interpreted, and care much more about how well it suits the need.
Python once suited some needs well enough that many came to see it as a teaching language. But I would suggest it has changed so much that it is perhaps no longer ideal for simple introductory classes. There are too many ways to do anything, and when students ask how to do things, I regularly see advanced answers offered at the wrong level for meeting classroom expectations.
If an assignment is teaching simple ways to use lists, it may be too early and confusing to suggest a one-liner using nested list comprehensions, or to suggest numpy. Those are nice things for after they have learned some basics.
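To illustrate the kind of mismatch I mean (the exercise itself is invented), here is the sort of answer a beginner is expected to produce, next to the sort they are often handed:

    # What the assignment is after: square the even numbers, step by step.
    values = [1, 2, 3, 4, 5, 6]
    squares_of_evens = []
    for v in values:
        if v % 2 == 0:
            squares_of_evens.append(v * v)

    # What they are often told instead: a comprehension one-liner...
    squares_of_evens = [v * v for v in values if v % 2 == 0]

    # ...or reaching for numpy before they have even met plain lists.
    import numpy as np
    arr = np.array(values)
    squares_of_evens = (arr[arr % 2 == 0] ** 2).tolist()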
So, what do we mean by interpretation versus compilation? I program in oodles of languages and notice some differences that can make a language easier or harder to work with linearly, or in other ways, and that may almost dictate whether it should be seen as compiled or interpreted. I do note that, despite what some say, Python may increasingly be mostly compiled, as ever more functionality is rewritten to call functions in libraries created using languages like C/C++ and others. It is mainly the uppermost levels that can be seen as interactive or …
Consider languages that require a variable to be declared, including its exact type, before it can be used. Functions that call other functions must be arranged so that the inner call is declared, or even defined, earlier in the code. Other languages scan down the file, automatically note all the function names declared, and then go back and interpret or compile. It is hard to look ahead while I am still typing!
Yet others just create a variable when it is first used. Moments later it can be given something of another type to hold and it just works. Some play all kinds of different games with when variables are in scope, or whether you can have many functions with the same name but different signatures, and many more such concepts.
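Python is largely in that second camp; a small sketch (the names are invented for illustration):

    # A variable springs into existence on first assignment and can be
    # rebound to a value of a different type a moment later.
    x = 42            # x holds an int
    x = "forty-two"   # now it holds a str, and that just works

    # A function may call another that appears later in the file; the name
    # only has to exist by the time the call actually runs.
    def outer():
        return inner() + 1

    def inner():
        return 1

    print(outer())    # prints 2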
Some concepts make it fairly hard to be interactive, as when I am typing in line by line and forgot to declare a variable earlier, or realize my function calls another not yet defined. This means more planning in advance, or saving code in an editor, modifying it, and resubmitting.
There is no right or wrong here, merely choices, and some are great for production, where errors must be avoided, but at the same time are not a great match for education.
So, no matter how you choose to educate students, they need to know a bit about the rest of the world and not expect too much. Another thread here has been discussing whether Python could benefit from some form of delayed evaluation. Languages like R have always had that, and arguably may want some form of forced immediacy instead. In my experience, people who learn both can experience quite a bit of cognitive dissonance. An ideal education tool may still need to be coupled with some education about other possible ways, if only as a warning. Your first language may leave a serious mental imprint.
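To show the contrast in Python terms (a rough sketch of simulating the delay by hand; Python evaluates arguments eagerly, so the laziness must be spelled out, here with a lambda, whereas R's lazy argument evaluation gives you something similar for free):

    def expensive():
        print("computing...")
        return 42

    # Eager: the argument is evaluated before the call, needed or not.
    def use_eager(value, wanted):
        return value if wanted else None

    use_eager(expensive(), False)         # "computing..." prints anyway

    # Simulated delay: pass a callable, evaluate only if actually needed.
    def use_lazy(thunk, wanted):
        return thunk() if wanted else None

    use_lazy(lambda: expensive(), False)  # nothing is computed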
Luckily for me, I never seem to have had a first language in anything as I typically encounter many at the same time, LOL!