Struct _PyGenObject has been moved to internal/pycore_genobject.h

Hi,

Related to C API: Remove private C API functions (move them to the internal C API) · Issue #106320 · python/cpython, yappi (GitHub - sumerc/yappi: Yet Another Python Profiler, but this time multithreading, asyncio and gevent aware) started failing to build on Python 3.14 because of the following code (yappi/_yappi.c at 1d3f7501701e1f050b6dcd6a86fd36aec08185c7 · sumerc/yappi):


    PyGenObject *gen = (PyGenObject *)PyFrame_GetGenerator(frame);
    if (gen == NULL) {
        return 0;
    }

    return gen->gi_frame_state == FRAME_SUSPENDED;
...

Is there any way to get the state of the generator from the C API?

Yes, you can opt in to the internal C API, which is a good fit for debuggers and profilers.

The fix in Fix build for Python 3.14 by Steap · Pull Request #191 · sumerc/yappi LGTM: define the Py_BUILD_CORE macro and use the pycore_genobject.h header file.
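
For anyone hitting the same build error, here is a minimal sketch of what that approach can look like from an extension module. The helper name frame_is_suspended is made up for illustration, and the exact internal header that provides FRAME_SUSPENDED can vary between CPython versions, so treat this as an outline of the idea rather than the literal PR #191 diff:

/* Sketch only: opt in to the internal C API so gen->gi_frame_state stays
   accessible on Python 3.14. Py_BUILD_CORE must be defined before the
   internal headers are included. */
#ifndef Py_BUILD_CORE
#  define Py_BUILD_CORE 1
#endif
#include <Python.h>
#include "internal/pycore_genobject.h"  /* struct _PyGenObject / gi_frame_state */
/* Depending on the CPython version, FRAME_SUSPENDED may also require
   "internal/pycore_frame.h". */

/* Hypothetical helper: is the generator/coroutine that owns this frame suspended? */
static int
frame_is_suspended(PyFrameObject *frame)
{
    PyGenObject *gen = (PyGenObject *)PyFrame_GetGenerator(frame);
    if (gen == NULL) {
        return 0;
    }
    int suspended = (gen->gi_frame_state == FRAME_SUSPENDED);
    Py_DECREF(gen);  /* PyFrame_GetGenerator returns a strong reference */
    return suspended;
}

Alternatively, Py_BUILD_CORE can be passed through the build system (e.g. as a define_macros entry in setup.py) instead of being defined in the source file.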


@vstinner, I hit some issues with the above. We can now compile successfully on 3.14, but I don't see the FRAME_SUSPENDED state set for some functions inside the profiler callback (which is set via PyEval_SetProfile).

Is this expected behavior?

This is a simple reproducer:

import asyncio
import yappi

async def foo():
    await asyncio.sleep(1.0)


yappi.set_clock_type("WALL")
with yappi.run():
    asyncio.run(foo())
yappi.get_func_stats().print_all()

See the results on 3.13 and 3.14:

yappi (master*) » py 314_asyncio.py | grep foo (3.14)
..sktop/p/yappi/314_asyncio.py:4 foo  2      0.000011  0.000136  0.000068
yappi (master*) » pyenv global 3.13
yappi (master*) » py 314_asyncio.py | grep foo (3.13)
..sktop/p/yappi/314_asyncio.py:4 foo  1      0.000011  1.002795  1.002795

The total time should be ~1.0 sec as on 3.13, but we don't get that on 3.14. Please note that the ttot column (1.002795 vs. 0.000136) is the total time.

When I debug the issue, I see that on 3.13 the function's frame is in the FRAME_SUSPENDED state, but on 3.14 it is always FRAME_EXECUTING. It's as if the place where the profiling/tracing functions are invoked has changed, so that we never see those states somehow?

I was using this to detect coroutine suspends by checking for FRAME_SUSPENDED in the PyTrace_RETURN event of the callback… It worked like that up to 3.13.
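
For concreteness, a stripped-down sketch of that detection logic (illustrative only, not yappi's actual callback) looks roughly like this:

/* Illustrative sketch of a PyEval_SetProfile callback that treats a
   PyTrace_RETURN on a suspended coroutine frame as a "suspend" rather
   than a real return. Up to 3.13 the FRAME_SUSPENDED branch is taken
   for an await; on 3.14, per the observation above, it no longer is. */
static int
profile_callback(PyObject *self, PyFrameObject *frame, int what, PyObject *arg)
{
    if (what == PyTrace_RETURN) {
        PyGenObject *gen = (PyGenObject *)PyFrame_GetGenerator(frame);
        if (gen != NULL) {
            int suspended = (gen->gi_frame_state == FRAME_SUSPENDED);
            Py_DECREF(gen);
            if (suspended) {
                /* Coroutine is suspended at an await: don't close out its timing yet. */
                return 0;
            }
        }
        /* Otherwise it is a real return: finalize the timing for this call. */
    }
    return 0;
}

/* Installed with: PyEval_SetProfile(profile_callback, NULL); */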

Any idea?

For more information, here is the call stack at the point where the issue happens (we call await, which triggers a PyTrace_RETURN event, and then we don't see FRAME_SUSPENDED):

Program received signal SIGINT, Interrupt.
__pthread_kill_implementation (no_tid=0, signo=2, threadid=140737352833920) at ./nptl/pthread_kill.c:44
44	./nptl/pthread_kill.c: No such file or directory.
(gdb) bt
#0  __pthread_kill_implementation (no_tid=0, signo=2, threadid=140737352833920) at ./nptl/pthread_kill.c:44
#1  __pthread_kill_internal (signo=2, threadid=140737352833920) at ./nptl/pthread_kill.c:78
#2  __GI___pthread_kill (threadid=140737352833920, signo=signo@entry=2) at ./nptl/pthread_kill.c:89
#3  0x00007ffff7442476 in __GI_raise (sig=sig@entry=2) at ../sysdeps/posix/raise.c:26
#4  0x00007ffff763d453 in _call_leave (frame=frame@entry=0x7ffff5e2ac20, ccall=<optimized out>, arg=<optimized out>, self=<optimized out>) at probe/profiler.c:1153
#5  0x00007ffff763de4c in _bf_callback (self=<optimized out>, frame=0x7ffff5e2ac20, what=<optimized out>, arg=0x7ffff7d89200 <_Py_NoneStruct>) at probe/profiler.c:1293
#6  0x00007ffff7ac11cc in call_profile_func (arg=<optimized out>, self=0x7ffff5df1290) at Python/legacy_tracing.c:51
#7  sys_profile_return (callable=0x7ffff5df1290, args=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>) at Python/legacy_tracing.c:89
#8  0x00007ffff7aba4aa in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775811, args=0x7fffffffb028, callable=0x7ffff5df1290, tstate=0x7ffff7dfee28 <_PyRuntime+315624>)
    at ./Include/internal/pycore_call.h:169
#9  call_one_instrument (event=2, tool=6 '\006', nargsf=9223372036854775811, args=0x7fffffffb028, tstate=0x7ffff7dfee28 <_PyRuntime+315624>, interp=0x7ffff7dc7fa8 <_PyRuntime+90728>)
    at Python/instrumentation.c:985
#10 call_instrumentation_vector (instr=<optimized out>, tstate=0x7ffff7dfee28 <_PyRuntime+315624>, event=2, frame=<optimized out>, arg2=<optimized out>, nargs=<optimized out>, nargs@entry=3, args=0x7fffffffb020)
    at Python/instrumentation.c:1172
#11 0x00007ffff7abc7cb in call_instrumentation_vector (args=0x7fffffffb020, nargs=3, arg2=<optimized out>, frame=<optimized out>, event=<optimized out>, tstate=<optimized out>, instr=<optimized out>)
    at Python/instrumentation.c:1144
#12 _Py_call_instrumentation_arg (tstate=<optimized out>, event=<optimized out>, frame=<optimized out>, instr=<optimized out>, arg=<optimized out>) at Python/instrumentation.c:1219
#13 0x00007ffff789c54a in _PyEval_EvalFrameDefault (tstate=0x37a7c3, frame=0x7ffff67d8088, throwflag=-169498458) at Python/generated_cases.c.h:7570
#14 0x00007ffff7926e85 in _PyEval_EvalFrame (throwflag=0, frame=0x7ffff5e37888, tstate=<optimized out>) at ./Include/internal/pycore_ceval.h:119
#15 gen_send_ex2 (closing=0, exc=0, presult=0x7fffffffb370, arg=0x7ffff7d89200 <_Py_NoneStruct>, gen=0x7ffff5e37840) at Objects/genobject.c:259
#16 PyGen_am_send (self=0x7ffff5e37840, arg=0x7ffff7d89200 <_Py_NoneStruct>, result=0x7fffffffb370) at Objects/genobject.c:294
#17 0x00007ffff64631d4 in task_step_impl (state=state@entry=0x7ffff5e1c0d0, task=task@entry=0x7ffff5f2af30, exc=exc@entry=0x0) at ./Modules/_asynciomodule.c:3123
#18 0x00007ffff6464760 in task_step (state=0x7ffff5e1c0d0, task=0x7ffff5f2af30, exc=<optimized out>) at ./Modules/_asynciomodule.c:3463

I think the instrumentation layer is doing something differently. Or should I try switching from the legacy API to sys.monitoring?

Any idea is appreciated here.

I answered myself here, but I'm really hoping there is a better alternative to this: