Need help using multi-threading on GPUs

Hi, I’m Divyanshu. I am trying to implement multi-threading on GPUs. I am using Amazon SageMaker’s JupyterLab as the interface.

import concurrent.futures
import time

t1 = time.time()
with concurrent.futures.ThreadPoolExecutor() as executor:
    # submit one summarization task per pair
    future_to_pair = {executor.submit(custom_summarizer_2, pair): pair for pair in pairs}
    for future in concurrent.futures.as_completed(future_to_pair):
        print(future.result(), time.time() - t1)

When I run this code, it only uses the CPU. But I want the multi-threading to use all devices (CPUs and GPUs).
Can anyone please help me with this? I would be grateful for any sort of help or advice.

Running code on a GPU is a lot more complicated than that. ThreadPoolExecutor only spreads work across CPU threads; the GPU will only be used if the code running inside those threads calls into a GPU-aware library.

I believe one of the most popular solutions for using GPU compute from Python is numba, but I know next to nothing about it myself.

https://numba.pydata.org/
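
For reference, here is a minimal sketch of what a Numba CUDA kernel can look like. This assumes an NVIDIA GPU with the CUDA toolkit and numba installed; the kernel name, array, and block sizes are just placeholders for illustration, not anything from the code above.

import numpy as np
from numba import cuda

@cuda.jit
def add_one(arr):
    i = cuda.grid(1)             # global thread index on the GPU
    if i < arr.size:             # guard against out-of-range threads
        arr[i] += 1.0

data = np.zeros(1_000_000, dtype=np.float32)
d_data = cuda.to_device(data)                # copy the array to GPU memory
threads_per_block = 256
blocks = (data.size + threads_per_block - 1) // threads_per_block
add_one[blocks, threads_per_block](d_data)   # launch the kernel on the GPU
result = d_data.copy_to_host()               # copy the result back to the CPU

The point is that the work has to be written as a GPU kernel (or call a library that provides one); wrapping ordinary Python functions in a ThreadPoolExecutor will not move them onto the GPU.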