Post requests using threading

Is this good practice? Can it cause any problems?

import threading
import uuid

import requests

def post_request(req_data, header):
    requests.post('', json=req_data, headers=header, timeout=20)

count = 1000
i = 0
while i < count:
    headers = {'Content-type': 'application/json', 'uid': str(uuid.uuid1())}
    t1 = threading.Thread(target=post_request, args=(data, headers), daemon=True)
    t1.start()
    i = i + 1

You’re doing 1000 of these. (a) You might run out of threads, though the
limit is probably well over 1000; (b) you may exceed the capacity of the
server you’re contacting; and (c) depending on your other code, you may
need some locking/access control when you store or process the results
of the posts.
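Point (c) can be handled with a `threading.Lock` around whatever shared state the threads update. A minimal sketch, assuming results are collected into a shared list (the `post_and_store` wrapper here is hypothetical and fakes the HTTP call so the sketch is self-contained; in CPython a bare `list.append` happens to be atomic, but the lock is needed as soon as you do anything compound, like check-then-update or merging into a dict):

```python
import threading

results = []                      # shared between all worker threads
results_lock = threading.Lock()

def post_and_store(req_data, header):
    # hypothetical stand-in for the real requests.post call
    resp = {"uid": header["uid"], "status": 200}
    with results_lock:            # serialize access to the shared list
        results.append(resp)

threads = [threading.Thread(target=post_and_store,
                            args=({"n": i}, {"uid": str(i)}))
           for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 100 — no appends lost to races
```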

Usually one wants to limit the number of parallel connections. Even if
the server copes, there is usually a performance peak: up to that many
connections, more parallelism increases your throughput; beyond it the
server slows down and everyone loses. And of course most servers are
shared, so you don’t want to deny service to other users. So you may
want a semaphore to limit the request calls, for example:

from threading import Semaphore

S = Semaphore(16)

def post_request(req_data, header):
    with S:
        requests.post('', json=req_data, headers=header, timeout=20)

This doesn’t limit your threads, but no more than 16 calls
will run at once.
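You can see that distinction empirically by tracking the peak number of threads inside the semaphore at once. A sketch (the `active`/`peak` counters are illustrative, and `time.sleep` stands in for the HTTP request):

```python
import threading
import time

S = threading.Semaphore(16)
peak = 0
active = 0
counter_lock = threading.Lock()

def limited_call(i):
    global peak, active
    with S:                       # at most 16 threads inside at once
        with counter_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)          # stand-in for the HTTP request
        with counter_lock:
            active -= 1

threads = [threading.Thread(target=limited_call, args=(i,))
           for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds 16, though all 100 threads exist
```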

Cameron Simpson


Instead of using primitives such as a Semaphore, you could use the concurrent.futures library:

import uuid
from concurrent.futures import ThreadPoolExecutor, wait
import requests

executor = ThreadPoolExecutor(max_workers=16)
futures = []
for i in range(1000):
    future = executor.submit(
        requests.post, '',
        json=data,
        headers={
            'Content-type': 'application/json',
            'uid': str(uuid.uuid1()),
        },
        timeout=20,
    )
    futures.append(future)

wait(futures)

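A further advantage of `concurrent.futures` is that each future hands the callable’s return value back to you, so collecting responses needs no explicit locking. A sketch using `as_completed` (the `fake_post` function is a stand-in for `requests.post` so the example runs offline):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fake_post(n):
    # stand-in for requests.post so the sketch is self-contained
    return {"request": n, "status": 200}

with ThreadPoolExecutor(max_workers=16) as executor:
    futures = [executor.submit(fake_post, i) for i in range(100)]
    # as_completed yields each future as soon as it finishes
    responses = [f.result() for f in as_completed(futures)]

print(len(responses))  # 100 — one result per submitted call
```

`Future.result()` also re-raises any exception the worker hit, so failed requests surface here instead of dying silently inside a thread.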