Extracting number of LinkedIn search results

Hi, I need help extracting the number of search results for each keyword in a .txt file.
For example, if my keyword is “buy cars”, the element I want from the search page is “172 results”.


Now let’s say I have 100 keywords in a .txt file and want to extract the number of results for each one. How can I achieve this?
I tried the code below, but it returns no results:

import requests 
import random
import csv
import time
import numpy
from bs4 import BeautifulSoup
from time import sleep
from selenium import webdriver

# Delays


# Read the keywords from a file
with open("keywords.txt", "r") as file:
    keywords = file.read().splitlines()

# Define the User-Agent header
driver = webdriver.Chrome()

# Create a new CSV file and write the headers
with open("results.csv", "w", newline="") as file:
    writer = csv.writer(file)
    writer.writerow(["Keyword", "Total Results"])

    # Perform the search for each keyword and write the total number of results to the CSV file
    for keyword in keywords:
        response = requests.get(f"https://www.linkedin.com/search/results/companies/?keywords={keyword}", driver=driver)
        delays = [3, 5, 7, 4, 4, 11]
        time.sleep(numpy.random.choice(delays))
        soup = BeautifulSoup(response.content, "html.parser")
        result_stats = soup.find("h2", class_= "pb2 t-black--light t-14")
        if result_stats:
            total_results = result_stats.get_text()
            writer.writerow([keyword, total_results])
            print(f"Keyword: {keyword}, Total Results: {total_results}")
        else:
            writer.writerow([keyword, "Not found"])
            print(f"Keyword: {keyword}, Total Results: Not found")

I’m almost certain scraping this website is against its terms of service. They may have APIs you can use to get this info legitimately.

Besides that, I don’t think requests.get takes a driver argument. I think you’d have to use driver.get (on the Selenium webdriver object) or something like that to control the web browser directly.
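Building on that, here’s a minimal sketch of the Selenium route, assuming you’re logged in to LinkedIn in the driven browser and the count appears as text like “172 results”. LinkedIn’s markup changes often, so this parses with a loose regex instead of a hard-coded class name, and the same terms-of-service caveat applies:

```python
import re
import time
from urllib.parse import quote

def extract_result_count(html):
    """Pull the number out of text like '172 results' or 'About 1,234 results'."""
    match = re.search(r"([\d,]+)\s+results?", html)
    if match is None:
        return None
    return int(match.group(1).replace(",", ""))

def fetch_result_count(keyword):
    # Deferred import so extract_result_count can be tested without a browser.
    from selenium import webdriver

    driver = webdriver.Chrome()
    try:
        # driver.get() navigates the real browser: unlike requests.get(), it
        # runs LinkedIn's JavaScript and uses the browser's logged-in session.
        driver.get(
            "https://www.linkedin.com/search/results/companies/"
            f"?keywords={quote(keyword)}"
        )
        time.sleep(5)  # crude wait for the page to render
        return extract_result_count(driver.page_source)
    finally:
        driver.quit()
```

You could call fetch_result_count(keyword) inside your existing loop and write the value to the CSV as before.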

If you had the cookies or some sort of session, then you could use requests.get.
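For the cookie route, a sketch might look like this. The `li_at` cookie name is my assumption about LinkedIn’s session cookie; you’d copy the real value from your browser’s developer tools while logged in, and the same terms-of-service concern applies:

```python
import requests
from urllib.parse import quote

session = requests.Session()
# "li_at" is assumed to be LinkedIn's auth cookie; paste the real value
# from a logged-in browser's developer tools.
session.cookies.set("li_at", "PASTE_COOKIE_VALUE_HERE", domain=".linkedin.com")
session.headers["User-Agent"] = "Mozilla/5.0"  # placeholder User-Agent

def search_url(keyword):
    # quote() URL-encodes the keyword, e.g. "buy cars" -> "buy%20cars"
    return ("https://www.linkedin.com/search/results/companies/"
            f"?keywords={quote(keyword)}")

# With a valid cookie you could then fetch the page with:
# response = session.get(search_url("buy cars"))
```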