Export data to a CSV file from a list of weblinks scraped with an HTML parser

I am a newbie with the question below.
I have successfully scraped a website, collecting all of the weblinks on that site into a list; when I print the list I can see approximately 45 weblinks.
I want to send the data to a .csv file. My issue is that the result takes only the last weblink from the list and writes it in one row, with each character populating a single cell.
My goal is to print all 45 links in one column, each one under the other.
The code is below. Thank you for any help.

    def __init__(self):
        print("Running WebScraper")
        r = requests.get('Python Tutorial | Python Programming Language Tutorial for Beginners')

        soup = BeautifulSoup(r.content, 'html.parser')
        for a_href in soup.find_all("a", href=True):
            results = (a_href["href"])

            csv_file = 'C:/Users/twbry/OneDrive/Desktop/Python_Project/academic.csv'
            with open(csv_file, 'w', newline='', encoding='utf-8') as csvfile:
                writer = csv.writer(csvfile)
                #for item in results:
            print("print results", results)


The for loop iterates over the result of soup.find_all("a", href=True), so it runs the body of the loop once for each link found.

Inside the loop, you're opening the CSV file for writing (the "w" mode), which truncates the file and creates an empty one on every iteration.
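You can see that truncation in isolation with a small self-contained sketch (the path and link values here are placeholders, not taken from your script):

```python
import csv
import os
import tempfile

# Hypothetical output path, just for the demo.
path = os.path.join(tempfile.gettempdir(), "truncate_demo.csv")

# Re-opening the file with mode "w" on every iteration truncates it
# each time, so only the row from the final iteration survives.
for link in ["first-link", "second-link", "last-link"]:
    with open(path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([link])

with open(path, newline="", encoding="utf-8") as f:
    content = f.read()
print(content)  # only "last-link" remains
```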

What you should do instead is put the for loop inside the with statement, so the file is opened once and each row is written as it is found.
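Putting that together, the writing section can be restructured roughly like this (a sketch: the hard-coded links stand in for the hrefs your soup.find_all loop produces, and the temp-file path replaces your academic.csv path so the example runs on its own):

```python
import csv
import os
import tempfile

# Stand-ins for the scraped results; in the real script each entry
# would be a_href["href"] from soup.find_all("a", href=True).
links = ["https://example.com/a", "https://example.com/b", "https://example.com/c"]

csv_file = os.path.join(tempfile.gettempdir(), "academic.csv")

# Open the file once, then loop *inside* the with statement.
with open(csv_file, "w", newline="", encoding="utf-8") as csvfile:
    writer = csv.writer(csvfile)
    for link in links:
        # writerow expects a sequence of fields: wrapping the string in
        # a list makes the whole link one cell, instead of one cell per
        # character.
        writer.writerow([link])

# Read it back: one link per row, in a single column.
with open(csv_file, newline="", encoding="utf-8") as f:
    rows = list(csv.reader(f))
print(rows)
```

Passing a bare string to writerow is also what produced your one-character-per-cell symptom, since csv.writer treats the string as a sequence of fields.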


Thank you Matthew, I gave it a try and it's exactly what I was looking for.