Hi all,
Firstly, I’m 100% new to Python. I’ve been given a script from a supplier whose purpose is to go into each ticket, check whether there is an attachment, and if so create a folder and download the attachment into it. When I run it, it works perfectly for about 10 minutes and then throws an error, which from googling looks like a problem with special characters.
Error:

>   File "C:\Users\ME\source\repos\Supplier\Supplier\Supplier.py", line 65, in <module>
>     download_ticket_attachments()
>     ~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
>   File "C:\Users\ME\source\repos\Supplier\Supplier\Supplier.py", line 57, in download_ticket_attachments
>     download_attachment(attachment_url, ticket_id, attachment_name)
>     ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>   File "C:\Users\ME\source\repos\Supplier\Supplier\Supplier.py", line 19, in download_attachment
>     with open(os.path.join(output_dir + "/" + str(ticket_id), attachment_name), 'wb') as f:
>          ~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> OSError: [Errno 22] Invalid argument: 'C:\\Supplier/542744\\Purchase-Order-#1977782-(Placed-on:-21-Oct-2024-11:45-).pdf'
There are about 50 tickets this works absolutely fine for. Everything works as expected and yes, the files open perfectly. Some of them have special characters such as commas, percentages and ampersands, and those are fine. But looking at the failing path, I think it’s the colons in the file name (the `11:45` timestamp), which Windows doesn’t allow, that are causing it to freak out.
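From googling, Windows won’t accept any of `\ / : * ? " < > |` in a file name, so the suggestion I keep seeing is to strip those characters out before saving. Something like this, maybe (completely untested, and `sanitize_filename` is just what I’ve called it):

> import re
>
> # Replace any character Windows forbids in file names with an underscore
> def sanitize_filename(name):
>     return re.sub(r'[\\/:*?"<>|]', '_', name)

If I’ve understood it right, that would turn the `11:45` in the failing file name into `11_45`.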
Below is the code I’ve been given.
> import os
> import requests
> import json
>
> # Configure your Supplier instance details
> subdomain = 'MyCompany'  # Replace with your Supplier subdomain
> api_token = '123456789FakeToken'  # Replace with your API token
> email = 'users@email.com'  # Replace with your email associated with Supplier
> output_dir = 'C:\Supplier'  # Directory to save attachments
>
> # Authenticate with Supplier API
> auth = (email + '/token', api_token)
>
> # Function to download attachment
> def download_attachment(attachment_url, ticket_id, attachment_name):
>     response = requests.get(attachment_url, auth=auth)
>     if response.status_code == 200:
>         with open(os.path.join(output_dir + "/" + str(ticket_id), attachment_name), 'wb') as f:
>             f.write(response.content)
>         print(f"Downloaded {attachment_name}")
>     else:
>         print(f"Failed to download {attachment_name}")
>
> # Function to get all tickets and download attachments
> def download_ticket_attachments():
>     url = f'https://{subdomain}.Supplier.com/api/v2/tickets.json'
>
>     while url:
>         response = requests.get(url, auth=auth)
>         if response.status_code != 200:
>             print(f"Error fetching tickets: {response.status_code}")
>             break
>
>         tickets_data = response.json()
>         for ticket in tickets_data['tickets']:
>             ticket_id = ticket['id']
>             print(f"Processing ticket #{ticket_id}")
>             attachment_url = f'https://{subdomain}.Supplier.com/api/v2/tickets/{ticket_id}/comments.json'
>             attachment_response = requests.get(attachment_url, auth=auth)
>             if attachment_response.status_code == 200:
>                 comments = attachment_response.json()['comments']
>                 for comment in comments:
>                     if 'attachments' in comment:
>                         if len(comment['attachments']) > 0:
>                             # If there are attachments, create a folder to store the ticket's attachments
>                             if not os.path.exists(output_dir + "/" + str(ticket_id)):
>                                 os.makedirs(output_dir + "/" + str(ticket_id))
>
>                             for attachment in comment['attachments']:
>                                 attachment_url = attachment['content_url']
>                                 attachment_name = attachment['file_name']
>                                 download_attachment(attachment_url, ticket_id, attachment_name)
>             else:
>                 print(f"Failed to fetch comments for ticket #{ticket_id}")
>
>         # Check if there are more pages of tickets
>         url = tickets_data['next_page']
>
> # Run the script
> download_ticket_attachments()
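If the file name is the problem, would it just be a case of calling that sanitize function at the top of download_attachment? This is my guess at the change (again untested, so please correct me if I’m off):

> def download_attachment(attachment_url, ticket_id, attachment_name):
>     safe_name = sanitize_filename(attachment_name)  # strip the characters Windows rejects
>     response = requests.get(attachment_url, auth=auth)
>     if response.status_code == 200:
>         # Passing the parts to os.path.join separately also avoids the mixed / and \ in the error
>         with open(os.path.join(output_dir, str(ticket_id), safe_name), 'wb') as f:
>             f.write(response.content)
>         print(f"Downloaded {safe_name}")
>     else:
>         print(f"Failed to download {attachment_name}")

I don’t know if the mixed slashes actually matter on Windows, but it seemed tidier to let os.path.join handle all the separators.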
I’m not 100% useless with code, but I know nothing about Python. I’m under a deadline to get this done, and the supplier will not support their own Python script as they’ve done this as a “favour”. We don’t have the ability to view, let alone edit, anything at source, so that’s out the window too. We need to remove our data from the supplier before April.
Appreciate your help and support on this.
Thanks
Justin