Export database to an Excel file


I am trying to write code to export my database to an Excel file.
Again I have a problem with modules.
I tried to find out how to install them, but it still doesn't work.
Please help, thanks.

# beginning of code

import pandas as pd
import openpyxl

# welcome
print("This is a simple database.")
print("Please input data.")

# input data
username = input("Name: ")
usersurename = input("Surename: ")
age = input("Enter age: ")
height = input("Enter height: ")
yob = input("Enter YoB: ")

# open file
f = open("data_base.txt", "a")

# write date to file
f.write("Name: ")
f.write("Surename: ")
f.write("Age: ")
f.write("height: ")
f.write("yob: ")
# close file

# open file data base and print data
f = open("data_base.txt", "r")

# export to excel
df = pd.DataFrame([[username, usersurename, age, height, yob], [12, 22, 32], [31, 32, 33]],
                  index=['Name', 'Surnename', 'Age', 'height', 'yob'], columns=['a', 'b', 'c'])

# end export to excel

# end of code
Python 3.11.0rc1 (main, Aug 12 2022, 10:02:14) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license()" for more information.

======== RESTART: /home/piotr/python/data_base/02_data_base_export_to_excel.py ========
Traceback (most recent call last):
  File "/usr/lib/python3.11/idlelib/run.py", line 578, in runcode
    exec(code, self.locals)
  File "/home/piotr/python/data_base/02_data_base_export_to_excel.py", line 3, in <module>
    import pandas as pd
  File "/usr/lib/python3/dist-packages/pandas/__init__.py", line 22, in <module>
    raise ImportError(
ImportError: Unable to import required dependencies:


Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.

We have compiled some common reasons and troubleshooting tips at:


Please note and check the following:

  * The Python version is: Python3.11 from "/usr/bin/python3.11"
  * The NumPy version is: "1.21.5"

and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.

Original error was: No module named 'numpy.core._multiarray_umath'

As the section of the error message you posted entitled “IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!” tells you, make sure to compare the Python and the NumPy version shown to ensure they are correct.

Per the NumPy 1.21.5 release notes, NumPy 1.21.5 is not compatible with Python 3.11 (it supports only 3.10 and below), as it was released nearly a year before Python 3.11. The first NumPy version to support Python 3.11 is NumPy 1.23.2. Therefore, you'll need to upgrade to NumPy 1.23.2 or later to use it (and thus Pandas) on Python 3.11.
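One way to do that, as a sketch assuming pip is available for your Python 3.11 interpreter (on Debian/Ubuntu, consider running this inside a virtual environment rather than upgrading over the apt-installed packages):

```shell
# Upgrade NumPy to a Python 3.11-compatible release, plus pandas which depends on it
python3.11 -m pip install --upgrade "numpy>=1.23.2" pandas
```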

Also, it seems you haven’t actually implemented the operation you describe in the title—exporting your dataframe to an Excel XLSX file. You don’t need to import openpyxl directly for that; since you’re already using Pandas you can just call df.to_excel() on your dataframe, which uses openpyxl under the hood.
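For instance, a minimal sketch (the values and file name here are just placeholders):

```python
import pandas as pd

# Sample data standing in for your input() values (placeholder names)
df = pd.DataFrame([{"Name": "Anna", "Surname": "Nowak", "Age": 30}])

# Write the dataframe to an XLSX file; pandas uses openpyxl under the hood
df.to_excel("data_base.xlsx", index=False)
```

Passing `index=False` skips writing the row index as an extra column in the spreadsheet.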

Finally, when adding code or output to your post, please make sure you enclose it in code fencing so it is formatted correctly for others to read and copy, as I’ve done for you this time. You can do so with the </> button in the toolbar, or via typing triple backticks above and below the code in question, optionally with the language name (e.g. python) for syntax highlighting, like this:
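A minimal example (the snippet itself is just a placeholder):

```python
print("Hello!")  # anything between the fences is rendered as formatted code
```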



In addition to the above, you don't need 17 lines of code to write the text file. You could use a list to hold the data, then loop through that list and write each item in sequence.

To add: maybe something like this

fields = ('Name', 'Surname', 'Age', 'Height', 'YoB')

data = []

for item in fields:
    entry = input(f"Enter your {item}: ")
    data.append(f"{item}: {entry}")

# Then write each item to the file, in sequence
with open("data_base.txt", "w", encoding="UTF-8") as f:
    for line in data:
        f.write(line + "\n")

I didn’t want to get too off-topic, but yeah, that’s a good start—though it could really use some further refinements. I would suggest the following high-level approach:

  1. Store your input data into a data structure, such as a dictionary, instead of individual variables
  2. Use a for loop over the “database” keys to ask the user to input each one, and insert them into the dictionary
  3. Use a comprehension (or for loop) to construct each line of the text file, then join them together with newlines
  4. Open the file in a with block with the correct encoding, and write the whole thing in one go
  5. Finally, create the dataframe directly from the dictionary

Here’s what this looks like, with comments explaining each part:

import pandas as pd

# Step 0: Create a dictionary in which to store the user data
input_data = {}

# Step 1: Use a for loop to get and store the data for each field
for field_key in ["Name", "Surname", "Age", "Height", "YoB"]:
    field_value = input(f"Enter {field_key}: ")  # Use an f-string to insert the key
    input_data[field_key] = field_value  # Store the data for each field in the dict

# Step 2: Convert the data to the desired text file output string
# This uses a list comprehension (like a for loop, but in a single expression)
# to build a list of "key: value" lines for the output text file content
output_lines = [f"{key}: {value}" for key, value in input_data.items()]
output_text = "\n".join(output_lines)  # Join the lines into a string w/newlines

# Step 3: Write the file all in one go
# This uses a with statement, to open & close the file safely and automatically,
# and the correct UTF-8 encoding, otherwise it won't work with non-English names
with open("data_base.txt", "w", encoding="UTF-8") as out_file:
    out_file.write(output_text)

# Step 4: Create the dataframe directly from the dictionary
df = pd.DataFrame([input_data])

Now, if you need to add an additional key, change the key names, or add more users, you only need to change one line of the code and everything will work. Plus, it's much shorter and simpler for you to read.
