Python source code to dynamic-link C library

Think about it: you have just developed a Python project, which consists of a bunch of .py scripts. You can run it on your own Linux server, which of course has a Python environment.

But then you have to deploy this Python project onto another Linux server, which has no Python environment, or has a Python version mismatch, or is missing some third-party packages your project relies on (numpy, pandas, tornado, tensorflow…). Trying to track down and complete the whole environment you need can be very troublesome. Even if you fix one Linux server, someday you may get another one to deploy to, and you have to perform the same procedure again, again and again…

My goal is to wrap the Python interpreter and all frequently used packages (numpy, pandas, tensorflow…) into one big dynamic library (.so) on Linux, built on my own Linux server, which of course has a Python interpreter environment. I will also convert the .py scripts of my project to .c or .so files using Cython and gcc on my own server.

In this way, after this big dynamic library is generated, I can copy it to any other Linux server. Then, after I copy my project (already converted to .c and .so files) to that server, I can run it by linking against the big dynamic library. I don't need a Python interpreter, and I don't have to install any third-party packages like numpy or tensorflow. Is this possible?


PyOxidizer is actually fairly close to what you’re asking for.


Docker is also a good solution, with the added benefit of consistent and containerised environments.


That’s a great way to ensure that security bugs in your application
never get fixed.

When I install a security fix to the Python interpreter on my systems, I
expect that every application and script that runs in Python will use
the upgraded interpreter with the new fix. Your system would ensure that
there were hidden interpreters with dozens of third-party libraries
buried in .so files that would probably never get the necessary security
fixes.

No thank you.

Of course this is “possible”, but honestly you would be better off
looking at writing a simple deployment script and running that.

Pretty much every Linux server now has Python, at least for the most
popular distros (Debian, Ubuntu, Fedora, Red Hat);

Windows now supports Linux from the Microsoft App store.

So at least two of the three major platforms make installing Python easy.

Once you have Python, you can run your deployment script that installs
pip, installs the packages you need, and copies your application onto
the system. Write it once, run it over and over again.
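A minimal sketch of such a deployment script; the `app/` directory, `APP_DEST` path, and `requirements.txt` are illustrative assumptions, not anything from the original post:

```shell
#!/bin/sh
# deploy.sh -- sketch of a one-shot deployment script (illustrative names)
APP_SRC="./app"                      # hypothetical project source directory
APP_DEST="${APP_DEST:-/tmp/myapp}"   # hypothetical install target

deploy() {
    # make sure pip exists for the system python3
    python3 -m ensurepip --upgrade >/dev/null 2>&1 || true
    # install the pinned third-party packages
    python3 -m pip install -r requirements.txt
    # copy the application itself onto the system
    mkdir -p "$APP_DEST" && cp -r "$APP_SRC"/. "$APP_DEST"
}

if [ -f requirements.txt ] && [ -d "$APP_SRC" ]; then
    deploy && STATUS=deployed
else
    STATUS=skipped   # nothing to deploy from this directory
fi
echo "deploy status: $STATUS"
```

Write it once, point it at each new server, and every machine ends up on the same, current package set.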

And each server will have the most up to date environment instead of an
old, obsolete, insecure environment.

Howdy Xixiang Yu,


And for commercial projects you also need to ensure that every customer runs the very same set of versions of said packages, since you can only test your software against one, or at most a small number, of such sets; otherwise you are selling untested software.

That is exactly the reason why I developed

blythooon · PyPI

Python Runtime Environment for Scientific Applications with Qt based GUI - Blythooon, Part 1 - YouTube

(Belonging example project: COVID Demo App - YouTube )

Nice idea :+1:! Good luck and keep us informed!

Cheers, Dominik

that’s a great way to ensure that you have to support customers on thousands of different systems, and that every new bug or incompatibility finds its way into at least one of those systems:

I agree with @steven.daprano.
Don’t forget about the standard Python module venv and pip freeze / pip install -r, which solve some of the issues raised.
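For reference, that standard workflow looks roughly like this (the /tmp paths are illustrative):

```shell
# create an isolated environment and pin the exact package versions in it
python3 -m venv /tmp/demo-venv                 # standard-library venv module
. /tmp/demo-venv/bin/activate                  # activate it for this shell
python -m pip freeze > /tmp/requirements.txt   # record installed versions
deactivate

# later, on another machine, the same set of versions is reproduced with:
#   python3 -m venv venv && . venv/bin/activate
#   python -m pip install -r requirements.txt
```

This keeps every deployment on a known, tested set of versions without bundling an interpreter.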


Hi Peter,

I have no objection to using venv / pip; by the way, Blythooon does exactly this…

But likewise I have no objection to Xixiang’s idea of creating such a self-contained library.

Both approaches have advantages as well as disadvantages.

Blythooon is a net installer: nice if the computer you want to install on is connected to the internet, not so nice otherwise (although I made it possible to let Blythooon download the necessary packages on another computer).

My objection was just against the “most up to date” part. I prefer the tested ones…

Cheers, Dominik

First of all, I have to clarify that all the scenarios we’re discussing are about Python 3. It’s meaningless to discuss Python 2, since basically every Linux system comes with one.

I’m trying to solve the problem on Linux servers. Below are the procedures, but some problems remain. Let’s start with the simplest case.

On a linux server with Python3 installed,

  1. Use the cython --embed command to convert hello_world.py into a hello_world.c file.
  2. Get the python3.7m folder from /root/anaconda3 and copy it into the same directory as hello_world.c.
  3. Get the libpython3 shared libraries from /root/anaconda3 and copy them into the directory /usr/lib.
  4. Run command ldconfig.
  5. Run gcc -I ./python3.7m/ -lpython3 hello_world.c -o hello_world.out && ./hello_world.out, SUCCESS!!!
    But all of the above is on a Linux server with Python3 installed.
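For reference, steps 1–5 condensed into one guarded script; the anaconda-derived paths and the hello_world name follow the post, and the guard simply skips the build where cython or the sources are absent:

```shell
#!/bin/sh
# condensed form of steps 1-5 above (paths and names follow the post)
if command -v cython >/dev/null 2>&1 && [ -f hello_world.py ]; then
    BUILD=attempted
    # 1. translate the script to C, embedding an interpreter main()
    cython -3 --embed hello_world.py          # -> hello_world.c
    # 2./3. python3.7m headers and libpython copied from /root/anaconda3 beforehand
    ldconfig                                  # 4. refresh the linker cache
    # 5. compile against the headers and link against libpython
    gcc -I ./python3.7m hello_world.c -lpython3 -o hello_world.out \
        && ./hello_world.out && BUILD=ok
else
    BUILD=skipped   # cython or hello_world.py not available here
fi
echo "build: $BUILD"
```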

On a Linux server without Python3 installed, apart from uploading all the files from the other server, the procedure is basically the same.

  1. Upload the hello_world.c file from the other Linux server.
  2. Upload the python3.7m folder from the other Linux server.
  3. Upload the libpython3 shared libraries from the other Linux server and copy them into the directory /usr/lib.
  4. Run command ldconfig.
  5. Run gcc -I ./python3.7m/ -lpython3 hello_world.c -o hello_world.out && ./hello_world.out, FAIL!!!
    The error message was as below:

Does anyone know how to deal with this bug? Keep in mind we want to fix the problem on a Linux server without Python3 installed. Thank you!
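Without the full error text it is hard to be sure, but a frequent culprit in this kind of setup is the dynamic linker not finding libpython at build or run time. Two quick, hedged checks (the hello_world.out name follows the post):

```shell
# how many libpython entries does the dynamic-linker cache know about?
FOUND=$(ldconfig -p 2>/dev/null | grep -ci libpython || true)
echo "libpython entries in the linker cache: $FOUND"

# for an already-built binary, ldd lists which shared libraries resolve, e.g.:
#   ldd hello_world.out | grep "not found"
```

If the count is zero after copying the libraries, the ldconfig step did not pick them up.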

I just came across PyEmpaq, which looks like it might also address some of these needs (disclaimer: I haven’t used it yet).


Too busy these days. I fixed the problem in September but only now have time to post an update. The detailed solution procedure is below:

Normally, on a Linux server you can run a Python script only after installing the necessary Python environment, and execution depends heavily on the versions of the third-party packages, which causes a lot of trouble when you have to deploy a Python-based project onto arbitrary Linux servers. Here I removed the Python environment dependency from deployment, so that you can run Python code on any Linux server, even one without Python installed.

This tech plan involves two Linux servers: one with a Python3 environment, used for preparation and called the preparation server; the other without any Python environment, used for deployment and called the deployment server.

The basic idea of the plan: use gcc and Cython to convert all .py files to executable files or .so dynamic libraries; the start-engine script is turned into an executable, and all the intermediate imported files are converted into .so dynamic libraries. Then copy the already-structured Python header-files folder and dynamic-link-libraries folder, and combine all of the above into one new large folder. Finally, upload this folder to the deployment server. There you can directly run the executable to start the engine; no compilation is needed on the deployment server.

The specific operations fall into a preparation phase and a deployment phase. Let’s describe them separately:

The Preparation Phase (on the preparation server) consists of 2 steps:

Step 1: Prepare all the header files and dynamic libraries. The procedures are below (you only have to do this once ever; the packaged folder can be reused for any later deployment mission):
a. We have to prepare all the dynamic libraries for the original Python interpreter and the third-party packages in advance, on the preparation server. A conda-based Python environment makes this job easier, since it already contains many third-party packages the stock Python doesn’t have. For packages conda doesn’t include, you can always run python3 -m pip install <package> or download a wheel file to install them. Currently I have installed tensorflow, pytorch, keras, pyspark, sklearn, kafka, pymysql, cx_Oracle, tornado, requests, dask and elasticsearch, simply because I work on deep learning and big-data computation. If a package is less common but your project uses it, install it as well.
b. Keep in mind that Cython is mandatory, because you have to convert the Python scripts into .c files, and with the --embed argument you can embed the Python interpreter into the generated .c file.
c. All successfully installed third-party packages are saved automatically in /root/anaconda3/lib/python3.6/site-packages. As long as the link in /root/anaconda3/lib is in place, calls into the third-party packages in site-packages will resolve while running the executable.
d. Keep the /root/anaconda3/lib (dynamic-libraries folder) and /root/anaconda3/include/python3.6m (header-files folder) safe. You will use them directly in Step 2.

Step 2: Wrap all Python scripts into .c files or .so dynamic libraries. Do this every time you have a deployment mission.
There are two situations: the single-script situation, where your project has only one Python script, and the multiple-scripts situation, where it has several.

Single-script situation procedures:
a. Copy the /root/anaconda3/lib dynamic-libraries folder and the /root/anaconda3/include/python3.6m header-files folder prepared in Step 1 into the folder where the single script lives:
Command: cp -r /root/anaconda3/lib /root/anaconda3/include/python3.6m .
b. Wrap the single script into a .c file:
Command: cython -3 --embed test.py
c. Move the dynamic-library files in ./lib to the /usr/lib directory:
Command: mv ./lib/* /usr/lib
d. Run command ldconfig.
e. Compile and link the .c file into the executable:
Command: gcc -I ./python3.6m -L /usr/lib -lpython3.6m test.c -o test.out
f. No action is needed for the remaining files.
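Steps a–e above, collected into one guarded sketch; the test.py name and anaconda paths follow the post, the libraries are installed into /usr/lib before linking so that -L /usr/lib resolves, and the guard skips the build where cython or test.py is absent:

```shell
#!/bin/sh
# build_single.sh -- steps a-e for the single-script case (names follow the post)
if command -v cython >/dev/null 2>&1 && [ -f test.py ]; then
    STATE=attempted
    cp -r /root/anaconda3/lib ./lib                     # a. dynamic libraries
    cp -r /root/anaconda3/include/python3.6m .          # a. header files
    cython -3 --embed test.py                           # b. test.py -> test.c
    mv ./lib/* /usr/lib && ldconfig                     # c./d. install, refresh cache
    gcc -I ./python3.6m -L /usr/lib -lpython3.6m test.c -o test.out \
        && STATE=built                                  # e. compile and link
else
    STATE=skipped   # cython or test.py not present here
fi
echo "single-script build: $STATE"
```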

Multiple-scripts situation procedures:
a. Copy the /root/anaconda3/lib dynamic-libraries folder and the /root/anaconda3/include/python3.6m header-files folder prepared in Step 1 into the folder where the scripts live:
Command: cp -r /root/anaconda3/lib /root/anaconda3/include/python3.6m .
b. Wrap the start_engine script into a .c file:
Command: cython -3 --embed test.py (test.py here denotes the start_engine script)
c. Wrap all the intermediate imported scripts into .so dynamic libraries. Note that each dynamic library’s name must match the original script’s name exactly, apart from the suffix. Run the following commands for each imported script:
Command: cython -3 xxxxxx.py
Command: gcc xxxxxx.c -I ./python3.6m -fPIC -shared -o xxxxxx.so
d. Move the dynamic-library files in ./lib to the /usr/lib directory:
Command: mv ./lib/* /usr/lib
e. Run command ldconfig.
f. Compile and link the .c file generated from the start_engine script into the executable:
Command: gcc -I ./python3.6m -L /usr/lib -lpython3.6m test.c -o test.out
g. No action is needed for the remaining files.
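Step c expanded for one hypothetical intermediate module, helper.py (the name is illustrative); note that --embed is only needed for the entry script, so a plain cython -3 run is enough for an imported module:

```shell
#!/bin/sh
# turn helper.py into helper.so, importable by the compiled entry script
if command -v cython >/dev/null 2>&1 && [ -f helper.py ]; then
    cython -3 helper.py                                  # helper.py -> helper.c
    gcc helper.c -I ./python3.6m -fPIC -shared -o helper.so \
        && MOD=built || MOD=failed
else
    MOD=skipped   # cython or helper.py not present here
fi
echo "module build: $MOD"
```

The output name after -o must match the script name (helper.so for helper.py), or the embedded interpreter will not find the module at import time.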

After the Preparation Phase is over, you should have a folder containing lib (the dynamic-libraries folder), python3.6m (the header-files folder), the executable converted from the start_engine file, and the .so dynamic libraries converted from the intermediate imported files (multiple-scripts situation only).

The Deployment Phase (on the deployment server):

  1. Upload the folder generated in the Preparation Phase to the deployment server.
  2. Delete any Python scripts left over.
  3. Modify the configuration file as needed.
  4. Move the dynamic-library files in ./lib to the /usr/lib directory:
    Command: mv ./lib/* /usr/lib
  5. Run command ldconfig.
  6. Run the executable ./test.out, DONE!!!

Anyone interested in this topic?

Thank you Dominik.

I have completely fixed the problem. See my detailed reply for the solution.
