Various issues with Python on Ubuntu WSL

Context

I am running Ubuntu 22 on WSL2 on Windows 11.

I installed two builds of Python 3.12.4:
One with ./configure --prefix=/usr/local/python-debug --with-pydebug

And one with ./configure --prefix=/usr/local/python-optimized --enable-optimizations --with-lto

Their commands are named differently as well: e.g. pip-o runs pip for python-optimized, while pip-d runs pip for python-debug.
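
(In case it matters, a quick check along these lines, run with each build's own python3, confirms which interpreter and prefix a given set of commands is bound to; the expected values in the comments are just what I'd assume given the configure lines above.)

import sys, sysconfig

print(sys.executable)                          # path of the interpreter actually running
print(sys.version)                             # 3.12.4 plus build/compiler details
print(sysconfig.get_config_var("Py_DEBUG"))    # 1 for the --with-pydebug build, 0 otherwise
print(sysconfig.get_config_var("prefix"))      # /usr/local/python-debug or /usr/local/python-optimized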

Issue 1: build directory error when running make for python-optimized

Cloning cpython, running ./configure for python-optimized, and then running make prints find: ‘build’: No such file or directory, but the build still works.

Issue 2: pip abstract methods and deprecations

When running pip-d install [something] I get the error

/home/plum-upc/.local/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_dists.py:73: DeprecationWarning: Unimplemented abstract methods {'locate_file'}
  return cls(files, info_location)

/home/plum-upc/.local/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py:111: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  from pip._vendor.pkg_resources import find_distributions

but it seems to work.

Issue 3: that's a lot of TensorFlow errors

import tensorflow as tf

<frozen importlib._bootstrap>:488: DeprecationWarning: Type google._upb._message.MessageMapContainer uses PyType_Spec with a metaclass that has custom tp_new. This is deprecated and will no longer be allowed in Python 3.14.
<frozen importlib._bootstrap>:488: DeprecationWarning: Type google._upb._message.ScalarMapContainer uses PyType_Spec with a metaclass that has custom tp_new. This is deprecated and will no longer be allowed in Python 3.14.
2024-07-07 11:14:14.543702: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2024-07-07 11:14:14.554207: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:479] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-07-07 11:14:14.569696: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:10575] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-07-07 11:14:14.569731: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1442] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-07-07 11:14:14.579766: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-07-07 11:14:16.646948: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT

and also

tf.config.list_physical_devices()
2024-07-07 11:14:19.280232: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-07-07 11:14:19.306586: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-07-07 11:14:19.306622: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
 PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

This one also has a second part. I have 2 GPUs: I am on a laptop with an Intel CPU, so the Intel UHD GPU is GPU 0 and the NVIDIA card is GPU 1. Maybe /physical_device:GPU:0 is the NVIDIA one and the Intel one just doesn't show up.
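
(If it helps narrow that down, I can run something like this to ask TensorFlow which card GPU:0 actually is; I'm assuming tf.config.experimental.get_device_details is the right call for that.)

import tensorflow as tf

# List only the GPUs TensorFlow registered and ask for their details;
# the device_name should say whether GPU:0 is the NVIDIA card.
for gpu in tf.config.list_physical_devices("GPU"):
    details = tf.config.experimental.get_device_details(gpu)
    print(gpu.name, details.get("device_name"), details.get("compute_capability"))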

Thanks in advance for trying to help fix my crazy issues!

Did you rename them manually after the fact, or… ?

I think this comes from one of the cleanup targets in the Makefile trying to remove the contents of a build/ subdirectory that isn't there. Presumably benign.

It’s better to use code blocks for terminal output, but this is still readable enough.

A DeprecationWarning is just a warning, not an error, and it is Python's warning machinery that prints it. It flags functionality that still works but is scheduled for removal. Pip still carries the deprecated code paths (including its vendored copy of pkg_resources) for backwards compatibility; the install you're doing happens to exercise them, and they will eventually have to be updated, but nothing has actually failed.
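
To illustrate with a self-contained toy example (this is just the standard warnings machinery, nothing pip-specific):

import warnings

def old_api():
    # Emit the same kind of warning pip is printing.
    warnings.warn("old_api is deprecated", DeprecationWarning, stacklevel=2)
    return 42

# A DeprecationWarning is advisory: the call still succeeds.
print(old_api())  # still prints 42; the warning itself only goes to stderr

# It can be promoted to an error, which is how you would locate a call site:
warnings.simplefilter("error", DeprecationWarning)
try:
    old_api()
except DeprecationWarning as exc:
    print("now raised as an exception:", exc)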

The problem is specific to certain "[something]"s that you’re trying to install; it definitely won’t affect everything. Saying any more will depend on the details of the “[something]” in question.

When you installed TensorFlow, did pip tell you that it was installing from a wheel (some .whl file) or from source (a .tar.gz)? If it was from source, there are a lot of things that can go wrong and we'll probably need a lot more detail to have any hope of solving the problem. You might be better off on a TensorFlow-specific forum.
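
If it did come from a wheel, something along these lines (assuming a reasonably recent TF 2.x, where tf.sysconfig.get_build_info() is available) will at least tell you what kind of binary you ended up with:

import tensorflow as tf

print(tf.__version__)
# Build metadata baked into the binary; the exact keys vary a little between
# versions, so .get() avoids a KeyError if one is missing.
info = tf.sysconfig.get_build_info()
print(info.get("is_cuda_build"), info.get("cuda_version"), info.get("cudnn_version"))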

I copied them after the fact, e.g. python-debug/pip3 → python-debug/pip-d, but pip3 is still there.

So should I rebuild and add a build subdirectory?

What about the errors that are not DeprecationWarnings?

I used pip-d install tensorflow[and-cuda], so whatever the default is. I think (but am not sure) it was a .whl.

Please try to start over and see exactly what happens if you try to pip-d install tensorflow in a fresh (virtual) environment.
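
Something like this is all I mean, using the stdlib venv module so the environment is tied to whichever interpreter runs it (the /tmp path is just a throwaway example):

import subprocess, venv

env_dir = "/tmp/tf-test-env"  # hypothetical throwaway location

# Roughly what "python -m venv /tmp/tf-test-env" does, bound to the
# interpreter running this script (use the debug build here).
venv.EnvBuilder(with_pip=True).create(env_dir)

# Install TensorFlow with the new environment's own pip and watch the
# output for whether it downloads a .whl or a .tar.gz.
subprocess.run([f"{env_dir}/bin/pip", "install", "tensorflow"], check=True)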