Requesting a true access control mechanism (access modifiers) feature enhancement

Feature or enhancement

This is just a feature request, so please don’t chop my head off. Thank you.

Anyway, I would love to see:
A true access control mechanism with access modifiers such as public, protected, and private in Python, to be used in classes, objects, and functions.
As well as an !!important modifier that protects statements within code.

Pitch

Why this feature or enhancement should be implemented and how it would be used.
It would allow developers to protect sections of their code from being manipulated by outside sources.
It would also prevent outside sources from injecting code into the environment, application, or whatever else the developer has created.

Rust is known for the security measures it has, and it is a relatively new language in the industry.
Why can’t Python secure itself this way as well?

Again, this is for security reasons: it would protect the code from all kinds of vulnerabilities,
such as injections and other attacks from outside sources, including cyber crime.

It would also be nice to be able to implement a modifier called !!important,
which constantly checks that particular statement and makes sure it is always the last to execute,
so it cannot be injected into or changed by malicious code from outside sources, including APIs, as it is constantly checked against its original code. That way injections can’t change it.

Many other languages in the industry have access control mechanisms for “security” purposes:
for example, Java, Rust, C#, C++, PHP, TypeScript, and others. Rust has gained its notoriety because of its security.
It is there for a reason.

With the amount of cyber crime happening across the world, and businesses being dependent on Python
for many aspects of their companies, from software to network automation,
it is pretty obvious that Python seriously needs to be more secure by implementing some security standards within the language,
so developers can take important steps to secure company products from cyber criminals.
It would be another layer of security, instead of just depending on servers:
the application itself could also protect itself and prevent malicious changes.

I am actually shocked that Python has yet to implement true access control mechanism access modifiers within the language for its classes, objects, and functions.

And saying that the Python Software Foundation and its community have a “consenting adults” policy
is surely not going to prevent cyber criminals from continuing to commit their crimes and
bringing down corporations’ and companies’ servers and websites.

I would really like to see true access control mechanism access modifiers implemented in Python soon.
Until then, Python will be like a 304 on 42nd street, waiting to eventually be injected and infected, crippling half the population’s infrastructure,
as cyber criminals wipe their rear end with Python like toilet paper, just like they do with JavaScript.

Anyways, thank you for your time and for reading.
:smiling_face_with_three_hearts:

Python is extremely permissive of runtime manipulation. Python allows you to override the import system, rewrite the AST at import time (pytest uses this), add new attributes to an already defined class, introspect stack frames and pull variables out of them, and much more. There are a ton of indirect ways to access a variable besides direct access. I don’t see how it’s plausible to make a variable “private” in a strict sense given the sheer amount of runtime introspection Python offers.
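To illustrate two of those escape hatches, here is a small sketch (class and variable names invented for the example): classes remain open to modification after definition, and a callee can read its caller’s local variables through the frame stack.

```python
import inspect

class Config:
    def __init__(self):
        self.timeout = 30

# Classes stay open after definition: anyone can attach new attributes.
Config.debug = True

cfg = Config()
print(cfg.debug)           # → True

# A callee can read its caller's local variables via the frame stack.
def read_caller_locals(name):
    caller = inspect.currentframe().f_back
    return caller.f_locals[name]

def main():
    api_key = "not-actually-secret"
    return read_caller_locals("api_key")

print(main())              # → not-actually-secret
```

Any “private” marker would have to close every door like these at once to be a real security boundary.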

Access controls like the ones you’re describing simply don’t work that way. It is trivially easy to bypass C++’s private qualifier (just cast the pointer). Access controls have nothing to do with preventing malicious code.
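The same holds for Python’s closest equivalent: double-underscore name mangling only renames the attribute, much as a pointer cast sidesteps `private` in C++. A minimal demonstration (class and attribute names invented for the example):

```python
class Vault:
    def __init__(self):
        self.__secret = "s3cr3t"   # "private" via name mangling

v = Vault()

# Direct access fails, which looks like real access control...
try:
    v.__secret
except AttributeError:
    pass

# ...but the attribute is merely renamed, not protected:
print(v._Vault__secret)        # → s3cr3t
v._Vault__secret = "overwritten"
print(v._Vault__secret)        # → overwritten
```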

What they do is make certain operations easy and certain others hard. There is an implicit contract between library authors and library users (e.g. application developers) to the effect of: “If you promise not to touch anything marked private, I’ll avoid breaking everything in the next upgrade.” Which means that the moment you break that contract, you lose all backward-compatibility guarantees.
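Python already encodes that contract by convention rather than enforcement: a leading underscore marks a name as internal. A minimal sketch (module and function names invented for illustration):

```python
# Hypothetical library module illustrating the underscore convention.
def public_api(x):
    """Stable: covered by backward-compatibility guarantees."""
    return _internal_helper(x) * 2

def _internal_helper(x):
    """Underscore prefix: may change or vanish in any release."""
    return x + 1

# Application code calling public_api(3) gets 8 today and 8 after an
# upgrade; code calling _internal_helper directly may break at any time.
print(public_api(3))   # → 8
```

Nothing stops you from calling `_internal_helper`, but doing so voids the contract, exactly as reaching past `private` does in other languages.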

This has nothing whatsoever to do with handling malicious code. If malicious code is running on your computer, you have already lost. Even if Python had some kind of “private” marker, it wouldn’t stop people from doing a crazy amount of damage using only the public members.

If you want “a true Access Control Mechanism”, you’re talking about sandboxing, not private/public markers on class members.


I don’t really understand what you’re saying Python’s object model makes possible that constitutes a security risk. Could you provide an example of the kind of exploit you have in mind? What are the steps a malicious actor could take to inject code into a Python process where someone is not importing untrusted code?

Here are just some sources that say otherwise.

From the Department of Defense, NSA, and Homeland Security Cyber Crime Divisions:

“Access modifiers, especially private, have beneficial protections against cyber criminals from injecting malicious code into a production release of a program or application. Malicious code can do damage by corrupting files, erasing your hard drive, and/or allowing hackers access. Malicious code includes viruses, Trojan horses, worms, macros, and scripts. Malicious code can be spread by e-mail attachments, downloading files, and visiting infected websites. To prevent malicious code, it is important to use secure coding practices and access modifiers to ensure that code is only accessible to authorized users.”

and

“Using access modifiers can prevent unauthorized access to sensitive data and methods that should only be accessed by specific classes or subclasses. This can help prevent cyber criminals from injecting malicious code into production releases of programs or applications. However, it is important to note that access modifiers alone are not sufficient to prevent all types of cyber attacks. Other security measures, such as input validation, output encoding, and error handling, should also be implemented to ensure the security and stability of the application.”

https://www.cisa.gov/

So, are Rust, TypeScript, C#, and Go all wasting their time, and should they just get rid of access modifiers as a whole
and depend solely on trusting human behavior?

Rust is known as one of the most secure languages to date, due to its “ownership” rules.
Is it wasting its time on that, and should it just get rid of it and be wide open like Python?

Can you please provide full references for the given quotes?
Searching with site:cisa.gov yields no results.

Is this in a context where you’re running untrusted code, or trusted code? Need WAY more context.

As the statement stands, this is not true. Hackers can bypass private trivially, and frequently do.

I can’t find this quote with a web search, certainly not on any US government web site.

I’m going to ask the question because I really really hope the answer is “no”… Was this text generated by ChatGPT?
