Requesting a code review part two

Remember that we are all here, using our free time. We do it for fun and to benefit the community.

It seems all of us agree that reviewing AI-generated or AI-enhanced code is neither fun nor a benefit to the community – you can disagree, but we decide for ourselves what we want to spend time on.

AI aside, you haven’t told us what problem you are trying to solve – what the specifications are, etc. – so we can’t really review your code anyway.

One thing I see, and others have mentioned, is what seems like a useless (or worse) use of subprocess. Either it’s really, really bad code, or one of the goals of the project is to understand a bit about how subprocesses work – how can we comment without knowing which? (Ditto for logging.)

I still have a lot of questions, but I’m going to stop because it’s just frustrating you guys.

Your perspective is super valuable, but consider the perspective of a newbie who is using ChatGPT and is not geeky enough to read the documentation.

The documentation doesn’t require you to be geeky :slight_smile: We’ve already pointed out that ChatGPT is stopping you from learning, and recommended that it be discarded. I’d recommend starting here:

2 Likes

I’m more likely to use a cheat sheet like https://www.pythoncheatsheet.org/ to help me finish my code.

So what’s the difference between copying and pasting from https://www.pythoncheatsheet.org/ and asking an AI for simple code that isn’t that complex?

Since I am a newbie and English is not my first language, I am having a hard time making my point.

I’m using ChatGPT to express my idea, but I want to use Code Review to help me fix what I’ve written.

Does it really matter where my idea comes from? If it’s not that sophisticated?

I’m using a grammar checker that’s not perfect - should I be ashamed of using it to help me express my idea?

Yes, it DOES matter. That’s what we’re saying. ChatGPT is a bad tool for writing code with, and it will make the code review process harder and less productive.

It was presumably designed for English grammar, which means that you’re using it for exactly what it’s intended for. ChatGPT is not designed as a code writing tool, so using it as one is not a good idea.

I have made use of Google Translate to help me communicate (let’s say someone comes into an English Twitch stream and speaks Spanish - I’ll try to give some information in Spanish), and it’s definitely not perfect, but it’s helpful because it is specifically built for translating text between two languages. It wouldn’t be very useful for me to then ask someone to help me improve my Spanish, though, because it’s not MY Spanish.

So if you want to become a better programmer, write code yourself. If you’re going to use any sort of tool to help you write code, though, it shouldn’t be ChatGPT.

The idea of “MY” is a deep rabbit hole.

Can I use codepal.ai, since it’s made for developers?

I have read about half of it, but I am going to try to read the whole thing.

1 Like

The relevant part is whether code review can affect future code. Reviewing the text that Google Translate gives me is not going to improve my future use of Google Translate, because I don’t control it.

I don’t know it well enough to say. It would probably be a better choice than ChatGPT, but it still has the same problem of making code review less useful. So if you’re going to use these sorts of tools, there’s less value in then coming to humans for code review. I don’t think it’s fair on reviewers to ask them to give you feedback on what you didn’t create.

1 Like

I wanted to focus on improving my code from the very beginning, but it seems like I’m getting lost with the idea of AI…

Yep. Solution? Drop the AI and just focus on improving YOUR code.

1 Like

I really understand where you’re coming from now, sorry for being slow to learn.

Can I ask an AI if I don’t understand something, like a magic number? Or should I avoid AIs altogether?

If you ask, how would you know whether it’s the right answer? Remember, ChatGPT can be tricked into getting basic arithmetic very very wrong while being completely confident in its responses.

In general, asking an AI to explain a simple concept is equivalent to doing a Google/Bing/DDG search for it and taking the first couple of hits, then massaging it into text that sounds like someone explaining it. So at best, you can get similar results just by doing the web search yourself, and at worst, it’s added new errors.

(For more complicated concepts, it MAY be of more value, in the same way that it’s more valuable to ask Mycroft Holmes than to go to the individual government departments. However, there is still the risk of newly-added errors.)

And while it’s definitely true that any person can add errors to what they’re saying, you don’t usually get responses from people who have JUST searched the web and then made a post. Most of the responses from actual people will include their own experiences and expertise, not just their errors.

1 Like

Here is an example of the finished code – ChatGPT finished my code without
adding any other AI bloat to it. This is my code v0.0.

Have you actually run this code? Because I think it would fail
immediately in the first try/except.

Random remarks:

I usually put the main programme in its own function main(), usually
at the top of the script, and have the if __name__ == '__main__'
boilerplate down at the bottom, like this:

 if __name__ == '__main__':
     main()

This avoids accidental globals and makes the script's purpose more clear.
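
For example, a minimal sketch of that layout (the body of main() here is
just a placeholder, not your actual logic):

    def main():
        # Top-level logic lives here, so its variables are locals
        # rather than accidental module-level globals.
        print("Hello, World")

    if __name__ == '__main__':
        main()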

Without help text, it is not apparent what the various command line
options are for.
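
As a sketch (these flag names and help strings are made up, not a guess at
your real ones), argparse lets you attach a help string to each option so
that running the script with -h explains them:

    import argparse

    parser = argparse.ArgumentParser(description="Example of per-option help text.")
    parser.add_argument('-t', action='store_true',
                        help='say here what -t actually does')
    parser.add_argument('-r', action='store_true',
                        help='say here what -r actually does')
    args = parser.parse_args()

Then running the script with -h prints those descriptions for free.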

This line is effectively nonsense:

 output_bytes = subprocess.check_output("/bin/echo Hello, World", shell=False)

Read the docs on subprocess.check_output to understand why. Plausible,
syntactically valid code which is nonsense is one of the things we
distrust about LLMs like ChatGPT.

Why aren’t you using print() for all these echo commands?
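
To spell out why, as a sketch rather than a fix to your actual script:
with shell=False, a plain string is taken (on POSIX) as the name of the
program itself, so Python looks for an executable literally called
"/bin/echo Hello, World", which doesn't exist. You would either pass the
command as a list, or skip the subprocess entirely:

    import subprocess

    # With shell=False (the default), the command and its arguments go in a list:
    output_bytes = subprocess.check_output(["/bin/echo", "Hello, World"])
    print(output_bytes)        # b'Hello, World\n'

    # ...though for a fixed message, a subprocess is pointless anyway:
    print("Hello, World")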

   try:
       answer = input("Proceed? ")
   except KeyboardInterrupt:
       # Handle the KeyboardInterrupt exception
       print('User interrupted the script. Exiting...')
       sys.exit(0) # Exit with status code 0 (success)

This is unusual. Often we don’t catch KeyboardInterrupt unless we need
to do some tidyup. We expect the user to indicate end of input with the
typical keystroke, which is usually ^D on UNIX/Linux and ^Z on
Windows.
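
If you want the script to stop cleanly when the user signals end of input,
the usual pattern (a sketch, not your code) is to catch EOFError from
input() and simply let KeyboardInterrupt propagate:

    import sys

    try:
        answer = input("Proceed? ")
    except EOFError:
        # ^D on UNIX/Linux (or ^Z then Enter on Windows): no more input.
        print("No answer - exiting.")
        sys.exit(1)
    # A genuine ^C (KeyboardInterrupt) is left alone and ends the
    # script in the normal way.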

When you’ve got codependent options like -r and -t it is normal to
handle them together in a single if-statement, eg:

    if args.t:
        if args.r:
            ... forbidden: -t and -r together ...
        else:
            ... whatever -t on its own means ...
    elif args.r:
        ... whatever -r on its own means ...
    else:
        ... neither -t nor -r given ...

rather than repeating these checks in two separate if-statements.
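
And if -r and -t genuinely must not be combined (I'm guessing at their
semantics here), argparse can enforce that for you with a mutually
exclusive group, which removes the "forbidden" branch entirely:

    import argparse

    parser = argparse.ArgumentParser()
    # argparse itself rejects '-r -t' on the command line:
    group = parser.add_mutually_exclusive_group()
    group.add_argument('-r', action='store_true', help='hypothetical -r flag')
    group.add_argument('-t', action='store_true', help='hypothetical -t flag')
    args = parser.parse_args()

    if args.t:
        print("do whatever -t means")
    elif args.r:
        print("do whatever -r means")
    else:
        print("neither flag given")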

1 Like

Well, that’s my code v0.0… and since the discussion mostly ends here, I need to re-write my code without it being generated by an AI.

I hope to get more insight from you and others as I will be asking more on this forum (if I don’t get lazy).

Well, my solution is to ask a bunch of AIs :nerd_face: such as ChatGPT :sunglasses:, Perplexity.ai, and Huggingface.co.

If their answers disagree with one another, I’ll ask you guys – win-win?

And if they all agree with each other, you’ll accept it as truth?

What do you mean by truth? Even science cannot prove truth.

@Denise_unique, that would deprive you of the opportunity to learn to program. The best approach would be for you to plan, write, test, and refine your code. We hope you will do this and become our ally in urging other learners to do the same.

1 Like

Look, I love getting philosophical as much as the next person does, but I think we can agree that there is such a thing as truth. For example, it is plainly and simply true that 1 + 3 is equal to 2 + 2, and it is plainly and simply false that 1 + 3 is equal to 2 + 1. You can delve into axioms and stuff if you like, but unless you feel like going all Rene Descartes and concluding that “I must exist, because I can think” is the only truth there is, moaning about the nature of truth is counter-productive.

1 Like