I was going through a student’s code and found that he is using modules that include a main function. It seems that he meant to use files as both modules and scripts. I have never done that, and I believe it should not be done. When I have a script, I usually name it something like calculate_x and install it as a script with scripts=['calculate_x'] in setup.py. That way, on the command line, I can just do calculate_x -a X -b Y. If calculate_x is also a module, I would have to write calculate_x.py -a X -b Y, which seems cumbersome.
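For reference, the setup.py I have in mind is just something minimal like this (the project metadata here is only illustrative):

from setuptools import setup

setup(
    name='calculate-x',
    version='0.1',
    scripts=['calculate_x'],  # installs the plain file 'calculate_x' onto the PATH
)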
Also, for me, scripts are where I write the top-level code, which uses the code in modules. The code in modules is covered by unit tests, while the scripts themselves tend to be small and I do not write unit tests for them. Do you have any thoughts on this?
I think what you do is more or less best practice (look into using pyproject.toml), but for someone new to Python, what the student is doing at the moment is harmless.
It’s well worth him using GitHub, if only as an easy way to back up his work.
If any of the code he writes is worth publishing, then yes, it becomes worth teaching him about packaging.
But at the moment, with a million new things to learn, and other courses too, he’s probably got more than enough on his plate.
I use local files all the time myself for quick experiments, investigating bug reports, and so on.
I am not sure scripts= is the best recommendation nowadays. A console-script entry point would probably be better: entry_points={'console_scripts': ['calculate_x=calculate_x:main']}.
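In setup.py terms that would look something like this (a minimal sketch, with illustrative metadata):

from setuptools import setup

setup(
    name='calculate-x',
    version='0.1',
    py_modules=['calculate_x'],
    entry_points={
        'console_scripts': [
            'calculate_x=calculate_x:main',
        ],
    },
)

pip then generates a calculate_x wrapper on the PATH which imports calculate_x and calls its main() with no arguments, using the return value as the exit status.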
For executing importable modules or packages, you might consider using the -m option: python -m calculate_x -a X -b Y.
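For -m to work like that, calculate_x.py needs to do its argument handling when run as __main__; a minimal sketch (the option names are taken from your example, everything else is illustrative):

import argparse
import sys

def calculate(a, b):
    # Placeholder for the real calculation.
    return a + b

def main(argv=None):
    # argv=None lets argparse fall back to sys.argv[1:], so the same
    # main() works for `python -m calculate_x ...` and for the
    # console_scripts wrapper, which calls main() with no arguments.
    parser = argparse.ArgumentParser(prog='calculate_x')
    parser.add_argument('-a', type=float, required=True)
    parser.add_argument('-b', type=float, required=True)
    args = parser.parse_args(argv)
    print(calculate(args.a, args.b))
    return 0

if __name__ == '__main__':
    sys.exit(main())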
I was going through a student’s code and found that he is using
modules that include a main function. It seems that he meant to use
files as both modules and scripts. I have never done that, and I believe
it should not be done.
It is actually not uncommon. Many of my modules can be used as a main
programme. The trivial ones might just run some self tests, or debugging
code for some issue I’m investigating. The elaborate ones provide a proper
main-programme mode for the module. Nothing wrong with that at all,
provided the call to the main function (it needn’t have that name) is
guarded with the usual boilerplate, something like this:
if __name__ == '__main__':
    sys.exit(main(sys.argv))
This is all perfectly fine.
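As a concrete (made-up) example of the “self tests” flavour, the guarded block can just run the module’s doctests:

import sys

def double(n):
    '''Return n doubled.

    >>> double(3)
    6
    '''
    return 2 * n

def main(argv):
    # argv is accepted to match the boilerplate above, but unused here.
    # Run this module's doctests when invoked as a script; the number
    # of failures becomes the exit status.
    import doctest
    return doctest.testmod().failed

if __name__ == '__main__':
    sys.exit(main(sys.argv))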
When I have a script, I usually name it something like calculate_x and install
it as a script with scripts=['calculate_x'] in setup.py.
Personally, I usually just have a stub shell script, e.g. a script named
“fstags” which goes:
#!/bin/sh
python3 -m cs.fstags ${1+"$@"}
to invoke the module. I put that somewhere in my $PATH (e.g. in ~/bin)
and I’ve got an fstags command. I’ve got some additional boilerplate,
but that is the core method. If I’m publishing the module, then I’d
have:
[project.scripts]
fstags = "cs.fstags:main"
in the pyproject.toml.
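One thing to keep in mind with that entry: the wrapper pip generates calls cs.fstags.main() with no arguments and uses its return value as the exit status, so (in a sketch with made-up contents, not the real cs.fstags) the main function wants a shape like:

import sys

def main(argv=None):
    # Accept being called with no arguments (entry point) or with
    # sys.argv (the __main__ guard above).
    if argv is None:
        argv = sys.argv
    # ... do the real work, driven by argv[1:] ...
    return 0

if __name__ == '__main__':
    sys.exit(main(sys.argv))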
Also, for me, scripts are where I write the top-level code, which
uses the code in modules.