Talk:Python package guidelines
tests directory
The article states that one shouldn't install a "tests" directory. My package however, after running through setup.py, contains such a directory. What is the recommended way to deal with this? Can this directory be deleted? Couldn't find a clue in other Python packages I've looked at, so can someone maybe point me in the right direction? —This unsigned comment is by Johnpatcher (talk) 13h30, 19 December 2016 (UTC). Please sign your posts with ~~~~!
- You should probably delete it from "${pkgdir}". I don't know of any reason to have such a directory, unless it contains unittests for the module. Which probably means upstream should be advised to fix their setup.py because there is no reason to run unittests on an installed module, and the module itself certainly shouldn't rely on methods hidden in the unittests...
- I don't actually know of any misbehaving modules to point to which violate that rule though.
- Eschwartz (talk) 18:52, 19 December 2016 (UTC)
- To the best of my knowledge, the recommendation is designed to prevent namespace conflicts. If I install your package and execute python -c 'import tests' and your code is imported, that's bad; if I install your package and execute python -c 'from your_package import tests' and your code is imported, that's fine. Getting this right is a little tricky. For an example of Python code that sidesteps these issues, see this package. The top-level "tests" directory is not (as of this PR) installed by setup.py, and therefore produces no namespace conflicts. The "pulp_smash/tests" directory is installed by setup.py and available as "pulp_smash.tests", and therefore produces no namespace conflicts. Ichimonji10 (talk) 21:56, 19 December 2016 (UTC)
- I'm experiencing these issues with python-pendulum and python-pytzdata, both of which seem to install a global tests directory. Not too familiar with Python packaging, so I'm not sure if it is ok to simply delete those directories? --Johnpatcher (talk) 06:54, 20 December 2016 (UTC)
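- Until upstream fixes its setup.py, deleting the directory from "${pkgdir}" seems safe, since nothing should import a top-level tests module at runtime. A minimal demonstration of the cleanup step (paths and the package name are illustrative; in a real PKGBUILD the rm line would go at the end of package()):

```shell
# Simulate a $pkgdir where a misconfigured setup.py installed a stray
# top-level tests/ next to the real package, then remove it.
pkgdir=$(mktemp -d)
site=usr/lib/python3.13/site-packages
mkdir -p "$pkgdir/$site/mypackage" "$pkgdir/$site/tests"

rm -rf "$pkgdir/$site/tests"   # the cleanup line to add to package()

ls "$pkgdir/$site"             # only mypackage remains
```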
- The article states that one shouldn't install a "tests" directory. - unfortunately, this is no longer the case. I think this statement must be added to the article, or at least a discussion on that. I thought for a while about that, and I am really sure that users don't need program tests: they should be done only by developers. -- Ynikitenko (talk) 14:27, 23 June 2022 (UTC)
- It is stated in Python package guidelines#Test directory in site-package and the wording did not change substantially since 2016. — Lahwaacz (talk) 15:59, 23 June 2022 (UTC)
- No, in that section it is stated: not install a directory named just tests into site-packages. I understand this phrase as "you can install tests into site-packages/mypackage-tests". I think there should be a phrase about any tests (that they need to be installed in very rare cases). -- Ynikitenko (talk) 16:12, 23 June 2022 (UTC)
- The wording is basically the same since at least December 2016 which is when this discussion started. As for your wording: there can't be such rule because many official packages would violate it, look at e.g. python-pandas or python-sympy. — Lahwaacz (talk) 17:00, 23 June 2022 (UTC)
- Well, I didn't see that revision (I quoted only this talk here). Thanks for the links! I think, since this is an educational article for package creators, it would be an improvement if we could formulate in what cases tests should be present, and in what cases they should not. For example, as a hypothetical user of python-sympy I could wonder: do I need to have its tests installed? Why is the repository 10 MB large? (Is it because of tests or not?) -- Ynikitenko (talk) 17:09, 23 June 2022 (UTC)
- The phrase is supposed to be about buggy packages which misconfigure setuptools. Because setuptools configuration is ancient and basically a cargo cult, people copy configurations that result in not only the intended <repository>/<package-name>/ being packaged, but also <repository>/tests/. Many packages have this problem, see e.g. here. I added some clarifying text in revision 736240 – flying sheep 15:33, 3 July 2022 (UTC)
Pytest downloads dependencies if unmet
I'm currently looking at a bug in python-keyring caused by the Arch packaging shipping a) without specifying a required dependency and b) without the required version of the dependency being in the Arch repos yet. And yet, python-keyring has a check() function and the tests pass. The reason for this is that pytest actually downloads unmet dependencies from PyPI solely for the purposes of running the tests. This kind of defeats some of the purpose of the check() function, as this ought to be an error. I propose changing the recommended 'setup.py pytest' line to:
python setup.py pytest --allow-hosts ','
This blocks it from downloading anything, turning unmet dependencies into errors. There are probably similar changes that can be made to other test runners. Chrisjbillington (talk) 03:23, 13 March 2021 (UTC)
Thoughts on PEP-610's direct_url.json
PEP-610 has been supported in poetry for a while now.
It creates a direct_url.json file that serves no purpose except to take up space.
Grawlinson (talk) 19:43, 25 August 2021 (UTC)
- That is not true. https://www.python.org/dev/peps/pep-0610/#freezing-an-environment particularly stands out. FFY00 (talk)
Add Troubleshooting section?
There are two "If"s in the setuptools or distutils section that look to me like Troubleshooting-related content, so I think they should be moved there.
First "If" is:
______
If a package needs setuptools to be built due to including executables (which is not supported by distutils), but only imports distutils, then building will raise a warning, and the resulting package will be broken (it will not contain the executables):
/usr/lib/python3.8/distutils/dist.py:274: UserWarning: Unknown distribution option: 'entry_points'
  warnings.warn(msg)
An upstream bug should be reported. To work around the problem, an undocumented setuptools feature can be used:
# fails because of distutils
python setup.py build
# works by using a setuptools shim
python -m setuptools.launch setup.py build
______
And the second one is:
______
If a package uses python-setuptools-scm, the package most likely will not build with an error such as:
LookupError: setuptools-scm was unable to detect version for /build/python-jsonschema/src/jsonschema-3.2.0. Make sure you're either building from a fully intact git repository or PyPI tarballs. Most other sources (such as GitHub's tarballs, a git checkout without the .git folder) don't contain the necessary metadata and will not work.
To get it building SETUPTOOLS_SCM_PRETEND_VERSION has to be exported as an environment variable with $pkgver as the value:
export SETUPTOOLS_SCM_PRETEND_VERSION=$pkgver
______
Opinions? -- RafaelFF (talk) 16:33, 28 February 2024 (UTC)
Prefer VCS source for setuptools-scm and friends
I'm proposing to change the notice about python-setuptools-scm, python-hatch-vcs and friends to prefer the VCS source instead. These tools actually rely on VCS metadata to determine which files to include and install.
Passing only the version works around the build error, but can still silently generate an invalid installation that is missing some data files. Even if that is not the case now, it could happen at any time in the future.
So here I'm proposing to prefer using VCS sources directly to keep the information and ensure we are packaging correctly. Felixonmars (talk) 17:35, 2 March 2024 (UTC)
- I second this! I spent a whole afternoon banging my head about why some sub-packages did not get installed when building from a release tarball that was identical to a git checkout. Switching to VCS sources finally did the trick. Meadow (talk) 21:34, 15 July 2024 (UTC)
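- A sketch of what the preferred setup might look like (repository URL and names are illustrative): fetching a tagged git snapshot keeps the .git metadata available at build time, so setuptools-scm can detect both the version and the file list.

```shell
pkgname=python-example
pkgver=1.2.3
# Fetch a tagged snapshot of the repository instead of the sdist or a
# GitHub auto-generated tarball, so the .git directory is present:
source=("git+https://github.com/example/example.git#tag=v$pkgver")
sha256sums=('SKIP')

build() {
    cd example
    # No SETUPTOOLS_SCM_PRETEND_VERSION workaround needed here
    python -m build --wheel --no-isolation
}
```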
Handling unmet python dependencies
Hi all,
A package I use has a dependency that is older than the version of the library in the repos. Unfortunately upgrading the library to the version provided by Arch breaks the upstream project (which is an "enterprise" project with a long support cycle so they are not likely to update the library if it doesn't cause a security issue), and downgrading the version in Arch breaks several other Arch packages that depend on it.
The best solution we could come up with was to install the project into a virtual environment with the correct version of the package from PyPI, and then have the project always run from the virtual environment, so that it uses a mix of system packages and venv packages.
This is, of course, not ideal, but it was the solution that we came up with. I'd be curious to know two things:
- Is there a better solution to this that I'm not aware of? Maybe a way to create a legacy package that could be installed alongside the new package for the dependency? Something else?
- If there's not a better way to handle this, would documenting the virtualenv setup (you have to jump through some hoops to make sure that it works) be useful, or would that just encourage people to do bad things that aren't often necessary anyways?
Thanks for your feedback.
—SamWhited (talk) 13:08, 14 May 2024 (UTC)
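- For reference, a rough sketch of the virtual-environment approach described above (the install path, package name, and version are hypothetical):

```shell
# Create a venv that falls back to system site-packages, then pin only
# the conflicting dependency inside it (names are hypothetical):
python -m venv --system-site-packages /opt/enterprise-app/venv
/opt/enterprise-app/venv/bin/pip install 'somelib==1.4.*'

# Always run the application through the venv's interpreter, so its
# pinned copy of somelib shadows the newer system version:
/opt/enterprise-app/venv/bin/python -m enterprise_app
```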
PEP517: Recommend against adding python-wheel to makedepends?
As noted in [1], since python-setuptools 70.1, python-wheel is no longer necessary for bdist_wheel, so python-wheel is only useful if one explicitly needs to manipulate wheels. I've tested this with ~60 Python PKGBUILDs on my system, and they all build perfectly well in a clean chroot without python-wheel (though this test might be contaminated by the fact that python-setuptools still depends on python-wheel -- still, I also included packages using python-hatchling and python-poetry-core, so that should be indicative).
Gesh (talk) 19:46, 26 January 2025 (UTC), edited by Gesh (talk) 11:19, 27 May 2025 (UTC)
Explicitly enumerate pytest plugins used?
pytest autoloads all the plugins installed on the system when it runs. When users build outside clean chroots, this means it may pick up plugins that are incompatible with the package's test suite. And while it's recommended to build in clean chroots, it's also good practice to make PKGBUILDs robust.
This struck me with python-pytest-recordingAUR: it turns out it recursively invokes pytest in a way that parses the recursive invocations' output (see pytest/13369 for details), which breaks in the presence of plugins that modify that output, such as python-pytest-pretty and python-pytest-sugar. Since it is infeasible to enumerate all these exceptions, as attempted in pytest-recording/169, I looked for an alternative solution.
Gentoo guidelines recommend explicitly enumerating the pytest plugins used by the package. In the case of python-pytest-recordingAUR, this becomes
check() {
...
export PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
export PYTEST_PLUGINS=pytest_httpbin.plugin,pytest_mock,pytest_recording.plugin
python -m pytest
}
(with the module names taken from pytest --trace-config). I can confirm this works in this case, and recommend we pick up this guideline as well.
—This unsigned comment is by Gesh (talk) 18:18, 10 April 2025 (UTC). Please sign your posts with ~~~~!
- Although, I would amend the above recommendation to use --disable-plugin-autoload -p httpbin -p pytest_mock -p recording (with the names taken from the entry points for these plugins) for better encapsulation (the command-line flag was added in pytest 8.4). --Gesh (talk) 19:43, 19 January 2026 (UTC)
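- For concreteness, the amended check() would look something like this (a sketch; the plugin entry-point names are the ones from the python-pytest-recording example above, and the flag requires pytest >= 8.4):

```shell
check() {
    # --disable-plugin-autoload stops pytest from picking up every plugin
    # installed on the system; -p then loads only the named plugins, using
    # their entry-point names rather than module paths.
    python -m pytest --disable-plugin-autoload \
        -p httpbin -p pytest_mock -p recording
}
```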
Using uv or pip for packaging.
Why shouldn't we use uv or pip for packaging? oech3 (talk) 08:03, 11 May 2025 (UTC)
- Because they are package managers that download dependencies from PyPI? — Lahwaacz (talk) 08:41, 11 May 2025 (UTC)
- If the .pyc files are compiled with a separate $HOME under $BUILDDIR and only the target package is extracted, I consider it not dirty. oech3 (talk) 08:47, 11 May 2025 (UTC)
Don't add typechecking to checkdepends?
https://wiki.archlinux.org/index.php?title=Python_package_guidelines&oldid=806883 recommends against running style and coverage checkers, which I can see -- no need to abuse checkdepends for making dev shells.
However, type checking seems more like another kind of correctness check to my eyes -- why would we remove it?
Gesh (talk) 11:39, 2 February 2026 (UTC)
- Python is a dynamically and duck-typed language, the presence or absence of type annotations doesn't affect functionality in most cases (unless something like Pydantic is used), so there is no point in checking them in the context of packaging. Bad typing doesn't mean that the package is broken, good typing doesn't mean that the package is working. — andreymal (talk) 12:00, 2 February 2026 (UTC)
- +1 on what andreymal says.
- On top of that, it also introduces additional hassle on other fronts. For example, upstream projects often depend on library stubs (typing-*) for their tests if the project depends on third-party libraries that don't ship typing info on their own. We would then have to ship those library stubs, too, lest the type check fail.
- Those library stub packages (which are often not maintained by the upstream project itself but by third-party contributors) would have to be maintained as system packages, and kept in sync with each other. Auerhuhn (talk) 12:07, 2 February 2026 (UTC)
- Bad typing doesn't mean the package is broken, good typing doesn't mean the package is working.
- Sure, but by that token we would also compile Haskell programs with -fdefer-type-errors. I take upstream's choice to use typing as a commitment to try to write typesafe code.
- The point re the maintenance burden, though, is well-taken. Our way of approaching Python packaging basically doubles the number of packages to install if a user wants to run typechecks. I do ship these typing stubs, and it's not too much work to keep them up to date, but bootstrapping might be an issue.
- Re the stubs being third-party: of course they are, unless you mean that the developers of typing-pkg are generally different from those of pkg? Not quite seeing the burden difference either way, but I recognize the burden.
- Do your stances change if we're talking about a VCS package for a project that often commits directly to master?
- Gesh (talk) 18:13, 2 February 2026 (UTC)
- I meant that typing-pkg and pkg often come from different upstream projects, so they require separate PKGBUILDs (as opposed to a single PKGBUILD that would emit both packages).
- Either way, the main problem remains: what's a PKGBUILD maintainer to do if a typecheck fails? Auerhuhn (talk) 22:18, 22 February 2026 (UTC)
- I meant that
PSA: Only use one venv when making temp installs
Adapting papisAUR to the new recommended venv guidelines, I thought it would be better to make separate venvs for the doc generation and the testing, both of which need installed wheels.
Turns out doing so generates __pycache__ files that embed the paths, causing complaints.
There might be a way to do it, but I'm just using a single venv for now instead.
Gesh (talk) 11:59, 2 February 2026 (UTC)
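- For reference, a rough sketch of the single-venv layout that avoids the __pycache__ path mismatch (function bodies are illustrative, not papis' actual PKGBUILD):

```shell
build() {
    cd "$pkgname-$pkgver"
    python -m build --wheel --no-isolation
    # One venv shared by doc generation and testing, so the generated
    # __pycache__ files embed a single consistent set of paths:
    python -m venv --system-site-packages test-env
    test-env/bin/python -m installer dist/*.whl
    test-env/bin/python -m sphinx doc doc/_build/html
}

check() {
    cd "$pkgname-$pkgver"
    test-env/bin/python -m pytest
}
```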